The business value of ChatGPT

Issue 6 2023 Security Services & Risk Management, AI & Data Analytics

ChatGPT, a generative artificial intelligence (GAI) tool developed by OpenAI to produce human-like text from the input it receives, broke records soon after its launch on 30 November 2022. It reached 100 million monthly active users within two months, making it the fastest-growing consumer application in history.


Lizaan Lewis.

The platform now has more than 180 million active users who use it for personal and work-related purposes in organisations worldwide. While it offers immense value and time savings, particularly to organisations wanting to leverage it to optimise tasks, it is a tool that requires careful handling to minimise the risks that come with the technology.

For all its time-saving capabilities and immense potential, ChatGPT provides limited visibility into its sources, information bias, plagiarism and references. This digital fog of war, so to speak, limits a business's ability to verify information, protect its data and ensure employees are using the technology correctly.

One of the first concerns around ChatGPT and its use within organisations is that regulations and legislation have not caught up. Under South African copyright law, for example, if a computer creates a work, the person who generated that work using the computer owns the copyright; in the United States, by contrast, copyright protection requires a work created by a human being. Different countries take different approaches, yet the legal concerns around copyright are proving significant in practice across the globe as infringements caused by AI increase.

This is already reflected in a recent announcement from Microsoft, which said it will pay ‘legal damages on behalf of customers using its artificial intelligence (AI) products if they are sued for copyright infringement for the output generated by such systems’. People using the platform to produce content may not realise that the AI is drawing on copyrighted information to create their articles, reports and blog posts and, as a result, they are not crediting the right people.

A further complication is that the organisation remains responsible for ensuring compliance with regulations, a massive challenge when those regulations are not even in place. Companies are expected to ensure their policies and procedures provide them with a measure of protection and guidance, but where does this leave them when it comes to ChatGPT?

This is where it becomes critical for companies to refine their policies and procedures consistently, covering each potential use case and creating best-practice methodologies. For example, is it the responsibility of the employee who generates code using AI to test that code? Yes, provided there is a clearly defined policy that mandates validating the output received from ChatGPT to ensure the code is usable.
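To make such a policy concrete, the following is a minimal, hypothetical sketch in Python of what a mandated validation step could look like. The apply_discount function stands in for a snippet an employee might have generated with ChatGPT, and the unit tests are the check the policy would require before the code is adopted; both the function and the test cases are illustrative assumptions, not part of any specific organisation's standard.

import unittest


def apply_discount(price: float, percent: float) -> float:
    """Hypothetical stand-in for a snippet an employee generated with ChatGPT."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)


class TestApplyDiscount(unittest.TestCase):
    """The validation step the policy would mandate before the code is used."""

    def test_typical_discount(self):
        self.assertEqual(apply_discount(200.0, 15), 170.0)

    def test_zero_discount_returns_original_price(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

    def test_out_of_range_percentage_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)


if __name__ == "__main__":
    unittest.main()

Running such a test file also leaves a simple, auditable record that the AI-generated output was checked before it went into production.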

Matched with policies and procedures is the need to provide employees with training. They need to understand what counts as personal information and confidential information so that copyright law and PoPIA are not contravened. This is particularly important when the AI is used to generate reports or presentations and business information is being plugged into an externally hosted platform, potentially putting the business at risk and violating privacy laws.
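By way of illustration, the sketch below shows the kind of lightweight pre-submission check such training could be paired with: it scans a draft prompt for obvious markers of personal information before the text is pasted into an external AI service. The patterns and the flag_personal_information helper are hypothetical examples, not a complete PoPIA compliance control.

import re

# Illustrative patterns only; a real PoPIA screening control would need a far
# broader, locally validated rule set.
PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "SA ID number (13 digits)": re.compile(r"\b\d{13}\b"),
    "SA phone number": re.compile(r"\b0\d{9}\b"),
}


def flag_personal_information(prompt: str) -> list[str]:
    """Warn about text an employee is about to paste into an external AI service."""
    warnings = []
    for label, pattern in PATTERNS.items():
        if pattern.search(prompt):
            warnings.append(f"Possible {label} found - remove or anonymise it before submitting.")
    return warnings


if __name__ == "__main__":
    draft = "Summarise this complaint from jane.doe@example.com, ID number 8001015009087."
    for warning in flag_personal_information(draft):
        print(warning)

Even a basic check like this reinforces the training message that prompts are business records leaving the organisation's control.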

AI is here to stay; it is the future. Companies should not abandon it because the risks seem daunting; they need to pay attention and plan ahead to avoid being burned by its growing legal complexity. Companies that put the right policies in place will be in a far stronger position to tackle problems as they arise, mallet at the ready for the game of AI whack-a-mole as ChatGPT throws up increasingly complex and convoluted concerns around copyright, code validation, false information and information protection, among others.



