The business value of ChatGPT

Issue 6 2023 Security Services & Risk Management, Risk Management & Resilience

ChatGPT, a generative artificial intelligence (GAI) tool developed by OpenAI to generate human-like text based on the input it receives, was quick to break records when it launched on 30 November 2022. It reached 100 million monthly active users within two months, making it the fastest-growing consumer application in history.


Lizaan Lewis.

The platform now has more than 180 million active users who use it for personal and work-related purposes in organisations worldwide. While it offers immense value and time-saving capabilities, particularly to organisations wanting to leverage it to optimise tasks, it is a tool that requires careful handling to minimise the risks that come with the technology.

For all its potential, however, ChatGPT provides limited visibility into its sources, and into the information bias, plagiarism and missing references that may lurk in its output. This digital fog of war, so to speak, limits a business's ability to verify information, protect its data, and ensure employees are using the technology correctly.

One of the first concerns around ChatGPT and its use within organisations is that regulation and legislation have not caught up. For example, under South African copyright law, if a computer creates a work, the person who generated that work using the computer owns the copyright, whereas in the United States, copyright only attaches to a work created by a human being. Different countries have different rules, yet the legal concerns around copyright are proving significant in practice across the globe as AI-related infringements increase.

This is already reflected in a recent announcement by Microsoft, which has said it will pay ‘legal damages on behalf of customers using its artificial intelligence (AI) products if they are sued for copyright infringement for the output generated by such systems’. People using the platform to produce content may not realise that the AI is drawing on copyrighted material to create their articles, reports and blog posts, and as a result they may not be crediting the right people.

Another challenge is that the responsibility for regulatory compliance rests with the organisation, which is a massive challenge when the regulations are not even in place yet. Companies are expected to ensure their policies and procedures provide them with a measure of protection and guidance, but where does this leave them when it comes to ChatGPT?

This is where it becomes critical for companies to consistently refine their policies and procedures, catching each potential use case and creating best-practice methodologies. For example, is it the responsibility of the employee who generates code using AI to test that code? Yes, if there is a clearly defined policy that mandates validating the output received from ChatGPT to ensure the code is actually usable.
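To make the point concrete, the sketch below shows what such a policy-mandated validation step might look like. It is purely illustrative: the ChatGPT-generated function sort_invoices_by_date and the accompanying unit test are hypothetical examples, not anything referenced in this article, and they assume the policy simply requires the employee who requested the code to check its behaviour before it is used.

import datetime
import unittest

# Hypothetical snippet an employee might have generated with ChatGPT:
# sort invoice records so that the most recent invoice comes first.
def sort_invoices_by_date(invoices):
    return sorted(invoices, key=lambda inv: inv["date"], reverse=True)

class TestAIGeneratedCode(unittest.TestCase):
    # The policy-mandated step: the person who generated the code
    # verifies its behaviour before it enters any business process.
    def test_sorts_newest_first(self):
        invoices = [
            {"id": 1, "date": datetime.date(2023, 1, 15)},
            {"id": 2, "date": datetime.date(2023, 6, 30)},
        ]
        result = sort_invoices_by_date(invoices)
        self.assertEqual([inv["id"] for inv in result], [2, 1])

if __name__ == "__main__":
    unittest.main()

The specific test framework is beside the point; what matters is that the policy names a responsible person and a concrete check that AI output must pass before the business relies on it.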

Hand in hand with policies and procedures goes the need to provide employees with training. They need to understand what counts as personal information and what counts as confidential information, so that copyright law or PoPIA is not contravened. This is particularly important when the AI is used to generate reports or presentations and business information is being plugged into an external, publicly accessible platform, potentially putting the business at risk and violating privacy laws.

AI is here to stay; it is the future. Companies should not abandon AI because it seems too risky; they need to pay attention and plan ahead to avoid getting burned by its growing legal complexity. Companies that put the right policies in place will be in a far stronger position to tackle problems as they arise, ready with their mallet for AI whack-a-mole as ChatGPT throws up increasingly complex and convoluted concerns around copyright, code validation, false information and information protection, among others.



