Four privacy risks in using ChatGPT for business

Issue 6, 2023

Today, many people rely on neural network-based language models such as ChatGPT in their jobs. A Kaspersky survey revealed that 11% of respondents had used chatbots, with nearly 30% believing they could replace jobs in the future. Other surveys indicate that 50% of Belgian office workers and 65% of those in the UK rely on ChatGPT. Moreover, the prominence of the search term ‘ChatGPT’ in Google Trends suggests pronounced weekday usage, likely tied to work-related tasks.

The growing integration of chatbots in the workplace prompts a crucial question: can they be entrusted with sensitive corporate data? Kaspersky researchers have identified four key risks associated with employing ChatGPT for business purposes.

Data leak or hack on the provider’s side

Although LLM-based chatbots are operated by major technology companies, they are not immune to hacking or accidental leakage. For example, there was an incident in which ChatGPT users could see messages from other users’ chat histories.

Data leak through chatbots

Theoretically, chats with chatbots might be used to train future models. LLMs are susceptible to ‘unintended memorisation’, whereby they remember unique sequences, such as phone numbers, that do not improve model quality but pose privacy risks. As a result, any data that ends up in the training corpus may inadvertently or intentionally be extracted from the model by other users.
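
As an illustration of why unintended memorisation matters, the short Python sketch below mimics the canary-style probes used in published memorisation research; it is not a method described by Kaspersky, and the complete() interface and canary string are hypothetical stand-ins. The idea: if a model reproduces the unique suffix of a string when given only its prefix, that string was memorised and could be extracted by anyone querying the model.

from typing import Callable

def leaks_canary(complete: Callable[[str], str], canary: str, prefix_len: int = 20) -> bool:
    """Return True if the model reproduces the canary's suffix given only its prefix."""
    prefix, suffix = canary[:prefix_len], canary[prefix_len:]
    continuation = complete(prefix)
    return suffix.strip() in continuation

# Hypothetical canary: a unique string assumed to have appeared in training data.
CANARY = "Support hotline for ACME Ltd: +1-555-0100"

def fake_model(prefix: str) -> str:
    # Stub standing in for a real model API; it has 'memorised' the canary verbatim.
    return CANARY[len(prefix):] if CANARY.startswith(prefix) else ""

print(leaks_canary(fake_model, CANARY))  # True: the unique sequence is recoverable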

Malicious clients

In regions where official services such as ChatGPT are blocked, users may turn to unofficial alternatives (programs, websites or messenger bots) and end up downloading malware disguised as a non-existent client or app.

Account hacking

Attackers can break into employee accounts and access their data through phishing attacks or credential stuffing. Moreover, Kaspersky Digital Footprint Intelligence regularly finds posts on dark web forums offering access to chatbot accounts for sale.

To summarise, data leakage is a significant privacy concern for users and businesses when using chatbots. Responsible developers outline how data is used for model training in their privacy policies.

Kaspersky’s analysis of popular chatbots, including ChatGPT, ChatGPT API, Anthropic Claude, Bing Chat, Bing Chat Enterprise, You.com, Google Bard, and Genius App by Alloy Studios, shows that the B2B sector has higher security and privacy standards, given the greater risk of corporate information exposure. Consequently, the terms and conditions for data usage, collection, storage, and processing are more focused on safeguarding than in the B2C sector. The B2B solutions in this study typically do not save chat histories by default, and in some cases no data is sent to the company’s servers at all, as the chatbot operates locally in the customer’s network.



