Consumers don't trust AI

Issue 4 2023 News & Events

Despite the hype, many consumers admit they don't trust artificial intelligence, with a significant proportion also expressing cynicism about the benefits it brings.

Research conducted by Herbert Smith Freehills reveals that just 5% of UK consumers are unconcerned about the growing presence of AI in everyday life. Only 20% say they have a high level of faith that AI systems are trustworthy.

Undertaken to mark the launch of the firm's Emerging Tech Academy, the research explored views among 1000 consumers between the ages of 18 and 80. Respondents were asked about the type of AI systems they use today, expectations about future usage, and comfort levels with the way machines gather data and operate. Key findings include:

Manipulative machines: just over half (56%) do not accept that AI can be impartial. Additionally, more than one third of respondents (37%) fear the outputs of AI systems could be biased against specific groups and over half (53%) also fear AI will make decisions that directly impact them, using information that is wrong.

Responsive, but not responsible: while 60% accept that AI will make the world run more efficiently by offering solutions quickly, just over half (53%) say they are concerned about a lack of accountability in AI systems. Nearly one third (31%) also see AI tools failing to meet ethical expectations as a problem.

Modern, yet outdated: although a significant proportion accept that AI can help reduce human errors (44%), just 16% believe AI tools give accurate information. More than one-third (38%) also fear that AI systems use out-of-date information.

"Artificial intelligence can undoubtedly benefit consumers, but there is clearly still work to do to win their trust and overcome cynicism. The AI market risks being seen as the 'wild west' so, as policymakers define their strategies to address the risks of AI, they must ensure they are creating a system that delivers certainty and confidence now, while being flexible enough to promote and account for future innovations," says Alexander Amato-Cravero, Regional Head of Herbert Smith Freehills' Emerging Technology Group.

Based on the findings and ahead of the UK hosting the first major global summit on AI safety, Herbert Smith Freehills' Emerging Tech Academy has identified three steps which, taken together, can foster an environment in which consumer and business confidence in AI will improve. These are:

Accelerating the development and implementation of legally binding AI rules: the sooner policymakers can plug the gaps in the current patchwork of rules that apply to AI with laws, regulations, guidance, and principles that are fit for purpose and have the force of law, the sooner consumers and businesses will be comfortable engaging with AI systems.

Increasing alignment among domestic and global policymakers on AI: the risks associated with AI are overseen by multiple regulators and authorities. A harmonised approach is needed to address gaps in the existing collection of laws and regulations. With consumers engaging with businesses around the world, this discourse must go beyond domestic policy and address global alignment and interoperability as well.

Improving dialogue and better educating consumers and markets on AI risks: despite excitement about the possibilities of AI systems now and in the future, consumers' fear and distrust can only be reduced through balanced dialogue about both the benefits and the risks.

Amato-Cravero concludes, "The key to long-term success is dialogue rather than fanfare. It's easy to get caught up in the hype, but building confidence in AI requires cutting through the noise with sharp focus on the opportunities and risks. At the same time, policymakers must deliver certainty to consumers and businesses by clarifying the patchwork of existing laws and regulations."

The research was conducted during May and June 2023 and is based on 1000 respondents, including individuals in full-time employment and education, across 11 UK regions.


Additional statistics

• Only 34% of respondents think AI is reliable.

• Those aged 55+ are less likely (21%) than those aged 35 or under (34%) to say AI helps them make better decisions.

• Fewer women (48%) than men (55%) are comfortable with the idea of companies using AI to diagnose health problems.

• More people (53%) are uncomfortable with the idea of AI being used to settle legal disputes than those who think it is a good idea (29%).







