A new KPMG report about artificial intelligence found that a staggering 75% of respondents in the US are deeply concerned about possible negative outcomes due to AI, while 45% believe the risks outweigh the benefits. Further, 55% of Americans report having experienced inaccurate outcomes from AI.
In addition, 52% of US-based respondents expressed worry that AI-generated content is manipulating elections, and an overwhelming 85% are calling for laws and actions to combat AI-generated misinformation.
Consumers want more AI regulation
According to KPMG’s report, “Trust, Attitudes and Use of Artificial Intelligence: A Global Study 2025,” only 29% of US consumers believe current regulations are sufficient for AI safety, and 72% say more regulation is needed. If laws and policies were in place, 81% of US consumers would be more willing to trust AI systems, the report noted.
Yet 43% of US consumers have low confidence in the government's ability to develop and use AI, preferring to trust universities, research institutions, healthcare providers, and big tech companies to develop and use the technology in the public's best interests.
Enthusiasm for AI in the workplace – and misuse
In the workplace, 69% of organizations are using AI, while 67% of workers are enhancing their productivity with the tech.
However, the line between appropriate use and misuse is thin: 44% of workers report using AI in ways that violate company policies, and 46% admit to uploading sensitive company information and intellectual property to AI platforms, putting their organizations at risk.
Fifty-seven percent of US workers surveyed admit to making mistakes in their work because of AI errors, and 53% have presented AI-generated content as their own.
These issues highlight “a significant gap in governance and raise serious concerns about transparency, ethical behavior, and the accuracy of AI-generated content,” said Samantha Gloede, trusted enterprise leader at KPMG, in a statement. “This should be a wake-up call for employers to provide comprehensive AI training to not only manage risks but also to maintain trust.”
Need for AI to be more trustworthy
The mixed reaction toward AI reveals a growing divide between enthusiasm for its benefits and anxiety about its dangers. Even as AI demonstrates its power to revolutionize industries, fear of overreach is pervasive, particularly as misinformation spreads ever faster.
Implementing clear governance policies, guardrails, and accountability will make the technology more trustworthy, the report said.
Methodology
The study’s findings are based on a survey of more than 48,000 people across 47 countries, conducted between November 2024 and January 2025 and including 1,019 respondents in the US, KPMG said. This article focuses on the US-based responses.
The research was conducted by KPMG International in conjunction with a research team at the University of Melbourne.