The Public’s Growing Distrust of AI: A Wake-Up Call for the Industry

Introduction: The Trust Crisis

Two years ago, artificial intelligence was the darling of the technology world. Companies were announcing AI initiatives weekly, investors were pouring billions into the sector, and the media treated every model release as a revolutionary moment. Yet something fundamental has shifted. As AI systems have moved from research labs into everyday applications, public confidence has fractured. The latest trust data paints a sobering picture: public scepticism about AI is growing faster than AI capability itself. This isn’t simply healthy technological scepticism—it’s a profound erosion of confidence that threatens to undermine legitimate AI development and cede the narrative entirely to alarmists and ideologues. The industry created this credibility crisis through overpromising, underdelivering, and failing to address genuine concerns about harm. And now, we must confront uncomfortable truths about what we’ve built and what we’ve claimed about it.

The Data: Edelman’s Trust Collapse

The Edelman Trust Barometer, which has tracked public confidence in institutions for over two decades, recently released findings on AI trust that should concern anyone seriously invested in this technology’s future. Globally, AI trust has declined from already modest baselines. In the United States, approximately 55% of respondents express some level of concern about AI, with only 35% indicating genuine confidence in the technology. Perhaps more tellingly, trust in technology companies to develop AI responsibly has collapsed to its lowest levels on record. Only 37% of respondents trust technology companies to develop AI in society’s interest rather than their own. This isn’t marginal erosion—it’s a wholesale loss of confidence in the stewards of AI development. When nearly two-thirds of people don’t trust tech companies’ motives regarding AI, we’re not facing a messaging problem that better PR can solve. We’re facing a credibility problem rooted in justified doubts about corporate intentions.

Ipsos Data: Deepfakes and Societal Fear

Ipsos research into AI and societal impact reveals a different but equally concerning picture. Their surveys show that deepfakes and synthetic media represent the public’s primary AI-related fear, with over 70% of respondents expressing serious concern about their potential for misuse. This concern isn’t abstract—it’s rooted in visible evidence. In recent months, deepfakes have been weaponised in election interference, workplace harassment, and celebrity abuse. A particularly disturbing case involved deepfake sexual content created without consent, destroying reputations and causing profound psychological harm. The Ipsos data shows that these incidents aren’t anomalies that the public dismisses as unlikely—they’re vivid demonstrations of genuine harms that AI enables. What makes this particularly concerning from a trust perspective is that the technology industry has done remarkably little to address deepfake harms. Regulatory frameworks remain inadequate, detection tools remain unreliable, and criminal consequences remain weak. When the public sees clear harms occurring and the industry shrugging, trust evaporates, and rationally so.

AI Hallucinations: When Confidence Becomes Dangerous

Beyond the dramatic harms of deepfakes and election interference, a quieter but pervasive problem has eroded public trust: AI hallucinations. This term refers to AI systems confidently generating false information, fabricating citations, inventing statistics, and presenting fiction as fact. To the general public, hallucination is perhaps a more devastating failure than any dramatic security vulnerability because it undermines the fundamental premise that AI systems are reliable sources of information. A prominent example involved an AI chatbot providing completely fabricated legal precedents to a lawyer, who then submitted them to a court. The court was not amused, and the lawyer faced professional consequences. Similar incidents have occurred in healthcare contexts, where AI systems provided plausible-sounding but entirely false medical information. What makes hallucination particularly corrosive to trust is that it’s not obviously wrong to users. When an AI system confidently asserts something false, most people lack the expertise to identify the falsehood. This creates a confidence trap: the system sounds just as authoritative when it is wrong as when it is right, so its errors arrive with no warning signs. The industry’s response to hallucination has been inadequate—largely consisting of disclaimers that ‘AI may hallucinate’ rather than genuine efforts to fix the underlying problem. This feels like an admission that we’ve built systems whose fundamental flaw we can’t eliminate.

The Promise-Reality Gap

Much of the trust erosion stems from a widening gap between claims and reality. The industry has spent years making extraordinary promises about AI’s capabilities. We were told that artificial general intelligence was imminent, that AI would solve cancer, that AI would make human expertise obsolete. These weren’t careful, qualified predictions—they were often breathless assertions presented as near-certainties by credible figures in the industry. Yet the reality of AI’s capabilities is considerably more modest. Current AI excels at narrow, well-defined tasks within domains it’s trained on. It struggles with genuine reasoning, struggles with novel situations, and struggles with the kind of contextual understanding that humans take for granted. This doesn’t mean AI isn’t valuable—it can be enormously useful within appropriate domains. But there’s a massive difference between ‘useful tool for specific applications’ and ‘transformative technology that will reshape society.’ The industry has largely claimed the latter whilst delivering the former, and the public recognises this gap. Every overstated claim creates credibility damage that can’t be undone with a corrected forecast.

Data Privacy and the Extraction Problem

Another major driver of public distrust is the way AI systems are trained on vast quantities of data harvested from the internet and digital devices without meaningful consent. Large language models are trained on hundreds of billions of words scraped from websites, social media, intellectual property, and user-generated content. The public increasingly understands that their data is being used to train AI systems from which corporations extract significant value, whilst creators receive nothing. Copyright holders see their work incorporated into training datasets without permission or compensation. Individuals see their personal information used to train systems designed to generate further personal information about them. This feels, with considerable justification, like exploitation. The industry’s typical response—that the data is publicly available and therefore fair game—rings hollow to people who neither consented to their data being used this way nor benefit from the arrangement. When a company builds a billion-pound AI system trained partly on copyrighted books, academic papers, and personal information harvested without consent, then refuses to compensate creators, people reasonably perceive that as greedy rather than innovative. This perception translates directly into reduced trust in the companies building these systems.

Employment Displacement and Economic Anxiety

Beyond specific AI harms, broader economic anxiety drives distrust. AI is already displacing workers in certain sectors. Customer service roles are being automated, content moderation jobs are disappearing, and routine analytical work is being replaced by AI systems. Whilst it’s almost certainly true that AI will create new job categories over time, the immediate impact is job loss for specific workers with limited prospects for retraining. From their perspective, trust in AI isn’t rational—AI is a direct threat to their livelihood. The industry’s response to job displacement has been largely dismissive. Tech executives assure the public that this is creative destruction, that new jobs will emerge, that workers should simply retrain. This advice, however true it might be in the long term, feels callous to someone losing their job today. The industry has made little effort to address the human cost of rapid automation or to distribute AI’s benefits more equitably. Instead, they’ve extracted maximum shareholder value whilst externalising human costs. This is excellent business, but it’s terrible for public trust.

Lack of Transparency and Accountability

The opacity of AI systems is another major trust eroder. We don’t fully understand how large language models work—the field of AI interpretability remains nascent, and many of the systems deployed in the world operate as ‘black boxes’ even to their creators. When regulators ask companies to explain why their system made a particular decision, companies often cannot provide a satisfactory answer. When civil rights groups ask how AI systems encode and perpetuate bias, the response is often a shrug. When the public asks who’s responsible when an AI system causes harm, the answer is frustratingly unclear. This opacity wouldn’t be tolerable in other industries that affect public welfare—we don’t allow pharmaceutical companies to deploy drugs without understanding how they work, and we don’t allow banks to use opaque financial systems without regulatory oversight. Yet in AI, opacity has somehow become acceptable. The industry argues that transparency is impossible due to trade secrets and technical complexity, and therefore the solution is to simply… proceed. This isn’t confidence-building. It reads as ‘we can’t explain what we’ve built, don’t fully understand it ourselves, but we’re going to deploy it anyway because it’s profitable.’

Regulatory Gaps and Inadequate Safeguards

The regulatory environment around AI remains fragmented and inadequate. The EU’s AI Act represents the most comprehensive regulatory framework, establishing risk-based requirements and accountability measures. Yet even this pioneering regulation faces criticism as both insufficient and overly bureaucratic. In the UK and US, regulatory approaches remain piecemeal, addressing specific harms in specific sectors rather than establishing comprehensive frameworks. This creates a situation wherein companies face different requirements in different jurisdictions, and many operate in regulatory gaps where requirements don’t yet exist. The natural response from a company perspective is to lobby for weak regulation. The natural response from a public perspective is to distrust a system where the companies benefiting from a lack of regulation are simultaneously writing the rules about their own conduct. When Facebook/Meta is asked to regulate its own AI systems, when OpenAI assures the public of its safety measures whilst remaining a private company accountable only to investors, when companies establish ‘ethics boards’ with minimal teeth, the public reasonably perceives regulatory capture. The industry has been asked to police itself and, unsurprisingly, is finding in its own favour.

The Nexatech Perspective: How Venture Capital Got This Wrong

I’ve invested significantly in AI through Nexatech Ventures, so I’m speaking from inside the industry when I say that venture capital has fundamentally mismanaged the public trust problem. The prevailing VC philosophy has been to move fast and break things, to prioritise growth over responsibility, and to assume that any problems created by rapid scaling can be solved later through either technology or regulation. This philosophy might work for social media platforms, where the primary cost is user attention. It doesn’t work for AI, where the costs include job displacement, deepfake harassment, misinformation at scale, and potential systemic economic disruption. The venture capital industry has been so focused on returns and so enamoured with AI’s transformative potential that we’ve largely ignored the credibility problem we were creating. We funded companies with little transparency, we celebrated founders who made grandiose claims without evidence, we invested in applications that clearly posed risks without clear benefits. We told founders that raising capital was predicated on bold claims and exponential growth, not careful assessment of actual benefits and genuine attempts to mitigate harms. Now we’re surprised that the public doesn’t trust us.

Bias in AI Systems: The Perpetuation Problem

AI systems trained on historical data inevitably encode the biases present in that data. Facial recognition systems perform worse on darker-skinned individuals because training data over-represented lighter-skinned faces. Hiring algorithms discriminate against women because historical hiring decisions under-represented women in certain roles. Healthcare algorithms systematically deprioritise Black patients because the measures used as proxies for medical need, such as past healthcare spending, reflect historical inequities in access to care. These aren’t bugs that will eventually be fixed—they’re features of how AI systems work. When you train a system on biased data, you get a system that reproduces and often amplifies that bias. The public has increasingly recognised that AI systems are not neutral arbiters but rather digital encodings of existing prejudices. This matters enormously for trust because it means AI threatens to automate and legitimise discrimination. Rather than disrupting unfair systems, AI codifies them with the apparent authority of mathematics and a veneer of objectivity. The industry’s response to algorithmic bias has been inadequate—often amounting to acknowledging the problem without fundamentally changing business models or development practices.
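
To make the mechanism concrete, here is a deliberately toy sketch. Everything in it is synthetic and hypothetical, not drawn from any real hiring system: the historical records are generated so that equally qualified women were hired less often, and the ‘model’ simply memorises group-level hire rates. The point is that the learned predictions reproduce the historical gap unchanged.

```python
# Toy illustration: a "model" trained on biased historical hiring data
# reproduces that bias. All data is synthetic; the model is deliberately
# simplistic (a group-conditional hire-rate lookup).
import random

random.seed(0)

def make_history(n=10_000):
    """Generate synthetic records in which equally qualified women
    were historically hired less often than men."""
    records = []
    for _ in range(n):
        gender = random.choice(["man", "woman"])
        qualified = random.random() < 0.5          # identical skill distribution
        base_rate = 0.7 if qualified else 0.2       # merit component
        penalty = 0.25 if gender == "woman" else 0.0  # historical discrimination
        hired = random.random() < max(base_rate - penalty, 0.0)
        records.append((gender, qualified, hired))
    return records

def train(records):
    """'Train' by memorising hire rates per (gender, qualified) group,
    a stand-in for what a statistical model learns from these labels."""
    totals, hires = {}, {}
    for gender, qualified, hired in records:
        key = (gender, qualified)
        totals[key] = totals.get(key, 0) + 1
        hires[key] = hires.get(key, 0) + int(hired)
    return {key: hires[key] / totals[key] for key in totals}

model = train(make_history())
print("Predicted hire rate, qualified man:  ", round(model[("man", True)], 2))
print("Predicted hire rate, qualified woman:", round(model[("woman", True)], 2))
# The gap baked into the training labels reappears in the predictions:
# the system has learned the historical discrimination, not merit.
```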

Misinformation and Election Interference

Perhaps the most politically significant threat to public trust is AI’s potential to supercharge misinformation and election interference. Large language models can generate convincing false narratives at scale. Generative audio and video models can create synthetic media of political figures saying things they never said. Bad actors can use AI to automate misinformation campaigns across social media platforms. The 2024 elections saw the first serious instances of AI-generated misinformation at meaningful scale, and the situation is only going to worsen as technology improves. The public has noticed this. They understand that if you can’t distinguish real from synthesised media, and if bad actors can flood information ecosystems with AI-generated content, then the epistemic foundations of democracy become fragile. Trust in democratic institutions depends partly on confidence that major events actually happened as reported and that public figures genuinely said what they’re credited with saying. AI threatens both of those foundations. The industry’s response has been largely non-existent. Technology companies have shown little interest in addressing these risks, partly because AI-generated misinformation profits them through engagement and attention, and partly because meaningful responses would require significant economic sacrifice.

The Environmental Toll: Hidden Costs

Training large AI models consumes vast quantities of electricity. A single training run for a large language model can consume as much electricity as a hundred homes use in a year. This environmental cost is rarely discussed in industry circles, yet it’s very real. As AI systems become more powerful and more numerous, the aggregate environmental impact becomes substantial. The public, increasingly concerned about climate change, recognises that a technology requiring this much energy carries real costs. These costs are particularly troubling when the benefits of AI—impressive as they are in specific applications—are far from guaranteed to justify the environmental price. This isn’t to say AI should be abandoned on environmental grounds, but the industry’s tendency to ignore or minimise environmental costs whilst celebrating technological achievement contributes to the sense that we’re pursuing innovation without genuine consideration of consequences.
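
For a rough sense of scale, the back-of-the-envelope sketch below uses publicly reported ballpark figures that are assumptions rather than numbers from this article: roughly 1,300 MWh of electricity for one GPT-3-scale training run, and roughly 10,500 kWh per year for an average US household.

```python
# Back-of-the-envelope comparison of one large training run with household
# electricity use. Both figures are rough public estimates (assumptions),
# not measurements cited in this article.
training_run_mwh = 1_300            # assumed energy for one GPT-3-scale training run
household_kwh_per_year = 10_500     # assumed average annual US household usage

equivalent_homes = training_run_mwh / (household_kwh_per_year / 1_000)
print(f"One training run is roughly {equivalent_homes:.0f} households' annual electricity")
# Prints roughly 124, in the same ballpark as the 'hundred homes' comparison above.
```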

How the Industry Lost the Narrative

The technology industry had the opportunity to shape the AI narrative and maintain public confidence by being thoughtful about risks, transparent about limitations, and genuinely focused on benefits distribution. Instead, we chose differently. We released systems with inadequate testing, we made grandiose claims we couldn’t substantiate, we ignored obvious harms until they became unavoidable, and we maximised shareholder value at the expense of public interest. In doing so, we ceded narrative control to critics and alarmists who, whilst sometimes hyperbolic, are responding rationally to genuine failures of responsibility. When the industry speaks about AI’s potential whilst simultaneously failing to address deepfakes, election interference, and algorithmic bias, people reasonably conclude that we’re more interested in profit than in impact. The narrative vacuum we created has been filled by voices warning about existential risks and demanding precautionary regulation. These voices are often overblown, but they emerge from rational distrust that the industry created through its own conduct.

The Path to Rebuilding Trust

Rebuilding public trust in AI will require fundamental changes in how the industry operates. First, it requires honesty. We need to stop making grandiose claims and start carefully articulating what AI can and cannot do. We need to be transparent about current limitations, be honest about uncertainty, and avoid the temptation to promise transformative change we can’t guarantee. Second, it requires addressing harms. Companies need to invest genuinely in understanding and mitigating the negative consequences of their systems. This means funding research into deepfake detection and mitigation, addressing algorithmic bias systematically, and developing robust safeguards against misuse. Third, it requires equitable distribution of benefits. If AI is going to displace workers, companies profiting from that displacement should bear responsibility for transition support and retraining. If AI is going to concentrate wealth, we need mechanisms to distribute benefits more equitably. Fourth, it requires real accountability. Companies should welcome independent auditing of AI systems, should support transparency about how algorithms function, and should accept liability when their systems cause harm.

Regulation’s Role: The Necessary Evil

The industry’s default response to calls for regulation has been resistance, arguing that regulation will stifle innovation. This perspective, whilst understandable from a business standpoint, misses the larger picture. Without meaningful regulation, public trust will continue eroding, and eventually public pressure will demand precautionary regulation so restrictive it will genuinely stifle beneficial innovation. The choice isn’t between regulation and no regulation—it’s between reasonable regulation that addresses genuine harms whilst allowing legitimate development, and harsh regulation that emerges from public backlash and justified distrust. From both an ethical and practical business perspective, the industry should welcome reasonable regulation and actively participate in designing it. Companies that have genuinely high standards should be the loudest voices supporting regulation that enforces those standards, because even regulation that merely enforces minimum standards protects responsible players from being undercut by less scrupulous competitors. This isn’t happening, which itself reduces trust.

The Role of Media and Education

Rebuilding trust also requires better public education about what AI actually is and what it can actually do. Much media coverage of AI oscillates between uncritical hype and dystopian panic. Nuanced reporting that helps the public understand AI’s genuine capabilities and genuine risks is rare. The industry could contribute to this by being more willing to engage with sceptical journalists, to acknowledge limitations, and to discuss concerns seriously rather than dismissing them as Luddite resistance to progress. Educational institutions should be developing AI literacy so that citizens can think critically about AI rather than simply accepting industry claims or dismissive scepticism. This requires investment in education, support for careful journalism, and willingness from the industry to have uncomfortable conversations.

What Businesses Must Do Immediately

For individual companies, rebuilding trust starts with auditing their own AI systems for harms, acknowledging those harms publicly, and committing to substantive fixes rather than cosmetic adjustments. It means bringing in diverse perspectives to identify bias and risks that homogeneous teams might miss. It means publishing regular transparency reports about how AI systems are being used and what harms have been identified. It means investing in safety research, not as a cost centre but as a core business function. It means accepting that sometimes, the most responsible choice is not to deploy a technology even when you could, because the risks exceed the benefits. These steps will reduce short-term profits—there’s no way around that. But they’re the price of rebuilding the credibility that the industry has squandered.

Looking Forward: What Success Looks Like

Success in rebuilding trust doesn’t mean returning to the uncritical enthusiasm of 2023-24. It means developing a mature relationship wherein the public understands AI’s genuine capabilities and genuine risks, wherein companies are transparent about limitations, wherein harms are genuinely mitigated rather than ignored, and wherein benefits are distributed reasonably equitably. This footing is actually more stable than the hype that preceded it, because it rests on genuine achievement rather than inflated claims. AI can still be profoundly valuable—in healthcare, in education, in scientific research, in solving specific problems—but that value needs to be demonstrated through actual benefits rather than asserted through marketing. The public is willing to accept AI where it delivers genuine benefits, but they’re right to be sceptical of an industry that has repeatedly overpromised and underdelivered, that has concentrated wealth whilst externalising costs, that has ignored harms until forced to confront them. The question is whether the industry has the maturity to learn from this credibility crisis and operate differently, or whether we’ll continue on the current trajectory until public backlash forces change we should have embraced voluntarily.

Related reading: How Emotional AI Claims to Read Your Feelings — and Why It Probably Can’t, Developing Ethical Frameworks for AI Implementation, and What Is Information Communication Technology (ICT): A Concise Guide to ICT Basics.



Written by Scott Dylan