We live in an era where a video of a politician can be fabricated so convincingly that millions believe it’s real. Where an audio deepfake can impersonate a CEO and authorise fraudulent wire transfers. Where a news article about a stock market crash can be generated, posted, and deleted before anyone realises it never happened. This isn’t science fiction anymore. This is happening today, and most of us are completely unprepared for the implications. The trust that underpins civilised society—trust in media, in institutions, in the people we see on screens—is eroding at a speed we’re only beginning to comprehend.
I’ve spent the last fifteen years building and investing in technology companies. I’ve seen how innovation transforms industries, creates value, and occasionally, destroys it. But AI-generated content represents something fundamentally different. It’s not just a new tool or a new market opportunity. It’s a direct attack on the epistemological foundation of modern information society. When you can no longer trust what you see and hear, you can’t trust anything. That’s not progress. That’s catastrophe.
Understanding the Technical Capability
Let’s be clear about what’s actually possible now. Generative AI models, built on transformer and diffusion architectures, can produce text, images, video, and audio that are increasingly indistinguishable from authentic human-created content. We’ve moved past the obvious deepfakes, the ones with glitchy eyes and uncanny valleys. Modern systems can create realistic videos of people saying things they never said, with perfect lip sync, appropriate lighting, and natural body language. The audio can match vocal patterns so precisely that even trained listeners struggle to spot the fraud.
Text generation is perhaps even more insidious because it’s so much more difficult to detect. An AI can write a news article, a scientific paper, a financial analysis, or a legal brief with remarkable fluency. It can adopt specific writing styles, incorporate real data alongside fabricated details, and create narratives that are internally consistent and emotionally compelling. A recent study found that readers couldn’t reliably distinguish between human-written and AI-written news articles, even when explicitly told some articles were AI-generated. The implication is stark: we’re not going to detect our way out of this problem.
The Deepfake Problem in Real Time
Deepfakes were initially presented as a fringe concern, something that would only affect celebrities and public figures. That perspective was naïve. We’ve already seen deepfakes used in election interference campaigns, in fraud schemes targeting business leaders, and in extortion plots. A deepfake video of a political candidate can spread across social media faster than any correction or fact-check can catch up. By the time the falsity is established, millions have already formed their opinions. The damage is done, and the truth becomes just another claim in an increasingly unreliable information ecosystem.
Consider the widely reported 2024 incident in which fraudsters used deepfake video and audio to impersonate a multinational firm’s chief financial officer on a conference call, deceiving an employee into authorising transfers worth roughly twenty-five million dollars before the fraud was detected. Now multiply that across thousands of potential targets. Every CEO, every financial analyst, every political figure becomes vulnerable to impersonation. The economic implications alone are staggering. How much friction will be added to routine transactions once suspicion becomes the default? How much time and money will be wasted verifying authenticity? What happens to trust in markets when participants can’t be certain they’re dealing with who they think they are?
AI-Generated News: The Misinformation Multiplier
Perhaps the most concerning application is AI-generated news content. A bad actor can now generate dozens of plausible news articles about any topic, distribute them across hundreds of websites, and create the impression of a widespread consensus where none exists. This is different from traditional misinformation, which requires human effort to create and distribute. AI can industrialise falsehood at scale. A single person with access to a text generation model can flood the information ecosystem with thousands of articles in an afternoon.
The sophistication matters too. These aren’t crude, obviously false articles. They include real data points, legitimate citations, plausible narratives, and persuasive rhetorical structures. They can be targeted at specific demographics with tailored messaging. They can exploit existing biases and anxieties. They can be generated in multiple languages and distributed across different regions simultaneously. The coordination challenges that once limited misinformation campaigns no longer exist. One person with technical knowledge and malicious intent can wage an information warfare campaign that previously would have required an entire team.
The Collapse of Journalistic Authority
Journalism is already struggling. Traditional newsrooms have been gutted by digital disruption, classified advertising migration to specialised platforms, and the fragmentation of attention across countless content sources. Journalists are overworked, under-resourced, and increasingly pressured to produce content quickly rather than thoroughly. Now layer on top of this the problem of AI-generated content. How do journalists distinguish authentic sources from AI-generated fiction? How do they verify the authenticity of videos, audio, or images? The tools for verification are becoming increasingly inadequate for the challenges they face.
What’s particularly insidious is that AI-generated content can be used to discredit legitimate journalism. A deepfake video of a journalist saying something inappropriate or revealing a source can destroy their credibility and compromise an investigation. The attacks don’t need to be successful to be damaging. Simply suggesting that a particular piece of journalism might be AI-generated, or that a journalist’s appearance was artificially created, introduces doubt. In an environment of pervasive misinformation, doubt is often sufficient to undermine truth. Journalism’s entire authority depends on credibility, and credibility is increasingly difficult to establish when authenticity itself is in question.
Introducing Content Authentication: C2PA and Beyond
Recognising this existential threat, a coalition of technology companies, media organisations, and security experts, the Coalition for Content Provenance and Authenticity (C2PA), has developed an open standard for content authentication that travels with media throughout its lifecycle. The basic idea is elegant: embed cryptographic metadata into images, video, and audio that provides a complete chain of custody for that content. Who created it? When? What modifications have been made? What AI processes were applied? This metadata is cryptographically signed, making it extremely difficult to forge or modify without detection.
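To make the mechanism concrete, here is a minimal Python sketch of the underlying pattern. To be clear, this is not the actual C2PA format, which embeds manifests in the media file itself and signs them with X.509 certificate chains; the function names and manifest fields below are my own illustrations of the core idea: signed who-what-when claims bound to a hash of the exact content bytes.

```python
# Conceptual sketch of a C2PA-style signed provenance manifest.
# NOT the real C2PA wire format; this only illustrates the pattern of
# binding signed claims to a cryptographic hash of the asset.
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


def make_manifest(media: bytes, creator: str, actions: list[str],
                  key: Ed25519PrivateKey) -> dict:
    """Bind creator/edit metadata to the exact bytes of the asset."""
    claim = {
        "creator": creator,
        "actions": actions,  # e.g. ["captured", "cropped"]
        "content_sha256": hashlib.sha256(media).hexdigest(),
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    return {"claim": claim, "signature": key.sign(payload).hex()}


def verify_manifest(media: bytes, manifest: dict,
                    public_key: Ed25519PublicKey) -> bool:
    """Check the signature AND that the asset itself is unaltered."""
    payload = json.dumps(manifest["claim"], sort_keys=True).encode()
    try:
        public_key.verify(bytes.fromhex(manifest["signature"]), payload)
    except InvalidSignature:
        return False  # the manifest was forged or edited
    return hashlib.sha256(media).hexdigest() == manifest["claim"]["content_sha256"]


if __name__ == "__main__":
    key = Ed25519PrivateKey.generate()
    photo = b"...raw image bytes..."
    manifest = make_manifest(photo, "CameraModelX", ["captured"], key)
    print(verify_manifest(photo, manifest, key.public_key()))         # True
    print(verify_manifest(photo + b"!", manifest, key.public_key()))  # False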
The C2PA standard is supported by major technology platforms, media organisations, and camera manufacturers. Sony, Canon, and other camera makers are implementing it at the hardware level. Adobe, Microsoft, and other software companies are integrating it into their products. The intent is to make authenticity verification the default rather than an exception. When you see an image or video online, you should be able to verify its provenance just as easily as checking a food label for expiration dates.
The Limitations of Technical Solutions
But here’s the uncomfortable reality: technical solutions alone won’t solve this problem. The C2PA standard requires widespread adoption, and adoption requires coordination across competing companies, different sectors, and multiple countries. It’s technically and politically complex. Moreover, bad actors can strip metadata, create deepfakes without any authentication markers, and simply claim that authentication systems are themselves compromised. The cat-and-mouse game between defenders and attackers will continue indefinitely, with attackers constantly developing new techniques and defenders struggling to keep pace.
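A small sketch makes the asymmetry plain. A provenance-based verifier can only ever return three outcomes, and the names below are illustrative stand-ins of my own, not real C2PA APIs. The critical weakness is that "no manifest" is indistinguishable from honest content that was simply never signed.

```python
# Why stripped metadata is the weak point: provenance can prove tampering,
# but it can never prove absence. These helpers are hypothetical stand-ins.
from typing import Callable, Optional


def classify(media: bytes, manifest: Optional[dict],
             verify: Callable[[bytes, dict], bool]) -> str:
    """Trinary outcome: a verifier cannot condemn unsigned content."""
    if manifest is None:
        return "unverifiable"  # stripped, or never signed: no signal at all
    if not verify(media, manifest):
        return "tampered"      # signature or content-hash check failed loudly
    return "authentic"


# An attacker's cheapest move exploits the first branch: re-encode the
# file, drop the manifest, and the verifier has nothing left to flag.
print(classify(b"deepfake bytes", None, lambda m, mf: False))  # unverifiable
```

This is why the standard can prove that authenticated content has not been tampered with, but can never prove that unauthenticated content is fake.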
There’s also the problem of adoption inertia. For content authentication to be effective, it needs to be ubiquitous. A C2PA-authenticated video is only useful if viewers know to look for the authentication marker and understand what it means. If content authentication becomes standard for professional media but not for social media, then social media remains a vector for misinformation. If it’s standard in Europe and North America but not in Asia or Africa, then you’ve simply created different information ecosystems with different trust architectures. The problem gets solved locally but persists globally.
Regulatory Responses: The EU Digital Services Act
The European Union has responded to the AI and misinformation challenge with the Digital Services Act, which entered into force in November 2022 and became fully applicable to all platforms in February 2024. This legislation imposes strict obligations on large online platforms to address harmful content, including misinformation and AI-generated deepfakes. Platforms must conduct risk assessments, implement content moderation systems, and provide transparency about how they address harmful content. The financial penalties for non-compliance are severe: up to six percent of global annual turnover for serious violations.
The Digital Services Act is significant because it shifts responsibility for content authentication and verification from individual users to platforms. Platforms can no longer simply argue that they’re neutral intermediaries with no responsibility for what appears on their services. They must actively work to prevent the spread of harmful AI-generated content, deepfakes, and misinformation. Whether they’ll succeed is another question entirely, but the regulatory framework is now in place to demand they try.
The UK’s Online Safety Act: A Different Approach
The United Kingdom has taken a somewhat different approach with the Online Safety Act, which received Royal Assent in October 2023 and is being implemented in phases through 2025 and 2026. The UK framework focuses on protecting users from harmful content through a duty of care imposed on online services. Platforms must identify harms to children and vulnerable adults, take reasonable steps to mitigate those harms, and provide users with tools to control their experience. The Office of Communications (Ofcom) has been appointed as the regulator responsible for enforcement.
The UK approach is potentially more flexible than the EU’s rules-based Digital Services Act. Rather than prescribing specific technical measures, the Online Safety Act establishes a duty of care and allows Ofcom to assess whether platforms are meeting their obligations based on their specific circumstances. This could be more effective at adapting to emerging threats like sophisticated AI-generated content. However, it also introduces regulatory uncertainty. Platforms must invest in compliance infrastructure without knowing precisely what will be required of them.
The Trust Infrastructure Problem
At the deepest level, the challenge of AI-generated content is a challenge of trust infrastructure. Modern information society depends on institutions—media organisations, academic publishers, courts, regulatory agencies—that serve as arbiters of truth and authenticity. These institutions work because we collectively agree to trust them. They’ve built reputational capital over decades or centuries. They have processes designed to establish truth through rigorous investigation, peer review, or cross-examination. They have something at stake; their credibility is their currency.
But those institutions are increasingly fragile. Trust in media has declined sharply. Politicians and other figures routinely dismiss inconvenient journalism as fake news. Academic publishing has been disrupted by predatory journals and replication crises. Courts struggle with digital evidence. Regulatory agencies lack the resources to investigate every potential fraud. When institutions are weakened, the alternatives are worse. We’re left with a landscape where everyone must be their own fact-checker, where trust is determined by tribal affiliation rather than evidence, where misinformation spreads freely because there’s no trusted authority to counter it.
The Economic Implications
There’s a genuine economic cost to the erosion of trust in content authenticity. Financial markets depend on accurate information. If traders can’t be certain that news about a company is authentic, they’ll demand risk premiums. Transactions will become more expensive. Deals will fall through. The cost of capital will rise for companies most vulnerable to deepfake attacks. Insurance companies will create new products to cover deepfake fraud, and those products will be expensive. Businesses will invest heavily in verification infrastructure, duplicating effort across countless organisations. In aggregate, the economic cost of AI-generated content misinformation could be substantial.
Political campaigns will face similar challenges. A deepfake video of a candidate could swing elections. Campaigns will need to invest in rapid-response teams capable of debunking deepfakes in real time. They’ll need authentication experts, legal teams, and media analysts standing by 24/7. The cost of political participation will increase. The advantage will go to candidates and campaigns with the largest budgets. Democracy itself could be distorted not just by misinformation, but by the defensive costs of fighting it.
What Individuals Can Do
The challenge feels overwhelming at the individual level. You can’t personally verify the authenticity of every piece of content you encounter. You can’t implement C2PA standards or regulatory frameworks. But you can develop better media literacy. You can slow down before sharing. You can check multiple sources. You can look for authentication metadata when it’s available. You can be appropriately sceptical of content that confirms your existing beliefs; that’s often exactly the content designed to manipulate you. You can support institutional journalism by paying for news when possible. You can report deepfakes and misinformation when you encounter them.
More fundamentally, you can resist the slide toward complete epistemological breakdown. That sounds abstract, but it matters. When you choose to engage carefully with information, to support trusted institutions, to demand evidence before accepting claims, you’re defending the infrastructure that allows society to function. You’re not saving the world, but you’re not accelerating its collapse either. And in an environment where the default trajectory is toward increasing misinformation and decreasing trust, refusing to accelerate is itself an act of resistance.
The Role of AI Companies and Platforms
Technology companies have a particular responsibility here. Companies that build and deploy generative AI systems need to take seriously the potential for misuse. That means implementing safety measures, conducting responsible disclosure of vulnerabilities, and collaborating with researchers and platforms to address emerging harms. It means thinking carefully about default settings and designing systems to make responsible use the path of least resistance. It means being transparent about capabilities and limitations. It means investing in content provenance and authentication technology.
Platforms that host user-generated content have equally important responsibilities. They need to invest in detection capabilities, in partnership with researchers and external experts. They need to respond swiftly when deepfakes and misinformation are discovered. They need to amplify reliable sources and demote low-quality information. They need to label or remove content when its authenticity is in doubt. And they need to do all of this while respecting freedom of expression and avoiding the impossible task of perfect content moderation.
A Path Forward
The death of trust in content is not inevitable, but it’s increasingly probable if we don’t act decisively. We need technical solutions like C2PA, but we also need regulatory frameworks that hold platforms accountable. We need institutional journalism to survive and thrive, because no amount of AI regulation will replace the investigative work that journalists do. We need media literacy programmes in schools. We need investment in research on detection and resilience. We need cross-sector collaboration between technology companies, media organisations, government agencies, and academic institutions.
Most importantly, we need to stop treating AI-generated content misinformation as an isolated problem to be managed and start treating it as a civilisational challenge that will shape the future of democratic discourse, market function, and social trust. The technical capabilities are arriving faster than our social and institutional responses. That gap is where catastrophe lives. Closing it requires urgency, resources, and a genuine commitment to defending the epistemological foundations that allow civilised society to function. We’re not doing enough yet. We need to do more.
Scott Dylan is a Dublin-based British entrepreneur, investor, and mental health advocate. He is the Founder of NexaTech Ventures, a venture capital firm with a £100 million fund supporting AI and technology startups across Europe and beyond. With over two decades of experience in business growth, turnaround, and digital innovation, Scott has helped transform and invest in companies spanning technology, retail, logistics, and creative industries.
Beyond business, Scott is a passionate campaigner for mental health awareness and prison reform, drawing from personal experience to advocate for compassion, fairness, and systemic change. His writing explores entrepreneurship, AI, leadership, and the human stories behind success and recovery.