Privacy feels like a quaint concern in 2026. Your devices track your location. Your apps track your behaviour. Your utilities track your consumption. Companies you’ve never heard of hold detailed profiles of your interests, purchasing patterns, and vulnerabilities. Governments deploy sophisticated surveillance infrastructure. In that context, worrying about privacy might seem naive or futile. And yet, privacy remains one of the most important rights we have—and it’s increasingly under pressure from AI.
The question I get asked frequently, particularly by people aware of my work at Nexatech Ventures, is: What should I actually worry about? Privacy concerns can feel overwhelming and abstract. GDPR compliance. Data brokers. Facial recognition. AI training. How do you separate genuine risks from paranoia? How do you take protective action without withdrawing entirely from digital life—which, practically speaking, isn’t possible anymore?
Let me be direct: privacy is genuinely at risk in the AI age. But most people’s worry is misdirected. They fixate on Facebook and other Big Tech names whilst tolerating smaller companies that harvest data recklessly. They fret about implausible scenarios whilst ignoring probable harms. Understanding actual privacy risks means cutting through the noise and focusing on what genuinely matters.
GDPR Enforcement and Reality
The EU’s General Data Protection Regulation, retained in British law as the UK GDPR, is the world’s strongest privacy law. On paper, it’s impressive: it gives individuals rights to access their data, request deletion, understand how their data is used, and move data between services. Companies must obtain genuine consent before using personal data. They must implement privacy by design. They must keep data only as long as necessary. Violations carry substantial fines.
But here’s the reality: GDPR enforcement is inconsistent at best, toothless at worst. The UK’s Information Commissioner’s Office (ICO) has the authority to fine companies up to £17.5 million or four per cent of global annual turnover, whichever is higher, for serious breaches. Sounds formidable, right? Except the ICO’s budget is limited, cases move slowly, and penalties are often negotiated down. Years can pass between announcement and actual enforcement. Smaller companies operating in grey areas often calculate that the risk of a modest fine is acceptable compared to the value of the data.
There’s also the enforcement gap around international data flows. Data protection laws matter less if your data leaves the UK for jurisdictions without equivalent protection. Tech companies have become quite sophisticated at using legal mechanisms to move data internationally, creating privacy leakage that existing regulation struggles to prevent. A company might comply with GDPR in the UK whilst simultaneously sending your data to American servers where it’s available to government agencies under different legal frameworks. Technically compliant. Practically privacy-eroding.
The Data Broker Problem
Here’s what I genuinely worry about: data brokers. These are companies you’ve likely never heard of, that you’ve never interacted with, and that hold detailed dossiers about you. Data brokers aggregate information from hundreds of sources: public records, commercial transactions, social media, browsing history, purchase history, credit reports, location data, and much more. They stitch all this together into comprehensive profiles that they sell to marketers, financial institutions, insurance companies, employers, and potentially others.
The data broker industry operates in regulatory shadows. Whilst GDPR applies, enforcement is minimal. You likely have no idea which brokers hold your data, what information they possess, or who’s bought access to it. You have theoretical rights to request access and deletion, but exercising those rights is enormously time-consuming. You’d need to identify the brokers (hard), contact them (they make it difficult), prove your identity (to their satisfaction), request your data (often denied or incomplete), and request deletion (which they often dispute). Most people don’t bother.
Why does this matter? Because that data gets used to make decisions about you. Insurers use it to set premiums. Lenders use it to determine creditworthiness. Employers use it in hiring. Marketing companies use it to target you with information designed to manipulate your behaviour. The problem isn’t that data exists; it’s that you have no visibility or control, and the data is often inaccurate. A data broker might mistake you for someone else, hold outdated information, or combine data points that misrepresent your actual situation. You’re being assessed, judged, and treated differently based on profiles you didn’t create and can’t see.
AI Training on Personal Data
Now add AI to that equation. Large language models—systems like the major AI assistants available today—are trained on vast amounts of text scraped from the internet. That text includes your personal data. Your online purchases, if publicly posted. Your social media history. Your blog posts. Your forum discussions. Your email if it was compromised and released in a breach. All of this becomes training data for AI systems.
Here’s the privacy risk: personal information can be memorised by these systems and potentially extracted. Researchers have demonstrated that you can query large language models and retrieve specific personal information they’ve memorised—sometimes contact details, sometimes financial information, sometimes sensitive personal revelations that someone posted online years ago. It’s not that the AI system intentionally stored and indexed this information; it’s that the training process incorporated so much data that personal details were incidentally memorised.
The broader concern is that AI systems trained on your personal data are being commercialised: companies are building profitable products from data you never consented to having incorporated. You didn’t agree that your emails or posts would become part of an AI training dataset. You certainly didn’t agree that companies could commercialise AI systems trained on your data. Yet that’s exactly what’s happening. It’s particularly problematic with healthcare and financial data, where highly sensitive personal information is sometimes incorporated into training datasets without explicit consent.
Facial Recognition and Identification
Facial recognition deserves particular attention because it’s fundamentally different from other data collection. Most data collection is passive—you’re being tracked but not identified. Facial recognition is identification. It links your face to your identity. And it’s proliferating.
Governments use facial recognition for border control, law enforcement, and surveillance. Retailers use it to track customers. Airports use it for security. Financial institutions use it for identity verification. The capability is advancing rapidly, and accuracy has improved dramatically. Low-resolution footage and difficult lighting still cause problems, but in controlled conditions modern facial recognition is extraordinarily accurate.
The privacy concern is fundamental: facial recognition removes anonymity in public spaces. You cannot walk down a street without your presence being documented. You cannot attend a protest without being identified. You cannot visit locations without that visiting being recorded. This is qualitatively different from other data collection. It’s the infrastructure of mass surveillance.
The UK has no explicit legal prohibition on facial recognition, though civil liberties organisations have challenged its use repeatedly. The ICO has expressed concerns but lacks a clear regulatory framework for managing it. This is a gap in protection that matters enormously.
The Memorisation Problem with Large Language Models
Let me delve into the specific problem with LLM memorisation because it’s a privacy issue people often don’t understand. These systems are trained on enormous datasets. They learn patterns. But alongside learning patterns, they incidentally memorise specific examples. If the training data contains “My phone number is 07700 900123,” the model doesn’t just learn that phone numbers have a particular format; it potentially memorises that specific number.
This is particularly concerning with sensitive data. Health information, financial details, personal contact information—if these appear in training data, they might be memorised. You can extract this information by querying the model in the right way. The model isn’t trying to breach privacy; it just happens to have memorised the data as a side effect of training.
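To make the mechanism concrete, here’s a minimal sketch of the kind of probe researchers use, written in Python against the Hugging Face transformers library and the small public GPT-2 model. The prompt is purely illustrative; whether any particular string can be recovered depends entirely on what was in the training data.

```python
# Minimal sketch of an extraction probe: feed a model the prefix of a
# string that may have appeared in its training data and see whether it
# completes it verbatim. Uses the public GPT-2 model via Hugging Face
# transformers; the prompt below is illustrative, not a known leak.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # any causal language model works for the demo
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Prefix of a string that might exist somewhere in the training corpus.
prompt = "For enquiries, ring me on 07700 9"

inputs = tokenizer(prompt, return_tensors="pt")

# Greedy decoding: a confident verbatim continuation is one signal
# (not proof) that the model memorised the original text.
outputs = model.generate(
    **inputs,
    max_new_tokens=12,
    do_sample=False,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Published extraction attacks are far more elaborate, generating thousands of candidate continuations and ranking them by the model’s own confidence, but the core step is exactly this kind of probe.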
Companies developing large language models are becoming more aware of this problem and implementing techniques to reduce memorisation. But the techniques aren’t perfect, and the tension between capability and privacy persists. The more comprehensive the training data, the more capable the model. But the more data incorporated, the greater the memorisation risk. It’s an inherent trade-off, and one that’s usually resolved in favour of capability rather than privacy.
Nexatech’s Approach to Privacy-Respecting AI
Through Nexatech Ventures, I deliberately invest in companies building privacy-respecting AI. This matters to me both as a technology investor and as someone who believes privacy is fundamental. Privacy-respecting AI isn’t a contradiction; it’s a design choice. It’s possible to build capable AI systems that don’t require harvesting vast amounts of personal data without consent.
What does this look like in practice? Differential privacy techniques that add noise to data so that individual records can’t be identified. Federated learning that keeps data on individual devices and only aggregates insights rather than centralising raw data. Encryption that keeps data secure throughout processing. Synthetic data generation that trains models on artificial data rather than real personal data. These approaches are more computationally expensive and sometimes reduce model capability slightly. But they’re entirely viable, and companies should be required to use them when personal data is involved.
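To make the first of those techniques concrete, here’s a toy differential-privacy sketch in Python. The dataset and epsilon values are invented for illustration; a production system would use an audited library rather than hand-rolled noise.

```python
# Toy illustration of differential privacy: release a count with
# Laplace noise calibrated to the query's sensitivity and a privacy
# budget epsilon, so no individual record can be inferred from the output.
import numpy as np

rng = np.random.default_rng(42)

def dp_count(records: list[bool], epsilon: float) -> float:
    """Differentially private count of True values.

    A counting query has sensitivity 1 (adding or removing one person
    changes the true count by at most 1), so Laplace noise with scale
    1/epsilon yields epsilon-differential privacy.
    """
    true_count = sum(records)
    noise = rng.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Invented data: which of 1,000 people have some sensitive attribute.
population = [bool(rng.integers(0, 2)) for _ in range(1000)]

# Smaller epsilon means stronger privacy and a noisier answer.
for epsilon in (0.1, 1.0, 10.0):
    print(f"epsilon={epsilon}: noisy count = {dp_count(population, epsilon):.1f}")
```

The epsilon parameter makes the capability-versus-privacy trade-off explicit: lower values give stronger guarantees but noisier answers, the same tension the industry usually resolves the other way.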
The problem is incentives. Companies that use less data, collect it less granularly, and respect privacy more carefully are often less profitable than competitors that maximise data collection without restraint. Until regulation or market pressure forces the issue, most companies will choose maximum data extraction. This is where policy matters. We need regulatory requirements that make privacy-respecting approaches necessary, not optional.
Practical Steps to Protect Your Privacy
So what should you actually do? Here are practical steps that make a real difference without requiring you to abandon digital life entirely.
First, use privacy-focused tools. Password managers secure your accounts with strong, unique passwords. Virtual phone numbers and email aliases limit how much personal contact information you expose. VPNs encrypt your internet traffic, preventing your ISP from seeing which sites you visit. These tools won’t make you invisible, but they reduce the data trails you leave.
Second, be intentional about what you share online. Everything you post potentially becomes training data for AI systems. Every email you send might be stored on company servers. Every review, comment, or post contributes to databases about you. Ask yourself before sharing: do I want this potentially combined with other data and analysed by AI systems? That doesn’t mean never sharing; it means being thoughtful about what you publicly attach to your identity.
Third, manage your privacy settings across platforms and services. Most companies default to maximum data collection. You can usually reduce this by navigating settings and opting out of optional data collection. It’s tedious and companies intentionally make it difficult, but it’s worth doing.
Fourth, exercise your data rights. GDPR gives you rights to access your data and request deletion. Contact platforms you use and ask what data they hold. Request deletion of data you don’t want them retaining. It’s time-consuming, but it matters.
Fifth, be sceptical of free services. When you’re not paying, you’re often the product. Free email services, social networks, and apps often monetise through data collection. Paid alternatives that don’t monetise users’ data are worth considering.
Sixth, monitor your digital footprint. Services like Google’s Data & Privacy dashboard or third-party data broker search tools show you what’s publicly findable about you. It’s sobering, but it’s useful information.
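Part of that check can be automated. As one hypothetical example, here’s a short Python sketch against the Have I Been Pwned v3 API (a service I’m choosing for illustration, not one named above), which reports which publicly known data breaches include a given email address. It requires an API key from that service.

```python
# Hypothetical footprint check: ask the Have I Been Pwned v3 API which
# known data breaches include a given email address.
import requests

API_KEY = "your-hibp-api-key"  # placeholder; issued by haveibeenpwned.com

def breaches_for(email: str) -> list[str]:
    """Return the names of known breaches containing this email address."""
    resp = requests.get(
        f"https://haveibeenpwned.com/api/v3/breachedaccount/{email}",
        headers={
            "hibp-api-key": API_KEY,
            "user-agent": "footprint-check-sketch",  # the API requires one
        },
        timeout=10,
    )
    if resp.status_code == 404:
        return []  # the API signals "no breaches found" with a 404
    resp.raise_for_status()
    return [breach["Name"] for breach in resp.json()]

print(breaches_for("you@example.com"))
```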
What Regulation Should Address
Individual protective measures matter, but they’re insufficient. Structural change requires regulation. The UK should strengthen GDPR enforcement with better ICO funding and clearer penalties. It should require explicit consent for AI training on personal data, not just vague terms of service. It should establish clear frameworks around facial recognition, probably restricting its use in many contexts. It should regulate data brokers, requiring transparency about what data they hold and genuinely easy mechanisms for access and deletion.
The EU is moving in some of these directions through the AI Act, which subjects high-risk AI applications to particular oversight and imposes transparency obligations on the companies that build them. The UK should track these developments or, better still, exceed them. We have the opportunity to set standards that other countries follow.
The Bigger Picture
Privacy in the AI age isn’t about keeping secrets. It’s about having agency. It’s about data about you not being used to manipulate you without your knowledge. It’s about retaining some boundary between your private self and the profiles companies and governments create. It’s about not living under constant surveillance where your every movement and interaction is documented and analysed.
The risks are real. Data brokers hold profiles on most adults. Your data is incorporated into training datasets for AI systems. Facial recognition is proliferating. Data breaches expose sensitive information regularly. Companies prioritise growth over privacy protection. Governments deploy surveillance. These are facts, not paranoia.
But the risks are also manageable if you understand them and take appropriate action. You can’t achieve perfect privacy in a digital age—that’s not realistic. But you can significantly improve your privacy posture through thoughtful choices. And you can support policy changes that make it easier to protect privacy across the board. What you can’t do is ignore the problem and hope it doesn’t affect you. Privacy erosion is gradual, but it’s real.
My Personal Approach
I practise what I preach. I use encrypted email and VPNs. I’m thoughtful about what I share publicly, recognising that whatever I post might be analysed by AI. I work only with companies building privacy-respecting products. I support policy changes that strengthen privacy protection. And I’m transparent about the trade-offs: complete privacy in digital life isn’t possible if you want to participate in modern society, so we’re managing risk, not eliminating it.
But perhaps most importantly, I remain sceptical of the idea that privacy isn’t worth protecting. I know that the default trajectory, without conscious effort, is toward complete transparency—toward companies and governments knowing everything about us, using that knowledge to influence our behaviour, restrict our choices, and extract value from our data. That’s not the world I want to live in, and it’s not the world I want my work at Nexatech to help build. Privacy is worth fighting for, even imperfectly, even in difficult circumstances. The question is whether we collectively decide to protect it or whether we accept its gradual erosion as the cost of digital convenience.
Scott Dylan is a Dublin-based British entrepreneur, investor, and mental health advocate. He is the Founder of NexaTech Ventures, a venture capital firm with a £100 million fund supporting AI and technology startups across Europe and beyond. With over two decades of experience in business growth, turnaround, and digital innovation, Scott has helped transform and invest in companies spanning technology, retail, logistics, and creative industries.
Beyond business, Scott is a passionate campaigner for mental health awareness and prison reform, drawing from personal experience to advocate for compassion, fairness, and systemic change. His writing explores entrepreneurship, AI, leadership, and the human stories behind success and recovery.