
AI-Powered Personalisation: Where Convenience Ends and Surveillance Begins


The Seductive Promise of Personalisation

Google launched Personal Intelligence in January 2026, and the product demos were impressive. The AI anticipates what you want before you articulate it. It knows your preferences and habits. It surfaces information relevant to you at the exact moment you need it. It organises your digital life intelligently. It’s genuinely useful. And it’s genuinely troubling.

I watched the demos with the ambivalence that defines how I think about personalisation in AI. On one level, I understand the appeal. I’m a busy person with a lot of information coming at me constantly. Something that intelligently filters that information, that surfaces what’s relevant, that anticipates what I need—that’s genuinely valuable. I can see why people want this. I can see why companies want to build it. But I also understand what’s required to build it, and that’s where the trouble lies.

Personalisation requires data. Lots of it. Data about what you search for, what you click on, what you read, what you buy, where you go, who you communicate with, how long you spend on things, what you skip past, what you come back to. The more granular the data, the more sophisticated the personalisation can be. And the more data you surrender, the more someone knows about you—not just what you do, but who you are, what you want, what you fear, what you’re ashamed of, what you’re vulnerable about.

The promise of personalisation is convenience. The price of personalisation is that every moment of your digital life becomes visible to companies building profiles of you that are more comprehensive and more revealing than you’d ever voluntarily disclose. The line between helpful and invasive isn’t obvious, and the companies building these systems have every financial incentive to push the boundary in directions that favour more data collection, more profiling, more personalisation.

Understanding Modern Data Collection

Most people don’t fully grasp how much data is being collected about them. You think you’re using a search engine, or reading email, or checking social media. In reality, you’re interacting with highly sophisticated data collection and analysis systems. Every search is logged. Every email you send and receive is scanned for keywords. Every webpage you visit is tracked. Every click is recorded. Your location is tracked through your device’s GPS and through the websites and apps you visit. Your device’s identifiers are used to follow you across the internet. Your interactions with ads are tracked in extraordinary detail.
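
To make that concrete, here is roughly the shape of a single tracked event once a collection pipeline has assembled it. This is a purely illustrative Python sketch: the field names are hypothetical, not taken from any real tracker, but the categories match what these systems commonly log.

```python
# A purely illustrative sketch of one behavioural event. Field names
# are hypothetical, but the categories match what trackers commonly log.
click_event = {
    "device_id": "4f2a-...",              # persistent identifier that follows you across sites
    "timestamp": "2026-01-14T09:31:07Z",
    "event": "click",
    "url": "https://example.com/article",
    "referrer": "https://search.example.com/?q=knee+pain",
    "dwell_seconds": 42,                  # how long you stayed before moving on
    "geo": {"lat": 53.48, "lon": -2.24},  # coarse location from IP or GPS
    "session_id": "a91c-...",             # stitches events into a browsing journey
}
# Multiply this by every click, every day, for years,
# and the profile assembles itself.
```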

Google is particularly aggressive about this because search is fundamentally a surveillance business masquerading as a service. Google doesn’t charge you for search because you’re not the customer—you’re the product. Advertisers are the customers, and they’re paying for access to an incredibly detailed understanding of what you want. Google’s ability to target ads with precision comes from having tracked your behaviour exhaustively. When Google launches Personal Intelligence, what they’re doing is taking all that behavioural data and using it to build a personal AI that anticipates your needs. The convenience is real. The data collection that enables it is extraordinary.

What’s changed recently is that AI makes this data collection more valuable and more obvious. With simple data analytics, companies could tell that you searched for symptoms, then searched for doctor appointments, and infer that you’re sick. That’s already creepy. But with modern AI, they can predict your health status before you go to the doctor. They can model your psychology, your vulnerabilities, your decision-making processes. They can predict your behaviour with unsettling accuracy. The same data that merely seemed intrusive a few years ago now enables something that starts to feel like mind-reading. That’s not actually more intrusive on a technical level—the data collection was already invasive—but it reveals how invasive it actually was.

The GDPR Framework and Why It Matters

The General Data Protection Regulation, which came into force in Europe in 2018, attempted to establish principles for how personal data should be handled. The framework is built around several key ideas: data minimisation (companies should collect only data they actually need), purpose limitation (data collected for one purpose shouldn’t be used for entirely different purposes), transparency (people should know what data is being collected and how it’s used), and consent (for sensitive data, explicit agreement is required).
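
To see what those principles mean in engineering terms, here is a minimal Python sketch of purpose limitation: each piece of data carries the purposes it was collected for, and any other use is refused. The class names and purpose labels are hypothetical, not drawn from any real compliance library.

```python
from dataclasses import dataclass, field
from enum import Enum


class Purpose(Enum):
    """Hypothetical purposes a user might consent to separately."""
    SEARCH_RELEVANCE = "search_relevance"
    AD_TARGETING = "ad_targeting"
    AI_TRAINING = "ai_training"


@dataclass
class PersonalData:
    """A piece of data tagged with the purposes it was collected for."""
    value: str
    consented_purposes: set = field(default_factory=set)

    def use_for(self, purpose: Purpose) -> str:
        # Purpose limitation: refuse any use the user never agreed to.
        if purpose not in self.consented_purposes:
            raise PermissionError(f"no consent recorded for {purpose.value}")
        return self.value


# Browsing data consented to for search relevance only.
history = PersonalData("jazz venues london", {Purpose.SEARCH_RELEVANCE})
history.use_for(Purpose.SEARCH_RELEVANCE)   # fine: this purpose was consented to
# history.use_for(Purpose.AI_TRAINING)      # would raise PermissionError
```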

These principles are fundamentally at odds with how modern personalisation works. Personalisation depends on maximal data collection, because more data enables better personalisation. It depends on using data in ways that weren’t necessarily contemplated when it was collected—you might collect browsing data for showing relevant ads, but that same data is incredibly useful for training AI systems that predict behaviour. It depends on obscuring how data is used, because if people understood the detail of how their data was being mined for insights about them, many would object. And it depends on weak consent mechanisms, where people agree to terms of service that are hundreds of pages long and comprehensively permit whatever the company wants to do.

GDPR tries to create meaningful constraints here. Data minimisation is straightforward: companies shouldn’t be collecting everything, everywhere, forever. Google’s approach of collecting comprehensive behavioural data for years, even for purposes you didn’t originally consent to, violates the spirit of data minimisation. Purpose limitation means that if you agreed to Google using your data to show you relevant search results, Google shouldn’t then repurpose that same data to build an AI that predicts your behaviour. That requires explicit consent, which people haven’t really given. Transparency means companies should clearly explain how they’re using your data. Most privacy policies are deliberately obscure. Consent means people should actively choose to share data for specific purposes, not have data collection be the default.

The UK Information Commissioner’s Office has taken a close interest in this. They’ve examined how companies use data for personalisation and have increasingly found that companies aren’t complying with GDPR. Data is being collected beyond what’s necessary. Purposes are being changed without explicit consent. The legal bases claimed for processing are weak. Companies collect vast amounts of data and then claim it’s all necessary for personalisation, which is convenient for the company but doesn’t necessarily respect user rights.

The Cambridge Analytica Moment That Didn’t End

Cambridge Analytica was the scandal that revealed to the general public just how detailed the personal profiles built from digital data can be, and how those profiles can be exploited for political purposes. In 2013, a researcher developed a personality quiz app that was downloaded by around 270,000 people. The app requested permission to access users’ Facebook data, and users granted it. But Facebook’s privacy model at the time allowed the app to also access the data of those users’ friends, without the friends’ consent. The result: data on some 87 million people was harvested without their knowledge.

Cambridge Analytica used that data to build detailed psychological profiles of voters. They determined which voters were persuadable and on which issues. They created targeted messaging designed to exploit identified psychological vulnerabilities. They used this for political campaigns and, investigations suggested, for spreading disinformation. The scandal revealed that personal data could be weaponised in ways that undermined democratic processes. It was shocking to the public, damaging to Facebook, and led to regulatory attention.

What’s remarkable is that Cambridge Analytica was exposed and shut down, but the underlying problem didn’t go away. Data is still being collected comprehensively. Psychological profiles are still being built. Targeting is still being used to manipulate behaviour. The difference is that it’s mostly happening for commercial purposes now rather than political ones, and people have become somewhat inured to the idea that they’re being profiled and targeted. Cambridge Analytica was the dramatic revelation that broke the illusion of privacy in the digital age. Now, years later, we’re supposed to just accept that our digital lives are comprehensively surveilled.

The issue with Personal Intelligence and similar systems is that they’re essentially applying the Cambridge Analytica playbook for benevolent purposes. Instead of manipulating voters, they’re personalising your search results and email. Instead of exploiting psychological vulnerabilities for political ends, they’re using them to show you things you probably want to see. The technical capability is the same. The data collection is the same. The psychological profiling is the same. The intent is different, which matters. But the underlying power dynamics haven’t changed. You’re still being modelled and manipulated, even if the manipulation is pointed toward showing you things you probably like.

The Line Between Helpful and Invasive

Where does helpful personalisation end and surveillance begin? This isn’t a technical question—it’s a values question. From a technical standpoint, there’s a continuum. At one end, you might personalise based on data you’ve explicitly collected about preferences stated by the user. If I tell Gmail “I want to see email from my wife first,” that’s personalisation based on stated preference. Further along the continuum, you personalise based on behaviour. If I consistently open emails from certain people before others, Gmail might infer that I want to see those people’s emails first. That’s personalisation based on inferred preference. Even further, you personalise based on psychological profiling. If you’ve determined from my behaviour that I have certain vulnerabilities or interests, you target content or ads exploiting those insights. That’s where it gets morally shadier.
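
To make the first two points on that continuum concrete, here is a toy Python sketch of the difference between ranking email on a stated preference and ranking it on an inferred one. All the names and addresses are invented for illustration.

```python
from collections import Counter

# Stated preference: the user explicitly told us who matters.
stated_vips = {"wife@example.com"}

# Inferred preference: the system watched behaviour and guessed.
open_counts: Counter = Counter()


def record_open(sender: str) -> None:
    """Log that the user opened an email from this sender."""
    open_counts[sender] += 1


def rank(senders: list) -> list:
    # Stated preferences outrank inferred ones; inferred ones are
    # ordered by how often the user opened that sender's mail.
    return sorted(senders, key=lambda s: (s not in stated_vips, -open_counts[s]))


record_open("boss@example.com")
record_open("boss@example.com")
record_open("newsletter@example.com")
print(rank(["newsletter@example.com", "boss@example.com", "wife@example.com"]))
# ['wife@example.com', 'boss@example.com', 'newsletter@example.com']
```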

What companies describe as personalisation can exist anywhere on this spectrum. Some personalisation is genuinely user-serving. If I tell a music app I like jazz, and it plays more jazz, that’s helpful. If a shopping app learns that I prefer certain brands and shows them to me, that’s fine. But when personalisation means tracking me across the internet, building a detailed psychological profile, inferring unstated vulnerabilities, and using that to manipulate my behaviour—that’s not really personalisation for me, that’s surveillance. I’m not benefiting from personalisation because I’m not the one controlling what’s being personalised based on what. The company is controlling it based on what serves the company’s interest.

The invasiveness isn’t necessarily about the volume of data collected. It’s about whether you’re maintaining agency. If I understand what’s being collected and can control how it’s used, that’s manageable. If I don’t know what’s being collected and can’t control how it’s used, that’s invasive. The promise of Personal Intelligence is that you’ll benefit from personalisation—the fine print is that you lose any agency over how you’re being profiled and what that profiling is used for.

There’s also a power asymmetry that matters. Google understands me better than I understand myself. They have models of my behaviour, my preferences, my vulnerabilities. I have no equivalent understanding of them. I don’t get to see their algorithms. I don’t get to see their data about me. I can’t audit whether they’re treating my data fairly or using it for purposes I’d disapprove of. This asymmetry is fundamental to why surveillance is concerning. Not because companies will necessarily do something terrible with data, but because the capacity to do something terrible, without accountability, is concentrated entirely on one side.

The Business Model Problem

The core issue with most AI-powered personalisation is that it’s built on a business model fundamentally misaligned with user interests. Google doesn’t make money from being useful to you. Google makes money from selling your attention to advertisers. Your eyeballs, your clicks, your behaviour—those are what’s valuable. The personalisation is just the mechanism to make you more valuable to advertisers. This creates systematic pressure to collect more data, to build more detailed profiles, to personalise in ways that increase engagement even if that engagement isn’t in the user’s interest.

Consider what happens when Personal Intelligence is working well. It anticipates what you want and serves it to you without you having to search. That’s convenient. But from an advertising perspective, that’s also dangerous, because you’re less likely to click on ads if the AI is pre-empting your needs. So there’s pressure to personalise in ways that show you more ads, that create needs you didn’t know you had, that exploit psychological vulnerabilities to increase engagement. The personalisation becomes less about serving you and more about optimising your value to advertisers.

This is why you see patterns like notification abuse—apps engineered to send notifications designed to trigger compulsive checking. Or algorithmic feeds designed to be addictive rather than informative. Or recommendations designed to extend engagement rather than satisfy needs. These aren’t accidents. They’re the natural result of companies being optimised for engagement and advertising value rather than for user wellbeing. When AI personalisation is added to this dynamic, it becomes even more powerful because the AI can personalise specifically for addictiveness and engagement.

Investors often ask me about the ethics of this. My response is that from a pure investment perspective, companies optimising primarily for user wellbeing might make less money than companies optimising for engagement and advertising value. But from a business perspective over a longer time horizon, they’re more sustainable. Users eventually resent being manipulated. Regulators eventually intervene. Trust deteriorates. The most successful technology companies over time will probably be those that align their financial incentives with user interests, not those that depend on extracting value from users.

What Regulation Is Actually Trying to Do

GDPR and emerging AI regulations aren’t trying to stop personalisation or data-driven services. What they’re trying to do is create friction that makes companies think about whether data collection is actually necessary, whether uses are legitimate, whether people have genuinely consented. They’re trying to restore some balance of power by giving people the right to know what data is being collected, the right to correct it, the right to move it, the right to delete it in some cases.

The UK ICO has been particularly aggressive about this. They’ve examined how companies handle data for personalisation and have increasingly required that companies only process data where there’s a legitimate basis for it. That sounds abstract, but the practical effect is that companies can’t just claim they need all your data because personalisation is valuable to you—they need to justify why each specific piece of data is necessary for each specific purpose.

Europe has also introduced more specific AI regulation. The AI Act, which came into force in August 2024, requires that high-risk AI systems have documentation about training data and outputs. For AI used in personalisation, companies need to be transparent about how decisions are made. They need to be able to explain why you were shown something, not just that it was personalised to you.

What’s interesting is how these regulations are forcing companies to make explicit choices that were previously implicit. If I need to document that I’m collecting behavioural data to train a personalisation AI, I need to be clear about what data I’m collecting and why. If I need consent for that purpose, I can’t hide it in a 200-page terms of service. The company can’t just assume personalisation is obviously desirable to everyone. The regulation requires disclosure and active consent.

This creates pressure toward more privacy-respecting approaches. Companies starting fresh might decide that less data, more transparent purposes, and more robust consent mechanisms are actually a better path than trying to justify comprehensive data collection. Companies being regulated might discover that they don’t actually need as much data as they thought, and that leaner systems work well and face less regulatory risk.

Privacy-First Personalisation: A Different Path

What’s interesting is that companies don’t actually need to harvest comprehensive behavioural data to deliver personalisation. You can personalise based on stated preference. You can personalise based on real-time behaviour in the current session without retaining years of historical data. You can personalise locally—the AI runs on the user’s device rather than in the cloud, so user data isn’t being sent to the company’s servers. You can give users control over personalisation, letting them see what the AI knows about them and adjust it.
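
As a minimal sketch of the session-only idea (the names here are hypothetical): the signal lives in a short-lived session object, and nothing persists or leaves the device once the session ends.

```python
class SessionPersonaliser:
    """Personalises within the current session only: the click signal
    is held in memory on the device and discarded when the session ends."""

    def __init__(self) -> None:
        self._topic_clicks: dict = {}

    def record_click(self, topic: str) -> None:
        self._topic_clicks[topic] = self._topic_clicks.get(topic, 0) + 1

    def rerank(self, items: list) -> list:
        # items are (topic, headline) pairs; topics clicked in this
        # session float to the top, everything else keeps its order.
        return sorted(items, key=lambda it: -self._topic_clicks.get(it[0], 0))


session = SessionPersonaliser()
session.record_click("jazz")
print(session.rerank([("politics", "Budget vote"), ("jazz", "New quartet review")]))
# [('jazz', 'New quartet review'), ('politics', 'Budget vote')]
# When the session object is discarded, so is everything it learned.
```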

At Nexatech Ventures, we’ve been investing in companies building privacy-first personalisation. The idea is that you can deliver most of the benefit of personalisation whilst collecting much less data and giving users much more control. The technical quality might be marginally lower than systems trained on comprehensive behavioural data, but the privacy properties are vastly better. Users benefit from personalisation without losing agency. Companies benefit from being able to operate in regulated environments without constant friction with regulators.

One approach is federated learning, where the AI model is trained on users’ devices rather than in the cloud. Your device learns what you like. The learning happens locally. Only the learned model is sent to the company, not the raw data. This gives you personalisation without comprehensive surveillance. Another approach is differential privacy, where statistical noise is added to data to protect individual privacy whilst still allowing the company to learn patterns. A third is consent-driven personalisation, where users can see exactly what data is being used for personalisation and can adjust it.
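
Of the three, differential privacy is the easiest to show in a few lines. Here is a minimal sketch under textbook assumptions: a counting query whose sensitivity is 1, with an epsilon value chosen purely for illustration.

```python
import random


def dp_count(true_count: int, epsilon: float = 0.5) -> float:
    """Release a count with epsilon-differential privacy.

    A counting query has sensitivity 1 (one person joining or leaving
    changes the count by at most 1), so Laplace noise with scale
    1/epsilon is enough to mask any individual's contribution.
    """
    scale = 1.0 / epsilon
    # The difference of two i.i.d. exponentials with mean `scale`
    # is exactly a Laplace(0, scale) sample, using only the stdlib.
    noise = random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)
    return true_count + noise


# The company learns the aggregate ("roughly 1,200 users played jazz
# today") without being able to pin down any individual listener.
print(dp_count(1_204))
```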

These approaches aren’t perfect, but they’re dramatically more privacy-respecting than current systems. And they’re technically feasible. The main barrier isn’t technical—it’s that companies that have built business models around comprehensive data collection don’t want to move away from them. If you’re Google and you have decades of behavioural data on billions of people, moving to privacy-first personalisation means losing that advantage. But if you’re a startup, privacy-first personalisation might be a competitive advantage, not a limitation.

What Users Should Actually Demand

If you’re using systems with AI personalisation, you should be asking hard questions. What data is being collected about you? For what specific purposes? Do you have meaningful control over it? Can you access it? Can you correct it? Can you delete it? Can you actually see how the personalisation works, or is it entirely opaque? Are there people overseeing the system, or is it pure algorithm? What’s the recourse if something goes wrong?

Most systems today have terrible answers to these questions. The data collection is opaque. The purposes are vague. You agree to terms that basically say the company can use your data however they want. You have no meaningful control. You can’t see how the system works. There’s no meaningful recourse. This is the status quo, and most people have accepted it because the alternative—not using these systems—feels infeasible in a world where so much is digital.

But acceptance has costs. Every bit of behavioural data that’s collected and retained is data that could be stolen, data that could be misused, data that creates a detailed digital profile that could be exploited. You’re not just losing privacy—you’re creating vulnerability. If someone else gets access to your behavioural profile, they can manipulate you in sophisticated ways. If your profile is analysed by an algorithm you don’t control, you might be discriminated against or exploited based on inferred characteristics.

Users should demand better. Demand that companies collect only data they actually need. Demand transparency about what’s collected and how it’s used. Demand genuine control over your data. Demand the ability to delete data. Demand that personalisation serves you, not the other way around. Most companies won’t give you what you demand until regulators force them to, but the asking matters. It creates pressure. It prevents the narrative that comprehensive surveillance is normal and necessary. It opens space for companies building better alternatives.

The Coming Reckoning

I think we’re heading toward a reckoning with how much data is being collected and how it’s being used. The public is becoming more aware that personalisation comes at a cost. Regulators are tightening requirements. New technologies like differential privacy and federated learning are making privacy-respecting personalisation technically feasible. Companies are discovering that they can operate profitably without being maximally invasive.

Google’s Personal Intelligence is an impressive technical achievement. It demonstrates what’s possible when you have comprehensive behavioural data and sophisticated AI. But it also crystallises what’s troubling about current approaches: the convenience comes from surrendering comprehensive visibility into your behaviour. The question is whether that trade-off is one we should accept.

I think we shouldn’t. I think personalisation can happen in ways that don’t require surveillance. I think companies can be profitable and serve users well without extracting maximum value from behavioural data. I think users can have agency in how they’re profiled and personalised. The technological barriers to this are lower than many assume. The main barrier is that companies built around surveillance business models don’t want to change. But eventually, either they’ll change, or regulations will force them to, or they’ll lose customers to companies that respect privacy better.

The future of personalisation doesn’t have to be a future of comprehensive surveillance. It could be a future where you get personalisation that serves you, where you control what’s known about you, where companies compete on how well they serve you rather than on how much value they extract from you. Getting there requires choice: the choice to demand better from companies, the choice to support companies that respect privacy, the choice to accept slightly less convenient personalisation in exchange for maintaining agency over your own data. It’s a choice that’s increasingly available. Whether we make it depends on how much we actually value privacy.



Written by
Scott Dylan