The Healthcare Revolution Nobody Asked For (Yet Everyone Needs)
Artificial intelligence is transforming healthcare in ways that seemed impossible just five years ago. Through NexaTech Ventures, I’ve had the privilege of investing in some of the most innovative health technology companies working on AI applications in medicine. And what I’ve observed firsthand is a genuine revolution unfolding in diagnostic imaging, drug discovery, patient monitoring, and clinical decision support. Yet this revolution comes with both extraordinary promise and genuine peril.
The promise is straightforward: AI can process medical imaging faster than human radiologists and, in a growing number of tasks, just as accurately or better. AI can predict disease progression and treatment responses. AI can discover new drug compounds at a pace impossible for human researchers. AI can monitor patients in real time, catching complications before they become critical. These aren’t theoretical possibilities anymore; they’re happening now. But alongside the promise sits a constellation of challenges that we’re not adequately addressing: data privacy, algorithmic bias, liability questions, and the fundamental question of how we maintain human judgement and patient autonomy in increasingly AI-mediated healthcare.
I want to explore both the exciting reality of AI in healthcare and the serious issues we need to solve for this technology to genuinely benefit patients rather than create new systems of harm and inequality.
Where AI Is Already Delivering in Healthcare
Let’s start with the concrete achievements, because they’re genuinely impressive. In medical imaging—radiography, CT scans, MRI—AI systems have reached radiologist-level accuracy in certain tasks and are exceeding human performance in others. A study published recently showed AI systems identifying breast cancer in mammograms with sensitivity and specificity comparable to experienced radiologists. For chest X-rays, AI systems are detecting pneumonia, tuberculosis, and other conditions at rates matching or exceeding human radiologists.
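For readers less familiar with those metrics, here’s a minimal sketch of what sensitivity and specificity actually measure for a screening model. The confusion-matrix counts below are invented for illustration, not taken from any study:

```python
# Minimal sketch: what "sensitivity" and "specificity" mean for a
# screening model. The confusion-matrix counts are invented.

true_positives = 86    # cancers the model correctly flagged
false_negatives = 14   # cancers the model missed
true_negatives = 905   # healthy scans correctly cleared
false_positives = 95   # healthy scans incorrectly flagged

# Sensitivity: of all scans that genuinely show disease,
# what fraction did the model catch?
sensitivity = true_positives / (true_positives + false_negatives)

# Specificity: of all genuinely healthy scans,
# what fraction did the model correctly clear?
specificity = true_negatives / (true_negatives + false_positives)

print(f"Sensitivity: {sensitivity:.1%}")  # Sensitivity: 86.0%
print(f"Specificity: {specificity:.1%}")  # Specificity: 90.5%
```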
What makes this significant isn’t just the accuracy—it’s what accuracy enables. In the NHS and healthcare systems globally, there’s a shortage of radiologists. Waiting times for scan interpretation can be weeks or months. An AI system that can rapidly provide an initial interpretation, flagging abnormalities for radiologist review, means faster diagnosis, earlier intervention, and better patient outcomes. It also means radiologists spend their time on complex cases and quality assurance rather than routine scans, which is more satisfying work and makes better use of their expertise.
In drug discovery, AI is compressing a process that has historically taken a decade and billions of pounds into something significantly faster and cheaper. AI systems can screen millions of molecular compounds, predicting which are likely to have therapeutic effects and which are likely to have toxic side effects. Patient-facing tools are advancing in parallel: when OpenAI launched ChatGPT Health in January 2026, it represented a new frontier in accessible medical information and decision support. These tools can help patients understand their conditions, suggest when medical consultation is needed, and provide ongoing support for chronic disease management.
In patient monitoring, AI systems are enabling continuous tracking without constant hospitalisation. Remote monitoring with AI analysis can detect early warning signs of deterioration in heart failure, COPD, diabetes, and numerous other conditions. Patients can stay at home whilst maintaining safety through automated alerts when intervention is needed. This is simultaneously better for patients and more cost-effective for healthcare systems.
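To make the idea concrete, here’s a deliberately simplified sketch of the kind of alerting rule a remote-monitoring pipeline might apply. Real systems use learned models over many signals; the thresholds and readings below are invented:

```python
from statistics import mean

def check_vitals(heart_rates, spo2_readings):
    """Toy early-warning check over a day of home readings.

    Real deterioration models combine many signals with learned
    weights; these fixed thresholds are purely illustrative.
    """
    alerts = []
    if mean(heart_rates) > 100:
        alerts.append("sustained tachycardia: average heart rate above 100 bpm")
    if min(spo2_readings) < 92:
        alerts.append("oxygen saturation dipped below 92%")
    return alerts

# Invented readings for one hypothetical patient.
heart_rates = [88, 95, 104, 110, 107, 112]
spo2_readings = [96, 95, 93, 91, 94, 95]

for alert in check_vitals(heart_rates, spo2_readings):
    print("ALERT:", alert)
```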
The Data Challenge: Privacy in an Increasingly Digitised System
Here’s where I need to be direct: using AI effectively in healthcare requires massive amounts of medical data. To train an AI system to recognise patterns in imaging or to predict treatment outcomes, you need thousands or tens of thousands of examples. That data is incredibly sensitive. It contains genetic information, mental health records, medication histories, and deeply personal details about people’s bodies and health.
The challenge is this: the systems that will most benefit from AI (because they have the most data) are precisely the systems where privacy concerns are most acute. The NHS holds decades of medical records on millions of people. That’s exactly the kind of dataset that could train extraordinarily effective AI systems. It’s also exactly the kind of dataset that, if breached or misused, could cause profound harm to millions of people.
We’ve already seen examples of the risks. When healthcare organisations share data for AI training without adequate anonymisation, it’s often possible to re-identify individuals. When data is retained by commercial AI companies, privacy risks extend far beyond the original healthcare provider. When AI systems are trained on biased datasets—datasets that systematically underrepresent certain populations—the resulting AI perpetuates and amplifies that bias.
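The re-identification risk is worth making concrete. A classic linkage attack joins a supposedly anonymised dataset to a public one on quasi-identifiers such as postcode, birth date, and sex. The sketch below uses invented records, but the mechanism really is this simple:

```python
# Toy linkage attack: join "anonymised" health records to a public
# register on quasi-identifiers. Every record here is invented.

health_records = [  # names removed, quasi-identifiers retained
    {"postcode": "M1 4BT", "birth_date": "1971-03-09", "sex": "F",
     "diagnosis": "type 2 diabetes"},
    {"postcode": "M1 4BT", "birth_date": "1984-11-30", "sex": "M",
     "diagnosis": "depression"},
]

public_register = [  # e.g. an electoral roll or marketing list
    {"name": "Jane Doe", "postcode": "M1 4BT",
     "birth_date": "1971-03-09", "sex": "F"},
]

QUASI_IDENTIFIERS = ("postcode", "birth_date", "sex")

for person in public_register:
    for record in health_records:
        if all(record[k] == person[k] for k in QUASI_IDENTIFIERS):
            print(f"{person['name']} re-identified: {record['diagnosis']}")
```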
NHS England’s exploration of AI applications is happening against this backdrop of genuine privacy concerns. How do we enable beneficial AI development whilst maintaining data privacy? How do we ensure that data shared for AI training isn’t used for purposes individuals haven’t consented to? How do we maintain control over our own medical data in an era of increasing digitalisation? These are difficult questions without easy answers, but they’re fundamental to getting AI in healthcare right.
Algorithmic Bias: When AI Replicates and Amplifies Human Prejudice
There’s a seductive idea in technology circles that AI is objective—that by removing human judgement, we remove bias. This is profoundly false. AI systems are trained on data created and labelled by humans, reflecting human decisions, human prejudices, and human patterns of discrimination. An AI trained on imaging data predominantly from white patients will be less accurate at diagnosing disease in non-white patients. An AI trained on treatment records that reflect historical discrimination in who received which treatments will perpetuate those patterns.
This isn’t theoretical. Research has documented bias in commercial AI health systems. One widely publicised case, documented in a 2019 study in Science, involved an algorithm used to allocate healthcare resources that was systematically biased against Black patients because it used healthcare cost as a proxy for health needs, not recognising that Black patients face systemic barriers to healthcare access and thus have lower costs despite greater health needs. The algorithm was making worse resource allocation decisions than explicit human judgement would have made.
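A small simulation makes the mechanism clear: give two patients identical needs but different historical costs (because one faces access barriers), and an allocator ranked on the cost proxy deprioritises exactly the patient it should prioritise. The figures are invented:

```python
# Toy illustration of proxy bias: ranking patients by predicted
# cost instead of actual need. All figures are invented.

patients = [
    # A and B have identical underlying need, but B faces access
    # barriers, so B's historical spend is lower.
    {"id": "A", "true_need": 8, "historical_cost": 12_000},
    {"id": "B", "true_need": 8, "historical_cost": 6_000},
    {"id": "C", "true_need": 4, "historical_cost": 9_000},
]

# An allocator trained on cost ranks by the proxy...
by_cost = sorted(patients, key=lambda p: p["historical_cost"], reverse=True)
print("Ranked by cost proxy:", [p["id"] for p in by_cost])  # ['A', 'C', 'B']

# ...while ranking by actual need tells a different story: B, the
# high-need patient with suppressed costs, belongs ahead of C.
by_need = sorted(patients, key=lambda p: p["true_need"], reverse=True)
print("Ranked by true need: ", [p["id"] for p in by_need])  # ['A', 'B', 'C']
```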
The challenge in healthcare is that the stakes are high. An AI system with biased accuracy isn’t just making imperfect decisions; it’s potentially harming the patients most underrepresented in its training data. A diagnostic AI that’s less accurate for certain populations means those populations get delayed diagnoses. A treatment-recommendation AI that’s biased in favour of certain types of intervention perpetuates healthcare inequality.
The World Health Organization recognised these concerns by publishing AI ethics guidelines for health earlier this year. These guidelines emphasise the importance of diverse training data, transparency about limitations, ongoing monitoring for bias, and human oversight of AI-driven clinical decisions. They’re important, but guidelines are only effective if they’re actually implemented. That requires investment, commitment, and honest acknowledgment of bias rather than technological solutionism.
The Question of Liability and Responsibility
Here’s a question that keeps healthcare lawyers awake: if an AI system makes a diagnosis recommendation and a clinician follows that recommendation leading to patient harm, who’s responsible? The company that built the AI? The healthcare provider that implemented it? The clinician who acted on it? The patient who should have sought a second opinion?
This isn’t academic. As AI becomes more integrated into clinical decision-making, these questions become urgent and practically important. Clinicians need to know whether they’re expected to defer to AI recommendations or to treat them as decision support requiring independent judgement. Healthcare organisations need to know what their liability exposure is. Patients need to know whether their treatment was decided by a clinician’s judgement or delegated to an algorithm.
The legal framework for this is still developing. There’s emerging consensus that AI should be treated as decision support rather than decision replacement—AI provides recommendations, but clinicians retain responsibility for clinical decisions. But that consensus isn’t universal, and it’s not clearly reflected in liability law or professional guidance. Some companies building health AI systems seem to be operating under the assumption that they can disclaim liability by calling their product “decision support.” That’s potentially problematic both legally and ethically.
What we need is clarity. We need legal frameworks that define when AI recommendations can be followed without independent verification and when they require clinician validation. We need professional standards that make clear the responsibility relationship between AI systems and clinicians. We need transparency so patients understand when AI is involved in their care and what it means. Without this, we risk implementing AI in ways that shift liability without shifting actual responsibility, creating gaps where patient harm isn’t adequately addressed.
The Patient Autonomy Problem
One of the aspects of AI in healthcare that concerns me most is the potential erosion of patient autonomy. When treatment decisions are mediated through AI systems, patients may lose meaningful input into their own care. This happens subtly. A clinician armed with an AI recommendation might not explore alternative options as thoroughly. A patient might feel pressured to accept a treatment an AI has recommended. The authority of technology combined with the vulnerability of being unwell creates conditions where patient agency can be diminished.
This is particularly concerning in contexts where patients are already marginalised. A patient from a community underrepresented in AI training data faces an AI system that’s less accurate for their population. If that patient doesn’t understand how the AI works (and most don’t), and if they’re not supported in questioning recommendations (and many aren’t), they may accept suboptimal care.
There’s also the question of informed consent. Can informed consent truly exist when the decision-making process is partly opaque to the patient? If you’re told “this AI recommends treatment X,” without understanding how that recommendation was derived, whether it was trained on people like you, and what the alternative options are, is that consent or is that deference to authority?
I’m not arguing against AI in healthcare. I’m arguing that we need to design AI implementation with explicit attention to patient autonomy. This means transparency about when AI is involved in decisions. It means ensuring patients understand recommendations and alternatives. It means maintaining space for patient judgement about their own bodies and their own values. It means particular attention to ensuring that marginalised patients retain autonomy even when facing an AI system that’s less accurate for their population.
What Good AI Implementation Actually Looks Like
Through NexaTech Ventures, I work with health technology companies trying to get this right. What I see in the best implementations is serious attention to the challenges I’ve outlined. They’re not treating AI as a replacement for clinical judgement but as augmentation of it. They’re building diverse training datasets explicitly to reduce bias. They’re conducting ongoing evaluation to detect bias as their systems are deployed. They’re being transparent about limitations and uncertainty.
Good AI implementation in healthcare involves several key elements. First, transparency about how the AI works, what data it was trained on, what its accuracy is, and what its limitations are. This allows clinicians and patients to use the AI appropriately—not treating it as infallible, but as a valuable tool with understood boundaries.
Second, ongoing monitoring for bias and performance variation across populations. An AI system might perform well overall but poorly for certain groups. That needs to be detected and corrected. This requires diverse implementation teams, external auditing, and willingness to acknowledge and address problems when they’re found.
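As a sketch of what that monitoring might look like in practice (the field names, subgroup key, and alert threshold below are my illustrative assumptions, not any particular product’s schema):

```python
from collections import defaultdict

def sensitivity_by_group(cases, group_key="ethnicity"):
    """Per-subgroup sensitivity from logged predictions.

    Each case is a dict holding a subgroup label, the confirmed
    outcome, and the model's prediction. The field names are
    assumptions, not any particular system's schema.
    """
    counts = defaultdict(lambda: {"tp": 0, "fn": 0})
    for case in cases:
        if case["has_disease"]:  # only true disease cases count for sensitivity
            bucket = counts[case[group_key]]
            if case["predicted_disease"]:
                bucket["tp"] += 1
            else:
                bucket["fn"] += 1
    return {g: c["tp"] / (c["tp"] + c["fn"]) for g, c in counts.items()}

def flag_disparities(per_group, max_gap=0.05):
    """Flag subgroups trailing the best-served group by more than
    max_gap in sensitivity. The 5-point threshold is illustrative."""
    best = max(per_group.values())
    return [group for group, s in per_group.items() if best - s > max_gap]

# Usage: per_group = sensitivity_by_group(logged_cases)
#        print(flag_disparities(per_group))
```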
Third, maintaining human judgement as the final decision point. AI makes recommendations; humans make decisions. This preserves responsibility, maintains patient autonomy, and ensures that recommendations are filtered through clinical judgement and patient values.
Fourth, explicit engagement with the ethical implications of AI use. Health technology companies should be thinking deeply about privacy, about bias, about patient autonomy, about liability. Not because regulators force them to, but because good implementation actually requires this thinking.
Finally, investment in training and support for clinicians using AI tools. An AI system that’s poorly understood or inappropriately deployed creates more problems than it solves. Clinicians need training on what the AI is actually doing, how to interpret its recommendations, when to rely on them and when to question them.
The Investment Perspective: Funding Innovation Responsibly
As someone investing in healthcare innovation through NexaTech Ventures, I’m genuinely excited about AI’s potential. The companies working on AI diagnostics, on drug discovery acceleration, on patient monitoring—they’re working on problems that matter. They’re positioned to deliver value both economically and in terms of improved patient outcomes. But investment comes with responsibility.
When we invest in health technology companies, we’re not just investing in financial returns. We’re investing in systems that will affect people’s health. That means we have responsibility to push companies toward the kind of responsible AI implementation I’ve described. It means asking hard questions about bias, about data privacy, about patient impact. It means being willing to step back from investment opportunities where companies are building AI systems in ways that ignore these challenges.
It also means funding the infrastructure that responsible AI requires. Research on bias detection and mitigation. Development of privacy-preserving data-sharing techniques. Clinical trial design that properly evaluates AI systems. Education and training for clinicians. These aren’t the sexiest uses of investment capital, but they’re essential if AI in healthcare is going to reach its genuine potential without creating new systems of harm.
I’d argue that this is an area where public funding should play a significant role. The NHS, universities, research councils—these institutions should be investing heavily in understanding how to implement AI responsibly in healthcare. That public investment protects patients and creates conditions where commercial innovation can happen more safely.
The Path Forward: Building Trust in AI Healthcare
AI in healthcare is coming regardless of whether we’re ready for it. The question is whether we’ll implement it in ways that patients trust, that increase rather than decrease health equity, and that preserve human judgement and autonomy. That requires deliberate choices about how we develop, deploy, and govern these systems.
Clinicians need clear professional guidance on how to use AI responsibly. Patients need to understand when AI is involved in their care and what it means. Regulators need frameworks that ensure safety without stifling innovation. Companies need to be incentivised to do the hard work of building systems that are accurate across diverse populations. Researchers need to continue studying how AI implementations actually affect health outcomes and health equity.
I’m optimistic that this can happen. The healthcare sector is generally thoughtful about patient welfare. Many people working in health technology are genuinely committed to improving outcomes. There’s growing recognition of the risks we need to manage. But optimism isn’t enough; we need action.
We need researchers conducting rigorous studies of bias in clinical AI systems. We need healthcare organisations piloting AI implementations thoughtfully, with explicit attention to equity and patient autonomy. We need privacy advocates and patient advocates at the table when AI systems are being designed for healthcare. We need investment in privacy-preserving techniques such as federated learning, which allows models to be trained without centralising sensitive data, and differential privacy, which limits what a trained model can reveal about any individual. We need transparency standards and auditing processes that make clear how AI systems are performing.
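To give a flavour of what those techniques look like, here’s a minimal sketch of federated averaging: each hospital computes a model update on data that never leaves its premises, and only the (optionally noised) updates are shared. The local update function and noise scale are placeholders, not a calibrated differential-privacy mechanism:

```python
import random

def local_update(weights, local_records):
    """Placeholder for one hospital's local training step.

    In a real system this would be gradient descent over the
    hospital's own records; here it just nudges the weights.
    """
    return [w + random.uniform(-0.1, 0.1) for w in weights]

def federated_round(global_weights, hospitals, noise_scale=0.0):
    """One round of federated averaging.

    Each site computes an update on data that never leaves its
    premises; only the (optionally noised) updates are shared.
    The Gaussian noise gestures at differential privacy but is
    not a calibrated DP mechanism.
    """
    updates = []
    for local_records in hospitals:
        update = local_update(global_weights, local_records)
        updates.append([w + random.gauss(0.0, noise_scale) for w in update])
    # The server averages updates; raw patient data is never pooled.
    return [sum(ws) / len(ws) for ws in zip(*updates)]

weights = [0.0, 0.0, 0.0]
hospitals = [["site A records"], ["site B records"], ["site C records"]]
for _ in range(3):
    weights = federated_round(weights, hospitals, noise_scale=0.01)
print(weights)
```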
The Broader Vision
The future of healthcare doesn’t have to be AI replacing human judgement. It can be AI augmenting human judgement, freeing clinicians from routine tasks to focus on complex cases and patient relationships. It can be AI expanding access to healthcare expertise in areas where expertise is scarce. It can be AI helping patients understand their conditions and make informed decisions about their care.
For this vision to be realised, we need to take seriously the challenges I’ve outlined: data privacy, algorithmic bias, the question of responsibility, the preservation of patient autonomy. We need diverse teams building these systems. We need honest acknowledgment of limitations. We need ongoing evaluation and adjustment based on real-world outcomes.
The technology is real. The potential is genuine. The risks are significant. None of this is inevitable—the outcomes depend on the choices we make now about how to develop and deploy AI in healthcare. I’m committed to pushing for responsible innovation both as an investor and as someone who believes in getting technology right. The tools we build to care for people’s health deserve that commitment.
Scott Dylan is a Dublin-based British entrepreneur, investor, and mental health advocate. He is the Founder of NexaTech Ventures, a venture capital firm with a £100 million fund supporting AI and technology startups across Europe and beyond. With over two decades of experience in business growth, turnaround, and digital innovation, Scott has helped transform and invest in companies spanning technology, retail, logistics, and creative industries.
Beyond business, Scott is a passionate campaigner for mental health awareness and prison reform, drawing from personal experience to advocate for compassion, fairness, and systemic change. His writing explores entrepreneurship, AI, leadership, and the human stories behind success and recovery.