
AI in Education: Closing Gaps or Widening Divides?


The Promise of Personalised Learning

Artificial intelligence is quietly reshaping how students learn. It’s no longer science fiction; it’s happening in classrooms across the UK right now. AI tutoring systems promise something that traditional education has struggled to deliver: genuinely personalised learning at scale. Rather than a teacher standing in front of thirty students, all moving at the same pace regardless of individual understanding, AI systems adapt in real time to each student’s pace, learning style, and knowledge gaps.

The mechanics are impressive. An AI tutoring platform tracks precisely where a student struggles—whether that’s algebraic expressions, cellular respiration, or Shakespeare’s syntax—and adjusts difficulty accordingly. When a concept isn’t sticking, it doesn’t simply repeat the same explanation; it recognises that different students learn differently and presents material through multiple modalities. Some students visualise better. Others learn through analogies or hands-on problems. AI systems can personalise not just what gets taught, but how it gets taught.

Adaptive testing represents another significant advance. Instead of blanket assessments where every student answers the same questions, AI-powered tests adjust difficulty in real time. Get a question right? The next one is harder. Struggle with a question? The system recalibrates. This means assessment becomes far more efficient—teachers get accurate pictures of what students know without students wasting time on material they’ve already mastered or hitting brick walls on impossible questions. The data generated is also invaluable for identifying where cohorts struggle collectively.
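The core loop is simpler than it sounds. Here's a minimal sketch of the difficulty-adjustment rule described above, using a basic "staircase" step (real platforms use more sophisticated methods such as item response theory, and the function names here are illustrative, not from any actual product):

```python
def run_adaptive_test(questions_by_level, answer_fn, start_level=3, num_items=10):
    """Administer num_items questions, stepping difficulty up after a
    correct answer and down after an incorrect one. This is a simplified
    staircase rule; production systems estimate ability statistically."""
    level = start_level
    max_level = max(questions_by_level)
    history = []
    for _ in range(num_items):
        question = questions_by_level[level]
        correct = answer_fn(question)
        history.append((level, correct))
        # Get it right? The next question is harder. Struggle? Recalibrate down.
        if correct:
            level = min(level + 1, max_level)
        else:
            level = max(level - 1, 1)
    return history
```

Even this toy version shows why the approach is efficient: the test quickly settles around the boundary of what a student can do, rather than spending time far above or below it.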

The Digital Divide Problem

And yet, this is where my optimism collides with reality. The same AI education tools that could transform learning for students with access become another way to entrench advantage for those without. The digital divide in UK education isn’t ancient history; it’s present, significant, and getting more complex.

Students in affluent areas attend schools with modern technology infrastructure, reliable internet connectivity, and budgets for licensing AI platforms. Meanwhile, students in deprived areas often attend schools struggling with basic broadband, ageing hardware, and teacher shortages that make implementation of new technology even more challenging. An AI tutoring system requires decent internet and device access; without that, it simply doesn’t exist for those students. We risk creating a situation where AI exacerbates existing inequalities rather than reducing them.

The problem isn’t merely about hardware. It’s about expertise. Schools with strong tech leadership and experienced staff can implement AI tools effectively. Schools with minimal tech support, staff resistant to change, and outdated infrastructure often struggle. An AI system installed badly, used inconsistently, or unsupported by teachers becomes a burden rather than a benefit. The gains accrue to the already-advantaged; the disadvantaged get left further behind.

Teacher Displacement and Workforce Anxiety

Let’s address the elephant in the room: teachers are worried about their jobs. I don’t dismiss that concern. I’ve been part of industry transformations, and I know that technological change creates real anxieties for workforces. When your profession faces disruption, that’s worth taking seriously, not dismissing as Luddism.

Some of the anxiety is overblown. AI isn’t going to replace classroom teachers in the near term. Teaching involves far more than content delivery—it requires building relationships, modelling behaviour, offering emotional support, and making countless micro-decisions about classroom dynamics. An AI system can personalise maths instruction. It cannot mentor a struggling teenager or help a young person navigate social anxiety or sense that something’s wrong at home.

But some of the anxiety is legitimate. The teachers most at risk are those who primarily deliver content—who spend their time explaining concepts that could be explained by an adaptive AI system. As AI capability increases, that risk increases too. What we need is honest conversation about this. Rather than pretending AI won’t change teaching, we should be asking: What will teachers actually do in an AI-enhanced classroom? How do we retrain them? What does a valuable teaching career look like when content delivery is partially automated?

Privacy and Student Data

Here’s where I become genuinely concerned. AI education systems generate extraordinary amounts of detailed data about student learning. Not just which questions they get wrong, but precisely where they hesitate, which explanations they revisit, how long they spend on each concept, which hints they request. Over months and years, you build a comprehensive picture of how each student thinks.
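To make concrete just how granular this can get, consider what a single logged interaction might look like. This is a hypothetical record I've sketched for illustration; the field names are my own invention, not taken from any real platform:

```python
from dataclasses import dataclass

@dataclass
class LearningEvent:
    """One interaction logged by a hypothetical tutoring platform.
    Over months, thousands of these build a detailed cognitive profile."""
    student_id: str            # pseudonymous at best, often linkable
    concept: str               # e.g. "algebraic expressions"
    seconds_on_task: float     # how long the student spent on this item
    hesitation_ms: int         # pause before the student's first input
    hints_requested: int       # scaffolding the student asked for
    explanation_revisits: int  # times the student reread the material
    correct: bool
```

Multiply a record like this by every question, every session, every school year, and the privacy stakes become obvious.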

That data is valuable. It’s valuable for educational purposes—genuinely improving learning. But it’s also valuable for other purposes: marketers trying to understand consumer behaviour patterns that begin in childhood, governments interested in psychological profiles of citizens, and bad actors of various kinds. The question isn’t whether this data could be misused; it’s whether we have adequate safeguards to prevent that misuse.

The UK’s GDPR protections apply to school data, but enforcement is inconsistent. Many AI education platforms are supplied by companies based outside the UK or EU, making enforcement of data rights complex. Teachers and parents often don’t fully understand what data is being collected, how it’s stored, who can access it, and how long it’s retained. I’ve invested in tech companies through Nexatech Ventures, and I can tell you that data minimisation and privacy-respecting design require intentional effort. They’re not defaults in many tech companies. Schools need to be far more rigorous about demanding that AI tools meet genuine privacy standards, not merely nominal compliance.

Accessibility and Inclusion Benefits

But the technology does offer genuine accessibility benefits that deserve recognition. For students with learning disabilities, AI tutoring systems can provide infinite patience without judgement. A student with dyslexia can access text-to-speech without the social stigma of asking a teacher. A student who processes information slowly can work at their own pace without anxiety about holding back the class. For neurodivergent students, personalised systems can be transformative.

I work extensively with neurodivergent individuals through my work at Nexatech Ventures, and I understand how profoundly the right tools and accommodations can change lives. AI systems can be configured to provide extra scaffolding for students who need it, adjust presentation format for visual or hearing processing differences, or allow unlimited time without creating the shame that timed assessments sometimes generate. When implemented thoughtfully, AI can level playing fields rather than tilting them further.

The Connection to Prison Education

Through my work with Inside Out Justice, I’m acutely aware of how education transforms lives in prison contexts. Many people in the criminal justice system have experienced educational failure. They left school without qualifications, possibly with undiagnosed learning disabilities or neurodivergence, carrying trauma and negative self-perceptions about their own capacity to learn. Prison education, when done well, can genuinely rehabilitate by restoring belief in the possibility of learning and competence.

AI tutoring systems could be transformative in prison education, but prisons present almost the opposite of typical deployment conditions: limited device access, bandwidth constraints, and security considerations. Yet precisely because prisoners often have acute learning needs and disrupted educational histories, personalised AI tutoring could be powerful. A prisoner rebuilding literacy skills could work at their own pace without the shame of competing with peers. An AI system could identify and accommodate learning disabilities that institutional screening missed. This is where technology could genuinely close gaps rather than widen them, if the will and resources existed to implement it thoughtfully.

The Implementation Reality in UK Schools

Across the UK right now, schools are adopting various AI education tools. Some are doing so strategically, having thought through pedagogy and implementation carefully. Others are adopting because there’s pressure to modernise or because local authority recommendations suggest it. The range of quality and thoughtfulness is enormous.

Successful implementation requires several things: adequate staff training, genuine integration with curriculum design rather than bolt-on technology, sufficient hardware and connectivity, sustained support, and thoughtful data governance. Schools excelling at these elements report genuine benefits. Students make faster progress. Teachers get better data about what’s working. Learning becomes more adaptive. But schools rushing implementation without these foundations often end up frustrated, with expensive tools underutilised and teachers reverting to traditional methods.

Nexatech Ventures’ Perspective on EdTech

Through Nexatech Ventures, I’ve looked closely at education technology companies. The space attracts genuine innovators trying to improve learning and entrepreneurs trying to build products people want to buy. The best companies in this space are asking difficult questions: How do we make this truly accessible rather than merely available? How do we respect privacy while using data to improve learning? How do we support teachers rather than deskilling them? How do we ensure this benefits the students who most need support, not just the privileged?

I’m backing companies attempting these challenges because I believe AI can genuinely improve education. But I’m deeply sceptical of companies focused solely on growth metrics, content delivery, or efficiency without wrestling with equity questions. Technology is never neutral. A system that works perfectly for privileged students might exclude disadvantaged ones. A system that harvests data without consent violates privacy regardless of its pedagogical benefits. These trade-offs matter, and they should be made intentionally, not accidentally.

Moving Toward Responsible AI in Education

What would responsible AI in education look like? First, equity as a starting principle, not an afterthought. Technology budgets should prioritise reaching disadvantaged students, not merely affluent ones who’d benefit anyway. Second, genuine teacher partnership. Teachers should shape AI implementation, not have it imposed. Third, robust privacy safeguards. Student data should be protected as precious, not treated as a revenue stream. Fourth, ongoing evaluation of whether implementation is actually improving outcomes for the intended students, not just creating metrics that look good.

There’s also the need for regulation. The current Wild West approach to education technology leaves schools vulnerable to poor products and troubling data practices. Regulatory frameworks should ensure that education companies meet genuine standards around accessibility, privacy, and educational effectiveness. And schools need support—funding for training, time for implementation, and access to expertise in evaluating which tools actually work.

Final reality check: technology is not a substitute for investment in students, teachers, and school infrastructure. The most powerful factor in educational outcomes remains teacher quality. Before we invest massively in AI systems, let’s ensure we’re paying teachers adequately, providing ongoing professional development, and giving them reasonable workloads. An excellent teacher with limited technology will outperform a mediocre teacher armed with cutting-edge AI. Let’s not use technology as an excuse to avoid the hard work of building excellent schools.

The Question We Should Be Asking

So, is AI in education closing gaps or widening divides? The honest answer is: it could do either. The outcome isn’t determined by the technology itself; it’s determined by how we implement it. Are we using it to reach students who previously had limited access to personalised support? Or are we using it to turbocharge advantage for already-privileged students? Are we protecting student privacy and building genuine consent around data use? Or are we harvesting data without ethical consideration? Are we supporting teachers to use these tools effectively? Or are we adding burdens to already stretched workforces?

The coming years will determine which version of AI in education we actually build. I’m cautiously optimistic that we can do this well—that the UK can lead in developing education AI that’s genuinely inclusive, respects privacy, empowers teachers, and improves outcomes particularly for students who’ve been left behind. But that requires intention, oversight, and willingness to say no to quick gains if they come at the cost of equity or privacy. It’s possible. Whether we actually manage it remains an open question.

Related reading: How Emotional AI Claims to Read Your Feelings — and Why It Probably Can’t; What is information communication technology ict: A concise guide to ICT basics; and Improving Diagnostic Accuracy with AI Technologies.



Written by
Scott Dylan