AI in Financial Services: Opportunity, Risk, and the Regulatory Tightrope

The Promise and Peril of AI in Finance

Artificial intelligence has fundamentally transformed how financial institutions operate, from the algorithms determining credit approval within milliseconds to the machine learning models predicting market movements with increasing sophistication. As someone deeply embedded in venture capital, particularly through Nexatech Ventures’ focus on AI-driven innovation, I’ve observed firsthand both the extraordinary potential and the genuine risks embedded within these systems. Finance was one of the first sectors to embrace computational decision-making, yet it remains one of the most contentious, precisely because the stakes are so personal and material.

The appeal is obvious. AI can process vastly more data than humans, identify patterns invisible to traditional analysis, and operate at scale and speed impossible for manual review. A bank using machine learning for credit decisions can assess applications in real time rather than over days. Insurance companies can price policies with greater accuracy by incorporating hundreds of data points rather than relying on crude demographic categories. Trading algorithms can respond to market movements faster than human traders ever could. These capabilities translate into competitive advantage and often into cost reductions that flow through to consumers.

Yet this promise comes packaged with risks that we’re still learning to identify and manage. AI systems in finance make decisions affecting people’s lives—whether they can access credit, what they’ll pay for insurance, whether their mortgage application succeeds, whether their investment accounts are flagged for fraud investigation. These decisions shape financial security, opportunity, and life trajectories. When algorithmic systems make errors or encode bias, the impact is scaled across thousands or millions of people, often without individual recourse or explanation.

How AI Is Currently Used in Banking

Modern banks are essentially technology companies that also happen to manage money. Lending decisions—once made by human underwriters who might know borrowers personally—are increasingly automated through machine learning models. These systems analyse credit history, income verification, employment records and, increasingly, alternative data points such as payment patterns on mobile services or social media purchasing behaviour. Major UK banks, including Lloyds and Barclays, have invested substantially in AI-driven underwriting systems that now process the majority of consumer lending decisions.
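
To make the mechanics concrete, here is a minimal sketch of what an automated underwriting decision can look like, using scikit-learn. The feature names, synthetic data, and approval threshold are illustrative assumptions, not any bank's actual model.

```python
# Minimal sketch of an automated credit decision, assuming scikit-learn.
# Feature names, data, and the approval threshold are illustrative only.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(42)

# Hypothetical applicant features: [credit_history_score, income, years_employed]
X_train = rng.normal(size=(1000, 3))
# Hypothetical labels: 1 = repaid, 0 = defaulted
y_train = (X_train.sum(axis=1) + rng.normal(scale=0.5, size=1000) > 0).astype(int)

model = GradientBoostingClassifier().fit(X_train, y_train)

def decide(applicant: np.ndarray, threshold: float = 0.7) -> str:
    """Approve if the predicted repayment probability clears the threshold."""
    p_repay = model.predict_proba(applicant.reshape(1, -1))[0, 1]
    return "approve" if p_repay >= threshold else "refer to human review"

print(decide(np.array([0.8, 1.2, 0.3])))
```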

Customer service has been revolutionised by conversational AI, with chatbots handling initial inquiries and routing complex cases to human agents. These systems have improved response times and reduced operational costs substantially, though they’ve also frustrated customers with experiences of being trapped in automated systems unable to handle variation from standard queries. Fraud detection relies almost entirely on AI now, with real-time anomaly detection monitoring transaction patterns and flagging unusual activity—a genuinely valuable application that protects both institutions and customers from criminal activity.
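
Real-time anomaly detection of this kind is often built on unsupervised outlier models. Below is a hedged sketch using scikit-learn's Isolation Forest; the transaction features and contamination rate are assumptions for illustration, not any institution's actual detector.

```python
# Sketch of real-time transaction anomaly flagging with an Isolation Forest.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Historical transactions: [amount_gbp, hour_of_day, distance_from_home_km]
history = np.column_stack([
    rng.lognormal(3, 1, 5000),    # typical spending amounts
    rng.integers(8, 22, 5000),    # mostly daytime activity
    rng.exponential(5, 5000),     # mostly local spending
])

detector = IsolationForest(contamination=0.01, random_state=0).fit(history)

# A £4,500 transaction at 3am, 900km from home, should score as anomalous.
incoming = np.array([[4500.0, 3, 900.0]])
if detector.predict(incoming)[0] == -1:   # -1 marks an outlier
    print("flag transaction for review")
```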

Cash flow forecasting, portfolio management, and risk assessment all employ machine learning. Some of the largest banks operate proprietary AI systems that manage billions in assets, making decisions about where to invest, how to balance portfolios, and when to rebalance in response to market movements. Open banking regulations have enabled fintech companies to access banking data through APIs, allowing AI-driven apps to provide personalised financial advice and spending insights to consumers. The banking sector has become thoroughly permeated by AI.
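
As a small illustration of the rebalancing logic mentioned above, here is a toy threshold-based rebalancer. The target weights and tolerance band are invented for the example; production systems would also model transaction costs, tax, and market impact.

```python
# Toy sketch of threshold-based portfolio rebalancing: trade only when an
# asset's weight drifts beyond a band around its target weight.
targets = {"equities": 0.60, "bonds": 0.30, "cash": 0.10}  # assumed targets
band = 0.05  # rebalance when drift exceeds 5 percentage points

def rebalance_orders(holdings: dict[str, float]) -> dict[str, float]:
    total = sum(holdings.values())
    orders = {}
    for asset, target in targets.items():
        drift = holdings[asset] / total - target
        if abs(drift) > band:
            # Negative order = sell down to target; positive = buy up to it.
            orders[asset] = -drift * total
    return orders

print(rebalance_orders({"equities": 70_000, "bonds": 22_000, "cash": 8_000}))
# -> sell £10,000 of equities, buy £8,000 of bonds; cash is within the band
```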

The Insurance Sector’s AI Transformation

Insurance, perhaps more than any other financial sector, has been transformed by AI. Insurers have always been in the business of prediction—predicting risk, pricing accordingly, and profiting from being more accurate than competitors. Machine learning performs this task with astonishing sophistication, incorporating hundreds of variables that humans couldn’t coherently analyse simultaneously. A car insurance quotation that once required a lengthy form now involves AI analysing your location, driving patterns from telematics data, previous claims history, and countless other factors to generate personalised pricing.

Claims processing has been automated to an extraordinary degree. AI systems now assess whether claims are fraudulent, determine liability, and estimate compensation, all without human intervention for routine cases. Some insurers use computer vision AI to analyse photographs of damage from accidents, generating damage assessments more quickly and sometimes more accurately than human adjusters. This has improved claims processing speed, but it’s also created situations where customers are denied claims by algorithmic systems without meaningful opportunity to contest the decision.

Life insurance underwriting has become increasingly data-driven, with AI analysing medical records, health data from wearables, and genetic information where permitted. Some insurers have used AI to analyse facial expressions in video applications, claiming to detect personality traits and risk propensity. These applications represent the frontier of AI in insurance, pushing into territory where the scientific basis for the predictions is questionable and the ethical implications are profound.

Algorithmic Trading and Market Impact

Quantitative trading, powered by machine learning algorithms, now represents a massive share of activity in financial markets. Algorithms can trade across multiple markets simultaneously, identifying arbitrage opportunities, responding to market movements, and executing trades at speeds measured in milliseconds. The 2010 Flash Crash, in which the Dow Jones Industrial Average fell nearly 1,000 points in minutes before largely recovering, demonstrated both the scale of algorithmic trading and the risks it poses when systems interact in unexpected ways or when feedback loops amplify volatility.

The concentration of trading in algorithmic systems creates systemic risk. When market conditions change rapidly, many algorithms respond in similar ways—selling when prices fall, for example—which can amplify volatility and create feedback loops that destabilise markets. Regulators have implemented circuit breakers and trading halts to prevent runaway scenarios, but the underlying issue remains: we’ve built financial markets where much of the decision-making happens at scales and speeds where human understanding breaks down.
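
A circuit breaker is conceptually simple, whatever the engineering behind it. The toy check below uses the 7% fall that triggers the first stage of US market-wide halts, purely as an illustration; actual rules vary by venue and are staged.

```python
# Toy illustration of an exchange-style circuit breaker: halt trading when
# the index falls more than a set percentage from the session reference price.
def circuit_breaker_tripped(reference: float, latest: float,
                            halt_threshold: float = 0.07) -> bool:
    """Return True if the fall from the reference price triggers a halt."""
    drawdown = (reference - latest) / reference
    return drawdown >= halt_threshold

print(circuit_breaker_tripped(reference=7500.0, latest=6900.0))  # True: 8% fall
```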

For investors, algorithmic trading changes market dynamics in subtle ways. Traditional value investing relies on identifying stocks trading below their fundamental value, but when algorithms can identify the same opportunities microseconds faster than human traders, those opportunities disappear at speed. This may favour quantitative approaches but disadvantages the patient, research-based investing that was historically rewarded in markets. The shift has concentrated wealth towards those with resources to build the most sophisticated algorithms.

The FCA’s Approach to AI Regulation

The Financial Conduct Authority has taken a cautiously progressive approach to AI regulation, recognising both the innovation opportunity and the risks. Their approach centres on the principle that existing regulations apply to AI—consumer protection law, fair dealing requirements, transparency obligations—but they’ve acknowledged that applying these to opaque algorithmic systems requires new thinking. The FCA’s discussion paper on machine learning in financial services established that firms remain responsible for the outcomes of their AI systems, even when those systems operate with limited human oversight.

Importantly, the FCA has indicated that they expect firms using AI to be able to explain their systems’ decisions—the ‘explainability’ requirement. If an AI system denies someone credit, the bank should theoretically be able to explain why in terms the customer can understand. In practice, this requirement creates significant challenges for deep learning models, which can operate as ‘black boxes’ where the path from inputs to decisions is mathematically complex and humanly inexplicable. The FCA has been pragmatic about this, accepting that perfect explainability isn’t always possible whilst insisting that firms understand their systems and can communicate material reasons for decisions.

The FCA has also emphasised the importance of testing before deployment and of ongoing monitoring of AI systems once they’re in operation. They’ve warned firms about the risks of model drift—where algorithmic systems’ performance degrades over time as real-world conditions change—and of feedback loops where biased systems create distorted data that further biases subsequent versions of the system. Their regulation is evolving rapidly, reflecting both the pace of technological change and their learning about what problems actually emerge in practice.
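
One common way firms monitor for the model drift described above is the Population Stability Index (PSI), which compares the score distribution seen at training time with the live distribution. A minimal sketch follows, with synthetic data and the usual rule-of-thumb alert threshold; both are assumptions for illustration.

```python
# Sketch of drift monitoring with the Population Stability Index (PSI).
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """PSI = sum((a% - e%) * ln(a% / e%)) over shared score buckets."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Avoid division by zero in sparse buckets.
    e_pct, a_pct = np.clip(e_pct, 1e-6, None), np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(1)
train_scores = rng.beta(2, 5, 10_000)   # score distribution at model launch
live_scores = rng.beta(2.5, 4, 10_000)  # live population has shifted

value = psi(train_scores, live_scores)
# Common rule of thumb: PSI > 0.25 signals material drift worth investigating.
print(f"PSI = {value:.3f}", "- drift!" if value > 0.25 else "- stable")
```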

Algorithmic Bias in Credit Decisions

The most persistent issue in AI for financial services is algorithmic bias in credit decisions. This is where my concerns as both a technologist and an advocate for equal opportunity intersect most sharply. A credit algorithm is trained on historical lending data, incorporating applications approved or denied by human underwriters over years. If those human underwriters exhibited bias—consciously or unconsciously—that bias becomes encoded in the training data. When the machine learning model learns to replicate the patterns in that data, it learns the bias alongside the legitimate predictive patterns.

The bias can be direct or indirect. Direct bias would involve explicitly using protected characteristics like race or gender in the algorithm—which is illegal. But indirect bias is far more subtle and widespread. An algorithm might use postal code as a predictor, incorporating the fact that certain postcodes historically had lower repayment rates. But those postcodes are correlated with ethnicity and deprivation, so the algorithm is effectively using a proxy for protected characteristics. Similarly, an algorithm might consider employment sector—but women and men are distributed differently across sectors, so this seemingly neutral variable encodes gender bias.
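
The proxy effect is easy to demonstrate with synthetic data. In the sketch below the model never sees the protected attribute, yet approval rates diverge sharply because a postcode-level deprivation feature correlated with group membership leaks the information anyway. Every number here is invented for illustration.

```python
# Sketch of indirect (proxy) bias: the protected attribute is never an input,
# but a correlated postcode-level feature reintroduces it. Synthetic data only.
import numpy as np

rng = np.random.default_rng(7)
n = 10_000

group = rng.integers(0, 2, n)  # protected attribute, hidden from the model
# Deprivation index by postcode, correlated with group membership:
postcode_deprivation = rng.normal(loc=group * 0.8, scale=1.0)
income = rng.normal(loc=1.0, scale=1.0, size=n)  # legitimate predictor

# "Blind" score uses only postcode and income -- never `group` itself.
score = 0.9 * income - 0.9 * postcode_deprivation
approved = score > np.quantile(score, 0.5)

for g in (0, 1):
    print(f"group {g}: approval rate {approved[group == g].mean():.1%}")
# Approval rates diverge even though `group` was never a model input.
```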

The harm is that these biased algorithms are often more accurate in their predictions than unbiased alternatives, because they’re successfully replicating bias that was present in the historical data. A perfectly calibrated algorithm that denies credit to women at slightly higher rates than men might be making statistically accurate predictions—perhaps because of real differences in default rates attributable to historical discrimination and its lingering effects. But this ‘accuracy’ comes at the cost of perpetuating discrimination and denying equal access to credit.

Case Studies of Algorithmic Failure

Perhaps the most famous example of algorithmic bias relevant to finance is Amazon’s hiring algorithm, which, whilst not directly financial, demonstrates the principle. Amazon built an AI system to screen job applications, trained on data from its existing workforce, where engineering roles were predominantly male. The algorithm learned to penalise applications from women because the training data showed that men were more likely to succeed in those roles. The system was penalising women for not being men—a perfect demonstration of how historical discrimination becomes automated.

In lending specifically, multiple studies have found that algorithmic systems produce different approval rates across racial groups even when controlling for legitimate variables like credit score and income. Studies of US mortgage data have found algorithmic approval recommendations that appeared to disadvantage Black borrowers. Apple Card, despite claims of fairness, made headlines when a woman reported receiving a far lower credit limit than her husband despite having a higher credit score, an outcome attributed to algorithmic decisions that favoured certain patterns of creditworthiness.

In insurance, concerns have emerged about algorithmic pricing that results in higher costs for certain groups. The use of alternatives to traditional data—social media behaviour, purchasing patterns, even gaming habits—creates risk that algorithms identify correlations that happen to track protected characteristics. An insurance algorithm trained on data where certain neighbourhoods have higher claim rates might identify factors that correlate with those neighbourhoods as risk predictors, effectively discriminating based on geography, which is correlated with race.

The Technical Challenge of Fair AI

Creating genuinely fair algorithms is technically complex and sometimes philosophically intractable. There are multiple definitions of fairness, and they’re often in tension. ‘Statistical parity’ means equal approval rates across groups. ‘Equalised odds’ means equal true positive and false positive rates. ‘Calibration’ means that predicted probabilities match actual outcomes for each group. These different definitions of fairness can’t all be satisfied simultaneously if there are real differences in outcomes across groups. Choosing which definition to optimise for is itself a value judgement, not a technical one.
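
These definitions become concrete when computed side by side. The sketch below reports, for each group, approval rates (statistical parity), true and false positive rates (equalised odds), and repayment rates among those approved (a calibration proxy), on synthetic data where base rates differ between groups, which is exactly the condition under which the three cannot all be equalised.

```python
# Sketch: computing the three fairness definitions named above for two groups.
# Predictions, outcomes, and group labels are all synthetic.
import numpy as np

def fairness_report(y_true, y_pred, group):
    for g in np.unique(group):
        m = group == g
        approval = y_pred[m].mean()                   # statistical parity
        tpr = y_pred[m & (y_true == 1)].mean()        # equalised odds: TPR
        fpr = y_pred[m & (y_true == 0)].mean()        # equalised odds: FPR
        calib = y_true[m & (y_pred == 1)].mean()      # calibration proxy
        print(f"group {g}: approval={approval:.2f} TPR={tpr:.2f} "
              f"FPR={fpr:.2f} P(repay|approved)={calib:.2f}")

rng = np.random.default_rng(3)
group = rng.integers(0, 2, 5000)
y_true = rng.binomial(1, 0.6 + 0.1 * group)   # base rates differ by group
y_pred = rng.binomial(1, 0.5 + 0.2 * y_true)  # noisy predictor

fairness_report(y_true, y_pred, group)
# With differing base rates, parity, equalised odds, and calibration cannot
# all hold at once -- the impossibility described above.
```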

Additionally, financial institutions are reluctant to acknowledge bias in their systems or to implement fairness interventions that might reduce accuracy or profitability. Rejecting a seemingly good credit risk to satisfy a fairness constraint might increase the actual default rate relative to an unfair algorithm. From the institution’s perspective, fairness is expensive. This creates a regulatory challenge: we need firms to care about fairness, but their economic incentives push in the opposite direction.

There’s also the challenge of discovering bias when you don’t know what to look for. An algorithm might disadvantage a group in ways that aren’t obvious from surface statistics. It might make different errors for different groups—high false positive rates for one group, high false negatives for another—that don’t show up in overall accuracy metrics. Regular auditing is essential, but many firms don’t conduct thorough bias testing, and regulatory requirements for testing are still emerging.

Transparency and Explainability Barriers

When someone is denied credit, they have a legal right to know why. UK law, including the Consumer Credit Act and the UK GDPR’s provisions on automated decision-making, requires creditors to tell customers the principal factors behind the decision. Yet modern machine learning systems often can’t provide this information in a meaningful way. A deep neural network might have dozens of layers, millions of parameters, and decision processes that no human has ever fully traced through. Asking why the network made a particular decision is sometimes genuinely unanswerable—not because firms are being secretive, but because the mathematical structures don’t admit human-level explanation.

The FCA’s requirement for explainability is therefore creating genuine tension between regulatory demands and technical reality. Some firms have responded by using simpler, less accurate models that remain explainable—potentially harming their competitiveness but gaining regulatory compliance. Others have implemented ‘explainability layers’—post-hoc explanations added after the algorithm makes a decision—but these risk being inaccurate or misleading. The tension is real and unresolved.
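
A common form of explainability layer is a global surrogate: fit a simple, interpretable model to mimic the black box's outputs, then read explanations off the surrogate. The sketch below shows the idea and why it can mislead, since the surrogate only approximates the model it claims to explain. Feature names and data are illustrative assumptions.

```python
# Sketch of a post-hoc 'explainability layer' via a global surrogate model.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
feature_names = ["credit_history", "income", "debt_ratio"]  # illustrative
X = rng.normal(size=(2000, 3))
y = ((X[:, 0] + 0.5 * X[:, 1] - X[:, 2]) > 0).astype(int)

black_box = RandomForestClassifier(random_state=0).fit(X, y)

# Train the surrogate on the black box's *predictions*, not the true labels.
surrogate = LogisticRegression().fit(X, black_box.predict(X))

for name, coef in zip(feature_names, surrogate.coef_[0]):
    print(f"{name}: weight {coef:+.2f}")

# Fidelity check: how often does the surrogate agree with the black box?
agreement = (surrogate.predict(X) == black_box.predict(X)).mean()
print(f"surrogate agrees with black box on {agreement:.1%} of cases")
```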

There’s also a consumer-side transparency problem. Most people using financial services don’t understand that their treatment is determined by algorithms, and even fewer understand the statistical basis of those algorithms. A customer might assume their credit denial reflects their individual circumstances, unaware that it reflects patterns the algorithm learned from training data. Improving transparency would require not just technical disclosures but education enabling consumers to understand what’s actually happening.

The UK FinTech Landscape and AI Adoption

The UK has positioned itself as a global fintech hub, with hundreds of startups operating across lending, payments, wealth management, and insurance. These companies have been early and enthusiastic adopters of AI, partly because they lack legacy systems and regulatory history, partly because AI-driven personalisation is core to their competitive advantage. Firms like Revolut, Wise, and Clearpay use machine learning for fraud detection, credit decisions, and customer service at scales that would have been impossible before.

The fintech sector has also pioneered alternative lending models powered by AI. Peer-to-peer lending platforms use algorithms to match borrowers and lenders and to assess default risk. These platforms serve borrowers who might be rejected by traditional banks—potentially expanding access to credit, though sometimes at rates that raise fairness concerns. Alternative lending based on alternative data—mobile payment history, supermarket shopping patterns, social media activity—creates opportunities for the unbanked to access credit, but it also creates risks of predatory lending to vulnerable populations.

At Nexatech Ventures, we’ve invested in multiple fintech companies using AI to transform financial services. What I’ve learned is that AI capability alone isn’t sufficient—firms also need robust governance, thoughtful consideration of fairness, and genuine commitment to user protection. Some of the most promising fintech companies have integrated fairness considerations into their engineering culture, recognising that long-term success requires customer trust and that trust is undermined by algorithmic discrimination.

Regulatory Evolution and Future Frameworks

The regulatory environment for AI in finance is rapidly evolving. The FCA has published guidance on algorithmic trading, on algorithmic decision-making systems, and on operational resilience. The PRA (Prudential Regulation Authority) has indicated that it considers AI system failures potential prudential risks that could threaten financial stability. Internationally, frameworks like the EU AI Act provide precedents for more prescriptive regulation, though the UK has chosen a lighter-touch approach so far.

Future regulation will likely include mandatory testing and auditability requirements, mandates for human oversight of critical decisions, enhanced transparency requirements, and potentially prohibitions on specific risky applications like emotion recognition in lending decisions. There will also likely be requirements for firms to maintain detailed documentation of how their AI systems work, training data characteristics, and testing results. This documentation should enable regulators to understand potential risks even if individual decisions remain opaque.
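
Documentation requirements of this kind tend to resemble the 'model card' practice emerging in industry. As a purely hypothetical sketch of what such a record might capture, not any regulator's published template:

```python
# Hypothetical model documentation record, loosely modelled on 'model cards'.
# Every field and value here is an illustrative assumption.
from dataclasses import dataclass, field

@dataclass
class ModelRecord:
    name: str
    purpose: str
    training_data: str                 # provenance and date range
    known_limitations: list[str] = field(default_factory=list)
    fairness_tests: dict[str, float] = field(default_factory=dict)
    last_reviewed: str = ""

record = ModelRecord(
    name="consumer-credit-scorer-v3",
    purpose="Consumer lending approval recommendations",
    training_data="Internal applications 2018-2023, UK only",
    known_limitations=["thin-file applicants", "self-employed income"],
    fairness_tests={"approval_rate_gap": 0.03, "fpr_gap": 0.02},
    last_reviewed="2024-06-01",
)
print(record)
```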

What’s emerging is a framework where the regulatory burden falls not on making algorithms perfectly transparent or fair, but on firms demonstrating they’ve understood potential risks, tested their systems, and have processes for identifying and responding to failures. This is pragmatic regulation that acknowledges the limitations of current technology whilst creating accountability for outcomes.

The Role of Systemic Risk

Beyond individual fairness concerns, there’s a systemic risk dimension to AI in finance. When many financial institutions use similar or identical machine learning frameworks, they can create correlated decision-making that amplifies financial stress. If multiple banks’ credit algorithms simultaneously tighten lending in response to signals of economic downturn, that tightening could itself trigger a downturn. If algorithmic trading systems respond similarly to market movements, volatility can cascade. These network effects create risks that individual firms acting rationally in their own interest might not fully internalise.

The concentration of talent and technology in AI development in finance creates another risk. A small number of companies develop the frameworks and techniques that many institutions use, and a small number of data sources provide the training data. This concentration means that common mistakes or vulnerabilities could affect the entire system. If a widely-used machine learning framework contains a hidden bias or systematic error, that error could propagate across institutions. Regulatory awareness of this systemic dimension is still developing.

There’s also risk associated with over-reliance on AI systems and under-reliance on human judgement. When banks let go of experienced human underwriters and replace them entirely with algorithms, they lose the capacity to catch errors, identify novel situations, and exercise judgement grounded in an understanding of human complexity. The path dependency is concerning: as firms invest in AI systems, they shed human expertise, making it harder to revert to hybrid approaches if the AI systems fail.

The Investment Perspective

As an investor focused on AI innovation through Nexatech Ventures, I find the finance sector particularly interesting and concerning. The financial opportunity is enormous—AI that improves decision-making in a £2+ trillion industry can generate extraordinary returns. Yet the externalities are also significant. An algorithm that optimises profitability whilst discriminating against certain groups transfers wealth from those groups to shareholders. This works economically but fails morally and increasingly fails legally.

We’ve been deliberate in seeking out AI companies in finance that approach the technology with genuine consideration for user outcomes, not just institutional profitability. Companies that invest in fairness testing, that maintain human oversight of critical decisions, and that are transparent with users about how AI affects them are more likely to be sustainable long-term. Regulatory risk is real, and firms that get this wrong face reputational damage, customer loss, and eventual regulatory enforcement.

The best fintech companies are also increasingly recognising that fairness is a competitive advantage. When customers know they’re being treated by algorithms that actively consider fairness, when they have transparency about how decisions are made, when they have meaningful recourse if things go wrong, trust is stronger. This translates into customer loyalty and lower regulatory risk. This isn’t just ethics; it’s good business.

Looking Forward: Challenges and Opportunities

The trajectory of AI in financial services is set—the technology will become more sophisticated, more central to decision-making, more influential in determining who accesses financial services and at what cost. The challenge is ensuring this trajectory doesn’t increase inequality, undermine trust in financial systems, or create systemic risks. This requires action on multiple fronts: better regulation, stronger industry standards, more research into fairness and robustness, and cultural change within finance towards seeing user protection as a priority not a constraint.

Technological solutions alone are insufficient. We need regulation that creates incentives for fairness, we need transparency requirements that let users understand what’s happening to them, we need audit trails that enable investigation when things go wrong. We also need public understanding of the extent to which financial decisions affecting their lives are being made by algorithms they don’t understand and can’t contest.

The exciting opportunity is that AI in finance can improve outcomes for everyone if approached with genuine consideration for fairness and user wellbeing. Better risk assessment that’s also fair could extend credit to people wrongly excluded by biased systems. More sophisticated fraud detection could reduce financial crime. Algorithmic trading can function well if properly regulated and designed to prevent systemic risks. The question isn’t whether to use AI in finance but how to do so in ways that serve public interest alongside commercial interest.

Conclusion: Balancing Innovation and Protection

AI in financial services represents extraordinary opportunity and genuine risk. The opportunity to improve decision-making, reduce costs, and deliver better products to customers is real. The risk of automating discrimination, creating uncontestable barriers to financial access, and destabilising financial systems is equally real. Navigating this requires neither blanket prohibition nor unconstrained adoption, but rather thoughtful regulation, industry commitment to fairness, and genuine focus on user protection.

As someone investing in this space, I’m convinced that the firms that will ultimately succeed are those that get this balance right—that use AI’s power whilst maintaining human oversight and judgement, that test for fairness, that are transparent with users, and that genuinely consider outcomes beyond their own profitability. This is not just ethics; it’s good business in a world where regulatory risk is rising and customer expectations increasingly include fairness and transparency. The regulatory tightrope is indeed precarious, but it’s navigable for those thoughtful enough to understand what balance actually requires.

Related reading: How Emotional AI Claims to Read Your Feelings — and Why It Probably Can’t, Developing Ethical Frameworks for AI Implementation and What is information communication technology ict: A concise guide to ICT basics.



Written by Scott Dylan