Introduction
The UK Treasury Committee has sharply criticised the Government, the Bank of England and the Financial Conduct Authority for a “wait-and-see” stance on AI in finance, warning that this reactive approach risks “serious harm” to consumers and financial stability.
The Unaddressed Dangers of AI in Finance
Consumer Protection and Fraud Concerns
MPs highlight opaque AI-driven decision-making in credit scoring and insurance pricing that can disadvantage vulnerable customers and entrench bias. The report also flags a rise in AI-enabled fraud and the spread of misleading automated financial advice, stressing the need for clearer accountability when models harm consumers.
Systemic Stability Under Threat
Beyond individual consumers, the Committee warns of system-wide exposures: increased cyber vulnerability from AI tools, concentration risk from dependence on a few large US tech suppliers, and the prospect of herd behaviour as firms converge on similar models and data, which could amplify shocks across markets and trigger wider crises.
Demands for Urgent Regulatory Action
From ‘Wait-and-See’ to Proactive Measures
MPs call for concrete steps, including AI-focused stress tests to assess the City of London’s resilience to model-driven market dislocation, and practical FCA guidance by year-end clarifying consumer protection rules, model governance and lines of accountability for firms using AI.
Authorities Respond: Commitment Versus Urgency
The FCA, Treasury and Bank of England have acknowledged the report and point to ongoing work on model risk, cybersecurity and AI governance. Yet MPs say reassurances are not enough and urge faster, clearer regulatory signals. The UK’s experience serves as an early warning for global regulators: delayed action could turn AI opportunity into a source of avoidable harm.
For financial executives and risk teams, the message is clear: prepare for tougher oversight, new stress scenarios and stricter expectations on transparency and accountability.