The Bank of England (BoE), the Financial Conduct Authority (FCA) and the Treasury have been told by MPs they are not doing enough to manage the risks presented by the increased use of artificial intelligence (AI) in the financial services sector.
A new report by the Treasury Select Committee warned that regulators are potentially exposing the public and the financial system to “serious harm” due to their current positions on the rise of AI.
According to evidence received by the Committee, more than 75% of UK financial services firms are now using AI. The technology is being used by businesses in a variety of ways, including to automate administrative functions and to deliver core services such as processing credit assessments and insurance claims.
In the report published today, MPs acknowledged that AI and wider technological developments could bring considerable benefits to consumers, and the Treasury Committee has encouraged firms and the FCA to work together to ensure the UK capitalises on AI’s opportunities.
However, the Committee also suggested that action is needed to ensure that this is done safely. One recommendation in the report is for the BoE and the FCA to conduct AI-specific stress-testing to boost businesses’ readiness for any future AI-driven market shock.
The Treasury Committee is also recommending that the FCA publish practical guidance on AI for firms by the end of 2026. This guidance would cover how consumer protection rules apply to firms' use of AI and set out more clearly who within those organisations should be accountable for harm caused by AI.
“Firms are understandably eager to try and gain an edge by embracing new technology, and that’s particularly true in our financial services sector which must compete on the global stage,” commented chair of the Treasury Select Committee, Dame Meg Hillier.
“The use of AI in the City has quickly become widespread and it is the responsibility of the BoE, the FCA and the Government to ensure the safety mechanisms within the system keep pace.
“Based on the evidence I've seen, I do not feel confident that our financial system is prepared if there was a major AI-related incident, and that is worrying. I want to see our public financial institutions take a more proactive approach to protecting us against that risk.”