The use of artificial intelligence (AI) in financial services is still only at a “nascent” phase, with a number of firms yet to ask themselves fundamental questions around ethical use, according to the Financial Conduct Authority’s executive director of strategy and competition.
Announcing the launch of the FCA’s partnership with the Alan Turing Institute at a conference this morning, Christopher Woolard said the project would focus on the practical challenges of transparency and explainability of AI in the financial sector.
He said the results of a recent survey conducted with the Bank of England had revealed that the use of AI in regulated financial firms was “best described as nascent”, explaining that it is employed mainly for back office functions, while customer-facing technology remains largely at the exploration stage.
Nevertheless, some firms are already considering the ethical implications of using AI-driven technologies. “By and large those who lead financial services firms seem to be cognisant of the need to act responsibly, and from an informed position,” Woolard said.
However, he struck a note of caution when it came to the growing divide between incumbent financial services organisations and the less wary challengers prepared to experiment with new technologies and innovative use cases for AI.
“Some larger, more established firms are displaying particular cautiousness, some newer market entrants can be less risk averse, some firms haven’t done any thinking around these questions at all – which is obviously a concern,” Woolard warned. “There is a balance to be struck here.”
Yet whilst uptake in the sector remained slow, he predicted that the use of AI in innovations such as Open Banking and data sharing could transform the banking landscape in the coming years. “In retail banking - a utility of central importance to consumers that has long been bedevilled by a lack of innovation - AI has the ability to be genuinely transformative.
“The implementation of Open Banking last year heralded the beginning of what is likely to be a period of profound evolution in the banking sector,” he said.
But while early indications suggested that Open Banking has increased competition, Woolard warned that such innovations do not “exist in a vacuum”, with several “big caveats” remaining around public trust and consumer willingness to understand the value of sharing their data with financial services providers.
He pointed out that the FCA would be conducting its own work to examine the regulatory and ethical implications of outsourcing decisions on mortgages and credit checks to machines that could “materially affect people’s lives”, and suggested that the technologies used by regulators themselves would need to be updated for a fully digital age.
Woolard explained that the regulator is approaching AI with the following principles in mind: continuity, which takes into account which parts of the debate are genuinely new and which are not; public value, taking account of how AI can create value for citizens; and collaboration with other entities to answer the questions AI poses.
He suggested that despite dystopian predictions of robots taking over, the financial services sector was not yet experiencing a crisis of algorithmic control. Even so, Woolard acknowledged that the growth of AI would entail certain risks, but underlined that these would vary according to the context in which it is used.
“The FCA doesn’t have one universal approach to harm across financial services – because harm takes different forms in different markets and therefore has to be dealt with on a case-by-case basis,” he stated. He emphasised the need for greater awareness of governance issues surrounding AI, with a particular focus on senior staff factoring in the risks of failure, as well as improving the ‘explainability’ of AI-driven decisions to consumers.
“If firms are deploying AI and machine learning they need to ensure they have a solid understanding of the technology and the governance around it,” he said. “This is true of any new product or service, but will be especially pertinent when considering ethical questions around data – we want to see boards asking themselves 'what is the worst thing that can go wrong' and providing mitigations against those risks.”