Title: The Financial Sector is Increasing Its Use of Artificial Intelligence. What's the Risk?

Description: The financial services sector is increasingly using artificial intelligence to automate services and decisions. While this could provide benefits for consumers, it also comes with some risks. We learn more about this issue from GAO's Michael Clements.

Related work: GAO-25-107197, Artificial Intelligence: Use and Oversight in Financial Services

Released: May 2025

{Music}

[Mike Clements:] AI has a lot of potential benefits in financial services, but there are risks associated with it.

[Holly Hobbs:] Hi, and welcome to GAO's Watchdog Report, your source for fact-based, nonpartisan news and information from the U.S. Government Accountability Office. I'm your host, Holly Hobbs. The financial services sector—which includes things like banks, investment firms, and Wall Street—is increasingly using artificial intelligence to automate services and decisions. While this could provide benefits for consumers, it also comes with some risks. Here to tell us more about this issue is GAO's Michael Clements, who led work for a new report about this topic. Thanks for joining us.

[Mike Clements:] I'm pleased to be here, Holly.

[Holly Hobbs:] Mike, maybe we can start with how AI is currently being used by the financial sector, and how widely?

[Mike Clements:] Sure. So, there are a variety of flavors of AI. One is machine learning, where the computer simply gets better at a given task each time it performs it. The more current, updated version is generative AI, where the computer is in fact creating new content. I give that introduction because in the financial services industry, what we're mostly dealing with at this point is machine learning for internal functions—think back-office activities at the firm. So we can think of a couple of examples. One would be illicit finance.
Financial institutions are required to help root out money laundering, terrorism financing, and other criminal activities. And AI can really be beneficial in this space by going through tremendous numbers of transactions and finding the kernel of a pattern that could indicate illicit activity. That's the type of back-office functionality that can be very beneficial. Another function is automated trading. In this case the firm is trading securities, such as stocks, and AI can be very effective at those types of activities. We're focused mostly on back-office machine learning. There are some customer-facing activities. Some of our listeners, when they get on a webpage, may see a little box pop up asking if they need help or have a question. That's essentially artificial intelligence, right? It allows you to type in a question and get an answer back. But, again, what we're mostly looking at at this point are these back-office machine learning activities.

[Holly Hobbs:] That all sounds like positives. So, what's the concern here?

[Mike Clements:] There are benefits to those types of activities, right? They help lower costs, enhance customer experiences, and expand access to services, which is really critical in financial services because we still have a large number of people who don't have access to services. But you're right to point out some of the concerns in this space. A lot of the concerns are really about perpetuating risks that already exist: that artificial intelligence just continues those patterns moving forward. A few examples. One would be in the space of fair lending. Lenders are not allowed to consider factors such as the borrower's age, gender, ethnicity, and race in making decisions. But to the extent that that happened in the past, and an AI model is learning based upon the past, it could perpetuate fair lending problems moving forward. Similar with conflicts of interest.
In some instances, the firm is required to be a fiduciary: the firm is required to act in the customer's best interest. But sometimes, you know, that can bump up against the firm's profit motive. Again, to the extent that there were problems in the past, those could be perpetuated moving forward. There are also some unique problems associated with AI that might pop up. One is known as herding. If a large number of firms are using the same model, or a very similar model, and we think back to our automated trading example, all the firms may be buying or all the firms may be selling at the same time, which could cause price gyrations.

[Holly Hobbs:] So who or what entity is responsible for protecting consumers from some of these risks?

[Mike Clements:] We've previously reported that the U.S. financial regulatory environment is complex and fragmented. That plays out in the artificial intelligence space as well. So if we're thinking about depository institutions, these would be banks or credit unions, a number of entities might be involved. Two of them I can name would be the FDIC, which many of our listeners are probably familiar with. Another would be the National Credit Union Administration, NCUA, which oversees credit unions. If we're dealing with securities such as stocks and bonds, then we're looking at the Securities and Exchange Commission. So, you ask, how do they go about doing this? For the most part, at this point, they're relying on existing laws, regulations, and guidance. To some extent, the agencies are technologically agnostic. So, if we think back to our fair lending example, a financial institution and its regulator are going to look at it the same way whether it's conducted with paper and pencil or with artificial intelligence, right; the same rules apply. That said, there are a few instances where the regulators have provided some guidance. In one case, that was with artificial intelligence and its interaction with lending.
[Holly Hobbs:] Have the regulators, or the banks themselves, told us about any challenges they face in overseeing this technology?

[Mike Clements:] In the case of the regulators, they really need to have an understanding of how this technology works, the benefits and risks we're talking about, and also where it's appearing in the industry. In some of our past work, we've said the regulators need to up their game in the area of personnel management as it applies to what's known as "fintech," of which artificial intelligence is one form. One of the things we talked about is that they needed to enhance workforce planning, to ensure they had staff with the right skills in place to do that. What we found in this current report is that agencies are making efforts to enhance training for their staff, to get them up to speed.

[Holly Hobbs:] I know it's kind of early days and technology's always evolving, but where do we see this technology going in this sector?

[Mike Clements:] It's very tough to predict where this may go. What was interesting is that the agencies are, in fact, adopting artificial intelligence themselves. And to some extent, they're doing it in a similar way as the firms they regulate, focusing a lot on their internal operations. So, what we saw was these regulators looking at things such as using AI to assess risk, to assess whether some illegal activity is happening, and also to conduct research. One thing we did find, and what they've reported, is that they're using AI output as an input to their decisions. They're not exclusively relying on AI to make, for example, regulatory decisions.

{Music}

[Holly Hobbs:] So the use of AI by the financial sector may come with benefits to consumers. But it also may perpetuate existing fair lending concerns, and the regulators who monitor this issue are playing catch-up.
Mike, what more do we think should be done to make sure consumers are protected from any issues this technology creates?

[Mike Clements:] We identified two gaps in this area, and actually both relate to credit unions and NCUA. The first gap we saw related to NCUA concerns what's known as model risk management guidance. Credit unions, and really all sorts of financial institutions, rely on models, one type of which is an AI model. So, the regulators produce this model risk management guidance to help their own staff understand and evaluate firms, but also to help the firms understand what's expected. And what we found is that NCUA's guidance was not as robust as it could have been. So we have a recommendation there for NCUA to enhance its guidance. The second challenge we found within NCUA concerns its authority to examine third-party service providers. Now, in many instances, a credit union, or even a smaller bank, isn't developing AI models itself; it relies on a third-party vendor to do that. In the case of banks, the banking regulators have the authority to monitor and oversee those third-party service providers. NCUA does not. So we think that's an area of risk. In the past, we've recommended that Congress consider giving NCUA that authority, and we've recommended that again this time.

[Holly Hobbs:] And last question, what's the bottom line of this report?

[Mike Clements:] AI has a lot of potential benefits in this space in terms of lowering costs, as we've talked about, enhancing customer service, and perhaps creating more accessible financial services. But there are risks associated with it. And what we've seen at this point is that both the firms and the regulators are moving forward with it, adopting it, but doing so in a cautious manner.

[Holly Hobbs:] That was Mike Clements talking about our new report on AI use in financial services. Thanks for your time, Mike.

[Mike Clements:] Thanks for having me.
[Holly Hobbs:] And thank you for listening to the Watchdog Report. To hear more podcasts, subscribe to us on Apple Podcasts, Spotify, or wherever you listen. And make sure to leave a rating and review to let others know about the work we're doing. For more from the congressional watchdog, the U.S. Government Accountability Office, visit us at GAO.gov.