Title: GAO Stands Out in its Ability to Answer the Tough Questions. Here’s How We Do it.
Description: Before Congress makes tough policy decisions, it often comes to GAO for information. This episode looks at some of the toughest questions Congress has asked GAO to answer, and how we did it. We’ll learn more from our Applied Research and Methods Team experts: Lawrance Evans, Jared Smith, and Mike Hoffman.
Related Work: GAO.gov
Released: February 2026

{Music}

[Lawrance Evans:] What you’re going to get is something that is fact-based. It’s balanced, it’s non-ideological, nonpartisan. We are uniquely advantaged to deliver that because of the process.

[Holly Hobbs:] Welcome to GAO's Watchdog Report, your source for fact-based, nonpartisan news and information from the U.S. Government Accountability Office—I’m your host, Holly Hobbs. Congress often has tough questions to ask when trying to make policy decisions that can affect Americans’ lives, as well as national security, defense, and our economy. And when members of Congress need answers to these tough questions, they come to GAO. On this special episode of the Watchdog Report podcast, we’ll talk with GAO experts Lawrance Evans, Jared Smith, and Mike Hoffman about how we do this work. Guys, thanks for joining us.

[Lawrance Evans:] Thanks for having us.

[Mike Hoffman:] Thanks, Holly.

[Jared Smith:] Yeah, thank you.

[Holly Hobbs:] Lawrance, you’re the managing director, here at GAO, of the team that designs some of the methodologies that help us answer Congress’s questions. So, in your experience, what are some of the toughest questions they’ve asked us to answer?

[Lawrance Evans:] I love that question. And we've certainly had our share of tough questions. And they might be tough for any number of reasons. The subject matter may be complex, requiring specialized talent or skill to execute or unpack it.
It could be that we have to assemble data that's hard to get, or that has data reliability concerns, availability concerns. And we have to navigate those challenges. It could be a causal question. And those are notoriously difficult to unpack. Or we could be asked to estimate or measure something that is extremely difficult to measure or estimate. So, here are a few: Do big banks, because of perceptions that the government will bail them out if they get into financial trouble, have benefits relative to their smaller counterparts? Do they benefit from that protection? Are we treating their securities like Treasury bills--as if they have no risk? How about, how much fraud is there in the entire federal government? Give us an estimate of that. How about this one--there's a regulation that's designed to force companies to disclose their supply chains and where they're sourcing their minerals. Does that have the desired impact, which is safety and security in the Congo? How do you unpack that particular question? Regulation in the United States. Safety and security in the Congo. All right? So those are some pretty difficult questions to answer. Could be questions about homelessness. What are the causes of homelessness? And are rental costs one of those factors? So, questions like that, I think, would rise to the level of being extremely difficult because they check many of those boxes.

[Holly Hobbs:] Jared, one of the toughest questions that Lawrance just mentioned was that we were asked to look at how much fraud there is in federal spending. As our chief statistician, you actually led that work, and we’re lucky to have you here today. Tell us how we created that estimate.

[Jared Smith:] Yeah. So it's difficult for many reasons. One is, the fraudsters don't go out and raise their hand and say, ‘Hey, you know, I just committed fraud. This is how much it was. Make sure you put it in your books.’ And there's no data set where there's a column where the government's like, ‘Yep!
And this was—this is fraud, this is fraud,’ right? Most fraud is hidden. There’s a small amount that's caught and captured, but it's like an iceberg. Most of it goes undetected. And so, we had to find a way, given what we know about fraud, to estimate what is unknown. And so, the first step was getting the experts we needed to do the work, right? So we got statisticians, economists, actuaries, and experts in fraud, you know, brought them together. And then we reached out to the agencies and got the data that was available. So data about what fraud had been caught and prosecuted. Right? Investigation data, data about leads about fraud. And so, we then worked to develop a model to take the data that we had and estimate the amount of fraud that was not visible. And so, really a combination of getting the right experts and the right data together. But, yeah, it was a really big challenge. And we were very happy with the report.

[Holly Hobbs:] So many of the tough questions we get asked are about the economy. Mike, you’re a long-time economist here at GAO and you’re our new Chief Economist, congratulations! How do we answer those questions?

[Mike Hoffman:] Big picture—we answer them in a data-driven way. Our reports don't involve speculation. We don't advise Congress on what economic theory would suggest under ideal conditions. We gather data. We analyze it, and we answer the question based on what the data tells us. In particular, we can answer some pretty tough questions about banks, like the one Lawrance mentioned a moment ago. It's a challenge that we call ‘too big to fail.’ So, what if large banks could borrow money cheaply because their creditors believe those banks might ultimately be bailed out? We call that an implicit subsidy. It's not a subsidy that's based on a legal or regulatory benefit. It's implied and therefore, not unlike fraud, it's essentially invisible. So how do you—how do you measure an invisible subsidy?
You get the sense of what makes this such a challenging question. To do that, you assemble a lot of economic data. You measure how cheaply or expensively banks can borrow. And then you measure every factor that might govern how much it should cost for that firm to borrow, absent some kind of expectation of a government bailout. So you need data on how profitable the bank is, how liquid its assets are, how risky its business model is, and what kind of reputation it has. And you analyze this data together. And we found the invisible subsidy. It turns out it was quite sizable in 2008, when the federal government was actively bailing out the financial system and many of the largest banks. And it got smaller over time, suggesting that some of the efforts regulators and legislators had undertaken were successful in reducing those expectations of government support in a crisis.

[Holly Hobbs:] How is it that GAO specifically is able to answer these questions and not, say, a think tank down the street in D.C.?

[Lawrance Evans:] So certainly, look, there are think tanks that are good, and we use some of their work in our work. But I think what sets GAO apart is, one, our access to information. We simply have access to government records, information, and data that other entities can’t access, right? That unparalleled access allows us to answer questions that others can't. Just as important is our reputation. It's how we do the work. We're not wedded to the answer. We have no skin in the game. We're wedded to creating a process that generates valid answers. And so, based on that reputation, we have institutional trust. And that institutional trust gives us a passport to travel in places even when we don't have access authority. Right? When we call, people call back. We talk to Nobel Prize-winning economists, financial experts. Some of our international peers will answer the call to be part of our data collection process. And so that's extremely important.
And then lastly, we have the collective competence to do the work. We have all the talent, the human capital, the expertise we need to answer some really difficult questions.

[Holly Hobbs:] Lawrance, you lead a team of specialists. But most of the teams here at GAO are auditors and analysts. Why is that collaboration between the teams so important? What are we able to do?

[Lawrance Evans:] So the Applied Research and Methods group at GAO houses a large percentage of our technical expertise. It's the biggest concentration of technical expertise in the agency. So, we have our statisticians, data analysts, our economists, actuarial scientists. Our social science methodologists, who bring knowledge of a wide range of qualitative and quantitative methodologies. And, of course, our library scientists, who are critical to the research ecosystem at GAO. So we pair that with our subject matter expertise—and that's the analysts and the project planners. They are critical. But we marry those two together. And we're then able to really answer some pretty tough questions.

[Holly Hobbs:] Mike and Jared, you guys are both “number guys.” But sometimes we’re asked to look beyond the numbers, right, to understand an issue. How do we do that?

[Mike Hoffman:] So I think it's important to realize that we develop a good understanding of the data that we use. We don't take the quality or completeness of the data for granted. So even if our instincts are to use data quantitatively to answer our questions, we approach it with a skeptical mindset. We talk to the groups who have produced the data. We figure out what we can or can't say with the data. From the economic side, we use qualitative information to develop a deep understanding of, say, the markets or industries that we're studying. And often we find gaps or limitations in the data.
And that leads us to, you know, rather than naively talk about trends in the data, gather diverse perspectives about the situation on the ground related to a particular program or a particular industry. So we will talk to academics. We will talk to industry. We will talk to the government officials leading a program. We will talk to the people who are benefiting from a program. So we're gathering diverse perspectives. We might be using those perspectives to challenge our own assumptions. We might be gathering evidence that either corroborates or conflicts with what we learned from the data. As we bring that information together, we're creating the kind of information that Congress can rely on to make decisions, that agencies can, you know, have confidence in.

[Holly Hobbs:] Jared?

[Jared Smith:] I mean, the example I can give is a technical tool that's used to get beyond the numbers, which is sampling. So, from the fraud side, they’ll do some data analytic work, for example, and find an anomaly—maybe an individual was paid unemployment benefits in two states. Okay, well that's a risk, because you're only supposed to get paid in one state. But maybe the person moved within the month. Maybe they didn't understand the rules and submitted two applications. So then what you do is you sample a subset. We did this in the unemployment insurance fraud report. We look and we say, okay, well where are the records of where they lived? Let's look at their case file. Right? Let's get an understanding of what percent of these actually are fraud. And then we use an estimation procedure to carry that forward. And we do that all the time, right? Where we have some broad result, we sample, and we look really closely. And it might involve going out and doing an inventory, right? Where we actually go on site and look at records.
And so it's sort of a nice combination that we do, where we take a technical method, which is sampling, and then we use that to go out, look really closely at a record, and then calculate an estimate, whether it's fraud detection or compliance review or inventory. So that's just one example where we sort of mix methods--something that's technical and something that's sort of on the ground.

[Holly Hobbs:] So have we ever found something unexpected?

[Mike Hoffman:] We have. But I think what I would say first is that when we're approaching a question open-mindedly, we shouldn't be surprised very often. But I think it's still true that we often come to conclusions that challenge the conventional wisdom or some of our preconceptions. An example comes to mind that Lawrance mentioned earlier, related to supply chains for conflict minerals. So in 2012, the Securities and Exchange Commission issued a regulation requiring public companies to determine whether or not they were relying on minerals that could have come from conflict zones. The hope was that disclosing that would discourage them from buying those minerals and from inadvertently financing armed groups, particularly in the Democratic Republic of the Congo, where these conflicts were centered. To figure out whether that regulation was having the intended effect, once again, we take a data-driven approach. So that's data on the degree of conflict in the region. That's data on the market for these minerals, their prices. And that's using information on political, economic, and geographic conditions that might also influence conflict. What we found, surprisingly, was that the regulation had not reduced conflict, and it may have even increased it. As the regulation made gold mines more valuable to armed groups, they moved to take over more gold mines. That allowed them to melt down and smuggle the gold, and sell it into supply chains that were not in the United States.
So while the regulation had induced businesses to get more serious about understanding their supply chains, it didn't achieve the results on the ground. It did not result in reducing conflict in the Democratic Republic of the Congo.

[Jared Smith:] I've got another example. In the pandemic, with unemployment insurance, the government wanted to get money out quickly, so it reduced controls. And GAO was asked to look at, well, how much fraud occurred when they were getting this money out? And we sort of expected to find fraud, but what was really shocking when we looked at the data was how much it hit specific states. You had states with almost no evidence of fraud. You looked at their unemployment insurance and it was very steady over time. There weren't a lot of indicators of fraud. And other states were absolutely overwhelmed. And it seemed like the fraudsters, once they realized there was an opening, sort of spread the word. And you just had certain states where the number of fraudulent applications was much larger than the normal applications, compared to other states where it was almost flat. So we knew there was going to be risk, but seeing how different the risk was across states was really surprising.

[Holly Hobbs:] Lawrance?

[Lawrance Evans:] We got a request once to look at a phenomenon known as bank walkaways. These are cases where banks walk away from the foreclosure process and don't take foreclosed properties onto their balance sheets. It sounded a bit strange. I couldn't figure out why a bank would make that kind of business decision. And so we built the first database that allowed us to assess how prevalent these bank walkaways were. And lo and behold, they weren't a significant percentage of foreclosures, and they were largely concentrated in certain areas. But for sure, there were a meaningful number of them. And they had impacts on local communities.
So that's a case where we went out and actually had to assemble data that didn't exist before. And you talk about getting behind the numbers--we actually did site visits to ensure that these homes were actually vacant, and that they existed where we said they existed. So we did our own data reliability assessment. From there, we were able to do some econometric modeling to determine how and when these occur, and to make some recommendations to reduce the likelihood in the future.

[Holly Hobbs:] That’s not the only time we’ve built a database because one didn’t exist, right? There are others?

[Lawrance Evans:] Yeah, we had one of the first financial statement restatement databases. This was during the Enron era, when companies were restating their financials. Where you had, you know, large corporations restating their earnings with significant impacts, significant capital market impacts. And we built the first publicly available data set on earnings restatements, why they occurred, and the consequent impact on stock prices. And, for a while, that data set was heavily mined by academics in the auditing research area. So that's another case where we built an original data set.

[Holly Hobbs:] Lawrance, I’m gonna give you the tough question—most major studies get peer reviewed. But we’ve largely been talking about how our work is one of a kind. So how does that work?

[Lawrance Evans:] Really good question. And I would say an external review of our work is built into our robust and pretty rigorous quality assurance framework. So not only are we undergoing significant internal reviews from different units in GAO, we're incorporating reviews from the outside. Sometimes, we will bring academics in on that internal-review process. So, for the report that Mike talked about, too big to fail, we had a review from the Bank of England. We had several academics review the model specification and the results. So that's part of the process.
Importantly though, we also have a process by which we share the facts in our report with the relevant agencies and give them an opportunity to comment on the draft report. And if you open up a GAO report, you'll see where they have written a formal response to our report. Right? So that's baked in there. More importantly, not only are our products reviewed externally, our whole infrastructure for creating reports is peer reviewed by our counterparts. So, GAO is considered a supreme audit institution. There are supreme audit institutions across the globe. Those other institutions will form a group, and they will come and scrutinize GAO’s processes to see if it has the culture, the processes, and the procedures to generate quality reports. In the last peer review, they commented on the independence of the agency and the reliability and integrity of the process. And, you know, they noted specifically that ARM was important to those outcomes.

[Holly Hobbs:] Lawrance, I’ll give you the final word—what would you want members of Congress or the public to know about the work we’re doing here at GAO to answer these really tough questions?

[Lawrance Evans:] So, two things. The big question that we have at the end of every engagement is, ‘how do we know?’ And if you open up a report and go to the Objectives, Scope, and Methodology section, it will tell you in a pretty sophisticated way how we know what we know. So, I would want Congress to know that that process generates authoritative information for them. It is the thing that makes our reports unique, makes us trustworthy. Our process ensures that we would never sacrifice scholarship to sentimentality, right? Or place emotions and intuitions before reason and evidence, or inject ideology into the process. What you’re going to get is something that is fact-based, it's balanced, it’s non-ideological, nonpartisan. And that is what we need in an era of misinformation, which can undermine decision-making.
In a world where the line between fact and opinion is blurred, we are uniquely advantaged to deliver that because of the process. The other thing I'd point out is that the answers to these very complex questions aren't going to be simple. We talked about using mixed methods, and triangulating information, and assessing the degree to which we're confident, right? So we will provide the truth, but we will also provide an assessment of uncertainty, tradeoffs, and where we're not as confident about the answer. So people should expect, when they ask these very difficult questions, to get contextually sophisticated and nuanced results. And that makes us a trustworthy source of information.

[Holly Hobbs:] That was Lawrance Evans, Mike Hoffman, and Jared Smith talking about how GAO answers the tough questions for Congress. Thanks for your time, gentlemen!

[Lawrance Evans:] Thank you.

[Mike Hoffman:] Thank you.

[Jared Smith:] Thank you.

[Holly Hobbs:] And thank you for listening to the Watchdog Report. To hear more podcasts, subscribe to us on Apple Podcasts, Spotify, or wherever you listen. And make sure to leave a rating and review to let others know about the work we're doing. For more from the congressional watchdog, the U.S. Government Accountability Office, visit us at GAO.gov.