>> David Dornisch: Hello. This is David Dornisch from GAO. Welcome to our webinar for today. This is the GAO Centennial Webinar on "Managing Complexity Across Public Policy Challenges." Before we get into the meat of the panel, a few housekeeping things for you. First off, regarding the Zoom meeting, you won't be visible on Zoom and your names won't come up, but you can use the chat box, and we panelists can reply to you there. Your audio is muted, so you won't be able to say anything during the panel. This webinar is being recorded, and subsequent to the session it will become available on GAO's website. Now, before we get into the actual presentations and discussion, we'd like to play for you a short video from GAO's Comptroller General Gene Dodaro. >> Gene Dodaro: Hello. I'm Gene Dodaro, Comptroller General of the United States and head of the US Government Accountability Office. Twenty twenty-one marks GAO's 100th anniversary serving Congress and the American people. As part of our centennial celebration, we are pleased to present this webinar series called "Foundations for Accountability: Oversight Issues for the Next 100 Years." We rely on a deep pool of expertise within and outside the agency to help monitor changes in public policy and management. In addition to our own people at GAO, we also consult with advisory panels such as the Comptroller General's Educators Advisory Panel, independent researchers, and agency managers who implement the policies and programs we audit. We are proud to bring these experts together for webinars covering the following topics: Leading Practices to Manage, Empower and Oversee the Federal Workforce; Building Integrated Portfolios of Evidence for Decision-Making; Managing Complexity Across Public Policy Challenges; Legal Context of Accountability; and Major Challenges for the Next 100 Years. These webinars will explore the goals, conflicts, tensions, and challenges that shape the need for rigorous evidence-based decision-making to improve government and support oversight. They will highlight promising and effective practices that can help achieve these goals and demonstrate what GAO has done and will continue to do to support an effective, economical, efficient, equitable, and ethical federal government. I hope you will find them informative. Please enjoy. >> David Dornisch: Hello. My name is David Dornisch and I will be moderating this panel. First, I want to thank the Comptroller General for his introduction and our panelists for agreeing to participate in this webinar series. I also want to thank the many people who have been working for more than a year to plan for and make this and the other events that Mr. Dodaro just mentioned happen, with special thanks to Mandi Pritchard and Brody Garner for their leadership in the series. As Gene mentioned in his remarks, GAO combines a variety of expertise and evidence from scholarship and practice, and believes that one of GAO's biggest strengths is its effort to hear from a wide range of stakeholders and seriously follow the evidence where it leads. As Gene mentioned, aside from today's panel, we've done three others and there's one more to come, actually. But today's panel, again, is called "Managing Complexity Across Public Policy Challenges."
This panel grew out of an idea to explore in a centennial panel the ways that administrators, policy makers, and researchers can use technology and analytics tools to evaluate and effectively manage the increasingly complex and cross-cutting issues that government is faced with in areas such as emergency management, climate, transportation, cybersecurity, immigration, and many others. Like many areas that GAO works in, this topic could, of course, easily span more than ninety minutes. For today, we have a really interesting panel of speakers, each of whom comes at these issues of complexity, management, and analytics from different angles. Specifically, we have three GAO panelists and three external researchers. The order of presentation and the basic topics are as follows (and I'm going to share my screen to give you a list of the panelists). Our first presentation will be by Chris Koliba, a professor at the University of Vermont in the Community Development and Applied Economics Department. Chris is going to set the stage conceptually and empirically for the panel, presenting ideas from his extensive research in the areas of complexity science, public management, data analytics, and machine learning. The second presenter will be GAO's Chief Data Scientist and the Director of GAO's Innovation Lab, Taka Ariga. And he will provide some computer demonstrations of current machine learning and artificial intelligence applications at GAO. Our third, fourth, fifth, and sixth speakers will be speaking more topically about specific areas of research that they're involved in. David Trimble will be our third speaker. He is the Managing Director of GAO's Physical Infrastructure Team, and he'll present on auditing and risk assessment under complexity in areas like environmental cleanup and transportation infrastructure. The fourth speaker is Matt Auer, Professor and Dean of the University of Georgia's School of Public and International Affairs. And his topic will be managing wildfire: urgency, complexity, and equity. Fifth, Chris Currie, a Director in GAO's Homeland Security and Justice Team, is going to present on the extensive work GAO has done in the areas of national preparedness, emergency management, national biodefense, and related topics, and his title is "Complexity, Fragmentation, and National Preparedness." Finally, we have Brian Gerber, Professor at the Arizona State University College of Public Service and Community Solutions. His title is "Practice Improvement in Disaster Risk Reduction and Community Resilience Capacity Building." So, I'm just going to open up the presentations now and, as I said, the first speaker is Chris Koliba. Chris Koliba is the Inaugural Director of the University of Vermont Office of Engagement and a full professor in the Community Development and Applied Economics Department at UVM. He is the co-director of the Social Ecological Gaming and Simulation Lab there and served as the director of its Master of Public Administration program from 2002 to 2020. He's the lead author of a book called "Governance Networks in Public Administration and Public Policy" and has published many, many peer-reviewed articles, as well as led efforts on over $50 million in grants from federal agencies. With that, I will pass it over to Chris to tell us about data science and simulations. Thank you. >> Chris Koliba: Thank you, David. >> David Dornisch: You're welcome. >> Chris Koliba: Thank you, David-- >> David Dornisch: Sure. >> Chris Koliba: For that introduction.
I'm now going to be taking over the screen and starting in on my presentation. It's an honor to be invited to participate in today's panel. I've known David for a very long time, and I appreciate this opportunity to take another bite at the apple with the GAO community and talk about data science and simulation methods, and what methodological advances can be employed to track government performance. So, the lens for this panel is framed around complexity. And we have grown in our understandings about complex systems, complex adaptive systems, over time, as our degrees of certainty and uncertainty ebb and flow from high to low, as well as our degrees of agreement on the nature of the problems, the scope of the problems, and, most importantly, consensus around the problems. And our understanding of complexity, of course, has been driven in part by advances in our computational capacities. And so, that's where I'm interested in today's presentations: looking at how some of the computational, methodological advances can drive our understanding of government's performance in this complex environment. And, of course, from a policy science standpoint, we are very much interested in this concept called wicked problems. This is not a term just based out of Boston, but a term of art in the planning and policy world to define a scope of problems that have no clear definitions, are often unique, and are not easily solved, or not solvable at all; there are no stopping rules, et cetera, et cetera. And today's panelists are all going to be talking about areas in which wicked problems persist, including emergency management, water management, and the COVID pandemic, all topics my own research has focused on as well. As I mentioned, computational methods have advanced over the last several decades, and there are many frameworks for understanding these. Here's a very common one that maps the complex systems field and the parameters around emergence and self-organization of complex adaptive systems, and the range of approaches and methods -- from game theory, to systems theory, to nonlinear modeling, pattern formation, and whatnot -- that are now revolutionizing our understanding and our approach to researching public policy and government performance. There have been maps laid out around the history of complexity science. I won't get into these very complicated timelines, but I will highlight, just in the last several decades, that we see advances from network science, which David has pioneered at GAO, to the big data science revolution that is now leading to complexity and policy understandings, as well as the interdisciplinarity of these methods and the ability to couple human and natural systems, human and technological systems, in some very creative and innovative ways, like we'll be hearing more of today. I also want to point us to some seminal work that's been done at the federal level. This is by the National Science and Technology Council on the future of artificial intelligence research. It's been subsequently updated, but the roots are in 2016 with the Obama Administration. And in that document, it lays out both the promises and potential pitfalls of using artificial intelligence to study and advance innovation, studies of government programming writ large, and, more importantly, public policy. One of the major revolutionary drivers of artificial intelligence is something that Taka, I know, will be speaking on, and that is machine learning.
And some members of the audience may not be as familiar with what machine learning techniques allow us to do that we could not do before. They allow you to process very large data sets and perform a number of different functions. Some would argue that some of these functions could already be performed through traditional statistical methods, but not at the scale that we can achieve with machine learning. New classification approaches, more advanced modalities for regression analysis, advances in cluster analysis that we have not had the opportunity to do before, multidimensionality, ranking, and listing. And so, these techniques -- although they've been around for a while, and the outcomes of these techniques have been around for a while -- the ability to take large scales of data, potentially from multiple data sources, is really revolutionizing our field. In terms of applications, we see applications in natural language processing, ideas around computer vision and visualization techniques, and detection of anomalies, those outliers -- those small-tail anomalies that can actually lead to black swan events and are obviously of critical importance for emergency management and homeland security. We're able to do some very sophisticated time series analysis, as well as projections in terms of recommending best practices and best services. And, of course, these are all practices that are being done to us and with us whenever we jump on social media, and it's time now that we take advantage of them for processing and evaluating government performance and public policies. Of course, there are a number of frameworks to be thinking about relative to the role of AI and human beings: AI performing functions alongside human beings -- some examples you'll hear today from Taka, for example, fall into this category. We're beginning to see more and more, though, as the crystal ball shows, AI performing functions to reduce human cognitive burden and overload. And eventually, AI will perform functions in lieu of human beings. This is a reality that is facing us. The question is, as we embrace these approaches, how do we do it ethically, morally, and effectively, in ways and manners that keep human agency and accountability signals intact, which is obviously critical in democratic societies such as ours. A couple of things relative to the simulation side of this: in addition to machine learning, there are some basic concepts that have been with us for a very long time in systems theory and cybernetics. Understanding the relationship between inputs and outputs, stocks and flows, and capturing these relationships in dynamic feedback and feed-forward mechanisms, can allow us to render better understandings of the relationships among causality, regression, and correlation, as well as to do more anticipatory simulations that look at, let's say, changes in inputs and alterations in feedback systems. An additional simulation modeling approach that's revolutionizing our field is a concept called agent-based modeling. It allows us to model how different agents -- and they could be human agents, constructs and concepts, or facilities -- interact, and it allows for the study of interaction effects and emergent characteristics and qualities.
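To give a concrete flavor of the technique, here is a minimal agent-based modeling sketch in Python. It is a toy illustration only, not any of the models discussed in this panel: the landowner agents, adoption thresholds, and subsidy values are all hypothetical, whereas a real model would calibrate such parameters against observed data.

    import random

    # Toy ABM: "landowner" agents decide whether to adopt a nutrient-reduction
    # practice. Pressure to adopt comes from a subsidy plus the share of peers
    # who have already adopted (a social feedback loop). All numbers are
    # hypothetical illustration values.
    class Landowner:
        def __init__(self):
            self.adopted = False
            self.threshold = random.uniform(0.2, 1.0)  # willingness varies by agent

        def step(self, peer_adoption_rate, subsidy):
            # Adopt once combined social and financial pressure exceeds the
            # agent's personal threshold; adoption is sticky.
            if not self.adopted and peer_adoption_rate + subsidy > self.threshold:
                self.adopted = True

    def run(num_agents=500, subsidy=0.25, years=20, seed=1):
        random.seed(seed)
        agents = [Landowner() for _ in range(num_agents)]
        for _ in range(years):
            rate = sum(a.adopted for a in agents) / num_agents
            for agent in agents:
                agent.step(rate, subsidy)
        return sum(a.adopted for a in agents) / num_agents

    # Emergent behavior: a small change in the subsidy can tip the system from
    # near-zero adoption to near-universal adoption -- the kind of nonlinear
    # "what if" result a policy maker can explore before legislating.
    for subsidy in (0.15, 0.25, 0.35):
        print(subsidy, run(subsidy=subsidy))

Running the loop at different subsidy levels is the simulated analogue of testing alternative policy designs before implementation, which is the point of the watershed example that follows.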
And these methodologies can be used to formulate actual, concrete public policy. And I'm just highlighting very quickly here a recent publication and a model that my colleague Patrick Bitterman and I published in the "Journal of Public Administration Research and Theory." It's an agent-based model built in partnership with policy makers at the state level in Vermont. And the wicked problem here was water quality. We were approached by the Secretary of Natural Resources with questions. They wanted to know how much money to spend, what the appropriate scale of coordination is, and how to use and allocate human resources to achieve, in this case, a reduction of harmful algal blooms. And interestingly enough, the model that I'll quickly mention here resulted in a piece of public policy and legislation called Act 76, which created the clean water service provider framework for regional watershed governance. What we did is we put together an agent-based model that looked at these different scenarios -- acting alone, creating planning districts, and planning and implementation districts -- and simulated "what if" scenarios in this kind of construct. This is an example of how we can use model development to develop simulations and optimization. There is still a need to actively experiment with policy design, but this approach allows you to do it in a simulated environment, testing out alternative solutions while working collaboratively with stakeholders -- in GAO's case, its partner government agencies -- to try out different policy designs and perspectives before they get implemented. And so, this is just an output from this particular model. And what it basically suggested is that mandated planning and implementation districts for watershed management -- water quality management -- in our region were the best route to increase nutrient load reductions. They followed our advice and, again, implemented Act 76. So, what we have here today is a series of talks that lay out wicked problems and our approaches to addressing them. And I'm now pleased to turn it back to David, and we'll be looking to circle back during the discussion and ask questions of the panelists relative to some of these methodological advances that are before us. Thank you very much. I'll stop sharing. >> David Dornisch: Thank you. Thank you, Chris. So, I am now going to hand it over to Taka -- Taka Ariga. Taka is the Chief Data Scientist appointed by the Comptroller General of the United States for the Government Accountability Office. He also leads GAO's newly established Innovation Lab, driving problem-centric experiments across audit and operational teams through novel use of advanced analytics and emerging technologies. As a member of the Senior Executive Service, Taka is also responsible for working with GAO stakeholders to adopt prospective views on oversight impacts of emerging capabilities such as AI, cloud services, blockchain, RPA, 5G, and the internet of things. Taka is natively fluent in both Japanese and Mandarin Chinese. In his spare time, he is also a serious classical chamber musician and a competitive tennis player. I'll hand it off to you now, Taka. Thank you. >> Taka Ariga: Thank you, David. And it's such a, you know, great pleasure to be with the Centennial Webinar audience today. I have to say, selfishly speaking, I see my role within GAO as having one of the most fun jobs.
You know, I absolutely agree with Chris. We certainly live in a very complicated world with many, many wicked problems to address. And this is really why GAO established the Innovation Lab: to have a prospective and forward-looking view on a range of emerging technologies, whether those are AI-driven, blockchain, cloud services, internet of things, 5G, et cetera. The kind of duality that GAO embodies represents not only the oversight, insight, and foresight questions that we need to answer on behalf of the United States Congress, but also how we ourselves take advantage of these capabilities so that we can drive our mission delivery and operational work with much more depth and breadth in terms of our capacity. So, from an Innovation Lab perspective, this is really taking a computational focus on a range of emerging capabilities, really trying to understand the oversight implications of these technologies. AI is a good example of that. We very recently published the first-of-its-kind "AI Accountability Framework." And the goal really is to take many of these high-level principles around ethical considerations, transparency, explainability, and negative societal impacts, and push those principles down to a set of practices and questions -- and, more specifically from an audit perspective, a set of procedures -- when it comes to evaluating artificial intelligence solutions. But at the same time, within the Innovation Lab we have a number of projects where we're developing what we call minimally viable products, or prototypes, to really experiment with different facets of AI and how they can help us as an agency do our jobs better. And so, today I want to spend some time showcasing a couple of examples of these prototypes to give you a sense of how active we are in exploring this technical depth, but also how we are working with our partners at the international and local levels, but also at the federal level, collectively addressing some of these wicked problems when it comes to emerging technologies. So, the first prototype I want to showcase really speaks to a different forum in which we convey a timely set of information. Specifically, this is around Operation Warp Speed. We stood up this dashboard back in January of this year, when the vaccines themselves were just coming online. So, this is an example where there's a lot of complicated science to sort through. There are certainly a lot of complicated policy issues to sort through. So, how do we make the evolving data available and consumable to the general public out there to support, you know, trust in science? To support the availability of vaccines? So, this is a different way, then, from how GAO has traditionally developed our oversight products. This is much more interactive. You can take a look at the Pfizer vaccine, for example. We looked at the technology readiness assessment. Eight out of nine in this case. So, that should convey a level of maturity around how this particular vaccine was developed, in terms of the review that it went through, in terms of the platform.
And as the data themselves evolve, we contemporaneously updated the information in this digital, interactive format so that the public can, in a timely way, understand the different types of vaccines that are coming online and the maturity of those platforms, and can make really informed decisions relative to how we can collectively address some of these pandemic mitigation strategies. Another example I want to show -- and Chris talked about the power of simulation. You know, we certainly live in a complicated world where resource constraints are everywhere. We don't have unlimited budgets, and we certainly don't have unlimited people, or unlimited scope of technology. So, one of the key challenges that GAO has always been working through is the concept of improper payments -- money that has gone out of the federal government that should not have otherwise. One of the key mitigation strategies that we were exploring is how identity verification might actually help mitigate the growth of improper payments, which, if you take last year as a yardstick, amounted to $206 billion last year alone. And that is before we accounted for much of the pandemic relief funding floating out there. But we understand that we can't just implement the most stringent controls out there, because a lot of socioeconomically vulnerable citizens don't necessarily have access to, let's say, a smartphone or a digital credential. And similarly, a lot of these controls are still very expensive from a cost-of-implementation perspective. So, given this set of constraints, what are some of the key trade-offs that individual agencies can toggle to understand the impact? So, this is a prototype of a simulation tool where, on the left-hand side, we allow the user to characterize the kind of program they're running. For example, what is the socioeconomic vulnerability of the population that they're serving? Is it less vulnerable, is it more vulnerable? You can toggle around that. And then, there are the kinds of controls, based on our expert panel recommendations, that are available, whether those are uniform methods or risk-based methods, and there are other parameters that can be adjusted. The long story short here is that this particular simulation tool allows you to see the different trade-offs. So, for example, based on a couple of controls that I selected, we are looking at potentially preventing 93% of improper payments. Yay for us. That's really good news. But at the same time, because of the level of control that we implemented, we're also deterring 37% of legitimate payments. That is, perhaps, not such great news, right? And so, if you take a look at this chart here, the way that we are running the simulation is to really articulate the potential trade-offs between legitimate payments made, legitimate payments stopped or deterred, improper payments stopped or deterred, and improper payments made. Those are the four pillars that we're trying to balance relative to this type of wicked policy problem that we're trying to solve. And then, we also include elements such as budgetary constraints. So, in this example, the totality of the costs relative to individual users is roughly $1,300. There may be some additional constraints at different state and locality levels that may not necessarily make this type of control possible.
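To illustrate the mechanics being described, here is a minimal sketch, in Python, of how such a trade-off simulation could be structured. This is not GAO's tool: the control names, catch and deterrence rates, fraud rate, and per-user costs are invented illustration values chosen only to echo the numbers mentioned above.

    import random

    # Hypothetical identity-verification controls. Each has a chance of stopping
    # an improper claim ("catch"), a chance of wrongly deterring a legitimate
    # claimant ("deter"), and a per-user cost. All values are invented.
    CONTROLS = {
        "uniform":    {"catch": 0.93, "deter": 0.37, "cost": 1300},
        "risk_based": {"catch": 0.85, "deter": 0.08, "cost": 2100},
    }

    def simulate(control, claims=100_000, fraud_rate=0.05, seed=0):
        random.seed(seed)
        c = CONTROLS[control]
        tally = {"legit_paid": 0, "legit_deterred": 0,
                 "improper_stopped": 0, "improper_paid": 0}
        for _ in range(claims):
            if random.random() < fraud_rate:  # an improper claim
                key = "improper_stopped" if random.random() < c["catch"] else "improper_paid"
            else:                             # a legitimate claim
                key = "legit_deterred" if random.random() < c["deter"] else "legit_paid"
            tally[key] += 1
        tally["total_cost"] = claims * c["cost"]
        return tally

    # Toggling the control shows the four-pillar trade-off: stricter screening
    # stops more improper payments but deters more legitimate claimants,
    # at a different total cost.
    for name in CONTROLS:
        print(name, simulate(name))

A dashboard like the one being demonstrated would, in effect, re-run this kind of loop as the user toggles the program and control parameters.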
And then, you can look at, if I choose a different type of control, how the simulation runs in real time to show the key trade-offs. In this case, we went to a risk-based model. The cost has significantly grown, but the proportion of improper payments prevented versus the legitimate payments deterred is much more reasonable. So, there are a number of these controls here that can be toggled to really show the trade-off decisions necessary to implement a lot of these recommendations that are being put forth. The third example I'll showcase is really more about how we can be more collaborative with our congressional stakeholders and more responsive to the kinds of conversations that are happening on the Hill. And this is an example where GAO developed our own topic modeling using natural language processing, as opposed to going to a commercial solution, because what we were able to develop in-house was actually performing better, scaling better, and much better adapted to the kinds of audit reports that GAO generates. So, this is another prototype, looking at all of the conversations that are happening across various committees, whether those are press releases or hearings. Every night we essentially scrape that information and apply our own topic modeling capability to ask: has GAO ever covered or issued a report on the kinds of topics that are being discussed? So, I'll give an example. Let's say I'm interested in the concept of telecom, right? On the left side of the screen, it shows all of the conversations that have recently happened related to the concept of telecom. And on the right-hand side are all of the matching GAO reports that are specifically relevant to the concept of telecom. And so, this is one way for us to be contemporaneous as we see these topics surfacing. How do we be responsive to our congressional stakeholders? Conversely, if there are specific topics with no GAO reports, those potentially become areas that GAO could cover. So, it also helps us identify opportunities to issue additional oversight products on those topics as well. This is a really good example of how we started with a very problem-centered approach, needing to develop our own topic models, but quickly and organically grew that topic modeling capability to help us be much more responsive to the legislative agenda and to some of the technical assistance requests that are coming out of the Hill -- but really keeping a pulse on our key committees of jurisdiction in terms of how GAO can better serve them. There are about two dozen or so other prototypes that we're working on, but I just wanted to really quickly showcase some of the work that's been done by the Innovation Lab. Really, the idea here is for us to have a much more forward-looking and computationally driven capability to help GAO meet its very unique pan-governmental mission in terms of our oversight, insight, and foresight functions. So hopefully, that gave you a preview of some of the work that we're working on. And with that, David, I'm going to turn it back to you. >> David Dornisch: So, hi. Thanks very much, Taka. That's really interesting stuff. And I'm going to move to our next speaker now, David Trimble. David is the Managing Director of GAO's Physical Infrastructure Team.
He joined the Physical Infrastructure Team as a director in April 2020, where he led work on the US Postal Service, federal buildings, and other assets. Prior to joining Physical Infrastructure, David directed a body of work in NRE on nuclear weapons, nonproliferation, cleanup, and clean water. His work produced numerous recommendations and financial accomplishments concerning the management of large capital projects and the handling of nuclear waste. David has presented before Congress on a wide range of issues and has presented GAO's work before numerous organizations such as the International Atomic Energy Agency and the National Academy of Sciences. He holds an MA in Policy Analysis from the University of Chicago and a BA in Philosophy from Lawrence University. Take it away, David. >> David Trimble: All right. Well, thank you very much, David, and thank you for having me. It's an honor to be here with some distinguished colleagues. I'm not used to being in such great and august company, so I appreciate the opportunity. Hey, I'm going to change the focus of the discussion of complexity a little bit, just to take it from the perspective of something I know a little bit about: the auditor's perspective. Because complexity is a challenge, obviously, for program managers, but it's also a huge challenge for the evaluation and accountability community. Could you go to the next slide, please? One of the things I've seen over the years is that complexity certainly is a growing challenge for program managers and auditors. There are sort of two themes that I've seen during my career that are behind this. I mean, if you go back twenty, thirty years, problems were complex then too, right? It's not like those were simple times. The problems are still complex. But I think what changed, largely, has been a growing recognition, at least in the auditing community, that we needed new approaches and more comprehensive, complex methodologies to tackle the problems we had been looking at. I think there was a growing recognition that some of the issues, challenges, and findings we were having were sticking around even though we had made numerous recommendations. So, they were persistent, and they weren't responsive to our traditional approaches. And so, in the eighties and nineties, there was an increasing emphasis for audits to expand their scope, look at the bigger picture, and consider problems and their causes in a larger context, in hopes of making progress in tackling these larger problems. And I remember vividly, on more than one occasion, being lectured by Nancy Kingsbury, who was one of the legends at GAO on the methodological side, about embracing contextual sophistication in our work. Which I took as advice that you need to get out of your foxhole when you're assessing problems and consider the whole system and its complexity if you're going to have any recommendations that are worth their salt. And the other thing I would note on complexity, in terms of how it's changed over the years, is that sometimes complexity emerges on an old problem. Not so much that the problem changes as much as what we as a country and society view as an acceptable solution.
Because sometimes, you know, if you think now of the push on equality and DEI-focused equity issues in public policy, that's a new layer of complexity in the analysis that, frankly, the public didn't demand before, and we were remiss for many years in not addressing it. A more crass example would be nuclear waste. In the past, dumping liquid nuclear waste out at Hanford wasn't necessarily seen as a problem, because you just dumped it into the ground. And then, at some point, they realized that wasn't acceptable anymore. And then, you had a bigger problem to handle. Okay. Next slide, please. So, that's the challenge, then, for the auditor: where there's complexity, identifying cause gets very, very difficult, right? Because the more complicated the system, when something's amiss -- the trains are going off the tracks -- figuring out why gets much more difficult to document. And since I was a philosophy major, I had to pull in my favorite quote from Spinoza, which is that anything that's difficult is going to be hard to tackle, but it's worth it in the end, right? "All things excellent are as difficult as they are rare." The next slide, please. So, this difficulty in identifying cause and getting behind this complexity really leads to the use of more sophisticated and comprehensive criteria and methodologies, and the need for more and better data. Because you really have to identify cause if you're going to be able to make meaningful recommendations and to impact the world to make changes, right? You know, the mantra of an auditor for the elements of a finding: condition, criteria, cause, and effect. What is, what should be, why is there a delta, and what's the bad effect of that delta, right? But you have to figure out what that cause is, and you need the criteria. And the criteria are really driven by the lens through which you're looking, and that's your model and methodological approach. Next slide, please. So, what does that lead to? What you see over time is a growing body of work within GAO that embraces more complex methodologies and approaches. And so, in the eighties and nineties you have the growth of the use of best practices as a way of looking at the operation of programs, particularly DOD programs. And then, you also have the emergence of the guides, which both serve as an evaluative tool but also are much more proactive, to be used by managers in the executive branch carrying out these programs in order to better achieve programmatic results. The best practices covered everything -- from systems logistics, to changing organizational cultures, to integrated portfolio management. It's a very large body of powerful work that was done using those methodologies. And similarly, the guides -- the GAO schedule guide and the cost-estimating and assessment guides -- were really important pieces of work that have led to significant findings and improvements in government programs by stepping back and trying to embrace and get a handle on this complexity. Next slide, please. So, the other tool that I've been using more recently, and I think is very valuable -- and we've had a reference to the artificial intelligence accountability framework -- is one more GAO framework report.
So, we had, a couple years ago, the disaster resilience framework, which is a guide for federal action to promote resilience to climate-related disasters. And then, there's the report I helped work on a couple years ago dealing with a risk-informed decision-making framework for environmental issues. This was largely driven out of work looking at DOE's management of nuclear waste. And both of the frameworks really are, again, a high-level lens to frame and view complex problems and offer a constructive path for managers to address them. You know, the common theme here is huge problems, high costs, lots of actors at all different levels of government. And that's sort of what drives it, and that's where its utility comes in. Could you go to the next slide, please? So, for example, the risk-informed decision-making framework is an overlay, right? It doesn't replace the nuts-and-bolts requirements of the environmental laws, CERCLA and RCRA. It's broad enough to be applied to multiple types and scales of cleanup decision-making, and the framework is really meant to knit together all of the elements that we used to focus on in smaller pieces -- did you look at alternatives? What was your cost estimate? What was your schedule? -- into a cohesive whole. And when done well, the goal of having a good framework is really to establish a decision-making process that the public will trust and understand. So that, even though they may not necessarily like what the final answer is, they know the process has credibility and support. Next slide, please. So, all of these approaches, as you increase this complexity and the breadth, and you're stepping back and trying to take in more of the battlefield in terms of your review -- all of those increase the demand for good data, as well as the opportunity to make better use of analytics. And I think you'll see that, especially on the programmatic side, as folks try to execute these programs. You need to pull that data and the analytic approach together. The caution I would have here, before I wrap up, is just that there's a temptation to always fall back on your data and your analytics, and they're not the answer in and of themselves, right? The data you get and the analysis you generate, if it's mysterious, if it looks like a black box, it's not going to garner support. So, there still needs to be a framework within which that data and analysis operate and are communicated, for the results to be supported and viewed as legitimate. And I think that's really where something like the risk-informed decision-making framework comes into play: you have to have those elements. Those elements are essential for it to succeed, but the rest of it has to be there in terms of collaboration, outreach, and transparency for the process to have standing and lasting effect. And with that, I will cede the floor and look forward to the rest of the presentations. Thank you. >> David Dornisch: Okay. This is David again. Our next speaker is Matt Auer. He's a Professor and Dean of the University of Georgia's School of Public and International Affairs. He previously held positions as Vice President for Academic Affairs and Dean of the Faculty, and Professor of Environmental Studies, at Bates College, and was also Dean of the Honors College at Indiana University.
His research focuses on the politics of decision-making in the arenas of environmental protection, energy policy, and forest policy, and he has published extensively in journals such as Policy Sciences and the International Journal of Climate Change Strategies and Management. He is the recipient of many teaching and research awards. He holds an MS and PhD from Yale University. He has a degree in forest and environmental studies from Tufts University and an AB from Harvard University. Matt, over to you. >> Matt Auer: Thank you so much, David. I really appreciate being part of the proceedings here and have enjoyed the talks so far. And it occurs to me that many of the concepts that Chris, Taka, and David have discussed could now be applied to what I'll call an in-depth case as we think about the issue of wildfire. And so, I'll try to use some of the concepts that we're working with here on complexity, add in urgency and equity, stick to our interest in data and information technology, and try to bring those together. But I do hope that folks will be able to take some of the concepts and some of the tools that have been mentioned so far and think about them in the context of the slides I'm about to share with you. So, I'm adding in urgency and equity. I think Taka actually also alluded to equity and socioeconomic variables in his presentation. In the context of wildfire, what we're talking about here is destructive wildfire in the US. The problem is urgent. If you pick up the newspaper, or listen to the radio, or watch the news, hardly a day goes by during the fire season, so to speak -- and the fire season now almost seems like it's 365 days a year -- without it coming up. And the issue of destructive wildfire and the impact on ecosystems, on people's lives, on property, on public health has been getting worse. And so, this trend line shows you that only over the past few years has it been the case that as many as 10 million or more acres of land have been destroyed by wildfire in a year -- and that's since 1960, as far back as we measure this. The instances when that's happened have just been in the last few years. Right now, in 2021, we have something close to about six million acres that have been destroyed by erratic wildfire. We'll see if it gets to 10 -- I hope it doesn't -- but it's been another bad fire season. And this is data from part way into the summer, suggesting that 2021 isn't really much better than 2020. And 2020 was a bad year too. We have many more wildfires to deal with this year than we did last year. They aren't necessarily as big, although there are some huge wildfires that we're still battling in California and Oregon. The issue of complexity to think about, and what I'm getting at here now, is really at the landscape scale, so to speak: the complexity that creates a wicked problem in the form of an individual fire or set of fires, which then drives the larger problem of how we come up with the right way to think about the risks from wildfire and address those risks as a wicked problem. We have to really turn back the clock to see some of the origins. The causal factors include a strategy developed by the first Director of the Forest Service, Gifford Pinchot, who was not of the mind that wildfire could be used in a constructive way to manage forests, particularly fire-adapted forests.
The idea that emerged pretty quickly after some very bad and deadly erratic wildfires in the mountain west in the 1920s and 30s was, well, let's extinguish those fires as quickly as possible. And the 10 a.m. strategy basically dictated that if you detect a wildfire, it needs to be extinguished by 10 a.m. the next morning. And that policy really remained in place until around 1970. What that does is it leads to the build-up of forest fuels on the ground. Now, when you add in climate change and the various factors here -- and folks, I want you to think about Chris Koliba's presentation when he talked about things like complex system modeling. You have, for example, hotter summers and warmer winter temperatures; if you don't get a deep freeze, then maybe that's enabling forest pathogens and pests to have longer, greater opportunities to propagate -- more tree death, and more build-up of forest fuels, magnifying the problem of the build-up of those fuels. The hot summers themselves can lead to more tree death because photosynthesis is shutting down; trees are dying. And then, as you probably well know, the southwest and in particular the Pacific coastal states are experiencing what would otherwise be cyclical drought becoming intensified by climate change. So, there are hot summer temperatures, warmer winters, intensification of that drought, yet more tree death -- all of those issues, then, are drivers. But for any particular wildfire incident, maybe only some of these variables matter. Then you've got the issue of lightning strikes. It's not clear to what extent climate change and lightning strikes are directly connected to one another. Is there a positive correlation? Well, to the extent that maybe there are more electrical storms generated in the context of warmer summer temperatures and moisture in the air, then perhaps there is a correlation, but the jury is still out. But in any case, there's no question that lightning strikes are a major contributor to destructive wildfire. And so, that pattern and that relationship between, let's say, tree death, build-up of the forest fuels, and the ignition source coming directly from lightning strikes is an issue. But so are human ignition sources. Consider, for example, the Camp Fire, the hugely catastrophic fire from just a few years ago in California that destroyed Paradise, California. The proximate source of that was a human ignition source: faulty and completely amortized equipment owned by Pacific Gas and Electric. As you may recall, Pacific Gas and Electric ultimately pled guilty to something like 86 or 87 counts of involuntary manslaughter and, as part of their bankruptcy, set up a fund amounting to tens of billions of dollars to deal with claims from tens of thousands of people whose homes were destroyed. But here's a case where it's a human ignition source. Then, more recently, you may recall this past summer the heat dome that formed over the Pacific Northwest, stretching into Canada and British Columbia. The town of Lytton, British Columbia, was completely erased from the map, and we, at this time, are not entirely sure what the proximate source of the ignition is in that case. Could it have been a spark from a train track? Right now there are communities that are suing Canadian Pacific and Canadian National Railways, pointing the blame at them.
But this combination, then, of the human ignition source and the other conditions, when we drill down to a sample size of one -- which is what we need to think about when it comes to addressing a particular wildfire -- really matters a great deal. And that only becomes more complex still when we think about, again, that landscape scale. Issues like: what's the slope of the terrain? What kind of vegetation type are we dealing with? Do we have a condition where there are spot fires that might merge with the main fire? Is there a steady wind? What direction is the wind coming from? What about gusts of wind? Are those gusts running perpendicular to the ridge line, which can definitely magnify a wildfire? What effect will all this have on efforts to treat the fire and address it -- for example, using fire retardant that's being dropped by airborne vessels, jets? Airborne embers may be a mile or more ahead of the main fire. What do we do about trying to predict where those land, and to what extent do built structures have the features in place to mitigate the risks from those embers? Was the weather properly forecast? And as everybody knows, the further out we forecast -- in some cases, one or two days -- the bands of uncertainty are that much bigger for us to address. In the case of the Bootleg Fire in Oregon, we had the formation of pyrocumulonimbus clouds, which create all kinds of remarkable and complex uncertainties when it comes to addressing the fire on the ground, including the probability of more lightning strikes, trying to understand the bands of precipitation, and how that's going to change the direction of the fire, et cetera. So, these very local and contextual factors add some complexities to the way that people on the ground are going to respond. The equity issues that I shared with David and others when we exchanged some read-ahead materials have to do with the simple fact that there are more people in the way of destructive wildfire than ever before. And that's part of some of these complex factors like climate change and drought. And so, with that, we need to think about the capability or the capacity of people to actually make the interventions that they need to make in order to harden their homes, to make their homes and the landscape around their homes fireproof. So, some of the work that we're doing has to do with these equity issues. We're trying to get a handle on some of the data with respect to homeowners who are in the way of destructive wildfire and who are in a region where the median household income, maybe, is less than, say, that of somebody in Malibu with a second home, or somebody in the Bay Area. We need to really think about this in terms of equity issues. And among other things, recognize that increasingly we have more homeowners policies not being renewed or being canceled. So, there's the cost of trying to find replacement household insurance, home insurance, combined with the expectation that the insurer is going to require you to make some mitigation measures, and those are costly. Well, if I'm in a place like, let's say, Montana, Texas, Oklahoma -- which we haven't really spent as much time talking about when it comes to wildfire risk as we have places like California, Oregon, Washington, and increasingly Idaho -- we need to really think about that. So, the equity issues matter in ways that add to the issues of complexity but take us away from some of the typical things that we think about as auditors when we focus on efficacy or efficiency.
We also need to think about equity. So, in the article I sent, I talk about programs that are in place that really require multiple actors to enable households and folks that don't have the resources on their own to make these mitigation measures and harden their homes. And in this case, this image is intentionally trying to take some of the complexity out of the conundrum of how we deal with complexity, by providing people on the ground just enough information that they can act on it. So, if somebody were to say to me, "Hey, I'm a homeowner, or I'm in a neighborhood where I want to take advantage of the Idaho Firewise grants," who tends to be involved in putting together those grants, managing them, cooperating and collaborating with one another to literally help me in my neighborhood? This image is trying to give a sense of the grant activity that's taken place over the past few years and the types of collaborations that are in place. This would be in contrast to what is otherwise, I think, a terrific approach that we have. And I'm really thinking about what David Trimble just said about data and how seductive it can be. We do have the power, for example, with network analysis to make very nuanced, granular, and complex maps of the way that organizations, actors, and interest groups engage with one another. But for someone on the ground, the question is: how do I make sense of this? In this case, and I don't think it's the intention of these particular authors, I probably would be reducing out some of those network connections, links, and nodes to give something that's going to be a bit more user friendly for those actors on the ground -- those policy actors and those end users who are making an intervention. Yes, we can map all that complexity, but you don't necessarily need all that data. So, the closing thoughts that I have get to these questions of how much we need to know in order to be effective. Both on the diagnostic end of things, with understanding risk pathways and estimating risks, but also with regard to the prescriptions -- and that has to do, then, with developing, applying, and evaluating (including auditing, the policy appraisal work GAO does) institutional responses. When issues are urgent, like, let's say, wildfire, I think we do have to be sensitive to this notion of diminishing returns to scale. So, there may be particular tools that we use, particular technologies, that are very powerful, but in an urgent situation we need to ask: do we need to be perfectly accurate and precise, or do we move forward with what we have, particularly knowing that we need to translate this for end users on the ground? This is just a little caveat. But what I show you in these two images here -- the top image is, if you will, an old-fashioned fire lookout. This is a gentleman who is in a fire tower in one of the national forests. His job is literally to use a pair of binoculars and that little circular map, which was actually invented shortly after the disaster of the Titanic, to try to pinpoint an emergency. He also has a two-way radio. He happens to be a writer, so he has an Olivetti typewriter there. These are old-fashioned technologies, and they could be replaced by much more sophisticated technologies -- and, in fact, in some cases they are, these fire lookouts, these fire spotters. In the second image you see an unmanned solar drone, which is a remarkable breakthrough for the purpose of spotting wildfire.
And in fact, it's probably a little more accurate than an otherwise skillful human fire spotter up in a fire tower. However, the drone costs about $160,000 a pop. So, there are some issues there to think about in terms of cost, adoption of technology, et cetera. So, for us, we do need to think about: are we balancing urgency with accuracy and precision? There's no argument with the value of being accurate and precise -- for example, in trying to identify a wildfire before it gets out of control -- while making sure that our prescriptions are translatable to people on the ground. And then, there's this issue of equity. Those are the topics I'm trying to get at in this presentation. You know, David just provided a perspective from his favorite philosopher, Spinoza. I'm not going to suggest that Donald Rumsfeld is my favorite philosopher, but in developing these remarks I recalled his own comments, made somewhat defensively, when he said that you go to war with the army that you have and not the army that you might want. It's just something we need to think about in the context of urgent problems. And we need to think about that when we're auditing and appraising policy responses, and aggregating all of those individual fires and the responses to them, to make sure that we're capturing each one of these very granular, somewhat idiosyncratic dimensions of the specific context of these fires. So, thank you. >> David Dornisch: I wanted to mention one thing before we move to the next speaker. To the audience, if you have comments or questions, please feel free to start sending them along to us and we will process them and try to get to them during the discussion time. Thanks very much. Our next speaker is Chris Currie. He's a Director in the US Government Accountability Office's Homeland Security and Justice Team. He is responsible for leading GAO's work on disaster preparedness, response, and recovery, as well as DHS management-related issues. Chris has been at GAO since 2002 and has led numerous reviews of homeland security issues. He holds an MA in Public Administration from Georgia State University and a BA in History from the University of Georgia. Chris. >> Chris Currie: Thank you, David. And like everyone else has said, it's just an honor to be here with so many smart people talking about interesting things. It was really interesting to hear Professor Auer's presentation on wildfires. That's an issue we focus on a lot now, and it's great to follow a fellow Bulldog. So, I'm going to talk a little bit today -- and we can go ahead and go to the next slide, please. We're going to talk a little bit today about some of the areas that we work in in the homeland security arena. And specifically, these all fall under what I would call the national preparedness umbrella: we're preparing for anything. But I'm going to talk about biodefense, national preparedness, and disaster recovery. And it's interesting because, first of all, I absolutely love working in this area. It's so fascinating, and homeland security is so fascinating. What I'd like to do today is just dive into some of these areas, but also provide some specific examples of the complexity that we're talking about today. And then, more so, talk a little bit about why the solutions are even more challenging than identifying the problems themselves. And we've been thinking about this a lot over the last few years, and I can't really take credit. You know, I get to work with some incredibly smart, thoughtful people at GAO.
And for a while now, doing work in this area, we kept saying, you know, all of these issues are really big, they're complicated, they're expensive, they're fragmented, and they also occur at every level of government. So, at every level, it's difficult and it's complicated. And another common theme was that there's an expectation -- in the private sector, from our citizens, and from state and local governments -- that the federal government is going to do more, and more, and more to address these issues, because increasingly it's beyond the capacity of anyone but the federal government to address them. So, these are things we've been noticing for some time in this area. And, you know, putting together this presentation on the topic I think solidified it for me. So, if we go to the next slide, I'll talk about biodefense. First of all, and this is going to hit home for people right now because of the pandemic, we've been focused on this for the better part of 15 years, worried about how challenging an issue this is to address -- because of exactly what we just saw with the pandemic. For years we've been talking about how this is both complicated and fragmented. It occurs across federal departments. No one department really had responsibility for biodefense, or pandemic planning, and things like that. Also, there was a vacuum of information and relationships between the feds and state, local, and tribal governments as well. And of course, as we saw in the pandemic, there were some breakdowns in cooperation and coordination between government and the private sector -- in this case, the health care industry. And, of course, we have the international coordination and collaboration piece of this too. But you know, one of the things this shows, and you're going to see a common theme in these areas, is that these issues are no longer up to any one department to be responsible for and to address. And it makes the solutions very, very challenging. In this biodefense arena, you have numerous federal agencies, as the graphic shows, involved. They all do different things, they all have different responsibilities, and the solutions are tough. Over the years, we've been advocating for a strong strategic approach and strategy at the federal level to guide these efforts. I think everybody got that when we said it, but it's very, very difficult to actually achieve, because departments don't necessarily have to work together. They don't share resources well. And also, frankly, it cuts across different congressional committees of jurisdiction too. So, legislatively, it's really hard to change the status quo and the way systems work, because it crosses so many different departments and areas. Next slide, please. So, some of our findings in this area are what I talked about. What we're starting to see in these big, difficult, cross-departmental areas is that no one agency can address these issues. So, when we're making recommendations, we have to think through the fact that we can't just tell the Department of Homeland Security or the Department of Health and Human Services to take an action without thinking about how they're going to work together in the broader context of the overall federal approach to this.
So, many of our recommendations and policy findings are very cross-departmental and cross-governmental, which is the right thing to do, but it makes implementation very difficult. I think agencies often get kind of frustrated with those types of recommendations, because they're not easy to address and they take a long time to implement. But what we're seeing is that the problems aren't easy, and there are no easy solutions either. Next slide, please. National preparedness is another such area. Shortly after Hurricane Katrina -- that was kind of the wake-up call -- we realized we didn't have an overall federal approach, coordinated with all levels of government, for how we were going to tackle catastrophic events, and because of that, a number of huge gaps were exposed. Since that time the federal government has built a preparedness apparatus and developed doctrine around it and around how coordination is supposed to work. But again, it's a very difficult challenge to address, because you have 14 different federal departments involved in some way, and they all have to coordinate with state and local governments. It's very hard to assess capabilities across all those areas, and it's even harder, once you do assess them, to actually prioritize resources to address them. Because again, different levels of government can't tell each other what to do, and different departments can't tell each other what to do. So, a number of our recommendations in this area, to develop those coordinating mechanisms, have actually been directed at coordinating bodies that can work across departments to implement them. But again, they're very hard to implement. They often take a long period of time because they require extensive coordination at the federal level, and it's also very difficult to get action without something really bad happening first. I think the pandemic is a perfect example of that. Unfortunately, a number of the gaps that were identified in the pandemic were things that had long been identified in prior exercises and after-action reports from prior events. It's just very difficult, if not impossible, to address them or provide the resources to address them until something really bad happens. Next slide, please. Disaster recovery is another area where we've seen this. Increasingly over the last 15 years, more and more federal agencies are providing funding and assistance to citizens, to state and local governments, and to tribes. As the slide says, since 2005 the federal government has provided about half a trillion dollars in disaster assistance. This comes from 15 different federal agencies. None of these programs were ever designed to work in concert in a coordinated way, and so it's a very fragmented approach. You can imagine, if you're a state or local community, you're getting all this money, or these opportunities to get money, from different federal agencies. In your view, it's all the federal government, but they all have different rules, different requirements, and different time frames. That makes it very, very difficult to coordinate community recovery, and even harder to assess the effectiveness of that recovery. So, again, something we've been grappling with in this area is: what do we do about this? Do we make recommendations to specific departments?
We do sometimes, if something can be fixed within a certain program. But increasingly, beyond recommendations to specific programs, we're also making recommendations for departments to work together to address these challenging issues. And I think the discussion about equity that Professor Auer raised is a great and timely one. This is something we're dealing with in the disaster area too: how are these programs being administered through the lens of equity, to help the communities that need it most? You can do that at a programmatic level, and that's difficult enough. But how do you do it across a suite of programs that involves multiple departments and multiple programs that all work very differently? It's a huge challenge and something we're going to be grappling with more and more. So, somebody was talking about 21st-century policy challenges; I think trying to address these huge cross-cutting issues is increasingly going to be the thing that defines our work for a long time. And then, once we develop potential solutions, there's going to have to be very active and thoughtful monitoring over time to see if they're effective, because they're going to take a long time to implement and there are no quick and easy solutions to these kinds of issues. That's my presentation. I look forward to the Q&A later in the discussion. >> David Dornisch: Thank you, Chris, very much. Very interesting presentation. Our next speaker is Brian Gerber. Brian Gerber is an Associate Professor in the College of Public Service and Community Solutions at Arizona State University. He's the Academic Director of the Master of Arts in Emergency Management and Homeland Security, and Co-director of the ASU Center for Emergency Management and Homeland Security. His research interests include disaster policy and management, homeland security policy, and environmental regulation. He has led emergency exercises, participated in catastrophic incident planning projects, and conducted program evaluations and policy analyses on disaster and pandemic preparedness. He has served as a principal investigator and researcher on grants from federal, state, and local institutions such as the National Science Foundation, the US Departments of Education and the Navy, HUD, the Louisiana Department of Health and Hospitals, and the Colorado Department of Public Safety. With that, I'll pass it over to Brian. >> Brian Gerber: Great. Thank you, David. Thanks for organizing this panel, and thanks to all the folks at GAO for the invitation today. Very appreciative of that. I was going to echo David Trimble and say it's a real honor to be with this august group. But then I looked at the calendar and saw it was September, so I got confused and didn't know what he was talking about. Given that you're on mute, I'm going to assume there were no groans and that everybody appreciated that joke. Okay. Moving on. So, I want to talk a little bit about disaster risk reduction and community resilience capacity building. This is really a complement to Chris Currie's presentation, which was excellent, and hopefully I'll fill out some of the points Chris just made with a slightly different perspective. So, next slide, please. In the short amount of time we have, I'll briefly try to touch on three questions.
What are the core challenges in disaster risk reduction and community resilience capacity building? Are these intractable challenges, or are they surmountable? And how might auditing and evaluation practices contribute to practice improvement in the risk reduction and resilience capacity building areas? So, the next slide. Just for context -- I don't think I need to spend much time on this, because I think we're all aware of what 21st-century hazard and disaster trends look like. But just in case, if you haven't spent any time looking at the most recent IPCC assessment report, the sixth one, it's fairly stark. Alarming, I think, is a useful word. We're looking at issues ranging from sea level rise that's going to jeopardize small island nations and probably precipitate withdrawal or retreat from coastal areas in many parts of the globe; massive, massive population displacement across the globe; and intensified extreme weather incidents. Obviously, that's a bleak portrait. And if you want to get even more depressed, Swiss Re, one of the big global reinsurance firms, put out an index, I think most recently last year, noting that because of harms to biodiversity and the systems that support it, fully one-fifth of the globe is looking at full-scale ecosystem collapse this century. So, that's a very difficult challenge. Putting it in research terms, I just want to highlight my friend and colleague, my Co-director at the Center for Emergency Management here at ASU, Melanie Gall. About a decade ago, she and some of her colleagues wrote an excellent paper asking whether hazard loss trends have been unsustainable, and she updated that recently with another paper last year. I mention this because there's literally no way to characterize hazards and hazard risk in the US and globally as anything other than the most complex and serious kind of policy challenge we can confront. I'd also note another paper -- not widely cited, but really useful -- by some researchers, I think primarily from the Netherlands, in a journal called "Earth's Future." It's a paper from 2019 that looks at consecutive disasters, or interdependent risk, and it's a good illustration of how we have to take seriously the notion of interdependent risk. That has implications not only from a policy-making standpoint but also for an agency like GAO and the work that you all do. So, next slide. Sorry, I was trying to click my slides myself. Next slide, please. So, I would define the challenges in the hazard or disaster risk reduction and resilience capacity building area as falling under -- I'm sure you can create your own list, but for me -- three primary challenges. One is policy design itself. I won't spend a lot of time on this because it's probably familiar to everyone in this audience, but we're talking about siloed policy domains, which Chris Currie was just referencing, and narrowly defined administrative systems and authorities. And if you do any work at the state or local level -- which is really where hazard mitigation and hazard risk reduction happen operationally, and which should occupy more attention from a research and evaluation standpoint -- you see a recurring set of issues.
Narrowly defined funding streams, funding shared between federal, state, and local governments, a historic bias toward physical capital investment over human capital investment, and short time horizons for funded projects: all of these work against effective long-term risk reduction or resilience promotion. Then we turn to the issue of problem definition itself. Historically, disaster science research has been organized around discrete hazards. So, you have people with expertise in seismology, or hydrology, et cetera, and you talk about flood risk, or earthquake risk, and those sorts of issues. That is obviously important and useful, but it also limits our ability to reach the broad, integrated perspective that Chris Currie was just driving at. And even in the area of resilience -- resilience is one of those funny terms, because academics talk about resilience, and it's become part of the nomenclature in the disaster management world, including at the federal level. But I can tell you, from having spent the last 15 or 20 years working especially with state and local government, that when you talk to people with operational responsibilities, there's a wide variety of understandings of what resilience might mean. And typically -- and I don't think I'm wrong about this -- if you ask a practitioner what they mean by resilience, they're going to talk about standard risk reduction or mitigation practices, which is not the same thing; they're not quite equivalent. So, my own perspective is that we really have to think about public goods production processes: thinking about disaster risk reduction and resilience capacity as similar to national defense, which is an amalgamated, often intangible good rather than a hard asset. That leads us to the issue of hazards governance and the need to better understand public resource investments and regulation, and to see how those dynamics work -- governance around hazards as a focal point. So, next slide, please. I'm just going to go quickly over the last couple. Are these problems really wicked, or merely surly? I was going to call them potentially irascible problems, but that's too hard to spell. I think there's a lot of cause for optimism, so I'm not sure wicked is the right way to think about this, because at the federal, state, and local levels we do see lots of transboundary or integrated mechanisms, like watershed management models and coastal hazards master plans. If we have time during Q&A, I can tell you about an illustration, the Healthy Homes Program in Iowa, which is a good model. And, as Chris Currie was just driving at, we have examples of jointly funded or mission-funded initiatives that we could talk about during Q&A if people want. But really the challenge is to have these cross-agency, cross-sector, multiagency funding mechanisms span broader time horizons, and to see how that can function in practice. If we get a chance to talk during Q&A, I can elaborate on that. So, last slide. What this all means for GAO, and for auditing and evaluation, has, I think, been outlined in various previous talks. Process compliance is obviously a central issue; auditing has long been organized around process compliance.
But we really have to move more toward auditing and assessment based on a systems-integration focus if we're going to understand how communities build resilience capacity more effectively. And that also includes rethinking the standard performance metrics in this domain. But I think I'm out of time, so I will stop right there, and hopefully we can have a few discussion questions on some of those topics. Thank you. >> David Dornisch: Okay. Thanks very much, Brian. And now, I want to turn it back to Chris Koliba for summing-up comments, and then he'll move us into discussion. >> Chris Koliba: Great. Thank you, David. And thanks to the panelists for a really enlightening, engaging, and coherent, integrated set of talks. I'm just going to go through a few who, what, when, where, why kinds of questions from a thematic standpoint, beginning with the whys. Why even this panel? Why complexity? I think it was David Trimble who mentioned the idea of contextual sophistication and our ever-growing need to better understand that contextual framing -- how context drives wicked problems like wildfires, and biodefense needs, and what have you. Another big why in this space, and a reason I would argue for advancing more computational methodologies to study that contextual sophistication, was raised by our last three speakers: the governance and coordination fragmentation problem -- fragmentation, or silos -- and how best to overcome those. What is persuasive from a systems view to support better systems integration, as Brian just mentioned? And how can we get governance across all scales to become more integrated in addressing these problems? In addition, another why is the need for coupled systems analysis -- biohazards and wildfires, for example. We need the social science community and the policy experts engaged with the hydrologists, and the forest management fields, and what have you. So, there's a need to understand that systems integration through a transdisciplinary lens. And then, what's the what in this? For someone who, in my own work, is an empiricist first and a theoretician second, the what ultimately comes down to data: what kind of data do we have available? Taka and his innovation lab team are really at the vanguard for GAO here, taking very complex data sets and making sense of them. I believe it was back in the Obama Administration that we had the open government initiative and the digitization of government data sets. From a modeling perspective, our models are only as good as the data we have available. So, a question to put out there -- and I'll come back to this -- is about data availability for better understanding this contextual sophistication. Another what has to do with the parameter space and what we set around those parameters. Again, Brian was speaking about our performance indicators and that changing dynamic. Several folks, particularly Matt, really focused in on the equity question and how we begin to fold in these emerging goals -- equity is not a new goal, but it is an emerging priority here. And then, better understanding the urgency around it: how do we become more time sensitive, and how do we develop more early warning systems so we can be timely and responsive and ultimately save lives and reduce hazards? And then, who?
Who's involved in this enterprise? This also begs the question of communication, right? GAO generates reports and sends them to Congress; Congress filters that back to the agencies, et cetera, et cetera. I'm sure GAO also shares the reports with the agencies. But we need to think in a more sophisticated way about end users. Thinking again of Taka's models: who's using these outputs, and how can we expand the domains of who uses them beyond the narrower band? This may be one way we can overcome governance silos and fragmentation -- by really thinking about end use, data uses, data visualizations, and the cultivation of audience, and how data can actually be used to help cultivate new audiences and get people's attention. The other piece of who goes back to this transdisciplinarity: the idea of team science, and really harnessing teams of experts around, again, a better understanding of contextual sophistication. And then another who is the multilevel governance question -- the polycentric governance question that has been dominating my field of public administration, and Matt's field, for, I would say, going on two, two-and-a-half decades: how do we overcome that governance fragmentation? How do we better design governance arrangements to serve our needs? And lastly, the how question. This gets back to the framing I posed at the beginning, which is: how do we incorporate and utilize new methodologies for studying these wicked problems, to our benefit? I've really evolved my own work in this space toward thinking like a modeler. David Trimble mentioned the frameworks, the risk assessment framework. Researchers really appreciate those system frameworks as we structure model dynamics -- the stock and flow visuals that some of you may be coming up with. We're paying attention, and what we're trying to do is operationalize those, and then incorporate emerging goals like equity and resiliency. The last thing I want to observe -- and this is a question I'll then throw to the rest of the panel -- you know that metaphor where we've lost our car keys in the dark and we start searching for them under the lamplight? In this metaphor, the lamplight is our standard methods for inquiry and responsive analysis. How do we expand that purview? And are data science and AI a vehicle for helping us expand our space, our strategy space, to better understand? So, David, I believe you want me to kick off a question to the panel, and I think this is a provoking one: what scenarios would the panelists like to see played out and better understood, but have been afraid even to sketch out and analyze, because we felt constrained by our methods or by the availability of data? What questions and possible scenarios are on our wish list to address the problems in our space, and to what extent? I think collectively we can begin to think about how to actually envision ways of running those scenarios in some sort of simulated space, or with some sort of machine learning approach if possible, to address some of the depressing problems and challenging solutions that so many of our panelists mentioned.
So, again: are you feeling constrained in your expertise domain by our methods, or are we feeling pretty good about where we're going with this? To what extent are we interested in addressing some of these questions through new machine learning, data science, or simulation approaches? That's my opening question. And then, David, I'll let you manage the flow -- I have a few more, but I also see a bunch of questions coming through the chat. >> David Dornisch: Sounds good. Thank you very much for that, Chris. Why don't we just see if the panelists have any responses to Chris's question? Just unmute yourselves and go ahead if you have some ideas. >> David Trimble: Yeah, this is David. I mean, that's sort of like asking for a Christmas wish list, right? There are lots of solutions I would love to have. The one that has come up in the past, when we've looked at cleanup decisions both in the EPA space and in the DOE nuclear cleanup space, is: if you were omniscient, or whatever, and actually had the data, you could figure out where the best return on investment is for, say, a $100 million cleanup budget, or $50 million, right? You could prioritize the nation's cleanup activities based on return on investment. Not that that should necessarily dictate what the solution is, as much as it would illustrate that the choices we're making that are at odds with it are being made to further other values. Sometimes it's just that we didn't know we were doing that, but sometimes it's because we have to maximize other equities -- between states, or certain communities, or other legal commitments. I think sometimes the value of that kind of analysis is just to shed light on the implicit trade-offs that the system and the process are making -- trade-offs we don't always acknowledge or talk about but that are still legitimate, and people need to recognize them. >> Chris Currie: This is the other Chris, Chris Currie. You know, it's interesting. In our area, I don't know that it's so much a lack of analysis or data. If I could wish for anything, it would be that we could somehow change human nature about being proactive, or preventive, about things we know are bad or are going to happen. And I'll give you a great example. It's common in the emergency management and preparedness world to exercise all the time. If you go to any state, local, or federal emergency management department, they're always exercising things, and they do a phenomenal job of it. These exercises spit out tons of data, and gaps, and lessons learned. And then everyone goes back and says, well, that was just a great exercise -- all right, let's go back to our regular jobs. Nothing gets done with the exercise results. We see this over and over again: the gaps that are identified don't get closed. Some of the reasons are that nobody's responsible for centrally following up, nobody has the authority to tell anybody else what to do to follow up, and they don't have the resources to follow up. Because, think about it: if you're a county manager and you go to your county commission and say, "We just found out we have this huge gap, and it's going to cost us a million dollars to fix it if this happens in the future."
And they say, "Well, that sounds great for the future, but that's the future." So, I think that's the biggest challenge I see in some of these issues: it's not a lack of information or awareness that these things are there and are going to happen. >> David Dornisch: Brian. >> Brian Gerber: Yeah, if I could jump in. So, I agree, Chris Currie -- I've been in lots of those exercises, and lots of after-action reviews and planning meetings, and I agree with that completely. But I would say two things. One, from an analytics standpoint, consider the fact that, whether for a real-world incident or an exercise scenario, we don't understand very well how discrete administrative systems interact. We don't understand very well how the government, private, and nonprofit sectors interact in terms of service delivery, both in response and in long-term recovery. We have almost no comprehensive, strategically integrated risk reduction approaches; everything's siloed, as I was [inaudible] saying. To Chris Koliba's question: I think his summary touched on some of the key issues, like better systems analysis. Simulation is a huge tool that I don't think the disaster science community has used well enough for understanding how governance practices and administrative systems work, because historically a lot of disaster science has been technically focused on hazards rather than on governance aspects, which is what I was trying to drive at. And I think the broader solution has to be rethinking authorizing legislation and rethinking funding strategies, as I was alluding to. FEMA's BRIC approach is a good step forward, but it needs to go much, much further. We have to change time horizons. The management models that were referenced, some of them on a regional scale -- regional scale is absolutely essential, because you can't carve things up along arbitrary, say, county jurisdictional boundaries and expect any effectiveness in dealing with something that's inherently transboundary, not just at a national level but at an interstate level, an international level, and within national systems. So, I think it's both improving our understanding of governance practices and rethinking problem scale and jurisdictional responsibility in a different way, and really improving funding and performance assessment. >> David Dornisch: Matt, I just wanted to give you an opportunity if you wanted to add anything or-- >> Matt Auer: Yeah, thanks. I keep coming back to this question of equity, thinking about it in the context of wildfire, but it could be any other natural disaster -- pick your environmental medium or risk. Like Chris, I think, and Brian, I've been involved in everything from war games with the Pentagon dealing with environmental security to after-action deep dives with the Forest Service into particular incidents, and then using those contexts to try to project to similar cases and ask how you would proceed differently. For me, with respect to the equity issue, what I'm really interested in with simulations over the exercises, for Chris's [inaudible], is this: you recognize that with major disasters, the folks who really take it on the nose tend to be folks in lower-income household settings.
And if that's an essential part of your normative framework -- in other words, equity and fairness really matter in terms of what we're focusing on here -- then superimpose on that the precautionary principle, imagine doing a set of interventions to promote resilience as we've been talking about, and then run your simulation. Run your wildfire scenario, or a nor'easter, or a Cat 5 hurricane, or whatever, and proceed with attention to the resilience factor. That's where I think the most valuable work can be done in terms of projection and forecasts. But that assumes a priority: that equity really matters. >> David Dornisch: Chris, did you want to follow up here or-- >> Chris Koliba: Yeah, I would just have a quick response to the comments about human agency, right? These are oftentimes the hidden variables in understanding human behavior -- and I go back to Thaler and Sunstein's work on nudging, for example. I think we need to come back to behavioral science as part of this team science approach: how are risk communication and messages received by different segments of the population, right? Take the infection spread models that were predicting disease spread. A critique of them is that they did not necessarily take into consideration the heterogeneity of the human population and how people would actually respond to different risk messaging. I mean, I don't think anyone could have anticipated the politicization of masking and vaccinations to the extent that we have. But we need to start thinking about how human beings respond to policy interventions and policy tools and have that be part of the equation. That may then help to anticipate the response landscape. Similarly, we need to understand institutional responses and predispositions, incorporate those into our frameworks and model thinking in order to overcome and reduce those barriers, and really look at incentives at the institutional scales as well. So, that would just be my point. We're doing a lot more work, I think, in the behavioral sciences, using behavioral experiments in simulated environments -- the scenarios are artificial, but we can see how people would respond to certain kinds of scenarios, take the probability distributions of those behaviors, incorporate them into more sophisticated models, and actually try to get at that stochastic signal of human behavior. I think we're headed in this direction generally as a field, and we need to be thinking about this in terms of our policy and governance design.
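[To make that heterogeneity point concrete, here is a minimal sketch -- purely illustrative, not a model presented at the webinar -- of a toy SIR-style infection model in Python with two behavioral subgroups that respond differently to risk messaging. The group shares, contact reductions, and transmission parameters are all assumed values chosen for illustration.]

```python
# Minimal sketch (illustrative only): an SIR-type compartmental model where
# subgroups respond differently to risk messaging, versus a homogeneous model
# that assumes everyone adopts the average behavior. All parameter values are
# assumptions for illustration, not estimates from the panel or from GAO.

def simulate_sir(groups, beta0, gamma=0.1, days=200, dt=1.0):
    """groups: list of (population_share, contact_reduction) tuples.
    Returns the peak infected fraction over the run."""
    S = [share * 0.999 for share, _ in groups]  # susceptible fraction per group
    I = [share * 0.001 for share, _ in groups]  # infected fraction per group
    peak = 0.0
    for _ in range(days):
        total_I = sum(I)
        new_S, new_I = [], []
        for (_share, reduction), s, i in zip(groups, S, I):
            beta = beta0 * (1.0 - reduction)   # messaging lowers contact rates
            infections = beta * s * total_I * dt  # simplification: reduction
            recoveries = gamma * i * dt           # applies on the contact side
            new_S.append(s - infections)
            new_I.append(i + infections - recoveries)
        S, I = new_S, new_I
        peak = max(peak, sum(I))
    return peak

# Heterogeneous: 60% of the population cuts contacts by 70%, 40% by only 10%.
hetero_peak = simulate_sir([(0.6, 0.7), (0.4, 0.1)], beta0=0.35)

# Homogeneous: everyone assumed to cut contacts by the average, 46%.
homog_peak = simulate_sir([(1.0, 0.46)], beta0=0.35)

print(f"peak infected, heterogeneous: {hetero_peak:.3f}")
print(f"peak infected, homogeneous:   {homog_peak:.3f}")
```

[Even with the same average contact reduction, the two runs trace different epidemic curves -- which is the gap a homogeneous model leaves open.]

>> David Trimble: Hey, just to follow up on something you just said. Is there a body of knowledge on behavioral science applied to, like, community outreach and collaboration? Because, you know, my comment about just having great data and presenting it to people -- if it's opaque, if it's presented at the 11th hour and people are just suspicious at that point, you know, there's a context when you present data, right? How that data message will be received matters, and that's got to be in the context of broader community engagement. In risk-informed decision-making, collaboration's a big piece of that. And I think that's probably one of the biggest challenges DOE faces in its cleanup mission: managing that collaboration element well.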
So, I'd be curious if there are any lessons to be learned in that space. >> Chris Koliba: I'd open it up to the rest of the panel. I could respond to that, but does anyone else want to chime in? If not, I will. >> David Dornisch: Go ahead, Chris, and then I have a question or two from the field -- there's one in particular I want to ask. But please, go ahead. >> Chris Koliba: Okay. So, I think it was Brian who mentioned public goods, right, and common pool resources, and private goods. This is where game theory can come in handy as well, where we look at iterated prisoner's dilemmas and how consensus can be built over time. The long and the short of it is that what matters is generative, repeated interactions with people. Over time, trust begins to emerge, or at least a tit-for-tat sense that we're in this together and need to better coordinate our activities. So how do we, in our governance design, create fora for those kinds of consistent interactions? The big challenge, mentioned several times, is that scenario planning gets all the actors in the room, but it's episodic, right? It's a one-shot prisoner's dilemma game, and they're also gaming it: of course I'm going to collaborate in a scenario, but maybe not in real life. But if you look at -- I'm thinking of Naim Kapucu's work on emergency response and some of his findings about why Florida responds to hurricanes more readily than the Gulf Coast, for example Louisiana and whatnot -- and that may be changing; this was his thinking from five or ten years ago, even longer -- it was the repeated nature, the frequency, of the hurricanes that allowed those pathways to be stimulated. The last thing I want to mention, again in the complexity space, is that emergence and self-organization happen through the diffusion of ideas and the viral nature of information flows that we're seeing all the time with social media. The question for the policy and governance spaces, I think, is that we've got to harness those tools for the greater good. And I think we need to stop worrying so much about the paternalistic nature of this, because if we're not doing it for the greater good, with those public goods in mind, then someone else is going to do it, and we're seeing the end result. So, we need to get into that game with an accountability structure that's democratically anchored and transparent, so that we, as a society, can continue maintaining a level of quality of life and stability for civilization. So anyway, that's my comment.
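[As a concrete illustration of that iterated prisoner's dilemma dynamic -- an editorial sketch, not material from the panel -- a few lines of Python show why cooperation pays only when the interaction repeats. The payoff values are the conventional textbook ones, and the strategies are the standard tit-for-tat and always-defect.]

```python
# Minimal sketch (illustrative only): one-shot vs. iterated prisoner's dilemma
# with tit-for-tat, echoing the point that repeated interactions let
# cooperation emerge. Payoff numbers are conventional textbook values.

PAYOFF = {  # (my_move, their_move) -> my payoff; 'C' cooperate, 'D' defect
    ('C', 'C'): 3, ('C', 'D'): 0,
    ('D', 'C'): 5, ('D', 'D'): 1,
}

def tit_for_tat(their_history):
    """Cooperate first, then mirror the partner's previous move."""
    return 'C' if not their_history else their_history[-1]

def always_defect(their_history):
    return 'D'

def play(strategy_a, strategy_b, rounds):
    """Return total payoffs for two strategies over repeated rounds."""
    a_hist, b_hist = [], []          # each side sees the other's past moves
    a_total = b_total = 0
    for _ in range(rounds):
        a_move = strategy_a(b_hist)
        b_move = strategy_b(a_hist)
        a_total += PAYOFF[(a_move, b_move)]
        b_total += PAYOFF[(b_move, a_move)]
        a_hist.append(a_move)
        b_hist.append(b_move)
    return a_total, b_total

# One-shot: defection wins, like a one-off planning exercise that gets gamed.
print("one-shot, TFT vs defector:  ", play(tit_for_tat, always_defect, 1))

# Iterated: two tit-for-tat players sustain cooperation, while a defector
# paired with tit-for-tat gets punished after round one.
print("100 rounds, TFT vs TFT:     ", play(tit_for_tat, tit_for_tat, 100))
print("100 rounds, TFT vs defector:", play(tit_for_tat, always_defect, 100))
```

[Two tit-for-tat players end up far ahead over repeated rounds, while the one-shot game rewards defection -- the episodic-exercise problem in miniature.]

>> David Dornisch: Nice comments there. I wanted to ask a question here and bring it back to some of the concerns of GAO and, more generally, the auditing community. We got an interesting question: David Trimble gave examples of emerging criteria such as the Green Book, the Cost Estimating Guide, and risk-informed decision-making on the one hand, and Brian Gerber talked about rethinking governance, and thinking about scale and jurisdiction, on the other. So the question we got was, "What should the accountability community be doing to help decision-makers define, break down, and address these complex challenges that we're facing?" And that's really a question for everybody -- not just the GAO people but our external researchers and experts as well. >> Chris Currie: I have an idea. This is Chris Currie.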
So, I think that, you know, we're talking about really big, pie-in-the-sky, theoretical things for most people, including politicians, and I don't think anyone disagrees with anything that's been said today. No one would disagree that we need to do a better job of looking at this from a systems approach; no one's going to say that's a bad idea. I think what's hard is making this tangible for people. I'll use the example of disaster resilience and risk reduction. What we've tried to do is use the numbers to our benefit to push policy change -- by explaining to people the amount of damage they're going to take if they don't do these things: the infrastructure that's going to be impacted, how much that's going to cost, how long it's going to be offline, and what sort of economic damage that's going to do to their community -- and showing them, beyond a shadow of a doubt, that that will happen if this scenario plays out in their community. And, as Chris said, repeated disasters help too. When somebody has to face something every three or four years, like the poor folks in southern Louisiana, things get real and they take action. So I think that's one place where we can use data and information on costs and spending to make these things more real for people, not just theoretical issues. >> David Dornisch: Thanks, Chris. Anybody else have something to add there? >> David Trimble: Yeah, I'll just say real quick that the guides are really what we're trying to do there in terms of helping agencies. Take risk-informed decision-making: for years there were recommendations to DOE to prioritize, to take a risk-based approach, right? But then it was like, well, how do you actually do that in practice? Because there are all sorts of risk-based requirements in NEPA and all the other laws they have to follow. What does it mean in the context of the problems they tackle? And so the framework really is that guide, and I think applying it in the context of our audits brings it to life and provides a roadmap. >> David Dornisch: Matt. >> Matt Auer: I was just going to say, back to the topic of urgency: in the environmental context we've been talking about, with disaster management, urgency is likely to rise to the top in your audits. And I totally agree with Chris -- there is very interesting experimental research with repeated games, and you discover in a prisoner's dilemma that you get cooperation. However, it's important, I think, to layer on top of that the prospect of irreversibilities with some of the topics we're talking about here. So, an interesting trend with wildfire: after two or three fire seasons in a comparatively very wealthy county and region of California, we see an exodus of folks from Sonoma. Sonoma? Heck, I'd like to have a second home in Sonoma, right? It's wine country. Well, the folks who are leaving Sonoma can afford to leave. Who's left? And so, we're back to this equity issue again. I would expect that those communities, over time, are going to become more, if you will, fire-adapted and resilient, but there's a fairness question.
So, I have this sense, moving forward, that it will simply be unavoidable for the audits you perform to address these equity issues. >> David Dornisch: Okay. We are just about at the end here. I want to thank all of you for participating in this really interesting panel. And to the audience: we do have a recording of this that will be available on the internet. I'll be sending out information to the addressees of the original panel invite, and I'll likely also send around the slides. Once again, to all the panelists, thank you so much for your participation. I really enjoyed this panel. Good day to you all and to the audience. >> Thank you. >> Bye. >> David Dornisch: Thank you. >> Brian Gerber: Go Bruins. >> Matt Auer: Thank you. >> David Trimble: Go Cubs. >> Brian Gerber: I had to get that in, David. >> David Trimble: Yeah. I'm going to refer to you forevermore as merely surly. >> Brian Gerber: Did you like the august-September joke? >> David Trimble: I did. I did. I chuckled. >> Brian Gerber: Good. Good. I'm glad. Thanks again, David. That was great. >> David Dornisch: Thank you. >> David Trimble: Yeah. Thanks, David. >> David Dornisch: See you, David, Brian. Thanks. >> Brian Gerber: Bye-bye.