The Federal Government Has Increased Its AI Use. But Is Enough Being Done to Secure Privacy?
The federal government is turning to artificial intelligence (AI) as a tool for creating efficiencies and improving customer service. For example, the IRS is using AI chatbots and voicebots to better answer taxpayers’ questions. And if you’re looking for a government job, the Office of Personnel Management (OPM) is using AI to better connect candidates with employment opportunities that match their skill sets.
While these uses may help the public find answers to questions more quickly, there are concerns about how the federal government’s use of AI could affect people’s privacy.
What are these concerns and what’s being done about them? Today’s WatchBlog post looks at our new report.
What are the risks to privacy when using AI?
The federal government collects a lot of sensitive personal data from people to manage programs that directly interact with the public—everything from Social Security to student loans. This includes information that may be publicly available already, such as your address and phone number. But it also includes information you wouldn’t want misused, like your bank account or tax information.
The customer service applications of AI (chatbots, for example) may not have access to your non-public personal information. But other AI uses may. And this has led to concerns about data breaches. For example, last year, school districts that used AI to monitor school-issued devices for potential threats accidentally revealed the private data of thousands of students to reporters. The breach occurred because the school districts had not adequately protected that data.
How does AI use raise privacy concerns? We gathered experts to hear their concerns about both the risks AI presents and the challenges in addressing them. These experts were from government, industry, and the nonprofit sector. Here’s what they told us:
- AI can make it easier to cross-reference information from multiple datasets, which may reveal sensitive personal information about people that was once anonymous. This can happen even if these datasets don’t explicitly include sensitive information, because AI applications can extrapolate it.
- AI can repurpose data. This makes it possible for government agencies, businesses, and other organizations to use personal data for purposes other than the original intent. For example, businesses could use information from tax returns to market products at specific prices.
- AI can be used, intentionally or unintentionally, to generate false information, such as deepfakes, as well as inaccurate outputs known as hallucinations.
What’s being done to protect privacy, and why is that not enough?
The federal government is aware of the risks AI poses to people's privacy and is taking action. The Office of Management and Budget (OMB) plays an important role in overseeing federal agencies’ use of AI. As part of this effort, OMB has issued guidance that gives agencies some direction in protecting privacy when using AI. But when we looked at this guidance, we found that it doesn’t give agencies enough direction on how to be transparent about their use of AI and sensitive data. For example, the guidance doesn’t explain how agencies should assess privacy risks for AI systems. These assessments help ensure that agencies consider privacy risks when using sensitive data with AI, and they can also be used to provide transparency about how data is used.
The guidance also doesn't identify best practices for addressing AI privacy risks. It also doesn’t identify technology or other tools that can enhance privacy protections when implementing AI. Our report recommends actions OMB could take to address these concerns.
Beyond these risks, action is also needed to address challenges that make protecting privacy more difficult. For example, separating sensitive data from the vast datasets AI relies on, so that the data can be protected, is itself a challenge. Experts also told us that even when the federal government or other organizations have protections in place, they often lack ways to measure how well those protections are working.
As federal agencies and other organizations have increased their use of AI, more work is needed to protect people's sensitive personal information.
Learn more about these issues and our recommendations to OMB on how to address them by reading our new report.
GAO’s fact-based, nonpartisan information helps Congress and federal agencies improve government. The WatchBlog lets us contextualize GAO’s work a little more for the public. Check out more of our posts at GAO.gov/blog.
Got a comment, question? Email us at blog@gao.gov.