
Artificial Intelligence: Fully Implementing Key Practices Could Help DHS Ensure Responsible Use for Cybersecurity

GAO-24-106246 Published: Feb 07, 2024. Publicly Released: Feb 07, 2024.

Fast Facts

While responsible use of artificial intelligence can improve security, irresponsible use may pose risks. We looked at what the Department of Homeland Security is doing to ensure responsible use of its AI for cybersecurity.

DHS created a public inventory of "AI use cases"—how it uses AI. But DHS doesn't verify whether each case is correctly characterized as AI. Of the 2 cybersecurity cases in the inventory, we found 1 isn't AI.

DHS also hasn't ensured that the data used to develop the AI use case we assessed is reliable—which is an accountability practice in our AI framework.

Our recommendations address these and other issues.



Highlights

What GAO Found

To promote transparency and inform the public about how artificial intelligence (AI) is being used, federal agencies are required by Executive Order No. 13960 to maintain an inventory of AI use cases. The Department of Homeland Security (DHS) has established such an inventory, which is posted on the Department's website.

However, DHS's inventory of AI systems for cybersecurity is not accurate. Specifically, the inventory identified two AI cybersecurity use cases, but officials told us one of these two was incorrectly characterized as AI. Although DHS has a process to review use cases before they are added to the AI inventory, the agency acknowledges that it does not confirm whether uses are correctly characterized as AI. Until it expands its process to include such determinations, DHS will be unable to ensure accurate use case reporting.

DHS has implemented some, but not all, of the key practices from GAO's AI Accountability Framework for managing and overseeing its use of AI for cybersecurity. GAO assessed the one remaining cybersecurity use case, known as Automated Personally Identifiable Information (PII) Detection, against 11 AI practices selected from the Framework (see figure).

Status of the Department of Homeland Security's Implementation of Selected Key Practices to Manage and Oversee Artificial Intelligence for Cybersecurity


GAO found that DHS fully implemented four of the 11 key practices and implemented five others to varying degrees in the areas of governance, performance, and monitoring. According to officials, DHS did not implement the remaining two practices: documenting the sources and origins of the data used to develop the PII detection capability, and assessing the reliability of those data. GAO's AI Framework calls for management to provide reasonable assurance of the quality, reliability, and representativeness of the data used in the application, from its development through operation and maintenance. Addressing data sources and reliability is essential to model accuracy. Fully implementing the key practices can help DHS ensure accountable and responsible use of AI.
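To make the data-reliability practice concrete, the sketch below shows the kind of automated checks (completeness, duplicates, label balance) an agency could run and document over a training dataset. The record format, field names, and checks are hypothetical illustrations, not drawn from DHS's actual pipeline or from GAO's Framework text.

```python
from collections import Counter

def check_training_data(records: list[dict]) -> dict:
    """Run basic data-reliability checks over a labeled training set.

    Checks performed (all hypothetical examples of the practice):
    - completeness: records missing the "text" or "label" field
    - duplicates: identical "text" values appearing more than once
    - label balance: how many records carry each label
    """
    total = len(records)
    missing = sum(
        1 for r in records if not r.get("text") or r.get("label") is None
    )
    texts = [r.get("text") for r in records]
    duplicates = total - len(set(texts))
    label_counts = Counter(r.get("label") for r in records)
    return {
        "total": total,
        "missing_fields": missing,
        "duplicates": duplicates,
        "label_counts": dict(label_counts),
    }
```

Persisting a report like this alongside the model's documentation is one way to evidence the "reasonable assurance" of data quality that the Framework calls for.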

Why GAO Did This Study

Executive Order No. 14110, issued in October 2023, notes that while responsible AI use has the potential to help solve urgent challenges and make the world more secure, irresponsible use could exacerbate societal harms and pose risks to national security. Consistent with requirements of Executive Order No. 13960, issued in 2020, DHS has maintained an inventory of its AI use cases since 2022.

This report examines the extent to which DHS (1) verified the accuracy of its inventory of AI systems for cybersecurity and (2) incorporated selected practices from GAO's AI Accountability Framework to manage and oversee its use of AI for cybersecurity.

GAO reviewed relevant laws, OMB guidance, and agency documents, and interviewed DHS officials. GAO applied 11 key practices from the Framework to DHS's AI cybersecurity use case—Automated PII Detection. DHS uses this tool to prevent unnecessary sharing of PII. GAO selected the 11 key practices to reflect all four Framework principles, align with early stages of AI adoption, and be highly relevant to the specific use case.
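For readers unfamiliar with this class of tool, the sketch below illustrates what automated PII detection means at its simplest: scanning text for patterns such as Social Security numbers, emails, and phone numbers before it is shared. The pattern set and function are purely illustrative assumptions; DHS's actual component is not described at this level of detail in the report and is certainly more sophisticated than regular expressions.

```python
import re

# Illustrative patterns only; real PII detection requires far more
# robust methods. These three categories are hypothetical examples.
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def detect_pii(text: str) -> dict[str, list[str]]:
    """Return the matches found in text, keyed by PII category."""
    return {
        name: pattern.findall(text)
        for name, pattern in PII_PATTERNS.items()
        if pattern.findall(text)
    }

report = "Contact jane.doe@example.com or 555-123-4567; SSN 123-45-6789."
print(detect_pii(report))
```

A tool like this could flag documents for review before sharing, which is the use the report attributes to DHS: preventing unnecessary sharing of PII.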

Recommendations

GAO is making eight recommendations to DHS, including that it (1) expand its review process to include steps to verify the accuracy of its AI inventory submissions, and (2) fully implement key AI Framework practices such as documenting sources and ensuring the reliability of the data used. DHS concurred with the eight recommendations.

Recommendations for Executive Action

Agency affected (all recommendations): Department of Homeland Security. Status of all eight recommendations: Open. When GAO confirms what actions the agency has taken in response to a recommendation, it will provide updated information.

Recommendation 1: The Chief Technology Officer should expand its review process to include steps to verify the accuracy of its AI inventory submissions.

Recommendation 2: The Director of CISA should develop metrics to consistently measure progress toward all stated goals and objectives for Automated PII Detection.

Recommendation 3: The Director of CISA should clearly define the roles and responsibilities and delegation of authority of all relevant stakeholders involved in managing and overseeing the implementation of the Automated PII Detection component to ensure effective operations and sustained oversight.

Recommendation 4: The Director of CISA should document the sources and origins of data used to develop the Automated PII Detection component.

Recommendation 5: The Director of CISA should take steps to assess and document the reliability of data used to enhance the representativeness, quality, and accuracy of the Automated PII Detection component.

Recommendation 6: The Director of CISA should document its process for optimizing the elements used within the Automated PII Detection component.

Recommendation 7: The Director of CISA should document its methods for testing performance, including limitations and corrective actions taken to minimize undesired effects of the Automated PII Detection component, to ensure transparency about the system's performance.

Recommendation 8: The Director of CISA should establish specific procedures and frequencies to monitor the Automated PII Detection component to ensure it performs as intended.

Topics

Artificial intelligence, Best practices, Chief information officers, Compliance oversight, Critical infrastructure protection, Cybersecurity, Cyberspace threats, Federal agencies, Homeland security, Inventory, Personally identifiable information, Privacy