As artificial intelligence rapidly evolves, how can the federal government use evidence and data to effectively utilize and oversee this technology?
The field of artificial intelligence (AI) was founded on the idea that algorithms could be developed to simulate human intelligence. AI includes both narrow applications designed to complete specific tasks (like online “chatbots” or virtual assistants) and general systems intended to reason like a human across a range of contexts (such as self-driving cars).
However, AI systems pose unique challenges to accountability, especially as they relate to civil liberties, ethics, and social disparities. While stakeholders are considering using high-level governance principles to ensure that AI technologies can be trusted, there is limited information on how these principles will be implemented and verified.
As AI technologies continue to advance at a rapid pace, federal oversight must evolve alongside them. In our AI accountability framework, we identified key accountability practices, centered on the principles of governance, data, performance, and monitoring, to help federal agencies and others use AI responsibly.
Some questions for policymakers to consider when assessing these technologies include:
- How is the federal government using AI systems? For example, what data and code are used to power these technologies?
- How should AI systems be evaluated? What approaches should auditors take to develop credible assessments?
- What would an evidence-based AI assessment look like?
- What does the future hold for AI oversight?