
Science & Tech Spotlight: Generative AI

GAO-23-106782 Published: Jun 13, 2023. Publicly Released: Jun 13, 2023.

Fast Facts

Generative AI systems—like ChatGPT and Bard—create text, images, audio, video, and other content. This Spotlight examines the technology behind these systems, which are surging in popularity.

These systems are trained to recognize patterns and relationships in massive datasets and can quickly generate content from this data when prompted by a user. These growing capabilities could be used in education, government, medicine, law, and other fields.

But these systems can also generate "hallucinations"—misinformation that seems credible—and can be used to purposefully create false information. Other challenges include oversight and privacy concerns.


Highlights

Why This Matters

Use of generative AI, such as ChatGPT and Bard, has exploded to over 100 million users, driven by enhanced capabilities and user interest. This technology may dramatically increase productivity and transform daily tasks across much of society. But generative AI may also spread disinformation, and it poses substantial risks to national security and other domains.

The Technology

What is it? Generative artificial intelligence (AI) is a technology that can create content, including text, images, audio, or video, when prompted by a user. Generative AI systems create responses using algorithms that are often trained on open-source information, such as text and images from the internet. However, generative AI systems are not cognitive and lack human judgment.

Generative AI has potential applications across a wide range of fields, including education, government, medicine, and law. Using prompts—questions or descriptions entered by a user to generate and refine the results—these systems can quickly write a speech in a particular tone, summarize complex research, or assess legal documents. Generative AI can also create artistic works, such as realistic images for video games, musical compositions, and poetic language, using only text prompts. In addition, it can aid complex design processes, such as designing molecules for new drugs or generating programming code.

How does it work? Generative AI systems learn patterns and relationships from massive amounts of data, which enables them to generate new content that may be similar, but not identical, to the underlying training data. They process and create content using sophisticated machine learning algorithms and statistical models. For example, large language models use training data to learn patterns in written language. Generative AI can then use these models to emulate a human writing style. Generative AI can also learn from many other data types, including programming code, molecular structures, and images.
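
To make this learn-then-generate idea concrete, the toy Python sketch below builds a bigram model: it counts which word follows which in a tiny corpus, then samples new text from those counts. The corpus and names are illustrative assumptions. Real large language models use neural networks trained on vastly more data, but the underlying pattern of learning statistics from data and sampling new content from them is the same.

```python
import random
from collections import defaultdict

# Toy stand-in for "training data"; real systems train on massive datasets.
corpus = (
    "generative ai systems learn patterns from data . "
    "generative ai systems create new content from patterns ."
).split()

# "Training": count which words follow which (a bigram model).
transitions = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    transitions[current_word].append(next_word)

# "Generation": starting from a prompt word, repeatedly sample a plausible
# next word from the learned statistics.
def generate(start: str, length: int = 8) -> str:
    words = [start]
    for _ in range(length):
        candidates = transitions.get(words[-1])
        if not candidates:
            break
        words.append(random.choice(candidates))
    return " ".join(words)

print(generate("generative"))  # similar to, but not identical to, the corpus
```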

The systems generally require a user to submit prompts that guide the generation of new content (see fig. 1). Many iterations may be required to produce the intended result because generative AI is sensitive to the wording of prompts.


Figure 1. Example of a generative AI system creating an image from prompts.
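
The refinement loop the figure illustrates can be sketched in a few lines of Python. The generate_image function below is a hypothetical placeholder, not a real API; in practice it would call whatever text-to-image system is in use.

```python
# Hypothetical placeholder standing in for a call to a text-to-image model;
# this is not a real API.
def generate_image(prompt: str) -> str:
    return f"<image generated from prompt: {prompt!r}>"

# Because generative AI is sensitive to wording, users typically refine a
# prompt over several iterations until the output matches their intent.
prompts = [
    "a cat",
    "a photorealistic cat sitting on a windowsill",
    "a photorealistic cat on a windowsill at sunset, soft warm lighting",
]
for prompt in prompts:
    print(generate_image(prompt))  # inspect each result, then refine
```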

How mature is it? Advanced chatbots, virtual assistants, and language translation tools are mature generative AI systems in widespread use. Improved computing power that can process large amounts of data for training has expanded generative AI capabilities. As of early 2023, emerging generative AI systems have reached more than 100 million users and attracted global attention to their potential applications. For example, a research hospital is piloting a generative AI program to create responses to patient questions and reduce the administrative workload of health care providers. Other companies could adapt pre-trained models to improve communications with customers.
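
As a rough illustration of adapting a pre-trained model, the sketch below drafts a customer-service reply using the open-source Hugging Face transformers library; the library and model are assumptions for illustration, not tools named in this Spotlight. GPT-2 appears only because it is small and freely downloadable, and it is far less capable than the systems discussed above.

```python
# Minimal sketch, assuming the Hugging Face transformers library is
# installed (pip install transformers torch). GPT-2 is an illustrative
# choice; production systems would use far larger, often fine-tuned, models.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Thank you for contacting support. Regarding your billing question,"
result = generator(prompt, max_new_tokens=40, num_return_sequences=1)
print(result[0]["generated_text"])
```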

Opportunities

  • Summarizing information. By rapidly aggregating a wide range of content and simplifying the search process, generative AI speeds access to ideas and knowledge and can help people gather new information more efficiently (see the sketch after this list). For example, researchers can identify a new chemical for a drug based on an AI-generated analysis of established drugs.
  • Enabling automation. Generative AI could help automate a wide variety of administrative or other repetitive tasks. For example, it could be used to draft legal templates, which could then be reviewed and completed by a lawyer. It can also improve customer support by creating more nuanced automated responses to customer inquiries.
  • Improving productivity. Because it is capable of quickly automating a variety of tasks, generative AI has the potential to enhance the productivity of many industries. Multiple studies and working papers have shown generative AI can enhance the speed of administrative tasks and computer programming, although users may need to edit the generated result.
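
As a minimal sketch of the summarization opportunity above, the example below condenses a passage using the open-source Hugging Face transformers library; the library choice and its default summarization model are assumptions for illustration, not tools named in this Spotlight.

```python
# Minimal summarization sketch, assuming the Hugging Face transformers
# library (pip install transformers torch). The default summarization
# model it downloads is an implementation detail.
from transformers import pipeline

summarizer = pipeline("summarization")

document = (
    "Generative AI systems learn patterns and relationships from massive "
    "amounts of data, which enables them to generate new content that may "
    "be similar, but not identical, to the underlying training data. They "
    "process and create content using sophisticated machine learning "
    "algorithms and statistical models."
)
summary = summarizer(document, max_length=40, min_length=10)
print(summary[0]["summary_text"])
```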

Challenges

  • Trust and oversight concerns. In June 2021, GAO identified key practices, such as a commitment to values and principles for responsible use, to help ensure accountability and responsible use of AI across the federal government and other entities. Generative AI systems can respond to harmful instructions, which could increase the speed and scale of real-world harms. For example, generative AI could help produce new chemical warfare compounds. Additionally, generative AI systems share oversight challenges with other AI applications—such as assessing the reliability of the data used to develop the model—because their inputs and operations are not always visible. The White House announced in May 2023 that a working group would provide, among other things, input on how best to ensure that generative AI is developed and deployed equitably, responsibly, and safely. Other agencies, such as the National Institute of Standards and Technology, have also promoted responsible use of generative AI.
  • False information. Generative AI tools may produce “hallucinations”—erroneous responses that seem credible. Hallucinations can occur, for example, when a user requests information that is not in the training data. Additionally, a user could deliberately use AI to quickly create inaccurate or misleading text, enabling the spread of disinformation. For example, generative AI can create phishing e-mails or fake but realistic social media posts with misleading information. Further, bias in the training data can amplify the potential for harm caused by generative AI output.
  • Economic issues. Generative AI systems could be trained on copyrighted, proprietary, or sensitive data, without the owner's or subject's knowledge. There are unresolved questions about how copyright concepts, such as authorship, infringement, and fair use, will apply to content created or used by generative AI.
  • Privacy risks. Specific technical features of generative AI systems may reduce privacy for users, including minors. For example, a generative AI system may be unable to “forget” sensitive information that a user wishes to delete. Additionally, if a user enters personally identifiable information, that data could be used indefinitely for other purposes without the user's knowledge. Section 230 of the Communications Decency Act of 1996 shields online service providers and users from legal liability for hosting or sharing third-party content, but it is unclear how this statute might apply to content-generating AI systems and their creators.
  • National security risks. Information about how and when some generative AI systems retain and use information entered into them is sparse or unavailable to many users, which poses risks for using these tools. For example, if a user enters sensitive information into a prompt, it could be stored and misused or aggregated with other information in the future. Furthermore, when systems are publicly and internationally accessible, they could provide benefits to an adversary. For example, generative AI could help rewrite code, making it harder to attribute cyberattacks. It may also generate code for more effective cyberattacks, even by attackers with limited programming skills.

Policy Context and Questions

  • What AI guidelines can best ensure generative AI systems are used responsibly, and are generative AI systems following existing guidance?
  • What standards could be used or developed to evaluate the methods and materials used to train generative AI models and ensure fairness and accuracy of their responses for different use cases?
  • How can public, private, academic, and nonprofit organizations strengthen their workforce to ensure responsible and accountable use of generative AI technologies?
  • What privacy laws can be used or developed to protect sensitive information used or collected by generative AI systems, including information provided by minors?

For more information, contact Brian Bothwell at (202) 512-6888 or bothwellb@gao.gov and Kevin Walsh at (202) 512-6151 or walshk@gao.gov.


Topics

Artificial intelligence, Automated systems, Data integrity, Emerging technologies, National security, Privacy, Privacy protection, Productivity in government, Risk management, Science and technology, Science and technology issues