Aviation Security: TSA Should Limit Future Funding for Behavior Detection Activities
What GAO Found
Available evidence does not support the conclusion that behavioral indicators, which are used in the Transportation Security Administration's (TSA) Screening of Passengers by Observation Techniques (SPOT) program, can be used to identify persons who may pose a risk to aviation security. GAO reviewed four meta-analyses (reviews that analyze other studies and synthesize their findings) covering over 400 studies from the past 60 years and found that the human ability to accurately identify deceptive behavior based on behavioral indicators is the same as or slightly better than chance. Further, the Department of Homeland Security's (DHS) April 2011 study intended to validate SPOT's behavioral indicators did not demonstrate their effectiveness because of study limitations, including the use of unreliable data. Twenty-one of the 25 behavior detection officers (BDO) GAO interviewed at four airports said that some behavioral indicators are subjective. TSA officials agreed and said they are working to better define them. GAO analyzed data from fiscal years 2011 and 2012 on the rates at which BDOs referred passengers for additional screening based on behavioral indicators and found that referral rates varied significantly across airports, raising questions about BDOs' use of the indicators. To help ensure consistency, TSA officials said they deployed teams nationally in August 2013 to verify compliance with SPOT procedures. However, these teams are not designed to help ensure that BDOs interpret SPOT indicators consistently.
TSA has limited information to evaluate SPOT's effectiveness, but plans to collect additional performance data. The April 2011 study found that SPOT was more likely to correctly identify outcomes representing a high-risk passenger--such as possession of a fraudulent document--than a random selection process was. However, the study results are inconclusive because of limitations in the design and data collection and cannot be used to demonstrate SPOT's effectiveness. For example, TSA collected the study data unevenly. In December 2009, TSA began collecting data from 24 airports; it added 1 airport after 3 months and 18 more airports more than 7 months later, when it determined that the airports were not collecting enough data to reach the study's required sample size. Because aviation activity and passenger demographics are not constant throughout the year, this uneven data collection may have confounded the comparison of random versus SPOT selection methods. Further, BDOs knew whether the passengers they screened were selected using the random selection protocol or SPOT procedures, a fact that may have introduced bias into the study. TSA completed a performance metrics plan in November 2012 that details the performance measures TSA needs to determine whether its behavior detection activities are effective, as GAO recommended in May 2010. However, the plan notes that it will be 3 years before TSA can begin to report on the effectiveness of these activities. Until TSA can provide scientifically validated evidence demonstrating that behavioral indicators can be used to identify passengers who may pose a threat to aviation security, the agency risks funding activities that have not been determined to be effective. This is a public version of a sensitive report that GAO issued in November 2013. Information that TSA deemed sensitive has been redacted.
Why GAO Did This Study
TSA began deploying the SPOT program in fiscal year 2007--and has since spent about $900 million--to identify persons who may pose a risk to aviation security through the observation of behavioral indicators. In May 2010, GAO concluded, among other things, that TSA deployed SPOT without validating its scientific basis and SPOT lacked performance measures. GAO was asked to update its assessment. This report addresses the extent to which (1) available evidence supports the use of behavioral indicators to identify aviation security threats and (2) TSA has the data necessary to assess the SPOT program's effectiveness. GAO analyzed fiscal year 2011 and 2012 SPOT program data. GAO visited four SPOT airports, chosen on the basis of size, among other things, and interviewed TSA officials and a nonprobability sample of 25 randomly selected BDOs. These results are not generalizable, but provided insights.
Matter for Congressional Consideration
To help ensure that security-related funding is directed to programs that have demonstrated their effectiveness, Congress should consider the findings in this report regarding the absence of scientifically validated evidence for using behavioral indicators to identify aviation security threats when assessing the potential benefits of behavior detection activities relative to their cost in future funding decisions related to aviation security.

The Department of Homeland Security Appropriations Act, 2015, enacted in March 2015, imposed a funding restriction on the Transportation Security Administration (TSA) based in part on the findings in GAO's November 2013 report. Specifically, the act provided that $25 million of TSA's appropriation shall be withheld from obligation for "Headquarters Administration" until TSA submits to the Appropriations Committees of the Senate and House of Representatives a report providing evidence demonstrating that behavioral indicators can be used to identify passengers who may pose a threat to aviation security and the plans that will be put into place to collect additional performance data. In response, in August 2015, TSA submitted a report to Congress that discussed the scientific evidence it had gathered and used as a basis to revise its behavioral indicators, as well as a new behavior detection protocol it had developed. The report also discussed the test strategies TSA had planned if the decision were made to deploy the protocols nationwide. These tests included a pilot test of the new protocols that was underway at that time and two efforts that were under development--an operational test of the effectiveness of behavior detection and a study to examine potential disparity issues to ensure that the protocols do not systematically target individuals based on demographic, ethnic, or religious characteristics.
With regard to the act's provision requiring TSA to outline its plans to collect performance data, TSA stated in the August 2015 report that, following the operational test, it would analyze the test data collected and establish required thresholds for determining behavior detection effectiveness, including the rates at which behavior detection officers accurately assess behavioral indicators and refer individuals for additional screening, and the frequency with which these referrals lead to high-risk outcomes.
Recommendations for Executive Action
Department of Homeland Security
Priority Recommendation: To help ensure that security-related funding is directed to programs that have demonstrated their effectiveness, the Secretary of Homeland Security should direct the TSA Administrator to limit future funding support for the agency's behavior detection activities until TSA can provide scientifically validated evidence that demonstrates that behavioral indicators can be used to identify passengers who may pose a threat to aviation security.
In November 2013, we reported that available evidence did not support the conclusion that behavioral indicators can be used to identify persons who may pose a risk to aviation security. Specifically, we found that decades of peer-reviewed, published research on the complexities associated with detecting deception through human observation called into question the scientific basis for TSA's behavior detection activities. We recommended that TSA limit future funding support for the agency's behavior detection activities until TSA can provide valid evidence demonstrating that behavioral indicators can be used to identify passengers who may pose a threat to aviation security. Over the last 5 years, DHS and TSA reduced funding for behavior detection activities, ended the standalone behavior detection program, and eliminated the behavior detection officer position. In its fiscal year 2016 and 2017 budget requests, DHS requested funding to support a reduced number of behavior detection officers, resulting in savings of about $72 million. Although DHS did not concur with our recommendation, TSA officials stated that our recommendation was one of several factors in DHS's decision to support a reduction in the number of behavior detection officer positions. In fiscal year 2017, TSA ended the behavior detection program and began integrating the former behavior detection officers into its transportation security officer screening workforce, resulting, according to TSA officials, in a transfer of approximately 2,660 screeners and $196 million in funding to support increased passenger volume at TSA's checkpoints. These efforts have continued through fiscal year 2018. TSA also revised its list of behavioral indicators, reducing the number from 94 to 36, and hired a contractor to search the available literature for sources supporting the revised list.
In 2017, TSA provided GAO with 178 sources intended to demonstrate that its indicators could be used to identify passengers who pose a threat to aviation security. In July 2017, we reported that 175 of the 178 sources did not provide valid evidence--that is, original research that met generally accepted research standards--for specific behavioral indicators in TSA's revised list, and that the remaining 3 sources could be used as valid evidence to support 8 of the 36 indicators. While TSA should continue to limit funding for the agency's behavior detection activities until it can provide valid evidence for the indicators, the actions TSA has taken over the past 5 years, which have limited funding for behavior detection activities, meet the intent of this recommendation.