This is the accessible text file for GAO report number GAO-14-368 entitled 'Arizona Border Surveillance Technology Plan: Additional Actions Needed to Strengthen Management and Assess Effectiveness' which was released on March 12, 2014. This text file was formatted by the U.S. Government Accountability Office (GAO) to be accessible to users with visual impairments, as part of a longer-term project to improve GAO products' accessibility. Every attempt has been made to maintain the structural and data integrity of the original printed product. Accessibility features, such as text descriptions of tables, consecutively numbered footnotes placed at the end of the file, and the text of agency comment letters, are provided but may not exactly duplicate the presentation or format of the printed version. The portable document format (PDF) file is an exact electronic replica of the printed version. We welcome your feedback. Please E-mail your comments regarding the contents or accessibility features of this document to Webmaster@gao.gov. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. Because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. United States Government Accountability Office: GAO: Report to Congressional Requesters: March 2014: Arizona Border Surveillance Technology Plan: Additional Actions Needed to Strengthen Management and Assess Effectiveness: GAO-14-368: GAO Highlights: Highlights of GAO-14-368, a report to congressional requesters. Why GAO Did This Study: In recent years, nearly half of all annual apprehensions of illegal entrants along the southwest border have occurred along the Arizona border. Under the Secure Border Initiative Network (SBInet), CBP deployed surveillance systems along 53 of the 387 miles of the Arizona border with Mexico. After DHS canceled further SBInet procurements, CBP developed the Plan, which includes a mix of radars, sensors, and cameras to help provide security for the remainder of Arizona's border. GAO was asked to review the status of DHS's efforts to implement the Plan. This report addresses the extent to which CBP (1) developed schedules and Life-cycle Cost Estimates for the Plan in accordance with best practices, (2) followed aspects of DHS's acquisition management guidance in managing the Plan's programs, and (3) identified mission benefits and developed performance metrics for surveillance technologies to be deployed under the Plan. GAO reviewed schedule, cost, and acquisition documents and analyzed fiscal year 2010 through June 2013 data on apprehensions and seizures. What GAO Found: The Department of Homeland Security's (DHS) U.S. Customs and Border Protection's (CBP) schedules and Life-cycle Cost Estimates for the Arizona Border Surveillance Technology Plan (the Plan) reflect some, but not all, best practices. Scheduling best practices are summarized into four characteristics of reliable schedules--comprehensive, well constructed, credible, and controlled (i.e., schedules are periodically updated and progress is monitored). GAO assessed CBP's schedules as of March 2013 for the three highest-cost programs that represent 97 percent of the Plan's estimated cost.
GAO found that schedules for two of the programs at least partially met each characteristic (i.e., satisfied about half of the criterion), and the schedule for the other program at least minimally met each characteristic (i.e., satisfied a small portion of the criterion), as shown in the table below. For example, the schedule for one of the Plan's programs partially met the characteristic of being credible in that CBP had performed a schedule risk analysis for the program, but the risk analysis was not based on any connection between risks and specific activities. For another program, the schedule minimally met the characteristic of being controlled in that it did not have valid baseline dates for activities or milestones by which CBP could track progress. Table: Summary of GAO's Schedule Assessments for the Three Highest-Cost Programs under the Arizona Border Surveillance Technology Plan: Schedule characteristic: Comprehensive; Program 1: Partially met; Program 2: Partially met; Program 3: Partially met. Schedule characteristic: Well constructed; Program 1: Substantially met; Program 2: Partially met; Program 3: Partially met. Schedule characteristic: Credible; Program 1: Partially met; Program 2: Partially met; Program 3: Minimally met. Schedule characteristic: Controlled; Program 1: Partially met; Program 2: Partially met; Program 3: Minimally met. Source: GAO analysis of CBP data. Note: Not met--CBP provided no evidence that satisfies any of the criterion. Minimally met--CBP provided evidence that satisfies a small portion of the criterion. Partially met--CBP provided evidence that satisfies about half of the criterion. Substantially met--CBP provided evidence that satisfies a large portion of the criterion. Met--CBP provided complete evidence that satisfies the entire criterion. [End of table] Further, CBP has not developed an Integrated Master Schedule for the Plan in accordance with best practices. Rather, CBP has used the separate schedules for each program to manage implementation of the Plan, as CBP officials stated that the Plan contains individual acquisition programs rather than integrated programs. However, collectively these programs are intended to provide CBP with a combination of surveillance capabilities to be used along the Arizona border with Mexico, and resources are shared among the programs. According to scheduling best practices, an Integrated Master Schedule is a critical management tool for complex systems that involve a number of different projects, such as the Plan, because it allows managers to monitor all work activities, how long activities will take, and how the activities are related to one another. Developing and maintaining an Integrated Master Schedule for the Plan could help provide CBP a comprehensive view of the Plan and help CBP better understand how schedule changes in each individual program could affect implementation of the overall Plan. Moreover, cost-estimating best practices are summarized into four characteristics--well documented, comprehensive, accurate, and credible. GAO's analysis of CBP's estimate for the Plan and estimates completed at the time of GAO's review for the two highest-cost programs showed that these estimates at least partially met three of these characteristics: well documented, comprehensive, and accurate. In terms of being credible, these estimates had not been verified with independent cost estimates in accordance with best practices.
Ensuring that scheduling best practices are applied to the three programs' schedules and verifying Life-cycle Cost Estimates with independent estimates could help better ensure the reliability of the schedules and estimates. CBP did not fully follow key aspects of DHS's acquisition management guidance for the Plan's three highest-cost programs. For example, CBP plans to conduct limited testing of the highest-cost program--the Integrated Fixed Tower (IFT: towers with cameras and radars)--to determine its mission contributions, but not its effectiveness and suitability for the various environmental conditions, such as weather, in which it will be deployed. This testing, as outlined in CBP's test plan, is not consistent with DHS's guidance, which states that testing should occur to determine effectiveness and suitability in the environmental conditions in which a system will be used. Revising the test plan to more fully test the program in the conditions in which it will be used could help provide CBP with more complete information on how the towers will operate once they are fully deployed. CBP has identified mission benefits for technologies under the Plan, but has not yet developed performance metrics. CBP has identified such mission benefits as improved situational awareness and agent safety. Further, a DHS database enables CBP to collect data on asset assists, defined as instances in which a technology, such as a camera, or other asset, such as a canine team, contributed to an apprehension or seizure. These data, in combination with other relevant performance metrics or indicators, could be used to better determine the contributions of CBP's surveillance technologies and inform resource allocation decisions. However, CBP is not capturing complete data on asset assists, as Border Patrol agents are not required to record and track such data. For example, from fiscal year 2010 through June 2013, Border Patrol did not record whether an asset assist contributed to an apprehension event for 69 percent of such events in the Tucson sector. Requiring the reporting and tracking of asset assist data could help CBP determine the extent to which its surveillance technologies are contributing to CBP's border security efforts. This is a public version of a For Official Use Only--Law Enforcement Sensitive report that GAO issued in February 2014. Information DHS deemed as For Official Use Only--Law Enforcement Sensitive has been redacted. What GAO Recommends: GAO recommends that CBP, among other things, apply scheduling best practices, develop an integrated schedule, verify Life-cycle Cost Estimates, revise the IFT test plan, and require tracking of asset assist data. DHS concurred with four of six GAO recommendations. It did not concur with the need for an integrated schedule or a revised IFT test plan. As discussed in this report, GAO continues to believe in the need for an integrated schedule and a revised test plan. View [hyperlink, http://www.gao.gov/products/GAO-14-368]. For more information, contact Rebecca Gambler at (202) 512-8777 or gamblerr@gao.gov.
[End of section] Contents: Letter: Background: CBP's Program Schedules and Life-Cycle Cost Estimates Reflect Some but Not All Best Practices: CBP Followed Some Aspects of DHS Acquisition Guidance, but Did Not Fully Complete Documents for Acquisition Decisions Consistent with the Guidance: CBP Has Taken Some Steps to Assess Performance and Identify Mission Benefits, but Does Not Capture Complete Data on the Contributions of Its Surveillance Technologies: Conclusions: Recommendations for Executive Action: Agency Comments and Our Evaluation: Appendix I: Objectives, Scope, and Methodology: Appendix II: Photographs of Technologies in the Arizona Border Surveillance Technology Plan: Appendix III: Our Schedule Assessment Results for the Integrated Fixed Towers, Remote Video Surveillance System, and Mobile Surveillance Capability Programs: Appendix IV: Summary Statistics on the Reporting of Asset Assists Data for Apprehensions and Seizures across the Tucson and Yuma Sectors from Fiscal Year 2010 through June 2013: Appendix V: Mission Benefits Identified by U.S. Customs and Border Protection of Its Surveillance Technologies: Appendix VI: Comments from the Department of Homeland Security: Appendix VII: GAO Contact and Staff Acknowledgments: Tables: Table 1: Description of the Arizona Border Surveillance Technology Plan's Seven Technology Programs: Table 2: Summary of the Status of the Arizona Border Surveillance Technology Plan's Seven Technology Programs: Table 3: Summary of Our Schedule Assessments for the Integrated Fixed Tower (IFT), Remote Video Surveillance System (RVSS), and Mobile Surveillance Capability (MSC) Programs: Table 4: Quantities to Be Procured and Deployed for the Arizona Border Surveillance Technology Plan's Programs and Their Estimated Cost as of June 2013: Table 5: Comparison of When Key Acquisition Documents Were Required to Be Approved and When They Were Approved for the Integrated Fixed Towers (IFT), Remote Video Surveillance System (RVSS), and Mobile Surveillance Capability (MSC) Programs, as of November 2013: Table 6: Reporting of Asset Assists for Apprehension and Seizure Events Occurring across the Tucson Sector within the Range of Secure Border Initiative Network (SBInet) and Remote Video Surveillance System (RVSS) Towers, from Fiscal Year 2010 through June 2013: Table 7: Our Assessments of the Integrated Fixed Tower (IFT), Remote Video Surveillance System (RVSS), and Mobile Surveillance Capability (MSC) Program Schedules, as of March 2013: Table 8: Numbers of Apprehension Events and Apprehensions, and Percentages with Unreported Asset Assists, Technology Asset Assists, and Other Asset Assists in the Tucson and Yuma Sectors, Fiscal Year 2010 through June 2013: Table 9: Numbers of Seizure Events and Seizures, and Percentages with Unreported Asset Assists, Technology Asset Assists, and Other Asset Assists in the Tucson and Yuma Sectors, Fiscal Year 2010 through June 2013: Table 10: Mission Benefits Identified by the U.S. 
Customs and Border Protection (CBP) of the Technologies Deployed or Planned for Deployment under the Arizona Border Surveillance Technology Plan: Figures: Figure 1: Department of Homeland Security's (DHS) Acquisition Life-cycle Framework and Acquisition Decision Events (ADE): Figure 2: Comparison of Original Baseline Schedule and March 2013 Schedule for Each Technology Program in the Arizona Border Surveillance Technology Plan: Figure 3: Percentage of Asset Assists Reported across the Tucson Sector for Apprehension Events, Fiscal Year 2010 through June 2013: Figure 4: Percentage of Asset Assists Reported across the Yuma Sector for Apprehension Events, Fiscal Year 2010 through June 2013: Figure 5: Percentage of Apprehension Events Occurring within the Detection Ranges of Remote Video Surveillance Systems (RVSS) and Secure Border Initiative Network (SBInet) across the Tucson Sector, Fiscal Year 2010 through June 2013: Figure 6: Integrated Fixed Tower Concept (Secure Border Initiative Network Tower): Figure 7: Remote Video Surveillance System: Figure 8: Mobile Surveillance Capability: Figure 9: Mobile Video Surveillance System: Figure 10: Agent Portable Surveillance System: Figure 11: Thermal Imaging System (RECON III): Figure 12: Unattended Ground Sensor: Figure 13: Percentage of Asset Assists Reported across the Tucson Sector for Apprehension Events and Apprehensions, Fiscal Year 2010 through June 2013: Figure 14: Percentage of Asset Assists Reported across the Yuma Sector for Apprehension Events and Apprehensions, Fiscal Year 2010 through June 2013: Figure 15: Percentage of Asset Assists Reported across the Tucson Sector for Seizure Events and Seizures, Fiscal Year 2010 through June 2013: Figure 16: Percentage of Asset Assists Reported across the Yuma Sector for Seizure Events and Seizures, Fiscal Year 2010 through June 2013: Abbreviations: AAR: After Action Report: ADE: Acquisition Decision Event: APSS: Agent Portable Surveillance System: ATEC: U.S. Army Test and Evaluation Command: BSFIT: Border Security Fencing, Infrastructure, and Technology: CBP: U.S. Customs and Border Protection: CPIC: Capital Planning and Investment Control: DHS: Department of Homeland Security: DOD: Department of Defense: EID: Enforcement Integrated Database: IFT: Integrated Fixed Towers: IS: Imaging Sensors: MSC: Mobile Surveillance Capability: MVSS: Mobile Video Surveillance System: OMB: Office of Management and Budget: OTIA: Office of Technology Innovation and Acquisition: PIR: SBInet Block 1 Post-Implementation Review: RVSS: Remote Video Surveillance System: SBI: Secure Border Initiative: SBInet: Secure Border Initiative Network: TID: Thermal Imaging Devices: UGS: Unattended Ground Sensors: UGS/IS: Unattended Ground Sensors/Imaging Sensors: [End of section] United States Government Accountability Office: GAO: 441 G St. N.W. Washington, DC 20548: March 3, 2014: Congressional Requesters: In recent years, nearly half of all annual apprehensions of illegal entrants along the southwest border with Mexico have occurred along the Arizona border, according to Department of Homeland Security (DHS) data. A top priority for DHS's U.S. Customs and Border Protection (CBP) is preventing, detecting, and apprehending illegal entrants. In November 2005, DHS announced the launch of the Secure Border Initiative (SBI), a multiyear, multibillion-dollar program aimed at securing U.S. borders and reducing illegal immigration.
CBP intended for the SBI Network (SBInet) to include technologies such as fixed sensor towers, a common operating picture, and tactical infrastructure to create a “virtual fence” along the southwest border to enhance CBP's capability to detect, identify, classify, track, and respond to illegal breaches at and between land ports of entry.[Footnote 1] In 2010, at a cost of about $1 billion, CBP deployed SBInet systems, referred to as Block 1 systems, along the 53 miles of Arizona's 387-mile border with Mexico that represent one of the highest-risk areas for illegal entry attempts.[Footnote 2] However, in January 2011, in response to internal and external assessments that identified concerns regarding the performance, cost, and schedule for implementing the systems, the Secretary of Homeland Security announced the cancellation of further procurements of SBInet systems.[Footnote 3] After the cancellation of SBInet in January 2011, CBP developed the Arizona Border Surveillance Technology Plan (the Plan), which includes a mix of radars, sensors, and cameras to help provide security for the remainder of the Arizona border. Under the Plan, CBP identified seven programs to be implemented, ranging in estimated cost from $3 million to about $961 million. The three highest-cost programs under the Plan are the Integrated Fixed Tower (IFT), Remote Video Surveillance System (RVSS), and Mobile Surveillance Capability (MSC), accounting for 97 percent of the Plan's estimated cost.[Footnote 4] In November 2011, we reported on CBP's development of, and estimated life-cycle costs for implementing, the Plan.[Footnote 5] Specifically, we reported that CBP needed more information for the Plan and its costs before proceeding with implementation, and we recommended that CBP (1) ensure the underlying analyses of the Plan were documented in accordance with DHS guidance and internal control standards, (2) determine the mission benefits to be derived from the implementation of the Plan and develop and apply key attributes for metrics to assess program implementation, (3) conduct a post-implementation review and operational assessment of SBInet, and (4) update the cost estimate for the Plan using best practices.[Footnote 6] DHS concurred with these recommendations and has actions under way to address some of them, which we discuss later in this report. Further, in September 2012, we reported on acquisition management at DHS.[Footnote 7] Specifically, we found that DHS acquisition policy reflects many key management practices that could help mitigate risks and increase chances for successful outcomes; however, most of DHS's major acquisition programs continued to cost more than expected, took longer to deploy than planned, or delivered less capability than promised. These challenges in DHS's acquisition management, as well as in the department's other management functions, such as financial and human capital management, have contributed to our designation of DHS's management functions as a high-risk area.[Footnote 8] You asked us to review the status of DHS's efforts to develop and implement the Plan.
This report addresses the following questions: To what extent has CBP (1) developed schedules and Life-cycle Cost Estimates for the Plan in accordance with best practices, (2) followed key aspects of DHS's acquisition management framework in managing the Plan's three highest-cost programs, and (3) assessed the performance of technologies deployed under SBInet and identified mission benefits and developed performance metrics for surveillance technologies to be deployed under the Plan? This report is a public version of the prior sensitive report that we provided to you. DHS deemed some of the information in the prior report as For Official Use Only--Law Enforcement Sensitive, which must be protected from public disclosure. Therefore, this report omits sensitive information on our analysis of Border Patrol data on apprehension and seizure events relative to various surveillance technologies. Although the information in this report is more limited in scope, it addresses the same questions as the sensitive report. Also, the overall methodology used for both reports is the same. To address the first question, we analyzed DHS and CBP documents, including program schedules and cost estimates, and interviewed DHS and CBP officials responsible for developing and overseeing schedules and cost estimates. Specifically, we obtained program schedules as of March 2013, which were current at the time of our review, for the three highest-cost programs--IFT, RVSS, and MSC--and compared the schedules with best practices for developing schedules outlined in an exposure draft of GAO's Schedule Assessment Guide.[Footnote 9] We also interviewed cognizant officials in CBP's Office of Technology Innovation and Acquisition (OTIA) and program offices. By assessing the schedules against best practices, we identified schedule challenges that CBP was experiencing in testing, procuring, deploying, and operating technologies under the Plan, and interviewed CBP officials to determine reasons for the schedule challenges and steps that CBP had taken or was taking to address them. For Life-cycle Cost Estimates, we analyzed the Plan's June 2013 estimate, the IFT program's January 2012 estimate, and the RVSS program's March 2012 estimate, which were the current estimates at the time of our review, and compared them against best practices for cost estimating.[Footnote 10] We also analyzed DHS and CBP documents and interviewed officials regarding their efforts to implement our prior recommendations to update the August 2010 Life-cycle Cost Estimate for the Plan in accordance with best practices. To assess the reliability of cost estimate data, we reviewed relevant program documentation, such as cost estimation spreadsheets, as available, to substantiate evidence obtained from interviews with knowledgeable agency officials. We found the data to be sufficiently reliable for the purposes of our report. To address the second question, we analyzed DHS and CBP documents, including DHS Acquisition Management Directive 102-01 and its associated DHS Instruction Manual 102-01-001, program briefing slides, budget documents, Acquisition Decision Memorandums, schedules, and program risk sheets.[Footnote 11] We focused on the IFT, RVSS, and MSC programs for more in-depth analyses because they are the Plan's three highest-cost programs and represent 97 percent of the estimated cost of the Plan.
Specifically, to assess the acquisition strategy for the Plan, we focused on the IFT, RVSS, and MSC programs and analyzed their respective acquisition plans and discussed the acquisition approaches with CBP officials. To assess system requirements and capabilities for the IFT, RVSS, and MSC programs, we analyzed requirements and capabilities documents and worked with CBP officials to identify any changes since the documents were initially approved and whether any requirements or capabilities had been traded off because of cost, schedule, or other purposes. To assess the extent to which CBP followed DHS acquisition guidance, we selected aspects of Acquisition Management Directive 102-01 that were relevant to where these programs were in the acquisition process during fiscal year 2013. Specifically, we determined whether acquisition documents had been approved by the time required, that is, by the applicable Acquisition Decision Events. To address the third question, we analyzed performance assessment documentation and metrics used by CBP to determine the effectiveness of technologies deployed under SBInet and interviewed CBP officials responsible for performance measurement activities. Specifically, we analyzed the results of CBP's January 2013 post-implementation review of the effectiveness of SBInet technologies in achieving their intended results. We also analyzed CBP and DHS documents, such as plans to address the post-implementation review, and interviewed officials to assess corrective actions taken to address SBInet performance issues. We analyzed CBP data on apprehensions, seizures, and asset assists from fiscal year 2010 through June 2013 to determine the extent to which the data could be used to measure the contributions of SBInet technologies in enhancing border security.[Footnote 12] We selected this time frame because fiscal year 2010 was the first fiscal year for which data on asset assists were available following Border Patrol's deployment of its SBInet technologies, and the collection of data on Geographic Information Systems coordinates for apprehensions and seizures was required.[Footnote 13] To assess the reliability of these data, we discussed data quality control procedures with CBP officials. We determined that these data are sufficiently reliable for the purposes of this report. We compared CBP's tracking and recording of data on asset assists for apprehensions and seizures against criteria outlined in Standards for Internal Control in the Federal Government.[Footnote 14] In addition, we visited the Tucson sector in Arizona to observe Border Patrol agents operating technologies and discuss agents' experiences in using these technologies. We selected the Tucson sector to visit because of the presence of surveillance technologies, such as SBInet and RVSS towers, in that sector and because the Tucson sector includes locations for which additional technology deployments, such as IFTs, are planned. While the information we obtained from our visit cannot be generalized to all Border Patrol sectors, it provided us with insights about the use of the deployed surveillance technologies. Furthermore, we interviewed CBP officials and analyzed documents to determine the progress CBP and DHS had made in implementing our prior recommendations to develop mission benefits to be derived from technologies in the Plan and metrics to measure the extent to which border security is expected to improve by using these technologies.
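The asset assist completeness analysis described above reduces to a simple share calculation over apprehension records. The following is a minimal sketch of that calculation; the records and field names (event_id, asset_assist) are hypothetical stand-ins for an extract from DHS's Enforcement Integrated Database, not CBP's actual schema.

```python
# Minimal sketch of the asset assist completeness calculation described
# above. The records and field names (event_id, asset_assist) are
# hypothetical stand-ins for an extract of apprehension events from
# DHS's Enforcement Integrated Database (EID).
import pandas as pd

events = pd.DataFrame({
    "event_id": [1, 2, 3, 4, 5, 6],
    # None means the agent did not record whether an asset assisted;
    # otherwise the value names the contributing asset.
    "asset_assist": [None, "RVSS camera", None, "canine team", None, None],
})

unreported = events["asset_assist"].isna()
pct_unreported = 100 * unreported.mean()
print(f"Asset assist unreported for {pct_unreported:.0f}% of apprehension events")
# With these illustrative records, 4 of 6 events (67 percent) lack an
# asset assist entry; GAO's actual analysis found 69 percent of
# apprehension events in the Tucson sector lacked such an entry.
```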
For more information on our scope and methodology, see appendix I. We conducted this performance audit from September 2012 to March 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Background: Border Patrol and OTIA Roles and Responsibilities: Border Patrol has reported that its primary mission is to prevent terrorists and weapons of terrorism from entering the United States and also to detect, interdict, and apprehend those who attempt to illegally enter or smuggle any person or contraband across the nation's borders. Geographic responsibility for the southwest border is divided among nine Border Patrol sectors, two of which are in Arizona--Tucson and Yuma. Each sector has a varying number of stations, with agents responsible for patrolling within defined geographic areas.[Footnote 15] Border Patrol collects and analyzes various data on its enforcement efforts and the number and types of entrants who illegally cross the southwest border between the land ports of entry. These data include apprehensions and seizures of drugs and other contraband. The Border Patrol collects and maintains data on apprehensions and seizures in DHS's Enforcement Integrated Database (EID). This database also includes an asset assists field in which agents can specify whether an asset, such as an SBInet surveillance tower, contributed to apprehensions or seizures. CBP's OTIA was created to help ensure CBP's technology efforts are properly focused on the mission and are well integrated, and to strengthen CBP's expertise and effectiveness in program management and acquisition. OTIA's mission is to conduct and facilitate effective identification, acquisition, and life-cycle support of products and services while driving innovation to improve CBP's performance in securing U.S. borders and facilitating lawful movement of goods and people. OTIA manages the implementation of the Plan and is acquiring seven technology programs in the Plan for use by Border Patrol in Arizona. The goal of the Plan is to achieve situational awareness along the Arizona border where the Plan's technologies are deployed. For fiscal year 2013, OTIA budgeted $297 million in development and deployment funds for the Plan's seven technology programs. Table 1 describes the Plan's programs, and appendix II provides a photograph of each technology program. Table 1: Description of the Arizona Border Surveillance Technology Plan's Seven Technology Programs: Technology program: Integrated Fixed Towers (IFT); Description: New towers to consist of surveillance equipment (for example, ground surveillance radars and surveillance cameras) mounted on fixed (that is, stationary) towers, and power generation and communication equipment to support the towers. The sensors and command and control equipment are to be capable of displaying information received from surveillance towers on a common operating picture.[A]. Technology program: Remote Video Surveillance System (RVSS); Description: A legacy (that is, existing) system including multiple color and infrared cameras as well as microwave antennas for communications mounted on 30- to 90-foot monopoles, lattice towers, and buildings.
The images are transmitted, monitored, and recorded at a central location. This system is deployed to monitor large areas of the international border or critical transit routes. In addition to acquiring new RVSS units that will include upgraded surveillance technologies, U.S. Customs and Border Protection (CBP) officials stated that the agency intends to replace obsolete surveillance technologies on legacy systems with upgraded technologies. Technology program: Mobile Surveillance Capability (MSC); Description: A stand-alone, truck-mounted suite of radar and cameras that provides a display within the cab of the truck. An operator can use the information displayed to identify activity and advise responding Border Patrol agents. Technology program: Mobile Video Surveillance System (MVSS); Description: Also referred to as a Scope Truck, the system includes a telescoping mast or lift system that elevates a camera containing day and night capabilities with target illuminators and range finders. The operator interface display and control subsystem are operated within the cab of the vehicle.[B]. Technology program: Agent Portable Surveillance System (APSS); Description: A portable, ground-sensing radar and surveillance system that can be deployed and operated by Border Patrol agents where truck-mounted systems are unable to be deployed. Technology program: Thermal Imaging Devices (TID); Description: Devices that use a camera and corresponding remote viewing kits to enable Border Patrol agents to see clearly up to 5 miles in areas that are dimly lit or in total darkness. Technology program: Unattended Ground Sensors (UGS) and Imaging Sensors (IS); Description: Sensors placed in the ground to detect, track, identify, and differentiate among humans, animals, and vehicles. Source: GAO analysis of CBP information. [A] The Secure Border Initiative Network (SBInet) Common Operating Picture was intended to provide uniform data through a command center environment to Border Patrol agents in the field and all DHS agencies, and to be interoperable with DHS external stakeholders, such as local law enforcement. [B] Among other things, the MVSS differs from the MSC because the MVSS does not include radar, while the MSC does include radar. [End of table] DHS's Acquisition Life-Cycle Framework: The overall policy and structure for acquisition management outlined in DHS Acquisition Management Directive 102-01 and its associated Instruction Manual 102-01-001 includes an Acquisition Life-cycle Framework to plan and execute the department's acquisition programs. According to the directive, DHS adopted the Acquisition Life-cycle Framework to ensure consistent and efficient acquisition management, support, review, and approval throughout the department. As shown in figure 1, DHS's Acquisition Life-cycle Framework includes four acquisition phases through which DHS develops, deploys, and operates new capabilities. Figure 1: Department of Homeland Security's (DHS) Acquisition Life-cycle Framework and Acquisition Decision Events (ADE): [Refer to PDF for image: illustration] Phase 1: Need. Identify the capability need; The goal of Phase 1 is to identify a capability need and determine whether this need is sufficiently high priority to the department to warrant investment. ADE-1 is the culminating event for Phase 1, where DHS determines whether to approve the acquisition. ADE 1. Phase 2: Analyze/select.
Analyze alternatives, select optimal solution, and plan for acquisition; The goal of Phase 2 is to identify possible acquisition alternatives that satisfy the capability need; select the optimal solution; and prepare documentation on the expected performance, schedule, and cost of the optimal solution. ADE-2A is the culminating event for Phase 2, where DHS determines whether to authorize the acquisition to proceed to Phase 3. ADE 2A. Phase 3: Obtain. Obtain the capability; The goal of Phase 3 is to test and evaluate the selected acquisition solution. ADE-2B is the interim decision event during Phase 3, at which DHS reviews the progress of the acquisition and approves any supporting acquisitions, projects, or useful segments. ADE 2B. If necessary, low-rate initial production will be conducted to support operational testing and bridge the gap between completion of design/development and production. ADE 2C. ADE-3 is the culminating event of Phase 3, where DHS determines whether to authorize the acquisition to proceed with full-rate production and deployment of the capability. ADE 3. Phase 4: Produce/support/deploy. Produce and deploy the capability, support the capability through its life cycle. Legend: ADE = Acquisition Decision Event. Source: GAO analysis of Acquisition Life-cycle Framework, as described in DHS Acquisition Management Directive. [End of figure] During the first three phases, the DHS component pursuing the acquisition is required to produce key documents to justify, plan, and execute the acquisition. These phases each culminate in an Acquisition Decision Event where the Acquisition Review Board--a board of senior DHS officials--determines whether a proposed acquisition has met the requirements of the relevant acquisition framework phase and should proceed.[Footnote 16] The Acquisition Review Board is chaired by the Acquisition Decision Authority--the official responsible for ensuring compliance with Acquisition Management Directive 102-01. DHS classifies acquisitions into three levels that determine whether the Acquisition Decision Authority can be a Component Acquisition Executive or should be DHS's Deputy Secretary or Under Secretary for Management.[Footnote 17] Under the Plan, the IFT program is a Level 2 acquisition, which is overseen by the department, and the DHS Under Secretary for Management serves as the Acquisition Decision Authority. The other six programs in the Plan are Level 3 acquisitions, which are overseen by CBP's Acquisition Review Board, and the Acquisition Decision Authority is a CBP official who serves as both the Assistant Commissioner for OTIA and Component Acquisition Executive. Status of the Plan's Seven Programs: As of January 2014, CBP has awarded contracts for four of the Plan's seven programs and has initiated or completed deployment of technology to Arizona for three of the four programs under contract, as shown in table 2.[Footnote 18] Table 2: Summary of the Status of the Arizona Border Surveillance Technology Plan's Seven Technology Programs: Program: Integrated Fixed Towers (IFT); Under contract: [A]; Deployment: Started: [Empty]; Deployment: Completed: [Empty]; U.S. Customs and Border Protection (CBP) officials reported that CBP expects to award an IFT contract by March 2014. As of June 2013, vendors in the competitive range--that is, vendors whose bids CBP's Office of Technology Innovation and Acquisition (OTIA) considers to be the most competitive for the contract award--completed demonstrations of their proposed systems for CBP.
During the demonstrations, each vendor was required to provide assurance or demonstrate that its proposed system is ready and deployable and will not require additional engineering development. CBP had completed its evaluations of the demonstrations as of September 2013. Also, as of August 2013, CBP had acquired and completed approach and access roads for three deployment areas in Arizona where IFTs are to be located. In April 2013, because of threats shifting away from the Tucson sector in Arizona to the Rio Grande Valley sector in Texas, Border Patrol requested that OTIA reduce the quantity of IFT units to be procured and deployed to Arizona from 50 to 38. OTIA officials stated that they are considering the request and no actions had been taken as of November 2013. Program: Remote Video Surveillance System (RVSS); Under contract: [Check]; Deployment: Started: [Empty]; Deployment: Completed: [Empty]; On July 26, 2013, CBP awarded a contract to procure 73 units to be deployed to Arizona starting in the second quarter of fiscal year 2014. As of August 2013, CBP had constructed 5 new RVSS towers so that surveillance technologies can be installed on them. According to CBP officials, as of November 2013, modifications to a command and control facility where the surveillance systems are monitored were completed for the towers in the Nogales station's area of responsibility, 6 new RVSS towers were constructed, and 11 existing RVSS towers were repaired to be ready for new surveillance technology. Program: Mobile Surveillance Capability (MSC); Under contract: [Check]; Deployment: Started: [Check]; Deployment: Completed: [Empty]; CBP awarded contracts in December 2010 to two contractors. As of August 2013, one of the contractors had delivered all 33 units it was under contract to provide. However, CBP rejected 13 of the 33 units in July 2013 because cracks were identified where the surveillance technology was welded to the truck carrying the technology. According to CBP officials, the welding problem was resolved in September 2013, and CBP accepted the 13 units previously rejected and deployed them. The other contractor produced and deployed 2 units to Arizona that did not meet performance requirements in the operational environment during multiple tests. According to CBP officials, CBP terminated this contract in July 2013 and plans to use de-obligated funds from this contract to procure 16 additional units from the other contractor during the first quarter of fiscal year 2014. Program: Mobile Video Surveillance System (MVSS); Under contract: [Empty]; Deployment: Started: [Empty]; Deployment: Completed: [Empty]; CBP officials stated that CBP expects to award a contract for the system by July 2014. In December 2012, CBP directed that the 4 units to be procured under the Plan be deployed in Texas instead of Arizona because of shifting threat patterns. OTIA officials stated that they have not removed the program from the Plan because the system will be needed in Arizona in the future. Program: Agent Portable Surveillance System (APSS); Under contract: [Check]; Deployment: Started: [Empty]; Deployment: Completed: [Check]; In 2012, CBP procured 15 APSS units under an existing Department of Defense (DOD) contract because that system was the only option that had an integrated radar and thermal imaging solution. OTIA considered use of these 15 units to be a technology demonstration project under the Plan, and considers this initial project/program to be completed. 
However, after using these units, Border Patrol determined that the units did not meet its requirements. Border Patrol continues to operate the APSS units until a new APSS program can be established. OTIA plans to establish a new APSS program and is developing a mission needs statement and a preliminary concept of operations, which OTIA expects to complete by October 2014. Program: Thermal Imaging Devices (TID); Under contract: [Check]; Deployment: Started: [Empty]; Deployment: Completed: [Check]; CBP awarded a contract for the system in August 2011 and all 22 units to be procured under the Plan were deployed to Arizona by October 2011. Program: Unattended Ground Sensors (UGS) and Imaging Sensors (IS); Under contract: [Check]; (for UGS)[B]; Deployment: Started: [Empty]; Deployment: Completed: [Empty]; In June 2013, CBP procured additional units of UGS with the same capability as UGS currently deployed in Arizona. As part of the Plan, CBP had intended to procure and deploy UGS with IS technology (an advanced capability) in Arizona. However, during demonstration and analysis of the UGS with IS technology, there were problems with saturated radio frequencies, limited bandwidth, and system integration. Consequently, in March 2013, CBP approved the procurement of additional units of the UGS with existing current capabilities to replace old and failing units in Arizona, while the Department of Homeland Security (DHS) and CBP work to resolve frequency allocation and infrastructure supportability issues in order to meet the UGS with IS technology operational requirements. OTIA officials expect the problems to be resolved by March 2015. Source: GAO analysis of CBP data. [A] Subsequent to the issuance of our sensitive report on the Arizona Border Surveillance Technology Plan, CBP awarded a contract for the IFT program on February 26, 2014. [B] According to OTIA officials, CBP is procuring UGS units with existing current capabilities; CBP is not procuring UGS with IS technology. Once CBP resolves the frequency allocation and infrastructure supportability issues, according to CBP officials, it will award the contract for the UGS with IS technology as originally planned. [End of table] CBP's Program Schedules and Life-Cycle Cost Estimates Reflect Some but Not All Best Practices: CBP's Program Schedules Have Experienced Delays and the Three Highest-Cost Programs Meet Some but Not All Scheduling Best Practices: The Majority of the Plan's Programs Experienced Some Delays, and Schedules for the IFT, RVSS, and MSC Programs Are Not Fully Reliable: OTIA has developed a schedule for each of the Plan's seven programs, and four programs will not meet their originally planned completion dates.
OTIA established schedules for each program, serving as the original program plans with the required sequence of events, resource assignments, and dates for deliverables.[Footnote 19] However, as of March 2013, five of the Plan's programs--IFT, RVSS, MSC, APSS, and UGS/IS--have experienced delays relative to their baseline schedules, as shown in figure 2.[Footnote 20] Figure 2: Comparison of Original Baseline Schedule and March 2013 Schedule for Each Technology Program in the Arizona Border Surveillance Technology Plan: [Refer to PDF for image: illustration] Technology program: Integrated Fixed Towers (IFT); Baseline schedule duration of the program: 2010-2015; March 13, 2013, program schedule duration: 2010-2016; Start date (baseline/actual): Mid-2010; Mid-2010; Completion date (baseline/March 13, 2013): Mid-2015; Mid-2016. Technology program: Remote Video Surveillance System (RVSS); Baseline schedule duration of the program: 2011-2020; March 13, 2013, program schedule duration: 2011-2021; Start date (baseline/actual): 2011; 2011; Completion date (baseline/March 13, 2013): Mid-2020; Early 2021. Technology program: Mobile Surveillance Capability (MSC); Baseline schedule duration of the program: 2009-2014; March 13, 2013, program schedule duration: 2012-2015; Start date (baseline/actual): 2009; 2012; Completion date (baseline/March 13, 2013): Mid-2014; Mid-2015. Technology program: Agent Portable Surveillance System (APSS); Baseline schedule duration of the program: 2010-2013; March 13, 2013, program schedule duration: 2011-2013; Start date (baseline/actual): 2010; 2011; Completion date (baseline/March 13, 2013): Early 2013; Early 2013; Program completed: March 2013. Technology program: Mobile Video Surveillance System (MVSS); Baseline schedule duration of the program: 2011-2014; March 13, 2013, program schedule duration: 2011-2014; Start date (baseline/actual): 2011; 2011; Completion date (baseline/March 13, 2013): Late 2014; Late 2014; Program redirected: Early 2013. Technology program: Thermal Imaging Device (TID); Baseline schedule duration of the program: 2011; March 13, 2013, program schedule duration: 2011; Start date (baseline/actual): Early 2011; Early 2011; Completion date (baseline/March 13, 2013): Mid-2011; Mid-2011; Program completed: Mid-2011. Technology program: Unattended Ground Sensors (UGS) and Imaging Sensors (IS); Baseline schedule duration of the program: 2011-2012; March 13, 2013, program schedule duration: 2012-2014; Start date (baseline/actual): 2011; 2012; Completion date (baseline/March 13, 2013): Mid-2012; Late 2014. Source: GAO analysis of Office of Technology Innovation and Acquisition and U.S. Customs and Border Protection information. [End of figure] OTIA officials attributed program delays to various factors, including higher-than-expected numbers of proposals from vendors for some of the programs, system performance problems, and limited resources.
In particular, OTIA officials stated that they initiated acquisitions for a number of the Plan's programs around the same time, but OTIA did not have enough acquisition staff with the experience and skills needed to review contract proposals or manage the programs, a fact that contributed to program delays.[Footnote 21] For example, OTIA officials stated that for both the IFT and RVSS programs, the source selection process to decide which vendor would be awarded the contract was extended because of a higher-than-expected number of proposals received from vendors and a limited acquisition workforce to review and process the proposals, including not having a dedicated contracting officer for each of the programs. In addition, for the MSC program, OTIA officials attributed delays to problems that both vendors who were awarded contracts experienced with their systems, as previously discussed. OTIA took various actions in response to these delays, such as extending the scheduled contract award date for the IFT and RVSS programs and extending scheduled activities for the MSC program from July 2014 to September 2015. According to best practices, in acquisition programs, agencies may make modifications to program schedules to reflect changes to programs; CBP has consistently updated each program's schedule in response to program delays. However, we assessed OTIA's schedules as of March 2013 for the three highest-cost technology programs--IFT, RVSS, and MSC--and found that these program schedules addressed some, but not all, best practices for scheduling. The Schedule Assessment Guide identifies 10 best practices associated with effective scheduling, which are summarized into four characteristics of a reliable schedule--comprehensive, well constructed, credible, and controlled.[Footnote 22] Table 3 summarizes our assessment of the IFT, RVSS, and MSC schedules. Appendix III provides more detailed information on the description of each best practice and on the results of our assessment. Table 3: Summary of Our Schedule Assessments for the Integrated Fixed Tower (IFT), Remote Video Surveillance System (RVSS), and Mobile Surveillance Capability (MSC) Programs: Schedule characteristic: Comprehensive; Best practice: * Captures all activities, as defined in the work breakdown structure, which defines in detail the work for both the government and its contractors necessary to accomplish a program's objectives; * Reflects what resources (for example, labor, materials, and overhead) are needed to do the work, whether all required resources will be available when needed, and whether any funding or time constraints exist; * Establishes the duration of all activities and has specific start and end dates; IFT: Overall assessment: Partially met; RVSS: Overall assessment: Partially met; MSC: Overall assessment: Partially met. Schedule characteristic: Well constructed; Best practice: * Sequences all activities--that is, all activities are sequenced in the order that they are to be implemented with the most straightforward logic possible; * Establishes a valid critical path, which represents the chain of dependent activities with the longest total duration.
A valid critical path is necessary to examine the effects of any activity slipping along this path; * Identifies the total float time--the amount of time by which an activity can slip before the delay affects the program's estimated finish date--so that a schedule's flexibility can be determined; IFT: Overall assessment: Substantially met; RVSS: Overall assessment: Partially met; MSC: Overall assessment: Partially met. Schedule characteristic: Credible; Best practice: * Verifies that the schedule is (1) horizontally traceable--that is, it reflects the order of events necessary to achieve aggregated products or outcomes--and (2) vertically traceable--that is, activities in varying levels of the schedule map to one another and key dates presented to management in periodic briefings are in sync with the schedule; * Conducts a schedule risk analysis to predict a level of confidence in meeting the program's completion date.[A]; IFT: Overall assessment: Partially met; RVSS: Overall assessment: Partially met; MSC: Overall assessment: Minimally met. Schedule characteristic: Controlled; Best practice: * Updates periodically using actual progress and logic to realistically forecast dates for program activities; * Maintains a baseline schedule to measure, monitor, and report the program's progress; IFT: Overall assessment: Partially met; RVSS: Overall assessment: Partially met; MSC: Overall assessment: Minimally met. Source: GAO analysis of U.S. Customs and Border Protection, Office of Technology Innovation and Acquisition data. Notes: Not met--OTIA provided no evidence that satisfies any of the criterion. Minimally met--OTIA provided evidence that satisfies a small portion of the criterion. Partially met--OTIA provided evidence that satisfies about half of the criterion. Substantially met--OTIA provided evidence that satisfies a large portion of the criterion. Met--OTIA provided complete evidence that satisfies the entire criterion. We developed this rating scale in consultation with cost-estimating experts who helped develop the Cost Estimating and Assessment Guide. [A] A schedule risk analysis is performed to calculate the amount of contingency time that is needed to complete the program on time. [End of table] According to our overall analysis, OTIA at least partially met the four characteristics of reliable schedules for the IFT and RVSS schedules and partially or minimally met the four characteristics for the MSC schedule. For example: * Comprehensive: OTIA's schedules for the IFT, RVSS, and MSC programs partially met best practices in terms of being comprehensive. For example, our analysis found that all three program schedules reflected the work that needed to be accomplished for the schedules, and each schedule had duration estimates that at least substantially met best practices. The IFT schedule contained a clear start and a finish milestone, and the RVSS schedule contained at least a clear start milestone. However, the schedules for these programs did not meet other best practices in terms of being comprehensive.
For example, the MSC schedule did not contain fields that map activities to a program work breakdown structure, and the schedules for the IFT and RVSS programs did not fully map all schedule activities to each program's work breakdown structure in accordance with best practices.[Footnote 23] Moreover, the IFT and RVSS schedules did not include the level of detail expected to provide oversight of ongoing construction work, as activities associated with the construction work were reflected in the schedules as milestones, limiting OTIA's ability to monitor the progress of these efforts. Specifically, these activities were reflected in the schedules as a milestone that was a point in time, rather than a range of time, as called for by best practices. In addition, resources were not assigned to some activities in all three schedules. According to best practices, a schedule without resources implies an unlimited number of resources and their unlimited availability. Best practices note that assigning resources to activities across programs can help prevent any future overallocation of resources. * Well constructed: OTIA's schedule for the IFT program substantially met the characteristic of being well constructed; the schedules for the RVSS and MSC programs partially met this characteristic. For example, our analysis found the IFT program schedule had few missing or incorrect logic links, and the critical path--the chain of dependent activities with the longest total duration--was found to be a straightforward, continuous path of activities that depicted the effort driving the key milestones. Our analysis of the RVSS and MSC program schedules found that these schedules had no missing or incorrect logic links. However, we could not verify a reliable critical path that was continuous from the status date to contract award for these schedules. In addition, our analysis shows that each of the three programs' schedules exhibited unreasonable amounts of total float--that is, the amount of time by which an activity can slip before the delay affects the program's estimated finish date--suggesting that the schedules overestimated true schedule flexibility. For example, 25 percent of the activities in the IFT schedule appeared to be able to slip at least 10 working months before affecting the final milestone of the program. * Credible: OTIA's schedules for the IFT and RVSS programs partially met the characteristic of being credible; the MSC program schedule minimally met this characteristic. For example, our analysis found that the IFT and RVSS schedules responded when significant delays were introduced into the planned activities in the schedules; that is, when we tested the robustness of the schedules by extending activity durations, forecasted dates recalculated appropriately. However, the MSC schedule responded to schedule delays in some instances but not in others, and some forecasted dates did not recalculate to account for changes we made in the duration of activities when testing the MSC schedule. Additionally, OTIA performed a risk analysis for the IFT and RVSS programs; however, the IFT and RVSS analyses did not include the risks most likely to delay the project or how much contingency reserve (that is, time held in reserve for potential delays) was needed for each schedule. For the MSC schedule, OTIA did not conduct a schedule risk analysis because, according to program officials, OTIA did not have a tool for conducting schedule risk assessments at the time the MSC schedule was developed.
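For readers unfamiliar with the schedule risk analysis discussed above, its basic mechanics are a Monte Carlo simulation: sample uncertain activity durations many times and count how often the program finishes by its planned date. The following is a minimal sketch under stated assumptions; the activities, three-point (best/most likely/worst) durations, and planned finish date are illustrative, not values drawn from CBP's schedules.

```python
# Minimal sketch of a schedule risk analysis: Monte Carlo simulation
# over three-point duration estimates for a chain of sequential,
# dependent activities on a critical path. All values are illustrative.
import random

# (best, most likely, worst) durations in working days for a
# hypothetical chain of sequential activities.
activities = {
    "source selection": (60, 90, 150),
    "contract award": (20, 30, 60),
    "deployment": (120, 180, 300),
}
planned_finish = 330  # planned total duration in working days

trials = 10_000
hits = 0
for _ in range(trials):
    # Sample each activity from a triangular distribution and sum the
    # chain, since sequential activities add along the critical path.
    total = sum(random.triangular(lo, hi, likely)
                for lo, likely, hi in activities.values())
    if total <= planned_finish:
        hits += 1

print(f"Confidence of finishing within {planned_finish} days: {hits / trials:.0%}")
# The gap between the planned duration and, say, the simulated
# 80th-percentile duration indicates how much contingency reserve
# the schedule needs.
```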
According to best practices, without this analysis, the program office may not sufficiently understand the level of confidence in meeting the program's completion date or identify any potential reserves for contingencies.[Footnote 24] * Controlled: OTIA's schedules for the IFT and the RVSS programs partially met the characteristic of being controlled; the MSC program schedule minimally met this characteristic. For example, our analyses determined all three schedules were well maintained, updated periodically by a trained scheduler, and contained no out-of-sequence activities. We also found that the IFT and the RVSS schedules contained no date anomalies, but the MSC schedule did have anomalies. For example, the MSC schedule contained 13 activities in the past with no actual start or finish dates. Further, our analysis showed that none of the schedules had valid baseline dates for activities or milestones by which management could track current performance. The IFT baseline schedule was originally approved in July 2011, and the baseline for the RVSS was approved in September 2012; however, both of these programs have been delayed. Rebaselining resets the estimated schedule that is used to determine how the program will be held accountable.[Footnote 25] OTIA officials stated that once a program is rebaselined, the office plans to report on the performance of the program based on the revised schedule. However, none of the schedules we assessed contained valid baseline dates that could be used to track on-time, delayed, or accelerated effort. For example, a baseline schedule was not established for the MSC program, and both the IFT and RVSS schedules were missing some baseline dates for activities and milestones. In addition, according to our analyses, none of the three schedules was supported by a schedule baseline document, which is a single document that defines the organization of a schedule, describes the logic of the network, describes the basic approach to managing resources, and provides a basis for all parameters used to calculate dates. OTIA officials stated that the Acquisition Program Baseline for both the IFT and RVSS serves as the baseline schedule document, which defines the cost, schedule, and performance baselines; however, the Acquisition Program Baseline and related guidance present an overview of OTIA schedule policy rather than assumptions specific to individual program schedules.[Footnote 26] OTIA officials stated that they believe the schedules for the IFT, RVSS, and MSC programs are generally reliable, but also stated that these schedules may not fully meet all best practices. OTIA officials stated that they plan to rebaseline the IFT and RVSS program schedules after contract award and the MSC program schedule after contract negotiations. Rebaselining these schedules would help OTIA better address some of the best practices, such as helping to ensure a more full and consistent allocation of resources, addressing gaps in the critical path to program completion, and addressing schedule risk assessments. However, OTIA's plans to rebaseline the schedules would not position OTIA to meet all best practices, which are designed to ensure reliable schedules. According to best practices, to be considered reliable, a schedule must substantially or fully meet all four schedule characteristics. As our analysis indicates, OTIA does not have the information it needs in the schedules to effectively use them in managing and overseeing the IFT, RVSS, and MSC programs.
While OTIA's plans to rebaseline the schedules are positive steps, ensuring that all schedule best practices are applied to the IFT, RVSS, and MSC schedules when updating them could help OTIA better ensure the reliability of the three programs' schedules and could help better position OTIA to identify and address any potential further delays in the programs' commitment dates. OTIA Does Not Have an Integrated Master Schedule for the Plan: OTIA has not developed an Integrated Master Schedule for scheduling, executing, and tracking the work to implement the Plan and its seven programs. Rather, OTIA has used the separate schedules for each individual program (or "project") to manage implementation of the Plan. The use of an Integrated Master Schedule is a well-established practice in program and project management and is a necessary tool for coordination of independently managed projects that have dependencies--including resource dependencies--on one another.[Footnote 27] According to schedule best practices, an Integrated Master Schedule shows the effect of delayed or accelerated government activities on contractor activities, as well as the opposite effect, for multiple programs. In addition, an Integrated Master Schedule that allows managers to monitor all work activities, how long the activities will take, and how the activities are related to one another is a critical management tool for complex systems that involve the incorporation of a number of different projects, such as the Plan.[Footnote 28] OTIA officials stated that an Integrated Master Schedule for the overarching Plan is not needed because the Plan contains individual acquisition programs as opposed to a plan consisting of seven integrated programs. However, collectively, these programs are intended to provide Border Patrol with a combination of surveillance capabilities to assist in achieving situational awareness along the Arizona border with Mexico, as referenced in CBP's planning documents.[Footnote 29] As a document that integrates the planned work, the resources necessary to accomplish that work, and the associated budget, an Integrated Master Schedule provides information for overseeing the schedule as a whole. According to best practices, an Integrated Master Schedule also helps agencies monitor progress against overall completion dates; however, without such a schedule, OTIA has not established a target completion date for the overall Plan. Moreover, while the programs themselves may be independent of one another, the Plan's resources are being shared among the programs. OTIA officials stated that when schedules were developed for the Plan's programs, they assumed that personnel would be dedicated to work on individual programs and not be shared between programs. However, as OTIA has initiated and continued work on the Plan's programs, it has shared resources such as personnel among the programs, contributing, in part, to delays experienced by the programs. For example, with regard to the IFT program, OTIA officials stated that a contracting officer had to be shared with another program. Further, OTIA officials told us that because of resource constraints associated with initiation of the Plan, development of two acquisition documents--an Acquisition Program Baseline and a Life-cycle Cost Estimate--for the MSC program was deferred because the IFT and RVSS programs were deemed higher priorities.
In addition, for the IFT and RVSS programs, planning and deployment activities were delayed because of resource constraints and the lack of dedicated contracting officers to plan and execute the programs' source selection and environmental activities. Developing and maintaining an Integrated Master Schedule for the Plan could give OTIA insight into the current or programmed allocation of resources across all programs, as opposed to attempting to resolve any resource constraints for each program individually. Because OTIA does not have an Integrated Master Schedule for the Plan, it is not well positioned to understand how schedule changes in each individual program could affect implementation of the overall Plan. An Integrated Master Schedule could also help provide CBP a comprehensive view of the Plan; help CBP reliably commit to when the Plan will be fully implemented; and help CBP better assess whether estimated completion dates are realistic in managing the programs' performance. OTIA Does Not Have Life-Cycle Cost Estimates for the Plan or Its Two Highest-Cost Programs That Fully Meet Best Practices: OTIA has developed a rough order of magnitude estimate for the Plan and individual Life-cycle Cost Estimates for the IFT and RVSS programs that meet some but not all best practices for such estimates.[Footnote 30] Best practices for cost estimating and Office of Management and Budget guidance emphasize that reliable cost estimates are important for program approval and continued receipt of annual funding.[Footnote 31] DHS policy similarly provides that Life-cycle Cost Estimates are essential to an effective budget process and form the basis for annual budget decisions. Reliable Life-cycle Cost Estimates reflect four characteristics--they are (1) well documented, (2) comprehensive, (3) accurate, and (4) credible--which encompass 12 best practices.[Footnote 32] For example, a best practice for a credible cost estimate is independently verifying a program's Life-cycle Cost Estimate with an independent cost estimate and reconciling any differences.[Footnote 33] In August 2010, OTIA developed a rough order of magnitude cost estimate for the Plan--a high-level estimate without much detail--which was about $1.54 billion, including approximately $750 million in acquisition costs and approximately $800 million in operations and maintenance costs. In June 2013, OTIA revised this cost estimate for the Plan, estimating the cost at $1.39 billion, including about $480 million in acquisition costs and about $910 million in operations and maintenance costs.[Footnote 34] According to OTIA officials, some of the differences in costs between the August 2010 and June 2013 estimates are attributable to using more current information for the June 2013 estimate.[Footnote 35] Table 4 provides the June 2013 estimated cost and number of units to be procured and deployed for each of the Plan's seven programs. Table 4: Quantities to Be Procured and Deployed for the Arizona Border Surveillance Technology Plan's Programs and Their Estimated Cost as of June 2013: Millions of then-year dollars: Program: Integrated Fixed Towers (IFT); Number of units to be procured and deployed: 52[A]; Estimated cost in June 2013: $960.8 million. Program: Remote Video Surveillance System (RVSS); Number of units to be procured and deployed: 18 new systems and technology upgrades to 47 existing systems[B]; Estimated cost in June 2013: $287.5 million.
Program: Mobile Surveillance Capability (MSC); Number of units to be procured and deployed: 48[C]; Estimated cost in June 2013: $107.2 million. Program: Agent Portable Surveillance System (APSS); Number of units to be procured and deployed: 15; Estimated cost in June 2013: $11.6 million. Program: Mobile Video Surveillance System (MVSS); Number of units to be procured and deployed: 4[D]; Estimated cost in June 2013: $12.6 million. Program: Thermal Imaging Device (TID); Number of units to be procured and deployed: 22[E]; Estimated cost in June 2013: $7.3 million. Program: Unattended Ground Sensors (UGS) and Imaging Sensors (IS); Number of units to be procured and deployed: 545 UGS and 140 IS; Estimated cost in June 2013: $3.0 million[F]. Program: Total; Number of units to be procured and deployed: 891; Estimated cost in June 2013: $1.39 billion. Source: GAO analysis of U.S. Customs and Border Protection (CBP) data. [A] The cost estimate is based on 52 units, but CBP subsequently reduced the number of units to 50, and according to officials, CBP is considering reducing the quantity to 38 because of threats shifting from Arizona to Texas. [B] In November 2013, CBP officials told us that CBP had awarded a contract for 73 units on July 26, 2013, but the agency did not provide a revised cost estimate for the additional units. [C] The 48 units to be procured and deployed exclude 1 unit that CBP received for consideration. Twelve of the 48 units are to be procured under the Plan with annually appropriated funds; the other 36 units are to be procured with American Recovery and Reinvestment Act funds. According to CBP officials, all units will be initially deployed in Tucson, Arizona, and 36 units were to be deployed as of March 2013. [D] Because of a change in threat, these units are to be deployed in Texas instead of Arizona as originally planned. [E] Border Patrol procured an additional 34 units with American Recovery and Reinvestment Act funds. [F] According to OTIA officials, the UGS estimated cost is for existing technology, not the UGS with IS technology as planned, because of problems developing the new IS technology. [End of table] In November 2011, we reported on the results of our analysis of the Plan's August 2010 estimate.[Footnote 36] Specifically, we found that the August 2010 estimate substantially met best practices in terms of being comprehensive and accurate, and partially met best practices in terms of being well documented. For example, we reported that, in terms of being comprehensive, the estimate included documented technical data. In terms of accuracy, we reported that the cost estimate was continually updated and refined as more information became known. However, we also found that the August 2010 estimate minimally met best practices for being credible. For example, CBP officials had not conducted a sensitivity analysis and a cost-risk and uncertainty analysis to determine a level of confidence in the estimate, nor did CBP compare it with an independent estimate. At that time, OTIA officials stated that CBP's approach was to develop and report an initial rough order of magnitude cost estimate for the programs in the Plan, not necessarily a Life-cycle Cost Estimate that met all best practices. In our November 2011 report, we recommended that CBP update its August 2010 cost estimate for the Plan using best practices, so that the estimate would be comprehensive, accurate, well documented, and credible.[Footnote 37] CBP concurred with the recommendation. 
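The sensitivity and cost-risk and uncertainty analyses referred to above can be illustrated with a simple Monte Carlo simulation. The sketch below is a minimal, hypothetical example--the cost elements and their triangular (low, most likely, high) ranges are invented and do not represent CBP's cost model--showing how such an analysis yields a confidence level for a point estimate and the contingency implied by a higher confidence target.

    import random

    # Hypothetical cost elements: (low, most likely, high) in millions of dollars.
    cost_elements = {
        "acquisition": (400.0, 480.0, 650.0),
        "operations_and_maintenance": (750.0, 910.0, 1200.0),
    }

    def simulated_total():
        # random.triangular takes (low, high, mode).
        return sum(random.triangular(lo, hi, ml) for lo, ml, hi in cost_elements.values())

    trials = sorted(simulated_total() for _ in range(10_000))
    point_estimate = sum(ml for _, ml, _ in cost_elements.values())

    # Confidence level of the point estimate: share of trials at or below it.
    confidence = sum(t <= point_estimate for t in trials) / len(trials)
    p80 = trials[int(0.80 * len(trials)) - 1]  # approximate 80th-percentile cost

    print(f"Point estimate: {point_estimate:,.0f} (about the {confidence:.0%} confidence level)")
    print(f"80 percent confidence cost: {p80:,.0f}; implied contingency: {p80 - point_estimate:,.0f}")

Because both triangular distributions here are skewed toward the high end, the point estimate built from most likely values falls below the 50 percent confidence level, which is the kind of insight a cost-risk and uncertainty analysis is meant to surface.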
In November 2012, OTIA officials told us that CBP no longer intends to develop a Life-cycle Cost Estimate for the Plan that meets all best practices. OTIA officials also stated that they used a risk-based approach to improve cost-estimating certainty and confidence by focusing on the Life-cycle Cost Estimates for the IFT and RVSS programs, which compose 90 percent of the Plan's estimated cost. According to the officials, developing a Life-cycle Cost Estimate for the Plan that followed all best practices at this point in the acquisition cycle would not contribute much cost management benefit because a number of programs are under contract and units are being deployed to the field. However, as we recommended in November 2011, we continue to believe that a Life-cycle Cost Estimate for the Plan, developed using best practices, is needed to ensure that the estimate is comprehensive, accurate, well documented, and credible to help the agency and Congress fully understand the impacts of the Plan's various programs. Moreover, CBP's June 2013 revised cost estimate for the Plan does not address the concerns we identified in November 2011 with CBP's original cost estimate. For example, the IFT and RVSS programs compose 90 percent of the Plan's cost in the June 2013 Life-cycle Cost Estimate; however, OTIA has not verified its Life-cycle Cost Estimates for the IFT and RVSS programs with independent cost estimates and reconciled any differences with each program's respective Life-cycle Cost Estimate, consistent with best practices.[Footnote 38] Furthermore, the remainder of the June 2013 Life-cycle Cost Estimate is not fully documented. The costs for programs other than the IFT and RVSS are provided as a summary program cost without a detailed description. In contrast, the IFT and RVSS Life-cycle Cost Estimates included backup documentation, such as labor hours and methodology. After CBP developed the initial cost estimate for the Plan in August 2010, CBP developed separate Life-cycle Cost Estimates for the IFT and RVSS programs in January and March 2012, respectively. The estimates for the IFT and RVSS programs met some but not all best practices for cost estimates. Specifically, our analysis shows that, in developing these estimates, CBP partially documented the data used in the cost model for the IFT's Life-cycle Cost Estimate and fully documented the cost model for the RVSS's Life-cycle Cost Estimate. CBP also conducted a sensitivity analysis and a risk and uncertainty analysis to determine the level of confidence in both Life-cycle Cost Estimates so that contingency funding could be established relative to quantified risk. However, our analysis showed that CBP did not verify its draft Life-cycle Cost Estimates for the IFT and RVSS programs with independent cost estimates and reconcile any differences with each program's respective Life-cycle Cost Estimate, consistent with best practices. According to OTIA officials, the IFT program's Life-cycle Cost Estimate will be updated after the contract is awarded; the cost model for the updated Life-cycle Cost Estimate will be fully documented in accordance with best practices for cost estimating; and DHS's Office of Program Accountability and Risk Management is expected to review the updated IFT Life-cycle Cost Estimate. Also, OTIA officials stated that they expect to update the RVSS Life-cycle Cost Estimate and receive approval for it in February 2014.
However, OTIA is uncertain whether the updated IFT and RVSS Life-cycle Cost Estimates will be verified with independent cost estimates and any differences reconciled with the respective updated Life-cycle Cost Estimates. Specifically, OTIA officials stated that the IFT contract award will drive changes to the scope, schedule, and cost/budget baseline for the IFT program; CBP plans to update the Life-cycle Cost Estimate with programming and cost assumptions; and CBP plans to provide the updated cost estimate to the department as part of a revised submission of the Acquisition Program Baseline document. For the RVSS program, OTIA officials stated that the contract award resulted in changes that required updates to and reconciliation between the Cost Estimating Baseline Document and the Life-cycle Cost Estimate for the program's scope, schedule, and cost/budget baseline. CBP intends to update the RVSS program's Life-cycle Cost Estimate with programming and cost assumptions during the second quarter of fiscal year 2014 and provide the updated cost estimate to DHS for review. However, according to OTIA officials, as of November 2013, the agency had not yet determined whether to independently verify or validate the IFT and RVSS Life-cycle Cost Estimates. Because CBP no longer intends to develop a Life-cycle Cost Estimate for the entire Plan, independently verifying the updated IFT and RVSS Life-cycle Cost Estimates and reconciling any differences, in accordance with cost-estimating best practices, could help better ensure the reliability of each estimate. CBP Followed Some Aspects of DHS Acquisition Guidance, but Did Not Fully Complete Documents for Acquisition Decisions Consistent with the Guidance: CBP Followed Some Aspects of DHS Acquisition Guidance to Acquire Commercial-Off-the-Shelf Products: Consistent with DHS acquisition guidance, CBP tailored the DHS Acquisition Life-cycle Framework for the IFT, RVSS, and MSC programs, primarily because the agency's strategy for the three programs includes acquiring nondevelopmental technologies, preferably commercial-off-the-shelf systems, as opposed to developing technologies. As a result, rather than entering the DHS acquisition framework at Acquisition Decision Event 1, when a system includes technology development, the IFT program entered at combined Acquisition Decision Events 2B/3, and the RVSS and MSC programs entered at Acquisition Decision Event 2B.[Footnote 39] In pursuing its strategy to acquire nondevelopmental systems for the Plan's three highest-cost programs, OTIA identified requirements and capabilities for each program, consistent with DHS acquisition guidance. Specifically, OTIA identified requirements for the IFT and RVSS programs that were approved in 2012, and capabilities for the MSC program that were developed in 2009. As part of the strategy to acquire commercial-off-the-shelf systems, CBP traded off--that is, reduced--some requirements for the RVSS and expects to trade off some requirements for the IFT for cost-effectiveness or schedule reasons.
For example, with regard to the RVSS, OTIA traded off two requirements because, according to OTIA officials, they were not offered with the selected RVSS, which presented the best value to the government while meeting as many requirements as possible.[Footnote 40] According to DHS Acquisition Management Directive 102-01 guidance, as part of the acquisition process, a program office may make trade-offs among performance, life-cycle cost, schedule, and risk. For example, the guidance states that a small reduction in performance that does not impair the mission might result in a large cost reduction. DHS and CBP Did Not Consistently Approve Key Acquisition Documents in Accordance with Departmental Guidance: For the Plan's three highest-cost programs, DHS and CBP did not consistently approve key acquisition documents before or at the Acquisition Decision Events, in accordance with DHS's acquisition guidance. An important aspect of an Acquisition Decision Event is the review and approval of key acquisition documents critical to establishing the need for a program, its operational requirements, an acquisition baseline, and test and support plans, according to DHS guidance. DHS Acquisition Management Directive 102-01--and the associated DHS Instruction Manual 102-01-001 and appendixes--requires program offices to develop documents demonstrating critical knowledge that would help leaders make better-informed investment decisions when managing individual programs. The DHS guidance provides information for preparing acquisition documents, which require department- or component-level approval before a program moves to the next acquisition phase. In a September 2012 report, we found that while DHS had initiated efforts to validate required acquisition documents in a timely manner at major milestones, DHS leadership had authorized and continued to invest in major acquisition programs even though the vast majority of those programs lacked foundational documents demonstrating the knowledge needed to help manage risks and measure performance.[Footnote 41] We concluded in September 2012 that this limited DHS's ability to proactively identify and address the challenges facing individual programs. We recommended, among other things, that DHS ensure all major acquisition programs fully comply with DHS acquisition policy by obtaining department-level approval for key acquisition documents before approving their movement through the acquisition life cycle. DHS concurred and since the time of our September 2012 report has approved additional acquisition documents. However, DHS has not yet demonstrated progress in obtaining department-level approval for most of its major acquisition programs' key acquisition documents. On the basis of our analysis of the IFT, RVSS, and MSC programs under the Plan, the DHS Acquisition Decision Authority approved the IFT program, and the CBP Acquisition Decision Authority approved the RVSS and MSC programs, to proceed to subsequent phases in the Acquisition Life-cycle Framework without approving all six required acquisition documents for each program. We also found that one document for the IFT program, five documents for the RVSS program, and two documents for the MSC program were approved only after the programs received authority to proceed to the next phase. Table 5 provides a comparison of when key acquisition documents were required to be approved and when they were approved for the IFT, RVSS, and MSC programs.
Table 5: Comparison of When Key Acquisition Documents Were Required to Be Approved and When They Were Approved for the Integrated Fixed Towers (IFT), Remote Video Surveillance System (RVSS), and Mobile Surveillance Capability (MSC) Programs, as of November 2013: Document: Acquisition Plan--provides a top-level plan for the overall acquisition approach; IFT: Date required: for Acquisition Decision Event 2B/3 on 3/15/12; Date approved: 3/13/12; RVSS: Date required: for Acquisition Decision Event 2B on 11/3/11; Date approved: 2/24/12; [Shaded] MSC: Date required: for Acquisition Decision Event 2B on 7/22/10 and Acquisition Decision Event 3 on 9/27/12; Date approved: 4/20/10. Document: Acquisition Program Baseline--establishes a program's critical baseline cost, schedule, and performance parameters; IFT: Date approved: 3/15/12; RVSS: Date approved: 9/6/12; [Shaded] MSC: Date approved: Not yet approved[D]. [Shaded] Document: Integrated Logistics Support Plan--defines the strategy to ensure supportability and sustainment of a future capability; IFT: Date approved: 3/15/12; RVSS: Date approved: 10/9/12; [Shaded] MSC: Date approved: 5/17/13. [Shaded] Document: Life-cycle Cost Estimate--provides an exhaustive and structured accounting of all resources and associated cost elements required to develop, produce, deploy, and sustain a particular program; IFT: Date approved: Not yet approved[A]; RVSS: Date approved: Not yet approved[C]; [Shaded] MSC: Date approved: Not yet approved[E].[Shaded] Document: Operational Requirements Document--provides a number of performance parameters that a program must meet to provide useful capability to the operator; IFT: Date approved: 3/15/12; RVSS: Date approved: 4/20/12; [Shaded] MSC: Date approved: 12/29/09[F]. Document: Test and Evaluation Master Plan--documents the overarching test and evaluation approach for the acquisition program and describes developmental and operational test and evaluation needed to determine a system's technical performance, operational effectiveness and suitability, and limitations; IFT: Date approved: 11/27/13[B]; [Shaded] RVSS: Date approved: 5/8/12; [Shaded] MSC: Date approved: 8/30/11[G].[Shaded] Source: GAO analysis of U.S. Customs and Border Protection (CBP) information. Notes: Shaded documents were not approved when required. [A] CBP has a Life-cycle Cost Estimate for the IFT dated January 6, 2012, but it has not been approved by the Department of Homeland Security (DHS); CBP officials stated that they plan to update the IFT Life-cycle Cost Estimate after the IFT contract is awarded. [B] Although the IFT Test and Evaluation Master Plan was originally approved on March 15, 2012, the approval was rescinded on June 11, 2012. CBP updated the IFT Test and Evaluation Master Plan and the DHS Director of Operational Test and Evaluation approved it on November 27, 2013. [C] CBP has a draft Life-cycle Cost Estimate for the RVSS dated March 8, 2012, which CBP officials stated is expected to be completed and approved in the second quarter of fiscal year 2014. [D] According to CBP officials, an Acquisition Program Baseline for the MSC is to be approved by the second quarter of fiscal year 2014. [E] CBP officials stated that they are developing a Life-cycle Cost Estimate for the MSC's operations and maintenance costs, which was expected to be completed by December 2013. [F] CBP did not approve an Operational Requirements Document for the MSC; rather, CBP approved a capabilities matrix. 
[G] The MSC Test and Evaluation Master Plan was approved later than required for Acquisition Decision Event 2B and in time for Acquisition Decision Event 3. [End of table] We discuss the status of key acquisition documents for the three highest-cost programs below. IFT program. Our analyses found that the DHS Acquisition Decision Authority approved four of the six documents required at Acquisition Decision Event 2B/3--the Acquisition Plan, Acquisition Program Baseline, Integrated Logistics Support Plan, and Operational Requirements Document--but did not approve two others--the Life-cycle Cost Estimate and Test and Evaluation Master Plan. At the time of the Acquisition Decision Event, CBP had a Life-cycle Cost Estimate for the IFT, but the cost estimate had not yet been approved by DHS. According to OTIA officials, the Life-cycle Cost Estimate for the IFT was discussed at the Acquisition Decision Event 3 meeting and approved by the DHS Under Secretary for Management and DHS's Office of Program Accountability and Risk Management. However, CBP did not provide documentation showing that the estimate was approved by DHS. The DHS Director of Operational Test and Evaluation approved the revised IFT Test and Evaluation Master Plan on November 27, 2013, over 18 months after it was required to be approved.[Footnote 42] DHS and CBP officials attributed the delay in approving the Test and Evaluation Master Plan, in part, to discussions within CBP about the type and level of testing to be conducted on the IFTs. Specifically, CBP officials stated that a June 2012 version of the draft Test and Evaluation Master Plan did not include robust operational test and evaluation because of the IFT program's strategy to acquire a nondevelopmental system (sometimes referred to as a commercial-off-the-shelf system). As a result, Border Patrol requested that rigorous, disciplined testing be included in the Test and Evaluation Master Plan to obtain familiarization with, and confidence in, the system and establish baseline performance information. According to DHS's acquisition guidance, the Test and Evaluation Master Plan is important because it describes the strategy for conducting developmental and operational testing to evaluate a system's technical performance, including its operational effectiveness and suitability. However, the IFT Test and Evaluation Master Plan approved by DHS in November 2013 does not describe testing to evaluate the operational effectiveness and suitability of the system. Rather, the Test and Evaluation Master Plan describes CBP's plans to conduct a limited user test of the IFT. According to the Test and Evaluation Master Plan, the limited user test will be designed to determine the IFT's mission contribution. According to OTIA officials and the Test and Evaluation Master Plan, this testing is planned to occur over 30 days in the environmental conditions present at one site--the Nogales station. CBP plans to conduct limited user testing for the IFT under the same process that is typically performed in any operational test and evaluation, according to the Test and Evaluation Master Plan. The November 2013 IFT Test and Evaluation Master Plan notes that, because the IFT acquisition strategy is to acquire nondevelopmental IFT systems from the marketplace, a limited user test will provide Border Patrol with the information it needs to determine the mission contributions from the IFTs, and thus CBP does not plan to conduct more robust testing.
However, this approach is not consistent with DHS's acquisition guidance, which states that even for commercial-off-the-shelf systems, operational test and evaluation should occur in the environmental conditions in which a system will be used before a full production decision for the system is made and the system is subsequently deployed. This guidance also states that for commercial-off-the-shelf systems, operational tests should be conducted to ensure that the systems satisfy user-defined requirements. In addition, DHS guidance states that the primary purpose of test and evaluation is to provide timely and accurate information to managers, decision makers, and other stakeholders to support research, development, and acquisition, in a manner that reduces programmatic financial, schedule, and performance risk.[Footnote 43] We recognize the need to balance the cost and time of testing to determine the IFT's operational effectiveness and suitability against the benefits to be gained from such testing. However, revising the Test and Evaluation Master Plan to include more robust testing to determine operational effectiveness and suitability that more fully accounts for the various environmental conditions under which the IFTs will operate could better position CBP to evaluate IFT capabilities before moving to full production for the systems, help provide CBP with information on the extent to which the towers satisfy Border Patrol's user requirements, and help reduce potential program risks. In particular, although the limited user test should help provide CBP with information on the IFTs' mission contribution and how Border Patrol can use the system in its operations, the limited user test does not position CBP to obtain information on how the IFTs may perform under the various environmental conditions the system could face once deployed. For example, in November 2013, the DHS Director of Operational Test and Evaluation stated that testing the IFT at only one location on a clear, warm day without much wind would not produce representative results for days when it would be, for example, rainy, windy, freezing, or snowy, or when there was lightning. Likewise, he said testing in one location, such as Nogales, would not necessarily produce the same results as testing in Tucson because of the different terrains of the two locations. Conducting limited user testing in one area in Arizona--the Nogales station--for 30 days could limit the information available to CBP on how the IFT may perform in other conditions and locations along the Arizona border with Mexico. As of November 2013, CBP intends to deploy IFTs to 50 locations in southern Arizona, which can include different terrain and differences in climate throughout the year. Although the IFT program is not the same as SBInet, according to the Plan, the IFTs are to be deployed to locations with environmental and terrain conditions similar to those of SBInet towers, and IFT and SBInet systems may have similar types of technologies, such as cameras and radar. CBP previously encountered testing issues with SBInet.
For example, in a January 2010 report, we found that while DHS's approach to SBInet testing appropriately consisted of a series of progressively expansive developmental and operational events, the test plans and procedures for some test events were not defined in accordance with guidance.[Footnote 44] In January 2010, we concluded that effective testing was integral to successfully acquiring and deploying a large-scale, complex system like SBInet. We further concluded that to do less unnecessarily increased the risk of problems going undetected until late in the system's life cycle, such as when it was being accepted for use. In addition, in a November 2011 report, we found that the U.S. Army Test and Evaluation Command (ATEC) operationally tested SBInet at Tucson and that testing revealed challenges regarding the effectiveness and suitability of the technology for border surveillance.[Footnote 45] Among other things, this testing found that the rugged, restrictive terrain and weather conditions prevalent where SBInet is deployed affected the performance of the system's radar, which affected success in detecting, identifying, and classifying items of interest. Revising the Test and Evaluation Master Plan to more fully test the IFT in the various environmental conditions in which it will be used to determine operational effectiveness and suitability before IFTs move to full production, in accordance with DHS acquisition guidance, could help provide CBP with more complete information on how the IFTs will operate under a variety of conditions. It could also help better position CBP to understand how the IFTs will meet Border Patrol's operational requirements for the towers in contributing to Border Patrol's border security mission. Without conducting operational testing in accordance with DHS guidance, the IFT program may be at risk of not meeting Border Patrol operational needs. RVSS program. The CBP Acquisition Decision Authority approved the program at Acquisition Decision Event 2B; however, the official had not approved any of the six documents required by DHS acquisition guidance at the time of that event. According to OTIA officials, the Acquisition Decision Authority approved the program for this Acquisition Decision Event because all of the necessary programmatic information was sufficiently developed and coordinated to support this decision. However, the Acquisition Decision Authority did not approve five of the documents until months after this event, and a sixth document, a Life-cycle Cost Estimate, remained in draft form in November 2013--2 years after its required approval date. According to OTIA officials, the RVSS Life-cycle Cost Estimate is expected to be completed and approved in the second quarter of fiscal year 2014 and provided to DHS for review. MSC program. The CBP Acquisition Decision Authority approved two of the six required documents by Acquisition Decision Event 2B--the Acquisition Plan and Operational Requirements Document. However, the Integrated Logistics Support Plan was not approved until about 21 months after Acquisition Decision Event 2B. Also, the Acquisition Program Baseline was not expected to be approved until the second quarter of fiscal year 2014, more than 3 years after it was required to be approved for Acquisition Decision Event 2B and at least 16 months after it was required to be approved for Acquisition Decision Event 3.
Furthermore, a Life-cycle Cost Estimate for the MSC's operations and maintenance costs was expected to be completed in late 2013, more than 3 years after it was required to be approved for Acquisition Decision Event 2B. CBP Has Taken Some Steps to Assess Performance and Identify Mission Benefits, but Does Not Capture Complete Data on the Contributions of Its Surveillance Technologies: CBP Has Taken Steps to Assess the Performance of SBInet Technologies: Since we last reported on CBP's efforts to assess the performance of its SBInet surveillance systems in November 2011, CBP has taken steps to assess the performance of these technologies.[Footnote 46] In November 2011, we found that CBP had not conducted a post-implementation review or developed a plan to address SBInet operational test outcomes. Specifically, we found that CBP had not addressed the findings of ATEC's March 2011 operational test of the SBInet system at Tucson, which revealed challenges regarding the effectiveness and suitability of the technology for border surveillance and made nine recommendations to address performance issues.[Footnote 47] At that time, CBP officials stated that the agency did not conduct a post-implementation review or develop a plan to address the ATEC test results because the Secretary of Homeland Security canceled SBInet in January 2011. In November 2011, we recommended that CBP, in accordance with DHS guidance, conduct a post-implementation review and operational assessment of its SBInet system, and assess the costs and benefits of taking action on the results of ATEC's operational test.[Footnote 48] In making this recommendation, we concluded that conducting such a review, and weighing the costs and benefits of taking action on recommendations resulting from ATEC's test of the SBInet system, could inform CBP's decisions about future deployments of similar technologies, such as the IFTs. In response to our November 2011 recommendation, OTIA tasked the Johns Hopkins University Applied Physics Laboratory with conducting an independent post-implementation review of its SBInet Block 1 system. In January 2013, CBP released the results of the SBInet Block 1 Post Implementation Review (PIR), an assessment of the performance of its two SBInet surveillance system locations at Tucson and Ajo.[Footnote 49] The PIR concluded that CBP's SBInet surveillance system has enhanced overall situational awareness within system viewsheds, improved agent safety, and been operationally available and effective, with costs consistent with those anticipated for the system.[Footnote 50] For instance, the PIR concluded that the system broadened agents' situational awareness beyond the tactical, agent-on-the-ground sphere of awareness and increased their ability to monitor incursions. The PIR also made five recommendations for CBP to improve future operational assessments of its SBInet surveillance system and to plan for new acquisition sensor deployments, such as for CBP to conduct a more detailed assessment of the impacts of Block 1 systems and develop more on-the-job agent training.[Footnote 51] According to OTIA and Border Patrol officials, as of May 2013, CBP is in the process of documenting and reviewing each recommendation outlined in the PIR and intends to document its plans to address those recommendations that OTIA and the Office of Border Patrol determine need corrective action.
However, these officials stated that some of the findings and recommendations outlined in the PIR will not be explicitly addressed or applied to future deployment efforts. For instance, according to officials, because the technologies planned for deployment under the Plan are commercial-off-the-shelf products, the PIR finding about documenting environmental factors, such as weather and terrain, that impede system performance will not apply to the technologies to be deployed under the Plan, as those technologies include requirements on documentation of environmental factors. Border Patrol officials further stated that the contractor and Border Patrol will have a process to determine the best deployment locations for the technologies to be deployed under the Plan, given the variable terrain. Moreover, Border Patrol officials stated that Tucson sector officials have been assigned responsibility for determining the extent to which corrective actions are needed to address each recommendation outlined in the PIR because these sector officials have a better understanding of the environment in which the SBInet system is operating. According to OTIA officials, the agency plans to conduct annual operational assessments of its SBInet system. As additional surveillance technologies are deployed, we will continue to monitor Border Patrol's efforts to address issues identified by the PIR as part of our recommendation follow-up process. In addition, the PIR concluded that as of January 2013, six of the nine recommendations outlined in ATEC's operational test have either been addressed or are in the process of being addressed. The ATEC recommendations that remain to be addressed include, for example, addressing software reliability, improving sustainability costs, and reducing maintenance issues. OTIA officials stated that the agency plans to take actions to address the remaining three recommendations by, for example, pursuing alternative technical solutions to extend the life cycle of the SBInet system and improving sustainability costs by reducing the contractor's responsibility for field maintenance and other functions through a transition to government support in 2014. CBP Is Not Capturing Complete Data on the Contributions of Its Surveillance Technologies: CBP is not capturing complete asset assist data on the contributions of its surveillance technologies to apprehensions and seizures, and these data are not consistently recorded by Border Patrol agents or across locations. Although CBP has a field within the EID for maintaining data on whether technological assets, such as SBInet surveillance towers, and nontechnological assets, such as canine teams, assisted or contributed to the apprehension of illegal entrants and the seizure of drugs and other contraband, according to CBP officials, Border Patrol agents are not required to record these data. This limits CBP's ability to collect, track, and analyze available data on asset assists to help monitor the contribution of surveillance technologies, including its SBInet system, to Border Patrol apprehensions and seizures and inform resource allocation decisions.
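As a simple illustration of the completeness analysis described below, the following sketch tallies how often an asset assist field is populated in a set of event records. The records and field names here are hypothetical, not EID's actual schema; and, as discussed below, an empty field by itself cannot distinguish an event in which no asset contributed from one in which a contribution simply went unrecorded.

    from collections import Counter

    # Hypothetical apprehension event records -- not EID's actual schema.
    apprehension_events = [
        {"event_id": 1, "sector": "Tucson", "asset_assists": ["Cameras"]},
        {"event_id": 2, "sector": "Tucson", "asset_assists": []},    # not recorded
        {"event_id": 3, "sector": "Tucson", "asset_assists": ["Canine", "Cameras"]},
        {"event_id": 4, "sector": "Yuma", "asset_assists": None},    # not recorded
    ]

    total = Counter()
    recorded = Counter()
    for event in apprehension_events:
        total[event["sector"]] += 1
        if event["asset_assists"]:  # an empty or missing field counts as not recorded
            recorded[event["sector"]] += 1

    for sector, n in total.items():
        pct_missing = 100 * (n - recorded[sector]) / n
        print(f"{sector}: asset assist not recorded for {pct_missing:.0f} percent of {n} events")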
Our analysis of EID asset assist data for apprehensions and seizures in the Tucson and Yuma sectors from fiscal year 2010 through June 2013 shows that information on asset assists was generally not recorded for all apprehension and seizure events.[Footnote 52] For instance, for the 166,976 apprehension events reported by Border Patrol across the Tucson sector from fiscal year 2010 through June 2013, an asset assist was not recorded for 115,517 (or about 69 percent) of these events. In the Yuma sector, of the 8,237 apprehension events reported by Border Patrol agents during the same period, an asset assist was not recorded for 7,150 (or about 87 percent) of these events. Similarly, for seizure events reported across the Tucson and Yuma sectors from fiscal year 2010 through June 2013, asset assists were not recorded for about 32 percent and about 67 percent of events, respectively. According to Border Patrol officials, in the absence of requirements for Border Patrol agents to record data on asset assists, differences in the reporting of these data at the station level are likely attributable to the emphasis placed on the recording of these data by supervisory agents. Appendix IV contains summary statistics on the extent to which data on asset assists are recorded for apprehensions and seizures across the Tucson and Yuma sectors from fiscal year 2010 through June 2013. Because data on asset assists are not required to be reported, it is unclear whether the data were not reported because an asset was not a contributing factor in the apprehension or seizure or because an asset was a contributing factor but was not recorded by agents. As a result, CBP is not positioned to determine the contribution of surveillance technologies to the apprehension of illegal entrants and seizure of drugs and other contraband during the specified time frame.[Footnote 53] As shown in figures 3 and 4, while the recording of asset assists increased from fiscal year 2010 through June 2013--from about 18 percent to about 47 percent in the Tucson sector and from about 8 percent to about 21 percent in the Yuma sector--asset assists were not reported for more than one-half of the apprehension event records for the Tucson sector, and about four-fifths for the Yuma sector, in the first three quarters of fiscal year 2013. Figure 3: Percentage of Asset Assists Reported across the Tucson Sector for Apprehension Events, Fiscal Year 2010 through June 2013: [Refer to PDF for image: vertical bar graph] (N = 166,976) Fiscal year: 2010; Asset assist not reported: 82.1%; Technology asset assist: 7.7%; Other asset assist: 10.2%. Fiscal year: 2011; Asset assist not reported: 72.8%; Technology asset assist: 13.2%; Other asset assist: 13.9%. Fiscal year: 2012; Asset assist not reported: 60.8%; Technology asset assist: 21.2%; Other asset assist: 17.9%. Fiscal year: 2013; Asset assist not reported: 53.3%; Technology asset assist: 25.7%; Other asset assist: 21%. Source: GAO analysis of Customs and Border Protection data. Note: Percentages may not add to 100 because of rounding. For the purposes of this analysis, we identify asset assists involving technology as those Border Patrol technological assets identified within the Border Patrol's "asset assists" data field for which Border Patrol continues to make significant investments and that are included as part of the Arizona Border Surveillance Technology Plan.
Thus, technology assets that may be selected from the asset assists field's drop-down menu include Cameras, Mobile Surveillance Systems, Scope Trucks, and Unattended Ground Sensors. According to Border Patrol headquarters officials, agents identifying "Cameras" are most likely attributing the asset assist to either SBInet or Remote Video Surveillance System towers. Because agents may identify assists from more than one type of asset within the asset assists data field, assists from technological assets could be the result of technology assets alone or some combination of other asset assists. [End of figure] Figure 4: Percentage of Asset Assists Reported across the Yuma Sector for Apprehension Events, Fiscal Year 2010 through June 2013: [Refer to PDF for image: vertical bar graph] Fiscal year: 2010; Asset assist not reported: 92.3%; Technology asset assist: 2.1%; Other asset assist: 5.6%. Fiscal year: 2011; Asset assist not reported: 87%; Technology asset assist: 3.7%; Other asset assist: 9.3%. Fiscal year: 2012; Asset assist not reported: 85%; Technology asset assist: 3.7%; Other asset assist: 11.3%. Fiscal year: 2013; Asset assist not reported: 79.2%; Technology asset assist: 5.1%; Other asset assist: 15.7%. Note: For the purposes of this analysis, we identify asset assists involving technology as those Border Patrol technological assets identified within the Border Patrol's "asset assists" data field for which Border Patrol continues to make significant investments and that are included as part of the Arizona Border Surveillance Technology Plan. Thus, technology assets that may be selected from the asset assists field's drop-down menu include Cameras, Mobile Surveillance Systems, Scope Trucks, and Unattended Ground Sensors. According to Border Patrol headquarters officials, agents identifying "Cameras" are most likely attributing the asset assist to either SBInet or Remote Video Surveillance System towers. Because agents may identify assists from more than one type of asset within the asset assists data field, assists from technological assets could be the result of technology assets alone or some combination of other asset assists. [End of figure] Border Patrol officials did not specify why the agency does not require the recording and tracking of data on asset assists. However, Border Patrol officials stated that agents are encouraged to select the appropriate asset assist code when assets contribute to an apprehension or seizure. Border Patrol officials also stated that although they do not regularly track and analyze data on asset assists, including those from surveillance technologies, these data are tracked and analyzed on an ad hoc basis to help determine Border Patrol's resource allocation and operational needs--more specifically, what resources are available at the strategic level to help mitigate the threat of illegal entrants, drugs, and other contraband. Moreover, an Associate Chief at Border Patrol told us that while data on asset assists are not systematically recorded and tracked, Border Patrol recognizes the benefits of assessing asset assist data, including those from surveillance technologies such as the SBInet system, because these data, in combination with other data such as numbers of apprehensions and seizures, are used on a limited basis to help the agency adjust its acquisition plans prior to deploying resources, thereby enabling the agency to make more informed deployment decisions.
Border Patrol also uses these other data, such as numbers of apprehensions and seizures, to help inform assessments of its efforts to secure the border. Border Patrol officials cautioned that while asset assist data are the only available data directly linking apprehensions and seizures to the agency's surveillance technologies, these data do not enable direct attributions of the SBInet system's contribution to border security strategic goals because of several factors, such as changes in the flows of illegal entrants across sectors or in economic conditions in the United States and Mexico. Moreover, the officials said that surveillance technologies such as SBInet and RVSS towers enable the detection of illegal activity, and accordingly, it is the agents who identify and track the illegal activity and ultimately apprehend illegal entrants and seize contraband. Despite the absence of complete data on the contribution of CBP's surveillance technologies to apprehensions and seizures, our analysis of Border Patrol's data on the location of apprehensions and seizures provides some insights into where Border Patrol apprehensions and seizures occurred in relation to the locations of its two highest-cost surveillance technologies--SBInet towers and RVSS.[Footnote 54] For example, our analysis of apprehension events data, as determined by Geographic Information System data entered by Border Patrol agents when recording apprehensions and seizures, shows that across the Tucson sector from fiscal year 2010 through June 2013, of the 166,976 apprehension events, 71,397 (or about 43 percent) occurred within the camera and radar range of SBInet and RVSS towers.[Footnote 55] As shown in figure 5, the percentage of apprehension events occurring within the range of both SBInet and RVSS surveillance technologies has changed little, if at all, over time. Apprehension events occurring within the radar and camera range of SBInet towers remained relatively unchanged, while apprehension events occurring within the range of RVSS towers increased by about 1 percentage point during our specified time frame. Figure 5: Percentage of Apprehension Events Occurring within the Detection Ranges of Remote Video Surveillance Systems (RVSS) and Secure Border Initiative Network (SBInet) across the Tucson Sector, Fiscal Year 2010 through June 2013: [Refer to PDF for image: vertical bar graph] Fiscal year: 2010; Range of SBInet and RVSS towers: 41.4%; RVSS tower range: 34%; Camera range of SBInet tower: 6.5%; Radar range of SBInet tower: 7.4%. Fiscal year: 2011; Range of SBInet and RVSS towers: 43.5%; RVSS tower range: 36.3%; Camera range of SBInet tower: 6.5%; Radar range of SBInet tower: 7.3%. Fiscal year: 2012; Range of SBInet and RVSS towers: 44%; RVSS tower range: 35.3%; Camera range of SBInet tower: 8%; Radar range of SBInet tower: 8.7%. Fiscal year: 2013; Range of SBInet and RVSS towers: 42.7%; RVSS tower range: 35.4%; Camera range of SBInet tower: 6.5%; Radar range of SBInet tower: 7.2%. Source: GAO analysis of Customs and Border Protection data. [End of figure] Moreover, of the 115,517 apprehension events in the Tucson sector that do not have data on asset assists, 8,751 (or about 8 percent) occurred within the camera range, and 9,818 (or about 9 percent) occurred within the radar range, of SBInet towers. In addition, data on asset assists were not recorded for 35,147 (or about 30 percent) of apprehension events within the range of RVSS towers.
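The within-range determination underlying this analysis can be sketched as a point-in-radius test using great-circle distance. In the minimal example below, the tower coordinates and detection ranges are invented placeholders; the actual analysis relied on Geographic Information System data recorded by agents and on CBP's tower locations and range specifications.

    import math

    def haversine_km(lat1, lon1, lat2, lon2):
        # Great-circle distance between two points, in kilometers.
        earth_radius_km = 6371.0
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dp = math.radians(lat2 - lat1)
        dl = math.radians(lon2 - lon1)
        a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
        return 2 * earth_radius_km * math.asin(math.sqrt(a))

    # Hypothetical towers: (name, latitude, longitude, camera range km, radar range km).
    towers = [
        ("tower_A", 31.40, -110.90, 10.0, 13.0),
        ("tower_B", 31.33, -110.75, 10.0, 13.0),
    ]

    def ranges_covering(lat, lon):
        # Return which towers' detection ranges cover an event location.
        hits = []
        for name, tlat, tlon, camera_km, radar_km in towers:
            distance = haversine_km(lat, lon, tlat, tlon)
            if distance <= camera_km:
                hits.append((name, "camera range"))
            elif distance <= radar_km:
                hits.append((name, "radar range only"))
        return hits

    print(ranges_covering(31.42, -110.88))  # -> [('tower_A', 'camera range')]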
Table 6 shows the reporting of asset assists for apprehension and seizure events occurring across the Tucson sector within the range of SBInet and RVSS towers during our specified time period. Table 6: Reporting of Asset Assists for Apprehension and Seizure Events Occurring across the Tucson Sector within the Range of Secure Border Initiative Network (SBInet) and Remote Video Surveillance System (RVSS) Towers, from Fiscal Year 2010 through June 2013: Camera range of SBInet tower: Within range; Apprehension events: Asset assist recorded: Number: 2,677; Percent: 5.2; Asset assist not recorded: Number: 8,751; Percent: 7.6; Total: 11,428; Seizure events: Asset assist recorded: Number: 957; Percent: 6.9; Asset assist not recorded: Number: 518; Percent: 8.1; Total: 1,475. Not in range; Apprehension events: Asset assist recorded: Number: 48,782; Percent: 94.8; Asset assist not recorded: Number: 106,766; Percent: 92.4; Total: 155,548; Seizure events: Asset assist recorded: Number: 12,959; Percent: 93.1; Asset assist not recorded: Number: 5,888; Percent: 91.9; Total: 18,847. Radar range of SBInet tower: Within range; Apprehension events: Asset assist recorded: Number: 2,942; Percent: 5.7; Asset assist not recorded: Number: 9,818; Percent: 8.5; Total: 12,760; Seizure events: Asset assist recorded: Number: 1,090; Percent: 7.8; Asset assist not recorded: Number: 568; Percent: 8.9; Total: 1,658. Not in range; Apprehension events: Asset assist recorded: Number: 48,517; Percent: 94.3; Asset assist not recorded: Number: 105,699; Percent: 91.5; Total: 154,216; Seizure events: Asset assist recorded: Number: 12,826; Percent: 92.2; Asset assist not recorded: Number: 5,838; Percent: 91.1; Total: 18,664. Camera range of RVSS tower: Within range; Apprehension events: Asset assist recorded: Number: 23,490; Percent: 45.7; Asset assist not recorded: Number: 35,147; Percent: 30.4; Total: 58,637; Seizure events: Asset assist recorded: Number: 2,501; Percent: 18.0; Asset assist not recorded: Number: 1,110; Percent: 17.3; Total: 3,611. Not in range; Apprehension events: Asset assist recorded: Number: 27,969; Percent: 54.4; Asset assist not recorded: Number: 80,370; Percent: 69.6; Total: 108,339; Seizure events: Asset assist recorded: Number: 11,415; Percent: 82.0; Asset assist not recorded: Number: 5,296; Percent: 82.7; Total: 16,711. Source: GAO analysis of U.S. Customs and Border Protection data. [End of table] Border Patrol officials stated that while analyzing data on the contributions of Border Patrol's surveillance technologies is a relevant measure of the agency's ability to meet its border security goals, conclusions regarding the contributions and impacts of its surveillance technologies on Border Patrol's enforcement efforts cannot be formed solely on the basis of the proximity of apprehension or seizure events to the locations of those technologies. These officials stated that there are instances in which illegal entrants were detected by some combination of cameras or radar closer to the border, but, to gain a better tactical advantage, Border Patrol agents made the apprehensions farther from the border.
As we reported in December 2012, Border Patrol officials stated that apprehensions occur in areas farther from the border because several factors preclude a greater border presence, including terrain that is inaccessible or creates a tactical disadvantage, the distance from Border Patrol stations to the border, and access to ranches and lands that are federally protected and environmentally sensitive.[Footnote 56] Standards for Internal Control in the Federal Government calls for agencies to ensure that ongoing monitoring occurs during the course of normal operations to help evaluate program effectiveness.[Footnote 57] These standards also state that agencies should promptly and accurately record transactions to maintain their relevance and value for management decision making and that this information should be readily available for use by agency management and others so that they can carry out their duties with the goal of achieving all of their objectives, including making operating decisions and allocating resources. These standards further state that to be effective, agencies need to clearly document all transactions in a timely manner to ensure that they are making appropriately informed decisions. Moreover, the standards call for clear documentation of transactions and procedures that is readily available for examination. In addition, these standards call for comparisons and assessments relating different sets of data to one another so that analyses of the relationships can be made and appropriate actions taken. Because DHS's EID database already includes the asset assists data field and these data are used by Border Patrol on a limited basis to make decisions about resources, requiring agents to record and track asset assist data could help ensure that these data are complete and, if analyzed, could help better inform CBP's resource allocation decisions. Moreover, we acknowledge that location and proximity data alone may not be sufficient to examine the contribution of CBP's surveillance technologies to achieving the agency's strategic goals. However, analyzing data on apprehensions, seizures, and asset assists in combination with other relevant performance metrics or indicators, as appropriate, could provide a more robust analysis of the contributions of surveillance technologies and accordingly could better position CBP to determine the extent to which its technology investments have contributed to the agency's border security efforts. CBP Has Identified Mission Benefits for Surveillance Technologies under the Plan but Has Not Yet Developed Performance Metrics: In response to our November 2011 recommendation regarding the identification of mission benefits and the development of key attributes for performance metrics for the surveillance technologies to be deployed as part of the Plan, CBP has identified the mission benefits expected from the surveillance technologies to be acquired or deployed as part of the Plan but has not fully developed key attributes for performance metrics for these technologies.[Footnote 58] In November 2011, we reported that agency officials had not yet defined the mission benefits expected or quantified metrics to assess the contribution of the selected approaches in achieving their goal of situational awareness and detection of border activity using surveillance technology.
We recommended that CBP determine the mission benefits to be derived from implementation of the Plan and develop and apply key attributes for metrics to assess program implementation. CBP concurred with our recommendation. In April 2013, CBP issued its Multi-Year Investment and Management Plan for Border Security Fencing, Infrastructure, and Technology for Fiscal Years 2014-2017, which identifies specific mission benefits to be achieved by the deployment of each of the seven technologies under the Plan.[Footnote 59] According to CBP officials, the majority of these surveillance technologies will provide the mission benefits of improved situational awareness and agent safety. Furthermore, CBP officials stated that each of the seven technologies deployed or planned for deployment as part of the Plan will help enhance the ability of Border Patrol agents to detect, identify, deter, and respond to threats along the border. A summary of the mission benefits of each surveillance technology deployed or planned for deployment under the Plan is presented in appendix V. While CBP has defined mission benefits for the technology programs under the Plan, the agency has not yet developed key attributes for performance metrics for all surveillance technologies to be deployed as part of the Plan. The Clinger-Cohen Act of 1996 and Office of Management and Budget (OMB) guidance emphasize the need to ensure that information technology investments, such as IFT systems, produce tangible, observable improvements in mission performance.[Footnote 60] In our April 2013 update on the progress made by agencies to address our findings on duplication and cost savings across the federal government, we reported that, according to CBP officials, operations of CBP's two SBInet surveillance systems identified examples of key attributes for metrics that can be useful in assessing the implementation of the Plan's technologies.[Footnote 61] For example, according to CBP officials, to help measure whether illegal activity has decreased, examples of key attributes include decreases in the number of arrests, complaints by ranchers and other citizens, and destruction of public and private lands and property. While the development of key attributes for metrics for the two SBInet surveillance systems is a positive step, as of April 2013, CBP had not yet identified attributes for metrics for all technologies to be acquired and deployed as part of the Plan. In addition to these efforts, CBP officials stated that in response to our prior recommendations regarding the establishment of a performance goal or goals and associated performance metrics that define how border security is to be measured, Border Patrol, as of December 2013, was in the process of developing and implementing performance goals and measures to assess its efforts to secure the border.[Footnote 62] However, CBP officials stated that none of the current measures directly address the operational impact of technology. The officials further stated that the Tucson sector has submitted an issue paper that identifies potential data that can attribute a certain level of effectiveness to its SBInet system, but that paper is still under review by CBP. While these are positive steps, to fully address the intent of our recommendation, CBP would need to develop and apply key attributes for performance metrics for each of the technologies to be deployed under the Plan to assess its progress in implementing the Plan and determine when mission benefits have been fully realized.
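To make the idea of a technology-specific measure concrete, the sketch below computes one purely illustrative example: the share of apprehension events occurring within a technology's range for which an assist by that technology was recorded. This is our illustration, not a measure CBP has adopted, and the field names and records shown are hypothetical.

    # Illustrative only: an "assist rate" for events within a technology's
    # range. Field names and records are hypothetical, not CBP's schema.
    events = [
        {"id": 1, "within_range": True, "assist_recorded": True},
        {"id": 2, "within_range": True, "assist_recorded": False},
        {"id": 3, "within_range": False, "assist_recorded": False},
    ]

    in_range = [e for e in events if e["within_range"]]
    assisted = [e for e in in_range if e["assist_recorded"]]
    assist_rate = len(assisted) / len(in_range) if in_range else 0.0
    print(f"assist rate within range: {assist_rate:.0%}")  # 50% here

Tracked over time and alongside other indicators, a measure of this kind could show whether recorded technology contributions change as deployments mature; by itself, for the reasons discussed above, it would not establish the operational impact of the technology.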
Conclusions: CBP has established schedules for the Plan and the IFT, RVSS, and MSC programs that meet some but not all best practices for scheduling, hindering CBP's ability to reliably commit to when it will deliver all of the Plan's technologies to Arizona. Ensuring that all schedule best practices are applied to the IFT, RVSS, and MSC schedules when updating them could help OTIA better ensure the schedules' reliability and could help better position OTIA to identify and address any potential further delays in the programs' milestone commitment dates. Further, developing and maintaining an Integrated Master Schedule for the Plan, in accordance with best practices, could allow insight into current or programmed allocation of resources for the Plan and help CBP to reliably commit to when the Plan will be fully implemented. Also, CBP has developed Life-cycle Cost Estimates for the IFT and RVSS programs. Although OTIA officials stated that DHS's Office of Program Accountability and Risk Management conducted an assessment of the IFT Life-cycle Cost Estimate, an assessment is not equivalent to verifying the estimate with an independent cost estimate. When updating the Life-cycle Cost Estimates for the IFT and RVSS programs, verifying the estimates with independent cost estimates and reconciling any differences, consistent with best practices, could help to better ensure the credibility of CBP's cost estimates for these programs. DHS and CBP have approved some key acquisition documents as directed by DHS and CBP Acquisition Review Boards, but work remains to approve all key acquisition documents in accordance with DHS acquisition guidance. Specifically, revising the Test and Evaluation Master Plan to more fully test the IFTs in the various environmental conditions in which they will be used to determine operational effectiveness and suitability, in accordance with DHS acquisition guidance, could help provide CBP with more complete information on how the IFTs will operate under a variety of conditions before beginning full production. Requiring the collection of data on the extent to which technology assets assisted in apprehensions and seizures could better position Border Patrol to assess the contribution of surveillance technologies to its enforcement efforts and its goals of achieving and maintaining operational control and situational awareness along the southwest border. Conducting analysis of such data, once collected, in combination with other relevant performance metrics or indicators as appropriate, could help better position CBP to determine the extent to which its technology investments have contributed to border security efforts. Recommendations for Executive Action: To improve the acquisition management of the Plan and the reliability of its cost estimates and schedules, assess the effectiveness of deployed technologies, and better inform CBP's deployment decisions, we recommend that the Commissioner of CBP take the following six actions: * When updating the schedules for the IFT, RVSS, and MSC programs, ensure that scheduling best practices, as outlined in our schedule assessment guide, are applied to the three programs' schedules. * Develop and maintain an Integrated Master Schedule for the Plan that is consistent with scheduling best practices. * When updating Life-cycle Cost Estimates for the IFT and RVSS programs, verify the Life-cycle Cost Estimates with independent cost estimates and reconcile any differences.
* Revise the IFT Test and Evaluation Master Plan to more fully test the IFT program, before beginning full production, in the various environmental conditions in which IFTs will be used to determine operational effectiveness and suitability, in accordance with DHS acquisition guidance. * Require data on asset assists to be recorded and tracked within the Enforcement Integrated Database, which contains data on apprehensions and seizures. * Once data on asset assists are required to be recorded and tracked, analyze available data on apprehensions and seizures and technological assists, in combination with other relevant performance metrics or indicators, as appropriate, to determine the contribution of surveillance technologies to CBP's border security efforts. Agency Comments and Our Evaluation: We provided a draft of this report to DHS for review and comment. DHS provided written comments, which are summarized below and reproduced in full in appendix VI, and technical comments, which we incorporated as appropriate. DHS concurred with four of the recommendations in the report but did not concur with the other two. With regard to the first recommendation, that CBP ensure that scheduling best practices are applied to the IFT, RVSS, and MSC programs' schedules when they are updated, DHS concurred and stated that OTIA plans to ensure that scheduling best practices are applied as far as practical when updating the three programs' schedules. DHS plans to update the programs' schedules by July 2015. With regard to the second recommendation, that CBP develop and maintain an Integrated Master Schedule for the Plan, DHS did not concur. DHS stated that maintaining an Integrated Master Schedule for the Plan undermines the DHS-approved implementation strategy for the individual programs making up the Plan and that a key element of the Plan has been the disaggregation of technology procurements. According to DHS, the implementation of this recommendation would essentially create a large, aggregated program, similar to SBInet, and effectively create an aggregate "system of systems." DHS stated that CBP believes its strategy of disaggregation has been effective and has reduced overall risk and cost. DHS also stated that each program within the Plan has its own schedule and that forcing linkages among the Plan's programs into a single Integrated Master Schedule contradicts lessons learned and the approved implementation strategy for the Plan. We continue to believe that developing and maintaining an Integrated Master Schedule for the Plan, consistent with best practices for scheduling, is needed. As noted in the report, the use of an Integrated Master Schedule is a well-established practice in program and project management and is a necessary tool to coordinate independently managed projects that have dependencies--including resource dependencies--on one another. The programs under the Plan are intended to provide Border Patrol with a combination of surveillance capabilities to assist in achieving situational awareness along the Arizona border with Mexico; and while the programs themselves may be independent of one another, the Plan's resources are being shared among the programs. Furthermore, this recommendation is not intended to imply that DHS needs to re-aggregate the Plan's seven programs into a "system of systems" or change its procurement strategy in any form.
Rather, the intent of our recommendation is for DHS to insert the individual schedules for each of the Plan's programs into a single electronic Integrated Master Schedule file in order to identify any resource allocation issues among the programs' schedules. Developing and maintaining an Integrated Master Schedule for the Plan could allow OTIA insight into current or programmed allocation of resources for all programs as opposed to attempting to resolve any resource constraints for each program individually. In addition to helping identify resource constraints, an Integrated Master Schedule can be a useful tool for consolidating multiple projects or program files into a single master file, even if those projects or programs have no direct links among activities. For example, aggregating individual files into a master schedule is useful for reporting purposes, particularly if the projects or programs are under the purview of a single management organization or a single customer. In this case, the master schedule would allow for a concise view of all projects or programs for which the stakeholder is responsible or has an interest. A master schedule of this nature is often referred to as a consolidated schedule, although the terms "consolidated schedule" and "Integrated Master Schedule" are often used interchangeably. We continue to believe that developing and maintaining an Integrated Master Schedule for the Plan could help provide CBP a comprehensive view of the Plan, help CBP to reliably commit to when the Plan will be fully implemented, and better predict whether estimated completion dates are realistic in order to manage the programs' performance, as noted in the report. With regard to the third recommendation, that CBP verify the Life-cycle Cost Estimates for the IFT and RVSS programs with independent cost estimates and reconcile any differences, DHS concurred, although its planned actions will not fully address the intent of the recommendation unless assumptions underlying the cost estimates change. DHS stated that while OTIA did not obtain a traditional independent cost estimate for the programs, the Life-cycle Cost Estimates were meant to be conservative in managing program risk and that the estimated life-cycle costs to date are less than originally projected. DHS further stated that at this point it does not believe that there is a benefit in expending funds to obtain independent cost estimates and that if the costs realized to date continue to hold, there may be no requirement or value added in conducting full-blown updates with independent cost estimates. DHS noted, though, that if this assumption changes, OTIA will complete updates and consider preparing independent cost estimates, as appropriate. We recognize the need to balance the cost and time to verify the Life-cycle Cost Estimates with the benefits to be gained from verification with independent cost estimates. However, as noted in this report, independently verifying the cost estimates is consistent with best practices and could help provide CBP with more insights into program costs. An independent cost estimate provides an independent view of expected program costs that tests the program office's estimate for reasonableness. Independent cost estimates frequently use different methods and are less burdened with organizational bias than a program office's estimate, helping to provide decision makers with insight into a program's potential costs.
Thus, we continue to believe that independently verifying the Life-cycle Cost Estimates for the IFT and RVSS programs and reconciling any differences, consistent with best practices, could help CBP better ensure the reliability of the estimates. With regard to the fourth recommendation, that CBP revise the IFT Test and Evaluation Master Plan to more fully test the IFT program in the various environmental conditions in which IFTs will be used to determine operational effectiveness and suitability, DHS did not concur with the recommendation. Specifically, DHS stated that the Test and Evaluation Master Plan includes tailored testing and user assessments that will provide much, if not all, of the insight contemplated by the intent of the recommendation. According to DHS, the approved non-developmental item acquisition strategy for the IFT program was based on market surveys and observations during field use by other customers and the incorporation of system demonstrations conducted during source selection. DHS also stated that there is no requirement for expansive, formal operational test and evaluation and that rewriting the Test and Evaluation Master Plan to incorporate operational testing would undermine and remove the benefits of the non-developmental item strategy. Moreover, DHS stated that the user test currently outlined in the Test and Evaluation Master Plan will provide the operational user the information needed to validate system requirements and operational characteristics. DHS also noted that Acquisition Decision Event 3 has been approved for IFT production, and after the initial IFT system undergoes testing in accordance with the Test and Evaluation Master Plan, the Office of Border Patrol will make the determination regarding operational readiness prior to deploying additional systems. We continue to believe that DHS should revise the Test and Evaluation Master Plan to more fully test the IFT program, before beginning full production, in the various environmental conditions in which the IFT will be used to determine operational effectiveness and suitability. DHS's acquisition guidance states that the Test and Evaluation Master Plan is important because it describes the strategy for conducting developmental and operational testing to evaluate a system's technical performance, including its operational effectiveness and suitability. The guidance states that, even for commercial-off-the-shelf systems, such as the IFT program, operational test and evaluation should occur in the environmental conditions in which a system will be used before a full production decision for the system is made and the system is subsequently deployed. In addition, DHS guidance states that the primary purpose of test and evaluation is to provide timely and accurate information to managers, decision makers, and other stakeholders to support research, development, and acquisition in a manner that reduces programmatic, financial, schedule, and performance risks. The current Test and Evaluation Master Plan describes CBP's plans to conduct a limited user test of the IFT, which will be designed to determine the IFT's mission contribution. However, determining mission contribution is not equivalent to determining operational effectiveness and suitability, which specifically identifies how effective and reliable a system is in meeting its operational requirements in its intended environment.
DHS plans to conduct limited user testing during a 30-day period in environmental conditions present at one site--the Nogales station. However, as of November 2013, CBP intended to deploy IFTs to 50 locations in southern Arizona, which can include different terrain and differences in climate throughout the year. As we noted in the report, conducting limited user testing in one area in Arizona for 30 days could limit the information available to CBP on how the IFTs may perform in other conditions and locations along the Arizona border. Therefore, CBP's approach of using limited user testing will not specifically identify how effective and reliable a system is in meeting its operational requirements in its intended environment. Moreover, while DHS has approved the IFT program for production at Acquisition Decision Event 3, the program is not yet under contract and testing for the IFTs has not yet begun. As noted in the report, revising the Test and Evaluation Master Plan to include more robust testing to determine operational effectiveness and suitability could better position CBP to evaluate IFT capabilities before moving to full production for the system, help provide CBP with information on the extent to which the towers satisfy the Border Patrol's user requirements, and help reduce potential program risks. Furthermore, although the IFT program is not the same as SBInet, according to the Plan, the IFTs are to be deployed to locations with similar environmental and terrain conditions as SBInet towers, and IFT and SBInet systems may have similar types of technologies, such as cameras and radar. As noted in the report, we previously identified testing issues CBP encountered with SBInet, such as test plans and procedures for some SBInet test events that were not defined in accordance with guidance, and operational tests of SBInet at Tucson that revealed challenges regarding the effectiveness and suitability of the technology for border surveillance. Thus, we continue to believe that revising the Test and Evaluation Master Plan to more fully test the IFT in the various environmental conditions in which it will be used to determine operational effectiveness and suitability, before beginning full production, could help provide CBP with more complete information on how the IFTs will operate under a variety of conditions. Without conducting operational testing in accordance with DHS guidance, the IFT program may be at increased risk of not meeting Border Patrol operational needs. With regard to the fifth recommendation, that CBP require data on asset assists to be recorded and tracked within the Enforcement Integrated Database, DHS concurred and stated that Border Patrol is changing its data collection process to allow for improved reporting on asset assists for apprehensions and seizures and intends to make it mandatory to record whether an asset assisted in an apprehension or seizure. DHS plans to change its process by December 31, 2014. With regard to the sixth recommendation, that CBP analyze available data on apprehensions and seizures and technology assists to determine the contribution of surveillance technologies to its border security efforts, DHS concurred and stated that Border Patrol intends to create a plan of action with milestones to explore and develop a process to answer how different classes of technology, within a certain environment, contribute to Border Patrol's mission.
DHS stated that Border Patrol plans to develop an initial set of quantitative and qualitative technology-related measures by September 30, 2014, as an interim milestone; gather baseline data for the measures in fiscal year 2015 and begin to use these data to evaluate the contributions of specific technology assets by the end of that fiscal year; and by the end of fiscal year 2016, use measures associated with technology to assist in determining levels of situational awareness in different areas of the border. These planned actions, if implemented effectively, should address the intent of the recommendations. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of Homeland Security, appropriate congressional committees, and other interested parties. In addition, the report will be available at no charge on the GAO website at [hyperlink, http://www.gao.gov]. If you or your staff have any questions about this report, please contact me at (202) 512-8777 or gamblerr@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix VII. Signed by: Rebecca Gambler: Director, Homeland Security and Justice Issues: List of Congressional Requesters: The Honorable Michael T. McCaul: Chairman: The Honorable Bennie G. Thompson: Ranking Member: Committee on Homeland Security: House of Representatives: The Honorable Candice S. Miller: Chairman: The Honorable Sheila Jackson Lee: Ranking Member: Subcommittee on Border and Maritime Security: Committee on Homeland Security: House of Representatives: The Honorable Peter T. King: Chairman: Subcommittee on Counterterrorism and Intelligence: Committee on Homeland Security: House of Representatives: The Honorable Jeff Duncan: Chairman: Subcommittee on Oversight and Management Efficiency: Committee on Homeland Security: House of Representatives: The Honorable Henry Cuellar: House of Representatives: [End of section] Appendix I: Objectives, Scope, and Methodology: Our objectives were to determine the extent to which U.S. Customs and Border Protection (CBP) has (1) developed schedules and Life-cycle Cost Estimates for the Arizona Border Surveillance Technology Plan (the Plan) in accordance with best practices; (2) followed key aspects of the Department of Homeland Security's (DHS) acquisition management framework in managing the Plan's three highest-cost programs; and (3) assessed the performance of technologies deployed under the Secure Border Initiative Network (SBInet), identified mission benefits, and developed performance metrics for surveillance technologies to be deployed under the Plan.
To determine the extent to which CBP has followed best practices in developing schedules and Life-cycle Cost Estimates for the Plan's three highest-cost programs--the Integrated Fixed Tower (IFT), Remote Video Surveillance System (RVSS), and Mobile Surveillance Capability (MSC)--we obtained CBP's Office of Technology Innovation and Acquisition's (OTIA) program schedules as of March 2013, which were current at the time of our review, for these programs and compared them against best practices for developing schedules.[Footnote 63] Specifically, we assessed the extent to which the schedules for these three programs met each of the 10 best practices identified in the schedule assessment guide.[Footnote 64] We characterized whether the schedules met each of the 10 best practices based on the following scale: * Not met--the program provided no evidence that satisfies any of the criterion. * Minimally met--the program provided evidence that satisfies a small portion of the criterion. * Partially met--the program provided evidence that satisfies about half of the criterion. * Substantially met--the program provided evidence that satisfies a large portion of the criterion. * Met--the program provided complete evidence that satisfies the entire criterion. In conducting our analysis, we focused, for example, on whether the schedules reflect best practices for a reliable schedule, such as whether the schedules define the work necessary to accomplish a program's objectives. More details on our assessment and methodology are presented in appendix III. By assessing the schedules against best practices, we also identified schedule challenges that CBP was experiencing in testing, procuring, deploying, and operating technologies in the Plan and interviewed CBP officials to determine the reasons for the schedule challenges and steps that CBP had taken or was taking to address them. In addition, we obtained and analyzed the August 2010 and June 2013 Life-cycle Cost Estimates for the Plan. We also analyzed the IFT and RVSS January 2012 and March 2012 Life-cycle Cost Estimates, respectively, which were current at the time of our review, and compared them against best practices for cost estimating.[Footnote 65] We analyzed DHS and CBP documents and interviewed officials regarding their efforts to implement our November 2011 recommendations to update the Life-cycle Cost Estimate for the Plan in accordance with best practices.[Footnote 66] To assess the reliability of cost estimate data that we used, we reviewed relevant program documentation, such as cost estimation spreadsheets, as available, to substantiate evidence obtained from interviews with knowledgeable agency officials. We found the data to be sufficiently reliable for the purposes of our report. To determine the extent to which CBP followed key aspects of DHS's acquisition management framework in managing the Plan's three highest-cost programs, we analyzed DHS and CBP documents, including DHS Acquisition Management Directive 102-01 and its associated DHS Instruction Manual 102-01-001, program briefing slides, budget documents, Acquisition Decision Memorandums, schedules, and program risk sheets.[Footnote 67] We focused on the IFT, RVSS, and MSC programs for more in-depth analyses because they are the Plan's three highest-cost programs and represent 97 percent of the estimated cost of the Plan.
Specifically, to assess the acquisition strategy for the Plan, we focused on the IFT, RVSS, and MSC programs and analyzed their acquisition plans and discussed the approaches with CBP officials. To assess system requirements and capabilities for the IFT, RVSS, and MSC programs, we obtained and analyzed requirements and capabilities documents and worked with CBP officials to identify any changes to requirements and capabilities since they were initially approved and whether any requirements or capabilities had been traded off for cost, schedule, or other purposes. To assess the extent to which CBP followed DHS acquisition guidance, we selected aspects of Acquisition Management Directive 102-01 that were relevant to where these programs were in the acquisition process during fiscal year 2013. Specifically, we determined whether acquisition documents had been approved by the time of the applicable Acquisition Decision Events as required by DHS acquisition guidance. To determine the extent to which CBP has assessed the performance of technologies deployed under SBInet and developed performance metrics to assess the performance of surveillance technologies planned for deployment under the Plan, we analyzed performance assessment documentation and interviewed CBP officials responsible for performance measurement activities regarding the establishment of performance metrics used by CBP to determine the effectiveness and the contributions of its surveillance technologies toward the agency's stated border security goals. With respect to CBP's assessment of the performance of technologies deployed under SBInet, we analyzed the results of the January 2013 post implementation review, which was conducted by the Johns Hopkins University Applied Physics Laboratory to determine the effectiveness of SBInet technologies in achieving their intended results.[Footnote 68] We reviewed our November 2011 report to determine the extent to which CBP's post implementation review aligned with DHS guidance and the Office of Management and Budget's (OMB) Capital Programming Guide, a supplement to OMB Circular A-11, which identifies a post implementation review as a tool to evaluate an investment's efficiency and effectiveness.[Footnote 69] We also analyzed CBP and DHS documents, such as CBP's July 2013 SBInet Block 1 After Action Report, and interviewed officials to assess corrective actions taken to address SBInet performance issues.[Footnote 70] Specifically, we analyzed CBP documentation and interviewed agency officials within OTIA and the Office of Border Patrol to determine the progress the agency has made in addressing findings and recommendations outlined in CBP's post implementation review and prior performance assessments, including the Army Test and Evaluation Command's assessment of its SBInet technologies.[Footnote 71] On the basis of interviews with agency officials regarding the methodology and implementation of the review, we found the review to be sufficiently reliable for our report. In addition, we analyzed CBP data on apprehensions of illegal entrants and seizures of drugs and other contraband for the Tucson and Yuma sectors maintained in the Enforcement Integrated Database (EID), a DHS-shared common database repository for several DHS law enforcement and homeland security applications, as well as policy, planning, and budget documents provided by Border Patrol to determine whether such data could be used to determine the contributions of the SBInet technologies to apprehensions and seizures.
We analyzed apprehension[Footnote 72] and seizure data for the Tucson and Yuma sectors within Arizona, because these are the Border Patrol sectors contained within Arizona and covered by the Plan.[Footnote 73] For the purposes of this report, we analyzed apprehension and seizure events recorded in the EID from fiscal year 2010 through June of fiscal year 2013.[Footnote 74] An apprehension or seizure event is defined as an occasion on which Border Patrol agents apprehend an illegal entrant or seize drugs and other contraband. Each reported apprehension or seizure event is assigned a unique identifier in the EID, and Border Patrol agents assign an additional identifier to each individual illegal entrant or type of seized item associated with the event. As a result, a single apprehension event may involve the apprehension of multiple illegal entrants, and a single seizure event may result in the seizure of multiple items. Appendix IV contains the results of our analysis of all recorded apprehensions and seizures occurring across the Tucson and Yuma sectors during the specified time frame. For our analysis, we also obtained data on asset assists recorded in the EID for apprehensions and seizures. According to Border Patrol officials, the asset assist data field was added to the EID in May 2009. Agents may select from a drop-down menu to identify whether a technological or nontechnological asset assisted in the apprehension or seizure. Multiple assets can be selected for a single event, if relevant. For the purposes of this report, technological assets identified within the Border Patrol's asset assists data field drop-down menu are those assets for which Border Patrol continues to make significant funding investments and that are included as part of the Plan: Cameras, Mobile Surveillance Systems, Scope Trucks, and Unattended Ground Sensors. According to Border Patrol headquarters officials, agents identifying "Cameras" are most likely attributing the asset assist to either SBInet towers or Remote Video Surveillance Systems. In addition, for our analysis, we obtained Geographic Information Systems data for apprehensions, seizures, and Border Patrol's two highest-cost surveillance systems--SBInet and RVSS towers--to show the latitude and longitude coordinates of apprehensions and seizures in relation to the locations of SBInet towers and RVSS towers. We used Geographic Information Systems data to determine the percentage of apprehensions and seizures that occurred within the proximity of the radar and camera range of SBInet towers and the camera range of RVSS towers, and the extent to which asset assists were reported for apprehensions and seizures occurring within the proximity of these surveillance systems. For the purposes of this report, the ranges of the SBInet and RVSS towers are "buffer ranges" that, according to Border Patrol headquarters officials, do not account for obstructions due to terrain, land features, and vegetation. To perform these analyses, we compared Border Patrol data on the longitude and latitude of apprehensions, seizures, SBInet towers, and RVSS towers with agency mapping data, which allowed us to determine the extent to which apprehensions and seizures occurred within the proximity of SBInet and RVSS towers. We interviewed Border Patrol headquarters officials regarding data collection and analysis procedures and performance assessment activities.
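To illustrate the mechanics of this buffer-range comparison, the sketch below classifies events against tower buffer ranges in Python. It is a minimal sketch, not our actual methodology: the coordinates, buffer radius, and field names are hypothetical; great-circle distance stands in for the projected GIS calculations; and, like the buffer ranges described above, it ignores terrain, land features, and vegetation. It also screens out plainly invalid coordinates, such as points outside U.S. national boundaries, which we likewise excluded from our analysis.

    import math

    def distance_miles(lat1, lon1, lat2, lon2):
        # Great-circle (haversine) distance between two points, in miles.
        earth_radius_miles = 3958.8
        dlat = math.radians(lat2 - lat1)
        dlon = math.radians(lon2 - lon1)
        a = (math.sin(dlat / 2) ** 2
             + math.cos(math.radians(lat1)) * math.cos(math.radians(lat2))
             * math.sin(dlon / 2) ** 2)
        return 2 * earth_radius_miles * math.asin(math.sqrt(a))

    def is_plausible(event):
        # Hypothetical bounding box for southern Arizona; screens out
        # coordinates outside U.S. national boundaries, e.g., (0, 0).
        return 31.0 <= event["lat"] <= 34.0 and -115.0 <= event["lon"] <= -109.0

    def within_buffer_range(event, towers, buffer_miles):
        # True if the event falls inside the buffer range of any tower;
        # buffer ranges here do not account for obstructions.
        return any(distance_miles(event["lat"], event["lon"],
                                  t["lat"], t["lon"]) <= buffer_miles
                   for t in towers)

    # Hypothetical tower and events, for illustration only.
    towers = [{"lat": 31.45, "lon": -111.05}]
    events = [{"lat": 31.48, "lon": -111.02}, {"lat": 0.0, "lon": 0.0}]
    valid = [e for e in events if is_plausible(e)]
    in_range = sum(within_buffer_range(e, towers, buffer_miles=6.0) for e in valid)
    print(f"{in_range} of {len(valid)} valid events within buffer range")

With every valid event classified this way, cross-tabulating range category against whether an asset assist was recorded yields percentages of the kind reported in table 6 and appendix IV.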
We analyzed apprehensions and seizures data from fiscal year 2010 through June 2013 because fiscal year 2010 was the first fiscal year for which data on asset assists were available following Border Patrol's deployment of its SBInet technologies, and the collection of data on Geographic Information Systems coordinates for apprehensions and seizures was required.[Footnote 75] To assess the reliability of apprehensions and seizures data, including the asset assist and Geographic Information Systems data, we interviewed Border Patrol headquarters officials who oversee the maintenance and analyses of the data, asking about agency guidance and processes for collecting and reporting the data. We determined that the apprehensions, seizures, asset assists, and Geographic Information Systems data were sufficiently reliable for the purposes of this report. However, as we reported in December 2012, because of potential inconsistencies in how the data are collected, these data cannot be compared across sectors but can be compared within a sector over time.[Footnote 76] We determined that the recorded data on asset assists were sufficiently reliable for the purposes of this report, but found limitations in the consistency with which these data are recorded for all apprehensions and seizures, as discussed in this report. Although we determined that the latitude and longitude coordinates for some apprehensions and seizures were invalid--e.g., they were identified as occurring outside U.S. national boundaries--the numbers were not significant, and we determined that the Geographic Information Systems data were sufficiently reliable for the purposes of this report. Location data that were determined to be invalid were not included in our analysis. We compared CBP's reporting requirements and use of asset assists data against criteria in Standards for Internal Control in the Federal Government, which, among other things, call for ensuring effectiveness and efficiency of management operations, including the use of the entity's resources.[Footnote 77] In addition, we visited Border Patrol's Tucson sector in Arizona to observe Border Patrol agents operating SBInet technologies and other selected technologies, such as RVSS towers, and discussed agents' experiences in using these technologies. While visiting the Tucson sector, we interviewed officials regarding the deployment and contributions of surveillance technologies within the sector. We visited the Tucson sector because of the presence of surveillance technologies, such as SBInet and RVSS towers, in that sector and because, under the Plan, the Tucson sector has locations for which additional technology deployments, such as IFTs, are planned. While the information we obtained from our visit cannot be generalized to all Border Patrol sectors, it did provide us with insights about the use of the deployed surveillance technologies.
Finally, we analyzed documents, including CBP's Multi-Year Investment and Management Plan for Border Security Fencing, Infrastructure, and Technology for Fiscal Years 2014-2017, and interviewed CBP officials responsible for overseeing the progress CBP and DHS have made in implementing our November 2011 recommendations to identify the mission benefits to be derived from technologies in the Plan and metrics to measure the extent to which border security is expected to improve by using these technologies.[Footnote 78] We conducted this performance audit from September 2012 through March 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. [End of section] Appendix II: Photographs of Technologies in the Arizona Border Surveillance Technology Plan: Figure 6: Integrated Fixed Tower Concept (Secure Border Initiative Network Tower): [Refer to PDF for image: photograph] Source: GAO. [End of figure] Figure 7: Remote Video Surveillance System: [Refer to PDF for image: photograph] Source: GAO. [End of figure] Figure 8: Mobile Surveillance Capability: [Refer to PDF for image: photograph] Source: U.S. Customs and Border Protection. [End of figure] Figure 9: Mobile Video Surveillance System: [Refer to PDF for image: photograph] Source: GAO. [End of figure] Figure 10: Agent Portable Surveillance System: [Refer to PDF for image: photograph] Source: U.S. Customs and Border Protection. [End of figure] Figure 11: Thermal Imaging System (RECON III): [Refer to PDF for image: photograph] Source: U.S. Customs and Border Protection. [End of figure] Figure 12: Unattended Ground Sensor: [Refer to PDF for image: photograph] Source: GAO. [End of figure] [End of section] Appendix III: Our Schedule Assessment Results for the Integrated Fixed Towers, Remote Video Surveillance System, and Mobile Surveillance Capability Programs: Best practices for cost estimating and scheduling identify 10 practices associated with effective scheduling.[Footnote 79] These are (1) capturing all activities, (2) sequencing all activities, (3) assigning resources to all activities, (4) establishing the duration of all activities, (5) verifying that the schedule is traceable horizontally and vertically, (6) confirming that the critical path is valid, (7) ensuring reasonable total float, (8) conducting a schedule risk analysis, (9) updating the schedule with actual progress and logic, and (10) maintaining a baseline schedule.[Footnote 80] These practices are summarized into four characteristics of a reliable schedule--comprehensive, well constructed, credible, and controlled. We assessed the extent to which the March 2013 schedules for CBP's three highest-cost technology programs under the Arizona Border Surveillance Technology Plan--IFT, RVSS, and MSC--met each of the 10 best practices.[Footnote 81] We characterized whether the schedules met each of the 10 best practices as follows: * Not met--the program provided no evidence that satisfies any of the criterion. * Minimally met--the program provided evidence that satisfies a small portion of the criterion. * Partially met--the program provided evidence that satisfies about half of the criterion.
* Substantially met--the program provided evidence that satisfies a large portion of the criterion. * Met--the program provided complete evidence that satisfies the entire criterion. Table 7 provides the results of our analysis of the IFT, RVSS, and MSC schedules as of March 2013. Table 7: Our Assessments of the Integrated Fixed Tower (IFT), Remote Video Surveillance System (RVSS), and Mobile Surveillance Capability (MSC) Program Schedules, as of March 2013: Comprehensive: Schedule characteristic or best practice: Capturing all activities; Captures all activities, as defined in the work breakdown structure, which defines in detail the work for both the government and its contractors necessary to accomplish a program's objectives; Our assessment: IFT: Partially met; The schedule contains clear start and finish milestones and schedule activities were mapped to a program work breakdown structure. However, there were 174 activities (or 24 percent of detail activities) that did not have an assigned program work breakdown structure. According to program officials, the schedule reflects 53 duplicate milestones that are not mapped to the work breakdown structure. The revised program work breakdown structure was approved in May 2013 and program officials stated they planned to re-baseline the schedule after contract award. Re-baselining resets the estimated schedule that is used to determine how the program will be held accountable. However, current government activities and U.S. Army Corps of Engineers activities were not mapped to a program work breakdown structure prior to its revision in May. According to scheduling best practices, aligning a schedule to a work breakdown structure can help ensure that the total scope of work is accounted for within the schedule. Further, our analysis found two detail activities for every milestone in the schedule. With this type of ratio of detail activities to milestones, it can be difficult to monitor progress for activities against milestones, according to best practices. Activity names in the schedule were also not unique. We found 94 activity names that were repeated at least once, including 17 names used more than 10 times and 10 names used more than 20 times. Best practices state that descriptive names should be unambiguous and should identify their associated product without the need to review high-level summary activity or preceding activity names. Descriptive activity names help ensure that decision makers, managers, control account managers, task workers, and auditors know what scope of work is required for each activity; RVSS: Partially met; The schedule activities were mapped to a program work breakdown structure; however, there were 12 milestones that did not have an assigned program work breakdown structure. According to program officials, the schedule reflects 12 duplicate milestones that are not mapped to the work breakdown structure. The schedule reflects all the work that needs to be accomplished and contains a clear start milestone, but we could not identify a project finish milestone in the schedule. Our analysis also found a less than expected level of detail within the RVSS schedule, with one detail activity per milestone. 
As previously stated, with this type of ratio of detail activities to milestones, it can be difficult to monitor progress for activities against milestones, according to best practices; MSC: Partially met; Fields within the schedule did not map activities to a program work breakdown structure, although a separate external mapping document provided by the program office did align with the program work breakdown structure. In addition, the schedule does not denote clear start and finish milestones. Our analysis found that high-level efforts from vendors were represented as milestones, as no integrated master schedule data item description was required in the contracts. While the vendors are delivering off-the-shelf production equipment-- which typically precludes the need for a detailed production schedule-- a best practice is to include the work of suppliers and vendors as activities rather than representing the promise date of the delivery in the schedule. This allows progress to be monitored and risk to be applied to the activity, as well as eliminating the need for artificial constraints on delivery dates. Thus, it is unclear what method the vendors use to inform U.S. Customs and Border Protection (CBP) and its Office of Technology Innovation and Acquisition (OTIA) of their schedule progress. Finally, our analysis found 119 activities that were repeated at least once, including one name used more than 20 times. As previously stated, best practices state that descriptive names should be unambiguous and should identify their associated product without the need to review high-level summary activity or preceding activity names. Descriptive activity names help ensure that decision makers, managers, control account managers, task workers, and auditors know what scope of work is required for each activity. Schedule characteristic or best practice: Assigning resources; Reflects what resources (for example, labor, materials, and overhead) are needed to do the work, whether all required resources will be available when needed, and whether any funding or time constraints exist; Our assessment: IFT: Minimally met; Our analysis showed that the schedule is not resource loaded and the alternative method in use--tracking resources by mapping summary activities to integrated project team names--limits CBP's ability to identify resource constraints. A resource-loaded schedule implies that all required labor and significant materials, equipment, and other costs are assigned to the appropriate activities within the schedule. We identified 91 activities (7 percent of all activities) that had integrated project team responsibilities mapped to them, and those were mapped at the summary task level--with one exception (there is one detail task with an integrated project team mapping). Officials stated that through the use of these integrated project team mappings, OTIA can determine which resources are in demand and identify overall resource constraints. However, given the integrated project team mappings that were at the summary task level in the IFT schedule, and work hours were not estimated for each activity, it is difficult to identify resource constraints, according to best practices. Information on resource needs and availability in each work period assists the program office in forecasting the likelihood that activities will be completed as scheduled. 
If the current schedule does not allow insight into current or projected allocation of resources, then the risk of the program's slipping is significantly increased, according to best practices. Furthermore, if there is no justification for allocating and assigning resources, the schedule may convey a false level of accuracy. This is important in a constrained resource environment where multiple projects are sharing critical resources, according to best practices; RVSS: Minimally met; Our analysis found that the RVSS schedule includes a total of 61 resources designated as "work." However, the schedule reflects assigned resources (i.e., individual people and number of work hours) for 6 detailed activities, and our analysis showed that there were 119 detailed activities without resource assignments. Information on resource needs and availability in each work period assists the program office in forecasting the likelihood that activities will be completed as scheduled. If the current schedule does not allow insight into current or projected allocation of resources, then the risk of the program's slipping is significantly increased, according to our best practices. A schedule without resources implies an unlimited number of resources and their unlimited availability. As previously stated, if there is no justification for allocating and assigning resources, the schedule will convey a false level of accuracy. This is particularly important in a constrained resource environment where multiple projects are sharing critical resources, according to best practices; MSC: Not met; Our analysis showed that the schedule is not resource loaded, and we found that the schedule did not include responsibility assignments. Further, program officials stated that they did not estimate resources because the schedule was developed under the assumption that the activities were to be fully resourced to do the work and that the personnel would be dedicated and not shared between the other programs. Information on resource needs and availability in each work period assists the program office in forecasting the likelihood that activities will be completed as scheduled. If the current schedule does not allow insight into current or projected allocation of resources, then the risk of the program's slipping is significantly increased. According to our best practices, a schedule without resources implies an unlimited number of resources and their unlimited availability. Furthermore, if there is no justification for allocating and assigning resources, the schedule will convey a false level of accuracy. This is particularly important in a constrained resource environment where multiple projects are sharing critical resources, according to best practices. Schedule characteristic or best practice: Establishing the durations of all activities; Establishes the duration of all activities, such as how long each activity will take, and allows for discrete progress measurement. Estimated detail activity durations should be shorter than 2 working months, or approximately 44 working days, for near-term effort. Durations should be as short as possible to facilitate the objective measurement of accomplished effort; Our assessment: IFT: Met; Our analysis found that the schedule activities were generally short enough to be consistent with the needs of effective planning. 
We found about 84 percent of detailed activities were 44 days in duration or less; 19 activities, or 15 percent, had durations greater than 44 days.[A]; RVSS: Substantially met; Our analysis found that 44 activities, or 35 percent of remaining detailed activities, were 44 days in duration or less, and 81 activities, or 65 percent, had durations greater than 44 days. The average duration of the remaining activities was 171 days. However, taking into account rolling wave planning, where near-term activities were planned in more detail than long-term activities, we found that durations were generally less than 2 business months. According to program officials, schedule activity durations were estimated to account for availability, distractions, leave, and training. While most activities were shorter than 44 days, we found that those durations may be unrealistic because they were artificially shortened at management direction. According to program officials, when the projected completion date extended beyond senior management expectations, durations were shortened to produce a more satisfactory date; MSC: Substantially met; Our analysis found that about 123 activities, or 93 percent of remaining detailed activities, were 44 days in duration or less. The remaining detailed activities, about 7 percent, had durations greater than 44 days and were the logistical support activities noted by the program office. However, our analysis showed that while the project is scheduled to finish on September 16, 2015, the base project calendar only includes holidays through June 30, 2013. Other holidays will occur between June 30, 2013, and September 16, 2015, but these holidays are not reflected in the schedule. Well constructed: Schedule characteristic or best practice: Sequencing all activities; Sequences all activities--that is, all activities are logically sequenced with the most straightforward logic possible. In particular, activities that must be completed before other activities can begin (predecessor activities), as well as activities that cannot begin until other activities are completed (successor activities), should be identified; Our assessment: IFT: Substantially met; Our analysis showed that the schedule has very few missing or incorrect logic links (less than 3 percent) and contains a small number of date constraints (less than 3 percent). Missing logic refers to activities that are missing necessary predecessors or successors, which in turn reduces the credibility of the calculated dates. The schedule contains 35 activities (17 percent of remaining activities) with lags, and a total of 38 lags throughout the remaining duration of the schedule, although some lags were used appropriately for future planning purposes. Lags are often used to put activities on a specific date or to insert a buffer for risk. In addition, we found that 92 percent of the logic links in the schedule were finish-to-start relationships, allowing for intuitive, serial workflow. However, the schedule contains 3 activities with dangling logic--that is, either the start or finish dates for these activities are not properly tied to other activities. Specifically, we found 2 activities that have predecessor logic that is not affecting their start dates and 1 activity with successor logic that is not tied to its finish date.
Finally, while the majority of activities have 3 or fewer predecessors, 10 activities (5 percent of remaining activities) have 4 or more predecessors, including the project completion milestone that has 16. Several parallel activities converging or joining with a single successor activity is known as path convergence. According to best practices, these points are a concern because risk at the merge point is multiplicative. Because each predecessor activity has a probability of finishing by a particular date, as the number of predecessor activities increases, the probability that the successor activity will start on time diminishes to zero; RVSS: Substantially met; Our analysis found no missing or dangling logic in the schedule. The schedule has 94 percent of finish-to-start logic link relationships, allowing for intuitive, serial workflow. The count of remaining logic relationships is split between finish-to-finish and start-to-start logic. There were a total of 15 lags among 8 activities (or 5 percent of remaining activities), but in most cases the lags were used to plan long-term future effort where details were likely to be unknown. Our analysis also found 11 constraints, but most of these were due to external dependencies related to construction projects and were justified as such in the schedule. We also found 13 activities (7 percent of remaining activities) that have 4 or more predecessors, including the planning stage completion milestone that has 23. As previously stated, several parallel activities converging or joining with a single successor activity is known as path convergence. These points are a concern because risk at the merge point is multiplicative, according to our best practices. That is, because each predecessor activity has a probability of finishing by a particular date, as the number of predecessor activities increases, the probability that the successor activity will start on time quickly diminishes to zero; MSC: Substantially met; Our analysis showed that the schedule has 97 percent finish-to-start logic link relationships, allowing for intuitive, serial workflow. According to our analysis, we found that the remaining milestones and detailed activities in the MSC schedule do not have any dangling logic, and missing logic was explained by external vendor deliveries. We found 11 constraints in the schedule (6 percent of remaining activities). These constraints were necessary because vendor deliveries were modeled as milestones, rather than long vendor tasks, as noted above in "Capturing all activities." In addition, we found 4 activities with more than 4 predecessors, including one activity with 12 predecessors. As previously stated, these points are a concern because risk at the merge point is multiplicative, according to best practices. That is, because each predecessor activity has a probability of finishing by a particular date, as the number of predecessor activities increases, the probability that the successor activity will start on time quickly diminishes to zero. Schedule characteristic or best practice: Confirming that the critical path is valid; Establishes a valid critical path, which represents the chain of dependent activities with the longest total duration. A valid critical path is necessary to examine the effects of any activity slipping along this path; Our assessment: IFT: Substantially met; Our analysis found the project critical path to be a straightforward sequence of activities from the status date to project completion. 
The critical path derived in the detailed IFT schedule matches the critical path shown in management briefing slides, with only one exception. However, as discussed in the next section, the unreasonable total float values raise concerns about the critical path as calculated by the network; RVSS: Minimally met; Our analysis found that the critical path is not continuous from the status date to contract award. According to program officials, there is a near-term critical path to contract award as well as the overall critical path. However, according to our analysis of the critical path--defined as the longest-duration path through the sequence of activities, where activities with zero or less-than-zero total float are considered critical--it does not reflect a continuous path from the status date to the major completion milestones. The first critical activity does not occur until December 9, 2013. There were no critical activities between the status date of March 13, 2013, and December 9, 2013, because of a date constraint on the critical path. Our analysis was able to derive a driving path of activities from the status date to the posting of contract award milestone, but this path has 7 days of total float. That is, the date constraint could prevent activities from being dependent on one another. According to best practices, the schedule should produce a critical path that reflects the driving path of the project so that program managers are better able to provide reliable timeline estimates or identify when problems or changes may occur and affect their downstream work; MSC: Partially met; Our analysis found that the critical path is not continuous from the status date because it begins with a date constraint on a vendor delivery activity 3 months after the status date. That is, the date constraint could prevent activities from being dependent on one another. However, from the vendor delivery activity to the project closeout milestone, the critical path is continuous and drives the closeout milestone. We found that neither the limited user test activities nor the post-implementation review milestones were on the critical path, yet these were activities noted by management as currently being monitored as critical. As with the RVSS, until the schedule can produce a true critical path, the program office may not be able to provide reliable timeline estimates or identify when problems or changes may occur and their effect on downstream work. Further, it will limit management's ability to focus on the activities whose slippage would have detrimental effects on key project milestones and deliveries. Schedule characteristic or best practice: Ensuring reasonable total float; Identifies the total float time--the amount of time by which an activity can slip before the delay affects the program's estimated finish date--so that a schedule's flexibility can be determined. Large total float on an activity or path indicates that the activity or path can be delayed without jeopardizing the finish date. The length of delay that can be accommodated without the finish date's slipping depends on a variety of factors, including the number of date constraints within the schedule and the amount of uncertainty in the duration estimates, but the activity's total float provides a reasonable estimate of this value. Float differs among activities, given their logical sequence in the network and the overall program duration.
Management should not adhere to a target float value or specific float measure; Our assessment: IFT: Partially met; Our analysis found excessive values of total float in the schedule. For example, the schedule showed extreme flexibility, with 49 percent of the remaining activities able to slip at least 20 percent of the remaining duration of the project. Our analysis found approximately half of the total float values were greater than 87 days, with 25 percent being greater than 204 days, and a maximum float value of 628 days. In other words, half of the remaining activities and milestones in the schedule were able to slip at least 4 working months before affecting the final key milestone; a quarter of the activities could slip at least 10 months before affecting the final key milestone (assuming 20 working days per month, the value used by the scheduling software). There were no notes in the schedule related to the float values, and some activities appear to have questionable values of float. For example, 3 activities had 462 days of total float (23 working months). We found that unreasonable values of total float were likely due to the sequencing of activities in the network. For example, the activities with 462 days of float were on the same path and shared 462 days of float because the end milestone of that sequence has no successor. Based on our analysis, it does not seem likely that this float value represents true schedule flexibility (the illustrative sketch below shows how a missing successor can inflate total float in this way). Without accurate values of total float, a schedule cannot be used to identify activities that could be permitted to slip and thus release and reallocate resources to activities that require more resources to be completed on time, according to best practices; RVSS: Partially met; Our analysis found relatively excessive values of total float in the schedule. We found approximately 20 percent of remaining activities with float values exceeding 88 days (or 4 working months). This includes 16 activities with float over 1,500 days, 2 with float values between 1,000 and 1,500 days, and 11 with float values between 100 and 300 days. According to program officials, the total float values are reasonable and reflect schedule flexibility, and areas with high total float are due to the long-duration post-contract award planning packages. However, according to best practices, planning packages should be logically linked within the schedule to create a complete picture of the program from start to finish and to allow the monitoring of a program's critical path. According to best practices, a critical path--which is defined by the lowest total float path--cannot be derived accurately if excessive float values exist because of a particular sequencing of planning packages. In addition, inaccurate values of total float falsely depict true project status, which could lead to decisions that may jeopardize the project, according to best practices; MSC: Partially met; Our analysis found relatively excessive values of total float in the schedule. We found approximately 61 percent of remaining activities had float values exceeding 156 days (over 7 working months). This includes 25 activities with float over 600 days, 79 with over 500 days, and 3 activities with float between 161 and 250 days. According to program officials, the total float values are reasonable and reflect schedule flexibility, and areas with high total float are due to the long-duration contractor maintenance support planning packages.
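To make the total float and path convergence findings above concrete, the following is a minimal critical path method (CPM) sketch in Python. The toy network, its durations, and the 90 percent on-time probability are illustrative assumptions only--they are not drawn from CBP's schedules, and this is not the scheduling software OTIA uses. The sketch computes total float with a standard forward and backward pass, shows how a chain whose end milestone has no successor inherits large, shared float (as with the 3 IFT activities sharing 462 days of float), and shows how the probability of an on-time start collapses at a merge point such as the IFT completion milestone with its 16 predecessors:

    # Illustrative CPM sketch; the network and durations are hypothetical.
    # activity: (duration_in_days, [predecessors])
    network = {
        "start":      (0,  []),
        "design":     (20, ["start"]),
        "build":      (40, ["design"]),
        "test":       (15, ["build"]),
        "finish":     (0,  ["test"]),
        # Side chain whose end milestone has no successor; its float
        # balloons because nothing downstream depends on it.
        "plan_docs":  (10, ["start"]),
        "doc_review": (5,  ["plan_docs"]),
    }
    order = ["start", "design", "build", "test", "finish",
             "plan_docs", "doc_review"]  # topological order

    # Forward pass: earliest start (es) and earliest finish (ef).
    es, ef = {}, {}
    for a in order:
        dur, preds = network[a]
        es[a] = max((ef[p] for p in preds), default=0)
        ef[a] = es[a] + dur
    project_finish = max(ef.values())

    # Backward pass: latest start (ls) and latest finish (lf).
    # An activity with no successor is anchored to the project finish,
    # which is exactly how a dangling end milestone acquires large float.
    succs = {a: [b for b in order if a in network[b][1]] for a in order}
    ls, lf = {}, {}
    for a in reversed(order):
        lf[a] = min((ls[s] for s in succs[a]), default=project_finish)
        ls[a] = lf[a] - network[a][0]

    for a in order:
        total_float = ls[a] - es[a]
        flag = "  <- critical" if total_float == 0 else ""
        print(f"{a:10s} total float = {total_float:3d} days{flag}")

    # Merge-point risk: if a milestone has n predecessors and each
    # independently finishes on time with probability p, the milestone
    # starts on time with probability p**n. With the 16 predecessors
    # noted above and an assumed p of 0.9:
    p, n = 0.9, 16
    print(f"P(on-time start with {n} predecessors) = {p**n:.2f}")  # ~0.19

In this sketch, "plan_docs" and "doc_review" share 60 days of float solely because "doc_review" has no successor; the backward pass anchors it to the overall project finish, so the float reflects a missing logic link rather than genuine schedule flexibility--the same pattern we observed in the IFT schedule.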
Credible: Schedule characteristic or best practice: Verifying that the schedule is traceable horizontally and vertically; Verifies that the schedule is (1) horizontally traceable--that is, it reflects the order of events necessary to achieve aggregated products or outcomes--and (2) vertically traceable--that is, that activities in varying levels of the schedule map to one another and key dates presented to management in periodic briefings are in sync with the schedule; Our assessment: IFT: Substantially met; The schedule exhibited vertical traceability, as we were able to match select forecasted dates between the detailed schedule and higher-level management briefing slides. We also found the schedule exhibited horizontal traceability for the most part--that is, it links products and outcomes with sequenced activities. We tested the horizontal traceability of the schedule network by extending the durations of some activities by extreme amounts and then observing the impact on the overall network. The network responded to the drastic changes, but in one example a delay did not carry through the schedule because of dangling logic (as noted above in "Sequencing all activities"). Unless the schedule is horizontally traceable, activities whose durations are greatly extended will have no effect on key milestones, according to best practices; RVSS: Substantially met; The schedule exhibited vertical traceability because we were able to match select dates between the detailed schedule and higher-level management briefing slides. We tested the horizontal traceability of the schedule network by extending the durations of some activities by extreme amounts and then observing the impact on the overall network. In each case, the network responded appropriately to changes; MSC: Partially met; The schedule exhibited vertical traceability because we were able to trace key production delivery dates between the detailed schedule and higher-level management briefing slides. We tested the horizontal traceability of the schedule network by extending the durations of some activities by extreme amounts and then observing the impact on the overall network. We found that the MSC schedule responds to delays in the network in some instances but not in others. For example, we extended the duration of a vendor delivery activity. While it delayed planned activities, such as a planned operational readiness review and post-implementation review, it had no impact on the subsequent vendor deliveries because of the way that vendor delivery milestones were constrained, as discussed above in "Sequencing all activities." According to best practices, unless the schedule is horizontally traceable, activities whose durations are greatly extended will have no effect on key milestones. Schedule characteristic or best practice: Conducting a schedule risk analysis; Conducts a schedule risk analysis to predict a level of confidence in meeting the program's completion date; Our assessment: IFT: Minimally met; Program officials stated they conducted a schedule risk analysis in December 2011, which encompassed the scheduled activities from request for proposal to contract award. According to the documentation OTIA provided us and the results of our analysis, the schedule risk analysis did not describe how much contingency reserve (that is, time held in reserve for potential delays) was suggested, identify the paths or activities most likely to delay the project, or indicate whether correlation was accounted for between tasks.
Officials stated that all remaining durations were given three-point distributions based on global +/- percentage factors. However, the schedule risk analysis does not appear to have been based on any connection between risks and specific activities. Officials stated that the risk register was not used to identify risk factors for the schedule risk analysis, project risks were not prioritized based on results of the schedule risk analysis, and contingency was not selected based on the results of the schedule risk analysis. A schedule risk analysis is used to select an amount of contingency to reach the appropriate confidence level for a successful--that is, on-time--project completion date, according to best practices. In the absence of a complete schedule risk analysis, we could not determine the likelihood of meeting the project's completion date, how much schedule risk contingency is needed to provide an acceptable level of certainty for completion by a specific date, the risks most likely to delay the project, how much contingency reserve each risk requires, or the paths or activities that are most likely to delay the project; RVSS: Minimally met; Program officials stated that the last schedule risk analysis was conducted in February 2013 and encompassed the schedule activities through contract award. According to the documentation of the findings and results of this analysis that OTIA provided us, remaining durations of activities were varied by global +/- percentage factors rather than on the basis of any connection to specific risks. Officials stated that the risk register was not used to identify risk factors for the schedule risk analysis, project risks were not prioritized based on results of the schedule risk analysis, and contingency was not selected based on the results of the schedule risk analysis. As previously stated, a schedule risk analysis is used to select an amount of contingency to reach the appropriate confidence level for a successful--that is, on-time--project completion date, according to best practices. In the absence of a complete schedule risk analysis, we could not determine the likelihood of meeting the project's completion date, how much schedule risk contingency is needed to provide an acceptable level of certainty for completion by a specific date, the risks most likely to delay the project, how much contingency reserve each risk requires, or the paths or activities that are most likely to delay the project; MSC: Not met; A schedule risk analysis was not performed for the schedule because, according to program officials, the program began in 2009 and the schedule risk analysis tool was not available for OTIA use until October 2011. Controlled: Schedule characteristic or best practice: Updating the schedule with actual progress and logic; Updates periodically using actual progress and logic to realistically forecast dates for program activities; Our assessment: IFT: Substantially met; Our analysis showed that the schedule is well maintained, and the status date of March 13, 2013, was current as of the date the schedule was requested. We found 4 activities in progress, 2 of which were critical. We found no out-of-sequence activities and no date anomalies. However, no schedule narrative accompanies the weekly schedule update because there is no agency requirement to do so. According to best practices, all changes made to the schedule during statusing should be documented, and salient changes should be justified along with their likely effect on future planned activities.
Moreover, best practices state that a schedule narrative should accompany the updated schedule to provide decision makers and auditors with a log of changes and their effect, if any, on the schedule timeframe; RVSS: Substantially met; Our analysis showed that the schedule is well maintained, and the status date of March 13, 2013, was current as of the date the schedule was requested. We found 6 activities in progress. We found no out-of-sequence activities and no date anomalies. However, no schedule narrative accompanies the weekly schedule update. According to program officials, a schedule narrative is not a requirement for the government's schedule. However, OTIA plans to impose this requirement on the technology vendor that is awarded the contract; MSC: Partially met; Our analysis showed that the schedule is well maintained, and the status date of March 13, 2013, was current as of the date the schedule was requested. However, we found several date anomalies in the schedule. According to our analysis, the MSC schedule contains a total of 13 activities that have planned start or finish dates in the past without actual start or finish dates (8 have starts in the past with no actual start, and 5 have finish dates in the past but no actual finish--although 1 activity is planned to finish on the status date). If these activities have not started, according to best practices, their planned start dates should be at least the current status date, as activities cannot be planned to start in the past. Further, according to best practices, if the activities have not actually finished, their planned finish dates should be no earlier than the status date. Our analysis showed that the MSC schedule does not contain activities with actual start or finish dates in the future. We also found 2 activities in progress, though neither one is critical. At least 1 in-progress activity should be critical because the critical path must be continuous, according to best practices. Schedule characteristic or best practice: Maintaining a baseline schedule; Maintains a baseline schedule to measure, monitor, and report the program's progress; Our assessment: IFT: Minimally met; Officials stated that the schedule does not currently have a baseline. A baseline was originally set in July 2011 as the pre-award baseline, but the program has been delayed since then and no changes have been made to the baseline. Further, we found the schedule does not currently have a baseline schedule document. Officials also stated that the IFT Acquisition Program Baseline serves as the baseline schedule document. However, the Acquisition Program Baseline and related guidance present an overview of OTIA schedule policy rather than assumptions specific to individual program schedules. In addition, according to best practices, a baseline schedule document is a single document that describes, among other things, the organization of the schedule; the logic of the network; the basic approach to managing resources; the schedule's unique features; and justification for lags, date constraints, and long activity durations. According to best practices, thorough documentation is important for validating and defending a baseline schedule. Thorough documentation also helps with analyzing changes in the program schedule and identifying the reasons for variances between estimates; RVSS: Minimally met; According to program officials, the baseline schedule is the basis for measuring the RVSS schedule efficiency.
However, our analysis could not determine a completely valid baseline within the schedule file provided. The latest baseline dates in the file were set in September 2012, or 603 calendar days after the start date of the project kick-off milestone. In addition, we found 154 activities (25 percent) in the schedule that did not have baseline start or finish dates. According to best practices, a formally established baseline schedule to measure current performance against can help managers identify or mitigate the effect of unfavorable performance. We found the schedule does not currently have a baseline schedule document. Program officials stated that the RVSS Acquisition Program Baseline, which was approved in September 2012, serves as the baseline schedule document. However, the Acquisition Program Baseline and related guidance present an overview of OTIA schedule policy rather than assumptions specific to individual program schedules. In addition, according to best practices, a baseline schedule document is a single document that describes, among other things, the organization of the schedule; the logic of the network; the basic approach to managing resources; the schedule's unique features; and justification for lags, date constraints, and long activity durations; MSC: Not met; Program officials stated that a schedule baseline document does not exist and a baseline schedule for the MSC was not established. As previously stated, a baseline schedule document is a single document that describes, among other things, the organization of the schedule; the logic of the network; the basic approach to managing resources; the schedule's unique features; and justification for lags, date constraints, and long activity durations. According to best practices, without a formally established baseline schedule to measure current performance against, management cannot identify or mitigate the effect of unfavorable performance. Source: GAO analysis of CBP and OTIA data. Notes: Not met--OTIA provided no evidence that satisfies any of the criterion. Minimally met--OTIA provided evidence that satisfies a small portion of the criterion. Partially met--OTIA provided evidence that satisfies about half of the criterion. Substantially met--OTIA provided evidence that satisfies a large portion of the criterion. Met--OTIA provided complete evidence that satisfies the entire criterion. [A] For the remaining approximately 1 percent of the schedule activities, we could not determine durations. [End of table] [End of section] Appendix IV: Summary Statistics on the Reporting of Asset Assists Data for Apprehensions and Seizures across the Tucson and Yuma Sectors from Fiscal Year 2010 through June 2013: Numbers of Apprehension and Seizure Events: DHS's Enforcement Integrated Database (EID) includes a field that enables Border Patrol agents to identify whether a technological or nontechnological asset assisted in the apprehension of illegal entrants or the seizure of drugs or other contraband.[Footnote 82] This appendix provides summary statistics on the reporting of asset assists by Border Patrol agents in the apprehension of illegal entrants and seizure of drugs and other contraband across the Tucson and Yuma sectors from fiscal year 2010 through June 2013.[Footnote 83] As mentioned earlier, because CBP does not require data on asset assists to be recorded and tracked within the EID, our analysis provides information on the extent to which asset assists were recorded during the specified time frame across the Tucson and Yuma sectors.
Because agents are not required to report asset assists data within the EID, when the asset assist field is left blank, it is unclear whether an asset was not a contributing factor in the apprehension or seizure or whether an asset was a contributing factor but was not recorded by agents. Accordingly, for the purposes of our analysis, we refer to instances in which asset assists were not recorded as "unreported asset assists."[Footnote 84] Moreover, because of potential differences in the collection and reporting of data on asset assists across sectors and over time, and differences in the types of surveillance technologies deployed across sectors, conclusions about the differences in reported asset assists across sectors and differences within sectors over time cannot be made.[Footnote 85] As shown in tables 8 and 9, the 166,976 apprehension events that occurred in the Tucson sector from fiscal year 2010 through June 2013 resulted in the apprehension of 549,357 illegal entrants, and 20,322 seizure events that occurred in the Tucson sector over the same period resulted in the seizure of 21,973 items.[Footnote 86] The 8,237 apprehension events and 6,828 seizure events occurring in the Yuma sector during the same time period resulted in the apprehension of 17,580 illegal entrants and 7,892 seized items. The two tables also show that the percentages of reported asset assists for apprehension events and seizure events, and for the resulting apprehensions and seizures, differed across the Tucson and Yuma sectors and, within both sectors, over time.[Footnote 87] Table 8: Numbers of Apprehension Events and Apprehensions, and Percentages with Unreported Asset Assists, Technology Asset Assists, and Other Asset Assists in the Tucson and Yuma Sectors, Fiscal Year 2010 through June 2013: 1. Number of apprehension events; Tucson sector: 2010: 55,905; 2011: 38,110; 2012: 39,378; 2013: 33,583; Total: 166,976; Yuma sector: 2010: 2,569; 2011: 2,096; 2012: 2,188; 2013: 1,384; Total: 8,237. 2. Percentage of events with unreported asset assists; Tucson sector: 2010: 82.1%; 2011: 72.8%; 2012: 60.8%; 2013: 53.3%; Total: 69.2%; Yuma sector: 2010: 92.3%; 2011: 87.0%; 2012: 85.0%; 2013: 79.2%; Total: 86.8%. 3. Percentage of events with technology asset assists; Tucson sector: 2010: 7.7%; 2011: 13.2%; 2012: 21.2%; 2013: 25.7%; Total: 15.8%; Yuma sector: 2010: 2.1%; 2011: 3.7%; 2012: 3.7%; 2013: 5.1%; Total: 3.4%. 4. Percentage of events with other asset assists; Tucson sector: 2010: 10.2%; 2011: 13.9%; 2012: 17.9%; 2013: 21.0%; Total: 15.1%; Yuma sector: 2010: 5.6%; 2011: 9.3%; 2012: 11.3%; 2013: 15.7%; Total: 9.8%. 5. Number of apprehensions; Tucson sector: 2010: 211,417; 2011: 120,701; 2012: 118,675; 2013: 98,564; Total: 549,357; Yuma sector: 2010: 5,414; 2011: 4,480; 2012: 4,440; 2013: 3,246; Total: 17,580. 6. Percentage of apprehensions with unreported asset assists; Tucson sector: 2010: 72.1%; 2011: 63.2%; 2012: 49.1%; 2013: 38.9%; Total: 59.2%; Yuma sector: 2010: 89.4%; 2011: 81.9%; 2012: 81.6%; 2013: 73.7%; Total: 82.6%. 7. Percentage of apprehensions with technology asset assists; Tucson sector: 2010: 12.7%; 2011: 17.8%; 2012: 26.7%; 2013: 33.7%; Total: 20.6%; Yuma sector: 2010: 3.2%; 2011: 4.7%; 2012: 5.2%; 2013: 8.1%; Total: 5.0%. 8. Percentage of apprehensions with other asset assists; Tucson sector: 2010: 15.1%; 2011: 19.0%; 2012: 24.2%; 2013: 27.4%; Total: 20.1%; Yuma sector: 2010: 7.4%; 2011: 13.4%; 2012: 13.2%; 2013: 18.2%; Total: 12.4%. Source: GAO analysis of U.S.
Customs and Border Protection data. Note: For the purposes of our analysis, we define "apprehension event" as an occasion on which Border Patrol agents make an apprehension of an illegal entrant. The event is recorded in the Enforcement Integrated Database. An event can involve the apprehension of one or multiple illegal entrants. Additionally, for the purposes of this analysis, we identify known asset assists involving technology as those in which the Border Patrol's "asset assists" data field identifies technological assets in which Border Patrol has made investments that are included as part of the Arizona Border Surveillance Technology Plan, such as Cameras or Mobile Surveillance Systems. According to Border Patrol officials, agents identifying "Cameras" are most likely attributing the asset assist to either Secure Border Initiative Network or Remote Video Surveillance Systems towers. [End of table] Table 9: Numbers of Seizure Events and Seizures, and Percentages with Unreported Asset Assists, Technology Asset Assists, and Other Asset Assists in the Tucson and Yuma Sectors, Fiscal Year 2010 through June 2013: 1. Number of seizure events; Tucson sector: 2010: 5,119; 2011: 4,892; 2012: 5,284; 2013: 5,027; Total: 20,322; Yuma sector: 2010: 2,528; 2011: 2,129; 2012: 1,280; 2013: 891; Total: 6,828. 2. Percentage of seizure events with unreported asset assists; Tucson sector: 2010: 38.2%; 2011: 34.3%; 2012: 29.0%; 2013: 24.7%; Total: 31.5%; Yuma sector: 2010: 78.0%; 2011: 69.0%; 2012: 52.3%; 2013: 51.4%; Total: 66.9%. 3. Percentage of seizure events with technology asset assists; Tucson sector: 2010: 29.4%; 2011: 31.3%; 2012: 32.8%; 2013: 37.0%; Total: 32.6%; Yuma sector: 2010: 0.5%; 2011: 0.7%; 2012: 0.3%; 2013: 0.6%; Total: 0.5%. 4. Percentage of seizure events with other asset assists; Tucson sector: 2010: 32.4%; 2011: 34.5%; 2012: 38.2%; 2013: 38.3%; Total: 35.8%; Yuma sector: 2010: 21.6%; 2011: 30.2%; 2012: 47.4%; 2013: 48.0%; Total: 32.6%. 5. Number of seizures; Tucson sector: 2010: 5,383; 2011: 5,378; 2012: 5,753; 2013: 5,459; Total: 21,973; Yuma sector: 2010: 2,823; 2011: 2,417; 2012: 1,516; 2013: 1,136; Total: 7,892. 6. Percentage of seizures with unreported asset assists; Tucson sector: 2010: 38.2%; 2011: 34.5%; 2012: 29.5%; 2013: 25.4%; Total: 31.8%; Yuma sector: 2010: 77.5%; 2011: 68.6%; 2012: 52.8%; 2013: 51.7%; Total: 66.3%. 7. Percentage of seizures with technology asset assists; Tucson sector: 2010: 28.5%; 2011: 29.8%; 2012: 31.1%; 2013: 35.6%; Total: 31.3%; Yuma sector: 2010: 0.4%; 2011: 0.7%; 2012: 0.3%; 2013: 0.5%; Total: 0.5%. 8. Percentage of seizures with other asset assists; Tucson sector: 2010: 33.3%; 2011: 35.8%; 2012: 39.4%; 2013: 38.9%; Total: 36.9%; Yuma sector: 2010: 22.1%; 2011: 30.7%; 2012: 47.0%; 2013: 47.8%; Total: 33.2%. Source: GAO analysis of U.S. Customs and Border Protection (CBP) data. Note: For the purposes of our analysis, we define "seizure event" as an occasion on which Border Patrol agents make a seizure of drugs or other contraband. The event is recorded in the Enforcement Integrated Database. An event can involve the seizure of one or multiple illegal items. Additionally, for the purposes of this analysis, we identify known asset assists involving technology as those in which the Border Patrol's "asset assists" data field identifies technological assets in which Border Patrol has made investments that are included as part of the Arizona Border Surveillance Technology Plan, such as Cameras or Mobile Surveillance Systems.
According to Border Patrol officials, agents identifying "Cameras" are most likely attributing the asset assist to either Secure Border Initiative Network or Remote Video Surveillance Systems towers. [End of table] Reported Asset Assists by Type of Asset for Apprehensions and Seizures: Figure 13 shows that the percentages of apprehension events and apprehensions in the Tucson sector for which asset assists were not reported decreased from fiscal year 2010 through June 2013. Figure 14 shows that in the Yuma sector, the percentages of apprehension events and apprehensions for which asset assists were not reported were higher than in the Tucson sector, but similarly decreased over that time period. For the first three quarters of fiscal year 2013, asset assist information was not reported for more than one-half of the apprehension events in the Tucson sector and nearly four-fifths of the apprehension events in the Yuma sector. Figure 13: Percentage of Asset Assists Reported across the Tucson Sector for Apprehension Events and Apprehensions, Fiscal Year 2010 through June 2013: [Refer to PDF for image: 2 vertical bar graphs] Apprehension events (N = 166,976): Fiscal year: 2010; Asset assist not reported: 82.1%; Technology asset assist reported: 7.7%; Other asset assist reported: 10.2%. Fiscal year: 2011; Asset assist not reported: 72.8%; Technology asset assist reported: 13.2%; Other asset assist reported: 13.9%. Fiscal year: 2012; Asset assist not reported: 60.8%; Technology asset assist reported: 21.2%; Other asset assist reported: 17.9%. Fiscal year: 2013; Asset assist not reported: 53.3%; Technology asset assist reported: 25.7%; Other asset assist reported: 21%. Apprehensions (N = 549,357): Fiscal year: 2010; Asset assist not reported: 72.1%; Technology asset assist reported: 12.7%; Other asset assist reported: 15.1%. Fiscal year: 2011; Asset assist not reported: 63.2%; Technology asset assist reported: 17.8%; Other asset assist reported: 19%. Fiscal year: 2012; Asset assist not reported: 49.1%; Technology asset assist reported: 26.7%; Other asset assist reported: 24.2%. Fiscal year: 2013; Asset assist not reported: 38.9%; Technology asset assist reported: 33.7%; Other asset assist reported: 27.4%. Source: GAO analysis of Customs and Border Protection data. Note: Numbers may not add to 100 because of rounding. For the purposes of this analysis, we identify known asset assists involving technology as those in which the Border Patrol's "asset assists" data field identifies technological assets in which Border Patrol has made investments that are included as part of the Arizona Border Surveillance Technology Plan, such as Cameras or Mobile Surveillance Systems. According to Border Patrol officials, agents identifying "Cameras" are most likely attributing the asset assist to either Secure Border Initiative Network or Remote Video Surveillance Systems towers. Because agents may identify assists from more than one type of asset within the asset assists data field, assists from technological assets could be the result of technology assets alone or in some combination with other asset assists. [End of figure] Figure 14: Percentage of Asset Assists Reported across the Yuma Sector for Apprehension Events and Apprehensions, Fiscal Year 2010 through June 2013: [Refer to PDF for image: 2 vertical bar graphs] Apprehension events (N = 8,237): Fiscal year: 2010; Asset assist not reported: 92.3%; Technology asset assist reported: 2.1%; Other asset assist reported: 5.6%.
Fiscal year: 2011; Asset assist not reported: 87%; Technology asset assist reported: 3.7%; Other asset assist reported: 9.3%. Fiscal year: 2012; Asset assist not reported: 85%; Technology asset assist reported: 3.7%; Other asset assist reported: 11.3%. Fiscal year: 2013; Asset assist not reported: 79.2%; Technology asset assist reported: 5.1%; Other asset assist reported: 15.7%. Apprehensions (N = 17,580): Fiscal year: 2010; Asset assist not reported: 89.4%; Technology asset assist reported: 3.2%; Other asset assist reported: 7.4%. Fiscal year: 2011; Asset assist not reported: 81.9%; Technology asset assist reported: 4.7%; Other asset assist reported: 13.4%. Fiscal year: 2012; Asset assist not reported: 81.6%; Technology asset assist reported: 5.2%; Other asset assist reported: 13.2%. Fiscal year: 2013; Asset assist not reported: 73.7%; Technology asset assist reported: 8.1%; Other asset assist reported: 18.2%. Source: GAO analysis of Customs and Border Protection data. Note: For the purposes of this analysis, we identify known asset assists involving technology as those in which the Border Patrol's "asset assists" data field identifies technological assets in which Border Patrol has made investments that are included as part of the Arizona Border Surveillance Technology Plan, such as Cameras or Mobile Surveillance Systems. According to Border Patrol officials, agents identifying "Cameras" are most likely attributing the asset assist to either Secure Border Initiative Network or Remote Video Surveillance Systems towers. Because agents may identify assists from more than one type of asset within the asset assists data field, assists from technological assets could be the result of technology assets alone or in some combination with other asset assists. [End of figure] Figures 13 and 14 also show that the percentages of apprehension events and apprehensions for which technological asset assists and other asset assists were reported increased during that period in both sectors. Because it is difficult to determine whether unreported asset assists reflect events with no asset assist or events in which an assist occurred but was not recorded, it is also difficult to determine whether the increases in the percentages of apprehension events and apprehensions involving technology asset assists and other asset assists reflect real increases or simply fewer asset assists going unreported. The higher percentage of apprehension events and apprehensions involving technology asset assists in the Tucson sector relative to the Yuma sector may also be partly due to differences between the two sectors in unreported asset assists and in the number of technology assets deployed. Figure 15 shows that for seizure events and seizures in the Tucson sector from fiscal year 2010 through June 2013, as was the case for apprehension events and apprehensions, the percentages for which asset assists were unreported declined, while the percentages for which technology asset assists and other asset assists were reported increased. Unreported asset assists were lower for seizures than for apprehensions in the Tucson sector, and the changes with respect to the percentages of seizures involving unreported asset assists, technology asset assists, and other asset assists in the Tucson sector were not as pronounced as the changes with respect to apprehensions.
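The percentage breakdowns shown in figures 13 through 16 can be reproduced from event-level records with a simple tabulation. The following Python sketch is a hypothetical illustration--the record layout and sample values are assumptions, not the actual EID schema--of one plausible categorization consistent with the notes to the figures: an event whose asset assist field is blank is counted as unreported, an event naming any Plan technology asset is counted as a technology assist (even if other assets also contributed), and every other event is counted as an other asset assist:

    # Hypothetical tabulation of asset-assist reporting; the record
    # layout and sample data are illustrative, not the actual EID schema.
    from collections import Counter

    TECHNOLOGY_ASSETS = {"Cameras", "Mobile Surveillance Systems"}

    # An empty "assists" list models a blank asset assist field,
    # that is, an unreported asset assist.
    events = [
        {"sector": "Tucson", "assists": []},
        {"sector": "Tucson", "assists": ["Cameras"]},
        {"sector": "Tucson", "assists": ["Canine Team"]},
        {"sector": "Yuma",   "assists": []},
        {"sector": "Yuma",   "assists": ["Canine Team", "Cameras"]},
    ]

    def classify(event):
        assists = event["assists"]
        if not assists:
            return "unreported"
        # An event naming any technology asset counts as a technology
        # assist, even when nontechnology assets also contributed.
        if any(a in TECHNOLOGY_ASSETS for a in assists):
            return "technology"
        return "other"

    counts = Counter(classify(e) for e in events)
    total = sum(counts.values())
    for category in ("unreported", "technology", "other"):
        share = 100 * counts[category] / total
        print(f"{category:10s} {counts[category]:2d} events ({share:.1f}%)")

Run on the five sample events, the sketch prints 40.0 percent unreported, 40.0 percent technology, and 20.0 percent other; applied to actual event-level records, the same tabulation, grouped by sector and fiscal year, would yield breakdowns in the form shown in the figures.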
Figure 15: Percentage of Asset Assists Reported across the Tucson Sector for Seizure Events and Seizures, Fiscal Year 2010 through June 2013: [Refer to PDF for image: 2 vertical bar graphs] Seizure events (N = 20,322): Fiscal year: 2010; Asset assist not reported: 38.2%; Technology asset assist reported: 29.4%; Other asset assist reported: 32.4%. Fiscal year: 2011; Asset assist not reported: 34.3%; Technology asset assist reported: 31.3%; Other asset assist reported: 34.5%. Fiscal year: 2012; Asset assist not reported: 29%; Technology asset assist reported: 32.8%; Other asset assist reported: 38.2%. Fiscal year: 2013; Asset assist not reported: 24.7%; Technology asset assist reported: 37%; Other asset assist reported: 38.3%. Seizures (N = 21,973): Fiscal year: 2010; Asset assist not reported: 38.2%; Technology asset assist reported: 28.5%; Other asset assist reported: 33.3%. Fiscal year: 2011; Asset assist not reported: 34.5%; Technology asset assist reported: 29.8%; Other asset assist reported: 35.8%. Fiscal year: 2012; Asset assist not reported: 29.5%; Technology asset assist reported: 31.1%; Other asset assist reported: 39.4%. Fiscal year: 2013; Asset assist not reported: 25.4%; Technology asset assist reported: 35.6%; Other asset assist reported: 38.9%. Source: GAO analysis of Customs and Border Protection data. Note: Numbers may not add to 100 because of rounding. For the purposes of this analysis, we identify known asset assists involving technology as those in which the Border Patrol's "asset assists" data field identifies technological assets in which Border Patrol has made investments that are included as part of the Arizona Border Surveillance Technology Plan, such as Cameras or Mobile Surveillance Systems. According to Border Patrol officials, agents identifying "Cameras" are most likely attributing the asset assist to either Secure Border Initiative Network or Remote Video Surveillance Systems towers. Because agents may identify assists from more than one type of asset within the asset assists data field, assists from technological assets could be the result of technology assets alone or in some combination with other asset assists. [End of figure] Figure 16 shows that in the Yuma sector over the same period, the percentages for which asset assists were unreported declined, and the percentage for which other (nontechnology) assets were reported increased. In the Yuma sector, the percentage of technology asset assists was small (less than 1 percent) in each of the fiscal years, and there was no discernible trend in the percentage of technology asset assists. As with apprehensions, it is difficult to determine how much of the increase in technology asset assists in the Tucson sector and other asset assists in both sectors reflects the increased use of technology and other assets or changes in the reporting of asset assists. Figure 16: Percentage of Asset Assists Reported across the Yuma Sector for Seizure Events and Seizures, Fiscal Year 2010 through June 2013: [Refer to PDF for image: 2 vertical bar graphs] Seizure events (N = 6,828): Fiscal year: 2010; Asset assist not reported: 78%; Technology asset assist reported: 0.5%; Other asset assist reported: 21.6%. Fiscal year: 2011; Asset assist not reported: 69%; Technology asset assist reported: 0.7%; Other asset assist reported: 30.2%. Fiscal year: 2012; Asset assist not reported: 52.3%; Technology asset assist reported: 0.3%; Other asset assist reported: 47.4%.
Fiscal year: 2013; Asset assist not reported: 51.4%; Technology asset assist reported: 0.6%; Other asset assist reported: 48%. Seizures (N = 7,892): Fiscal year: 2010; Asset assist not reported: 77.5%; Technology asset assist reported: 0.4%; Other asset assist reported: 22.1%. Fiscal year: 2011; Asset assist not reported: 68.6%; Technology asset assist reported: 0.7%; Other asset assist reported: 30.7%. Fiscal year: 2012; Asset assist not reported: 52.8%; Technology asset assist reported: 0.3%; Other asset assist reported: 47%. Fiscal year: 2013; Asset assist not reported: 51.7%; Technology asset assist reported: 0.5%; Other asset assist reported: 47.8%. Source: GAO analysis of Customs and Border Protection data. Note: Numbers may not add to 100 because of rounding. For the purposes of this analysis, we identify known asset assists involving technology as those in which the Border Patrol's "asset assists" data field identifies technological assets in which Border Patrol has made investments that are included as part of the Arizona Border Surveillance Technology Plan, such as Cameras or Mobile Surveillance Systems. According to Border Patrol officials, agents identifying "Cameras" are most likely attributing the asset assist to either Secure Border Initiative Network or Remote Video Surveillance Systems towers. Because agents may identify assists from more than one type of asset within the asset assists data field, assists from technological assets could be the result of technology assets alone or in some combination with other asset assists. [End of figure] [End of section] Appendix V: Mission Benefits Identified by U.S. Customs and Border Protection of Its Surveillance Technologies: Table 10 summarizes the mission benefits to be derived from each of the technologies to be deployed as part of the Arizona Border Surveillance Technology Plan as outlined in CBP's Multi-Year Investment and Management Plan for Border Security Fencing, Infrastructure, and Technology for Fiscal Years 2014-2017.[Footnote 88] According to CBP officials, each of the seven technologies deployed or planned for deployment as part of the Plan will increase situational awareness and enhance the ability of Border Patrol agents to detect, identify, deter, and respond to threats along the border. Table 10: Mission Benefits Identified by U.S. Customs and Border Protection (CBP) of the Technologies Deployed or Planned for Deployment under the Arizona Border Surveillance Technology Plan: Technology: Agent Portable Surveillance Systems (APSS); Mission benefits: * Improved situational awareness; * Agility; * Rapid response; * Agent safety; Summary of CBP-identified mission benefits: The APSS capabilities provide situational awareness and a better understanding of cross-border flows and activities. The portable, rugged ground-sensing radar system can be deployed and operated on short notice in the harshest border terrain, exemplifying APSS's agility. Technology: Integrated Fixed Towers (IFT); Mission benefits: * Improved situational awareness; * Agent safety; Summary of CBP-identified mission benefits: In threat areas where mobile surveillance systems cannot be a viable or long-term solution, IFTs equipped with sensor suites and communication equipment can be deployed to provide automated, persistent wide area surveillance for the detection, tracking, identification, and classification of illegal entries.
When multiple IFT units are integrated, Border Patrol expects to increase situational awareness of, and be able to monitor, a larger area of interest, whereas previously multiple agents exposed to threats were required to provide coverage of the same area. Technology: Mobile Surveillance Capability (MSC); Mission benefits: * Agility; * Rapid response; * Agent safety; Summary of CBP-identified mission benefits: The purpose of MSC is to provide mobile area surveillance in remote rural areas. This allows CBP to adjust the location of its surveillance capabilities to keep pace with the ever-changing border threat. The capabilities of MSC are detection, identification, and tracking of items of interest until an event successfully culminates in a law enforcement conclusion. Technology: Mobile Video Surveillance Systems (MVSS); Mission benefits: * Improved situational awareness; * Agility; * Rapid response; * Agent safety; Summary of CBP-identified mission benefits: The MVSS enhances CBP's capability to provide persistent video surveillance and situational awareness, resulting in timely and effective responses from law enforcement in predominantly rural, remote areas. The system's agility provides Border Patrol with dynamic surveillance capability, demonstrated by the ability to relocate video surveillance assets on the basis of changes in threat patterns and behavior and to provide video surveillance coverage (as needed) between fixed tower assets. Technology: Remote Video Surveillance Systems (RVSS); Mission benefits: * Agent safety; * Improved situational awareness; * Intelligence analysis; * Rapid response; Summary of CBP-identified mission benefits: RVSS cameras provide the persistent ground surveillance capability needed by Border Patrol agents to effectively deter, detect, track, identify, classify, and respond in a timely and effective manner to items of interest located along the U.S. borders. Additionally, an RVSS is to provide continuous monitoring of encounters, which supports another Border Patrol mission element--to ensure agent safety. RVSS is also to provide archival data of items of interest regarding incursions and encounters to support analysis, intelligence activities, and incident resolution. Technology: Thermal Imaging Devices (TID); Mission benefits: * Improved situational awareness; * Agent safety; * Rapid response; Summary of CBP-identified mission benefits: The outcome and mission benefit is increased situational awareness for Border Patrol operators, which can contribute to more timely and effective responses. In support of the physical security and safety of Border Patrol agents, the corresponding Remote Viewing Kit--a device that, when combined with TIDs, allows agents to remotely control patrol operations--is expected to reduce agent fatigue during long-term deployment of the long-range TID. Technology: Unattended Ground Sensors (UGS) and Imaging Sensors (IS); Mission benefits: * Improved situational awareness; * Rapid response; * Agent safety; Summary of CBP-identified mission benefits: UGSs are to provide situational awareness and persistent surveillance. Along with the tower-based surveillance systems that UGSs are intended to augment, the sensors are to increase the Border Patrol's strategic intelligence. The information gathered from these combined systems is to contribute to a greater understanding of border activities. Source: GAO analysis of CBP information.
[End of table] [End of section] Appendix VI: Comments from the Department of Homeland Security: U.S. Department of Homeland Security: Washington, DC 20528: February 25, 2014: Rebecca Gambler: Director, Homeland Security and Justice Issues: U.S. Government Accountability Office: 441 G Street, NW: Washington, DC 20548: Re: Draft Report GAO-14-368, "Arizona Border Surveillance Technology Plan: Additional Actions Needed to Strengthen Management and Assess Effectiveness" Dear Ms. Gambler: Thank you for the opportunity to review and comment on this draft report. The U.S. Department of Homeland Security (DHS) appreciates the U.S. Government Accountability Office's (GAO's) work in planning and conducting its review and issuing this report. The Department is pleased to note GAO's recognition that U.S. Customs and Border Protection (CBP) identified mission benefits for technologies under the Arizona Border Surveillance Technology Plan (the Plan) and has taken steps to assess performance. Securing the Arizona portion of the southwest border that the United States shares with Mexico--while keeping the illegal flow of people and illicit contraband under control--is a top priority for CBP. It is important to note, however, that the Plan, unlike the Secure Border Initiative Network (SBInet), is a portfolio of non-developmental independent programs, which does not require an overarching integrated master schedule (IMS). CBP is committed to ensuring effective use of program funding for the Plan by following through with approved program implementation strategies. The draft report contained six recommendations, four with which the Department concurs (Recommendations 1, 3, 5, and 6) and two with which it non-concurs (Recommendations 2 and 4). Specifically, GAO recommended that the Commissioner of CBP: Recommendation 1: When updating the schedules for the Integrated Fixed Tower (IFT), Remote Video Surveillance System (RVSS), and Mobile Surveillance Capabilities (MSC) programs, ensure that scheduling best practices, as outlined in our schedule assessment guide, are applied to the three programs' schedules. Response: Concur. CBP's Office of Technology Innovation and Acquisition (OTIA) will ensure that GAO's best practices in scheduling will be applied as far as practical to the IFT, RVSS, and MSC programs when updating the respective schedules. OTIA will provide GAO program schedules, as interim milestones, when they are updated, which normally occurs approximately 6 months after each contract is awarded. Estimated Completion Date (ECD): July 31, 2015. Recommendation 2: Develop and maintain an Integrated Master Schedule for the Plan that is consistent with scheduling best practices. Response: Non-concur. Maintaining an IMS for the Plan in one file undermines the DHS-approved implementation strategy for the individual programs making up the Plan. The collection of technology programs, referred to as a "Plan" for the sake of clarity, serves as a communication tool. There is no requirement for an IMS to connect everything to procurement. In fact, a key element of the Plan has been the disaggregation of technology procurements, consistent with the lessons learned from the SBInet experience. Among other reasons, the SBInet strategy was not entirely successful because it attempted to be a single, comprehensive program in an environment where that was not appropriate.
CBP believes implementation of this recommendation would essentially create a large, aggregated program, similar to SBInet, and effectively create an aggregated "system of systems." CBP believes its strategy of disaggregation has been effective, as it has reduced overall risk and significantly reduced the cost of procurements, which it manages and coordinates among the individual programs. Each program within the Plan has an individual schedule representing the Life-cycle management of each program. However, forcing those linkages artificially into a single IMS for a plan directly contradicts lessons learned and the approved implementation strategy. Accordingly, CBP requests that this recommendation be considered resolved and closed. Recommendation 3: When updating Life-cycle Cost Estimates for the IFT and RVSS programs, verify the Life-cycle Cost Estimates with independent cost estimates and reconcile any differences. Response: Concur. While OTIA did not obtain a traditional independent cost estimate for the IFT and RVSS programs, the developed Life-cycle cost estimates were meant to be conservative in managing program risk. The estimated Life-cycle costs to date are considerably less than originally projected, and OTIA anticipates the estimated program costs will be even lower by the time the IFT and RVSS contract awards are finalized. At this point, there is no benefit to the Federal Government in expending funds to obtain independent cost estimates to verify the Government's work. Assuming that costs realized, based on procurement results to date, continue to hold, there may be no requirement or value added in conducting full-blown updates with independent cost estimates. If that assumption changes, OTIA will complete updates giving consideration to independent cost estimates, as appropriate. OTIA will provide GAO with program cost updates as they become available, normally approximately 6 months after contracts are awarded. ECD: July 31, 2015. Recommendation 4: Revise the IFT Test and Evaluation Master Plan to more fully test the IFT program, before beginning full production, in the various environmental conditions in which IFTs will be used to determine operational effectiveness and suitability, in accordance with DHS acquisition guidance. Response: Non-concur. The revised IFT Test and Evaluation Master Plan (TEMP) of November 2013 is a tailored document developed in accordance with DHS's policy and constructed with DHS's Office of Science and Technology, CBP's Office of Border Patrol (OBP), and OTIA to ensure user needs are met. In fact, the TEMP includes tailored testing and user assessments that will provide much, if not all, of the insight contemplated by the apparent intent of this recommendation. The tailoring is entirely consistent with DHS policy as well as the Non-Developmental approach to acquisition. The approved non-developmental item (NDI) acquisition strategy was based on market surveys and observations during field use by other customers and the incorporation of actual system demonstrations conducted during source selection. This recommendation would drive the program away from its approved strategy, which is designed to reduce risk and cost while providing operationally effective systems. There is no requirement for expansive, formal Operational Test and Evaluation. OTIA's strategy mitigates the risks in a way that is consistent with the nature of the acquisition and without unnecessary added cost and bureaucracy.
In short, to re-write the TEMP to incorporate operational testing as recommended undermines and removes the benefits of the NDI strategy. The user test currently outlined in the TEMP will provide the operational user the information needed to validate the system requirements and operational characteristics. IFT has an approved Acquisition Decision Event-3 production decision. Once the initial system undergoes testing in accordance with the TEMP, OBP will make the determination regarding operational readiness prior to deploying additional systems. Accordingly, CBP requests this recommendation be considered resolved and closed. Recommendation 5: Require data on asset assists to be recorded and tracked within the Enforcement Integrated Database (EID), which contains data on apprehensions and seizures. Response: Concur. Although asset-assist data are tracked in the EID via e3 Processing, OBP is in the process of changing e3 Processing to allow improved reporting on asset assists for apprehensions and seizures. In the current configuration, the user has the option of opening a list of values and selecting the applicable assisting asset or leaving the field blank (indicating no asset use). The change will designate the data field as mandatory. Further, OBP will be altering the interaction between the Intelligent Computer Assisted Detection (ICAD) application and e3 Processing. OBP will also be able to link an ICAD event to an e3 event, which will greatly assist in attributing the contribution of assets to individual apprehensions and seizures. The Chief of U.S. Border Patrol will notify the appropriate offices of the mandate of changes to the e3 Processing application. ECD: December 31, 2014. Recommendation 6: Once data on asset assists are required to be recorded and tracked, analyze available data on apprehensions and seizures and technological assists, in combination with other relevant performance metrics or indicators as appropriate, to determine the contribution of surveillance technologies to its border security efforts. Response: Concur. In support of this recommendation--and in alignment with the DHS response to GAO's final report, "Border Patrol: Key Elements of New Strategic Plan Not Yet in Place to Inform Border Security Status and Resource Needs" (GAO-13-25)--OBP intends to create a plan of action with milestones to explore and develop a process that will answer the following question: How do different classes of technology, within a certain environment, contribute to the OBP mission? To address the question, OBP first will develop an initial set of quantitative and qualitative technology-related measures by September 30, 2014, as an interim milestone. In Fiscal Year (FY) 2015, as another interim milestone, OBP will gather baseline data for the developed measures as they apply to key border areas. By the end of FY 2015, OBP will use these data to begin their evaluation of the individual and collective contributions of specific technology assets as they relate to key strategic outcomes outlined in the response to GAO-13-25. OBP will mature this evaluation in FY 2016, and by the end of that fiscal year, measures associated with technology--along with other supporting measures--will assist them in determining levels of situational awareness in different areas of the border. ECD: September 30, 2015. Again, thank you for the opportunity to review and provide comments on this draft report. Technical comments were previously provided under separate cover.
Please feel free to contact me if you have any questions. We look forward to working with you in the future. Sincerely, Signed by: Jim H. Crumpacker: Director: Departmental GAO-OIG Liaison Office: [End of section] Appendix VII: GAO Contact and Staff Acknowledgments: GAO Contact: Rebecca Gambler, at (202) 512-8777 or GamblerR@gao.gov: Staff Acknowledgments: In addition to the contact named above, Jeanette Espinola (Assistant Director), David Alexander, Charles Bausell, Frances Cook, Katherine Davis, Joseph E. Dewechter, Jennifer Echard, Shannon Grabich, Yvette Gutierrez, Eric Hauswirth, Richard Hung, Jason Lee, Grant Mallie, Linda Miller, John Mingus, Anna Maria Ortiz, Karen Richey, Doug Sloane, Karl Seifert, Nate Tranquilli, Katherine Trimble, Jim Ungvarsky, and Michelle Woods made key contributions to this report. [End of section] Footnotes: [1] The SBInet fixed sensor towers were intended to transmit radar and camera information into a common operating picture at workstations manned at all times by U.S. Border Patrol agents. The SBInet Common Operating Picture was intended to provide uniform data through a command center environment to Border Patrol agents in the field and all DHS agencies, and to be interoperable with the equipment of DHS external stakeholders, such as local law enforcement. Tactical infrastructure includes pedestrian and vehicle fences, roads, and lighting. Ports of entry are officially designated places that provide for the arrival at, or departure from, the United States. [2] These systems were specifically deployed to the Tucson and Ajo stations within the Tucson sector of Arizona, and Border Patrol began using SBInet at the Tucson station in February 2010 and at the Ajo station in August 2010. [3] See, for example, GAO, Secure Border Initiative: DHS Needs to Reconsider Its Proposed Investment in Key Technology Program, [hyperlink, http://www.gao.gov/products/GAO-10-340] (Washington, D.C.: May 5, 2010), and Secure Border Initiative: DHS Needs to Address Significant Risks in Delivering Key Technology Investment, [hyperlink, http://www.gao.gov/products/GAO-08-1086] (Washington, D.C.: Sept. 22, 2008). [4] The IFT consists of towers with, among other things, ground surveillance radars and surveillance cameras mounted on fixed (that is, stationary) towers. The RVSS includes multiple color and infrared cameras mounted on monopoles, lattice towers, and buildings and differs from the IFT, among other things, in that the RVSS does not include radars. The MSC is a stand-alone, truck-mounted suite of radar and cameras that provides a display within the cab of the truck. [5] GAO, Arizona Border Surveillance Technology: More Information on Plans and Costs Is Needed before Proceeding, [hyperlink, http://www.gao.gov/products/GAO-12-22] (Washington, D.C.: Nov. 4, 2011). A Life-cycle Cost Estimate provides an exhaustive and structured accounting of all resources and associated cost elements required to develop, produce, deploy, and sustain a particular program. [6] Measures and key attributes are generally defined as part of the business case in order to explain how they contribute to the mission's benefits. See Office of Management and Budget, OMB Circular No. A-11, Part 7, Section 300, Planning, Budgeting, Acquisition, and Management of Capital Assets (Washington, D.C.: Executive Office of the President, July 2010).
[7] GAO, Homeland Security: DHS Requires More Disciplined Investment Management to Help Meet Mission Needs, [hyperlink, http://www.gao.gov/products/GAO-12-833] (Washington, D.C.: Sept. 18, 2012). [8] Every 2 years, we call attention to agencies and program areas that are high risk because of their vulnerabilities to fraud, waste, abuse, and mismanagement, or are most in need of transformation. In 2003, we designated implementing and transforming DHS as high risk because DHS had to transform 22 agencies--several with major management challenges--into one department. In February 2013, we narrowed the scope of this high-risk area to focus on strengthening DHS management functions, as we reported that DHS had made considerable progress in transforming its original component agencies into a single cabinet-level department and positioning itself to achieve its full potential. We found, though, that continued progress was needed in order to mitigate the risks that management weaknesses pose to mission accomplishment and the efficient and effective use of the department's resources. See GAO, High-Risk Series: An Update, [hyperlink, http://www.gao.gov/products/GAO-13-283] (Washington, D.C.: February 2013). [9] GAO, GAO Schedule Assessment Guide: Best Practices for Program Schedules, [hyperlink, http://www.gao.gov/products/GAO-12-120G] (exposure draft) (Washington, D.C.: May 2012). We developed this guide through a compilation of best practices that federal cost-estimating organizations and industry use. This guide presents guiding principles for auditors in evaluating the economy, efficiency, and effectiveness of government programs. We used the best practices in this guide to assess the schedules for the Plan and its three highest-cost programs because, as the guide states, a schedule is used to help manage government acquisition programs, and thus this guide is applicable to the Plan and its acquisition programs. [10] To compare the cost estimates, we used leading government and industry practices as discussed in GAO, GAO Cost Estimating and Assessment Guide: Best Practices for Developing and Managing Capital Program Costs, [hyperlink, http://www.gao.gov/products/GAO-09-3SP] (Washington, D.C.: March 2009), and Office of Management and Budget, Capital Programming Guide Supplement to Office of Management and Budget, Circular A-11, Part 7: Preparation, Submission, and Execution of the Budget (Washington, D.C.: Executive Office of the President, June 2006). Specifically, the methodology outlined in the Cost Estimating and Assessment Guide is a compilation of best practices that federal cost-estimating organizations and industry use to develop and maintain reliable cost estimates throughout the life of an acquisition program. We did not analyze a Life-cycle Cost Estimate for the MSC because CBP had not completed it as of December 2013. [11] DHS Acquisition Management Directive 102-01, Jan. 20, 2010, and DHS Instruction Manual 102-01-001, Acquisition Management/Instruction Guidebook, Oct. 1, 2011. [12] As discussed in more detail later in this report, in general, an asset assist occurs when a technological asset, such as an SBInet surveillance tower, or a nontechnological asset, such as a canine team, contributes to apprehensions or seizures. For the purposes of this report, apprehensions data include individuals arrested and identified as deportable aliens, consistent with Border Patrol's definition.
Apprehension and seizure data for fiscal year 2010 through June 2013 were obtained from relevant DHS databases. See appendix I for additional information on our scope and methodology. [13] Data on asset assists attributed to SBInet surveillance towers were available for 8 months of fiscal year 2010. Border Patrol began requiring the collection of longitude and latitude coordinates for all apprehensions and seizures in May 2009, and thus fiscal year 2010 was the first full year for which these data were available. [14] GAO, Internal Control: Standards for Internal Control in the Federal Government, [hyperlink, http://www.gao.gov/products/GAO/AIMD-00-21.3.1] (Washington, D.C.: November 1999). [15] Border Patrol, within CBP, has primary responsibility for securing the national borders between U.S. land ports of entry. The Tucson sector is made up of eight stations located within Arizona: Ajo, Casa Grande, Douglas, Naco, Nogales, Sonoita, Tucson, and Willcox. The Yuma sector is made up of three stations (Blythe, Wellton, and Yuma) located within Arizona, California, and Nevada. [16] DHS components represented on the Acquisition Review Board include the Office of Policy, the Science and Technology Directorate, the Office of General Counsel, and the Procurement Office, among others. [17] Level 1 is for programs with estimated life-cycle costs of $1 billion or more. Level 2 is for programs with estimated life-cycle costs from $300 million to less than $1 billion. Level 3 is for programs with estimated life-cycle costs of less than $300 million. Within DHS components, the Component Acquisition Executives are responsible for establishing acquisition processes and overseeing the execution of their respective portfolios. [18] Subsequent to the issuance of our sensitive report on the Arizona Border Surveillance Technology Plan, CBP awarded a contract for the IFT program on February 26, 2014. [19] The baseline schedule is to represent the original configuration of the program plan and to signify the consensus of all stakeholders regarding the required sequence of events, resource assignments, and acceptable dates for key deliverables. The current schedule is to represent the actual plan to date. The current schedule is to be compared with the baseline schedule to track variances from the program plan. [20] The APSS program, which was considered a demonstration project under the Plan, was completed on time, but it experienced a delay relative to its baseline schedule because the start date was delayed. Units to be procured under the MVSS program have been redirected to Texas, and OTIA plans to award a contract for the program in July 2014. The TID program did not experience any schedule delays and was completed by CBP's originally planned target date. [21] A senior OTIA official stated that in prior years, OTIA had about half of the workforce needed to manage the Plan's technology programs and that there was a shortage in contracting staff. The official stated that OTIA staff has gained experience and skills more recently. [22] [hyperlink, http://www.gao.gov/products/GAO-12-120G]. [23] The program's work breakdown structure defines, in detail, the work necessary to accomplish a program's objectives, including activities the owner and contractors are to perform.
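To make the idea of a work breakdown structure concrete, the following minimal sketch (in Python) shows how a program's objectives can be decomposed into discrete activities, each with an owner and a duration, and how planned work rolls up to the parent element. The element names, owners, and durations are hypothetical illustrations, not drawn from CBP's schedules.

    # Hypothetical work breakdown structure (WBS): each top-level element
    # lists the activities needed to accomplish it, with an owner and a
    # duration in days. The parent total is rolled up from its activities.
    wbs = {
        "1 Tower deployment": {
            "1.1 Site preparation": {"owner": "contractor", "days": 30},
            "1.2 Tower construction": {"owner": "contractor", "days": 45},
            "1.3 Acceptance testing": {"owner": "government", "days": 10},
        },
    }

    for element, activities in wbs.items():
        total = sum(a["days"] for a in activities.values())
        print(f"{element}: {total} days of planned work")
        for name, a in activities.items():
            print(f"  {name} ({a['owner']}): {a['days']} days")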
[24] Specifically, best practices state that if a full and complete schedule risk analysis is not conducted, program managers may not be able to determine the likelihood of meeting the program's completion date, how much schedule risk contingency is needed to provide an acceptable level of certainty for completion by a specific date, risks most likely to delay the program, how much contingency reserve each risk requires, and paths or activities most likely to delay the program (see the illustrative sketch following note 36). [25] According to our assessment guide, while rebaselining can be beneficial for quickly identifying new variances, reporting a program's performance based on a rebaselined cost or schedule can also skew or conceal the program's real cost and schedule performance or overall timeline. [26] An Acquisition Program Baseline document explains the overall approach to the project and establishes a program's baseline cost, schedule, and performance parameters. [27] [hyperlink, http://www.gao.gov/products/GAO-12-120G] and [hyperlink, http://www.gao.gov/products/GAO-09-3SP]. We recognize that different organizations may use the term "Integrated Master Schedule" differently; for example, an Integrated Master Schedule is often used to refer solely to the prime contractor schedule. We use "integrated" to refer to the schedule's incorporation of all activities--contractor and government--necessary to complete a program. [28] [hyperlink, http://www.gao.gov/products/GAO-12-120G] and [hyperlink, http://www.gao.gov/products/GAO-09-3SP]. [29] Department of Homeland Security, Multi-Year Investment and Management Plan for Border Security Fencing, Infrastructure, and Technology (BSFIT) for Fiscal Years 2014-2017 (Washington, D.C.: Apr. 17, 2013). [30] CBP officials stated that they are developing a Life-cycle Cost Estimate for the MSC's operations and maintenance costs, which is expected to be completed in early 2014. [31] [hyperlink, http://www.gao.gov/products/GAO-09-3SP], and Office of Management and Budget, Capital Programming Guide V 2.0 Supplement to Office of Management and Budget, Circular A-11, Part 7: Preparation, Submission, and Execution of the Budget (Washington, D.C.: June 2006). [32] [hyperlink, http://www.gao.gov/products/GAO-09-3SP]. [33] An independent cost estimate provides an independent view of expected program costs that tests the program office's estimate for reasonableness. Independent cost estimates frequently use different methods and are less burdened with organizational bias, helping to provide decision makers with insight into a program's potential costs. [34] The August 2010 and June 2013 cost estimates are in then-year dollars, which reflect the cost at the time of the procurement. [35] Because OTIA did not have cost estimate data readily available for some of the Plan's programs for the August 2010 cost estimate, we could not describe all of the reasons for differences between the August 2010 and June 2013 cost estimates. [36] [hyperlink, http://www.gao.gov/products/GAO-12-22]. To compare the cost estimates, we used leading government and industry practices as discussed in [hyperlink, http://www.gao.gov/products/GAO-09-3SP] and OMB Circular A-11, Part 7.
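As an illustration of the schedule risk analysis described in note 24, the sketch below (in Python) simulates the total duration of a short chain of sequential activities from three-point duration estimates, then reads off the completion durations at 50 and 80 percent confidence; the difference indicates the schedule contingency needed for the higher confidence level. The activities and estimates are invented for illustration and do not represent any CBP program.

    # Minimal Monte Carlo schedule risk analysis over three sequential
    # activities, each with (optimistic, most likely, pessimistic) days.
    import random

    random.seed(1)
    TRIALS = 10_000
    activities = [(30, 45, 90), (20, 30, 60), (60, 75, 120)]

    totals = sorted(
        sum(random.triangular(lo, hi, ml) for lo, ml, hi in activities)
        for _ in range(TRIALS))
    p50, p80 = totals[TRIALS // 2], totals[int(TRIALS * 0.8)]
    print(f"50% confidence: {p50:.0f} days; 80% confidence: {p80:.0f} days")
    print(f"Contingency for 80% confidence: {p80 - p50:.0f} days")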
[37] Specifically, we recommended that CBP (1) fully document data used in the cost model; (2) conduct a sensitivity analysis and risk and uncertainty analysis to determine a level of confidence in the estimate, so that contingency funding can be established relative to quantified risk (see the illustrative sketch following note 49); and (3) independently verify the new Life-cycle Cost Estimate with an independent cost estimate and reconcile any differences. [38] According to OTIA officials, they submitted the IFT Life-cycle Cost Estimate to DHS's Office of Program Accountability and Risk Management in January 2013 and the estimate was assessed in preparation for the Acquisition Decision Event decision. However, an assessment is not equivalent to verifying the IFT estimate with an independent cost estimate. [39] The RVSS program is expected to reach Acquisition Decision Event 3 in May 2014; the decision will be based on the completion of a system acceptance test at the Nogales area of responsibility. For the MSC program, according to OTIA officials, the CBP Component Acquisition Executive chaired a series of decision briefs on the program and, after determining that the spirit and intent of DHS Acquisition Management Directive 102-01 were being met, approved the MSC program for Acquisition Decision Event 2B. The MSC program was approved for Acquisition Decision Event 3 in September 2012. [40] DHS deemed specific details about the requirements that OTIA traded off as sensitive; therefore, we did not include them in this report. [41] [hyperlink, http://www.gao.gov/products/GAO-12-833]. [42] The Director of Operational Test and Evaluation administers the DHS test and evaluation policy and process for DHS acquisitions, supports the Acquisition Review Board by providing independent test and evaluation progress and status on acquisitions reviewed by the board, approves the Test and Evaluation Master Plan, and provides the status of any operational testing. [43] DHS Test and Evaluation Directive 026-06, May 22, 2009. [44] GAO, Secure Border Initiative: DHS Needs to Address Testing and Performance Limitations That Place Key Technology Program at Risk, [hyperlink, http://www.gao.gov/products/GAO-10-158] (Washington, D.C.: Jan. 29, 2010). [45] [hyperlink, http://www.gao.gov/products/GAO-12-22]. ATEC was the operational test agency for the SBInet Block 1 deployment at Tucson. [46] [hyperlink, http://www.gao.gov/products/GAO-12-22]. [47] As the operational test agency for the SBInet Block 1 deployment at Tucson, ATEC was to provide an independent evaluation of the system's operational effectiveness and suitability. ATEC issued its final evaluation report in March 2011. U.S. Army Test and Evaluation Command, Operational Test Agency Evaluation Report for the Secure Border Initiative Network (SBInet) Block 1.0 (Aberdeen Proving Ground, Maryland: Mar. 29, 2011). [48] DHS guidance requires program managers to conduct a post-implementation review to evaluate the impact of an investment's deployment on customers, the mission and program, and technical or mission capabilities. Chief Information Officer, Department of Homeland Security, Capital Planning and Investment Control (CPIC) Guide, Version 4.0 (Washington, D.C.: May 2007). [49] CBP and the Johns Hopkins University Applied Physics Laboratory, Secure Border Initiative Network (SBInet) Block 1 Post Implementation Review, Version 1.1, AOD-12-0916 (Laurel, Maryland: Jan. 29, 2013).
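The sensitivity analysis recommended in note 37 can be sketched simply: vary one cost element at a time and observe the effect on the total estimate, which identifies the cost drivers that deserve the most scrutiny. In the Python sketch below, the cost elements, dollar values, and the plus-or-minus 20 percent swing are hypothetical assumptions, not CBP figures.

    # One-factor-at-a-time sensitivity check on a hypothetical estimate.
    base = {"towers": 400.0, "integration": 150.0, "operations": 250.0}  # $M

    total = sum(base.values())
    for element, cost in base.items():
        swing = 0.20 * cost  # vary this element by +/-20 percent
        print(f"{element}: total ranges from {total - swing:.0f} to "
              f"{total + swing:.0f} ($M) when it varies +/-20%")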
[50] A viewshed refers to the region viewable from a fixed location, as constrained by line-of-sight blockage, such as terrain obstructions. [51] The recommendations outlined in the post-implementation review (PIR) include (1) conducting a more detailed assessment of the impact of the Block 1 system's deployment on threat tactics and illegal incursion traffic, (2) systematically recording system downtime in operations logs, (3) continuing OTIA and Border Patrol observation of Tucson performance and possibly implementing another trial common operating picture team at Ajo, (4) developing a more formalized on-the-job agent training program, and (5) developing an electronic Operator Watch Log that would allow agents to provide a record for system performance assessment. [52] For the purposes of our analysis, we define an "apprehension or seizure event" as the occasion on which Border Patrol agents make an apprehension of an illegal entrant or a seizure of drugs or other contraband. The event is recorded in the EID and a date and unique identifying number are assigned. An event can involve the apprehension of one or multiple illegal entrants or types of items, and each individual illegal entrant apprehended or type of item seized in the event is associated with the assigned unique identifying number. Our analysis of apprehension events includes instances in which an event has at least one deportable individual. [53] For the purposes of this report, surveillance technologies are technological assets identified within the Border Patrol's "asset assists" data field for which Border Patrol continues to make significant investments and that are included as part of the Plan. Thus, surveillance assets that may be selected from the asset assist field's drop-down menu include Cameras, Mobile Surveillance Systems, Scope Trucks, and Unattended Ground Sensors. According to Border Patrol headquarters officials, agents identifying "Cameras" are most likely attributing the asset assist to either SBInet towers or RVSSs. [54] For the purposes of this report, the ranges of the SBInet and RVSS towers are "buffer ranges" that, according to Border Patrol headquarters officials, do not account for obstructions due to terrain, land features, and vegetation. [55] CBP's SBInet tower system is located in the Tucson sector, and accordingly, there are no apprehension events within the viewsheds of SBInet surveillance towers in the Yuma sector. [56] GAO, Border Patrol: Key Elements of New Strategic Plan Not Yet in Place to Inform Border Security Status and Resource Needs, [hyperlink, http://www.gao.gov/products/GAO-13-25] (Washington, D.C.: Dec. 10, 2012). [57] [hyperlink, http://www.gao.gov/products/GAO/AIMD-00-21.3.1]. [58] [hyperlink, http://www.gao.gov/products/GAO-12-22]. [59] Department of Homeland Security, U.S. Customs and Border Protection, Multi-Year Investment and Management Plan for Border Security Fencing, Infrastructure, and Technology for Fiscal Years 2014-2017 (Washington, D.C.: Apr. 17, 2013). [60] Clinger-Cohen Act of 1996, 40 U.S.C. §§ 11101-11703; Office of Management and Budget, Circular No. A-130 Revised, Management of Federal Information Resources (Washington, D.C.: Nov. 28, 2000).
According to Office of Management and Budget Circular A-11, Part 6, Section 200, performance measurement should include program accomplishments in terms of outputs (quantity of products or services provided) and outcomes (results of providing outputs in terms of effectively meeting intended agency mission objectives), as well as indicators, statistics, or metrics used to gauge program performance. [61] GAO, 2013 Annual Report: Actions Needed to Reduce Fragmentation, Overlap, and Duplication and Achieve Other Financial Benefits, [hyperlink, http://www.gao.gov/products/GAO-13-279SP] (Washington, D.C.: Apr. 9, 2013). [62] [hyperlink, http://www.gao.gov/products/GAO-13-25]. [63] GAO, GAO Schedule Assessment Guide: Best Practices for Program Schedules, [hyperlink, http://www.gao.gov/products/GAO-12-120G] (exposure draft) (Washington, D.C.: May 2012). We developed this guidance through a compilation of best practices that federal cost-estimating organizations and industry use. This guide presents guiding principles for auditors in evaluating the economy, efficiency, and effectiveness of government programs. We used the best practices in this guide to assess the schedules for the Plan and its three highest-cost programs because, as the guide states, a schedule is necessary for government acquisition programs and thus this guide is applicable to the Plan and its acquisition programs. [64] We determined the overall assessment rating by assigning each individual rating a number: Not met = 1, minimally met = 2, partially met = 3, substantially met = 4, and met = 5. Then, we took the average of the individual assessment ratings to determine the overall rating for each of the four characteristics. The resulting average becomes the overall assessment as follows: Not met = 1.0 to 1.4, minimally met = 1.5 to 2.4, partially met = 2.5 to 3.4, substantially met = 3.5 to 4.4, and met = 4.5 to 5.0 (see the illustrative sketch following note 66). This rating scale was developed by GAO staff in consultation with some of the cost-estimating experts who helped develop the Cost Estimating and Assessment Guide. See GAO Cost Estimating and Assessment Guide: Best Practices for Developing and Managing Capital Program Costs, [hyperlink, http://www.gao.gov/products/GAO-09-3SP] (Washington, D.C.: March 2009). [65] To compare the cost estimates, we used leading government and industry practices as discussed in [hyperlink, http://www.gao.gov/products/GAO-09-3SP] and Office of Management and Budget, Capital Programming Guide Supplement to Office of Management and Budget, Circular A-11, Part 7: Preparation, Submission, and Execution of the Budget (Washington, D.C.: Executive Office of the President, June 2006). Specifically, the methodology outlined in [hyperlink, http://www.gao.gov/products/GAO-09-3SP] is a compilation of best practices that federal cost-estimating organizations and industry use to develop and maintain reliable cost estimates throughout the life of an acquisition program. We did not obtain and analyze a Life-cycle Cost Estimate for the MSC because CBP had not completed it during our audit engagement. A Life-cycle Cost Estimate provides an exhaustive and structured accounting of all resources and associated cost elements required to develop, produce, deploy, and sustain a particular program. [66] GAO, Arizona Border Surveillance Technology: More Information on Plans and Costs Is Needed before Proceeding, [hyperlink, http://www.gao.gov/products/GAO-12-22] (Washington, D.C.: Nov. 4, 2011).
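A minimal sketch (in Python) of the averaging scheme described in note 64; the individual ratings fed to it here are hypothetical examples.

    # Map individual best-practice ratings to numbers, average them, and
    # band the average per the scale in note 64.
    RATINGS = {"not met": 1, "minimally met": 2, "partially met": 3,
               "substantially met": 4, "met": 5}
    BANDS = [(1.4, "not met"), (2.4, "minimally met"), (3.4, "partially met"),
             (4.4, "substantially met"), (5.0, "met")]

    def overall(individual_ratings):
        avg = sum(RATINGS[r] for r in individual_ratings) / len(individual_ratings)
        return next(label for cutoff, label in BANDS if avg <= cutoff)

    print(overall(["partially met", "substantially met", "met"]))

For example, individual ratings of partially met, substantially met, and met average to 4.0, which falls in the substantially met band (3.5 to 4.4).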
[67] DHS Acquisition Management Directive 102-01, Jan. 20, 2010, and DHS Instruction Manual 102-01-001, Acquisition Management/Instruction Guidebook, Oct. 1, 2011. [68] CBP and the Johns Hopkins University Applied Physics Laboratory, Secure Border Initiative Network (SBInet) Block 1 Post Implementation Review, Version 1.1, AOD-12-0916 (Laurel, Maryland: Jan. 29, 2013). [69] [hyperlink, http://www.gao.gov/products/GAO-12-22]; Chief Information Officer, Department of Homeland Security, Capital Planning and Investment Control (CPIC) Guide, Version 4.0 (Washington, D.C.: May 2007); and Office of Management and Budget, Capital Programming Guide V 2.0 Supplement to Office of Management and Budget, Circular A-11, Part 7: Planning, Budgeting, and Acquisition of Capital Assets (Washington, D.C.: June 2006). [70] U.S. Customs and Border Protection, Secure Border Initiative Network (SBInet), Block 1.0 After Action Report (AAR): From SBInet Acquisition to Operational Test and Evaluation, Version 1.1 (Washington, D.C.: July 2013). [71] The Army Test and Evaluation Command (ATEC) was the operational test agency for the SBInet Block 1 deployment--the initial deployment of SBInet capabilities--at Tucson. In this capacity, ATEC provided an independent evaluation of the system's operational effectiveness and suitability. ATEC issued its final evaluation report in March 2011: U.S. Army Test and Evaluation Command, Operational Test Agency Evaluation Report for the Secure Border Initiative Network (SBInet) Block 1.0 (Aberdeen Proving Ground, Maryland: Mar. 29, 2011). [72] Although Border Patrol agents arrest both deportable aliens and nondeportable individuals whom they encounter during patrol activities, for the purposes of this report we define "apprehensions" to include only deportable aliens, to be consistent with Border Patrol's definition. According to the Immigration and Nationality Act, deportable aliens include those who are inadmissible to the United States or present in violation of U.S. law, who have failed to maintain their status or violated the terms of their admission, or who have committed certain criminal offenses or engaged in terrorist activities, among others. (See 8 U.S.C. § 1227 for a complete list of the classes of deportable aliens.) In some cases, Border Patrol apprehends a deportable alien but turns the individual over to another agency prior to initiating a removal. Aliens with lawful immigration status and U.S. citizens would be considered nondeportable. [73] Although the boundaries of the Yuma sector extend beyond Arizona, because of the small percentage of apprehensions and seizures occurring outside of Arizona, we analyzed data for the entire sector. [74] Apprehension and seizure data for fiscal year 2010 through June 2013 were obtained from DHS's EID. The totals from our analyses of apprehension and seizure data do not match Border Patrol's year-end totals for this time frame for several reasons. First, Border Patrol reports apprehension and other data on an "end-of-year" basis, and these reports do not reflect adjustments or corrections made after that reporting date, whereas the data Border Patrol provided to us reflected these adjustments or corrections.
Additionally, to determine apprehension and seizure locations, we used the "arrest site" or "seizure site" (i.e., the location at which an individual was arrested or a seizure was made, as determined by Geographic Information System coordinates) rather than the "processing site" (i.e., the location at which the arrest or seizure was processed), which is Border Patrol's standard reporting method. Further, we removed from our analyses apprehensions and seizures for which there were no arrest or seizure site Geographic Information System coordinates. Moreover, the EID contains a "program area" data field in which agents are able to assign specific program area codes to apprehensions or seizures. These program area codes include, but are not limited to, Border Patrol, Joint Terrorism Task Force, and Law Enforcement Area Response Units. Our analysis of apprehensions and seizures includes those for which the Border Patrol "program area" was assigned, and thus excludes those attributed to other program areas. [75] Border Patrol began using SBInet at Tucson in February 2010 and at Ajo in August 2010. Border Patrol began requiring the collection of longitude and latitude coordinates for all apprehensions and seizures in May 2009; therefore, fiscal year 2010 was the first full year for which these data were available. [76] GAO, Border Patrol: Key Elements of New Strategic Plan Not Yet in Place to Inform Border Security Status and Resource Needs, [hyperlink, http://www.gao.gov/products/GAO-13-25] (Washington, D.C.: Dec. 10, 2012). [77] GAO, Internal Control: Standards for Internal Control in the Federal Government, [hyperlink, http://www.gao.gov/products/GAO/AIMD-00-21.3.1] (Washington, D.C.: November 1999). [78] Department of Homeland Security, U.S. Customs and Border Protection, Multi-Year Investment and Management Plan for Border Security Fencing, Infrastructure, and Technology for Fiscal Years 2014-2017 (Washington, D.C.: Apr. 17, 2013). Also, see [hyperlink, http://www.gao.gov/products/GAO-12-22]. [79] [hyperlink, http://www.gao.gov/products/GAO-09-3SP] and [hyperlink, http://www.gao.gov/products/GAO-12-120G]. [80] A schedule is horizontally traceable if it reflects the order of events necessary to achieve aggregated products or outcomes and is vertically traceable if activities in varying levels of the schedule map to one another and key dates presented to management in periodic briefings are in sync with the schedule. A valid critical path represents the chain of dependent activities with the longest total duration. Total float refers to the amount of time by which an activity can slip before the delay affects a project's estimated finish date (see the illustrative sketch following note 81). [81] We determined the overall assessment rating by assigning each individual rating a number: Not met = 1, minimally met = 2, partially met = 3, substantially met = 4, and met = 5. Then, we took the average of the individual assessment ratings to determine the overall rating for each of the four characteristics. The resulting average becomes the overall assessment as follows: Not met = 1.0 to 1.4, minimally met = 1.5 to 2.4, partially met = 2.5 to 3.4, substantially met = 3.5 to 4.4, and met = 4.5 to 5.0. We developed this rating scale in consultation with cost-estimating experts who helped develop the Cost Estimating and Assessment Guide.
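To illustrate the critical path and total float definitions in note 80, the following Python sketch performs the standard forward and backward passes over a small activity network; the activities, durations, and dependencies are hypothetical.

    # Toy activity network: durations in days and predecessor lists.
    durations = {"A": 5, "B": 3, "C": 4, "D": 2}
    preds = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}

    # Forward pass: earliest start (es) and earliest finish (ef).
    es, ef = {}, {}
    for a in ["A", "B", "C", "D"]:  # topological order
        es[a] = max((ef[p] for p in preds[a]), default=0)
        ef[a] = es[a] + durations[a]

    # Backward pass: latest finish (lf) and latest start (ls).
    project_end = max(ef.values())
    ls, lf = {}, {}
    for a in ["D", "C", "B", "A"]:  # reverse topological order
        lf[a] = min((ls[s] for s in preds if a in preds[s]), default=project_end)
        ls[a] = lf[a] - durations[a]

    # Total float: slack before an activity delays the finish date.
    for a in durations:
        total_float = ls[a] - es[a]
        flag = " (on the critical path)" if total_float == 0 else ""
        print(f"{a}: total float {total_float} day(s){flag}")

Here activities A, C, and D have zero total float and form the critical path (5 + 4 + 2 = 11 days), while activity B can slip 1 day without delaying the estimated finish date.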
[82] The EID is a DHS-shared common database repository for several DHS law enforcement and homeland security applications that contains data on apprehensions and seizures. For the purposes of this analysis, we identify asset assists involving technology as those Border Patrol technological assets identified within the Border Patrol's "asset assists" data field for which Border Patrol continues to make significant investments and that are included as part of the Plan. Thus, technology assets that may be selected from the asset assist field's drop-down menu include Cameras, Mobile Surveillance Systems, Scope Trucks, and Unattended Ground Sensors. According to Border Patrol headquarters officials, agents identifying "Cameras" are most likely attributing the asset assist to either SBInet or RVSS towers. [83] As mentioned in appendix I, this time frame was selected because fiscal year 2010 represents the first full fiscal year for which data on asset assists were available. This time frame was also selected because fiscal year 2010 represents the first year for which SBInet surveillance technologies were operationally deployed. [84] For the purposes of this report, "unreported asset assists" could include instances for which an asset was not a contributing factor in apprehensions and seizures. [85] According to CBP officials, while SBInet surveillance technologies were deployed to the Tucson sector during our specified time frame, these technologies were not deployed, and thus are not present, within the Yuma sector. Accordingly, reported technological asset assists for the Yuma sector include assists from technological assets such as RVSS, Scope Trucks, and Unattended Ground Sensors. [86] In table 8, apprehension events and apprehensions that did not involve deportable subjects have been eliminated. [87] We define "apprehension events and seizure events" as the occasions on which Border Patrol agents make an apprehension of an illegal entrant or a seizure of drugs or other contraband. The event is recorded in the EID and a date and unique identifying number are assigned. These events can involve the apprehension of one or multiple illegal entrants or one or more types of seized items, and each individual illegal entrant apprehended or type of item seized in the event will be associated with the assigned unique identifying number. [88] Department of Homeland Security, U.S. Customs and Border Protection, Multi-Year Investment and Management Plan for Border Security Fencing, Infrastructure, and Technology for Fiscal Years 2014-2017 (Washington, D.C.: Apr. 17, 2013). [End of section] GAO's Mission: The Government Accountability Office, the audit, evaluation, and investigative arm of Congress, exists to support Congress in meeting its constitutional responsibilities and to help improve the performance and accountability of the federal government for the American people. GAO examines the use of public funds; evaluates federal programs and policies; and provides analyses, recommendations, and other assistance to help Congress make informed oversight, policy, and funding decisions. GAO's commitment to good government is reflected in its core values of accountability, integrity, and reliability. Obtaining Copies of GAO Reports and Testimony: The fastest and easiest way to obtain copies of GAO documents at no cost is through GAO's website [hyperlink, http://www.gao.gov]. Each weekday afternoon, GAO posts on its website newly released reports, testimony, and correspondence. To have GAO e-mail you a list of newly posted products, go to [hyperlink, http://www.gao.gov] and select "E-mail Updates."
Order by Phone: The price of each GAO publication reflects GAO's actual cost of production and distribution and depends on the number of pages in the publication and whether the publication is printed in color or black and white. Pricing and ordering information is posted on GAO's website, [hyperlink, http://www.gao.gov/ordering.htm]. Place orders by calling (202) 512-6000, toll free (866) 801-7077, or TDD (202) 512-2537. Orders may be paid for using American Express, Discover Card, MasterCard, Visa, check, or money order. Call for additional information. Connect with GAO: Connect with GAO on Facebook, Flickr, Twitter, and YouTube. Subscribe to our RSS Feeds or E-mail Updates. Listen to our Podcasts. Visit GAO on the web at [hyperlink, http://www.gao.gov]. To Report Fraud, Waste, and Abuse in Federal Programs: Contact: Website: [hyperlink, http://www.gao.gov/fraudnet/fraudnet.htm]; E-mail: fraudnet@gao.gov; Automated answering system: (800) 424-5454 or (202) 512-7470. Congressional Relations: Katherine Siggerud, Managing Director, siggerudk@gao.gov: (202) 512-4400: U.S. Government Accountability Office: 441 G Street NW, Room 7125: Washington, DC 20548. Public Affairs: Chuck Young, Managing Director, youngc1@gao.gov: (202) 512-4800: U.S. Government Accountability Office: 441 G Street NW, Room 7149: Washington, DC 20548. [End of document]