
AECOM Management Services, Inc.--Advisory Opinion

B-417506.12 Sep 18, 2019

DOCUMENT FOR PUBLIC RELEASE
The decision issued on the date below was subject to a GAO Protective Order. This redacted version has been approved for public release.

Decision

Matter of:  AECOM Management Services, Inc.--Advisory Opinion

File:  B-417506.12

Date:  September 18, 2019

Jeffery M. Chiow, Esq., Neil H. O’Donnell, Esq., Lucas T. Hanback, Esq., Emily A. Wieser, Esq., and Cassidy Kim, Esq., Rogers Joseph O’Donnell, PC, for the protester.
Lee P. Curtis, Esq., Seth H. Locke, Esq., David E. Fletcher, Esq., Eric A. Aaserud, Esq., Alexander O. Canizares, Esq., Julia M. Fox, Esq., and Brenna D. Duncan, Esq., Perkins Coie, LLP, for Kellogg, Brown & Root Services, Inc.; Kevin P. Mullen, Esq., J. Alex Ward, Esq., James A. Tucker, Esq., Sandeep N. Nandivada, Esq., R. Locke Bell, Esq., Lauren J. Horneffer, Esq., and Caitlin A. Crujido, Esq., Morrison & Foerster LLP, for Vectrus Systems Corporation; Andrew Shipley, Esq., Stephen W. Preston, Esq., Philip E. Beshara, Esq., Souvik Saha, Esq., Matthew F. Ferraro, Esq., Elizabeth J. D’Aunno, Esq., and Chanda L. Brown, Esq., Wilmer Cutler Pickering Hale and Dorr, LLP, for Fluor Intercontinental, Inc.; and Anuj Vohra, Esq., Christian N. Curran, Esq., Olivia L. Lynch, Esq., Zachary H. Schroeder, Esq., and Lauren H. Williams, Esq., Crowell & Moring LLP, for PAE-Parsons Global Logistics Services, LLC, intervenors.
Scott A. Johnson, Esq., Alex M. Cahill, Esq., and Matthew R. Wilson, Esq., Department of the Army, for the agency.
Scott H. Riback, Esq., Evan D. Wesser, Esq., Edward Goldstein, Esq., and Tania Calhoun, Esq., Office of the General Counsel, GAO, participated in the preparation of this advisory opinion.

DIGEST

Protest challenging agency’s evaluation of proposals and source selection decisions provides no basis for Government Accountability Office to object to the agency’s actions where record shows agency’s evaluation and source selections were reasonable and consistent with the terms of the solicitation and applicable statutes and regulations.

DECISION

AECOM Management Services, Inc.,[1] of Germantown, Maryland, protests the award of contracts (and the issuance of task orders) in connection with the logistics civil augmentation program (LOGCAP) to Kellogg, Brown & Root Services, Inc. (KBR), of Houston, Texas; Vectrus Systems Corporation, of Colorado Springs, Colorado; Fluor Intercontinental, Inc., of Greenville, South Carolina; and PAE-Parsons Global Logistics Services (P2GLS), of Arlington, Virginia, under request for proposals (RFP) No. W52P1J16R0001, issued by the Department of the Army for support services for U.S. military installations worldwide.  AECOM argues that the agency misevaluated proposals and made unreasonable source selection decisions.

Based on our review, we would have no basis to object to the agency’s actions for the reasons discussed below.

BACKGROUND

AECOM, along with several other concerns, filed a protest in our Office relating to the agency’s actions in connection with this acquisition.  We denied the protest of one of the other protesters in an earlier decision.  DynCorp International, LLC, B-417506, B-417506.10, July 31, 2019, 2019 CPD ¶ __.  DynCorp then filed a protest with the United States Court of Federal Claims, and in the wake of that protest, we dismissed AECOM’s original protest.  At the request of the Court, see Motion Requesting GAO’s Advisory Opinion, Aug. 21, 2019, we are issuing this advisory opinion, which reflects our views concerning the protest AECOM originally filed with our Office.

LOGCAP fulfills the Department of the Army’s requirements to provide global logistical support capabilities to Geographical Combatant Commands (GCCs) and Army Service Component Commands (ASCCs) so that military units can carry out critical missions without having to devote attention to base operations activities.  LOGCAP establishes contracted solutions and capabilities, incorporating an extensive portfolio of services.  This includes services such as:  “setting the theater”; supply operations; transportation services; engineering services; base camp services; and other logistics and sustainment support services.  These services are further broken out into more than 200 work breakdown structure (WBS) references in the Performance Work Statement (PWS), including:  minor construction; food services; laundry; morale, welfare and recreation services; billeting; and facility management.  See Agency Report (AR), exh. 120-1, Source Selection Plan, at 5.[2]

The RFP sought proposals for the award of multiple indefinite-delivery, indefinite-quantity (IDIQ) contracts for the Army’s fifth generation of LOGCAP, LOGCAP V.  The RFP contemplates the award of between four and six IDIQ contracts, with each contract having an initial 5-year ordering period and five 1-year optional ordering periods.  RFP at 2.[3]  Task orders under the IDIQ contracts can be awarded using fixed-price, cost-reimbursable, or labor-hour type contract line item numbers (CLINs).  Id. at 3.  The cumulative maximum anticipated dollar amount for all IDIQ contracts is $82 billion.  Id.

In addition to the award of the IDIQ contracts, the RFP also contemplates the award of the first seven task orders in support of U.S. military operations as follows:  Northern Command (NORTHCOM); Southern Command (SOUTHCOM); European Command (EUCOM); African Command (AFRICOM); U.S. Central Command (CENTCOM); Pacific Command (PACOM); and Afghanistan.  RFP at 115-116.[4]  Each task order award, with the exception of Afghanistan, consists of two primary components.  The first component of each of the awarded task orders, “setting the theater,” is to be performed on a fixed-price basis.[5]  The amount of the task order for the “setting the theater” component during the base year of performance represents the minimum guaranteed amount for each of the IDIQ contracts.  RFP at 3.  The second, larger, component of the awarded task orders is for requirements to be performed on a cost-reimbursement basis.  The RFP contemplates including in the initial task orders performance requirements for a 1-year base period and four 1-year option periods.  Id.

Offerors were required to submit a single proposal encompassing all six GCCs/ASCCs and Afghanistan.  RFP at 101.  Award of the IDIQ contracts and the corresponding seven initial task orders was to be made on a best-value tradeoff basis, considering the following four factors, which are listed in descending order of importance:  (1) technical/management; (2) past performance; (3) small business participation; and (4) cost/price.[6]  Id. at 114-115.  The technical/management factor was further divided into two subfactors:  (1) regional capabilities in support of setting and surging the theater and initial service support for Army deployment; and (2) management approach, key initiatives, and labor staffing model.  Id. at 117.  The non-price factors, when combined, were significantly more important than price.  Id. at 115.

In response to the RFP, the agency received six proposals, all of which were included in the competitive range.  The agency engaged in extensive discussions with the offerors, soliciting several interim rounds of proposals, and ultimately soliciting and receiving final proposal revisions (FPRs) from each offeror.  The agency evaluated the FPRs and awarded four IDIQ contracts--to Fluor, KBR, Vectrus and P2GLS--and issued the seven initial task orders to one or another of the awardees.  The agency’s evaluation results and award decisions in each of the GCCs and Afghanistan were as follows (the recipient of each task order is shaded in the tables below):

EUCOM

Offeror | Technical/Management | Past Performance | Small Business | Total Evaluated Price
DynCorp | Good | Substantial | Outstanding | $417,999,413
KBR | Outstanding | Substantial | Outstanding | $183,304,831
Fluor | Outstanding | Satisfactory | Good | $180,491,766
P2GLS | Good | Substantial | Outstanding | $162,390,361
URS | Good | Satisfactory | Acceptable | $287,441,533
Vectrus | Outstanding | Substantial | Good | $147,453,303

PACOM

Offeror | Technical/Management | Past Performance | Small Business | Total Evaluated Price
DynCorp | Good | Substantial | Outstanding | $597,240,524
KBR | Outstanding | Substantial | Outstanding | $383,055,076
Fluor | Outstanding | Satisfactory | Good | $317,034,989
P2GLS | Good | Substantial | Outstanding | $304,425,024
URS | Good | Satisfactory | Acceptable | $537,372,565
Vectrus | Outstanding | Substantial | Good | $349,187,574

CENTCOM

Offeror | Technical/Management | Past Performance | Small Business | Total Evaluated Price
DynCorp | Good | Substantial | Outstanding | $2,053,603,781
KBR | Outstanding | Substantial | Outstanding | $1,866,642,855
Fluor | Outstanding | Satisfactory | Good | $1,385,197,224
P2GLS | Good | Substantial | Outstanding | $1,463,639,678
URS | Good | Satisfactory | Acceptable | $1,530,466,786
Vectrus | Outstanding | Substantial | Good | $1,033,582,366

NORTHCOM

Offeror | Technical/Management | Past Performance | Small Business | Total Evaluated Price
DynCorp | Good | Substantial | Outstanding | $575,683,741
KBR | Outstanding | Substantial | Outstanding | $393,988,697
Fluor | Outstanding | Satisfactory | Good | $426,033,361
P2GLS | Good | Substantial | Outstanding | $472,838,710
URS | Good | Satisfactory | Acceptable | $374,137,985
Vectrus | Outstanding | Substantial | Good | $423,823,325

AFRICOM

Offeror | Technical/Management | Past Performance | Small Business | Total Evaluated Price
DynCorp | Good | Substantial | Outstanding | $179,957,087
KBR | Outstanding | Substantial | Outstanding | $154,273,093
Fluor | Outstanding | Satisfactory | Good | $137,222,537
P2GLS | Good | Substantial | Outstanding | $126,507,558
URS | Good | Satisfactory | Acceptable | $242,525,796
Vectrus | Outstanding | Substantial | Good | $117,736,326

SOUTHCOM

Offeror | Technical/Management | Past Performance | Small Business | Total Evaluated Price
DynCorp | Good | Substantial | Outstanding | $60,883,169
KBR | Outstanding | Substantial | Outstanding | $56,925,859
Fluor | Outstanding | Satisfactory | Good | $53,422,722
P2GLS | Good | Substantial | Outstanding | $34,596,500
URS | Good | Satisfactory | Acceptable | $87,733,751
Vectrus | Outstanding | Substantial | Good | $32,703,734

AFGHANISTAN

Offeror | Technical/Management | Past Performance | Small Business | Total Evaluated Price
DynCorp | Acceptable | Substantial | Outstanding | $1,424,025,013
KBR | Good | Substantial | Outstanding | $1,372,043,984
Fluor | Good | Satisfactory | Good | $1,235,346,545
P2GLS | Acceptable | Substantial | Outstanding | $1,276,889,223
URS | Acceptable | Satisfactory | Acceptable | $972,798,555
Vectrus | Good | Substantial | Good | $1,338,863,477

AR, exh. 123, Source Selection Decision Document (SSDD), at 7, 12, 15, 18, 19, 22, 23.  After being advised of the agency’s source selection decisions and requesting and receiving a debriefing, AECOM filed the instant protest. 

DISCUSSION

AECOM takes issue with virtually every aspect of the agency’s evaluation of proposals and, derivatively, maintains that the agency’s source selection decisions were unreasonable based on the errors AECOM alleges were made in the evaluation.  According to the protester, an examination of every element of the agency’s evaluation reveals what the protester describes as a “smorgasbord of bloopers.”  We have considered all of the protester’s allegations and find no basis to object to the agency’s actions. 

We note at the outset that, in reviewing protests that challenge an agency’s evaluation of proposals, our Office does not independently evaluate proposals; rather, we review the agency’s evaluation to ensure that it is reasonable and consistent with the terms of the solicitation and applicable statutes and regulations.  L-3 Communications, L-3 Link Simulations and Training, B-410644.2, Jan. 20, 2016, 2016 CPD ¶ 44 at 3-4.  We discuss AECOM’s principal allegations below.

The Labor Staffing Model

As we discussed in our recent decision denying the protest of another disappointed offeror, DynCorp International, LLC, supra, the RFP required firms to include in their proposals a labor staffing model (LSM).  Each offeror’s LSM essentially is a mechanism designed to predict and track the cost associated with performance of the solicited requirements, in light of certain broad, performance-based parameters and assumptions.  These parameters and assumptions were identified in the RFP’s PWS and WBS, and included, for example, the operation of a fuel depot of a particular size and configuration, in a particular location, during certain specified hours of operation, and so on.  All of the performance-based parameters and assumptions for the LSM were constant for all offerors, which is to say, all offerors prepared their proposals using the same set of inputs.

AECOM does not challenge the agency’s evaluation of its own LSM, but maintains that the LSMs of KBR, Vectrus and Fluor were materially flawed, and that the agency’s evaluation failed to identify these flaws.  The record shows that the LSMs of these awardees were assigned a strength in the agency’s evaluation of the offerors’ technical/management proposals based on the ability of their respective LSMs to provide transparent labor estimates, giving enhanced cost traceability during program execution.  See, e.g., AR, exh. 123, SSDD, at 8-9.  Fluor’s LSM also was singled out for offering an internal verification check, and Fluor was identified as the only offeror to provide this feature.  Id. at 9.  According to AECOM, the agency’s evaluation was unreasonable because these awardees’ LSMs--and in particular KBR’s LSM--lack predictability, transparency, and traceability.[7]

We find no merit to this aspect of AECOM’s protest.  AECOM’s allegations are based entirely on materials and opinions presented by the protester’s consultant.  Our review of those materials leads us to conclude that this aspect of AECOM’s protest is based on requirements not found in the RFP.  We also conclude that AECOM’s consultant misunderstands the mechanics of the awardees’ respective LSMs in general, and more specifically, the mechanics of the KBR LSM.  We discuss our conclusions below.[8]

The RFP required the LSM to predict the mix, types, and quantities of staffing necessary to account for all activated service requirements set forth in the RFP.  RFP at 107.  The RFP provided that the LSM should be consistent, scalable, and adjustable.  Id.  Offerors also were required to provide a supporting rationale describing the basis for the LSM development for all activated services, including clearly explaining how the offeror determined the types and quantities of labor proposed for a particular resource.  Id.  The RFP provided that the offerors’ explanations should identify the source of the data, formulae or calculations used to estimate the proposed quantities, and should include the basis, support, estimating relationships, or estimating methodologies used by the offeror.  Id.

The focus of AECOM’s allegations relates principally to differences between the awardees’ “base” LSMs and the individual labor staffing approaches (LSAs) developed for each GCC and Afghanistan (AECOM focuses primarily on KBR’s proposal).  In this connection, the RFP’s instructions provided, in pertinent part, as follows:

The base Labor Staffing Model shall be consistent, scalable, and adjustable accounting for all activated service requirements identified through the RFP, the PWS, and the associated technical exhibits, including the government provided workload inputs and assumption criteria identified in Attachments 0002 thru 0010.

*  *  *  *

Utilizing the base Labor Staffing Model above, the Offeror shall develop and provide one (1) Labor Staffing Approach for each task order.  The Labor Staffing Approach for each task order shall be produced by populating the base Labor Staffing Model with the unique requirements in the Government provided workload data and assumptions identified in Attachments 0002 thru 0010, respectively.

RFP at 107. 

  Design of the Base LSMs

AECOM makes several arguments relating to the awardees’ base LSMs and their respective individual labor staffing approaches for each GCC and Afghanistan but, again, its principal focus is on what AECOM claims to be shortcomings in the KBR LSM.  First, AECOM argues that KBR’s (and also Fluor’s and P2GLS’s) base LSM did not account for all activated service requirements.  According to the protester, this resulted in the base LSMs having a fundamental flaw because not all of the possible labor categories, populations of various labor types and job categories or titles, and staffing mixes and formulae for calculating estimated labor hours for certain jobs are included in the base LSMs.

This aspect of AECOM’s protest is based on a faulty underlying premise, namely, that the base LSM had to be pre-populated with the entire universe of all possible inputs from the performance work statement.  This is incorrect.  As the RFP instructions make clear, the base LSM had to be “consistent, scalable and adjustable.”  RFP at 107.  Of these, the only definitive requirement was that the base LSM be consistent.  The remaining two requirements--that it also be scalable and adjustable--demonstrate that the agency contemplated that the base LSM could utilize varying additional data inputs based on the particular requirements being procured.

This interpretation is reinforced by the second paragraph of the solicitation instructions quoted above relating to the development of the LSAs, which provides as follows:  “The Labor Staffing Approach for each task order shall be produced by populating the base Labor Staffing Model with the unique requirements in the Government provided workload data and assumptions identified in Attachments 0002 thru 0010, respectively.”  RFP at 107 (emphasis supplied).  In effect, therefore, the RFP contemplated the creation of a base LSM that operates essentially as a “generator” for the more specific LSAs for each GCC and Afghanistan.  In other words, it was not necessary for the base LSM to be pre-populated with all possible data inputs included in the RFP.[9]

Turning to the KBR LSM as an illustrative example, the record shows that KBR proposed a “baseline” LSM that effectively worked as a “generator” for the individual LSAs.  AR, exh. 33-13, KBR Baseline LSM.  This baseline LSM includes WBS elements that are either generic or are common to all of the GCCs and Afghanistan, but also allows the user to draw in various additional data to generate the individual LSAs.  Id., User Interface.  Of significance, the baseline LSM interface includes a user “tip” for inserting a WBS element into the model, which states:  “Tip:  WBS Selection.  Type your key word to search dropdown list for items containing your criteria OR Leave blank to select from all activated services!”  Id. (emphasis supplied).  In other words, while KBR’s baseline LSM already had certain requirements that were either generic or common to all GCCs and Afghanistan built into its base LSM, it also allowed the user to select data from any other WBS element to generate specific individual outcomes for each GCC and Afghanistan.[10]
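To illustrate this template-and-generator design in concrete terms, the following is a minimal, hypothetical sketch of how a base LSM can draw selected WBS elements into region-specific LSAs.  The WBS identifiers, estimating formulas, and workload figures below are invented for illustration only and are not drawn from KBR’s (or any offeror’s) actual model.

```python
# Hypothetical sketch of a "generator"-style base labor staffing model (LSM).
# All WBS identifiers, estimating formulas, and workload figures below are
# invented for illustration; none come from any offeror's actual proposal.

# Catalog of activated services: each WBS element maps a workload input to
# estimated daily labor hours (a simple "estimating relationship").
WBS_CATALOG = {
    "04.04.01-laundry": lambda pounds_per_day: pounds_per_day / 40.0,
    "04.07-health-care": lambda population: max(8.0, population * 0.02),
    "07.01-fuel-depot": lambda gallons_per_day: (gallons_per_day / 5000.0) * 24.0,
}

def generate_lsa(task_order, workload):
    """Populate the base model with one task order's workload inputs,
    producing a labor staffing approach (LSA) for that region only."""
    lsa = {"task_order": task_order, "daily_labor_hours": {}}
    for wbs_id, units in workload.items():
        # Only the WBS elements activated for this region are drawn in; the
        # base model itself need not be pre-populated with every element.
        lsa["daily_labor_hours"][wbs_id] = WBS_CATALOG[wbs_id](units)
    return lsa

# Two regions activate different subsets of services from the same base model.
eucom_lsa = generate_lsa("EUCOM", {"04.04.01-laundry": 2000,
                                   "07.01-fuel-depot": 50000})
africom_lsa = generate_lsa("AFRICOM", {"04.07-health-care": 1500})
print(eucom_lsa)
print(africom_lsa)
```

On this reading, consistency comes from the shared catalog and generation logic, while scalability and adjustability come from the workload inputs supplied for each task order.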

The various outcomes for each GCC and Afghanistan are included in the record.  AR, exhs. 33-3 to 33-12.  In addition, the record includes other exhibits that were part of the KBR proposal that describe all of the additional underlying data to be used in generating individual LSAs, including a comprehensive list of labor categories, job titles and detailed job descriptions, AR, exh. 33-14, and a comprehensive list of WBS elements.  AR, exh. 33-2.  All of these data points can be drawn into the baseline LSM to generate specific outcomes for the individual LSAs. 

Our understanding of the KBR baseline LSM is corroborated by a declaration submitted by an individual retained by KBR as a consultant.  He states as follows:

The Solicitation instructs that the Labor Staffing Approaches be developed from the base Labor Staffing Model, but it does not explicitly state that the base Labor Staffing Model and Labor Staffing Approaches remain one-to-one in perpetuity.  For instance, using a copy of the base Labor Staffing Model as a starting point, the unique requirements as referenced in the Solicitation could have been inserted, at which point that Excel file became a “Labor Staffing Approach” and was no longer the original “base Labor Staffing Model”.  In effect, the base Labor Staffing Model would be a template to be customized into individual Labor Staffing Approaches.  Nevertheless, [AECOM’s consultant] does not consider such an implementation as a possibility under the language of the Solicitation.  Instead, he assumes that the base Labor Staffing Model must contain the same information as the individual Labor Staffing Approaches at all times, even after development of the Labor Staffing Approaches.

KBR Supplemental Comments, Declaration of KBR’s Consultant, at 7.  AECOM has not submitted any evidence that would contradict these conclusions, and we find, based on our own examination of the KBR baseline LSM and the statements from KBR’s consultant, that this is, in fact, how the KBR baseline LSM functions.  As a result, the mere fact that the KBR baseline LSM does not include all possible inputs provides no basis for our Office to find the agency’s evaluation of the other offerors’ base LSMs and corresponding LSAs unreasonable.  In effect, AECOM has, at most, identified a mechanical difference between the functioning of its base LSM and the base LSMs proposed by other offerors that does not run afoul of any requirement in the RFP.

In addition to these considerations, we also point out that AECOM has not explained--and it is not apparent to us--how it might have been competitively prejudiced by the other offerors’ business decisions to construct their respective base LSMs in the manner that they did.  Competitive prejudice is an essential element of every viable protest, and where none is shown or otherwise evident from the record, we will not sustain a protest, even if the agency’s actions arguably are improper.  Olympus America, Inc., B-414944, Oct. 19, 2017, 2018 CPD ¶ 151 at 3-4.

Here, while AECOM has demonstrated that not all of the base LSMs included all possible data inputs, it has not shown how this fact, without more, provided any of the other offerors a competitive advantage or, correspondingly, how AECOM’s business decision to include all possible data inputs in its base LSM negatively affected its competitive position.  Under the circumstances, we would have no basis to object to the agency’s evaluation findings, even if we were to agree with AECOM (which we do not) that offerors were not permitted to construct their respective base LSMs in the manner that they did.

Cost Realism Evaluation

AECOM’s central argument with respect to the agency’s evaluation of cost/price proposals is that the agency failed to perform a cost realism evaluation for the cost-reimbursement elements of the requirement.  AECOM points out that the agency did not make any cost realism evaluation adjustments to any offeror’s proposed costs.  The protester argues that this was fundamentally unreasonable given the wide disparity in the offerors’ proposed costs and differences in their respective proposed staffing, notwithstanding the fact that all offerors were proposing to perform the same requirements.  The protester also argues that the agency failed to compare the offerors’ proposed cost elements either to an objective baseline (such as an independent government cost estimate) or to compare the cost elements from one proposal to the same cost elements in the other proposals.

We find no merit to this aspect of AECOM’s protest.  As a general matter, when an agency evaluates proposals for the award of a cost-reimbursement type contract, the offerors’ proposed costs are not necessarily dispositive because, regardless of the costs proposed, the government is bound to pay the contractor its actual and allowable costs.  Federal Acquisition Regulation (FAR) §§ 15.305(a)(1), 15.404-1(d), 16.505(b)(3); Exelis Sys. Corp., B-407673 et al., Jan. 22, 2013, 2013 CPD ¶ 54 at 7.  Consequently, an agency must perform a cost realism analysis to determine the extent to which an offeror’s proposed costs are realistic for the work to be performed.  FAR § 15.404-1(d)(1).  An agency is not required to conduct an in-depth cost analysis, or to verify each and every item in assessing realism; rather, the evaluation requires the exercise of informed judgment by the contracting agency.  AdvanceMed Corp.; TrustSolutions, LLC, B-404910.4 et al., Jan. 17, 2012, 2012 CPD ¶ 25 at 13.  Our review is limited to determining whether the cost analysis is reasonably based and not arbitrary.  TriCenturion, Inc.; SafeGuard Servs., LLC, B-406032 et al., Jan. 25, 2012, 2012 CPD ¶ 52 at 6.  Based on our review, we have no reason to object to the agency’s evaluation of cost proposals here.

As discussed above, and also at length in our earlier decision in DynCorp International, LLC, supra, the LSM was a central feature of each offeror’s proposal.  Offerors were required to submit an LSM that predicts the labor staffing mix, types, and quantities necessary to account for all activated service requirements set forth in the RFP.  RFP at 107.  Offerors also were required to provide a supporting rationale describing the basis for the LSM development for all activated services, including clearly explaining how the offeror determined the types and quantities of labor proposed for a particular resource.  Id.  The RFP provided that the offerors’ explanations should identify the source of the data, formulae or calculations used to estimate the proposed quantities, and should include the basis, support, estimating relationships, or estimating methodologies used by the offeror.  Id.

The RFP included additional guidance concerning the basis of estimate used by the offerors to prepare their respective LSMs.  RFP at 107.  Of significance, the RFP did not require offerors to use any particular data or “measuring stick” for the development of their respective LSMs.  Rather, firms were left to use their own business judgment to present an underlying basis or rationale for their LSM, and were free to use any type of information that the offeror thought was useful or predictive of staffing “outcomes” in light of the underlying information used.

The RFP provided examples of the types of information that could be used, including:  information derived from “analogous relationships” (such as performance of a particular requirement in a similar setting that the offeror thought was predictive of performing RFP work requirements); information derived from past experience (such as historical performance, a time study, or standard operating procedures); and information derived from minimum manning standards or regulations applicable to a particular task or area of performance.  RFP at 107.  In effect, offerors were free to present an LSM based on whatever data or “measuring stick” the offeror thought would best predict the staffing “outcomes” in light of the data inputs from the RFP.  Once delineated, each offeror’s LSM dictated the cost “outcomes” for each proposal.
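As a purely illustrative sketch of why this latitude can produce markedly different cost “outcomes” from identical government-provided inputs, consider two hypothetical offerors applying different, but each internally documented, estimating relationships to the same workload.  Every figure below is an assumption invented for this example, not data from the record.

```python
# Hypothetical illustration only: the same government-provided workload input,
# run through two offerors' different (but each documented) estimating
# relationships, yields different staffing and cost outcomes.

WORKLOAD_MEALS_PER_DAY = 6000      # government-provided input (invented)
HOURS_PER_FTE_PER_DAY = 10.0       # assumed shift length
BURDENED_RATE_PER_HOUR = 38.50     # invented composite labor rate, $/hour

# Offeror A bases its estimate on a time study (5.5 meals per labor hour);
# Offeror B relies on analogous contract history (7.2 meals per labor hour).
for name, meals_per_labor_hour in (("Offeror A", 5.5), ("Offeror B", 7.2)):
    hours = WORKLOAD_MEALS_PER_DAY / meals_per_labor_hour
    ftes = hours / HOURS_PER_FTE_PER_DAY
    daily_cost = hours * BURDENED_RATE_PER_HOUR
    print(f"{name}: {hours:,.0f} hours/day, {ftes:.1f} FTEs, ${daily_cost:,.0f}/day")
```

Both estimates are internally consistent and traceable to a stated basis; they simply embody different business judgments, which is why identical requirements need not yield similar proposed costs.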

The record shows that, after receipt of initial proposals, the agency engaged in extensive discussions with each offeror in order to advise them of deficiencies, inadequacies, shortcomings or inconsistencies in their proposals, whether in meeting the agency’s requirements as outlined in the PWS; in the adequacy of their proposed estimating methodologies; or in the adequacy of the data or rationale provided to support their respective LSMs.

For example, the record shows that the agency identified a deficiency in the AECOM proposal relating to its failure to include adequate staffing to meet all of the requirements of PWS section 04.07, relating to the provision of health care services.  AR, exh. 274-2, AECOM Resolution of Initial Discussion Questions, at 12.  During the same round of discussions, the agency identified a different deficiency in AECOM’s proposal relating to the basis of estimation the firm used to calculate the costs associated with performing the full service laundry requirements outlined in PWS section 04.04.01.  Id. at 16.  Also in that same round of discussions, the agency identified a deficiency in the AECOM proposal in terms of the supporting rationale the firm provided for using particular rates to perform unscheduled emergency, urgent and routine maintenance orders that differed from the rates assumed by the government under the RFP.  Id. at 5-6. 

In response to each of these discussion question examples, AECOM either increased its proposed staffing to meet the agency’s requirements, AR, exh. 274-2, AECOM Resolution of Initial Discussion Questions, at 12; revised its estimating methodology to respond to the government’s concern, id. at 16; or provided additional--and in the agency’s view adequate--supporting rationale to satisfy the agency that its approach was feasible, id. at 6.  AECOM therefore resolved the government’s concerns raised in connection with each of the deficiency examples noted above.

The record shows that the discussions process continued at length and in significant detail with each offeror until the agency was satisfied that all deficiencies, significant weaknesses, weaknesses and inconsistencies relating to the offerors’ respective LSMs were resolved.  Like AECOM in the above examples, the other offerors responded to the agency’s discussion questions, either by upwardly adjusting their proposed costs to respond to a concern identified by the agency; by adjusting their estimating methods to respond to the agency’s concerns; or by providing additional supporting information and data, such that the agency was satisfied that the costs proposed were realistic in light of the offerors’ showing and technical approach.

As a consequence of the agency’s extensive discussions with the offerors, all of the agency’s cost-related concerns were resolved.  At that point, the agency concluded that it was not necessary to make any cost realism adjustments to any of the proposals because all of the offerors either elected to: (1) increase their proposed cost for a given element in light of the requirements of the RFP identified by the agency; (2) make a change or clarification to their respective estimating methodologies to address the agency’s concerns; or (3) present additional supporting information to demonstrate the feasibility of their respective proposed approaches. 

As noted, AECOM principally argues that the variation either in the offerors’ proposed costs or staffing solutions demonstrates that the agency failed to conduct an adequate cost realism evaluation.  In effect, AECOM insists that, because all offerors were responding to the same requirements included in the RFP, it necessarily follows that all offerors should have had similar costs.  We disagree.

As we discussed in our decision in DynCorp International, LLC, supra, the solicitation charged the offerors with primary responsibility for developing their respective LSMs to meet the agency’s requirements based on an exercise of each offeror’s considered business judgment.  In engaging the offerors in this manner, the agency did not comment on or criticize the comparative wisdom of each offeror’s chosen business strategy or judgment during discussions.  Instead, the agency left the offerors largely to their own devices to develop their respective LSMs in the manner they determined would present their best business strategy and most competitive position.  As we pointed out in that decision, the agency’s discussions did not question the offerors’ business judgments concerning what formulae, data or other inputs to use in developing their respective LSMs.  Rather, the agency’s discussions focused on inconsistencies, calculation errors or other anomalies identified by the agency in light of the choices made by the offerors in the exercise of their respective business judgments.

Given the wide latitude afforded to the offerors by the agency’s acquisition strategy, it is not particularly surprising that the proposals presented markedly varying technical or business solutions to meeting the agency’s requirements.  However, this fact alone does not demonstrate that the agency failed to perform an adequate cost realism evaluation.  Rather, it demonstrates that the agency relied on the outcome of the competition to identify those offerors proposing the most advantageous solution to the agency’s requirements.  And, while the record shows that there were marked differences in the offerors’ proposed costs, levels of effort, and staffing profiles, the agency was not required to use the mechanism of a cost realism evaluation to eliminate the competitive differences among the proposed approaches.  CFS-KBR Marianas Support Services, LLC; Fluor Federal Solutions, LLC, B-410486, et al., Jan. 2, 2015, 2015 CPD ¶ 22 at 4-5.

As noted, AECOM argues that, because the agency’s requirements were the same for all offerors, their solutions necessarily either should have been closely aligned and presented largely similar costs and levels of effort, or should have been adjusted by the agency during its cost realism evaluation to reflect largely similar costs and levels of effort.  However, any such evaluation by the agency would have amounted to an improper mechanical normalization of the offerors’ proposals to, for example, either an objective baseline (such as an independent government estimate) or to the other offerors’ proposed costs or levels of effort.  Any such evaluation necessarily would have been improper, precisely because it would have failed to take into consideration each offeror’s unique technical solution or approach--which in this case was embodied in each offeror’s LSM.  CFS-KBR Marianas Support Services, LLC; Fluor Federal Solutions, LLC, supra.

Significantly, AECOM has not identified or shown that any aspect of any offeror’s proposed technical solution materially failed to meet the agency’s requirements, or that the agency unreasonably found each offeror’s solution to be at least technically acceptable.  As noted, AECOM did raise a challenge to the agency’s evaluation of certain other offerors’ proposed LSMs (principally the KBR LSM), but the discussion above demonstrates that AECOM’s challenge is meritless.[11]

In the final analysis, AECOM essentially is asking us to find that the agency erred in not making any cost realism adjustments to the offerors’ proposed costs simply because there were differences among the offerors in terms of technical approach and total evaluated costs/prices.  However, any such cost realism adjustments to the offerors’ proposed costs--in the absence of properly identified deficiencies or weaknesses in their proposed technical approaches--would have amounted to the agency making improper changes to the offerors’ respective technical approaches.  See EMW, Inc.; Pragmatics, Inc.; Futron, Inc.; VMDn, LLC, B-409686.4, et al., July 21, 2014, 2014 CPD ¶ 220 at 11.  In light of the foregoing, we have no basis to object to the agency’s evaluation of cost/price proposals for the central reasons advanced by AECOM.

  AECOM’s Remaining Cost Realism Challenges

As noted above, AECOM submitted a consultant report with its comments responding to the initial agency report, along with a second--untimely filed--consultant report with its supplemental comments responding to the agency’s supplemental report.  Its first consultant report identified a small number of discrete calculation errors in the offerors’ proposals that were not identified by the agency during its evaluation and that, in several instances, would result in minimal changes to one or another firm’s proposed costs, but that would have had no impact on the competition or the agency’s source selection decisions.  We briefly discuss these allegations below.

AECOM argues that KBR improperly included program management costs for the NORTHCOM GCC that amounted to approximately $10 million for services that were not required by the RFP; AECOM argues that these costs should not have been included in the KBR proposal for NORTHCOM.  Elsewhere, AECOM argues that KBR’s cost proposal included an incorrect formula for calculating the hours associated with fuel distribution services in NORTHCOM that, had the correct formula been used, should have resulted in a reduction of its costs by approximately $1.4 million.[12]  Inasmuch as correcting these errors would actually improve the competitive position of KBR in relation to AECOM (by lowering KBR’s total evaluated cost/price for NORTHCOM by approximately $11 million overall), these arguments do not provide a basis for our Office to object to the agency’s cost realism evaluation.

In addition to the particular items noted with respect to KBR, AECOM also alleges that Fluor used what the protester describes as “inconsistent” labor hour multipliers between its base LSM and the individual LSAs for communications and information technology support services, noting that, in certain GCCs, Fluor’s multiplier was lower than it was in the base LSM.  However, the record shows that in the only GCC where Fluor received award--AFRICOM--Fluor used the higher multiplier.  Accordingly, this would not provide a basis for our Office to object to the agency’s cost realism evaluation.

AECOM also alleges that P2GLS failed to apply a base LSM labor staffing multiplier in calculating the costs associated with handling containers and rolling stock in Iraq in the CENTCOM GCC.  Given that P2GLS did not receive award for the CENTCOM GCC, any errors identified by AECOM in this area would have had no impact on the ultimate selection decision.  We therefore have no basis to object to the agency’s cost evaluation for this reason.

Finally, AECOM argues that Vectrus used an incorrect type of personnel at a particular location (site APN001) in the CENTCOM GCC for what it describes as a “random sample” of personnel positions, maintaining that Vectrus used other country nationals as opposed to American expatriates at that location.  However, AECOM made no attempt to monetize this alleged discrepancy.  Given that AECOM’s total evaluated price for the CENTCOM GCC was approximately $500 million higher than Vectrus’s price, and in light of the fact that there are two intervening offerors with technically superior, lower cost/price proposals, we have no basis to conclude that any alleged error on the agency’s part in determining Vectrus’s total evaluated cost/price for the CENTCOM GCC could have prejudiced AECOM.[13]  We therefore have no basis to object to the agency’s cost realism evaluation for this reason.

In addition to these considerations, in both its supplemental comments responding to the agency’s supplemental report and in the untimely second consultant report, the protester argued that there were what it described as countless other calculation errors that went unidentified by the agency in its cost realism evaluation.  For example, according to the protester, its consultant found calculation errors in approximately 70 percent of all the labor staffing formulae included in the KBR base LSM.  However, since AECOM had copies of all proposals and the agency’s evaluation record as of the date the agency submitted its initial agency report, AECOM was required to advance these contentions within 10 days of receiving that report.  Since AECOM did not raise these allegations until more than 10 days after it had the evidence upon which they rely, these contentions are untimely and not for our consideration.[14]  We therefore dismiss these allegations.  4 C.F.R. § 21.2(a)(2).

Based on the foregoing discussion, we dismiss in part and deny in part AECOM’s challenges to the agency’s cost realism evaluation.

Past Performance Evaluation

AECOM challenges the agency’s assignment of a satisfactory rating to its past performance.  The protester makes two principal arguments in this connection.  First, AECOM argues that the satisfactory rating was based primarily on the agency’s finding of a discernible trend of safety-related concerns reflected in the protester’s past performance.  According to the protester, this finding was unreasonable because, elsewhere in its proposal, AECOM presented what it characterizes as objective data showing that, in fact, it has a generally improving safety record.

We find no merit to this aspect of AECOM’s protest.  The record shows that the agency identified a group of safety-related concerns occurring across multiple AECOM contracts that gave rise to a larger concern on the part of the evaluators that these instances reflected a negative trend in safety for AECOM. 

First, the record shows that, on a task order issued under AECOM’s Enhanced Army Global Logistics Enterprise (EAGLE) Army Prepositioned Stock 5 contract (a contract determined very relevant by the agency evaluators), a level III nonconformance report (NCR)[15] was issued to AECOM because an unexploded .50 caliber shell was found inside a vehicle while maintenance was being performed.  This level III NCR also referenced a similar incident occurring earlier in the same month (April 2017) that went unreported.  AR, exh. 275, AECOM Past Performance Report, at 38, 81.  As to this incident, AECOM submitted a corrective action plan and the incident was deemed resolved approximately a month after the level III NCR was issued.  Id. at 38.

Second, the record shows that another level III NCR[16] was issued to AECOM’s affiliate, AC First, LLC, concerning an incident occurring under a different contract performed in Afghanistan and also deemed very relevant by the agency.  In that instance, improper documentation of a technical inspection of a Howitzer artillery piece led the agency to conclude that this failure contributed to a test-fire malfunction.  AR, exh. 275, AECOM Past Performance Report, at 47, 81.  The level III NCR/CAR contained the following finding with respect to this incident:

The DA [Department of the Army] Form 2404 was not filled out to DAPAM [Department of the Army Pamphlet]-750-8 standards and the repair/service actions on the DA 2404 were poorly documented or missing all together.  The CAR states the lack of following procedures in the documentation of maintenance for the M777A2 HOWITZER MED TWD 155 weapon system leads to serious safety and operational concerns for all sites serviced in the Afghanistan Theater putting lives at risk.

Id. at 47.

Third, a series of three level III NCRs were issued to AECOM’s affiliate AC First, LLC, on a maintenance and operational support contract performed in Afghanistan based on three separate vehicle safety issues that all occurred within 60 days of one another.  One level III NCR/CAR was issued because of a collision between two vehicles being road tested after repairs.  AR, exh. 275, AECOM Past Performance Report, at 66, 81.  A second level III NCR was issued because a vehicle that was undergoing maintenance was not properly secured and the vehicle rolled out of control and over a mechanic, who later died of complications arising from the incident.  Id. at 67, 81.  The record also shows that, as to this incident, the contractor failed to report the incident within 24 hours to the administrative contracting officer, as required under the terms of the contract.  Id. at 67.  A third level III NCR was issued because a vehicle undergoing maintenance moved unexpectedly, collided with a second vehicle, and as a result, both vehicles experienced fire damage.  Id. at 67, 81. 

Fourth, a level III NCR was issued to AECOM’s affiliate, AC First, LLC, on another EAGLE contract for failure to accurately report the status of various equipment.  According to the record, AC First’s failures to accurately report the status of various equipment resulted in the Army having an inaccurate summary of operational readiness status of important assets, which could result in increased risk of damage to equipment, injury or death to personnel, and mission failure.  AR, exh. 275, AECOM Past Performance Report, at 72-73.  This same contract also involved another level III NCR that was issued for failure to report missing sensitive items.  Id.

In addition to these specific concerns, the agency evaluators also noted three instances (two involving the reporting issues discussed above) where level I or II NCRs were elevated to a level III NCR.  The evaluation report provides as follows:

Under contract W52P1J-12-G-0028 TO [task order] 0003 determined to be somewhat relevant, the NCRs issued included repeat non conformances, where insufficient corrective action plans resulted in an overarching NCR being reissued at a higher level.  Also, under contract W52P1J-12-G-0048 TO 0002 also determined to be somewhat relevant, both level III NCRs were initially issued as lower level concerns.  The Level III NCR that was issued 25 September 2017 for failing to notify the Security Directorate regarding mission sensitive items was a repeat finding, originally issued as a Level II NCR on 15 August 2017.  The ESR [equipment status report] concerns under this contract were repetitive, two NCRs were issued regarding ESR concerns and data accuracy, the USG [U.S. Government] determined this concern to [be] a repetitive, systemic issue, which generated the escalation to a Level III NCR.

AR, exh. 275, AECOM Past Performance Report, at 82.

The agency’s evaluators summarized their safety-related concerns as follows:

The safety infractions identified above are spread across both very relevant and somewhat relevant records within URS’ past performance history.  While only one NCR/CAR involved a serious injury (associated with contract W52P1J-15-C-0040, determined to be Very Relevant by the PPET [past performance evaluation team]), the specific facts and circumstances behind the other NCRs/CARs indicated the potential for more serious consequences due to the failure to follow the necessary procedures and protocols. The safety infractions within the URS past performance history lowers its past performance confidence assessment as it is indicative of a trend in the Offeror’s ability to consistently comply with safety requirements.

AR, exh. 275, AECOM Past Performance Report, at 82.

Finally, in addition to these safety-related concerns, the agency also noted that AECOM had been assigned a marginal rating for cost control under its EAGLE contract during a recent rating period (January 2017-January 2018).  One underlying reason for the marginal rating was the government’s discovery that AECOM had more than 300 employees staged at an off-site facility awaiting base access for several weeks or months, during which time they were not being effectively utilized but the government nonetheless was being billed for them.  AR, exh. 275, AECOM Past Performance Report, at 29.

According to the record, this problem was a matter of concern because it was first discovered by the government (rather than being disclosed by AECOM); AECOM relied heavily on the government to resolve the issue; and the matter was not resolved during the rating period.  AR, exh. 275, AECOM Past Performance Report, at 29.  According to the record, AECOM should have been aware of the applicable base access protocol and local holiday policies and had plans in place to mitigate the issue, and also immediately should have notified the government in order to reduce the significant loss of productive time.  Id.  Under the same contract during the same rating period, the government also found that AECOM consistently deviated from negotiated labor rates without receiving fair and reasonable rate determinations from the government concerning those rates before invoicing the government for the differing rates.  Id. at 29‑30.

Based on these findings, the record shows that the agency assigned AECOM an overall past performance rating of satisfactory, finding as follows:

While the issues above are concerning to the PPET [past performance evaluation team], as demonstrated within this report, URS’ successful performance and positive feedback in CPARs [contractor performance assessment reports] and those responding to CPQs [contractor performance questionnaires] and interviews speak positively to URS’ abilities to successfully perform many requirements.  Notwithstanding the significance related to the adverse past performance information identified above, and any negative narrative comments or less than satisfactory ratings, each of the Assessing Officials and CPQ POCs [points of contact] stated they would recommend URS for future awards.  Considering the totality of URS’ performance and the above evaluation and analysis, the PPET has a reasonable expectation that the Offeror will successfully perform the required effort; therefore, a SATISFACTORY rating has been assigned.

AR, exh. 275, AECOM Past Performance Report, at 83.

As noted, AECOM argues that the agency unreasonably assigned its proposal only a satisfactory rating, principally because, elsewhere in its proposal, it provided information about the firm’s overall safety record that, it maintains, demonstrates the agency’s concerns were exaggerated.  AECOM describes the past performance concerns identified by the agency as isolated, minor safety incidents spread across a limited number of contracts that do not reflect its larger safety trend.  AECOM directs our attention to “total case incident rate” (TCIR) statistics[17] included in its proposal that, it maintains, show an overall safety record better than the industry averages in the locations where it performed the contracts reviewed by the agency in its past performance evaluation.  See, e.g., AR, exh. 269-1, AECOM Past Performance Proposal, at 3.[18]
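For context, TCIR statistics of the kind AECOM cites are conventionally computed under OSHA’s recordkeeping convention, which normalizes recordable incidents to 100 full-time employees (200,000 labor hours per year).  The formula below reflects that general convention, not anything specific to AECOM’s proposal.

```latex
% OSHA total case incident rate (TCIR), normalized to 100 full-time
% employees (100 workers x 40 hours/week x 50 weeks = 200,000 hours):
\[
  \text{TCIR} = \frac{\text{number of OSHA recordable cases} \times 200{,}000}
                     {\text{total employee hours worked}}
\]
```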

We have no basis to object to the agency’s evaluation of AECOM’s past performance for these reasons.  First, AECOM does not dispute the accuracy of the incidents identified and relied on by the agency during its evaluation; we therefore assume the accuracy of the agency’s findings.  AECOM’s arguments amount to nothing more than disagreement with the agency’s conclusions based on those accurately identified incidents.

Second, the data that AECOM points to in its proposal amounts to generic safety-related statistical data.[19]  The agency here was concerned with evaluating specific examples of past performance that were directly relevant to the solicited requirements.  And while it is possible that AECOM can show what the protester describes as a better safety record based on a presentation of carefully curated data points, the fact remains that the agency identified a number of significant, safety-related concerns where AECOM encountered performance problems that endangered or resulted in the death of personnel, or jeopardized mission success.  Simply stated, on this record, we have no basis to object to the agency’s assignment of a satisfactory rating to AECOM under the past performance factor.

AECOM also argues that the agency’s past performance evaluation resulted in disparate treatment of the offerors.  According to the protester, the agency was far more lenient in its evaluation of the other offerors’ past performance than it was in its evaluation of AECOM’s past performance. 

We have reviewed all of AECOM’s arguments relating to the agency’s alleged disparate evaluation of past performance and find no basis to object to the agency’s actions for the reasons advanced.  We discuss one example for illustrative purposes. 

AECOM argues that the agency treated offerors disparately in its consideration of the importance of instances where lower-level NCRs were elevated to higher-level NCRs.  As discussed above, the agency noted several instances where AECOM had lower-level NCRs elevated to higher-level NCRs, including those instances where AECOM failed to notify the contracting directorate of instances where it did not accurately account for mission sensitive equipment, as well as instances where it repeatedly failed to accurately report equipment status.  AECOM argues that there was an instance where Vectrus also had lower-level NCRs elevated to a higher-level NCR, but the agency did not similarly “penalize” Vectrus in its evaluation. 

The record shows that there was a qualitative distinction between those instances where AECOM had lower-level NCRs elevated to higher-level NCRs and the instance where Vectrus had lower-level NCRs elevated to a higher-level NCR.  As discussed above, those instances where AECOM had lower-level NCRs elevated to higher-level NCRs included one where this occurred in connection with a failure to notify the contracting directorate when AECOM had not accurately accounted for mission sensitive equipment, and one instance where AECOM had reported inaccurately on the status of equipment.  Both of these issues resulted in serious, safety-related concerns, and created an increased risk of damage to equipment, the threat of injury or death to personnel, or an increased risk of mission failure.  AR, exh. 275, AECOM Past Performance Report, at 72-73.  This related, ultimately, to the agency’s overarching conclusion that AECOM was experiencing a downward trend in safety-related concerns reflected in its past performance.

In contrast, the record shows that, while the agency did identify an instance where lower-level NCRs were escalated to a higher-level NCR for Vectrus, it was an isolated instance that did not involve any safety-related concerns.  Specifically, the record shows that Vectrus had a number of level II NCRs elevated to a level III NCR based on a systemic issue with Vectrus’s supply accountability and inventory processes, which the agency viewed as indicative of deficient or ineffective internal management controls.  AR, exh. 106-1, Vectrus Past Performance Report, at 44, 47, 71.  Nonetheless, there were no safety-related concerns associated with this aspect of Vectrus’s past performance.  The record therefore shows that there was a qualitative difference between the past performance issues experienced by AECOM, and the past performance issue identified for Vectrus.  We therefore have no basis to object to the agency’s evaluation based on AECOM’s allegations of disparate treatment.

Regional Capabilities

AECOM challenges the agency’s evaluation of the offerors’ regional capabilities under the technical/management factor.  The record shows that the agency assigned all of the offerors five strengths in each of the GCCs, except Afghanistan, where regional capabilities were not evaluated.[20]  According to AECOM, this was unreasonable for two reasons.  First, AECOM argues that, in performing an interim evaluation of proposals, the agency did not assign all offerors the same five strengths in each GCC, as it did in its final evaluation.  AECOM argues that proposals were not meaningfully revised after the interim evaluation, so there is no rational basis for the agency to have arrived at different evaluation results in its final evaluation. 

This does not provide a basis for our Office to object to the agency’s evaluation of regional capabilities.  The record shows that the agency evaluators revisited all of their evaluation findings during their review of final proposals and made adjustments to the ratings assigned (as well as the specific narrative findings underlying those ratings) based on their reconsideration of their earlier evaluation findings.  The agency’s source selection evaluation board (SSEB) report specifically states as follows:

For the final evaluation report the SSEB team performed a thorough review of the assigned findings, adjectival ratings, and narrative descriptions to ensure consistent application in accordance with the SSP [source selection plan].  As part of this thorough review, in several factors an Offeror’s final rating is different than its earlier rating.  The change in rating between the final rating and earlier ratings is based on the team thoroughly considering the substantive merits of the Offeror’s proposal and recognizing that the merits of the proposal more appropriately reflect a certain rating in accordance with the adjectival definitions.  This review resulted in changes to offerors’ findings and adjectival ratings in the final evaluation reports, which are summarized below.

AR, exh. 121, SSEB Report, at 51.  Thus, the change in ratings that occurred between the interim and final evaluations simply reflects the evaluators’ careful reconsideration of their earlier findings.  While the protester appears to assign nefarious intent to the agency’s revision of its evaluation findings (the protester suggests that the agency “whitewashed” its earlier evaluation during the reevaluation), we find nothing improper or unreasonable in the agency’s actions.

AECOM also argues that the agency’s evaluation of the offerors’ regional capabilities was unreasonable because it identified attributes of certain offerors’ capabilities, but failed to identify the same attributes in AECOM’s proposal.  In effect, AECOM argues that the agency should have identified even more regional capability attributes in its proposal than the agency found during its evaluation.

We have reviewed all of AECOM’s allegations in connection with this aspect of its protest and find no basis to object to the agency’s evaluation findings.  In the final analysis, the record shows that the agency performed an extremely thorough evaluation of the offerors’ regional capabilities and determined, on balance, that the proposals were broadly comparable in terms of regional capabilities, and that regional capabilities therefore did not provide a basis to distinguish between the proposals.  See AR, exh. 121, SSEB Report; exh. 122-2, Source Selection Advisory Council Report.  Given that all of the offerors here are large, multinational concerns with well-established global capabilities, it is not inherently unreasonable for the agency to have reached this conclusion.

While AECOM is correct that the agency did not identify identical capabilities for each offeror in each GCC, AECOM has failed to show that the agency materially misevaluated its proposal or failed to identify its own regional capabilities in areas that would have affected the agency’s source selection decisions.  AECOM also has failed to show that the agency improperly or unreasonably identified any regional capability attributes in any other offeror’s proposal.  Under these circumstances, we have no basis to object to the agency’s evaluation of regional capabilities for the reasons advanced by AECOM. 

Unbalanced Costs/Prices

Finally, AECOM argues that the agency failed to evaluate the proposals for unbalanced costs/prices.  According to the protester, the agency failed to evaluate whether the proposed costs/prices were unbalanced as between the fixed-price elements and the cost-reimbursable elements of the requirement.  AECOM argues that the high variation among the proposed fixed prices shows that one firm or another may have proposed unbalanced costs/prices and, presumably, shifted cost-reimbursable elements of cost into the fixed-price elements.

We find no merit to this aspect of AECOM’s protest.  Unbalanced pricing exists when, despite an acceptable total evaluated price, the price of one or more contract line items is significantly overstated or understated.  FAR § 15.404‑1(g)(1).  With respect to unbalanced pricing generally, the FAR requires that contracting officers analyze offers with separately‑priced line items or subline items in order to detect unbalancing.  FAR § 15.404-1(g)(2).  While both understated and overstated prices are relevant to the question of whether unbalanced pricing exists, the primary risk to be assessed in an unbalanced pricing context is the risk posed by overstated prices.  American Access, Inc., B-414137, B‑414137.2, Feb. 28, 2017, 2017 CPD ¶ 78 at 5.  Thus, to prevail on an allegation of unbalanced pricing, a protester must first show that one or more line item prices are significantly overstated; the risk posed by an overstated line item price is that the Government will not receive the benefit of its bargain because other line items (for example, option quantities) will not be purchased.  InfoZen, Inc., B‑411530, B‑411530.2, Aug. 12, 2015, 2015 CPD ¶ 270 at 7. 
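
To illustrate the nature of this risk, consider a purely hypothetical example (the figures below are illustrative only and do not appear in the record).  Suppose an offeror shifts price from an option line item into a base line item:

% Hypothetical figures for illustration only; not drawn from the record.
\[
\text{base CLIN: } \$150\ (\text{fair value } \$100) \qquad
\text{option CLIN: } \$50\ (\text{fair value } \$100) \qquad
\text{total: } \$200
\]

The $200 total matches the aggregate fair value of both line items, so the total evaluated price appears acceptable; but if the option quantity is never purchased, the government pays $150 for $100 worth of work and does not receive the benefit of its bargain.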

The record here shows that AECOM proposed the highest fixed prices in four of the six GCCs (AFRICOM, CENTCOM, EUCOM, and SOUTHCOM), and the second-highest fixed prices in the remaining two GCCs (NORTHCOM and PACOM).  AR, exhs. 117-2 to 117-7 at 3.  Thus, to the extent that any firm may have benefited from proposing higher fixed prices, it was principally AECOM.  In any event, the protester has not explained--and it is not apparent to us--how it may have been prejudiced by the agency’s alleged failure to perform an unbalanced pricing analysis as between the fixed-price elements and cost-reimbursable elements.  As noted, the principal risk associated with accepting an unbalanced price proposal is that the government will not obtain the benefit of its bargain because it will purchase some line items but not others.  Here, the agency’s awards include both the fixed-price and cost-reimbursable elements.  It follows that any risk associated with purchasing only the fixed-price elements was effectively eliminated by the agency’s actions. 

Based on the discussion above, we would have no basis to object to the agency’s actions for the reasons advanced in AECOM’s earlier protest filed with our Office.

Thomas H. Armstrong
General Counsel


[1] URS Federal Services, Inc. submitted the original proposal in this acquisition, but during the competition, it changed its name to AECOM Management Services, Inc.  The record in this case refers to URS and AECOM interchangeably.

[2] All of the agency reports in each protest were organized using the same exhibit numbering system so that all citations in every protest were to the same set of documents.  Not all documents were produced in every protest.

[3] All references to the RFP are to the version produced by the Army that is conformed through RFP amendment No. 11.  AR, exh. 3.

[4] The regions were divided into three operational groups.  Operational group 1 included EUCOM and PACOM; an offeror was eligible to receive only one task order award in operational group 1.  RFP at 116.  Operational group 2 included CENTCOM, NORTHCOM, AFRICOM, and SOUTHCOM; an offeror was eligible to receive only one task order award in operational group 2.  Id.  Operational group 3 included only Afghanistan; all offerors that were selected for an operational group 1 or 2 award, with the exception of the CENTCOM awardee, were eligible for award of the Afghanistan task order.  Id.

[5] The Afghanistan task order does not include a “setting the theater” component.

[6] The RFP advised that for the technical/management and small business participation factors, the agency would assign adjectival ratings of outstanding, good, acceptable, marginal or unacceptable.  RFP at 118, 120.  For the past performance factor, the agency would assign ratings of substantial confidence, satisfactory confidence, neutral confidence, limited confidence, or no confidence.  Id. at 119.

For purposes of evaluating cost/price, the RFP advised that the agency would evaluate the cost-reimbursement elements for reasonableness and realism, and the fixed-price elements for reasonableness; the RFP also advised that proposed cost/price would be evaluated for balance.  RFP at 120-121.

[7] In support of this aspect of its protest, AECOM relied on the opinion of a consultant it retained in connection with its pursuit of the protest, and submitted a declaration and accompanying report prepared by this individual with its comments and supplemental protest filed in response to the agency’s initial report.  AECOM explains that, because its consultant had only 10 days after receipt of the agency report to review the offerors’ LSMs, he had to “prioritize” his review, and therefore concentrated principally on the KBR LSM, and more specifically, on the operation of KBR’s LSM in the NORTHCOM and EUCOM GCCs and Afghanistan.  AECOM further explained that its consultant identified what it characterizes as “similar” errors in the LSMs of Fluor and P2GLS (although P2GLS was not assigned a strength for its LSM), but AECOM’s consultant did not actually identify any concerns with the Vectrus LSM. 

[8] Our review of this issue was confined to an initial declaration and report submitted by AECOM with its comments and supplemental protest filed in response to the agency’s initial report.  AECOM’s counsel attempted to file a second declaration and report prepared by its consultant, Electronic Procurement Docketing System (EPDS) Docket Entry No. 89, but those materials were submitted after the deadline our Office established for the submission of comments responding to the agency’s supplemental report.  EPDS Docket Entry Nos. 74, 76.  Based on objections from KBR and the agency relating to counsel’s failure to timely submit these materials, we advised the parties that this second declaration and report would not be considered part of the record.  EPDS Docket Entry No. 92.

[9] AECOM suggests that the RFP includes a latent ambiguity that led it to believe that, in order to be compliant with the RFP instructions, it was required to pre-populate its LSM with all possible data inputs from the RFP.  A latent ambiguity exists where both the protester and the agency have reasonable interpretations of a solicitation term or requirement.  SunGard Data Sys. Inc., B–410025, Oct. 10, 2014, 2014 CPD ¶ 304 at 6.  As the discussion above demonstrates, the protester’s alleged understanding of the RFP instructions is not reasonable.  Simply stated, there is nothing in the RFP that required the base LSMs to be pre-populated with all possible data inputs from the RFP.

[10] The user interface also includes fields to select a wide array of other data inputs, such as [deleted], and so on.  AR, exh. 33-13, User Interface.

[11] AECOM also has not challenged the adequacy of the agency’s discussions with it, or made any showing that the agency failed to apprise it of all of the agency’s concerns with AECOM’s LSM.  (In its initial protest, AECOM argued that the agency failed to provide it with adequate discussions in connection with its LSM, and also in connection with past performance and price.  AECOM subsequently withdrew these allegations.)

[12] In addition to these two alleged errors, AECOM also argues that KBR’s proposal includes a second formula error in calculating the costs associated with retail fuel operations that should have resulted in an increase in KBR’s cost for the NORTHCOM work of approximately $567,000.

[13] AECOM alleges the same error in Vectrus’s proposal for Afghanistan.  However, since Vectrus was not awarded the task order for Afghanistan, there also would be no basis for our Office to object to the agency’s evaluation for this reason.

[14] In addition, as noted, the evidence relied on by AECOM to support these allegations--AECOM’s consultant’s second report--was submitted after the deadline established for the submission of supplemental comments.  Thus, even if these allegations were timely raised pursuant to the requirements of our regulations, the evidence supporting the allegations was not timely filed.

[15] The agency’s past performance report describes a level III NCR as follows:  “Level III NCRs are reserved for a nonconformance that is likely to result in hazardous or unsafe conditions for individuals using, maintaining, or depending upon the supplies or services; or is likely to prevent performance of a vital agency mission.”  AR, exh. 275, AECOM Past Performance Report, at 82.

[16] The record includes information showing that this incident was variously reported both as a level III NCR, and a level III corrective action report (CAR).  AR, exh. 275, AECOM Past Performance Report, at 47-48, 81. 

[17] The protester states that “total case incident rate” is an objective statistic created by the Occupational Safety and Health Administration (OSHA) as a metric to compare the safety performance of companies in industry groups.
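
For reference, the standard formula OSHA uses for this statistic--which is not set out in the record--normalizes recordable cases to the annual hours of 100 full-time employees:

\[
\text{TCIR} = \frac{\text{number of OSHA-recordable cases} \times 200{,}000}{\text{total hours worked by all employees}}
\]

The constant 200,000 represents 100 employees working 40 hours per week, 50 weeks per year.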

[18] In support of its position, AECOM also references a portion of its management proposal that includes a list of safety-related awards it has received.  AR, exh. 268-3 at 7-8.  However, there is no evidence that the past performance evaluation team reviewed AECOM’s management proposal, nor did the RFP require it to do so.  Rather, the RFP provided that the agency’s past performance evaluation would be confined to a review of information related to recent and relevant contracts, as those terms were defined in the RFP.  RFP at 109-110, 118-119.

[19] In support of its position, AECOM referenced three pages in its past performance proposal where it claims to have included TCIR data.  One of those pages presents TCIR statistics for Qatar and Kuwait.  AR, exh. 269-1, AECOM Past Performance Proposal, at 3.  However, as noted above, a number of the agency’s concerns arose on contracts that were performed elsewhere, for example, in Afghanistan and Germany. 

The second reference does not identify the geographic location of the data being referenced, but appears under the firm’s discussion of a contract performed in Europe and Africa.  Id. at 19.  Even assuming that the data being referenced is in connection with work performed by AECOM in Europe and Africa, the agency’s specific safety-related concerns arose based on the incident relating to the mishandling of an unexploded .50 caliber shell while the protester was performing a contract in Germany.  Thus, regardless of the apparent success of the firm as reflected in the data referenced, the agency relied on a serious safety-related incident that has not been disputed by the protester. 

The third proposal page referenced by AECOM does not actually contain any TCIR data (although elsewhere, on page 6 of its proposal, AECOM made a second reference to TCIR data for Qatar and Kuwait).  Id. at 6, 47.

[20] The RFP advised offerors that the agency would evaluate regional capabilities in the following areas:  internal locations and capabilities; established business arrangements with host countries; strategic partnerships and vendor networks; supply chains; and other demonstrations of rapid responsiveness, capabilities and/or experience.  RFP at 117.  The record shows that each offeror was assigned a strength in each of these five areas in each GCC, albeit with a recognition of different, GCC-specific capabilities in each GCC.
