
DOCUMENT FOR PUBLIC RELEASE
The decision issued on the date below was subject to a GAO Protective Order. This redacted version has been approved for public release.

Decision

Matter of: Deloitte Consulting, LLP; Softrams, LLC

File: B-421801.2; B-421801.3; B-421801.4; B-421801.5; B-421801.6

Date: January 30, 2024

Keith R. Szeliga, Esq., Katie A. Calogero, Esq., Daniel J. Alvarado, Esq., and Lillia J. Damalouji, Esq., Sheppard Mullin Richter & Hampton LLP, for Deloitte Consulting, LLP; David B. Dixon, Esq., Robert C. Starling, Esq., Toghrul M. Shukurlu, Esq., and Aleksey R. Dabbs, Esq., Pillsbury Winthrop Shaw Pittman LLP, for Softrams, LLC, the protesters.
Edward J. Tolchin, Esq., Offit Kurman, P.A., for Gunnison Consulting Group, Inc.; Jeff Chiow Esq., Greenberg Traurig LLP, for CGI Federal, Inc.; Emily J. Chancey, Esq., W. Brad English, Esq., Nicholas P. Greer, Esq., and Jon Levin, Esq., Maynard Nexsen PC, for 22nd Century Technologies, Inc.; Jonathan D. Shaffer, Esq., and Zachary Prince, Esq., Haynes & Boone, LLP, for Navitas Business Consulting, Inc.; Aron C. Beezley, Esq., Bradley Arant Boult Cummings LLP, for Albacore Group, LLC, the intervenors.
Emily Vartanian, Esq., and Andrew Brown, Esq., Library of Congress, for the agency.
Michael Willems, Esq., and Evan D. Wesser, Esq., Office of the General Counsel, GAO, participated in the preparation of the decision.

DIGEST

1. Protests alleging disparate evaluation of proposals are sustained where the agency treated substantively identical proposal features differently.

2. Protest alleging that an awardee took a material exception to the terms of the solicitation is sustained where the agency made award to a proposal that contained an assumption inconsistent with the material terms of the solicitation, notwithstanding that the final contract did not incorporate the assumption.

3. Protest challenging agency’s price evaluation is sustained where the agency failed to utilize the solicitation’s disclosed price evaluation methodology.

4. Protest challenging the agency's evaluation of all offerors' past performance as technically equal is sustained where the record is inadequately documented.

5. Protests challenging the agency’s best-value tradeoffs are sustained where the tradeoffs are based on a flawed underlying evaluation and inadequately documented.

DECISION

Deloitte Consulting, LLP, of Arlington, Virginia, and Softrams, LLC, of Leesburg, Virginia, protest the award of contracts to Ad Hoc, LLC, of Washington, D.C., Navitas Business Consulting, Inc., of Herndon, Virginia, Gunnison Consulting Group, Inc., of Fairfax, Virginia, Albacore Group, LLC, of Washington, D.C., Artemis Consulting, Inc., of McLean, Virginia, NIC Federal, LLC, of Arlington, Virginia, CGI Federal, Inc., of Fairfax, Virginia, and 22nd Century Technologies, Inc., of McLean, Virginia, under request for proposals (RFP) No. 030ADV22R0079 issued by the Library of Congress (LOC) for agile system development support services. The protesters allege that the agency erred in its evaluation of proposals in numerous respects and in the conduct of its best-value tradeoffs.

We sustain the protests.

BACKGROUND

On December 1, 2022, the LOC issued the RFP seeking to award multiple indefinite-delivery, indefinite-quantity (IDIQ) contracts for agile software development services. B‑421801.3 Contracting Officer’s Statement of Facts (COS) at 1. The RFP explained that award would be made on the basis of a best-value tradeoff among four factors listed in descending order of importance: (1) corporate experience; (2) past performance; (3) technical approach; and (4) price. Agency Report (AR), Tab 4c, Amended RFP at 75. In addition to identifying price as the least important factor, the solicitation further explained that the non-price factors when combined were significantly more important than price. Id. The RFP did not specify how many awards the agency intended to make.

Relevant to this protest, the solicitation explained that the agency would evaluate past performance, the second most important factor, “to determine the offeror’s likelihood of success in performance of task orders under this IDIQ,” and would also “evaluate any risks demonstrated by the offeror’s past performance.” Id. at 76. With regard to price, the RFP explained that price proposals would be evaluated for completeness, reasonableness, and to determine if pricing reflects a clear understanding of the requirements. Id. Additionally, the RFP explained that the agency would calculate an offeror’s total evaluated price by multiplying “the proposed labor rates by an estimated number of hours per labor category.” Id.

The LOC received 57 initial proposals, of which 51 were considered to be responsive. B-421801.3 COS at 1. The agency established a competitive range on June 7, 2023, which ultimately included 14 offerors, including Deloitte and Softrams. Id. at 3. Of note, the competitive range included only offerors who received the highest possible adjectival ratings for all non-price factors.[1] Id. The agency then conducted discussions with all offerors in the competitive range, and all offerors submitted final proposal revisions by June 29. Id. at 4.

The agency then calculated total evaluated prices (TEPs) by selecting three sample historical task orders (out of 31) issued under the incumbent contract and multiplying offerors' labor rates by the labor hours and categories present in those task orders. B‑421801.3 COS at 3. This resulted in the following TEPs for the protesters and awardees:

Offeror            TEP
Gunnison           $4,478,045.70
22nd Century       $4,753,802.59
Albacore Group     $5,011,766.03
Ad Hoc             $5,127,603.86
NIC Federal        $5,221,762.11
Navitas            $5,632,737.85
CGI Federal        $5,693,860.99
Artemis            $6,087,854.05
Softrams           $6,126,985.69
Deloitte           $6,135,645.91

Id. at 4.

The contracting officer then conducted a best-value tradeoff using the following methodology. First, the contracting officer selected the eight offerors with the lowest prices, and then performed a best-value tradeoff among those eight offerors, ranking them in terms of best value. AR, Tab 71, Source Selection Determination Document (SSDD) at 2. Then the contracting officer identified those eight offerors as “prospective awardees” and performed a best-value tradeoff for each of the six remaining competitive range offerors, including Deloitte and Softrams, but generally only compared the six remaining offerors to the lowest-ranked of the prospective awardees. Id. The SSDD explained that if an offeror did not represent a better value than the lowest ranked of the current prospective awardees, that offeror was eliminated because “it is certain that at least [eight] other offerors exceed that offeror’s value.” Id. If an unselected offeror represented a better value than the lowest ranked prospective awardee, the unselected offeror then became a prospective awardee, displacing the previous lowest-ranked prospective awardee. Id.
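
To make the structure of this procedure concrete, the following is a minimal sketch of the selection logic as the SSDD describes it. It is an illustration only, not the agency's actual methodology or tool: the offeror records and the `better` value-comparison function are hypothetical placeholders, and only the selection flow tracks the SSDD's description.

```python
from functools import cmp_to_key

def select_awardees(offerors, num_awards, better):
    """Illustrative reconstruction of the SSDD's procedure (placeholders only).

    `offerors` is a list of dicts with "name" and "tep" (total evaluated
    price); `better(a, b)` is a placeholder best-value judgment returning
    a negative number when `a` represents the better value.
    """
    by_price = sorted(offerors, key=lambda o: o["tep"])

    # Step 1: the lowest-priced offerors become "prospective awardees,"
    # ranked best value first by pairwise tradeoff.
    pool = sorted(by_price[:num_awards], key=cmp_to_key(better))

    documented = []  # the pairings the tradeoff record would actually contain

    # Step 2: each remaining offeror is compared only to the
    # lowest-ranked prospective awardee.
    for offeror in by_price[num_awards:]:
        lowest = pool[-1]
        documented.append((offeror["name"], lowest["name"]))
        if better(offeror, lowest) < 0:
            pool[-1] = offeror                 # displace the lowest-ranked awardee
            pool.sort(key=cmp_to_key(better))  # re-rank the pool
        # otherwise the offeror is eliminated without ever being
        # compared to the other prospective awardees

    return pool, documented
```

Notably, the `documented` list in this sketch never pairs an unselected offeror with any prospective awardee other than the lowest-ranked one; as discussed below, that gap in the comparisons is one reason the tradeoff record proves inadequate.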

On October 18, 2023, the LOC made award to 22nd Century, Ad Hoc, Albacore, Artemis, CGI, Gunnison, Navitas, and NIC. B-421801.3 COS at 4-5. Deloitte and Softrams received debriefings and these protests followed.

DISCUSSION

Both Deloitte and Softrams raise a number of common challenges to the agency's evaluation and best-value tradeoff. For example, both protesters allege that the agency disparately evaluated substantially indistinguishable proposal features, unfairly benefitting the awardees. See, e.g., Deloitte Comments and Supp. Protest at 4-10, 13‑21; Softrams Comments and Supp. Protest at 45-62. Similarly, both protesters allege that the agency's best-value tradeoff was unreasonable and inadequately documented.[2] Deloitte Comments and Supp. Protest at 4-10, 13-21; Softrams Comments and Supp. Protest at 15-24. Additionally, Deloitte, but not Softrams, alleges that one of the awardees took exception to a material solicitation requirement concerning data rights, and raises challenges to the agency's past performance evaluation and price evaluation. Deloitte Comments and Supp. Protest at 10-13, 21-32; Deloitte Supp. Comments and 2nd Supp. Protest at 27-31. We address these arguments in turn.[3]

Disparate Treatment

Both Deloitte and Softrams allege numerous instances of disparate evaluation. For example, among other arguments, Softrams alleges that the agency disparately evaluated proposed approaches to reduce costs, as well as how proposals incorporated Project Management Body of Knowledge (PMBOK) best practices and Capability Maturity Model Integration (CMMI) and International Organization for Standardization (ISO) certifications. Softrams Comments and Supp. Protest at 45-62. Similarly, among other arguments, Deloitte alleges that the agency disparately evaluated proposed approaches to backlog grooming,[4] and the extent to which offerors’ corporate experience demonstrated an ability to meet deadlines and stay within budget. Deloitte Comments and Supp. Protest at 4-10, 13-21.

It is a fundamental principle of federal procurement law that a contracting agency must treat all offerors equally and evaluate their proposals evenhandedly against the solicitation's requirements and evaluation criteria. Insight Tech. Sols., Inc., B‑420133.2, et al., Dec. 20, 2021, 2022 CPD ¶ 13. When a protester alleges unequal treatment in a technical evaluation, it must show that the differences in the evaluation did not stem from differences between the quotations or proposals. IndraSoft, Inc., B-414026, B‑414026.2, Jan. 23, 2017, 2017 CPD ¶ 30 at 10; Paragon Sys., Inc.; SecTek, Inc., B‑409066.2, B-409066.3, June 4, 2014, 2014 CPD ¶ 169 at 8-9. Accordingly, to prevail on an allegation of disparate treatment, a protester must show that the agency unreasonably failed to assess strengths for aspects of its proposal that were substantively indistinguishable from, or nearly identical to, those contained in other proposals. DigiFlight, Inc., B-419590, B-419590.2, May 24, 2021, 2021 CPD ¶ 206 at 5‑6; SMS Data Prods. Grp., Inc., B-418952.2 et al., Nov. 25, 2020, 2020 CPD ¶ 387 at 9.

Additionally, competitive prejudice is an essential element of a viable protest; for our Office to sustain a protest a protester must demonstrate that, but for the agency’s actions, it would have had a substantial chance of receiving the award. Up-Side Mgmt. Co., B‑417440, B-417440.2, July 8, 2019, 2019 CPD ¶ 249 at 7. However, our decisions have consistently concluded that we resolve doubts regarding prejudice in favor of the protester; a reasonable possibility of prejudice is sufficient to sustain a protest. See Meridian Knowledge Solutions, LLC, B‑420150 et al., Dec. 13, 2021, 2021 CPD ¶ 388 at 6-7; Alutiiq-Banner Joint Venture, B‑412952 et al., July 15, 2016, 2016 CPD ¶ 205 at 11; Delfasco, LLC, B‑409514.3, March 2, 2015, 2016 CPD ¶ 192 at 7.

While the protesters allege numerous areas of disparate treatment, in the majority of cases the protesters fail to establish that the agency engaged in inappropriate disparate treatment, either because the differences in evaluations resulted from differences in the proposals or because the agency awarded similar strengths to the protesters and the awardees. For example, Deloitte argues that the agency engaged in inappropriate disparate treatment because the agency assigned Gunnison a strength for having “never missed a deadline” and having “always been within budget[,]” but did not assign a strength to Deloitte for similar corporate experience. Deloitte Comments and Supp. Protest at 6.

In response, the agency reasonably explains that the evaluators assigned Deloitte a strength for its demonstrated ability to successfully manage projects and that an additional strength on this basis would have been duplicative. B-421801.2 Supp. Memorandum of Law (MOL) at 4. Thus, because the LOC in fact similarly assigned strengths to both offerors for these allegedly indistinguishable aspects of their respective proposals, Deloitte's argument that the agency engaged in disparate treatment is legally and factually without support. General Dynamics Info. Tech., Inc., B-420589, B‑420589.2, June 15, 2022, 2022 CPD ¶ 149 at 16.

However, we note that, for several instances of alleged disparate treatment, the agency does not argue that the strengths in question stemmed from proposal differences or were captured elsewhere, but instead argues that the disparate strength was insignificant or did not competitively prejudice the protesters. That is to say, by failing to contest that the proposals in question were meaningfully different, the agency concedes that the allegations of disparate evaluation were substantively correct, contesting only whether the offerors were competitively prejudiced.

For example, Softrams alleges that the agency assigned a technical strength to awardee Ad Hoc for its approach to reducing Amazon Web Services (AWS) costs but declined to assign Softrams a strength for a similar approach to AWS cost reduction. Softrams Comments and Supp. Protest at 54-55. Here, the agency does not argue that the proposals are distinguishable in their approaches to AWS cost reduction or that Softrams received a strength for this proposal feature, but rather responds that "[t]his claim, while potentially accurate, is insignificant, because cost reductions do not necessarily translate into price reductions for the agency (especially for any orders to be placed on a firm fixed price basis)." B-421801.3 Supp. MOL at 10. That is to say, the agency contends that Softrams's proposal was not entitled to a strength for its approach to reducing AWS costs because those cost reductions would not necessarily accrue to the agency's benefit. This post hoc argument attempting to minimize the advantages of Softrams's proposed AWS cost reduction approach, however, is inconsistent with the agency's contemporaneous evaluation awarding strengths to Ad Hoc specifically for its approach to AWS cost reduction. Accordingly, this is an example of clearly impermissible disparate evaluation.[5] See DigiFlight, Inc., supra at 7 (sustaining protest where the agency failed to similarly assess a strength in the protester's proposal and did not adequately explain the basis for the disparate evaluation findings).

Similarly, Softrams notes that the agency assigned strengths to several awardees for their approach to PMBOK and for having relevant CMMI and ISO certifications, but declined to assign strengths to Softrams even though Softrams alleges that it proposed the same approach to PMBOK and possessed the same CMMI and ISO certifications. Softrams Comments and Supp. Protest at 55-62. In response, the agency does not contest that Softrams proposed an approach that employed PMBOK best practices or possessed the relevant certifications, but rather argues that the solicitation required offerors to "follow PMBOK," such that failing to do so would have resulted in a weakness or deficiency, and also contends that these proposal features related to "ancillary performance areas" such that a strength would be unlikely to affect Softrams's competitive standing. B-421801.3 Supp. MOL at 11.

Again, the agency's argument does not address the alleged disparate treatment. Specifically, the agency assigned strengths for those proposal features to other offerors and failed to assign similar strengths to Softrams's proposal or explain why the protester's proposal did not warrant such a strength. If these features were simply minimum or ancillary requirements, as the agency contends, there would be no reason to assign strengths to any offeror on that basis. In short, the agency assigned strengths to certain awardees for proposal features that the agency has not argued are meaningfully distinguishable from Softrams's proposal features, for which no strengths were assigned. This, again, is a clear case of disparate treatment.

Finally, Deloitte notes that Artemis received a strength for its approach to backlog grooming. Deloitte Comments and Supp. Protest at 17-20. However, Deloitte alleges that it also presented a detailed strategy addressing precisely that issue. Id. In response, the agency does not argue that Deloitte's approach to these issues was meaningfully different from Artemis's approach, but rather simply acknowledges that "Deloitte's backlog grooming approach was not explicitly captured in the strengths," and argues that "Deloitte's non-selection would not have turned on this one item." B‑421801.2 Supp. MOL at 8. In effect, the agency concedes disparate treatment with respect to this point but argues that Deloitte was not competitively prejudiced.

For reasons we will discuss in greater detail below, any change in competitive standing could be meaningful in this procurement because the competition among offerors was extremely close and the best-value tradeoff did not compare all offerors to one another. Accordingly, even one additional strength for either of the protesters (or, alternatively, fewer strengths for the awardees) could have been meaningful given the closeness of the competition. For these reasons, we conclude that the protesters have established a reasonable likelihood of competitive prejudice, and these protest grounds are sustained.

Exception to Material Solicitation Requirement

Deloitte argues in its second supplemental protest that awardee CGI took exception to a material solicitation requirement and therefore was ineligible for award. Specifically, the protester alleges that CGI's proposal included an assumption, regarding which of two competing data rights contract clauses took precedence, that was contrary to the solicitation's requirements and would limit the agency's ability to use CGI's work product.

In response, the agency argues first that CGI’s assumption may not represent an exception to a material solicitation requirement, and notes that, in any case, the agency rejected CGI’s erroneous assumption. 2nd Supp. MOL at 1-3. The agency explains that the signed contract with CGI specifically excluded the erroneous assumption. Id. Accordingly, the agency argues that, even if the assumption represented an exception to a material solicitation requirement, there was no prejudice to the protester as the outcome was no different than if the assumption had been addressed during discussions. Id.

We do not agree. Preliminarily, our decisions have previously concluded that data rights clauses, like the one at issue in this case, are generally material solicitation requirements because they represent clearly stated requirements of the agency’s needs. See Deloitte Consulting, LLP, et al., B-411884 et al., Nov. 16, 2015, 2016 CPD ¶ 2 at 9‑11. Moreover, the agency does not meaningfully explain how CGI’s assumption could be consistent with the requirements of the solicitation. Indeed, the agency represents that it specifically declined to incorporate CGI’s assumption into the final contract, which underscores that the agency viewed the assumption as inconsistent with material terms of the solicitation. 2nd Supp. MOL at 1‑3. Accordingly, we conclude that CGI’s proposal took exception to a material solicitation requirement by conditioning its price and proposal on an assumption that was inconsistent with the data rights requirements of the solicitation.

While the agency is correct that this is a fault in CGI's proposal that could, and likely should, have been addressed in discussions, it was not. Our decisions have previously concluded on similar facts that a proposal or quotation that takes exception to a material solicitation requirement is unawardable, and an agency may not simply ignore such an exception. See, e.g., Deloitte Consulting, LLP, et al., supra at 9-11; Barents Group L.L.C., B-276082, B-276082.2, May 9, 1997, 97‑1 CPD ¶ 164. For example, in Deloitte Consulting, et al., we sustained a protest where the agency made award notwithstanding the fact that a vendor had taken exception to a material data rights clause, even though the agency in that case argued that the protester was not competitively prejudiced because the resulting contract incorporated the correct data rights provisions. Deloitte Consulting, LLP, et al., supra at 9-11. We reached this conclusion because a proposal or quotation that takes exception to a material solicitation requirement is technically unacceptable and may not form the basis for award. Id. at 10-11. This case is no different, and we sustain this protest ground.

However, with respect to remedy, we decline to recommend that the agency exclude CGI from award as suggested by the protester. As the agency correctly notes, because an exception to a material solicitation requirement would render CGI's proposal unawardable, the agency was required to raise that issue with CGI in discussions. See 2nd Supp. MOL at 2-3. The agency concedes that it did not do so, and, by failing to raise the issue, the agency's previous discussions with CGI were other than meaningful. Id. Therefore, the appropriate remedy would be to reopen discussions to allow CGI to rectify the erroneous assumption. See Peraton, Inc., B‑416916.5, B‑416916.7, Apr. 13, 2020, 2020 CPD ¶ 144 at 8 (explaining that where an offeror's proposal took exception to a material solicitation requirement but the exception was not raised in discussions, it is inappropriate to exclude the offeror and narrowly tailored discussions are an appropriate remedy).

Evaluation of Price

Next, Deloitte argues that the agency’s price evaluation methodology was inconsistent with the solicitation. Deloitte Comments and Supp. Protest at 21-32. Specifically, Deloitte contends that the agency did not multiply the offerors’ proposed labor rates by an “estimated number of hours per labor category” as required by the RFP, but rather employed a sample task order methodology that was inconsistent with the terms of the solicitation and led to a distorted total evaluated price. Id.

Source selection officials have broad discretion in determining the manner and extent to which they will make use of the technical and price evaluation results, and their judgments are governed only by the tests of rationality and consistency with the stated evaluation criteria. Integrity Mgmt. Consulting, Inc., B-418776.5, June 22, 2021, 2021 CPD ¶ 245. Here, the agency employed a price evaluation methodology that was inconsistent with the terms of the solicitation.

The RFP required the agency to "multiply the proposed labor rates by an estimated number of hours per labor category to derive a total evaluated price." RFP at 76. However, instead of estimating hours per labor category, the agency selected three representative historical task orders from the incumbent effort (out of 31 total task orders) and applied the offerors' proposed labor rates to the labor hours and categories present on those historical task orders. B-421801.3 COS at 3. For example, the agency's price analysis included an estimate of 1,920 hours for the "Application Programmer Level III" labor category, but zero estimated hours for the "Application Programmer Level I" or "Application Programmer Level II" labor categories because the latter two labor categories were not involved in performance of the three selected historical task orders. AR, Tab 70, Price Analysis. As a result, the agency calculated offerors' total evaluated prices based only on the labor rates for 16 of the 81 total labor categories because only those categories were involved in the sample task orders, with estimates of zero hours for the remaining labor categories. Id. This led to a skewed price evaluation that was inconsistent with the terms of the solicitation.
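
The arithmetic consequence of this approach can be illustrated with a minimal sketch. The labor rates and hour estimates below are hypothetical placeholders (only the 1,920-hour figure and the category names come from the record); the point is that a zero-hour estimate removes a category's rate from the total entirely.

```python
# Hypothetical proposed labor rates (placeholder values, not record figures).
rates = {
    "Application Programmer Level I": 85.00,
    "Application Programmer Level II": 105.00,
    "Application Programmer Level III": 135.00,
}

def total_evaluated_price(rates, estimated_hours):
    # RFP formula: sum of proposed labor rate x estimated hours per category.
    return sum(rate * estimated_hours.get(category, 0)
               for category, rate in rates.items())

# Disclosed methodology: a bona fide hours estimate for every labor
# category (placeholder estimates).
hours_all_categories = {
    "Application Programmer Level I": 1000,
    "Application Programmer Level II": 1000,
    "Application Programmer Level III": 1000,
}

# Actual methodology: hours taken from three sample task orders, leaving
# zero estimated hours for categories not used on those orders.
hours_sampled_only = {"Application Programmer Level III": 1920}

print(total_evaluated_price(rates, hours_all_categories))  # every rate counts
print(total_evaluated_price(rates, hours_sampled_only))    # one rate counts
```

Under the sampled-only computation, an offeror's rates for the unsampled categories have no effect whatsoever on its total evaluated price, however high or low those rates may be; that is the distortion Deloitte alleges.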

In response, the agency argues that it was following Federal Acquisition Institute (FAI) best practices, which contemplate using a sample task order for price evaluation, and also contends that there is nothing inherently inappropriate about using zero hours as the estimated number of hours for some number of labor categories. See B-421801.2 Supp. MOL at 10-11.

However, taking the agency's argument on its own terms, the FAI best practices do not support its position. According to the FAI best practices cited by the agency, evaluation based on estimated hours and evaluation based on sample task orders are two mutually exclusive price evaluation methodologies, and the best practices provide differing sample solicitation language tailored to those two options. See B-421801.2 Supp. MOL at 10-11 (citing https://www.fai.gov/periodic-table). The agency included one set of sample language in the solicitation but performed a different price evaluation. To be clear, there is nothing objectionable in the abstract about using sample task orders to evaluate price, and the use of historical data in the evaluation of price is encouraged. However, where, as here, the RFP provided that the agency would calculate price based on an estimate of hours per labor category, the agency's actual approach of using a judgmental selection of task orders that covers less than a quarter of the total labor categories is inconsistent with the terms of the solicitation and therefore unreasonable.

Moreover, we are unpersuaded by the agency’s argument that, in effect, zero hours for a majority of labor categories was reasonable or appropriate here. While the agency is correct that zero hours could be a reasonable estimate for some number of labor categories, the agency has not represented that its estimate of actual demand for more than three quarters of the labor categories is zero hours.[6] Put another way, there is no evidence that the agency attempted to evaluate based on a bona fide estimate of hours per labor category. In short, where the RFP clearly contemplates the evaluation of price on the basis of an “estimated number of hours per labor category” the agency cannot, in effect, choose a different method of price evaluation and decline to estimate hours for the majority of labor categories.[7] See Verdi Consulting, Inc., B-414103.2 et al., April 26, 2017, 2017 CPD ¶ 136 at 11-13 (sustaining protest where price evaluation was inconsistent with the terms of the solicitation).

The agency argues, in the alternative, that the price evaluation methodology did not competitively prejudice Deloitte because it would not render Deloitte lower priced than any awardee, and the agency provided a post hoc price reevaluation demonstrating the effect of evaluating based on an average of all proposed labor categories. See B‑421801.2 COS at 4-5 n.3; B-421801.2 Supp. MOL at 12. However, Deloitte argues that, while the agency's alternative price analysis would not render Deloitte lower priced than any of the awardees, the alternative evaluation would bring Deloitte's price significantly closer to all but one of the awardees, in some cases erasing more than half of the price difference. Deloitte Comments and Supp. Protest at 24. Deloitte argues that this would render its proposal more competitive as the agency's best-value tradeoff was primarily concerned with the size of Deloitte's price premium. Id. We agree with the protester. The agency does not meaningfully contest that a price evaluation based on the actual requirements of the solicitation would have narrowed the price difference between Deloitte and the awardees, and accordingly we conclude that Deloitte has demonstrated a reasonable possibility of competitive prejudice and sustain the protest on this basis.[8]

Past Performance

Next, Deloitte alleges that the agency improperly flattened the past performance evaluation factor, effectively converting it into a pass/fail criterion instead of a comparative criterion. Deloitte Comments and Supp. Protest at 10-13. In this regard, Deloitte alleges that the agency engaged in no meaningful substantive comparison of past performance relevance or quality, instead concluding that all offerors in the competitive range had effectively equal past performance without adequate documentation of that conclusion. Id.

Here, the RFP provided that award would be made on a best-value tradeoff basis, considering past performance as the second most important comparative factor. RFP at 75. The RFP further stated that the agency would “review past performance information to determine the offeror’s likelihood of success in performance of task orders under this IDIQ,” and “evaluate any risks demonstrated by the offeror’s past performance.” Id. at 76.

The contemporaneous record does not document meaningful consideration of each offeror's likelihood of success or risk. Specifically, the agency conducted only a limited evaluation of relevance and did not meaningfully consider past performance quality as reflected in Contractor Performance Assessment Reporting System (CPARS) reports obtained by the agency. See, e.g., AR, Tab 48, Softrams Consensus Evaluation; AR, Tab 67, Proposal Evaluation Tracker. During the pendency of this protest, the agency represented that it considered past performance quality by evaluating solely whether "derogatory" past performance information existed for each offeror, but even that limited consideration is not meaningfully reflected in the contemporaneous record. Compare B‑421801.2 COS at 2 with AR, Tab 67, Proposal Evaluation Tracker.

Specifically, the consensus reports for each offeror include a brief narrative discussion of the relevance of each offeror's past performance references along with a yes-or-no check box for relevance, but no discussion of quality. See, e.g., AR, Tab 48, Softrams Consensus Evaluation. However, the contracting officer's proposal evaluation tracker simply includes a yes-or-no check box for relevance and no substantive discussion of each offeror's past performance. AR, Tab 67, Proposal Evaluation Tracker. While the tracker contains a narrative for each offeror, the narratives are substantively identical for all offerors and reflect no substantive features of each offeror's past performance, nor do they include any comparative assessment. Id. In short, the contracting officer's evaluation tracker contains no discussion of past performance quality, no substantive discussion of relevance or risk, and no meaningful comparison between the offerors on the basis of their likelihood of success or risks posed. Id.

This is reflected in the SSDD, which contains only the following sentence about past performance, which was the second most important evaluation factor: “[e]very offeror in the competitive range received a ‘Low Risk’ rating for Past Performance and the CO determined all such ratings to be equal.” AR, Tab 71, SSDD at 2. The SSDD explained that “[s]trengths in the Corporate Experience and Technical Approach factors are the only findings remaining for the offerors under consideration[,]” and included no further discussion of past performance. Id.

Our decisions have concluded that an agency may, in some circumstances, reasonably conclude that offerors are in effect equal for past performance, but the record must support that finding of equivalency. See, e.g., Pro-Sphere Tek, Inc., B‑410898.11, July 1, 2016, 2016 CPD ¶ 201 at 9-11 (concluding that, where the underlying past performance evaluation is reasonable, an agency is not required to further differentiate between the past performance ratings based on a more refined assessment of the relative relevance of the offeror’s prior contracts, unless specifically required by the RFP). Here, however, there is no supporting rationale or evidence explaining the agency’s findings of technical equality or identical risk, and, on the contrary, there is much in the record that suggests there may be unacknowledged meaningful differences between the offerors. For example, the agency’s own preliminary evaluation of relevance describes very different degrees of relevance among the offerors’ past performance references: at minimum, some offerors proposed references that were not relevant, prompting neutral ratings for those references, while other offerors proposed references that the agency uniformly found to be relevant. See AR, Tab 67, Proposal Evaluation Tracker.

Similarly, the quality of the past performance references, as reflected in CPARS reports obtained by the agency, varied widely, with some offerors having an even mix of satisfactory, very good, and exceptional ratings, while other offerors, such as Deloitte, received almost exclusively exceptional ratings. See Deloitte Supp. Comments and 2nd Supp. Protest at 20. However, the contemporaneous record simply does not engage with past performance quality, which is directly relevant to an offeror's performance risk. On this record, the agency's conclusion that all offerors were equivalent for past performance is undocumented and therefore unreasonable. See CPS Prof'l Servs., LLC, B-409811, B‑409811.2, Aug. 13, 2014, 2014 CPD ¶ 260 at 2 (sustaining protest where agency did not consider the relative merits of the firms' past performance, effectively excluding past performance as a comparative factor in the trade-off analysis).

Best-Value Tradeoff

Finally, both Deloitte and Softrams argue that the agency’s best-value tradeoff was unreasonable for several reasons. Deloitte Comments and Supp. Protest at 32-41; Softrams Comments and Supp. Protest at 15-24. First, both offerors object to the agency’s tradeoff methodology, in which the agency first ranked proposals based on the least important evaluation factor, price, and then identified the eight lowest-priced offerors as prospective awardees. Id. Additionally, both offerors contend that the best-value tradeoff among those offerors was unreasonable because it relied on a mechanical counting of strengths and did not look behind the ratings. Id. Finally, both offerors argue that the agency erred by performing only a summary comparison of their proposals to the lowest ranked of the eight prospective awardees, instead of substantively comparing their proposals to all eight awardees. Id.

Source selection officials have broad discretion in determining the manner and extent to which they will make use of the technical and cost evaluation results; cost and technical tradeoffs may be made, and the extent to which one may be sacrificed for the other is governed only by the test of rationality and consistency with the solicitation's evaluation criteria. Booz Allen Hamilton Inc., B-414283, B-414283.2, Apr. 27, 2017, 2017 CPD ¶ 159 at 13-14. However, our Office has consistently stated that evaluation ratings are merely guides for intelligent decision-making in the procurement process; the evaluation of proposals and consideration of their relative merit should be based upon a qualitative assessment of proposals consistent with the solicitation's evaluation scheme. Highmark Medicare Servs., Inc., et al., B-401062.5 et al., Oct. 29, 2010, 2010 CPD ¶ 285 at 19.

Preliminarily, in this case, the underlying evaluation was unreasonable, because, as discussed above, the agency disparately evaluated offerors, did not adequately document the past performance evaluation, and performed a price evaluation that was inconsistent with the solicitation. Accordingly, the best-value tradeoff is necessarily without a reasonable basis for those reasons. However, in addition to those flaws, the best-value tradeoff did not reasonably establish a ranking and did not adequately document the tradeoff that the agency performed for either protester.

Specifically, the agency first ranked offerors on price, and then ordered the eight lowest-priced offerors based on a best-value tradeoff among those offerors. AR, Tab 71, SSDD at 2. However, in conducting this initial tradeoff the agency selected the least important factor (price) to make the initial selection, and the agency's tradeoff among those offerors in many cases failed to meaningfully look behind the adjectival ratings. Indeed, in numerous cases the SSDD effectively just counted the number of strengths assigned without any substantive comparison between the offerors, which our decisions have consistently explained is an impermissible basis for conducting a best-value tradeoff. See id. at 9-10 (repeatedly counting strengths and comparing numbers of strengths with limited or no discussion of the underlying proposal features for several offerors).

Subsequently, the agency compared the six remaining offerors only to the lowest-ranked of the eight lowest-priced offerors. As a result, the record documents a best-value tradeoff only between the protesters and the lowest-ranked of the eight awardees (22nd Century). Id. at 12-13. Even these limited tradeoffs were inadequately documented, as they included no substantive comparison of the proposals, merely noting the large price difference between the protesters and the eighth-ranked offeror. Id. For example, the entirety of Deloitte's best-value tradeoff consisted of the following:

Deloitte’s corporate experience strengths reveal high quality prior work which aligns closely with the Library’s processes and developmental environment. However, Deloitte’s total evaluated price ($6,135,645.91) is significantly higher than 22nd Century’s total price. Because all offerors in the competitive range are highest-rated for all non-price evaluation factors (see chart on page 1 of this memorandum), this very large price premium (approximately $1,380,000) is not justified by the differences in strengths between Deloitte and 22nd Century.

AR, Tab 71, SSDD at 13.

This tradeoff includes no discussion of the features of 22nd Century’s technical proposal that are ostensibly being traded off against Deloitte’s technical proposal and no meaningful comparison of the substance of their non-price proposals.[9]

Moreover, because the agency only compared the protesters to the lowest-ranked proposal, the record contains no comparison of the protesters' proposals to any of the other awardees, several of whose prices and ratings were similar, or arguably inferior, to the protesters'. When conducting a best-value tradeoff, an agency must determine whether the merits of a technically superior, higher-priced proposal warrant the price premium. The MIL Corp., B-294836, Dec. 30, 2004, 2005 CPD ¶ 29 at 8. Even assuming, for the sake of argument, that the agency reasonably concluded that the protesters' technical merit was not worth the significant price premium between their proposals and 22nd Century's proposal, it does not necessarily follow that the protesters' technical strengths did not warrant paying a smaller price premium over other awardees whose proposals were closer in price. This is especially true given that 22nd Century proposed the second lowest price of the offerors in the competitive range.

For example, awardee Artemis's price was less than one percent lower than both protesters' prices. AR, Tab 71, SSDD at 1. However, Artemis's proposal received significantly fewer strengths than both protesters under the corporate experience factor (the most important factor) and had arguably lower past performance quality than either protester (the second most important factor). See Deloitte Supp. Comments and 2nd Supp. Protest at 13, 20; Softrams Comments and Supp. Protest at 19. While Artemis had slightly more strengths than either protester under the technical factor (the third most important factor) and a slightly lower price (the least important factor), the best-value tradeoff includes no explanation whatsoever for why the agency found Artemis's proposal to be superior to the protesters' proposals.

To be clear, we reiterate that mechanically counting strengths is not an appropriate basis for conducting a best-value tradeoff. But in this case, the record before us provides no substantive explanation for the agency’s preference for Artemis over the protesters. It is possible that Artemis’s proposal does in fact represent a better value, as the agency contends, but the contemporaneous record does not adequately document that conclusion and therefore we cannot conclude that it is reasonable, especially in light of the other procurement errors discussed above.

Further, this example clearly demonstrates the possibility that the protesters suffered competitive prejudice. Given the closeness of the competition and the lack of any substantive explanation for the agency’s preference for Artemis--or indeed for any awardee other than 22nd Century--any change in the evaluation due to the issues we have identified above could result in a different outcome for either or both protesters.

In defending its best-value tradeoff, the agency argues that we denied a similar protest of a best-value tradeoff in our decision in ICF Incorporated, LLC, B-407273.17, B‑407273.19, Dec. 19, 2013, 2014 CPD ¶ 10, but that decision is inapposite. The LOC is correct that in ICF the agency established an initial cutoff that included only the 11 highest-rated offerors, in much the same way the LOC established its cutoff in this procurement. ICF Incorporated, LLC, supra. However, in ICF the agency compared the unsuccessful offerors against all 11 awardees, which is meaningfully different from what the agency did in this case. See id. at 8-9. Rather, the facts in this case are more like those in our decision in Beneco Enterprises, Inc., B-283154, Oct. 13, 1999, 2000 CPD ¶ 69, in which we sustained a protest where the agency failed to compare a technically strong proposal to each of the awardees. Accordingly, we conclude that the agency's best-value tradeoff is inadequately documented and therefore unreasonable, and we sustain the protests on that basis.

RECOMMENDATION

We recommend the agency reopen discussions, seek revised proposals as necessary, evaluate revised proposals with respect to all evaluation factors, adequately document its analysis, and make a new source selection decision consistent with this decision. We also recommend that the agency reimburse Deloitte and Softrams for their reasonable costs of filing and pursuing their protests, including attorneys’ fees. Bid Protest Regulations, 4 C.F.R. § 21.8(d)(1). The protesters’ certified claims for costs, detailing the time expended and costs incurred, must be submitted directly to the agency within 60 days of receiving this decision. 4 C.F.R. § 21.8(f)(1).

The protests are sustained.

Edda Emmanuelli Perez
General Counsel

 

[1] Specifically, all offerors in the competitive range received ratings of “high confidence” for corporate experience, “low risk” for past performance, and “outstanding” for technical approach. B-421801.3 COS at 3.

[2] The protesters both raised numerous additional arguments in their protests. While we do not address all of the arguments in this decision, we have reviewed all of the protest arguments and conclude that none of these additional arguments provides a further basis to sustain the protests. For example, Softrams challenges the agency's decision to make only eight awards. See Softrams Comments and Supp. Protest at 24, n.4. However, the record reflects that the agency made the decision to make eight awards based on an analysis of historical data and consideration of the quantities of work likely to be available. AR, Tab 71, SSDD at 1-2. While it is possible that the agency could have reasonably selected a different number of awards, we see no basis to conclude that the agency's decision to make eight awards was inappropriate or unreasonable.

[3] We have previously explained that LOC, as a legislative branch agency, is not subject to the Federal Acquisition Regulation (FAR) but conducts its acquisitions in accordance with Library of Congress Regulations (LCR). Strong Envtl, Inc., B-311005, Mar. 10, 2008, 2008 CPD ¶ 57 at 3. However, LCR 7-210, Procurement – Goods and Services, establishes that “[i]t is the policy of the Library to follow the [FAR] in the procurement of goods and services under this regulation” unless a specific deviation is adopted. LCR 7-210 at § 3.A; see also Mythics, Inc.; Oracle Am., Inc., B-418785, B-418785.2, Sept. 9, 2020, 2020 CPD ¶ 295 at 5 (recognizing that “[t]he agency has not argued that it is not bound by the requirements of the FAR, and in fact, cites its own regulation stating that the agency follows the FAR as a matter of policy,” and concluding, therefore, that “the requirements of the FAR govern[ed]” the acquisition at issue in the protests). In this procurement, the agency explains that it used the procedures of FAR Part 15, Contracting by Negotiation. B-421801.2 COS at 1.

[4] Backlog grooming is a process for analyzing, prioritizing, and updating a backlog of tasks or work. See Deloitte Comments and Supp. Protest at 16-18.

[5] To the extent that the agency’s response to the protest suggests that cost reduction approaches did not materially exceed the solicitation’s requirements and, therefore, did not warrant the assessment of a strength, Softrams was nevertheless prejudiced because Ad Hoc’s proposal was otherwise assessed strengths for its cost reduction approach under both the corporate experience and technical factors. See AR, Tab 53, Ad Hoc Consensus Eval. Report, at cells C.10 and C.18; see also Tab 71, SSDD, at 4 (recognizing Ad Hoc’s cost-optimization). Regardless of whether Softrams should have been assessed a strength--or arguably Ad Hoc’s proposal should not have received strengths--for this aspect of its proposal, the agency’s evaluation is facially inconsistent.

[6] Moreover, to the extent the agency relies on historical data here, it relies on a small judgmental sample of the available historical data (3 out of 31 task orders under the incumbent IDIQ) that may or may not be reflective of the agency’s total historical demand. See Office Depot, LLC, B-420482, May 3, 2022, 2022 CPD ¶ 111 (concluding that, where the historical data relied on by the agency was incomplete and not necessarily reflective of demand, relying solely on that information was not reasonable).

[7] Alternatively, the agency argues that this argument is untimely because offerors were on notice that the agency intended to use an undisclosed methodology to evaluate price. In this regard, the agency argues, in essence, that because offerors asked the agency to provide its estimated hours for each labor category and the agency declined to do so, offerors should have known the agency intended to use an undisclosed price evaluation methodology. See B-421801.2 Supp. MOL at 9-10. The agency argues, therefore, that the solicitation was, at best, patently ambiguous because the solicitation disclosed one price evaluation methodology while offerors were separately on notice that the agency was going to use an undisclosed price evaluation methodology. Id. We do not agree. While the agency's decision not to disclose its labor hour estimates put offerors on notice that the agency would be using an undisclosed estimate of labor hours, the solicitation made clear that the agency would be evaluating price on the basis of estimated labor hours per labor category. No reasonable offeror could have understood from the agency's response that the agency intended to utilize an entirely different price evaluation methodology based on sample task orders, or that, in effect, it intended to estimate zero labor hours for the vast majority of labor categories. Accordingly, we do not agree that the solicitation was patently ambiguous in the way the agency suggests.

[8] Deloitte also argues that the agency’s price evaluation was unreasonable because the agency mapped labor categories to the sample task orders in an unreasonable way. Deloitte Comments and Supp. Protest at 21-32. However, we need not resolve this allegation because the agency’s basic price evaluation methodology was clearly inconsistent with the solicitation, and a methodology based on estimated labor hours per category, as required by the solicitation, would not necessarily be susceptible to the same concerns raised by the protester’s argument.

[9] Softrams's tradeoff narrative differs slightly but is similar in scope and detail. See AR, Tab 71, SSDD at 12.
