
Data Computer Corporation of America

B-419033.4,B-419033.5,B-419033.6,B-419033.7 Aug 03, 2021


DOCUMENT FOR PUBLIC RELEASE
The decision issued on the date below was subject to a GAO Protective Order. This redacted version has been approved for public release.

Decision

Matter of:  Data Computer Corporation of America

File:  B-419033.4; B-419033.5; B-419033.6; B-419033.7

Date:  August 3, 2021

Rebecca E. Pearson, Esq., Taylor Hillman, Esq., Caleb E. McCallum, Esq., and Lindsay M. Reed, Esq., Venable LLP, for the protester.
David B. Dixon, Esq., Toghrul M. Shukurlu, Esq., and Robert C. Starling, Esq., Pillsbury Winthrop Shaw Pittman LLP, for Sparksoft Corporation, the intervenor.
Krystal A. Jordan, Esq., Robyn A. Littman, Esq., and Douglas Kornreich, Esq., Department of Health and Human Services, for the agency.
Michael Willems, Esq., and Edward Goldstein, Esq., Office of the General Counsel, GAO, participated in the preparation of the decision.

DIGEST

1.  Protest that agency unreasonably evaluated quotations is denied where the record reflects the evaluation was generally reasonable and consistent with the terms of the solicitation and applicable statutes and regulations.

2.  Protest that agency treated vendors disparately by downgrading the protester’s quotation for reasons equally applicable to the awardee’s quotation is denied where the protester cannot show a reasonable possibility of competitive prejudice. 

DECISION

Data Computer Corporation of America (DCCA), of Ellicott City, Maryland, protests the issuance of a task order to Sparksoft Corporation, of Columbia, Maryland, under a teaming agreement with Skyward IT Solutions, LLC, (Sparksoft/Skyward) under request for quotations (RFQ) No. GS-35F-161CA/75FCMC20F0057 issued by the Department of Health and Human Services, Centers for Medicare and Medicaid Services (CMS), against the General Services Administration’s Federal Supply Schedule (FSS) 70 for information technology services related to software testing of various Medicare information systems.  The protester contends the agency erred in its evaluation in numerous respects, engaged in impermissible disparate treatment of quotations, and did not adequately justify its best-value tradeoff decision.

We deny the protest.

BACKGROUND

On March 30, 2020, the agency issued the Medicare Integrated Systems Testing (MIST) RFQ to eight FSS 70 contract holders, including DCCA, Sparksoft, and Skyward.  Memorandum of Law (MOL) at 4.  The RFQ provides for the issuance of a single task order to replace two existing contracts:  (1) the Single Test Contractor contract performed by DCCA, which primarily involves testing Medicare information systems running legacy COBOL software hosted on mainframes; and (2) the Medicare Payment System Modernization Services contract performed by Skyward, which involved migrating portions or aspects of the testing process to a modern cloud environment.  Id. at 2-3.  However, in addition to replacing those prior efforts, the MIST RFQ also contemplates significant new work that will result in a more complete modernization of the testing environment, gradually reducing the share of legacy testing to be performed over the task order’s period of performance.  Id. at 3.

The contemplated task order is primarily fixed-price, with certain direct costs to be paid on a time-and-materials basis.  Agency Report (AR), Tab 5, RFQ at 1.  The RFQ also contemplated a 4-month base period of performance, and three 1-year option periods.  Id. at 3.  Award was to be made on the basis of a best-value tradeoff between the following factors:  (1) corporate experience; (2) performance work statement and quality assurance surveillance plan (PWS/QASP); (3) demonstration exercises; (4) section 508[1] compliance; and (5) price.  Id. at 63.  The RFQ explained the combination of non-price factors was significantly more important than price.  Id. at 66.  Further, the RFQ noted corporate experience was significantly more important than all other non-price factors, the PWS/QASP and demonstration exercise factors were equally important, and Section 508 compliance was significantly less important than the other non-price factors.  Id. at 63. 

Relevant to this protest, the RFQ provided corporate experience would be evaluated to determine capability and suitability of the respondent to perform the work required by the statement of objectives (SOO).  RFQ at 64.  Specifically, the RFQ noted relevance for corporate experience case studies was defined as “information associated with projects similar in size, scope and complexity to that described in the attached SOO.”  Id. at 56.  With regard to the demonstration exercises, the RFQ initially required vendors to respond both orally and in their proposals to two agency-provided sample scenarios.  Id. at 64.  However, due to the COVID-19 pandemic, the agency cancelled the oral portion of the demonstration exercises.  MOL at 7.

The RFQ provided for a two-phase evaluation.  RFQ at 62.  During the first phase, vendors supplied their corporate experience submission only.  Id.  The agency received four phase one quotations, and then advised vendors whether the agency recommended that they proceed to the next phase.  Id.  Only DCCA and Sparksoft/Skyward elected to submit phase two quotations.  Id.

Following the evaluation of phase two quotations, the agency initially issued a task order to Sparksoft/Skyward on August 12, 2020, and DCCA filed a protest of the award with our Office.  MOL at 8.  On September 3, the agency agreed to take voluntary corrective action to reopen the procurement, and we dismissed the protest as academic.  Id.  Following limited exchanges with the vendors and a re-evaluation, the agency again made award to Sparksoft/Skyward on December 18.  Id.  DCCA again filed a protest of the award with our Office, and the agency, again, indicated it intended to conduct further limited exchanges and seek revised quotations, and we dismissed the protest as academic.  Id.

The agency then sent discussion letters and permitted vendors to submit revised quotations.  MOL at 9.  The agency subsequently evaluated the vendors’ revised quotations as follows:

                          DCCA             Sparksoft/Skyward
Corporate Experience      High Confidence  High Confidence
PWS/QASP                  High Confidence  High Confidence
Demonstration Exercises   Some Confidence  High Confidence
Section 508 Compliance    High Confidence  High Confidence
Price                     $33,345,781      $34,360,846


AR, Tab 44, Source Selection Decision (SSD) at 18.

In making its tradeoff decision, the agency concluded the two quotations had roughly equal merit with respect to corporate experience and section 508 compliance.  Id. at 19.  However, the agency concluded specific technical aspects of Sparksoft/Skyward’s quotation rendered it superior to DCCA’s quotation with respect to the PWS/QASP and demonstration exercises.  Id. 

As a result, the agency concluded Sparksoft/Skyward’s quotation was “moderately” superior to DCCA’s quotation overall, but was only three percent more expensive.  Id. at 19-20.  Consequently, the agency concluded Sparksoft/Skyward’s quotation represented the best value to the government, and made award on April 29, 2021.  MOL at 10.  This protest followed.
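For reference, the three percent premium can be confirmed from the evaluated prices in the table above.  The following minimal Python sketch simply reproduces that arithmetic:

```python
# Evaluated prices from AR, Tab 44, SSD at 18.
dcca_price = 33_345_781        # DCCA's total evaluated price
sparksoft_price = 34_360_846   # Sparksoft/Skyward's total evaluated price

# Premium of the awardee's price over the protester's lower price.
premium = (sparksoft_price - dcca_price) / dcca_price
print(f"{premium:.1%}")  # prints "3.0%"
```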

DISCUSSION

The protester alleges the agency erred in its evaluation in numerous respects.  Specifically, the protester alleges that the agency erred:  (1) by finding the awardee’s corporate experience relevant, and assigning the awardee the highest confidence rating; (2) in evaluating the PWS/QASP and demonstration exercises of both the protester and the intervenor; and (3) by disparately evaluating substantively identical features of the protester’s and intervenor’s quotations.  See First Supp. Protest at 43-68.  Additionally, the protester alleges the agency’s best-value tradeoff decision was flawed because the agency erroneously concluded the two quotations were technically equivalent in certain respects, among other things.  Id. at 69-70.  We address these arguments in turn.[2]

Corporate Experience

First, the protester argues the standards used by the agency to evaluate corporate experience were inconsistent with the RFQ and unequally applied.  Comments and Second Supp. Protest at 9-13, 54-59.  Specifically, the protester notes the evaluators chose to consider corporate experience case studies involving either 10 or more full-time equivalents (FTEs) dedicated to testing or case studies involving 5,000 test cases per year to be similar in size to the instant effort.  Id. at 11.  However, the protester argues the RFQ provided the effort would involve up to 60 FTEs, which is significantly larger.  Id.  In addition, the protester notes the agency concluded case studies were similar in scope if the vendor performed either legacy or modernization testing in the case study, but did not require both.  Id. at 54-59.  The protester contends this was irrational because the criterion permitted the awardee, who lacks meaningful legacy testing experience, to nonetheless meet the scope criterion as all of its case studies involved modernization testing.  Id.

Further, the protester notes its three corporate experience case studies involved [DELETED], [DELETED], and [DELETED] FTEs respectively, and showed significant experience with both legacy and modernization testing.  Comments and Second Supp. Protest at 13, 54-59.  By contrast, the awardee’s three case studies involved [DELETED] FTEs and showed very limited experience with legacy testing.  Id.  The protester contends that, by choosing low thresholds for size and scope, the agency effectively turned the evaluation into a pass/fail assessment and erased a significant advantage of the protester’s quotation, which involved much larger and more relevant case studies that were more similar in size to the current effort.  Id.

Finally, the protester also argues that, even if the agency’s size evaluation criterion of case studies involving either 10 or more FTEs dedicated to testing or involving 5,000 test cases per year was reasonable, the agency did not apply it consistently.  Comments and Third Supp. Protest at 8-9.  For example, the protester notes one of the awardee’s case studies involved [DELETED] FTEs, but [DELETED] of those FTEs were program support or management.  Comments and Second Supp. Protest at 11.  The awardee’s quotation, therefore, only showed [DELETED] FTEs dedicated to testing.  Id.  Moreover, the protester contends the case study does not qualify under the alternative criterion because it only discussed approximately [DELETED] test cases over an undefined period of time.  Comments and Third Supp. Protest at 8-9.

Where an agency issues a solicitation to vendors holding FSS contracts, and conducts a competition among FSS vendors, we will review the record to ensure that the agency’s evaluation is reasonable and consistent with the terms of the solicitation.  Spectrum Comm, Inc., B-412395.2, Mar. 4, 2016, 2016 CPD ¶ 82 at 8.  Where a solicitation does not expressly define terms such as scope, magnitude, or complexity, agencies are afforded great discretion to determine the relevance of an offeror’s or vendor’s corporate experience.  See CW Government Travel, Inc., B-419193.4, et al., Apr. 15, 2021, 2021 CPD ¶ 188 at 8 (concluding an agency’s discretion to determine the relevance of corporate experience is analogous to an agency’s broad discretion to evaluate the relevance of past performance).

In this regard, with respect to the size criterion, the agency notes the RFQ estimated the effort would include several teams “potentially” totaling 60 FTEs, based on the agency’s historical experience.  Supp. MOL at 4.  The agency noted, however, that the focus of this procurement was on innovation, modernization, and the automation of manual tasks, all of which could lead to lower FTE counts than are currently used to perform the work.  Id.  Moreover, the agency anticipated different vendors might take different approaches, requiring more or fewer FTEs.  Id.  For those reasons, the agency concluded case studies involving either 10 FTEs dedicated to testing or involving 5,000 test cases per year were sufficiently similar in size to the MIST task order to be relevant.  Id. 

The agency’s judgment in this regard is unobjectionable.  As noted above, an agency has great discretion in determining the relevance of corporate experience, and we will not generally disturb an evaluation absent a clear demonstration that the assessments are unreasonable or inconsistent with the solicitation criteria.  SIMMEC Training Sols., B‑406819, Aug. 20, 2012, 2012 CPD ¶ 238 at 4.  In this case, the agency’s explanation is credible, and, while the protester is correct that 10 FTEs is smaller than 60 FTEs, it is not so different as to be unreasonable per se.

Likewise, with respect to the scope criterion, the agency was reasonable in concluding case studies involving either legacy testing or modern testing were similar in scope to the current effort.  See AR, Tab 43, TEP Report at 2.  While the protester is correct that this effort will necessarily involve both types of testing, the criterion was applied to individual case studies, and a vendor’s case studies may not individually have involved both types of testing.  Put another way, requiring all case studies to exhibit both legacy and modern testing would effectively exclude vendors with substantial experience performing both types of testing on separate efforts.  And while the protester is correct that such a metric might lead to an anomalous outcome if a vendor only demonstrated experience with legacy or modern testing, that is not the case here.  The awardee’s case studies showed both meaningful, recent legacy testing experience and significant experience in modern testing environments.  See id. at 13-14; AR, Tab 7, Sparksoft/Skyward Phase One Corporate Experience Quotation generally.  In short, on the record before us, we see no reason to conclude the agency erred in judging the awardee’s case studies to be relevant.

Moreover, we do not agree with the protester that the agency’s relevance analysis transformed the corporate experience evaluation into a pass/fail assessment.  While the protester and intervenor received the same adjectival rating under corporate experience, the contemporaneous evaluation record shows the agency substantively evaluated each corporate experience case study.  See AR, Tab 43, TEP Report at 13‑14.  In response, the protester argues its own experience involves larger efforts and more legacy testing, and is therefore more similar in size and scope to the agency’s requirements. 

For example, the protester makes much of the fact that the agency estimated that approximately 90 percent of the initial workload on this effort involves testing of legacy systems, and contends that its extensive legacy testing experience should have distinguished it from the awardee.  See Comments and Second Supp. Protest at 12.  However, as the agency explains, the nature of the requirement involves moving away from legacy testing, and the quantity of legacy testing should decrease sharply over the course of the effort.  Supp. MOL at 8.  Accordingly, it is not unreasonable for the agency to conclude the protester’s more extensive legacy testing experience was not an advantage that would distinguish the two quotations.  In short, the protester simply disagrees with the agency’s evaluation judgments in this regard, and a protester’s disagreement with the agency’s judgment, by itself, is not sufficient to establish that an agency acted unreasonably.  Hughes Network Sys., LLC, B-409666.5, B-409666.6, Jan. 15, 2015, 2015 CPD ¶ 42 at 6.

Lastly, the protester argues the agency erred in applying its size criterion.  The protester argues the awardee’s second case study involved [DELETED] FTEs, and [DELETED] of those FTEs were management and program support staff.  Therefore, according to the protester, the case study only involved [DELETED] FTEs dedicated to testing, and therefore did not meet the agency’s 10 FTE criterion.  While the protester is correct that [DELETED] of the [DELETED] FTEs in the case study were managers or support staff, the case study as a whole involved performing software testing.  See AR, Tab 7, Sparksoft/Skyward Phase One Corporate Experience Quotation at 5-6.  That is to say, it is not clear that the managers and program support for a testing effort should be categorically excluded from the count of FTEs dedicated to testing. 

More significantly, the quotation specifically denominates one of the management staff as a “Test Manager.”  See Intervenor’s Comments on Second Supp. AR at 4; AR, Tab 7, Sparksoft/Skyward Phase One Corporate Experience Quotation at 5.  Therefore, even if we assume the protester is correct that some of the [DELETED] managerial or support staff are not appropriately considered to be dedicated to testing in the sense the agency contemplated, it would have been clearly irrational for the agency to exclude the test manager from the count of FTEs dedicated to testing.  Accordingly, because, at minimum, the test manager must be included, this case study involves at least 10 FTEs dedicated to testing, which satisfies the agency’s size criterion.[3]  Accordingly, this protest ground is without merit.

Demonstration Exercises and PWS/QASP

Next, the protester contends the agency erred in evaluating the demonstration exercises and PWS/QASP of both the protester and the awardee.  Specifically, the protester contends the agency:  applied unstated evaluation criteria to the protester’s quotation, failed to reject an inappropriate assumption in the awardee’s quotation, and disparately evaluated similar aspects of the two quotations.  See First Supp. Protest at 48-68.

  Unstated Evaluation Criteria

First, DCCA alleges the agency applied unstated evaluation criteria when it downgraded one of DCCA’s demonstration exercises for failing to demonstrate agile maturity by, among other things, failing to include an agile release checklist or a process step for process improvement.  Id. at 54-68; Comments and Second Supp. Protest at 23-25.  The protester contends nothing in the RFQ referenced “agile maturity” or otherwise required such a checklist or process improvement step, and the agency was, in effect, comparing the protester’s quotation to the awardee’s quotation rather than the evaluation criterion.  Id.  The protester also contends its quotation adequately addressed the RFQ’s actual requirements, and these areas of lowered confidence represent disparate treatment of quotations because the awardee’s quotation also did not meaningfully address these points.  Id.

Where a protester challenges the evaluation as unfairly utilizing unstated evaluation criteria, our Office will assess whether the solicitation reasonably informs vendors of the basis for the evaluation.  Raytheon Co., B-403110.3, Apr. 26, 2011, 2011 CPD ¶ 96 at 5.  In that regard, procuring agencies are not required to list as stated evaluation criteria every area that may be taken into account; rather, it is sufficient that the areas considered in the evaluation be reasonably related to or encompassed by the stated criteria.  Id.

In this regard, the instructions for the demonstration exercise provided “[c]ode development and deployment in the cloud environment is governed by the Agile Sprint release schedule[,]” and advised vendors to “[u]se your knowledge” concerning Agile Sprint release management to plan and prepare code testing in the relevant environment.  AR, Tab 14, Instructions for Business Case Scenario Two at 1.  Further, the instructions provided the response should include an “automation framework and work flows including People, Product, and Process.”  Id.

Here, the instructions were clear that the demonstration exercise involved the use of agile development methodology and that vendors should clearly describe their workflows and processes.  The agency contends a release checklist and process improvement are inherent parts of a mature agile process, and therefore were reasonably encompassed by the terms of the solicitation.  MOL at 28-29; Supp. MOL at 19-20.  We see no basis to conclude the agency’s evaluation is unreasonable in this regard. 

While the protester disagrees with the agency’s view of what is “inherent” in a mature agile development process, the solicitation was clear that the agency intended to evaluate vendors based on their agile processes.  It seems entirely reasonable that a description of a vendor’s processes relating to an “Agile Sprint release schedule” would include a checklist or some other description of the steps a vendor intends to take to seek approval prior to release.  This is especially so when the digital services playbook incorporated by reference in the RFQ and SOO included such agile checklists.  RFQ at 56; SOO at 26.  Similarly, it seems unobjectionable that the description of a vendor’s work flows and processes should include some discussion of process improvement.  In short, we believe the agency’s assessments of these matters were reasonably encompassed by the solicitation’s requirements. 
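To make the dispute concrete, the sketch below shows one hypothetical form a pre-release checklist might take in an automated agile pipeline.  The gate items are illustrative assumptions only; they are not drawn from the record or from either vendor’s quotation.

```python
# Hypothetical Agile Sprint pre-release checklist (illustrative only;
# the gate items below are assumptions, not taken from the record).
RELEASE_CHECKLIST = (
    "all sprint stories meet their acceptance criteria",
    "automated regression suite passes",
    "no unresolved high-severity security findings",
    "section 508 accessibility checks pass",
    "product owner sign-off recorded",
)

def ready_for_release(satisfied: set[str]) -> bool:
    """Release proceeds only when every checklist gate is satisfied."""
    return all(gate in satisfied for gate in RELEASE_CHECKLIST)

# Example: a sprint missing the final sign-off gate is held back.
assert not ready_for_release(set(RELEASE_CHECKLIST[:-1]))
```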

Moreover, contrary to the protester’s contention, these points represent real differences between the two quotations.  The protester’s demonstration exercise addressed neither a pre-release checklist nor process improvement.  By contrast, the awardee’s demonstration exercise addressed both points.  See, e.g., AR, Tab 17.d, Sparksoft/Skyward Demonstration Exercise Two at 1-2 (“[DELETED]”).  Accordingly, we cannot conclude the agency’s evaluation was unreasonable, and this protest ground is without merit.

  Demonstration Assumption

The protester next alleges the agency erred by uncritically accepting one of the awardee’s assumptions in the first demonstration exercise.  Comments and Second Supp. Protest at 15-18.  Specifically, the awardee assumed it had already migrated testing to an agile methodology.  Id.  According to the protester, this assumption is inappropriate because the first demonstration exercise involved testing of systems that currently primarily use a waterfall testing methodology, not an agile one.  Id.  Furthermore, the migration to an agile testing framework is part of the work contemplated by the SOO under this contract, so the awardee should not have been permitted to, in effect, assume it away.  Id.

In this regard, the SOO clearly contemplates the current testing environment includes both waterfall and agile components, and one of the principal goals of the MIST contract is to modernize this testing environment and to increase the amount of testing developed in an agile manner.  See SOO at 12.  For example, the SOO notes vendors must “[u]se Test Management tools, methodologies (waterfall, agile, and hybrid), and processes that are currently in place for IST.  However, the MIST contractor should investigate and recommend modern and improved solution as required.”  Id.

However, the SOO also notes in its assumptions and constraints that, in performing the effort, “[t]he Contractor will use an agile testing approach, a flexible methodology and automated processes as much as possible.”  Id. at 23.  Moreover, the relevant demonstration exercise instructions did not specify what, if any, methodology should be employed, but rather contemplated that vendors were “expected to explain your testing automation framework and workflows including People, Product, and Process.”  See AR, Tab 12, Instructions for Business Case Scenario One at 1.

The agency argues that the evaluators saw no problem with the assumption because the implementation of agile testing environments is a key goal of this effort, and the awardee’s demonstration exercise showed a good command of agile methodology applied to the problem posed in the demonstration exercise.  Supp. MOL at 13-14.  While this contract as a whole involves migration from legacy to modern testing methodologies, the demonstration exercise scenario neither addressed such a migration nor required vendors to demonstrate one, and the awardee’s assumption does not violate any of the constraints imposed by the scenario instructions.  While reasonable minds can differ as to whether the awardee’s assumption merited reduced confidence, we cannot conclude on the record before us that the agency’s acceptance of the assumption is unreasonable. 

  Disparate Treatment

Finally, the protester alleges the agency disparately evaluated quotations in numerous respects.  Comments and Second Supp. Protest at 25-50.  For example, the protester notes the agency downgraded its quotation for failing to specify a definition of done or acceptance criteria.  Id. at 44-47.  However, the protester contends the awardee similarly did not include a definition of done, but received no negative findings.  Id.  As a second example, the agency downgraded the protester’s quotation for failing to address vulnerability testing or cross-browser testing.  Id. at 49-50.  However, the protester argues the awardee likewise did not address those features.  Id.

It is a fundamental principle of federal procurement law that a contracting agency must treat all competitors equally and evaluate their submissions evenhandedly against the solicitation’s requirements and evaluation criteria.  Rockwell Elec. Commerce Corp., B‑286201 et al., Dec. 14, 2000, 2001 CPD ¶ 65 at 5.  However, when a protester alleges unequal treatment in a technical evaluation, it must show that the differences in the evaluation did not stem from differences between the quotations or proposals. IndraSoft, Inc., B‑414026, B-414026.2, Jan. 23, 2017, 2017 CPD ¶ 30 at 10; Paragon Sys., Inc.; SecTek, Inc., B-409066.2, B-409066.3, June 4, 2014, 2014 CPD ¶ 169 at 8‑9.  Accordingly, to prevail on an allegation of disparate treatment, a protester must show that the agency unreasonably downgraded its proposal for deficiencies that were substantively indistinguishable from, or nearly identical to, those contained in other proposals.  Office Design Group v. United States, 951 F.3d 1366, 1372 (Fed. Cir. 2020); Battelle Memorial Inst., B-418047.3, B-418047.4, May 18, 2020, 2020 CPD ¶ 176 at 5. 

While the standard to establish disparate treatment is high, in this case it appears the agency engaged in inappropriate disparate treatment in these two respects.[4]  First, the agency negatively evaluated the protester’s demonstration exercise because it did not specify a definition of done or acceptance criteria.  AR, Tab 43, TEP Report at 19.  While the agency, in several pleadings, appears to concede the protester did in fact include acceptance criteria, it also argues that such criteria are not equivalent to a definition of done and that the negative finding is justified on that basis.  See, e.g., MOL at 23. 

The problem with the agency’s position, however, is that the awardee’s quotation appears to include acceptance criteria, but also does not appear to explicitly specify a definition of done.  In subsequent pleadings, the agency claims that, for the awardee at least, providing acceptance criteria was tantamount to a definition of done.  See Supp. MOL at 30.  This disconnect is made starker by the fact that the protester’s demonstration exercise included notes describing testing outcomes that appear substantively identical to some of the sample acceptance criteria offered by the awardee.  See Comments and Third Supp. Protest at 33-34 (citing AR, Tab 17.c, Sparksoft/Skyward Business Case Scenario One at 5; AR, Tab 16.h, DCCA Business Case Scenario One at 3).  In short, the agency’s positions are inconsistent, and it is unclear from the contemporaneous record on what basis the agency is distinguishing between the two quotations in this regard.  Accordingly, we conclude that this negative evaluation finding represents inappropriate disparate treatment. 
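The terminology at the heart of this finding has a settled meaning in agile practice:  acceptance criteria are conditions specific to a single user story, while a definition of done is a uniform standard applied to every story.  A minimal sketch of the distinction follows, using invented criteria rather than anything from either quotation.

```python
# Illustrative only: neither list below is drawn from the quotations.
# Acceptance criteria are specific to one user story...
ACCEPTANCE_CRITERIA = [
    "claim with an invalid provider ID is rejected with an error code",
    "valid claim is posted to the correct payment file",
]

# ...while the definition of done applies to every story the team takes on.
DEFINITION_OF_DONE = [
    "code peer-reviewed and merged",
    "automated tests written and passing",
    "user documentation updated",
]

def story_complete(verified: set[str]) -> bool:
    """A story is complete only when both standards are fully met."""
    return all(c in verified for c in ACCEPTANCE_CRITERIA + DEFINITION_OF_DONE)
```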

Similarly, another aspect of the protester’s quotation that the agency negatively evaluated was the protester’s failure to address vulnerability testing and cross-browser testing.  AR, Tab 43, TEP Report at 19.  The awardee did not receive a similar area of lowered confidence.  With respect to vulnerability testing, the two quotations appear to be meaningfully distinct, but with respect to cross-browser testing, the quotations are not substantively distinguishable. 

As to vulnerability testing, while both the protester and the awardee include discussions of security in a general way in their quotations, the awardee specifically discusses [DELETED] for its test infrastructure.  See AR, Tab 17.d, Sparksoft/Skyward Demonstration Exercise Two at 3.  Moreover, the awardee’s discussion of its approach to security includes more detail than the protester’s quotation.  Compare AR, Tab 17.d, Sparksoft/Skyward Demonstration Exercise Two at 2-4 with AR, Tab 16.i, DCCA Demonstration Exercise Two at 2.  Accordingly, the quotations appear to be substantively distinguishable in this regard, so we cannot conclude that the agency erred in treating the two proposals differently. 

However, neither the protester nor the awardee specifically addressed cross-browser testing in any way.  The protester points to general language in its quotation concerning graphical user interface (GUI) and application programming interface (API) testing that it believes encompassed cross-browser testing, but the agency evaluators specifically noted that “[i]t cannot be inferred” that cross-browser testing is part of GUI/API testing.  AR, Tab 43, TEP Report at 19.  However, the language in the awardee’s quotation that the agency now claims encompasses cross-browser testing (“code quality, system integration, compliance, and security analysis”) is equally vague and could refer to almost any software quality assurance process.  See Supp. MOL at 33.  Moreover, this explanation is not present in the contemporaneous record.  Accordingly, the agency disparately evaluated the two quotations in this regard by reading the protester’s quotation narrowly and the awardee’s quotation expansively.
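For context, cross-browser testing means executing the same functional checks against multiple browser engines, a materially narrower activity than GUI/API testing or generic “code quality” analysis.  A minimal pytest/Selenium sketch, assuming a placeholder page under test:

```python
# Minimal cross-browser test sketch (illustrative; not from either quotation).
# Requires: pip install pytest selenium, plus locally installed browsers.
import pytest
from selenium import webdriver

BROWSERS = {
    "chrome": webdriver.Chrome,
    "firefox": webdriver.Firefox,
}

@pytest.fixture(params=sorted(BROWSERS))
def driver(request):
    drv = BROWSERS[request.param]()  # launch the browser for this run
    yield drv
    drv.quit()

def test_page_title_renders_in_every_browser(driver):
    driver.get("https://example.com")  # placeholder URL
    assert "Example" in driver.title   # same assertion in each browser
```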

Nevertheless, competitive prejudice is an essential element to every viable protest, and where an agency’s improper actions did not affect the protester’s chances of receiving award, there is no basis for sustaining the protest.  See, e.g., American Cybernetic Corp., B-310551.2, Feb. 1, 2008, 2008 CPD ¶ 40 at 2-3.  In this case, these errors were unlikely to have had an effect on the agency’s best-value tradeoff decision.

Where, as here, an agency selects a higher-priced quotation that has been rated technically superior to a lower-priced one, the award decision must be supported by a rational explanation demonstrating the higher-rated quotation is in fact superior, and explaining why its technical superiority warrants the additional cost.  e-LYNXX Corp., B‑292761, Dec. 3, 2003, 2003 CPD ¶ 219 at 7.  Such judgments are by their nature often subjective; nevertheless, the exercise of these evaluation judgments must be reasonable and bear a rational relationship to the announced criteria upon which competing offers are to be selected.  Hydraudyne Sys. and Eng’g B.V., B-241236, B‑241236.2, Jan. 30, 1991, 91-1 CPD ¶ 88 at 4.

In making the tradeoff decision, the source selection authority (SSA) found the two quotations had roughly equal merit with respect to corporate experience, but that the awardee’s quotation was superior both with respect to the PWS/QASP factor and the demonstration exercise factor.  AR, Tab 44, SSD at 19.  The SSA concluded the awardee’s technical superiority with respect to the PWS/QASP and demonstration exercises justified paying a three percent price premium over the protester’s lower-priced quotation.

In this case, both of the agency’s errors involve the protester’s demonstration exercises.  However, the protester had four positive and five negative findings concerning its demonstration exercises, while the awardee had eleven positive findings and zero negative findings.  Additionally, in discussing the negative findings related to the protester’s demonstration exercises, the SSA did not refer to acceptance criteria or cross-browser testing, but rather focused on the fact that the protester’s quotation “lacked specificity with how automated testing will be completed, missed Vulnerability testing details, and included no agile checklist.”  AR, Tab 44, SSD at 19.

On the record before us, it seems highly unlikely that removing two of the five negative findings from the protester’s evaluation would alter the protester’s rating of some confidence or otherwise affect the protester’s competitive standing given the uniformly positive nature of the awardee’s evaluation.  Moreover, the removal of these negative findings would only affect the demonstration exercise factor, and would not impact the awardee’s advantage in the PWS/QASP evaluation factor.  See Med Optical, B‑296231.2, B‑296231.3, Sept. 7, 2005, 2005 CPD ¶ 169 at 4 (“Our Office will not sustain a protest unless the protester demonstrates a reasonable possibility that it was prejudiced by the agency’s actions, that is, unless the protester demonstrates that, but for the agency’s actions, it would have had a substantial chance of receiving the award.”). 

In short, on the record before us, even if these two negative findings were removed, the record strongly suggests that the SSA would have reached the same conclusion.  Accordingly, we conclude the protester was not competitively prejudiced by these errors, and the agency’s award decision was otherwise reasonable and consistent with the solicitation.

The protest is denied.

Edda Emmanuelli Perez
General Counsel



[1] Section 508 of the Rehabilitation Act of 1973, as amended, requires federal agencies to ensure that their electronic and information technology (EIT) provides comparable access to people with and without disabilities whenever an agency develops, procures, maintains, or uses EIT.  Visual Connections, LLC, B-407625, Dec. 31, 2012, 2013 CPD ¶ 18 at 1. 

[2] The protester raises other arguments that are not addressed in this decision.  While we do not address all the protester’s arguments in this decision, we have considered them and conclude that they provide no basis to sustain the protest.  For example, the protester alleges that the agency effectively “double-counted” positive features of the awardee’s quotation, including two nearly identical positive findings related to identifying or documenting the steps for the awardee’s agile process.  See Comments and Second Supp. Protest at 20-22.  In this regard, the agency explains the apparently duplicative findings represent separate findings concerning each of the awardee’s two demonstration exercises, and that all findings relate to separate positive features of the awardee’s quotation.  Supp. MOL at 15-16.  In response, the protester contends the agency’s argument relies on post hoc information not included in the contemporaneous evaluation record and should be disregarded.  Comments and Third Supp. Protest at 19-20. 

While we generally give little or no weight to reevaluations and judgments prepared in the heat of the adversarial process, post-protest explanations that provide a detailed rationale for contemporaneous conclusions, and simply fill in previously unrecorded details, will generally be considered in our review of the rationality of selection decisions--so long as those explanations are credible and consistent with the contemporaneous record.  See Remington Arms Co., Inc., B-297374, B-297374.2, Jan. 12, 2006, 2006 CPD ¶ 32 at 12; Boeing Sikorsky Aircraft Support, B-277263.2, B‑277263.3, Sept. 29, 1997, 97-2 CPD ¶ 91 at 15.

In this case, the contemporaneous evaluation record, in most cases, specified the demonstration exercise to which individual evaluation findings referred.  See AR, Tab 43, Technical Evaluation Panel (TEP) Report at 20.  However, a handful of findings did not indicate a specific demonstration exercise, leaving it ambiguous in the contemporaneous record which demonstration exercise was being described.  Id.  In our view, the agency’s post hoc information merely provides this missing information, and appears credible and otherwise consistent with the record.  Additionally, this additional information makes it clear that the evaluation did not contain duplicative positive findings, but instead involved either similar positive findings concerning each of the awardee’s demonstration exercises or simply involved substantively different findings about the same exercise.

[3] The protester additionally argues that the case study also fails to meet the 5,000 test cases per year portion of the agency’s size criterion.  See Comments and Third Supp. Protest at 8-9.  However, the agency’s criterion was framed in the alternative:  a case study was similar in size if it involved either 10 FTEs dedicated to testing or if it involved 5,000 test cases per year.  Therefore, because we conclude above that the case study involved 10 FTEs dedicated to testing, we need not reach the question of whether it also involved 5,000 test cases per year.

[4] The protester alleges several other areas of disparate treatment.  We have considered all of these arguments and concluded that none of them provide a basis to sustain the protest.

Decision

Matter of:  Data Computer Corporation of America

File:  B-419033.4; B-419033.5; B-419033.6; B-419033.7

Date:  August 3, 2021

Rebecca E. Pearson, Esq., Taylor Hillman, Esq., Caleb E. McCallum, Esq., and Lindsay M. Reed, Esq., Venable LLP, for the protester.

David B. Dixon, Esq., Toghrul M. Shukurlu, Esq., and Robert C. Starling, Esq., Pillsbury Winthrop Shaw Pittman LLP, for Sparksoft Corporation, the intervenor.

Krystal A. Jordan, Esq., Robyn A. Littman, Esq., and Douglas Kornreich, Esq., Department of Health and Human Services, for the agency.

Michael Willems, Esq., and Edward Goldstein, Esq., Office of the General Counsel, GAO, participated in the preparation of the decision.

DIGEST

1.  Protest that agency unreasonably evaluated quotations is denied where the record reflects the evaluation was generally reasonable and consistent with the terms of the solicitation and applicable statutes and regulation.

2.  Protest that agency treated vendors disparately by downgrading the protester’s quotation for reasons equally applicable to the awardee’s quotation is denied where the protester cannot show a reasonable possibility of competitive prejudice. 

DECISION

Data Computer Corporation of America (DCCA), of Ellicott City, Maryland, protests the issuance of a task order to Sparksoft Corporation, of Columbia, Maryland, under a teaming agreement with Skyward IT Solutions, LLC, (Sparksoft/Skyward) under request for quotations (RFQ) No. GS-35F-161CA/75FCMC20F0057 issued by the Department of Health and Human Services, Centers for Medicare and Medicaid Services (CMS), against the General Services Administration’s Federal Supply Schedule (FSS) 70 for information technology services related to software testing of various Medicare information systems.  The protester contends the agency erred in its evaluation in numerous respects, engaged in impermissible disparate treatment of quotations, and did not adequately justify its best-value tradeoff decision.

We deny the protest.

BACKGROUND

On March 30, 2020, the agency issued the Medicare Integrated Systems Testing (MIST) RFQ to eight FSS 70 contract holders, including DCCA, Sparksoft, and Skyward.  Memorandum of Law (MOL) at 4.  The RFQ provides for the issuance of a single task order to replace two existing contracts:  (1) the Single Test Contractor contract performed by DCCA, which primarily involves testing Medicare information systems running legacy COBOL software hosted on mainframes; and (2) the Medicare Payment System Modernization Services contract performed by Skyward, which involved migrating portions or aspects of the testing process to a modern cloud environment.  Id. at 2-3.  However, in addition to replacing those prior efforts, the MIST RFQ also contemplates significant new work that will result in a more complete modernization of the testing environment, gradually reducing the share of legacy testing to be performed over the task order’s period of performance.  Id. at 3.

The contemplated task order is primarily fixed-price, with certain direct costs to be paid on a time-and-materials basis.  Agency Report (AR), Tab 5, RFQ at 1.  The RFQ also contemplated a 4-month base period of performance, and three 1-year option periods.  Id. at 3.  Award was to be made on the basis of a best-value tradeoff between the following factors:  (1) corporate experience; (2) performance work statement and quality assurance surveillance plan (PWS/QASP); (3) demonstration exercises; (4) section 508[1] compliance; and (5) price.  Id. at 63.  The RFQ explained the combination of non-price factors was significantly more important than price.  Id. at 66.  Further, the RFQ noted corporate experience was significantly more important than all other non-price factors, the PWS/QASP and demonstration exercise factors were equally important, and Section 508 compliance was significantly less important than the other non-price factors.  Id. at 63. 

Relevant to this protest, the RFQ provided corporate experience would be evaluated to determine capability and suitability of the respondent to perform the work required by the statement of objectives (SOO).  RFQ at 64.  Specifically, the RFQ noted relevance for corporate experience case studies was defined as “information associated with projects similar in size, scope and complexity to that described in the attached SOO.”  Id. at 56.  With regard to the demonstration exercises, the RFQ initially required vendors to respond both orally and in their proposals to two agency-provided sample scenarios.  Id. at 64.  However, due to the COVID-19 pandemic, the agency cancelled the oral portion of the demonstration exercises.  MOL at 7.

The RFQ provided for a two-phase evaluation.  RFQ at 62.  During the first phase, vendors supplied their corporate experience submission only.  Id.  The agency received four phase one quotations, and then advised vendors whether the agency recommended that they proceed to the next phase.  Id.  Only DCCA and Sparksoft/Skyward elected to submit phase two quotations.  Id.

Following the evaluation of phase two quotations, the agency initially issued a task order to Sparksoft/Skyward on August 12, 2020, and DCCA filed a protest of the award with our Office.  MOL at 8.  On September 3, the agency agreed to take voluntary corrective action to reopen the procurement, and we dismissed the protest as academic.  Id.  Following limited exchanges with the vendors and a re-evaluation, the agency again made award to Sparksoft/Skyward on December 18.  Id.  DCCA again filed a protest of the award with our Office, and the agency, again, indicated it intended to conduct further limited exchanges and seek revised quotations, and we dismissed the protest as academic.  Id.

The agency then sent discussion letters and permitted vendors to submit revised quotations.  MOL at 9.  The agency subsequently evaluated the vendors’ revised quotations as follows:

DCCA

Sparksoft/Skyward

Corporate Experience

High Confidence

High Confidence

PWS/QASP

High Confidence

High Confidence

Demonstration Exercises

Some Confidence

High Confidence

Section 508 Compliance

High Confidence

High Confidence

Price

$33,345,781

$34,360,846


AR, Tab 44, Source Selection Decision (SSD) at 18.

In making its tradeoff decision, the agency concluded the two quotations had roughly equal merit with respect to corporate experience and section 508 compliance.  Id. at 19.  However, the agency concluded specific technical aspects of Sparksoft/Skyward’s quotation rendered it superior to DCCA’s quotation with respect to the PWS/QASP and demonstration exercises.  Id. 

As a result, the agency concluded Sparksoft/Skyward’s quotation was “moderately” superior to DCCA’s proposal overall, but was only three percent more expensive.  Id. at 19-20.  Consequently, the agency concluded Sparksoft/Skyward’s quotation represented the best value to the government, and made award on April 29, 2021.  MOL at 10.  This protest followed

DISCUSSION

The protester alleges the agency erred in its evaluation in numerous respects.  Specifically, the protester alleges that the agency erred:  (1) by finding the awardee’s corporate experience relevant, and assigning the awardee the highest confidence rating; (2) in evaluating the PWS/QASP and demonstration exercises of both the protester and the intervenor; and (3) by disparately evaluating substantively identical features of the protester’s and intervenor’s quotations.  See First Supp. Protest at 43-68.  Additionally, the protester alleges the agency’s best-value tradeoff decision was flawed because the agency erroneously concluded the two quotations were technically equivalent in certain respects, among other things.  Id. at 69-70.  We address these arguments in turn.[2]

Corporate Experience

First, the protester argues the standards used by the agency to evaluate corporate experience were inconsistent with the RFQ and unequally applied.  Comments and Second Supp. Protest at 9-13, 54-59.  Specifically, the protester notes the evaluators chose to consider corporate experience case studies involving either 10 or more full-time equivalents (FTEs) dedicated to testing or case studies involving 5,000 test cases per year to be similar in size to the instant effort.  Id. at 11.  However, the protester argues the RFQ provided the effort would involve up to 60 FTEs, which is significantly larger.  Id.  In addition, the protester notes the agency concluded case studies were similar in scope if the vendor performed either legacy or modernization testing in the case study, but did not require both.  Id. at 54-59.  The protester contends this was irrational because the criterion permitted the awardee, who lacks meaningful legacy testing experience, to nonetheless meet the scope criterion as all of its case studies involved modernization testing.  Id.

Further, the protester notes its three corporate experience case studies involved [DELETED], [DELETED, and [DELETED] FTEs respectively, and showed significant experience with both legacy and modernization testing.  Comments and Second Supp. Protest at 13, 54-59.  By contrast, the awardee’s three case studies involved [DELETED] FTEs and showed very limited experience with legacy testing.  Id.  The protester contends that, by choosing low thresholds for size and scope, the agency effectively turned the evaluation into a pass/fail and erased a significant advantage of the protester’s quotation, which involved much larger and more relevant case studies that were more similar in size to the current effort.  Id.

Finally, the protester also argues that, even if the agency’s size evaluation criterion of case studies involving either 10 or more FTEs dedicated to testing or involving 5,000 test cases per year was reasonable, the agency did not apply it consistently.  Comments and Third Supp. Protest at 8-9.  For example, the protester notes one of the awardee’s case studies involved [DELETED] FTEs, but [DELETED] of those FTEs were program support or management.  Comments and Second Supp. Protest at 11.  The awardee’s quotation, therefore, only showed [DELETED] FTEs dedicated to testing.  Id.  Moreover, the protester contends the case study does not qualify under the alternative criterion because it only discussed approximately [DELETED] test cases over an undefined period of time.  Comments and Third Supp. Protest at 8-9.

Where an agency issues a solicitation to vendors holding FSS contracts, and conducts a competition among FSS vendors, we will review the record to ensure that the agency’s evaluation is reasonable and consistent with the terms of the solicitation.  Spectrum Comm, Inc., B-412395.2, Mar. 4, 2016, 2016 CPD ¶ 82 at 8.  Where a solicitation does not expressly define terms such as scope, magnitude, or complexity, agencies are afforded great discretion to determine the relevance of an offeror’s or vendor’s corporate experience.  See CW Government Travel, Inc., B-419193.4, et al., Apr. 15, 2021, 2021 CPD ¶ 188 at 8 (concluding an agency’s discretion to determine the relevance of corporate experience is analogous to an agency’s broad discretion to evaluate the relevance of past performance).

In this regard, with respect to the size criterion, the agency notes the RFQ estimated the effort would include several teams “potentially” totaling 60 FTEs, based on the agency’s historical experience.  Supp. MOL at 4.  The agency noted, however, that the focus of this procurement was on innovation, modernization, and the automation of manual tasks, all of which could lead to lower FTE counts than are currently used to perform the work.  Id.  Moreover, the agency anticipated different vendors might take different approaches, requiring more or fewer FTEs.  Id.  For those reasons, the agency concluded case studies involving either 10 FTEs dedicated to testing or involving 5,000 test cases per year were sufficiently similar in size to the MIST task order to be relevant.  Id. 

The agency’s judgment in this regard is unobjectionable.  As noted above, an agency has great discretion in determining the relevance of corporate experience, and we will not generally disturb an evaluation absent a clear demonstration that the assessments are unreasonable or inconsistent with the solicitation criteria.  SIMMEC Training Sols., B‑406819, Aug. 20, 2012, 2012 CPD ¶ 238 at 4.  In this case, the agency’s explanation is credible, and, while the protester is correct that 10 FTEs is smaller than 60 FTEs, it is not so different as to be unreasonable per se

Likewise, with respect to the scope criterion, the agency was reasonable in concluding case studies involving either legacy testing or modern testing were similar in scope to the current effort.  See AR, Tab 43, TEP Report at 2.  While the protester is correct that this effort will necessarily involve both types of testing, the criterion was applied to individual case studies and a vendor’s case studies may not individually have involved both types of testing.  Put another way, requiring all case studies to exhibit both legacy and modern testing would effectively exclude vendors with substantial experience performing both types of testing on separate efforts.  And while the protester is correct that such a metric might lead to an anomalous outcome if a vendor only demonstrated experience with legacy or modern testing that is not the case here.  The awardee’s case studies showed both meaningful, recent legacy testing experience and significant experience in modern testing environments.  See Id. at 13-14; AR, Tab 7, Sparksoft/ Skyward Phase One Corporate Experience Quotation generally.  In short, on the record before us, we see no reason to conclude the agency erred in judging the awardee’s case studies to be relevant.

Moreover, we do not agree with the protester that the agency’s relevance analysis transformed the corporate experience evaluation into a pass/fail assessment.  While the protester and intervenor received the same adjectival rating under corporate experience, the contemporaneous evaluation record shows the agency substantively evaluated each corporate experience case study.  See AR, Tab 43, TEP Report at 13‑14.  In response, the protester argues its own experience involves larger efforts and more legacy testing, and is therefore more similar in size and scope to the agency’s requirements. 

For example, the protester makes much of the fact that the agency estimated that approximately 90 percent of the initial workload on this effort involves testing of legacy systems, and contends that its extensive legacy testing experience should have distinguished it from the awardee.  See Comments and Second Supp. Protest at 12.  However, as the agency explains, the nature of the requirement involves moving away from legacy testing and the quantity of legacy testing should decrease sharply over the course of the effort.  Supp. MOL at 8.  Accordingly, it is not unreasonable for the agency to conclude the protester’s more extensive legacy testing experience was not an advantage that would distinguish the two quotations.  In short, the protester simply disagrees with the agency’s evaluation judgments in this regard, and a protestor’s disagreement with the agency’s judgment, by itself, is not sufficient to establish that an agency acted unreasonably.  Hughes Network Sys., LLC, B-409666.5, B-409666.6, Jan. 15, 2015, 2015 CPD ¶ 42 at 6.

Lastly, the protester argues the agency erred in applying its size criterion.  The protester argues the awardee’s second case study involved [DELETED] FTEs, and [DELETED] of those FTEs were management and program support staff.  Therefore, according to the protester, the case study only involved [DELETED] FTEs dedicated to testing, and therefore did not meet the agency’s 10 FTE criterion.  While the protester is correct that [DELETED] of the [DELETED] FTEs in the case study were managers or support staff, the case study as a whole involved performing software testing.  See AR, Tab 7, Sparksoft/Skyward Phase One Corporate Experience Quotation at 5-6.  That is to say, it is not clear that the managers and program support for a testing effort should be categorically excluded from the count of FTEs dedicated to testing. 

More significantly, the quotation specifically denominates one of the management staff as a “Test Manager.”  See Intervenor’s Comments on Second Supp. AR at 4; AR, Tab 7, Sparksoft/Skyward Phase One Corporate Experience Quotation at 5.  Therefore, even if we assume the protester is correct that some of the [DELETED] managerial or support staff are not appropriately considered to be dedicated to testing in the sense the agency contemplated, it would have been clearly irrational for the agency to exclude the test manager from the count of FTEs dedicated to testing.  Accordingly, because, at minimum, the test manager must be included, this case study involves at least 10 FTEs dedicated to testing, which satisfies the agency’s size criterion.[3]  Accordingly, this protest ground is without merit.

Demonstration Exercises and PWS/QASP

Next, the protester contends the agency erred in evaluating the demonstration exercises and PWS/QASP of both the protester and the awardee.  Specifically, the protester contends the agency:  applied unstated evaluation criteria to the protester’s quotation, failed to reject an inappropriate assumption in the awardee’s quotation, and disparately evaluated similar aspects of the two quotations.  See First Supp. Protest at 48-68.

  Unstated Evaluation Criteria

First, DCCA alleges the agency applied unstated evaluation criteria when it downgraded one of DCCA’s demonstration exercises for failing to demonstrate agile maturity by, among other things, failing to include an agile release checklist or a process step for process improvement.  Id. at 54-68; Comments and Second Supp. Protest at 23-25.  The protester contends nothing in the RFQ referenced “agile maturity” or otherwise required such a checklist or process improvement step, and the agency was, in effect, comparing the protester’s quotation to the awardee’s quotation rather than the evaluation criterion.  Id.  The protester also contends its quotation adequately addressed the RFQ’s actual requirements, and these areas of lowered confidence represent disparate treatment of quotations because the awardee’s quotation also did not meaningfully address these points.  Id.

Where a protester challenges the evaluation as unfairly utilizing unstated evaluation criteria, our Office will assess whether the solicitation reasonably informs vendors of the basis for the evaluation.  Raytheon Co., B-403110.3, Apr. 26, 2011, 2011 CPD ¶ 96 at 5.  In that regard, procuring agencies are not required to list as stated evaluation criteria every area that may be taken into account; rather, it is sufficient that the areas considered in the evaluation be reasonably related to or encompassed by the stated criteria.  Id.

In this regard, the instructions for the demonstration exercise provided “[c]ode development and deployment in the cloud environment is governed by the Agile Sprint release schedule[,]” and advised vendors to “[u]se your knowledge” concerning Agile Sprint release management to plan and prepare code testing in the relevant environment.  AR, Tab 14, Instructions for Business Case Scenario Two at 1.  Further, the instructions provided that the response should include an “automation framework and work flows including People, Product, and Process.”  Id.

Here, the instructions were clear that the demonstration exercise involved the use of agile development methodology and that vendors should clearly describe their workflows and processes.  The agency contends a release checklist and process improvement are inherent parts of a mature agile process, and therefore were reasonably encompassed by the terms of the solicitation.  MOL at 28-29; Supp. MOL at 19-20.  We see no basis to conclude the agency’s evaluation is unreasonable in this regard. 

While the protester disagrees with the agency’s view of what is “inherent” in a mature agile development process, the solicitation was clear that the agency intended to evaluate vendors based on their agile processes.  It seems entirely reasonable that a description of a vendor’s processes relating to an “Agile Sprint release schedule” would include a checklist or some other description of the steps a vendor intends to take to seek approval prior to release.  This is especially so when the digital services playbook incorporated by reference in the RFQ and SOO included such agile checklists.  RFQ at 56; SOO at 26.  Similarly, it seems unobjectionable that the description of a vendor’s work flows and processes should include some discussion of process improvement.  In short, we believe the agency’s assessments of these matters were reasonably encompassed by the solicitation’s requirements. 

Moreover, contrary to the protester’s contention, these points represent real differences between the two quotations.  The protester’s demonstration exercise addressed neither a pre-release checklist nor process improvement.  By contrast, the awardee’s demonstration exercise addressed both points.  See, e.g., AR, Tab 17.d, Sparksoft/Skyward Demonstration Exercise Two at 1-2 (“[DELETED]”).  Accordingly, we cannot conclude the agency’s evaluation was unreasonable, and this protest ground is without merit.

  Demonstration Assumption

The protester next alleges the agency erred by uncritically accepting one of the awardee’s assumptions in the first demonstration exercise.  Comments and Second Supp. Protest at 15-18.  Specifically, the awardee assumed it had already migrated testing to an agile methodology.  Id.  According to the protester, this assumption is inappropriate because the first demonstration exercise involved testing of systems that currently primarily use a waterfall testing methodology, not an agile one.  Id.  Furthermore, the migration to an agile testing framework is part of the work contemplated by the SOO under this contract, so the awardee should not have been permitted to, in effect, assume it away.  Id.

In this regard, the SOO clearly contemplates that the current testing environment includes both waterfall and agile components, and that one of the principal goals of the MIST contract is to modernize this testing environment and to increase the amount of testing developed in an agile manner.  See SOO at 12.  For example, the SOO notes vendors must “[u]se Test Management tools, methodologies (waterfall, agile, and hybrid), and processes that are currently in place for IST.  However, the MIST contractor should investigate and recommend modern and improved solution as required.”  Id.

However, the SOO also notes in its assumptions and constraints that, in performing the effort, “[t]he Contractor will use an agile testing approach, a flexible methodology and automated processes as much as possible.”  Id. at 23.  Moreover, the relevant demonstration exercise instructions did not specify what, if any, methodology should be employed, but rather contemplated that vendors were “expected to explain your testing automation framework and workflows including People, Product, and Process.”  See AR, Tab 12, Instructions for Business Case Scenario One at 1.

The agency argues that the evaluators saw no problem with the assumption because the implementation of agile testing environments is a key goal of this effort, and the awardee’s demonstration exercise showed a good command of agile methodology applied to the problem posed in the exercise.  Supp. MOL at 13-14.  While this contract as a whole involves migration from legacy to modern testing methodologies, the demonstration exercise scenario itself neither discussed nor required vendors to demonstrate such a migration, and the awardee’s assumption does not violate any of the constraints imposed by the scenario instructions.  While reasonable minds can differ as to whether the awardee’s assumption merited reduced confidence, we cannot conclude on the record before us that the agency’s acceptance of the assumption was unreasonable. 

  Disparate Treatment

Finally, the protester alleges the agency disparately evaluated quotations in numerous respects.  Comments and Second Supp. Protest at 25-50.  For example, the protester notes the agency downgraded its quotation for failing to specify a definition of done or acceptance criteria.  Id. at 44-47.  However, the protester contends the awardee similarly did not include a definition of done, but received no negative findings.  Id.  As a second example, the agency downgraded the protester’s quotation for failing to address vulnerability testing or cross-browser testing.  Id. at 49-50.  However, the protester argues the awardee likewise did not address those features.  Id.

It is a fundamental principle of federal procurement law that a contracting agency must treat all competitors equally and evaluate their submissions evenhandedly against the solicitation’s requirements and evaluation criteria.  Rockwell Elec. Commerce Corp., B‑286201 et al., Dec. 14, 2000, 2001 CPD ¶ 65 at 5.  However, when a protester alleges unequal treatment in a technical evaluation, it must show that the differences in the evaluation did not stem from differences between the quotations or proposals.  IndraSoft, Inc., B‑414026, B-414026.2, Jan. 23, 2017, 2017 CPD ¶ 30 at 10; Paragon Sys., Inc.; SecTek, Inc., B-409066.2, B-409066.3, June 4, 2014, 2014 CPD ¶ 169 at 8‑9.  Accordingly, to prevail on an allegation of disparate treatment, a protester must show that the agency unreasonably downgraded its proposal for deficiencies that were substantively indistinguishable from, or nearly identical to, those contained in other proposals.  Office Design Group v. United States, 951 F.3d 1366, 1372 (Fed. Cir. 2020); Battelle Memorial Inst., B-418047.3, B-418047.4, May 18, 2020, 2020 CPD ¶ 176 at 5. 

While the standard to establish disparate treatment is high, in this case it appears the agency engaged in inappropriate disparate treatment in these two respects.[4]  First, the agency negatively evaluated the protester’s demonstration exercise because it did not specify a definition of done or acceptance criteria.  AR, Tab 43, TEP Report at 19.  While the agency, in several pleadings, appears to concede the protester did in fact include acceptance criteria, it also argues that such criteria are not equivalent to a definition of done and that the negative finding is justified on that basis.  See, e.g., MOL at 23. 

The problem with the agency’s position, however, is that the awardee’s quotation appears to include acceptance criteria, but also does not appear to explicitly specify a definition of done.  In subsequent pleadings, the agency claims that, for the awardee at least, providing acceptance criteria was tantamount to a definition of done.  See Supp. MOL at 30.  This disconnect is made starker by the fact that the protester’s demonstration exercise included notes describing testing outcomes that appear substantively identical to some of the sample acceptance criteria offered by the awardee.  See Comments and Third Supp. Protest at 33-34 (citing AR, Tab 17.c, Sparksoft/Skyward Business Case Scenario One at 5; AR, Tab 16.h, DCCA Business Case Scenario One at 3).  In short, the agency’s positions are inconsistent, and it is unclear from the contemporaneous record on what basis the agency is distinguishing between the two quotations in this regard.  Accordingly, we conclude that this negative evaluation finding represents inappropriate disparate treatment. 

Similarly, another aspect of the protester’s quotation that the agency negatively evaluated was the protester’s failure to address vulnerability testing and cross-browser testing.  AR, Tab 43, TEP Report at 19.  The awardee did not receive a similar area of lowered confidence.  With respect to vulnerability testing, the two quotations appear to be meaningfully distinct, but with respect to cross-browser testing, the quotations are not substantively distinguishable. 

As to vulnerability testing, while both the protester and the awardee include discussions of security in a general way in their quotations, the awardee specifically discusses [DELETED] for its test infrastructure.  See AR, Tab 17.d, Sparksoft/Skyward Demonstration Exercise Two at 3.  Moreover, the awardee’s discussion of its approach to security includes more detail than the protester’s quotation.  Compare AR, Tab 17.d, Sparksoft/Skyward Demonstration Exercise Two at 2-4 with AR, Tab 16.i, DCCA Demonstration Exercise Two at 2.  Accordingly, the quotations appear to be substantively distinguishable in this regard, so we cannot conclude that the agency erred in treating the two quotations differently. 

However, neither the protester nor the awardee specifically addressed cross-browser testing in any way.  The protester points to general language in its quotation concerning graphical user interface (GUI) and application programming interface (API) testing that it believes encompassed cross-browser testing, but the agency evaluators specifically noted that “[i]t cannot be inferred” that cross-browser testing is part of GUI/API testing.  AR, Tab 43, TEP Report at 19.  Yet the language in the awardee’s quotation that the agency now claims encompasses cross-browser testing (“code quality, system integration, compliance, and security analysis”) is equally vague and could refer to almost any software quality assurance process.  See Supp. MOL at 33.  Moreover, this explanation is not present in the contemporaneous record.  Accordingly, the agency disparately evaluated the two quotations in this regard by reading the protester’s quotation narrowly and the awardee’s quotation expansively.

Nevertheless, competitive prejudice is an essential element to every viable protest, and where an agency’s improper actions did not affect the protester’s chances of receiving award, there is no basis for sustaining the protest.  See, e.g., American Cybernetic Corp., B-310551.2, Feb. 1, 2008, 2008 CPD ¶ 40 at 2-3.  In this case, these errors were unlikely to have had an effect on the agency’s best-value tradeoff decision.

Where, as here, an agency selects a higher-priced quotation that has been rated technically superior to a lower-priced one, the award decision must be supported by a rational explanation demonstrating the higher-rated quotation is in fact superior, and explaining why its technical superiority warrants the additional cost.  e-LYNXX Corp., B‑292761, Dec. 3, 2003, 2003 CPD ¶ 219 at 7.  Such judgments are by their nature often subjective; nevertheless, the exercise of these evaluation judgments must be reasonable and bear a rational relationship to the announced criteria upon which competing offers are to be selected.  Hydraudyne Sys. and Eng’g B.V., B-241236, B‑241236.2, Jan. 30, 1991, 91-1 CPD ¶ 88 at 4.

In making the tradeoff decision, the source selection authority (SSA) found the two quotations had roughly equal merit with respect to corporate experience, but that the awardee’s quotation was superior with respect to both the PWS/QASP factor and the demonstration exercise factor.  AR, Tab 44, SSD at 19.  The SSA concluded the awardee’s technical superiority with respect to the PWS/QASP and demonstration exercises justified paying a three percent price premium over the protester’s lower-priced quotation.

In this case, both of the agency’s errors involve the protester’s demonstration exercises.  However, the protester had four positive and five negative findings concerning its demonstration exercises, while the awardee had eleven positive findings and zero negative findings.  Additionally, in discussing the negative findings related to the protester’s demonstration exercises, the SSA did not refer to acceptance criteria or cross-browser testing, but rather focused on the fact that the protester’s quotation “lacked specificity with how automated testing will be completed, missed Vulnerability testing details, and included no agile checklist.”  AR, Tab 44, SSD at 19.

On the record before us, it seems highly unlikely that removing two of the five negative findings from the protester’s evaluation would alter the protester’s rating of some confidence or otherwise affect the protester’s competitive standing given the uniformly positive nature of the awardee’s evaluation.  Moreover, the removal of these negative findings would only affect the demonstration exercise factor, and would not impact the awardee’s advantage in the PWS/QASP evaluation factor.  See Med Optical, B‑296231.2, B‑296231.3, Sept. 7, 2005, 2005 CPD ¶ 169 at 4 (“Our Office will not sustain a protest unless the protester demonstrates a reasonable possibility that it was prejudiced by the agency’s actions, that is, unless the protester demonstrates that, but for the agency’s actions, it would have had a substantial chance of receiving the award.”). 

In short, on the record before us, even if these two negative findings were removed, the record strongly suggests that the SSA would have reached the same conclusion.  Accordingly, we conclude the protester was not competitively prejudiced by these errors, and the agency’s award decision was otherwise reasonable and consistent with the solicitation.

The protest is denied.

Edda Emmanuelli Perez
General Counsel



[1] Section 508 of the Rehabilitation Act of 1973, as amended, requires federal agencies to ensure that their electronic and information technology (EIT) provides comparable access to people with and without disabilities whenever an agency develops, procures, maintains, or uses EIT.  Visual Connections, LLC, B-407625, Dec. 31, 2012, 2013 CPD ¶ 18 at 1. 

[2] The protester raises other arguments that are not specifically addressed in this decision.  We have considered all of them and conclude that they provide no basis to sustain the protest.  For example, the protester alleges that the agency effectively “double-counted” positive features of the awardee’s quotation, including two nearly identical positive findings related to identifying or documenting the steps for the awardee’s agile process.  See Comments and Second Supp. Protest at 20-22.  In this regard, the agency explains the apparently duplicative findings represent separate findings concerning each of the awardee’s two demonstration exercises, and that all findings relate to separate positive features of the awardee’s quotation.  Supp. MOL at 15-16.  In response, the protester contends the agency’s argument relies on post hoc information not included in the contemporaneous evaluation record and should be disregarded.  Comments and Third Supp. Protest at 19-20. 

While we generally give little or no weight to reevaluations and judgments prepared in the heat of the adversarial process, post-protest explanations that provide a detailed rationale for contemporaneous conclusions, and simply fill in previously unrecorded details, will generally be considered in our review of the rationality of selection decisions--so long as those explanations are credible and consistent with the contemporaneous record.  See Remington Arms Co., Inc., B-297374, B-297374.2, Jan. 12, 2006, 2006 CPD ¶ 32 at 12; Boeing Sikorsky Aircraft Support, B-277263.2, B‑277263.3, Sept. 29, 1997, 97-2 CPD ¶ 91 at 15.

In this case, the contemporaneous evaluation record, in most cases, specified the demonstration exercise to which individual evaluation findings referred.  See AR, Tab 43, Technical Evaluation Panel (TEP) Report at 20.  However, a handful of findings did not indicate a specific demonstration exercise, leaving it ambiguous in the contemporaneous record which demonstration exercise was being described.  Id.  In our view, the agency’s post hoc explanation merely supplies this missing information, and appears credible and otherwise consistent with the record.  Moreover, the added detail makes clear that the evaluation did not contain duplicative positive findings; rather, the findings at issue were either similar positive findings concerning each of the awardee’s two demonstration exercises or substantively different findings about the same exercise.

[3] The protester additionally argues that the case study also fails to meet the 5,000 test cases per year portion of the agency’s size criterion.  See Comments and Third Supp. Protest at 8-9.  However, the agency’s criterion was framed in the alternative:  a case study was similar in size if it involved either 10 FTEs dedicated to testing or 5,000 test cases per year.  Therefore, because we conclude above that the case study involved at least 10 FTEs dedicated to testing, we need not reach the question of whether it also involved 5,000 test cases per year.

[4] The protester alleges several other areas of disparate treatment.  We have considered all of these arguments and concluded that none of them provide a basis to sustain the protest.
