
[Protest of EPA Contract Award for Technical Services]

B-274943.3 Mar 05, 1997

Highlights

A firm protested the award of an Environmental Protection Agency (EPA) contract for technical services, contending that: (1) EPA's consensus scores under certain evaluation subfactors were arbitrary and not rationally related to the merits of the offerors' proposals; (2) EPA did not offer a rationale or explanation for the difference between the consensus scores and the individual evaluators' scores; and (3) the awardee failed to provide past performance project summary sheets for all of the projects it discussed in its best and final offer (BAFO) oral presentation. GAO held that: (1) EPA's consensus scores accurately reflected the relative merits of both proposals, since EPA evaluators discussed the relative strengths and weaknesses of the proposals to reach a consensus rating and clear up any misconceptions or mistakes that may have occurred in the initial evaluation; (2) there was no credible evidence that the consensus evaluation was unreasonable; and (3) the awardee was not required to submit past performance project summary sheets in its BAFO oral presentation. Accordingly, the protest was denied.


Matter of: Resource Applications, Inc. File: B-274943.3 Date: March 5, 1997 * Redacted Decision


DECISION

Resource Applications, Inc. (RAI) protests the award of a Regional Oversight Contract (ROC) to TechLaw, Inc. by the Environmental Protection Agency (EPA) under request for proposals (RFP) No. W500823G3, for technical services.

We deny the protest.

The RFP, a total small business set-aside, contemplated the award of a cost-plus-fixed-fee contract for technical services to support EPA's Federal Facilities Revitalization and Reutilization Office in its mission of oversight and enforcement of the Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA), and the Resource Conservation and Recovery Act (RCRA) at federal facilities in EPA Zone 3. [1] The RFP contained a detailed statement of work and set forth EPA's best estimate of the level-of-effort hours that would be needed. Work to be performed under the ROC will be ordered through written work assignments issued by the contracting officer. The ROC is for a base period of 2 years with 3 option years as well as options to increase the level of effort for each period of performance.

The RFP stated that technical quality was more important than cost, but that as proposals became more technically equal, evaluated cost would become more important. The RFP established three technical evaluation factors worth a maximum score of 570 points: three sample regional response scenarios (one for each region within EPA Zone 3, including Region V) worth 75 points apiece, for a total of 225 points; a labor mix matrix worth a maximum of 105 points; and past performance worth a maximum of 240 points. The past performance factor was broken down into 14 subfactors grouped in three categories: RCRA compliance, CERCLA assessment, and remedial activities support. The RCRA compliance category included a technical review subfactor, and the remedial activities support category included a removal activities subfactor and a remedial action activities subfactor.

EPA charged a technical evaluation panel (TEP) with evaluating each offeror's proposal on a scale of 0 (deficient) to 5 (superior in most features) for each factor and subfactor. These scores were then to be weighted according to the RFP's evaluation scheme. In addition, the score for each past performance subfactor was to be multiplied by a level of confidence assessment rating (LOCAR) to determine the final score for the past performance factor. The LOCARs reflected the government's degree of confidence that the offeror would keep the promises made in its proposal and were to be derived based on past performance information obtained by the contracting officer from references listed in past performance project summary sheets submitted by each offeror.
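To illustrate the arithmetic this scheme describes, the following is a minimal sketch in Python. Only the factor maximums (225, 105, and 240 points), the 0-to-5 rating scale, the 14 past performance subfactors, and the multiplication of subfactor scores by LOCARs come from the decision's description of the RFP; the particular weights, raw scores, and LOCAR values shown, and the assumption that subfactors within a factor are weighted equally, are hypothetical.

```python
# Hedged sketch of the RFP's scoring arithmetic. Equal weighting of the
# subfactors within each factor is an assumption; only the factor maximums
# and the score-times-LOCAR rule come from the decision's description.

SCENARIO_WEIGHT = 75 / 5    # each of 3 scenarios worth 75 points (225 total)
LABOR_MIX_WEIGHT = 105 / 5  # labor mix matrix worth a maximum of 105 points

def scenario_score(raw_scores):
    """Weighted total for the three regional response scenarios (0-5 each)."""
    return sum(raw * SCENARIO_WEIGHT for raw in raw_scores)

def past_performance_score(raw_scores, locars, max_points=240):
    """Each 0-5 subfactor score is multiplied by its LOCAR (the level of
    confidence assessment rating derived from reference checks) before
    weighting; the 14 subfactors share the 240-point maximum."""
    weight = max_points / (5 * len(raw_scores))
    return sum(raw * locar * weight for raw, locar in zip(raw_scores, locars))

# Hypothetical offeror: a raw score of 4 on every element and a uniform
# LOCAR of 0.9 across all 14 past performance subfactors.
total = (scenario_score([4, 4, 4])                         # 180 points
         + 4 * LABOR_MIX_WEIGHT                            # 84 points
         + past_performance_score([4] * 14, [0.9] * 14))   # 172.8 points
print(total)  # 436.8 of the 570-point maximum
```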

The RFP required offerors to submit written technical and cost proposals, and also to present oral technical proposals, which would be videotaped by EPA for evaluation by the TEP. [2] During the oral presentations, each offeror was to address the three sample regional response scenarios specified in the RFP, its proposed labor mix, and project summaries for past performance. Offerors were required to submit, for each project to be referred to in the oral presentation, a written past performance project summary sheet identifying the client, a description of the project, and a point of contact.

TechLaw and RAI submitted proposals in response to the RFP. Besides their written technical and cost proposals, both offerors submitted past performance project summary sheets and later gave oral presentations to EPA. Soon after the videotaping, EPA misplaced the videotape of TechLaw's oral presentation, and invited TechLaw to repeat its oral technical presentation. TechLaw did so, and its repeat presentation was videotaped by EPA for evaluation by the TEP. [3]

Each member of the TEP then individually reviewed the written proposals and the videotapes of the offerors' oral presentations, and evaluated and scored the proposals in accordance with the RFP's evaluation scheme. In preparation for consensus discussions among the TEP members, each member sent a computer file to the TEP chairperson listing, for each evaluation factor and subfactor, that evaluator's score for each proposal and what that evaluator considered the proposal's strengths and weaknesses. The TEP chairperson then compiled these files in order to summarize the individually assessed scores, strengths, and weaknesses for the consensus discussions. The members of the TEP later met for consensus discussions on the initial proposals, agreed on consensus comments regarding the strengths and weaknesses of each proposal for each factor and subfactor, and assigned numerical consensus scores.

To obtain the information to develop the LOCARs to apply to the TEP's past performance scores, the contracting officer and contract specialist contacted, by telephone, the references provided by the offerors in their past performance project summary sheets.

At the conclusion of the TEP's evaluation of technical proposals and the contracting officer's application of the LOCARs, TechLaw had the highest overall technical score with 417.9 points, including 201.9 points for past performance. RAI's overall technical score was 310.2 points, including 139.2 points for past performance.

The contracting officer included both proposals in the competitive range. EPA conducted discussions with both offerors, including questions about aspects of their past performance, and requested and obtained best and final offers (BAFO). Both offerors also made BAFO oral presentations, which were videotaped by EPA. The record shows that TechLaw responded to EPA's questions about certain areas of its past performance in its BAFO oral presentation, but did not include additional past performance project summary sheets with its BAFO.

Following the submission of TechLaw's and RAI's BAFOs, the TEP reconvened, and each TEP member again individually evaluated the offerors' BAFO written technical proposals and videotaped BAFO oral presentations. The TEP then held consensus discussions as a group, and arrived at a consensus evaluation and score for each factor and subfactor addressed in the offerors' BAFOs. As a result of improvements the offerors made to their proposals in their respective BAFOs, EPA increased TechLaw's overall technical score to 469.2 points, including 211.2 points for past performance, and raised RAI's overall technical score to 394.5 points, including 181.5 points for past performance. Overall, the TEP considered TechLaw's proposal to be technically superior to RAI's, as indicated by the point spread.

In performing the required cost realism analysis, the contracting officer adjusted upward TechLaw's proposed cost-plus-fixed-fee of $13,415,879 to $13,789,572 and RAI's proposed cost-plus-fixed-fee of $11,024,804 to $12,763,298, in order to more accurately reflect the quality of the labor the offerors had proposed. The contracting officer determined that TechLaw's superior technical proposal outweighed RAI's lower evaluated cost, and awarded the ROC to TechLaw as the offeror whose proposal was most advantageous to the government.

RAI first protests that the TEP's consensus scores for certain subfactors were arbitrary and not rationally related to the merits of TechLaw's and RAI's proposals, because the consensus scores do not reflect or account for the individual evaluators' scores for those subfactors. For example, with regard to the TEP's evaluation of the Region V scenario subfactor under the regional response scenarios factor, RAI contends that TechLaw's consensus score of 5 (superior in most features) is inconsistent with the individual evaluator scores, which ranged from a low of 3 (adequate) to a high of 4 (good with some superior features). Similarly, RAI points out that TechLaw's proposal received a maximum consensus score of 5 for the removal activities subfactor of the remedial activities support category of the past performance factor, even though no individual evaluator gave TechLaw a score greater than 4 for that subfactor.

RAI alleges that the reverse problem--a lower consensus score than the individual scores--existed in the TEP's initial consensus scoring of RAI's proposal. Specifically, for the technical review subfactor under the RCRA compliance category of the past performance factor, RAI received an initial consensus score of 0 (totally deficient), but no evaluator gave RAI an individual score less than 1 (contains significant deficiencies) and two evaluators gave RAI a score of 4 for the subfactor. Likewise, for the remedial action activities subfactor under the remedial activities support category of the past performance factor, RAI received an initial consensus score of 1, yet no evaluator gave RAI a score less than 2 (clarification required before final scoring), and one evaluator gave RAI a top score of 5.

RAI further contends that the TEP's consensus evaluation does not offer a rationale or explanation for the difference between the consensus scores and the individual evaluator scores. In this regard, the protester points out that the strengths and weaknesses listed on the consensus evaluation report were lifted almost verbatim from the TEP chairperson's compilation of the individual evaluators' score sheets for the initial evaluation. The protester questions why the same strengths and weaknesses listed by the evaluators produced higher scores for TechLaw and lower scores for RAI in the consensus evaluation than when those same strengths and weaknesses served as the basis for the individual scores.

There is nothing inherently objectionable in an agency's decision to develop a consensus rating. Appalachian Council, Inc., B-256179, May 20, 1994, 94-1 CPD Para. 319. The fact that the evaluators individually rated TechLaw's proposal for the Region V scenario and removal activities subfactors less favorably than they did on a consensus basis for those subfactors, and individually rated RAI's proposal for the technical review and remedial action activities subfactors more favorably than they did on a consensus basis for those subfactors, does not, by itself, warrant questioning the final evaluation results. See Syscon Servs., Inc., 68 Comp.Gen. 698 (1989), 89-2 CPD Para. 258; Dragon Servs., Inc., B-255354, Feb. 25, 1994, 94-1 CPD Para. 151. Agency evaluators may discuss the relative strengths and weaknesses of proposals in order to reach a consensus rating, which often differs from the ratings given by individual evaluators, since such discussions generally operate to correct mistakes or misperceptions that may have occurred in the initial evaluation. Schweizer Aircraft Corp., B-248640.2; B-248640.3, Sept. 14, 1992, 92-2 CPD Para. 200; The Cadmus Group, Inc., B-241372.3, Sept. 25, 1991, 91-2 CPD Para. 271. Thus, a consensus score need not be the score that a majority of the evaluators initially awarded--the score may properly be determined after discussions among the evaluators. GZA Remediation, Inc., B-272386, Oct. 3, 1996, 96-2 CPD Para. 155 (note 3). In short, the overriding concern in the evaluation process is that the final score assigned accurately reflect the actual merits of the proposals, not that it be mechanically traceable back to the scores initially given by the individual evaluators. Id.; Dragon Servs., Inc., supra.

Here, the record shows that the TEP consensus report reasonably reconciles the differences of opinion among the evaluators and accurately reflects the relative qualities of TechLaw's and RAI's proposals.

Specifically, while the protester correctly asserts that the strengths and weaknesses listed on the consensus evaluation sheets for TechLaw's proposal for the Region V scenario and the removal activities subfactors were derived from the strengths and weaknesses listed on the individual evaluation sheets as compiled by the TEP chairperson, the protester is incorrect in asserting that the TEP then merely assigned a higher numerical score for these same listed strengths and weaknesses. Our review indicates that the TEP's consensus evaluation sheets articulate, separately from the listed strengths and weaknesses derived from the individual score sheets, the TEP's rationale for its consensus scores by listing the strengths the TEP found most pertinent in TechLaw's proposal for these subfactors. For example, as stated on the TEP's consensus evaluation sheet for the Region V scenario, "[t]he technical panel found" that TechLaw's "presentation of the scenario was comprehensive, thorough and addressed all the important elements required and therefore . . . superior in most features." For the removal activities subfactor, the TEP's consensus evaluation sheet stated the reasons why "the panel" determined that TechLaw's proposal element was rated superior in most features for this subfactor. Since the protester does not explain why TechLaw's proposal for the Region V scenario and the removal activities subfactors was not "superior in most features," as documented by the TEP, but simply disagrees with the consensus rating reached by the agency's evaluators, we have no basis upon which to find that the agency's evaluation was unreasonable. See Dragon Servs., Inc., supra.

With regard to the evaluation of RAI's proposal for the technical review and remedial action activities subfactors, we think the TEP also sufficiently articulated the reasons why it reached its initial consensus scores for these subfactors. The TEP determined that for the technical review subfactor, RAI "did not address any of the criteria listed under the technical review task for RCRA compliance." For the remedial action activities subfactor, the TEP stated that "[b]ecause only one of the five project summaries provided [by RAI] was relevant to remedial action activities, the panel determined that the past performance did not demonstrate the ability to perform this task and therefore, this element is deficient." In any event, the differences between the individual evaluators' scores and the initial consensus evaluation of RAI's technical proposal for the technical review and remedial action activities subfactors are not relevant here, because RAI was afforded, and took advantage of, the opportunity to address in its BAFO the weaknesses identified during the initial evaluation, and the evaluators, with general unanimity, found RAI's BAFO responses to be adequate for both subfactors. RAI does not challenge the BAFO consensus scores for these subfactors.

RAI also protests that because TechLaw failed to provide past performance project summary sheets for all of the projects TechLaw discussed in its BAFO oral presentation, EPA improperly increased TechLaw's past performance score and failed to adjust TechLaw's LOCAR scores. However, the RFP only required the submission of past performance project summary sheets as part of the offerors' initial submission, which TechLaw provided. Moreover, the record shows that TechLaw had submitted past performance summary sheets with its initial proposal for the bulk of the projects it referred to during its BAFO oral presentation. There is no evidence that the TEP required such sheets for the few additional projects referred to in TechLaw's BAFO oral presentation to reasonably evaluate that offeror's past performance. We also note that TechLaw's LOCAR scores were not adjusted as a result of TechLaw's BAFO because TechLaw had already received good or excellent LOCAR scores for each subfactor, i.e., most of the references contacted had given TechLaw a rating of good with superior features for its performance, or a significant majority of the references contacted had given TechLaw a superior rating for its performance; the record evidences that the few additional projects referred to in TechLaw's BAFO oral presentation would not have affected this rating. Under the circumstances, we find unobjectionable EPA's evaluation of TechLaw's BAFO oral presentation.

The protest is denied.

Comptroller General of the United States

1. EPA Zone 3 encompasses EPA Region V (Illinois, Indiana, Michigan, Minnesota, Ohio, Wisconsin), Region VI (Arkansas, Louisiana, New Mexico, Oklahoma, Texas), and Region VII (Iowa, Kansas, Missouri, Nebraska).

2. EPA states that this procurement was EPA's first use of oral presentations in lieu of substantial written technical proposals as a means of streamlining the acquisition process. Offerors were given a time limit in which to make their presentations, and the videotaping took place at an EPA facility. EPA reports that the only individuals present during the videotaping, besides the offeror's team, were the contracting officer, the contract specialist, and the cameraman. No member of the TEP was present, and none of the EPA officials asked questions during the presentations. The contracting officer informed the TEP that the offerors' presentations were not to be evaluated on the basis of delivery style, but strictly on the basis of technical content. According to the contracting officer, the offerors' representatives essentially read from a prepared text in making their presentations.

3. RAI protests that TechLaw had an unfair competitive advantage because EPA provided TechLaw with this second opportunity to present--and thus improve--its initial oral presentation. However, according to EPA, the protester was aware of the basis for this allegation at least as early as October 29, 1996, when the protester mentioned the repeat videotaping of TechLaw's presentation to EPA. RAI has not rebutted EPA's assertion. Since RAI did not raise this protest ground until November 25, more than 10 calendar days after the basis of its protest was apparently known, it is untimely raised and will not be considered. Bid Protest Regulations, Sec. 21.2(a)(2), 61 Fed. Reg. 39039, 39043 (1996) (to be codified at 4 C.F.R. Sec. 21.2(a)(2)).

DOCUMENT FOR PUBLIC RELEASE A protected decision was issued on the date below and was subject to a GAO Protective Order. This version has been redacted or approved by the parties involved for public release.
