Streamlined Evaluations Under FAR Part 13

My favorite contracting article or blog post is Competitive Processes in Government Contracting: The FAR Part 15 Process Model and Process Inefficiency by Vern Edwards. It completely changed the way I think about setting up evaluations.

He provides examples and discussion on how to avoid FAR part 15 procedures for GSA orders, Simplified Acquisition Procedures (SAP), and Multiple Award Delivery and Task Order Contracts.

For SAP he writes

In short, calling an acquisition “simplified” does not make it so; it is not what you say you are doing that counts, it is what you actually do. Why did the Army prepare and issue an RFP, require competing hotels to prepare and submit written proposals, establish an evaluation board, and have that board prepare 85 pages of notes and an evaluation report? None of that is required by FAR Part 13.

A search of an American Automobile Club (AAA) website yielded a list of ten hotels in Butte, Montana. Why not have the MEPS commander prepare a checklist based on its requirements for food, lodging and transportation and then have one or two members of the detachment visit all ten hotels in Butte and complete a checklist for each? This could be done in one or two days. Why not then have the MEPS commander send to the contracting officer the names of three hotels that meet the MEPS’s needs and with which they would be willing to do business? The contracting officer could then contact the three hotel managers, fax or email a copy of the RFQ to each of them, and ask them to fax or email price quotes. Based on the price quotes, the contracting officer, in consultation with the MEPS commander, could pick the hotel that represents the best value and negotiate an agreement on contract terms and conditions. The entire contractor selection process and negotiation could have been completed in a week or two.

Read the entire post to understand the history and analysis, but he summarizes with some recommendations.

Some best practices in competitive process design include the following:

  • Limit the number of evaluation factors. It is the number of evaluation factors that determines the amount of information that must be obtained from competitors and processed by the government in order to reach a decision. Therefore, use no more evaluation factors than absolutely necessary and only those factors on which the differences among offerors are likely to be more than trivial.26

  • Limit the amount of proposal information required from competitors. As a general rule, and especially when acquiring services, do not make offerors write technical or management expositions (narratives) describing how they will do the work or how they will organize, or specially-prepared quality assurance and safety plans, etc. Such writings are time-consuming and costly to prepare and to evaluate, but they do not necessarily demonstrate a firm’s ability to perform, and usually are not binding in any meaningful sense. Moreover, incorporating such writings into a contract is inconsistent with performance-based service contracting policy. If agency technical personnel want to obtain first-hand insights into how well the competitors understand the work, require oral presentations instead of written technical or management proposals.27

  • Use the FAR Part 15 Process Model in source selections only when it is most effective. When conducting source selections under FAR Part 15, use the FAR Part 15 Process Model when five or fewer proposals are expected and when obtaining complete proposals from all offerors at the outset of the competition will save time.

  • Consider phased submission of information and proposal evaluation. When conducting source selections under FAR Part 15, and when there is a realistic likelihood of receiving more than five proposals, consider soliciting proposal information and evaluating proposals in phases. Alternatively, consider planning to use phased evaluation as a contingency procedure.28

  • Do not follow the FAR Part 15 Process Model when ordering from GSA schedules. When using GSA’s special ordering procedures for services that require a statement of work, do not follow the FAR Part 15 Process Model and do not use FAR Part 15 terminology (e.g., “competitive range” or “discussions”) or refer to FAR Part 15 in the RFQ. Non-Defense agencies should conduct market research, select at least three contractors to solicit based on experience, past performance and perhaps an informal interview, provide those firms with a statement of work by fax or email and ask for price quotes, then choose one firm with which to negotiate the terms of the order or blanket purchase agreement. Defense agencies complying with DFARS § 208.404-70 and expecting a large number of responses to their “fair notice of intent” should request limited information (experience, past performance and price quote) at the outset of the competition and progressively narrow the competitive field before asking for more detailed proposal information. Except in unusual circumstances, pick one contractor and then negotiate the details of the order or blanket purchase agreement one-on-one; do not establish a competitive range, negotiate with more than one firm at a time, or solicit revised proposals from more than one competitor.

  • Do not follow the FAR Part 15 Process Model when making simplified acquisitions. When using simplified acquisition procedures to buy complex supplies or services worth in excess of $25,000 and for which a synopsis must be published, and if planning to ask for more information than just a price quote, either: (a) select one firm for one-on-one negotiations based on experience, past performance and a price quote and then negotiate to agreement on details, or (b) narrow the competitive field of competitors on the basis of experience, past performance and a price quote before asking for more detailed proposal information, providing a specification or statement of work and clauses for price quote development by fax or email. Do not use FAR Part 15 terminology or refer to FAR Part 15 in the RFQ. Do not establish a competitive range, negotiate with more than one firm at a time, or solicit revised quotes or offers from more than one competitor.

  • Do not follow the FAR Part 15 Process Model when giving contractors under a multiple award delivery or task order contract a fair opportunity to be considered. Maintain a dossier on each contractor reflecting its performance under task orders and any special qualifications that it may have demonstrated during performance. When the time comes to issue an order, send a draft of the order to each contractor and ask for a price quotation. Choose one contractor for one-on-one negotiations based on its past performance, its price quote and, if appropriate, its special qualifications. Negotiate the details of the order with that contractor. If you cannot reach a satisfactory agreement, then go back and choose one of the other contractors for negotiation. Do not solicit complete proposals from every contractor, establish a competitive range, negotiate with more than one contractor at a time, or solicit revised quotes or offers from more than one contractor.

What are some examples of streamlined evaluations people have used under FAR part 13 (or GSA orders & Multiple Award Delivery and Task Order Contracts)?

3 Likes

Axel, love your pitch for putting simplicity back in the Simplified Acquisition process! For an organization that spends the majority of its time buying commercial items, this is an area that has the potential to yield huge time savings and reduce frustration for our mission partners. Expanded use of the GPC and Fair Opportunity procedures that do not mimic full-blown source selections are areas I think can be built upon even further. We should celebrate that simplicity and provide the rationale-level supporting analysis that helps us determine we are paying a fair and reasonable price (not necessarily the absolute lowest price) for the goods we need, without making it so complex as to remove any possible savings over the long term!

2 Likes

Yes! This should be mandatory reading along the lines of annual ancillary training.

Streamlining evaluations is easily done by selecting only meaningful discriminating factors. Most SAP buys can be done using experience, past performance, and price.

We really need to get new rules to support multiple award IDCs so that we avoid the artificial pricing that normally takes place (see meaningful above).

2 Likes

ERIE Strayer Company B-406131, Feb 21, 2012 provides a summary of how FAR part 13 acquisitions that look like FAR part 15 acquisitions will be treated. If you call it FAR part 13, but it looks like FAR part 15, then you have to follow the general rules of FAR part 15.

Simplified acquisition procedures are designed, among other things, to reduce administrative expenses, promote efficiency and economy in contracting, and avoid unnecessary burdens for agencies and contractors. FAR § 13.002; 41 U.S.C. § 3305 (Supp. IV 2010). When using these procedures, an agency must conduct the procurement consistent with a concern for fair and equitable competition and must evaluate proposals in accordance with the terms of the solicitation.

Our Office reviews allegations of improper agency actions in conducting simplified acquisitions to ensure that the procurements are conducted consistent with a concern for fair and equitable competition and with the terms of the solicitation. Russell Enters. of N. Carolina, Inc., B-292320, July 17, 2003, 2003 CPD ¶ 134 at 3. Although an agency is not required to conduct discussions under simplified acquisition procedures, where an agency avails itself of negotiated procurement procedures, the agency should fairly and reasonably treat offerors in the conduct of those procedures. See Kathryn Huddleston and Assocs., Ltd., B-289453, Mar. 11, 2002, 2002 CPD ¶ 57 at 6; Finlen Complex, Inc., B-288280, Oct. 10, 2001, 2001 CPD ¶ 167 at 8-10.

In this regard, FAR § 15.306 describes a range of exchanges that may take place when the agency decides to conduct exchanges with offerors during negotiated procurements. Clarifications are “limited exchanges” between an agency and an offeror for the purpose of eliminating minor uncertainties or irregularities in a proposal, and do not give an offeror the opportunity to revise or modify its proposal. FAR § 15.306(a)(2); Lockheed Martin Simulation, Training & Support, B-292836.8 et al., Nov. 24, 2004, 2005 CPD ¶ 27 at 8. Clarifications are not to be used to cure proposal deficiencies or material omissions, or materially alter the technical or cost elements of the proposal, or otherwise revise the proposal. eMind, B-289902, May 8, 2002, 2002 CPD ¶ 82 at 5. Discussions, on the other hand, occur when an agency communicates with an offeror for the purpose of obtaining information essential to determine the acceptability of a proposal, or provides the offeror with an opportunity to revise or modify its proposal in some material respect. Gulf Copper Ship Repair, Inc., B-293706.5, Sept. 10, 2004, 2005 CPD ¶ 108 at 6; see FAR § 15.306(d). When an agency conducts discussions with one offeror, it must conduct discussions with all other offerors in the competitive range. Gulf Copper Ship Repair, Inc., supra.

Do you typically have experience as pass/fail and trade off between past performance and price? Or do you trade off between all three?

It depends on the team’s notion of best value; I’d consider, in part, the supplies/services and market. Failing a small business under experience may require referral to the SBA (if they otherwise would be the apparent successful offeror) … some teams may want to avoid this for whatever reasons. Tradeoffs require more time and well written documentation.

Vern Edwards explains another method (neither pass/fail nor tradeoff) using pairwise comparisons here:

See also, the streamlined LOCAR method.

1 Like

Here are the details on pairwise comparisons from Vern Edwards:

I do not consider comparative evaluation to be synonymous with trade-offs.
I presume that your first post was referring to the following sentence in FAR 13.106-2(b)(3): “Contracting offices may conduct comparative evaluations of offers.” If so, then nobody really knows what the FAR councils meant by “comparative evaluations.” The sentence was added to FAR by FAC 97-03, 62 FR 64916, Dec. 9, 1997, without any explanation in the final rule or in the previous proposed and interim rules. You presume that it means that COs can evaluate offers directly, without using evaluation standards and ratings. Okay, let’s assume that is true. Here is how you do it:

  1. We’ll choose and define our evaluation factors. For this example I’ll choose experience, past performance, and price. We can define those things any way we like, and we’ll include our definitions in our solicitation so quoters will know what we’re looking for. We won’t use “standards.” Instead, we’re going to compare quoters directly, and we’re going to consider the factors to be equally important. We’re not going to make tradeoffs.
  2. Now we’ll issue our RFQ. We’ll check the quotes we receive for conformity with the solicitation. A quote is acceptable if it conforms to all material requirements of the RFQ, otherwise it’s unacceptable. We’ll evaluate only acceptable quotes.

Assume that we get four quotes, from companies A, B, C, and D. We’ll evaluate using a method that is sometimes called “pairwise comparisons,” and we’ll assume transitivity.

First, we’ll evaluate for experience. We’ll compare A’s description of its experience to B’s and decide, subjectively, which has the better experience by taking note of asserted facts, identifying differences, determining their significance to us, and documenting our conclusions. Let’s say we decide that A is better than B. We’ll then compare A to C. This time we think C is better than A. Since C is better than A and A is better than B, we assume that C is also better than B. We’ll then compare C to D. We decide that D is better than C. Since D is better than C and C is better than A and B, D is also better than A and B. So D is best on experience. Since there were four offerors, and since D is best, we’ll give D four points. Since C is better than A and B we’ll give C three points. Since A is better than B we’ll give A two points. Finally, we’ll give B one point.

Experience: D = 4, C = 3, A = 2, and B = 1.

Second, we’ll evaluate for past performance, using the same procedure as we did for experience. This time the result is as follows:

Past performance: D = 4, A = 3, B = 2, and C = 1.

Third, we now compare the four quoters’ prices. This is easy. The lowest price gets four points and the highest gets one point. The result is:

Price: B = 4, A = 3, C = 2, and D = 1.

Fourth, we total the points.

A = 8, B = 7, C = 6, and D = 9.

Fifth, D is best overall, so we award to D.

We evaluated on the basis of direct comparisons, based on subjective assessments and without standards. We made no tradeoffs, so we have none to document.

That’s one way to do it. There are other ways. Is it a good way to do it? That depends on what you’re buying and on your notions of value.

The hardest part is writing up the rationale for your subjective assessments. It takes some thinking and word-smithing. If you don’t know how to do that, then don’t try this method.

Do you want to weight the factors differently? If so, assign each a decimal weight, such as 0.5 for experience, 0.4 for past performance, and 0.1 for price, so that they add up to 1.0. Then multiply the points by the weights before totaling the points.
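To make the point tally concrete, here is a quick Python sketch of the scoring described above. The function names and structure are my own, not Vern's, and the pairwise comparisons themselves remain subjective judgments; this only turns each factor's worst-to-best ranking into points and totals them.

```python
# Sketch of the pairwise-comparison point tally (illustrative only).
# Comparisons are assumed transitive, so each factor reduces to a
# worst-to-best ranking before any arithmetic happens.

def score_factor(ranking):
    """Award 1 point to the worst quoter, up to N points for the best.
    `ranking` lists quoters from worst to best for one factor."""
    return {quoter: points for points, quoter in enumerate(ranking, start=1)}

def total_scores(factor_rankings, weights=None):
    """Sum per-factor points for each quoter, optionally weighted."""
    totals = {}
    for factor, ranking in factor_rankings.items():
        weight = weights[factor] if weights else 1
        for quoter, points in score_factor(ranking).items():
            totals[quoter] = totals.get(quoter, 0) + weight * points
    return totals

# The example from the post, each factor ranked worst to best:
rankings = {
    "experience":       ["B", "A", "C", "D"],  # D best on experience
    "past_performance": ["C", "B", "A", "D"],  # D best on past performance
    "price":            ["D", "C", "A", "B"],  # B has the lowest price
}

totals = total_scores(rankings)
print(totals)  # {'B': 7, 'A': 8, 'C': 6, 'D': 9} -> award to D
```

To weight the factors as in the previous paragraph, pass something like `weights={"experience": 0.5, "past_performance": 0.4, "price": 0.1}`; D still comes out best in this example.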

The trick to evaluating based on direct comparisons is to be able to write a sentence about each pairwise comparison on each factor. Suppose that you think A is better than B on the factor of experience. Here’s a simple test. Try writing a sentence that begins:

A is better than B because_____________________________________________________.

If you cannot finish that in a way that is consistent with the information that you have about the offerors and makes sense to an intelligent reader, then you’d better stop what you’re doing and think things through.

Interesting approach, I don’t think I’ve ever thought hard about the distinction between comparative evaluations and tradeoffs.

Where is a good place to read more about LOCAR?

http://www.wifcon.com/anal/analguest2.doc

1 Like

Nice reference! It even provides a practical example after the explanation.

TLDR version:

In overview, the LOCAR technique includes the following steps:

  1. First, the evaluators assess the promises in each of the competing offers based on the stated evaluation factors for award, and give each offer a numerical score on a scale of 0 to 100 points (or an adjectival rating) to summarize their assessment of the offer’s value. This score is called the promised value score.
  2. Second, the evaluators assess each offeror’s capability based on the factors stated in the RFP, including experience and past performance and any other factors that bear on the offeror’s ability to perform. The evaluators write descriptions of their assessments, but do not give numerical scores.
  3. Third, the evaluators meet and talk to reach a consensus about how much confidence they have that each offeror will keep its promises, based on their assessments of the offerors’ capabilities, and give each offeror a LOCAR, which is a numerical score on a decimal scale of between 0 and 1. (See Appendix A.) The higher the evaluators’ confidence in the offeror, the higher the rating. Thus, a LOCAR of .1 would indicate that the evaluators have very little confidence in an offeror; a LOCAR of .9 would indicate a very high level of confidence; and a LOCAR of .5 would indicate that the evaluators think the likelihood that the offeror will keep its promises is 50-50.
  4. Fourth, the evaluators multiply the promised value score by the LOCAR to produce the expected value score. For example, if an offeror has a promised value score of 90 points and a LOCAR of .8, its expected value score would be 72 (90 x .8 = 72); if an offeror has a promised value score of 100 and a LOCAR of .5, its expected value score would be 50.
  5. In the final step, the source selection authority compares the offerors to each other based on their expected values and prices in order to initially rank them from best to worst. He or she then compares the offerors in a series of pairs based on the detailed evaluation documentation, making tradeoffs as necessary in order to identify the offeror that represents the best value.
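The arithmetic in step 4 is simple enough to sketch in a few lines of Python. The names and example offerors below are mine, and the substantive part of LOCAR (the written assessments behind each score) is deliberately not modeled:

```python
# Minimal sketch of the LOCAR arithmetic from steps 1-4 above.
# The verbal assessments that justify each score are the real work;
# this only shows how the numbers combine.

def expected_value(promised_value, locar):
    """Discount a 0-100 promised value score by a 0-1 confidence rating."""
    if not (0 <= promised_value <= 100 and 0.0 <= locar <= 1.0):
        raise ValueError("promised value is 0-100, LOCAR is 0-1")
    return promised_value * locar

# The two worked examples from step 4:
print(expected_value(90, 0.8))   # 72.0
print(expected_value(100, 0.5))  # 50.0

# Step 5's initial ranking, for hypothetical offerors
# given as (promised value score, LOCAR) pairs:
offers = {"Offeror X": (90, 0.8), "Offeror Y": (100, 0.5), "Offeror Z": (95, 0.9)}
ranking = sorted(offers, key=lambda name: expected_value(*offers[name]), reverse=True)
print(ranking)  # ['Offeror Z', 'Offeror X', 'Offeror Y']
```

Note that step 5's comparison against price and the tradeoff documentation still happen outside the math, as the article stresses below.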

As I was reading, I was going to mention that I’m skeptical of numerical ratings and calculations because I think people have a hard time assigning numerical values and percentages that align with their actual values. It’s my personal preference, though; I know many people who use numerical scoring to make decisions in their personal lives. My thoughts were immediately addressed:

As I will describe it, the LOCAR technique employs numerical scoring. I recognize that many persons dislike the use of numerical scoring in source selection and so I would like to make some comments in that regard. The LOCAR technique is a framework for thinking, not a decisional formula. It uses numerals as shorthand for substantive verbal descriptions of an offeror’s strengths, weaknesses, and deficiencies. When employing the LOCAR technique in a source selection, the source selection authority uses the scores to order his or her thinking initially, but makes the source selection decision based on the documented advantages and disadvantages of the competing offerors. The LOCAR technique is a device for thinking about a problem of choice. A LOCAR is a product of subjective judgment; it is not a precise measurement of an objective reality. Used properly, it is a helpful tool for reaching a consensus among a group of evaluators, giving them an orderly way to think as a group about risk.

Everyone who uses or plans to use the LOCAR technique must understand that it is just a device, and that tradeoffs and source selection decisions must be made, explained, and justified on the basis of documented offeror strengths, weaknesses, and deficiencies, and not the LOCAR scores. Those who work for organizations that forbid the use of numerical scores, or who cannot overcome their aversion to the use of numerals, can still use the LOCAR technique, but must use words or other symbolic devices (e.g., color ratings) instead of numerals.

I am trying to build practical examples of streamlined evaluation schemes for commercial supplies/services. Here is my current list:

  1. Price

  2. Price, Offer Acceptability (assent to model contract)

  3. Price, Experience

  4. Price, Past Performance

  5. Price, Experience, Past Performance

  6. Price, Responsibility

  7. Price, Risk (based on considerations such as experience, past performance, general or special responsibility standards, etc.)

Most, if not all, of these can be done with or without tradeoffs. I am on the verge of banning requests for technical approaches, and the like, for commercial items. If you want technical approaches, get them with promises by issuing a Statement of Objectives and having offerors submit Performance Work Statements.

Any thoughts?

Narrative technical approach proposals rarely correlate to whether a company can perform and often cause pain during evaluation. It ends up as a writing competition between companies. Two real-world examples:

  1. A small US-based maid service company’s quote is “technically acceptable” for a custodial contract in the Middle East because their “Custodial Operations Plan” covered all requirements of cleaning the bathrooms. NOTE: We amended the RFQ to use experience in the Middle East as a factor because the greatest risk was not whether they could clean the bathrooms, but whether they could run operations in the Middle East.

  2. A chapel music services quote is “technically unacceptable” because their narrative technical proposal did not address whether the company would provide a “rhythmic” music coordinator, a requirement in the PWS. We decided to get an updated proposal to correct the deficiency, delaying award.

Then you have the company who has been performing exceptionally over the last 5 years, but is “technically unacceptable” because they didn’t hit all of the checkboxes in their narrative technical approach. Again, you have to get an updated proposal to fix the deficiency while common sense says they can perform the work.

I’ve also contemplated (or at least joked about) banning “narrative technical approach” evaluations, but I’m convinced it is better in the long term to educate people on why they shouldn’t be used rather than banning them.

When you restrict their use, people turn off their critical thinking about the issue. Things restricted in the past: Award Fee contracts, T&M, and UCAs. People stopped thinking about whether these were good ideas and completely removed them from consideration. Effective in the short term, but maybe not in the long term.

I think we should start with how we would select a company if we were making the pick for our own small business or household. Then modify that approach to comply with regulations.

Your list makes great incremental improvements, but do you think we could come up with examples that completely rethink how we conduct SAP evaluations?

I don’t have a great example of what I’m proposing, but I often think back to the hotel example in the blog post at the top of this thread:

With that said, I know we are limited somewhat by the requirement to publish a synopsis which was also addressed in the blog post:

One other related issue: GAO recently ruled that if you hold “discussions” but call them “interchanges,” they will still be treated as discussions; see the bottom of page 2.

https://www.gao.gov/assets/710/701931.pdf

The FOPR stated that because the procurement was conducted under FAR subpart
16.5, the procedures in FAR subpart 15.3 did not apply. The agency, however, utilized
a similar process to FAR subpart 15.3 when conducting exchanges with offerors by
issuing “interchange notices” and requesting final proposal revisions.

An advantage to sticking with experience and past performance is that you typically don’t use discussions (or interchanges), since the proposal data is what it is (with the exception of adverse past performance the offeror has not addressed).

For clarity, I was joking on the ban as it goes against my beliefs…However, I am on the verge of a mental breakdown regarding this challenge.

Thanks for the comments. I am going to do some more thinking about a few of the points you raise. (FAR 5.202(a)(13) is making my head hurt, but it seems like a logical place to start looking for flexibility in synopsizing)

Meanwhile, I think it’s important to note that GAO has a long-standing record of examining what you did, not just what you called it. That is one of the main ideas in the Finlen Complex protest. It follows that an agency that “utilized a similar process to FAR subpart 15.3 when conducting exchanges with offerors by issuing ‘interchange notices’ and requesting final proposal revisions” would be held to the FAR Part 15 standard, no matter what it called those exchanges.

1 Like

Where do commercial supplies with differentiating features that end users care about fall into your list?

In my own life, I use Amazon ratings and online reviews…possible to replicate for the Govt (I don’t count CPARS)?

Also, a simple way to evaluate features and use info in the decision?

Here is a great Vern Edwards quote from your post on Professional Storytelling:

What do you know about retaining personnel? What do you know about refilling positions? Have you ever done those things for a private firm, for a pharmaceutical services contractor, and for a living? If not, how will you judge what you read? Do you think you can really judge a one or two page description of those activities when you don’t know anything about them? And if it’s the kind of thing that anybody can judge, even though they’ve never done it, then maybe it’s the kind of thing that a good writer could describe for someone who might turn out to be not very good at it. Do you think a technical description describes a good approach if it sounds good to you?

That’s what cracks me up about the whole technical proposal thing: people who’ve never done something judging people who actually do it on the basis of something written by who knows who? And if it does not read well, might that be because people who do something for a living don’t spend time writing about how to do it? What kind of account would you get from the typical framing carpenter about house framing? He or she does not spend their time writing about framing. Hell, they don’t spend their time reading about framing. They spend their time framing. And a person who could write a good account of framing probably does not spend a lot of time framing.

The “technical” proposal approach to evaluating companies is one of the most absurd methods of choosing a contractor that has ever been devised.

1 Like

Shamefully, I am a habitual overspender. I own a 4x4 that I don’t four-wheel in…I always buy the Mac Pros even though I only surf the Internet and use Microsoft Word, Excel, and PowerPoint. I’ll buy the best drone, GoPro, etc. despite them exceeding my requirements. I primarily seek feedback from family and friends or known subject matter experts. I like websites like CNET. I primarily use online reviews to mitigate risk by going with popular supplies/services.

Most popular supplies/services are proven, reliable, trustworthy…I like that, especially when purchasing something I know less about.

The easiest way to evaluate features is to have a salesperson orally and visually describe the features and what they can do for you. In turn, you can watch and listen and then decide which features resonate with you or otherwise assign values.

This is why, in part, I feel statements of objectives are underutilized. Oral proposals and SOOs are like cousins to CSOs. “When offerors propose performance standards in response to a SOO, agencies shall evaluate the proposed standards to determine if they meet agency needs.” One of the main challenges with oral proposals is the insatiable appetite for documentation…well, I have contractors take meeting minutes that we modify and/or endorse…it seems we can have offerors document their oral proposal…heck, some offices have offerors rate themselves using self-assessments. Yes, it’s duplicative, but when talking about commercial items, oral proposals are much easier for evaluators (users) to digest.

Maybe I’ll issue a solicitation that has offerors complete a source selection decision document for their technical factor.

1 Like