Quality-infused Pricing (QIP) and Knowledge-based Services

What is QIP?
QIP is an evaluated price derived by multiplying the contractor’s proposed price by a factor built from a combined scalar reflecting the USG’s analysis of both observed and perceived quality. The process was created at the NRO in 2015. It should not be confused with Value Adjusted Total Evaluated Price (VATEP), published in April 2016. The primary difference is that QIP was created with perceived quality and services in mind, not to increase product performance options (i.e., VATEP is essentially a fancy method for priced options or alternative proposals).

| Characteristic | VATEP | QIP© |
| --- | --- | --- |
| Tradeoff Type | Subjective [sic] | Subjective |
| Focus | Products (Systems) | Services or Products |
| Monetized Value Method | Objective vs. Threshold Requirements | Past performance and perceived service value offered |
| Factor Development Effort | High | Initially Med-High and Low thereafter |
| PALT | High | Initially Med-High and Low thereafter |
| Connection between past performance and future price evaluations | None | Direct |
| Past performance currency | Low (assume as-is CPARS) | High (assume cSVI use) |
| Past performance assessment | Separate step/process (assume as-is CPARS) | Embedded in quality and price evaluation (assume cSVI use) |

QIP Components:
•QIP is derived by applying a composite Quality Adjustment Factor (cQAF) to an offeror’s proposed price to establish the value/price tradeoff. The cQAF is a price multiplication factor that may be greater than, equal to, or less than 1.

•The cQAF is derived from two primary factors that are weighted by the agency. Weights must be established during the solicitation planning stage and should be listed in the RFP. A notional computation is sketched after the list below.

  1. Composite Service Value Indices (cSVI)

  2. Composite Proposal Quality Rating (cPQR)
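To make the arithmetic concrete, here is a minimal sketch of the computation. The weighted-average combination, the 60/40 weights, and the convention that a factor below 1 rewards higher quality are illustrative assumptions for this sketch, not the method of record.

```python
# Minimal QIP sketch -- illustrative assumptions, not the method of record.
# Assumes cSVI and cPQR have each been converted to price factors where
# 1.0 = "meets expectations" and values below 1.0 reflect higher observed/
# perceived quality (rewarding the offeror with a lower evaluated price).

W_CSVI, W_CPQR = 0.6, 0.4  # notional agency weights, set in the RFP

def cqaf(csvi_factor: float, cpqr_factor: float) -> float:
    """Composite Quality Adjustment Factor: may be >, =, or < 1."""
    return W_CSVI * csvi_factor + W_CPQR * cpqr_factor

def qip(proposed_price: float, csvi_factor: float, cpqr_factor: float) -> float:
    """Quality-infused price = proposed price x cQAF."""
    return proposed_price * cqaf(csvi_factor, cpqr_factor)

# Offeror with strong past performance (0.92) and a strong proposal (0.95):
print(qip(1_000_000, 0.92, 0.95))  # -> 932000.0 evaluated (not paid) price
```

Under this convention, an offeror with below-expectations quality would carry a cQAF above 1, raising its evaluated price in the tradeoff; the price actually paid remains the proposed price.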

What is cSVI (Finkenstadt & Hawkins, 2016)?

•Derived from past performance survey data
•Scaling must be supported by market research into customer value and price variance in a given service category
•Would act like a FICO© score for a company’s past performance (PP) and reputation management; could replace CPARS or work in tandem with it (a notional scoring sketch follows this list)
•Captured in point-of-service fashion
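As a rough illustration of the FICO©-style idea, the sketch below maps point-of-service survey ratings onto a familiar 300–850 band. The 1–5 survey scale, linear mapping, and band are hypothetical placeholders; real scaling would be backed by the market research described above.

```python
# Hypothetical cSVI sketch: roll point-of-service survey ratings (1-5)
# into a FICO-style score on a 300-850 band. The scale, band, and simple
# averaging are placeholders pending market research into the category.

def csvi_score(ratings: list[float], lo: int = 300, hi: int = 850) -> float:
    """Map the mean 1-5 survey rating linearly onto [lo, hi]."""
    mean = sum(ratings) / len(ratings)
    return lo + (mean - 1) / 4 * (hi - lo)

print(csvi_score([5, 4, 5, 4, 5]))  # consistently strong ratings -> 795.0
```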

What is cPQR (Finkenstadt & Hawkins, 2016)?

•Particular to a specific source selection
•Offered personnel qualifications
•Technical process excellence (3 Rs; see below)
•Program management capability
•Offered solution attractiveness

Why use QIP?
Because we want to maximize the three primary goals of public procurement: 1) getting the requirement in a timely fashion (i.e., Need with Speed), 2) transparency, and 3) value for money.

What problems can QIP solve?
It can help with two main problem areas in service procurement, especially for knowledge-based services: 1) Contract Performance Management and 2) Source Selection.
The driving force was a combination of practitioner frustration and the call in Better Buying Power (BBP) 3.0 for us in DoD to:

  1. Monetize non-cost factors for trade-offs
  2. Clearly define best value to industry
  3. Use LPTA only in its “limited place in the source selection ‘best value continuum’”

Further, the GAO has found CPARS reports to be late, inaccurate, and incomplete.

How can we improve?
We can create a rating system that allows customers to grade firm performance on perceived quality in addition to observed quality at the point of service. Services are intangible and perishable: they are hard to measure, and we have only a finite window in which to capture the real experience. They are also heterogeneous (i.e., non-standard) and complex, involving a degree of shared experience between the firm and the customer to create the value received. A rating and evaluation system like the one offered in QIP allows this nature of services to be better reflected in the process.

Has it worked? Yes, QIP has been used by GSA for the Army on over $1B worth of knowledge-based services (via OASIS) and has survived a GAO protest (see GAO filing B-414387; B-414387.2, GDIT).

It has also been modified for use as an award fee system for the State Department on a large logistics contract.

Is QIP the perfect alternative to LPTA and full tradeoff (TO)? NO! It is a tool in the toolbox. All forms of best-value source selection are useful in the right context (even VATEP may be).

It still requires increased knowledge of how we measure the objective and perceived portions of quality in things like knowledge-based services. See the attached PPT regarding my work on advancing the scales for measuring the perceived portions, as well as suggestions on the observables.

If anyone is interested in resources please contact Maj Dan Finkenstadt at fink614@live.unc.edu

Well, what is a knowledge-based service anyway?
•AFI 63-138, Paragraph 1.2.1.3 defines knowledge-based services, or “KBS,” as those defined in DoD Instruction 5000.74.

•The AFI states that this includes, but is not limited to, Advisory and Assistance Services to support Research and Development, Construction, Architect-Engineering (A&E), utility services, Federally Funded Research and Development Center (FFRDC) contracts, or Foreign Military Sales (FMS).

•The DoDI points to the USD(AT&L) Memorandum, Taxonomy for the Acquisition of Services and Supplies & Equipment, dated August 27, 2012. The trail of Air Force and DoD cross-referencing is somewhat winding but ends at this point. The taxonomy figure shows that KBS is defined, more specifically, as:

•Engineering and Technical Services

•Program Management Services

•Management Support Services

•Administrative & Other Services

•Professional Services

•Education and Training [Services]

•A search of relevant literature was conducted in an effort to find a standard definition for KBS. This search included the terms “knowledge-based services,” “professional services,” “knowledge-intensive services,” and “professional service firms,” as well as partial forms of these terms.

•The review identified eleven relevant sources, nine of which provided definitions, lists, or attributes of these terms. As expected, no consistent definition, operationalization, or measurement of knowledge-based services was found.

•Leveraging von Nordenflycht (2010): KBS can be placed on a spectrum comparing knowledge intensity, degree of operant exchange, and capital intensity. Those KBS that show the highest degrees of knowledge intensity and operant exchange, while displaying lower capital intensity, are the focus of the study. Some services may be high or low in both knowledge and capital intensity, but KBS are predominantly associated with high knowledge/low capital intensity, as the service deliverable is intangible in nature.

Suggested definition for KBS: Those services in which the primary medium of exchange is a transfer of expert advice, knowledge, processes or information. Such services are generally low in capital intensity and high in knowledge intensity.

•Objective Measures:

–The 3 R’s: Recruiting, Retaining, Replacing of personnel

–Must be considered in assessing professional compensation under FAR 22.1103

–Can be measured in terms of compensation plans, staffing strategies, and internal firm metrics that show causal paths from the firm’s 3R strategy to better recruiting and retention, with quick replacement by equal- or greater-quality personnel (a notional metric sketch follows this list)
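Hedged example: the metrics below are one notional way a firm’s 3R data could be quantified. The formulas are illustrative and are not drawn from FAR 22.1103 or the QIP documentation.

```python
# Notional 3R (recruit/retain/replace) metrics a firm might report.
# Formulas are illustrative, not prescribed by FAR 22.1103 or QIP.

def retention_rate(staff_at_start: int, voluntary_departures: int) -> float:
    """Share of incumbent personnel retained over the rating period."""
    return (staff_at_start - voluntary_departures) / staff_at_start

def replacement_lag_days(vacancy_days: list[int]) -> float:
    """Average days a billet stayed vacant before a qualified backfill."""
    return sum(vacancy_days) / len(vacancy_days)

print(retention_rate(40, 3))               # 0.925 -> 92.5% retained
print(replacement_lag_days([10, 21, 14]))  # 15.0 days average to backfill
```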

•Subjective Measures:

–How do we measure the equal/greater quality of personnel? Today we use years of experience and education; focus group results show these to be poor predictors of performance

–Past performance is best in determining this but CPARS has poor information quality (GAO, 2014)

–Critical areas of KBS quality that are intangible or perceived, and which we can leverage:

A KBS firm’s history of DICE: Dependability of employees, Intelligent solutions by employees, Capability of employees, and Empathy of the firm toward customer goals/mission needs.

•D – Dependability – Will do a lot

•I – Intelligence – What they do is done well; expertise that adds value above and beyond what we can scope initially

•C – Capability – Can do a lot

•E – Empathy – Understands and ensures that what is done is done with perspective of purpose

These are the current suggested measures for KBSQual, but they may evolve with the research; an illustrative roll-up sketch follows.
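For illustration only, a DICE-style perceived-quality score might be rolled up as below. The survey items, 1–5 scale, and equal weighting are assumptions of this sketch; the actual KBSQual scales are still under development, as noted above.

```python
# Illustrative DICE roll-up from customer survey items (1-5 scale).
# Items, scale, and equal weighting are assumptions; KBSQual is evolving.

dice_ratings = {
    "Dependability": [5, 4],  # "will do a lot"
    "Intelligence":  [4, 4],  # work done well; adds unscoped value
    "Capability":    [5, 5],  # "can do a lot"
    "Empathy":       [3, 4],  # acts with the customer's purpose in mind
}

dimension_means = {d: sum(v) / len(v) for d, v in dice_ratings.items()}
composite = sum(dimension_means.values()) / len(dimension_means)
print(dimension_means)      # per-dimension means
print(round(composite, 2))  # 4.25 composite perceived-quality score
```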


I’m interested! Can you expand on the differences between cSVI and cPQR? Does cSVI focus on surveys while cPQR focuses on info submitted by the offeror?

So cSVI is intended to become a score of record (for past performance) that we can use for QIP, for other methods such as full tradeoff, or even as a cutoff score for consideration in IDIQs, etc. And yep, cPQR includes the criteria that are specific to that RFP. What I have done in the past is create the cPQR criteria and have my tech team rate (1-5) how well they felt the ktr met or exceeded each criterion. That then became part of the math, as sketched below.
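To illustrate the math described here, a minimal sketch with made-up criteria, assuming the convention that a “meets” rating of 3 maps to a neutral factor of 1.0:

```python
# Sketch of turning tech-team criterion ratings (1-5) into a cPQR price
# factor. The criteria and the "3 = meets = 1.0" mapping are illustrative
# conventions for this sketch, not the method of record.

ratings = {  # hypothetical RFP-specific criteria, rated 1-5 by the tech team
    "personnel qualifications": 4,
    "technical process excellence (3 Rs)": 5,
    "program management capability": 3,
    "solution attractiveness": 4,
}

mean = sum(ratings.values()) / len(ratings)  # 4.0
cpqr_factor = 1 - 0.05 * (mean - 3)          # 5% credit per point above "meets"
print(cpqr_factor)  # 0.95 -> feeds the cQAF/QIP math sketched earlier
```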


Note: This image does not include all the cSVI criteria from the example and it does not include the DICE criteria that I am now suggesting.