Six QC recommendations to consider today

Feb. 21, 2017

CONTINUING EDUCATION

To earn CEUs, visit www.mlo-online.com under the CE Tests tab.

LEARNING OBJECTIVES

1. Identify factors in QC error that contribute to increased patient risk.
2. Describe ways in which quality risk management has contributed to an added set of values that the laboratory should be aware of.
3. Identify and describe the factors and benefits of evidence-based QC limits.
4. List the errors that may result when QC data are given limits that are too wide or too narrow.

 

An important goal of laboratory medicine is to improve patient health by providing laboratory results that support medical decisions. Meeting this goal requires reporting accurate results that enhance care while minimizing patient risk. It is no longer satisfactory for a laboratory’s Quality Control (QC) to simply focus on taking care of the instruments—treating the instruments as if they are the laboratory’s “patients.” Since publication of risk management guidelines such as ISO 15189 and CLSI EP23,1,2 the laboratory is now expected to design its QC with a focus on the patient: How does the QC plan effectively mitigate risk of patient harm from erroneous reported patient results?

Significant advances have been made in recent years in developing theories, models, metrics, and algorithms that more closely tie a laboratory’s QC practices to the risk of patient harm.3,4 The best of these approaches can quantify the relationship between a laboratory’s statistical QC strategy (number of QC concentrations evaluated, QC rule(s) used, and frequency of QC evaluations) and the expected number of erroneous patient results reported when an out-of-control condition occurs. These quantitative approaches require advanced mathematical computations that can be accomplished only with specifically designed computer algorithms. But it is also important to understand and identify effective QC principles and approaches that do not require advanced math. Six effective QC practices that don’t require advanced math were presented as part of an AACC symposium in 2013 and were subsequently published in these pages.5

In this article, we offer six additional QC recommendations that can be addressed without using advanced math: four that impact patient risk and two that directly affect laboratory costs (or, to use a term that is gaining popularity, the “Cost of Quality”). Successfully following these recommendations won’t provide a quantitatively optimal QC strategy, but it should continue to move a laboratory in the right direction.

Reliability of measurement procedures. Record keeping is not the top priority when the laboratory’s QC detects an out-of-control condition. Rightly so, the focus is on returning the measurement procedure to working order and assessing the impact on previously tested patient samples. However, because failure frequency is a key contributor to patient risk, the laboratory’s ability to assess risk depends on keeping track of testing failures and determining how often they occur (or the mean time between failures).

For risk management purposes, an out-of-control condition is defined as when a test result quality issue is identified and a change is made to the test method to rectify it. Whether the quality issue is identified by a QC rejection, by inspection of the results, or because of a complaint, what matters is that the laboratory recognizes something is wrong and takes an action to resolve the issue. Actions could include measurement procedure reagent changes, calibration, or instrument service. However, appropriate actions would not include QC false rejections (see the later recommendation concerning false rejections).

An estimate of how frequently out-of-control conditions occur is an important consideration for QC design. For measurement procedures with a high rate of out-of-control conditions, there is greater need for the laboratory’s QC strategy to be able to minimize the number of patient results affected when an out-of-control condition occurs.
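
As a minimal sketch of how such an estimate might be kept up to date, the mean time between failures can be computed from a simple log of the dates on which out-of-control conditions were identified and corrected (the dates below are hypothetical):

  # Estimate the mean time between failures (MTBF) of a measurement procedure
  # from a log of out-of-control events. The dates below are hypothetical.
  from datetime import date

  # Dates on which an out-of-control condition was identified and corrected
  failure_dates = [date(2016, 3, 4), date(2016, 5, 19), date(2016, 8, 2),
                   date(2016, 10, 27)]

  # Days elapsed between consecutive failures
  gaps = [(later - earlier).days
          for earlier, later in zip(failure_dates, failure_dates[1:])]

  mtbf_days = sum(gaps) / len(gaps)
  print(f"Mean time between failures: {mtbf_days:.0f} days")  # 79 days
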
Recommendation 1: Estimate your measurement procedure’s reliability.

Analytes and probability of an erroneous result leading to patient harm. One of the most important aspects of laboratory quality concerns the measurement error in reported patient results. If the measurement error is so large that the result is not fit for its intended use, it creates a hazardous situation for the patient: it increases the likelihood that an inappropriate medical decision or action will occur. In laboratory medicine, the quality required of a patient result has traditionally been defined in terms of the allowable total error in the result. If the measurement error in the patient’s result is less than the allowable total error requirement, the result is considered fit for its intended use; but if the measurement error exceeds the allowable total error requirement, the result is considered erroneous and creates a hazardous situation for the patient.
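
As a simple numerical sketch of this definition (the allowable total error and concentrations below are illustrative only, not values from any particular performance specification), a result is classified as erroneous when its measurement error exceeds the allowable total error:

  # Classify a reported result as fit for use or erroneous by comparing its
  # measurement error to the allowable total error (TEa).
  # The TEa and concentrations below are illustrative only.

  def is_erroneous(reported, true_value, tea_percent):
      """Return True if the measurement error exceeds the allowable total error."""
      error_percent = abs(reported - true_value) / true_value * 100
      return error_percent > tea_percent

  # Assuming an allowable total error of 10% for illustration
  print(is_erroneous(reported=112, true_value=100, tea_percent=10))  # True
  print(is_erroneous(reported=104, true_value=100, tea_percent=10))  # False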

When an erroneous patient result is reported, the likelihood that the erroneous result leads to an inappropriate medical decision or action will primarily depend on how influential the test result is in the medical decision-making process. Some analytes provide only a small amount of the information used in a medical decision. In other cases, an analyte is the major contributor to the medical decision.

From a patient risk perspective, the general principle should be that the more likely it is that erroneous results will lead to inappropriate medical decisions or actions, the less tolerance the laboratory should have for reporting erroneous results, and the more effort it should devote to minimizing their number.
Recommendation 2: Devote more QC to analytes with a high probability that erroneous results lead to patient harm.

Analytes and severity of patient harm from an erroneous result. Risk management guidelines such as CLSI EP23 provide a formal approach that the laboratory can use to establish policies and procedures to help prevent or reduce patient risk. In risk management, risk is defined as the combination of the probability of occurrence of patient harm and the severity of the harm. The probability of occurrence of patient harm depends on:

  • a measurement procedure’s reliability;
  • the effectiveness of the laboratory’s QC strategy (to limit the number of erroneous patient results that get reported when out-of-control conditions occur); and
  • the likelihood that erroneous reported results lead to patient harm.

Measurement procedure reliability and the likelihood that erroneous results lead to patient harm were the subjects of the first two recommendations. The severity of patient harm is intended to be assessed independently of the probability of occurrence of harm.

Severity of harm will depend on the analyte reported and the patient care situation. An inappropriate decision or action based on an erroneous reported troponin result for a patient in the ER will likely cause more severe harm than an inappropriate decision or action based on an erroneous sodium result for a patient as part of an annual wellness exam. While one can construct scenarios in which an incorrect result for almost any analyte leads to patient death, severity-of-harm assessments should be based on the most common or typical scenarios.

Risk management guidelines suggest grading severity of harm on a scale of increasing severity. CLSI EP23, for example, suggests five categories:

  • Negligible: inconvenience or temporary discomfort
  • Minor: temporary injury or impairment not requiring professional medical intervention
  • Serious: injury or impairment requiring professional medical intervention
  • Critical: permanent impairment or life-threatening injury
  • Catastrophic: patient death

The more severe the patient harm resulting from an erroneous reported patient result, the less tolerance the laboratory should have for reporting erroneous results, and therefore, the more effort it should devote to minimizing their number.
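
Because risk combines the probability of harm with the severity of harm, many laboratories summarize the combination in a simple risk-acceptability lookup. The sketch below is only a hypothetical illustration: the qualitative probability levels, the scoring scheme, and the acceptability threshold are assumptions, and each laboratory must define its own criteria.

  # Hypothetical risk-acceptability lookup combining probability of patient
  # harm with severity of harm. The probability levels, scoring scheme, and
  # threshold are illustrative assumptions, not values from CLSI EP23.

  severity_levels = ["Negligible", "Minor", "Serious", "Critical", "Catastrophic"]
  probability_levels = ["Improbable", "Remote", "Occasional", "Probable", "Frequent"]

  def risk_is_acceptable(probability, severity):
      """Risk grows with both probability and severity of harm."""
      score = probability_levels.index(probability) + severity_levels.index(severity)
      return score <= 3  # acceptability threshold chosen for illustration only

  print(risk_is_acceptable("Improbable", "Minor"))   # True  -> acceptable
  print(risk_is_acceptable("Probable", "Critical"))  # False -> needs mitigation
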
Recommendation 3: Devote more QC to analytes with a high expected severity of harm.

Multiple instruments in the laboratory that measure the same analytes. Some laboratories recognize bias in their measurement procedures but are not particularly concerned with it (beyond the implications for passing proficiency testing), focusing instead on providing stable performance. This attitude may not be appropriate when a laboratory is testing the same analyte on multiple instruments that are expected to give the same patient results.

Testing an analyte on multiple instruments within the laboratory requires attention to any within-laboratory bias between instruments. It is important that performance differences do not become so large that they can interfere clinically, as when a patient has the misfortune of having a specimen tested on an instrument with a high bias and a subsequent specimen tested on an instrument with a low bias. In such a case, much of the difference between the results is the bias between the instruments rather than a change in analyte concentration.

One way to be alerted to intra-laboratory between-instrument bias changes is to use the same QC target and QC rule on each instrument.6 This practice restricts an instrument’s drift in the direction of its bias relative to the group, while allowing more drift in the direction away from that bias.

Using the same QC rule and QC target on a group of instruments testing the same analyte may seem not to be properly controlling each instrument, but that thinking stems from a QC focus on the instrument. When the focus is on patient risk, the best strategy should minimize the chance that erroneous patient results are reported due to an out-of-control condition in a biased measurement procedure.
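
A minimal sketch of the idea (the instrument names, common target, SD, and control limit below are all hypothetical): each instrument’s QC result is judged against the same group target rather than against its own instrument-specific mean, so an instrument drifting further in the direction of its bias is flagged sooner.

  # Evaluate the same QC rule against a common (group) target on every
  # instrument measuring the analyte, rather than against instrument-specific
  # means. The target, SD, limit, and QC values are hypothetical.

  group_target = 100.0   # common QC target concentration for the analyte
  group_sd = 2.0         # established SD of the measurement procedure
  limit = 3.0            # simple 3-SD rule, for illustration only

  qc_results = {"analyzer_A": 103.5, "analyzer_B": 97.8, "analyzer_C": 107.2}

  for instrument, value in qc_results.items():
      z = (value - group_target) / group_sd
      status = "REJECT" if abs(z) > limit else "accept"
      print(f"{instrument}: z = {z:+.2f} -> {status}")
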
Recommendation 4: Don’t ignore bias between multiple instruments measuring the same analyte.

QC time and QC false rejection rate. Two important performance characteristics of a statistical QC strategy are its ability to detect out-of-control conditions in the measurement procedure as soon as possible (the error detection power of the QC strategy) and its susceptibility to incorrectly reporting that there are problems with the measurement procedure when none exist (the false rejection rate).

False rejections constitute unwanted costs to the laboratory. When a QC rule rejection occurs, the laboratory has to halt patient reporting, investigate the cause of the QC rule rejection, and make a determination about what, if any, corrective actions are required. Additionally, it is expected that all patient results reported since the last accepted QC event be evaluated to determine if any have been adversely affected.

A QC strategy’s false rejection rate is usually expressed as a probability. If a QC rule’s probability of false rejection is 0.01, then the laboratory should expect that about one out of every 100 QC rule evaluations that occur when the measurement procedure is in-control will reject simply due to random chance. An equally important way of considering a QC strategy’s false rejection rate is in terms of time between false rejections. What is the expected length of time of in-control operation before a QC rule rejection would occur simply due to random chance? The expected time between false rejections depends on the QC rule’s probability of false rejection and the frequency of QC evaluations.

So, if a QC rule’s probability of false rejection is 0.01 and the laboratory performs QC evaluations once per day, then the expected length of time between false rejections is 100 days. If QC evaluations are performed every shift (three times per day), however, the expected time between false rejections for that same QC rule is about 33 days. Many of the costs associated with QC false rejections, such as suspension of patient results reporting or the time and effort spent investigating after a QC rule rejection, are more meaningfully assessed in terms of the expected time between false rejections rather than the probability of false rejection.
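
The arithmetic behind these figures is simple enough to script; the sketch below just restates the two cases from the text:

  # Expected time between false rejections for an in-control measurement
  # procedure, given the QC rule's false rejection probability and the
  # number of QC evaluations performed per day.

  def days_between_false_rejections(p_false_reject, evaluations_per_day):
      expected_evaluations = 1.0 / p_false_reject        # e.g., 1/0.01 = 100
      return expected_evaluations / evaluations_per_day  # convert to days

  print(days_between_false_rejections(0.01, 1))  # 100.0 days (QC once per day)
  print(days_between_false_rejections(0.01, 3))  # ~33.3 days (QC every shift)
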
Recommendation 5: Consider a QC strategy’s expected time between false rejections.

Data points on QC crossover studies. Every time a new lot of quality control material is introduced into the laboratory, the mean concentration of the material should be established. The QC mean is estimated from repeated measurements of the QC material over time when the measurement procedure is operating in its stable in-control state. An important question is: How many repeated results are necessary to provide a good estimate for the QC mean? “Good” may be defined as an estimate that is close enough to the true mean that a QC rule will have a probability of false rejection that is close to the false rejection probability using the true mean.

Using this criterion, a QC mean calculated from 10 measurements obtained on separate days provides a good initial estimate. Then, after the first few months of routine operation with the new QC lot, an improved estimate of the mean can be computed that will include any longer-term sources of variability that were not represented in the initial 10-day estimate.5

Concentration is a characteristic of the lot of quality control material. However, precision is a characteristic of the measurement procedure. Therefore, when a new lot of quality control material is replacing an existing lot that has been in routine operation, the concentration is expected to change, but the precision is not expected to change. When the estimated concentration of the new QC lot is close to that of the old lot, the already established SD can be retained. If the concentration differs significantly, the SD for the new lot can be computed as the new estimate of the mean concentration multiplied by the established CV of the measurement procedure.
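
A short sketch of that calculation (the lot means, SD, and CV below are hypothetical):

  # Carry the measurement procedure's established CV over to a new QC lot
  # whose mean concentration differs from that of the old lot.
  # All numbers are hypothetical.

  old_lot_mean = 5.2   # established mean of the old QC lot
  old_lot_sd = 0.26    # established SD of the old QC lot
  new_lot_mean = 6.0   # new lot mean from about 10 measurements on separate days

  established_cv = old_lot_sd / old_lot_mean  # 0.05, i.e., a 5% CV
  new_lot_sd = new_lot_mean * established_cv  # 0.30

  print(f"CV = {established_cv:.1%}, SD for the new lot = {new_lot_sd:.2f}")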

When a new measurement procedure is being introduced into the laboratory (no existing estimates for SD), it will be necessary to estimate SD from multiple measurements collected over a time period when the measurement procedure is operating in its stable in-control state. In this situation, using the same criterion for “good” as before, not even 20 measurements obtained on separate days provide a good initial estimate of SD. If an initial 20-measurement SD estimate is used to begin routine operation, the laboratory should be diligent in its response to any QC rule rejections, as there is an increased likelihood that the rule rejection is due to an inaccurate SD used in the QC rule. Again, after the first few months of routine operation with the new QC lot, a more reliable estimate of the SD can be computed that will also include any longer-term sources of variability that were not represented in the initial 20-day estimate.
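
One way to appreciate why 20 measurements are not enough for the SD is to look at what an inaccurate SD does to a QC rule’s false rejection rate. The sketch below is a rough illustration (it assumes normally distributed in-control QC results and a simple 3-SD rule, not the formal criterion described above) of how a 20 percent underestimate of the true SD inflates false rejections:

  # Effect of an inaccurate SD estimate on the false rejection rate of a
  # simple 3-SD QC rule, assuming normally distributed in-control QC results.
  # A rough illustration only, not the formal "good estimate" criterion.
  from statistics import NormalDist

  def false_rejection_probability(sd_ratio):
      """sd_ratio = estimated SD / true SD; limits are set at 3 * estimated SD."""
      effective_limit = 3.0 * sd_ratio  # rejection limit in units of the true SD
      return 2.0 * (1.0 - NormalDist().cdf(effective_limit))

  print(f"Accurate SD:           {false_rejection_probability(1.0):.4f}")  # ~0.0027
  print(f"SD underestimated 20%: {false_rejection_probability(0.8):.4f}")  # ~0.0164
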
Recommendation 6: For a new lot of QC material, 10 results are a good start for estimating the mean, but not the SD.

Mitigating patient risk. The laboratory has two main mechanisms to mitigate the risk of patient harm: risk management guidelines that provide models and approaches to meaningfully relate laboratory practices to patient risk, and statistical QC principles and strategies that seek to limit the number of reported erroneous patient results that create hazardous situations for patients. Computer software can help the laboratory design good QC strategies based on patient risk principles, but even without software there are approaches, such as those recommended here, that can help the laboratory ensure that its QC practices are designed both with a focus on the patient and in a way that helps reduce costs.

 

Editorial contribution: T. Andrew Quintenz contributed to this article. Mr. Quintenz leads the Quality Systems Division’s Scientific and Professional Affairs team. He is a member of the CLSI Consensus Council and Industry Liaison to the CLIAC, which provides scientific and technical advice and guidance to the Department of Health and Human Services.

 

REFERENCES

  1. ISO 15189:2012, Medical laboratories—requirements for quality and competence. http://www.iso.org/iso/catalogue_detail?csnumber=56115.
  2. Clinical and Laboratory Standards Institute. CLSI EP23-A. Laboratory Quality Control Based on Risk Management; Approved Guideline. Wayne, PA: CLSI; 2011.
  3. Yundt-Pacheco J, Parvin CA. Validating the performance of QC procedures. Clin Lab Med. 2013;33(1):75-88.
  4. Clinical and Laboratory Standards Institute. CLSI C24, 4th ed. Statistical Quality Control for Quantitative Measurement Procedures: Principles and Definitions. Wayne, PA: CLSI; 2016.
  5. Parvin CA, Jones JB. QC design: it’s easier than you think. MLO. 2013;45(12):18-22.
  6. Parvin CA, Kuchipudi L, Yundt-Pacheco J. Designing QC rules in the presence of laboratory bias: should a QC rule be centered on the instrument’s mean or the reference mean? Clin Chem. 2012;58(10):A205.

 


 

Curtis A. Parvin, PhD, served as a faculty member of Washington University School of Medicine for 30 years before joining Bio-Rad Laboratories, where he heads Advanced Statistical Research, developing new models and approaches to QC Design and Strategy.

John Yundt-Pacheco, MSCS, is a Scientific Fellow for Bio-Rad’s Quality Systems Division. His research work in Informatics Discovery is leading to new approaches and computer-based solutions to quality control and patient risk management.