“Empty QC”

Nov. 18, 2012

Editor’s Note: “Manifesto” might be too strong a word to describe the following article by hematology supervisor Roy Midyett, CLS—but his strong feelings about forms of QC that he thinks do not accomplish the purpose of QC come through, and he ends with a call to action. His points are provocative ones, and readers of MLO are invited to agree, disagree, or otherwise react by writing me at mlo-online.com. Let me know if you do not want your comments published in a future “Letters” department. —Alan Lenhoff

As medical laboratory scientists, quality control is part of our job, and most of the time, it works and makes sense. But are there QC practices that we perform in the lab that are ineffectual or whose purpose is unclear? Are we performing “empty QC” that satisfies the inspections but doesn’t increase our confidence in the outcome of the test? If so, how did these practices come about, and why does the laboratory industry continue to put up with them? I will attempt to highlight examples of empty QC, consider how they came about, and suggest possible responses and solutions to the problem.

The lessons of history

To appreciate the current state of QC, we should take a quick look back at its history. A scan through an old copy of Clinical Diagnosis by Todd and Sanford, from 1943, shows that the concept of QC is totally absent.1 There is no mention of it for any methods or general practices. Does this mean it didn’t exist? It is possible that some forward-thinking laboratorians kept previously tested samples to re-test the next day as a check, but there is no mention of this. The test methods described are laborious manual methods, using crude (by our standards) colorimeters and comparators that used ambient light or light reflected from a bright source. The human eye was the measuring instrument. This is an important point: the results of these tests were visible with the naked eye, like the pad on a modern urine dipstick. The technician was trained to read the scales and the colors or count the cells on the chamber.

In the 1940s and 1950s, two key developments prompted the rise of more formal quality control. The first was the simple spectrophotometer using electric photocells, which allowed precise measurements of colored fluids in test tubes. (Remember Beer’s Law?) More accurate and repeatable measurements could be made. The spectrophotometer was slowly incorporated into more complex lab instruments, such as the AutoAnalyzer and the Coulter Model S.2
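Beer’s Law, mentioned above, is the principle those photocell instruments exploit: absorbance is proportional to concentration (A = εlc). The following is a minimal sketch of that arithmetic; the function names and the numeric values are illustrative, not drawn from any particular instrument.

```python
def concentration_from_absorbance(absorbance, epsilon, path_cm=1.0):
    """Beer's Law rearranged: c = A / (epsilon * l).

    absorbance: measured A (unitless)
    epsilon: molar absorptivity, L/(mol*cm)
    path_cm: cuvette path length in cm
    Returns concentration in mol/L.
    """
    return absorbance / (epsilon * path_cm)

def concentration_from_calibrator(abs_unknown, abs_standard, conc_standard):
    """Single-calibrator ratio method many photometric assays use in practice:
    C_unknown = (A_unknown / A_standard) * C_standard."""
    return abs_unknown / abs_standard * conc_standard

# Illustrative values only:
print(concentration_from_absorbance(0.5, 1000.0))        # -> 0.0005 mol/L
print(concentration_from_calibrator(0.25, 0.50, 100.0))  # -> 50.0 (same units as standard)
```

The ratio form is the one a bench tech would recognize: run a standard of known concentration, then scale unknowns against it.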

The second development was the practice of using pooled control sera, followed by commercially available control products, which allowed labs across the country to produce comparable and reliable results. By 1969, Todd and Sanford included a chapter on “Statistical Tools in Clinical Pathology,” with mention of standard deviation and proficiency survey materials, but not commercial controls (although these were in use shortly thereafter).3

It is important to this discussion to see the relationship between the development of electro-optical analyses and the increased need to assure the accuracy of these types of results with some standard control material. One cannot know what goes on in the flow cell of an electronic instrument at the exact time of measurement. Voltages may vary, lamps may flicker, auto-pipettes may falter, etc. In the old days, technicians weren’t plagued by these concerns. They saw the color in the test tube and compared it to a standard or scale and read it visually (although some type of control would have benefited even these crude tests). So, surrendering the reading of a test tube or a cell solution to an electronic device increased the need for assurance that the reading was accurate.

Toward a working definition

At this point, we can propose a working definition of quality control as the concept evolved. Quality control is a means of assuring that laboratory test results are reliable and accurate, by using some known material or method that checks the steps in the test process, and will detect significant errors in the process. The concept is clear to the bench tech. QC is checking all phases of operation of the analyzer: the dilutions, reagents, microprocessors, sample probes, and others.

It is interesting to consider two aspects of QC today. One is that the statistical tools used are entirely arbitrary. We have decided that 2SD is a good tool to use to monitor QC, but there is nothing inherent in nature that says this is so. We have decided by consensus that it is useful and it works fairly well. One could argue that modern instruments are so much more precise than ones from the 1950s and 1960s that 3SD or 4SD would be more realistic, but that is a minor point. The second aspect is the concept of reasonableness, which is also arbitrary. We have decided that running QC every eight hours is reasonable in most cases. But one could argue that since we don’t know what goes on in that flow cell at the exact time of measurement, we must run QC material before and after every sample. But even this wouldn’t suffice, because we still don’t know what occurs at the exact time of measurement. Fortunately, we take a calculated but fairly safe gamble on the repeatability of the instrument, and we compromise with every eight hours.
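The 2SD convention described above is simple enough to express in a few lines. Here is a minimal sketch of the familiar mean-plus-or-minus-2SD check, assuming a history of prior control results; the hemoglobin control values are invented for illustration.

```python
import statistics

def qc_limits(history, n_sd=2.0):
    """Control limits as mean +/- n_sd standard deviations of past results."""
    mean = statistics.mean(history)
    sd = statistics.stdev(history)
    return mean - n_sd * sd, mean + n_sd * sd

def in_control(result, history, n_sd=2.0):
    """True if today's control result falls inside the limits."""
    low, high = qc_limits(history, n_sd)
    return low <= result <= high

# Illustrative history for a hemoglobin control, g/dL:
history = [12.1, 12.0, 12.2, 11.9, 12.1, 12.0, 12.2, 12.1, 11.9, 12.0]
print(in_control(12.1, history))  # inside 2SD -> True
print(in_control(13.0, history))  # far outside -> False
```

Note that nothing in the code privileges `n_sd=2.0`; as the text says, 2SD is a consensus choice, and swapping in 3 or 4 is a one-character change.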

To recap, laboratories used manual methods with no QC, then photo-optical methods came along, then even more complicated automated instruments, then commercial controls and survey materials became available, and then reasonable criteria for QC were established, and everything was fine. And then it wasn’t.

Primary methods

At some point between the early days of QC and today, a “one size fits all” mentality in regard to QC began to take hold. This is a cousin to the popular “zero tolerance” attitude, as in “no test shall go uncontrolled.” It evolved as more regulation and oversight became part of the lab world, promulgated by agencies more removed from the lab, which made a well-intentioned but wrong turn in applying the logic of QC. The error in judgment was to retroactively apply the principle of QC of automated tests to tests that are what I call Primary Methods, such as the erythrocyte sedimentation rate (ESR), manual hematocrit, and hemacytometer cell count. Primary Methods are methods against which other secondary methods are measured; they are tests which are inherently simple and/or rely solely on the expertise of the trained tech performing them, and as such cannot be quality controlled. They are the source of QC itself.

An example of using a Primary Method as the standard for a secondary test is calibrating an automated hematology analyzer’s hematocrit to manual hematocrit values. Once the manual readings have been used to calibrate the instrument, that automated parameter is ready to use, and must be QC’d from that point on. But there is nothing more fundamental than the manual hematocrit itself that could be used to QC those manual readings. Likewise for the ESR and manual cell count “controls.” There is no QC that supersedes the careful measurements of trained techs doing manual methods. And if the measurements for the QC material can’t be controlled, then neither can the test. We wind up using material verified by manual readings to verify our manual readings! It becomes a procedural shell game, with the tail wagging the dog. If you go back far enough in the testing chain, there is a point at which QC can no longer apply.

Empty QC

What then defines “empty” QC? Empty QC is any nominal QC that does not give techs performing the test any more confidence than they would have without the QC, and has, by logic or experience, no influence on the reporting of the test.

First, empty QC never detects errors in the system. I know coagulation controls catch real problems with the system reasonably often: reagent problems, pipette problems, temperature problems, and so on. At the molecular level, serological controls ensure that the reagents and reactions are working. If a problem is fixable by adjusting the limits or means, then it’s not empty QC. But if a QC procedure never catches a problem in the performance of the test it monitors, then by definition it is not QC. It may, however, fit one definition of insanity—doing the same thing over and over again and expecting different results—if we run a QC procedure month after month, year after year, and expect it to do something different than it did in the preceding months and years.

Second, empty QC has no clear criteria for exactly what it controls. I have not been able to get a good answer from inspectors about what exactly hemacytometer chamber count QC is supposed to QC. I do not know what errors ESR controls are supposed to find. I do not expect them to ever alert me that I am performing the test incorrectly, or that there is a problem with my results.

To illustrate the ineffectual nature of empty QC, imagine a case where a CSF on an infant is submitted to the lab from the emergency room. The ER doctor needs that result badly, and he needs it fast. The lab performs the count and runs the mandatory QC material. But the tech calls the ER doctor and says, “I’m sorry, doc. I can’t release that result because my QC is out.” I am confident that this scenario would never occur in a real lab. Why? Because we know, as techs, that when we do a manual count either it is accurate, or if it isn’t, the “QC” will not make any difference in whether or how we report the result.

So what should we do?

If any case has been made here for the definitions of QC, Primary Method, and Empty QC, then what should be our response? New tests that are introduced have to endure a battery of statistical standards to be accepted as valid tests. There are lab tests in use now that would not pass if introduced as new tests, such as bleeding time and band count, but unfortunately these are entrenched in the system and doctors continue to order them. But why are QC procedures not subjected to some validation process? In the case of valid QC (for example, the QC runs on a chemistry or hematology analyzer) the data for proving that the QC works by actually catching real problems are easy to find. I have plenty of those in my own lab. However, I have no data, after two years, that hemacytometer controls ever find problems in the system. I am confident they never will.

It should be the responsibility of the inspecting agencies to ensure that the requirements they mandate are worthwhile. Using questionable science and hand-me-down requirements that have never been vetted properly (such as adjusting citrate concentration for coag tests with high-hematocrit patients)4 is not worthy of the otherwise high standards of scholarship typical of inspecting agencies. Periodic proficiency survey material should be sufficient checks on Primary Methods, because they are blind tests and a group consensus is used to determine the acceptable values. These should be the only checks on Primary Methods.

Second, as discussed earlier, there should be some agreement about exactly what the particular QC is monitoring. If QC is simply “out,” but does not lead to any corrective action other than trying a new control, then it’s time to rethink if it is actually QC. I have no doubt someone can come up with a simple statistical tool or formula, or even a common sense guideline, that could help determine the efficacy of a QC procedure.
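One candidate for the “simple statistical tool” suggested above is nothing more than bookkeeping: over a review period, compare how often a QC procedure flags against how often a flag traced back to a real, correctable problem. This is a hypothetical sketch of that guideline, not an established metric; the function name and the two-year hemacytometer numbers are invented for illustration.

```python
def qc_efficacy(total_runs, flagged_runs, confirmed_problems):
    """Crude efficacy summary for a QC procedure over a review period.

    Returns (flag_rate, true_problem_rate):
      flag_rate        - fraction of runs in which QC flagged at all
      true_problem_rate - fraction of flags that reflected a real,
                          correctable problem (0.0 if it never flagged)
    """
    flag_rate = flagged_runs / total_runs
    true_rate = confirmed_problems / flagged_runs if flagged_runs else 0.0
    return flag_rate, true_rate

# Hypothetical two years of daily hemacytometer-control runs:
flag_rate, true_rate = qc_efficacy(total_runs=730, flagged_runs=15,
                                   confirmed_problems=0)
print(true_rate)  # 0.0 -> no flag was ever traced to a real problem
```

A true-problem rate of zero over a long run is exactly the pattern the text describes: QC that is occasionally “out,” but whose only corrective action is trying a new control.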

Finally, the industry, meaning us, should be more vocal in demanding validation (or even a common-sense explanation) for all QC requirements of dubious value. Call and write inspecting agencies. Challenge inspectors to explain exactly how a required QC actually works. Apart from other issues, we don’t have the time and money to spend on procedures that don’t uphold the quality of the tests we are performing. To echo a theme from the classic movie Jurassic Park, just because we can do QC doesn’t mean we should.

We know some change is possible. CAP, for instance, combined BANDS with SEGS when it couldn’t get a consensus from customers. A broader solution to the problem may take enough people getting concerned about wasting time and money, and someone willing to notice. Until then, stay in control.

A graduate of St. Luke School of Medical Technology in Pasadena, California, Roy Midyett, CLS, has been supervisor of hematology at Presbyterian Hospital in Whittier, California for 15 years.

References

  1. Todd JC, Sanford SH. Clinical Diagnosis by Laboratory Methods. 10th ed. W.B. Saunders; 1943.
  2. Lee LW, Schmidt LM. Elementary Principles of Laboratory Instruments. 3rd ed. CV Mosby; 1974:162-163, 174-175.
  3. Todd JC, Sanford SH. Clinical Diagnosis by Laboratory Methods. 14th ed. W.B. Saunders; 1969:20-29.
  4. Midyett R. Under the blue top: coags, corrections and ‘crits. MLO. 37(2):20-22.