Laboratory proficiency testing sports anomalies

Feb. 1, 2004
Proficiency testing (PT) seems to send shivers of dread throughout laboratories across the nation. It would seem to be a simple thing, given that it is really just another analysis like hundreds we all do each day in the laboratory.

While PT once served as a way for laboratories to
verify that everything was running smoothly and assure that personnel
training remained current, the implementation of CLIA 88 transformed
PT into a regulatory tool to assess lab performance and possibly remove
a laboratory's ability to test if it did not “pass.”

Yet, this regulatory tool is not an exact science.
Statistics and a whole host of other factors influence the
proficiency-testing process without giving black-and-white answers as to
the “correctness” of a laboratory's functioning. Each PT event
gives a snapshot of one point in time that may reflect a number of
factors, some of which are outside the laboratory's control.

In light of PT's regulatory status, the Centers for
Medicare and Medicaid Services (CMS) regards ungraded analytes for
specific analyzers/reagent groups to be a problem. At the Medical
Laboratory Evaluation program, our proclivity is to grade as much as
possible, as long as it is scientifically defensible to do so. We have
established a minimum number of laboratories that are required in an
individual grading group (also known as a peer group) before the
comparison among those laboratories would be valid for quantitative
data. CMS regulates PT programs rather stringently and requires that
whenever a valid peer group of greater than 10 does not attain a certain
consensus, or passing rate, referee laboratories must be used to
determine consensus.

A change in the recently released regulations was prompted by
the large number of ungraded results that were occurring. For most
analytes, the required consensus rate was lowered from 90% to 80%;
immunohematology results remained at a 95% consensus rate. This
change, barely noticed in the regulations, has increased the number of
labs with unsatisfactory or unsuccessful proficiency testing, since more
grading groups qualify for grading under the lowered consensus
requirement. This, in some ways, has made life easier for those of us
running a PT program, since we are able to grade more groups without
needing to identify referees (an activity mandated by CMS).

Another provision in the new regulations, the
requirement that laboratories verify the accuracy of any ungraded PT,
has been a source of frustration for proficiency-testing programs in
particular. The execution of this requirement, in many cases, is nearly
impossible using the PT provider statistics, which is the usual
recommendation of inspectors.

There are always very good reasons why a
proficiency-testing program does not grade. Among the most common
reasons are:

1) There are not enough users of a particular
instrument/reagent system to form a peer group, and the results produced
by that system differ significantly from results obtained with
other instrument/reagent systems for that test; thus, they cannot be
combined with another group to reach the minimum number of laboratories
required for grading;

2) A particular grading group shows two divergent
clusters of results. This happens frequently when the manufacturer
changes the reagent formulation or calibration and the
breakpoint falls along lot numbers, so what appears to be a homogeneous
group of instrument/reagent users may not be. This is generally a
time-limited problem that resolves as all users move to the newer lot
numbers;

3) A specimen matrix issue arises. This happens
occasionally for a variety of reasons: again, a reagent formulation
change, but it has also occurred when the PT program's manufacturer
modifies the PT material in an attempt to improve the
product.

Each of these situations will lead to statistics that
are not usable by the laboratory for the determination of its individual
performance.

The upshot of this is that while CMS is trying its
best to ensure patient safety and the quality of all labs, it needs to
be aware of the pitfalls of using PT as a regulatory tool. It does not
fit every situation, and there are valid reasons why PT is not graded
and labs cannot use the statistical data provided to evaluate their own
results.

It is my hope that CMS will become more cognizant of
these anomalies in the PT process and will alert its inspectors to
them, so that labs are not inappropriately penalized when their results
are not graded. I also hope CMS will inform laboratories of the
alternatives they can use to validate the accuracy of their testing,
other than comparison to potentially invalid statistics.

Connie Laubenthal, MS, CLS(NCA), MT(ASCP) is the director of the Medical Laboratory Evaluation program at the American College of Physicians. In the formative years of the COLA accreditation program, Laubenthal served as its first surveys division manager and in
various other capacities within the organization. She obtained her BS degree in medical technology from the Medical University of South Carolina and her MS degree in administration from Central Michigan University.

 February 2004: Vol. 36, No. 2