Q I have a question regarding the dilution of samples
with a known standard to verify a “zero” result. I understand the reason
this is done, but I have trouble applying it to a test that normally has
“zero” for a result (e.g., alcohol, salicylates). I cannot recall a
single instance where this dilution has discovered a discrepancy between
the original report and the diluted result in the many years I have been
in the lab. Is it necessary to confirm a result that we, in fact, are
expecting? Is there a valid reason for doing this extra testing,
especially given the state-of-the-art equipment and methodologies now in use?
A Practices will vary widely among laboratories regarding
confirmation of “zero” results for various assays. The only regulatory
requirement related to this issue is the requirement to validate the
analytical measurement range, or AMR, of each assay. That generally involves
performing linearity studies for chemistry assays until the relationship between
expected and observed analyte concentrations is no longer linear.
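As a rough illustration of the idea (not a regulatory procedure), a linearity check can be sketched as comparing expected and observed concentrations at each dilution level and flagging levels whose percent recovery deviates beyond an allowable limit; the 10% limit and the dilution series below are assumed examples, not standards:

```python
# Sketch: evaluate linearity by percent recovery at each dilution level.
# The 10% allowable deviation is an illustrative assumption, not a standard.

def check_linearity(expected, observed, max_dev_pct=10.0):
    """Return (level, recovery %, within-limits) for each expected/observed pair."""
    results = []
    for exp, obs in zip(expected, observed):
        recovery = 100.0 * obs / exp
        within = abs(recovery - 100.0) <= max_dev_pct
        results.append((exp, round(recovery, 1), within))
    return results

# Hypothetical dilution series: observed values flatten at the high end,
# suggesting the top of the analytical measurement range has been exceeded.
expected = [10, 50, 100, 500, 1000]
observed = [10.2, 49.0, 103.0, 455.0, 700.0]
for level, rec, ok in check_linearity(expected, observed):
    print(level, rec, "OK" if ok else "outside AMR")
```

In this sketch, the top level recovers only 70% of the expected value, so reporting would be limited to concentrations within the validated range.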
Values below that may be reported as “less than some
amount” or for toxicology as “not detected.” Validation of undetectable
analyte concentrations as you have described is a good practice under
certain circumstances, such as when non-blood matrices are used (e.g.,
body-fluid tumor markers), when functional sensitivity of the assay is
marginal at the clinical decision point, or for problematic assays in which the
lab and/or medical director does not have great confidence. This is
especially true for certain immunoassays susceptible to the hook effect,
such as tumor markers that can vary in concentration over several orders of
magnitude.
For routine chemical analysis, there is probably little
value in doing this for every zero value, particularly for tests expected to
have no analyte present most of the time. The decision about whether or not
to continue this practice is up to the laboratory director and leadership;
there may be reasons related to a particular method or test that justify the extra testing.
—Brad S. Karon, MD, PhD
Contrast media effect on blood work
Q We often get patients coming to the laboratory after they
have had imaging studies done with contrast. How much do the different
contrast media affect a patient's blood work? Should we wait a certain
amount of time before the blood work is drawn? Do you know of any articles
that have been done on the effects of imaging contrast media on a patient's blood work?
A Your question does not state any specific contrast media,
so I will address the contrast medium most commonly discussed in the
literature in the context of preanalytical interference: fluorescein. One
textbook indicates that fluorescein dye used in angiography can affect the
following laboratory tests: creatinine, cortisol, and digoxin.1
The text does not indicate if these analytes are elevated or decreased;
however, Young's reference book summarizing the body of knowledge on
preanalytical interference does. It points to a study that shows fluorescein
increases bilirubin, creatinine, digoxin (on the Abbott TDx), urine protein,
and reticulocyte counts.2 It also cites studies that show ionized
calcium and urine creatinine are decreased.
There is no set standard on how much time should pass
between the infusion of contrast media and drawing blood. As a general rule,
the more time that passes, the better. At the very least, specimens should
be labeled as being drawn after the infusion of radiologic dyes, and a
notation to that effect should accompany the test result. Stating the time
of the infusion and the name of the dye or contrast media will help the
physician interpret test results properly.
—Dennis J. Ernst, MT(ASCP), Director
Center for Phlebotomy Education
- Garza D, Becan-McBride K. Phlebotomy Handbook, 7th
ed. Upper Saddle River, NJ: Prentice Hall; 2005.
- Young D. Effects of Preanalytical Variables on
Clinical Laboratory Tests. Washington, DC: AACC Press; 2007.
Holding urine for culture
Q We were told to keep our refrigerated urine samples for
72 hours. This happened after a series of positive urinalyses reflexed to
culture had not been set up. I thought refrigeration only keeps the urine
for 24 hours, and that it would be a mistake to culture it beyond that time?
A Urine must be refrigerated within 30 minutes of
collection,1 and urine samples for bacterial culture can be kept refrigerated for up to 24 hours.
Holding urine specimens, preserved or unpreserved, for
longer than these recommended times can affect the culture results of the
specimen and can alter clinical significance of culture results.1
Therefore, storing a urine specimen for longer than 24 hours at
refrigeration temperatures and then performing bacterial culture is not recommended.
—Susan E. Sharp, PhD, D(ABMM)
Director of Microbiology
Kaiser Permanente – NW
- Miller JM. A Guide to Specimen Management.
III. Specimen collection and processing. Washington, DC: ASM Press;
Prelabeling specimen tubes
Q Is it accurate to state that tubes should never be
prelabeled before collection but labeled at the bedside?
A The CLSI standards come down squarely against the
practice of prelabeling. When tubes are labeled in advance, there is always
the chance a labeled but unused tube could be used accidentally on another
patient. Hypothetically, if one's practice is to prelabel tubes before the
draw and the venipuncture is unsuccessful, one could forget to discard the
tubes or leave them in the room for someone else to use. That creates an
opportunity for a dangerous scenario to unfold.
The CLSI venipuncture standard (H3-A6) lists the steps of
the venipuncture procedure in chronological order.1 Labeling
tubes is Step 15, which comes after the specimen is drawn. On page 8, the
standard states that labels should be placed on the tubes after
collection is complete, and on page 18 it states that tubes must not be labeled
before they are filled. It is quite clear that prelabeling is not
acceptable. It is good risk management for a facility's procedure manual to
reflect the standards.
—Dennis J. Ernst, MT(ASCP)
- Clinical and Laboratory Standards Institute.
Procedures for the Collection of Diagnostic Blood Specimens by
Venipuncture; Approved Standard-Sixth Edition. CLSI Document H3-A6.
Wayne, PA: Clinical and Laboratory Standards Institute; 2007.
POCT glucose and poor circulation
Q Lately, we have had several instances of getting 450+
glucose readings on patients who have an underlying circulatory issue;
therefore, I would expect these patients probably should not be having
capillary glucose testing. When a specimen is sent to the lab to confirm,
the glucose drops at least to half of what it was at bedside. What causes
such a wide swing in values? Does glucose tend to “sit” in the extremities
when circulation is poor?
A A number of different studies have examined the accuracy
of bedside glucose meters in various patient populations with hypotension,
poor peripheral perfusion, or edema. Two studies done on patients in the
emergency department and ICU with hypotension (systolic blood pressure less
than 80 mm Hg) both found that capillary glucose measurement systematically
underestimated venous glucose in this patient population.1,2 In
contrast, another study performed on 75 patients with systemic hypoperfusion
(defined as systolic blood pressure less than 90 mm Hg or requirement for
vasoactive agents) found relatively good agreement between capillary whole
blood and arterial whole blood glucose, with 95% limits of agreement of
approximately +/-30 mg/dL.3
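For context, 95% limits of agreement of the kind cited in that study are conventionally computed (Bland-Altman) as the mean difference between the two methods plus or minus 1.96 standard deviations of the differences. A minimal sketch with made-up paired values (not data from the cited study):

```python
# Sketch: Bland-Altman 95% limits of agreement between two glucose methods.
# The paired values below are illustrative, not data from any cited study.
from statistics import mean, stdev

def limits_of_agreement(method_a, method_b):
    """Return (lower, upper) 95% limits of agreement in the input units."""
    diffs = [a - b for a, b in zip(method_a, method_b)]
    bias = mean(diffs)                # systematic difference between methods
    spread = 1.96 * stdev(diffs)      # 1.96 sample SDs of the differences
    return bias - spread, bias + spread

capillary = [98, 143, 210, 77, 165]   # mg/dL, hypothetical
arterial  = [95, 150, 200, 80, 158]   # mg/dL, hypothetical
low, high = limits_of_agreement(capillary, arterial)
```

Roughly 95% of paired differences are expected to fall between the two limits, which is why a reported interval of about +/-30 mg/dL was interpreted as relatively good agreement.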
A recent study comparing capillary whole blood to
venous-plasma glucose in patients with poor peripheral perfusion related to
vasopressor use or peripheral edema found that in both categories of
patients (poor perfusion and peripheral edema) capillary whole-blood glucose
systematically overestimated venous-plasma glucose. Even in this study,
however, almost all results agreed within approximately 2 mmol/L (36 mg/dL).4
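The unit conversion used above follows from the molar mass of glucose (approximately 180 g/mol), so 1 mmol/L corresponds to about 18 mg/dL:

```python
# Glucose unit conversion: molar mass of glucose is ~180 g/mol,
# so 180 mg/mmol divided by 10 dL/L gives ~18 mg/dL per mmol/L.

def mmol_to_mgdl(mmol_per_l):
    return mmol_per_l * 18.0

def mgdl_to_mmol(mg_per_dl):
    return mg_per_dl / 18.0
```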
Because studies in these various patient populations have
reached different conclusions, there is probably no single physiologic
explanation for why capillary whole-blood measurements are affected by poor
peripheral perfusion, edema, or hypotension. Variables most likely include
the patient population under consideration; meter device characteristics and
measurement technology (in particular, the extent of the hematocrit effect, but also
any effect of oxygen content or pH); sample-volume requirements for the
measuring device; and the range of glucose values encountered (i.e., lower
values expected in a population on intravenous insulin vs. higher values
expected on conventional insulin dosing). Underdosing of the test strip and
residual alcohol on the capillary puncture site are two preanalytic
variables that can lead to falsely increased glucose-meter results, though
not specifically related to this patient population.
I cannot offer a clear answer for why a subset of
patients with poor peripheral perfusion would have dramatically increased
capillary whole-blood glucose results relative to a laboratory plasma
measurement. Start by looking at collection technique and meter dosing or,
as you mentioned, consider a process for avoiding capillary sampling in these patients.
—Brad S. Karon, MD, PhD
- Atkin SH, Dasmahapatra A, Jaker MA, et al.
Fingerstick glucose determination in shock. Ann Intern Med.
- Sylvain HF, et al. Accuracy of fingerstick glucose
values in shock patients. Am J Crit Care. 1995;4:44-48.
- Kulkarni A, et al. Analysis of blood glucose
measurements using capillary and arterial blood samples in intensive
care patients. Intensive Care Med. 2005;31:142-145.
- Kanji S, et al. Reliability of point-of-care testing for glucose
measurement in critically ill adults. Crit Care Med.
“For 40 years, MLO has been an authoritative and unbiased source of lab information. I am proud to have been a part of
MLO.”— Daniel M. Baer, MD (deceased), “Tips from the clinical experts” editor for 25 years and member of
MLO editorial advisory board. We miss you!