Temperature measurement in the clinical laboratory…good enough isn’t good enough

Dec. 1, 2011

The clinical laboratory relies on controlled temperature to maintain the stability of testing samples. GLP and individual laboratory SOPs require monitoring of the environmental parameters of specialized test facilities, so the laboratory environment, analytical reactions, instrumentation, and materials must be monitored and controlled to their required temperatures. This includes instruments, incubators, refrigerators/freezers, and specimen holding rooms. Monitoring these temperatures is not a 9-to-5 job…it’s 24/7, including weekends and holidays.

This article gives a quick overview of the importance of accurate temperature measurement in the clinical laboratory, some common misconceptions about the instrumentation used, and what can be done to help maintain temperature control.

Temperature: the first parameter

Clinical laboratories need to measure a range of physical parameters, among them humidity, pressure, and air flow. But paramount is temperature. It’s the basic building block of most analytical measurements. You are not in the business of estimating: you need to know precisely what the temperature is, even when the lab is unoccupied. The CLIA regulations (42 CFR 493, Subpart K, Secs. 493.1252, 493.1253, and 493.1256) cover this for the lab, instrumentation, and materials.

Most in vitro test systems are of biological origin. They are usually highly sensitive, so their control conditions (such as incubator temperatures) are very important.

Physical properties can also change as temperatures vary. The stability of reagents kept at “room temperature” decreases if the temperature exceeds about 35°C. Under atmospheric pressure, the viscosity of aqueous liquids decreases with increasing temperature, generally by about 2% per °C. You can see this when patient samples taken from a refrigerator have a different viscosity from those equilibrated at room temperature. Since water is commonly used as a reference, it too has a different viscosity at different temperatures. Common analytical methods, such as fluorometry, can also be sensitive to changes in the temperature (and pH) of the sample. Common assays for ALT and AST are run at defined temperatures, typically 30°C or 37°C. All of this highlights the need for measurements to be taken at a constant temperature, and why many clinical laboratories operate at room temperature. But what is “room temperature”?
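As a rough illustration of that 2%-per-degree figure, the short sketch below compounds the decrease over the jump from refrigerator to room temperature. It is a simplification for illustration only; real viscosity-temperature behavior is nonlinear and fluid-specific, and the numbers used are assumed, not taken from any standard.

```python
# Rough estimate of relative viscosity change for an aqueous sample moved from
# a refrigerator (4 °C) to room temperature (20 °C), assuming the approximate
# rule of thumb of about a 2% decrease per °C. Real behavior is nonlinear
# (water is roughly 1.5-1.6 mPa·s at 4 °C vs about 1.0 mPa·s at 20 °C).

def relative_viscosity(delta_t_c: float, pct_per_degree: float = 0.02) -> float:
    """Return the factor by which viscosity changes over delta_t_c degrees,
    compounding the per-degree percentage decrease."""
    return (1.0 - pct_per_degree) ** delta_t_c

if __name__ == "__main__":
    factor = relative_viscosity(20.0 - 4.0)   # 4 °C -> 20 °C
    print(f"Estimated viscosity at 20 °C ≈ {factor:.2f} × viscosity at 4 °C")
    # Prints roughly 0.72, i.e., about a 28% drop over 16 °C.
```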

Macro vs. mini environment

In general, the “macro” environment deals with the laboratory work area itself. Here, the commonly used term is “room temperature.” For most laboratories, room temperature is usually either 20°C or 25°C (293 K or 298 K, 68°F or 77°F). However, unlike Standard Temperature and Pressure (STP), room temperature is not a uniformly defined scientific term (and even STP has several definitions). Room temperature often simply means a temperature inside a temperature-controlled building. An associated term, “ambient temperature,” means the local temperature and can be the same as room temperature indoors. All this depends on the design and performance of the HVAC system serving the clinical laboratory. The typical laboratory can have both “hot spots” and “cold spots,” depending on supply- and return-vent locations and air flow patterns. This in turn affects both the assumed and the measured room temperature. It is especially important in controlled areas such as surgical operating rooms, isolation rooms, and pharmacy storage areas. Since HVAC systems rely on pressure to move air, even small changes in differential pressure can have an impact on delivered air and hence on local temperature. Temperature mapping of the laboratory work areas can pinpoint areas of temperature instability.
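A temperature-mapping exercise can be as simple as comparing average logger readings at several locations against the room setpoint. The sketch below is a hypothetical example; the location names, setpoint, and tolerance are invented for illustration.

```python
# Hypothetical temperature-mapping sketch: given average readings from data
# loggers placed around the laboratory, flag locations that deviate from the
# room-temperature setpoint by more than an allowed tolerance.
# The location names, setpoint, and tolerance below are illustrative only.

SETPOINT_C = 20.0
TOLERANCE_C = 2.0

readings_c = {
    "bench A (near supply vent)":  18.4,
    "bench B (center of room)":    20.3,
    "reagent shelf (near window)": 23.1,
    "analyzer bay":                21.0,
}

for location, temp in sorted(readings_c.items(), key=lambda kv: kv[1]):
    deviation = temp - SETPOINT_C
    label = "OK"
    if deviation > TOLERANCE_C:
        label = "HOT SPOT"
    elif deviation < -TOLERANCE_C:
        label = "COLD SPOT"
    print(f"{location:32s} {temp:5.1f} °C  ({deviation:+.1f} °C)  {label}")
```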

The “mini” environment deals with smaller controlled areas, such as incubators, refrigerators/freezers, coolers used for blood storage and transport in the O.R., biological safety hoods, pharmacy compounding sterile isolators, etc. Here, a different approach for temperature mapping, sensor selection and placement needs to be taken as compared to the macro environment. While the “penny in a cup” method of temperature monitoring is hopefully no longer used, establishing range limits for reagents, specimens, and other materials in mini environments presents its own challenges.

Some common misconceptions about temperature measurements—and why they are inaccurate

Temperature measurement is always difficult. Not always; it is sometimes challenging, depending on the material being measured and the accuracy you expect. For example, at cryogenic temperatures (-200°C), accuracies of +/-5°C can be achieved with care, but +/-0.1°C can be difficult. From 0°C to 50°C, +/-5°C is easy, but +/-0.1°C can still be a challenge. Some of the measurement difficulties have to do with thermal gradients in the material being measured, especially in materials with poor thermal conductivity, such as plastics.

Thermal gradients don’t occur in the laboratory. Yes, they do. In fact, they are a common cause of measurement error, especially when measuring materials with poor thermal conductivity, such as air, most liquids, and non-metallic solids. Just measure the temperature at different depths in a tall ice-bath beaker and you can see a vertical temperature gradient of a few degrees (just don’t use a bimetallic thermometer).

If I’m using a calibrated sensor, my readings are accurate. Not necessarily. Even calibrated thermometers eventually develop offset, scale, and linearity errors, and any temperature sensor can drift over time and with temperature cycling. Hysteresis (where the measured value depends on the direction from which it was approached) can be seen in simple bimetallic (dial) thermometers. Ensure that all temperature measurement instrumentation has been calibrated using a NIST-certified reference or that it carries current NIST (or other required) certification from an outside metrology laboratory. For in-house certifications, CLSI document I2-A2 describes the procedures to use.1
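As an illustration of how offset and scale errors can be characterized and corrected, the sketch below derives a two-point correction from comparison readings against a certified reference. It is a simplified example with made-up numbers, not a substitute for the procedures in CLSI I2-A2; linearity errors would need at least a third point, and drift and hysteresis still require periodic recalibration.

```python
# Illustrative two-point correction: derive offset and scale from readings
# taken alongside a certified reference at two temperatures, then apply the
# correction to routine readings.

def two_point_correction(ref_lo, meas_lo, ref_hi, meas_hi):
    """Return a function mapping a raw reading to a corrected temperature."""
    scale = (ref_hi - ref_lo) / (meas_hi - meas_lo)
    offset = ref_lo - scale * meas_lo
    return lambda raw: scale * raw + offset

# Example: the sensor read 0.4 °C in an ice bath (reference 0.0 °C) and
# 36.6 °C in a 37.0 °C bath (both values are made up for illustration).
correct = two_point_correction(0.0, 0.4, 37.0, 36.6)
print(f"Raw 25.0 °C  ->  corrected {correct(25.0):.2f} °C")
```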

When I measure, I’m measuring the temperature of the sample. What you are actually measuring is the temperature of the sensor. With the exception of non-contact devices (e.g., IR thermometers), heat conduction between the sample and the sensor is what produces the temperature reading.

All temperature sensors respond to temperature changes fairly quickly. Actually, there is considerable variation in response time: some sensors respond in less than a second, while others take minutes. The time for a sensor to reach 99% of the measured value is referred to as the “t99” time. Use this specification to compare temperature sensors and match them to the analytical procedure.
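If a sensor behaves approximately like a first-order system with time constant tau, its t99 time works out to about 4.6 times tau (since 1 - e^(-t/tau) reaches 0.99 at t = tau * ln(100)). The sketch below applies that approximation to a few illustrative time constants; real probes are not perfectly first-order, so treat the result as an estimate.

```python
import math

# For a sensor that behaves like a first-order system with time constant tau,
# the time to reach 99% of a step change is t99 = tau * ln(100) ≈ 4.6 * tau.
# (Real probes are not perfectly first-order; treat this as an approximation.)

def t99_from_tau(tau_s: float) -> float:
    return tau_s * math.log(100.0)

for tau in (0.5, 2.0, 15.0):   # illustrative time constants in seconds
    print(f"tau = {tau:5.1f} s  ->  t99 ≈ {t99_from_tau(tau):5.1f} s")
```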

A thermometer that has a digital read-out is the most accurate. Unfortunately, the measuring device connected to the sensor is rarely perfect. Digital meters, chart recorders, and data loggers can all have calibration, linearity, and temperature-dependent errors. Ensure that NIST calibration covers the total sensor-plus-readout system, as temperature effects on the readout device can be a subtle source of error as well.

My refrigerator has a temperature sensor, so it’s good enough. Maybe not. Most refrigerators use a simple thermocouple that usually drifts over time. Compare temperature readings of this thermocouple with an independent NIST-certified digital thermometer, and you’ll probably find differences, even in different areas of the refrigerator.

Not all temperature sensors are the same

Common temperature sensors in the clinical laboratory are thermocouples, thermistors, and RTDs. A quick comparison helps explain how they differ.

A thermocouple is based on the thermoelectric effect: the junction between two dissimilar metals produces a voltage that increases with temperature. Compared with resistance-type thermometers, thermocouples offer the advantage of a higher upper temperature limit, up to several thousand degrees Celsius. Their response times are quick, but their long-term stability is somewhat worse and their measuring accuracy is poorer. They are frequently used in ovens and other instrumentation operating at elevated temperatures, often above 250°C. Thermocouples are an accepted lower-cost solution to temperature measurement in industry and can be found in many laboratory instruments.
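For a sense of the voltage-to-temperature relationship, the sketch below converts a Type K thermocouple voltage to an approximate temperature using the nominal sensitivity of about 41 µV per °C near room temperature. Real instruments use the published ITS-90 polynomial tables and proper cold-junction compensation; the numbers here are illustrative only.

```python
# Rough Type K thermocouple conversion near room temperature, assuming the
# nominal Seebeck coefficient of about 41 µV/°C. Real instruments use the
# NIST ITS-90 tables and proper cold-junction compensation; this sketch only
# shows the basic voltage-to-temperature relationship.

SEEBECK_UV_PER_C = 41.0   # approximate, Type K, near 25 °C

def type_k_temperature(voltage_uv: float, cold_junction_c: float) -> float:
    """Approximate hot-junction temperature from measured thermocouple voltage."""
    return cold_junction_c + voltage_uv / SEEBECK_UV_PER_C

# Example: 492 µV measured with the cold junction at 25 °C -> about 37 °C.
print(f"{type_k_temperature(492.0, 25.0):.1f} °C")
```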

A thermistor is made from certain metal oxides whose resistance decreases with increasing temperature. Because of this they are often called negative temperature coefficient (NTC) sensors. They are usually employed in instrumentation and measurements below about 200°C because of their smaller size, smaller thermal mass, and reasonable response time.
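The resistance-to-temperature conversion for an NTC thermistor is commonly approximated with the beta (B-parameter) equation, as sketched below. The 10 kΩ-at-25°C and B = 3950 K values are typical catalog figures used purely for illustration; the Steinhart-Hart equation is more accurate in practice.

```python
import math

# NTC thermistor conversion using the simplified beta (B-parameter) equation:
#   1/T = 1/T0 + (1/B) * ln(R/R0)      (temperatures in kelvin)
# R0 = 10 kΩ at 25 °C and B = 3950 K are typical catalog values, used here
# purely for illustration.

R0_OHMS = 10_000.0
T0_K = 298.15          # 25 °C
BETA_K = 3950.0

def ntc_temperature_c(resistance_ohms: float) -> float:
    inv_t = 1.0 / T0_K + math.log(resistance_ohms / R0_OHMS) / BETA_K
    return 1.0 / inv_t - 273.15

print(f"{ntc_temperature_c(10_000.0):.1f} °C")   # 25.0 °C by definition
print(f"{ntc_temperature_c(6_530.0):.1f} °C")    # roughly 35 °C for these values
```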

Resistance temperature detectors (RTDs) employ the property that electrical resistance of metals varies with temperature. They are positive temperature coefficient (PTC) sensors whose resistance increases with temperature. The main metals used are platinum and nickel, while the most widely used sensor is the 100 ohm or 1000 ohm platinum resistance thermometer. RTDs are the most accurate sensors for temperature measurements, with very good long-term stability. A typical accuracy for a platinum resistance thermometer is +/-0.5°C, with some designs exceeding +/-0.07°C.
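For a Pt100, the standard IEC 60751 relationship R(T) = R0(1 + A·T + B·T²) can be inverted to recover temperature from measured resistance, as in the sketch below (valid for temperatures at or above 0°C).

```python
import math

# Pt100 RTD conversion for temperatures at or above 0 °C using the standard
# IEC 60751 relationship R(T) = R0 * (1 + A*T + B*T^2), inverted for T.
# A and B are the standard coefficients for a 385-alpha platinum element.

R0 = 100.0              # ohms at 0 °C (use 1000.0 for a Pt1000)
A = 3.9083e-3
B = -5.775e-7

def pt100_temperature_c(resistance_ohms: float) -> float:
    """Solve R = R0*(1 + A*T + B*T^2) for T (valid for T >= 0 °C)."""
    return (-A + math.sqrt(A * A - 4.0 * B * (1.0 - resistance_ohms / R0))) / (2.0 * B)

print(f"{pt100_temperature_c(100.00):.2f} °C")   # 0.00 °C
print(f"{pt100_temperature_c(113.61):.2f} °C")   # about 35 °C
```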

With different designs and performance characteristics available, the clinical laboratory generally relies on the instrumentation manufacturer to design in the right sensor for the task, and in most cases the sensor chosen is adequate. But the instrument should be both chosen and certified with the specific application in mind. Ensuring that the temperature sensor employed meets the application, accuracy, and precision requirements means paying attention to instrument specifications and then testing against your requirements. You might not want to use a thermocouple in a high-accuracy application, and using a thermometer with a penetration probe designed for liquids to measure ambient air temperature is not advisable either.

Calibrated vs. adjusted

Two terms commonly used in metrology are often misunderstood: “calibrated” and “adjusted.” We usually interpret “calibrated” as meaning a laboratory instrument has been tested at a metrology laboratory and returned with a certificate stating its accuracy for a particular parameter as tested. This means the metrology laboratory tested the laboratory instrument against the manufacturer’s specifications and certified its performance. It does not necessarily mean the laboratory instrument was “adjusted” to meet design specifications. The instrument may have been received at the metrology laboratory and found to perform within the manufacturer’s specifications, so no adjustment was needed.

If, however, the laboratory instrument was found not to perform within the manufacturer’s specifications, then it would require “adjustment” to bring its performance in line. You can see this on the calibration certificate in the “as found” (incoming) and “as left” (outgoing) documentation of performance. Instrument sensor aging can be judged by the amount of actual adjustment needed over time.
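One simple way to judge aging is to track the “as found” errors from successive certificates and estimate a drift rate, as in the hypothetical sketch below; the dates and errors shown are invented.

```python
# Hypothetical sketch: estimate sensor drift from the "as found" errors recorded
# on successive calibration certificates. Dates and errors below are invented.

from datetime import date

as_found = [          # (calibration date, as-found error in °C at the check point)
    (date(2009, 1, 15), +0.05),
    (date(2010, 1, 20), +0.12),
    (date(2011, 1, 18), +0.21),
]

first_date, first_err = as_found[0]
last_date, last_err = as_found[-1]
years = (last_date - first_date).days / 365.25
drift_per_year = (last_err - first_err) / years
print(f"Approximate drift: {drift_per_year:+.3f} °C per year")
```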

Certifying organizations such as JCAHO (Joint Commission) require laboratory instrumentation to be NIST certified. However, in general, NIST does not require or recommend any set recalibration interval for measuring standards or instrumentation. Specific recalibration intervals depend on a number of factors. These may include the following:

  • accuracy requirements set by the laboratory
  • requirements set by regulation
  • environmental factors that can affect an instrument’s stable operation
  • the inherent stability of the specific device or instrument.

Accuracy is not automatic

Temperature measurement accuracy is established through calibration. The entire measuring “system” (sensor and read-out device) should be traceable back to a known standard. That standard should be proven higher in accuracy than the device that you are calibrating. It makes no sense to calibrate a data logger of +/-0.5°C accuracy with a simple bimetallic dial thermometer that has an accuracy of +/-2°C. Do this, and the calibration is meaningless. And being “NIST traceable” means little if you know nothing about the accuracy of the reference standard. NIST traceability does not necessarily mean “accurate,” so make sure there is proof behind any claims of “NIST-certified” equipment.
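A common metrology rule of thumb (not a requirement stated here) is to keep the reference at least four times more accurate than the device being calibrated. The sketch below applies that 4:1 check to the dial-thermometer example above and to a more suitable certified reference.

```python
# Quick check of the device-to-reference accuracy ratio before calibrating.
# A ratio of at least 4:1 is a widely used metrology rule of thumb; the
# bimetallic-dial example below fails it badly. Values are illustrative.

def accuracy_ratio(device_accuracy_c: float, reference_accuracy_c: float) -> float:
    return device_accuracy_c / reference_accuracy_c

cases = [
    ("data logger (+/-0.5 °C) vs dial thermometer (+/-2 °C)", 0.5, 2.0),
    ("data logger (+/-0.5 °C) vs certified RTD (+/-0.05 °C)", 0.5, 0.05),
]

for name, dev, ref in cases:
    ratio = accuracy_ratio(dev, ref)
    verdict = "adequate" if ratio >= 4.0 else "inadequate"
    print(f"{name}: ratio {ratio:.1f}:1 -> {verdict}")
```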

Automated vs. manual temperature monitoring

Let’s not forget that it’s people who both record laboratory temperature data and are responsible for that data. Workload and time pressures can make this a challenge. And while manual systems work well if properly managed, more reliable and dependable automated systems are available today.

The “dependable” feature of automated temperature monitoring systems comes from their alarming and documentation capabilities. Out-of-control temperatures generate immediate alarms, 24/7, so corrective action can be taken quickly; some systems even alert you via email or text message. The second, equally important, design feature is automated documentation. Every reading from the automated temperature measuring system is recorded, with routine reports generated for monitoring and trend analysis. This same automated documentation aids validation and audits with non-alterable data files and electronic signatures. These systems can also be used in pharmaceutical compounding (such as in a hospital pharmacy), satisfying 21 CFR Part 11 requirements.
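A minimal version of the limit-check-and-alert logic might look like the sketch below. The read_temperature_c() and send_alert() functions are placeholders for whatever your logger hardware and notification service provide; a real monitoring system adds tamper-evident records, redundancy, and audit trails on top of this.

```python
import time

# Minimal sketch of an automated limit check. read_temperature_c() and
# send_alert() are placeholders (hypothetical names) for the real sensor call
# and notification hook (email, SMS, paging).

LOW_LIMIT_C, HIGH_LIMIT_C = 2.0, 8.0      # e.g., a vaccine refrigerator
CHECK_INTERVAL_S = 300                    # every 5 minutes

def read_temperature_c() -> float:        # placeholder for the real sensor call
    return 5.1

def send_alert(message: str) -> None:     # placeholder for email/SMS/paging
    print("ALERT:", message)

def monitor_once() -> None:
    temp = read_temperature_c()
    if not (LOW_LIMIT_C <= temp <= HIGH_LIMIT_C):
        send_alert(f"Temperature {temp:.1f} °C outside {LOW_LIMIT_C}-{HIGH_LIMIT_C} °C")

if __name__ == "__main__":
    while True:
        monitor_once()
        time.sleep(CHECK_INTERVAL_S)
```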

We take temperature measurements for granted, since they are such an integral part of the day-to-day operation of a laboratory. As such, the laboratory environment (macro and mini), plus individual instrumentation, storage areas, etc., must be monitored continuously. The challenge is not just to do it, but to do it right: choosing the right components, knowing their advantages and limitations, ensuring their accuracy to meet regulatory requirements, and maintaining the required records. The next time you record “Room Temperature” as 20°C, check whether you actually read that measurement from a proven accurate, dependable thermometer.

Robert Bove’ is Marketing Manager for the Industrial Products line of New Jersey-based Testo, Inc.

References

  1. Clinical and Laboratory Standards Institute. Temperature Calibration of Water Baths, Instruments, and Temperature Sensors; Approved Standard, Second Edition. CLSI document I2-A2; 1990.

This column addresses compliance, regulatory, legal, certification, and additional concerns in the lab. Readers can submit questions to [email protected].
