Emerging technologies push data integration

June 1, 2009

In an ideal world,
patient data, reference intervals, and test results from both anatomic
pathology and clinical laboratories would flow seamlessly between systems
and into an electronic medical record where a physician could access all
the information she needs. Most hospitals and clinics do not yet operate
in such an ideal world. While writing interfaces between best-of-breed
lab systems can solve some of the problems, the growth of fields such as
genomics, proteomics, and molecular testing is signaling a trend toward
an integrated system with a single database.

Challenges at the basic level

Reference intervals, or “levels,” are typically health-associated, derived
from a reference sample of many healthy individuals. It is from this
sampling, based on factors such as age and gender compiled over time, that
normal levels for lab tests are determined. And because federal regulations
require labs to verify that instrument manufacturers’ reference intervals
are appropriate for their own patient populations, these reference intervals
often vary greatly from lab to lab.
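
By convention, a reference interval is often taken as the central 95% of
results from the healthy reference sample. A minimal sketch of the
nonparametric calculation, using hypothetical glucose values:

```python
# Illustrative sketch: estimating a reference interval as the central 95%
# of results from a healthy reference sample (nonparametric percentile
# method). The analyte and values below are hypothetical.

def reference_interval(values, lower_pct=2.5, upper_pct=97.5):
    """Return (low, high) bounds covering the central 95% of values."""
    ordered = sorted(values)

    def percentile(p):
        # Linear interpolation between the closest ranks.
        k = (len(ordered) - 1) * p / 100.0
        f = int(k)
        c = min(f + 1, len(ordered) - 1)
        return ordered[f] + (ordered[c] - ordered[f]) * (k - f)

    return percentile(lower_pct), percentile(upper_pct)

# Hypothetical fasting glucose results (mg/dL) from healthy adults.
healthy_sample = [78, 82, 85, 88, 90, 91, 92, 94, 95, 97, 99, 101, 104, 108]
low, high = reference_interval(healthy_sample)
print(f"Reference interval: {low:.0f}-{high:.0f} mg/dL")
```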

In addition, reference intervals can vary from
patient to patient, says Dale Sanders, vice president and CIO of
Chicago-based Northwestern Medical Faculty Foundation at Northwestern
University, which is affiliated with the 897-bed Northwestern Memorial
Hospital. Sanders gives as an example a patient who, due to certain
health issues, runs chronically high or low; that level is “normal” for
that particular patient, even though the lab’s reference intervals place
it in an abnormal range.

If the physician is using an electronic medical
record (EMR) that incorporates a physician-preferences feature, Sanders
says, the physician can adjust reference intervals on a patient-by-patient
basis.
Furthermore, these changes can be made within some laboratory
information systems (LIS), says Curt Johnson, vice president of sales
and marketing at Carmel, IN-based Orchard Software Corp. “In our system,
there is a rules engine built in so you can set the ranges to the
patient level.”
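
Orchard’s engine is not detailed in the article, but conceptually such a
rule can layer patient-specific ranges over the lab’s defaults. A
hypothetical sketch, with invented names and values:

```python
# Hypothetical sketch of patient-level reference ranges layered over lab
# defaults; not Orchard's actual implementation. A patient-specific range,
# when one has been set, takes precedence when flagging a result.

LAB_DEFAULTS = {"potassium": (3.5, 5.1)}                   # mmol/L, illustrative
PATIENT_OVERRIDES = {("pt-001", "potassium"): (3.0, 4.8)}  # set by physician

def flag_result(patient_id, analyte, value):
    low, high = PATIENT_OVERRIDES.get((patient_id, analyte),
                                      LAB_DEFAULTS[analyte])
    if value < low:
        return "LOW"
    if value > high:
        return "HIGH"
    return "NORMAL"

# 3.2 mmol/L is below the lab default but inside this patient's own range.
print(flag_result("pt-001", "potassium", 3.2))   # NORMAL
print(flag_result("pt-002", "potassium", 3.2))   # LOW (lab default applies)
```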

Even so, physicians may still find it difficult
to access all the levels of data they want, says Sanders. At
Northwestern Memorial, physicians are “grabbing information” from
both the pathology lab system and from the clinical lab system, he says.
“We are going directly to the sources,” he explains. “Our EMR is not
granular enough, so a lot of information related to analyzing samples,
for example, which is workflow related, is being stripped out in the
EMR.” Plus, he notes, “There is no EMR that can handle genomics because
of the graphical-rich nature of that data.”

Sanders also says that since genomics is a
relatively young science, “There are still no clear reference
intervals.” Because the medical entities at Northwestern are actively
involved in genetic testing and compiling genetic and family-related
data, Sanders says he is currently building an enterprise data warehouse
in order to match clinical outcomes data with genomic data. The problem
that still remains, though, is getting all relevant data to the point of
care.

Data convergence

With the advent
of new tests and treatments on the genetic and molecular levels, “There is a
blurring of lines between the pathology lab and clinical lab,” says Brian
Keefe, director of marketing for clinical products at Milford, MA-based
Psyche Systems Corp. “With cytogenetics and molecular diagnostics, a lot is
falling under pathology. But pathology systems were not designed to handle a
lot of graphical information.”

And since a clinical LIS is best at handling data
points and numbers, the data from both systems need to be merged. “You have
to have a single, relational database designed so data is accessible by
pathology and clinical and by the physician,” Keefe says. With such a
database, he says, test results from both pathology and clinical labs can be
presented in a single report.

The importance of generating a single report from
both labs also was stressed by Johnson, who gave as an example a physician
who orders a Pap smear and an HPV test. Since the Pap smear is run in the
pathology lab and the HPV test is done in the clinical lab, the physician
not only has to wait for each set of test results but also will receive two
separate reports on the same patient. “In our system, we integrated anatomic
pathology and clinical labs, and molecular testing into one system with a
single database,” Johnson says. But he also notes: “Even if I have one lab
system, I still need to integrate it into an EMR.”
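
A minimal sketch of the single-database idea, using Johnson’s Pap-plus-HPV
example with an invented schema: one results table serves both labs, so one
query yields one combined report.

```python
# Minimal sketch of the single-database idea: one results table holds both
# clinical and anatomic-pathology results so a single query yields one
# report. Schema and data are invented for illustration.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE results (
    patient_id TEXT, department TEXT, test_name TEXT, result TEXT)""")
db.executemany("INSERT INTO results VALUES (?, ?, ?, ?)", [
    ("pt-001", "pathology", "Pap smear", "NILM (negative)"),
    ("pt-001", "clinical",  "HPV DNA",   "Not detected"),
])

# One report, both labs: no waiting to collate two separate documents.
for dept, test, result in db.execute(
        "SELECT department, test_name, result FROM results "
        "WHERE patient_id = ? ORDER BY department", ("pt-001",)):
    print(f"{dept:10} {test:12} {result}")
```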

Using a traditional HL7 interface would still present
a challenge, he says. “An HL7 interface works well with quantitative data;
less well with qualitative data; and poorly with images.” One solution is
to embed .pdf files in the HL7 transmission. But Johnson adds, “The future
is Web services interfacing. That is where we need to go.”
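
In HL7 v2, embedding a report typically means Base64-encoding the PDF into
an OBX segment’s encapsulated-data (ED) field. A simplified sketch; a real
message would need full MSH, PID, and OBR segments and proper escaping:

```python
# Rough sketch of embedding a PDF report in an HL7 v2 OBX segment using
# the ED (encapsulated data) type with Base64 encoding. Simplified: real
# messages carry full MSH/PID/OBR segments and escape reserved characters.
import base64

def pdf_to_obx(pdf_bytes, set_id=1):
    encoded = base64.b64encode(pdf_bytes).decode("ascii")
    # OBX-2 = ED; OBX-5 components: source app, type, subtype, encoding, data
    return f"OBX|{set_id}|ED|PDF^Report||^AP^PDF^Base64^{encoded}||||||F"

fake_pdf = b"%PDF-1.4 ...report bytes..."   # placeholder content
print(pdf_to_obx(fake_pdf)[:80] + "...")
```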

Richard R. Rogoski is a freelance journalist based in Durham, NC. Contact
him at [email protected].

Changes in test values crucial to quality lab reports

By Rami Jaschek

As modern-day labs
struggle to stand out from the crowd, providing clients with a more
dependable and understandable result report is a major goal.
Physicians are monitoring patients more closely than ever and need to
know what to make of changes in patients’ test values.

Ensuring that changes in patient results stem from
true variability and not from testing errors has traditionally been handled
by following strict quality-control (QC) procedures. Much focus has been
given to setting the most appropriate target values, selecting the right
Westgard rules to apply to each analyte, and addressing questions of
required testing frequency.
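
For readers less familiar with the Westgard rules, here is a sketch of two
common ones, 1-3s (one control beyond 3 SD) and 2-2s (two consecutive
controls beyond 2 SD on the same side), run against an illustrative
control series:

```python
# Illustrative check of two common Westgard rules against a control series:
# 1-3s rejects on one value beyond +/-3 SD; 2-2s rejects on two consecutive
# values beyond +/-2 SD on the same side. Target and SD are made up.

TARGET, SD = 100.0, 2.0

def westgard_violations(series):
    z = [(x - TARGET) / SD for x in series]
    violations = []
    for i, zi in enumerate(z):
        if abs(zi) > 3:
            violations.append((i, "1-3s"))
        if i > 0 and abs(zi) > 2 and abs(z[i - 1]) > 2 and zi * z[i - 1] > 0:
            violations.append((i, "2-2s"))
    return violations

controls = [100.1, 99.5, 104.2, 104.5, 100.3, 93.2]
print(westgard_violations(controls))   # [(3, '2-2s'), (5, '1-3s')]
```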

Traditional QC methods cover only a small segment of
the entire testing process. Numerous pre-analytical factors that can change
patient results tremendously are not monitored by this system. Transport
conditions and timing, potentially incorrect blood-draw techniques, storage
conditions, centrifuge alignment, and many other factors do not affect
control materials and, as such, are not caught by traditional
quality control.

The true goal of a quality program is to ensure that
patient results released today can be compared to those released yesterday.
The law of large numbers tells us that when dealing with a large number of
random values, overall leading parameters such as average, median, and
standard deviation of the values can be expected to remain constant from one
day to the next. Monitoring patients’ daily values for consistency must
become part of the routine of any lab seeking to improve its bottom line.
That requirement can actually be taken one step further by plotting daily
(or any other periodicity) values on a Levey-Jennings graph for ease of
analysis and the ability to treat those values as yet another control in the
QC system.
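
A sketch of that daily-average check, with hypothetical data: each day’s
mean of patient results is compared, Levey-Jennings style, against limits
derived from historical day-to-day variation.

```python
# Sketch of monitoring daily patient means as an extra "control": each
# day's average is compared against limits derived from historical
# day-to-day variation, Levey-Jennings style. Data are hypothetical.
from statistics import mean, stdev

historical_daily_means = [101.2, 100.8, 99.9, 100.5, 101.0, 100.2, 100.6]
center = mean(historical_daily_means)
spread = stdev(historical_daily_means)

def check_day(day_results, z=2.0):
    m = mean(day_results)
    status = "OK" if abs(m - center) <= z * spread else "INVESTIGATE"
    return m, status

today = [98.0, 97.5, 99.1, 98.4, 97.9]   # a suspicious downward shift
print(check_day(today))                  # mean falls below the limits
```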

Taking this concept to the next step involves
treating the stream of patient results coming from the analyzer as yet
another control that can be tracked. Many quality managers are familiar with
requests to halt the release of results when a lengthy stream of abnormal
results appears. Tracking such incidents allows for capture of errors
and problems occurring in between control runs. Checking for 10 abnormal
results in a sequence, however, is actually just an application of a
Westgard concept to patient results, adapted to account for the fact that
patient results are far more variable than results from a fixed
control material.
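
The 10-in-a-sequence check reduces to a running counter over the result
stream. A sketch, with an illustrative threshold and reference interval:

```python
# Sketch of the "halt on a long run of abnormal results" idea: a running
# counter over the result stream that trips after N consecutive results
# outside the reference interval. N and the interval are illustrative.

def runs_of_abnormal(results, low, high, n=10):
    run = 0
    for i, value in enumerate(results):
        run = run + 1 if not (low <= value <= high) else 0
        if run >= n:
            return i   # index where the alert fires
    return None

stream = [4.1, 4.4] + [6.3] * 10 + [4.2]   # 10 straight highs mid-stream
print(runs_of_abnormal(stream, low=3.5, high=5.1))  # fires at index 11
```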

Why stop here? Many other rules can be easily applied
with great success to the same patient results. For example, if the
“2of3-2s” rule is applied to control values, perhaps “4of5-2s” can be
applied to patient results treated as controls, and so on. Unifying control
values together with patient-result streams and daily averages under the
Levey-Jennings display and the Westgard testing framework allows for a more
unified, comprehensive, and, most importantly, bottom-line-oriented
quality program.
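
Such a “k of n beyond 2 SD” rule generalizes easily. A simplified two-sided
sketch (Westgard-style rules typically also require the violations to fall
on the same side of the mean):

```python
# Sketch of a generalized "k of n beyond +/-2 SD" rule, like the 4of5-2s
# rule the author suggests for patient-result streams. Simplified to a
# two-sided check; mean and SD here would come from the monitored
# population, and the values are illustrative.

def k_of_n_2s(values, mean, sd, k=4, n=5):
    flags = [abs(v - mean) > 2 * sd for v in values]
    return any(sum(flags[i:i + n]) >= k for i in range(len(flags) - n + 1))

patient_stream = [100.5, 105.1, 104.8, 99.8, 105.3, 104.9]
print(k_of_n_2s(patient_stream, mean=100.0, sd=2.0))   # True
```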

Keep in mind that physicians are looking for the
ability to understand the difference in patient results and to assign a
level of clinical significance to that difference. Making the lab’s
analytical error as small as possible is only a preamble to the more
complete discussion of the significance of change. Variability in patient
results stems both from the analytical variability in the lab and from the
biological variability within each patient. The changes that can be expected
in patient results when looking at cholesterol testing, for example, are not
the same ones expected when looking at results of thyroid-stimulating
hormone.

Tables of biological variance are available on
numerous online sites, yet they are often left out of a decision as critical
as whether a treatment to lower a patient’s cholesterol level is working.
To provide a more complete service to clients, labs can include on their
result reports an indicator of the significance of the change seen in a
patient’s results. Combining the publicly available biological-variance
information with the analytical variance learned from internal QC results
gives the lab a simple total-variance calculation. This number can then
easily be used as the basis for a clear indication of significant (2-SD
change) and very significant (3-SD change) changes in patient values.
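
That calculation is conventionally a reference-change-value-style check:
combine the analytical CV learned from internal QC with the published
within-subject biological CV, then flag differences beyond two or three
combined SDs. A sketch with illustrative CV figures:

```python
# Sketch of the significance-of-change calculation: combine analytical CV
# (from internal QC) with within-subject biological CV (from published
# tables) into a total variance, then flag changes beyond 2 or 3 combined
# SDs (a reference-change-value-style check). CV figures are illustrative.
import math

def change_significance(prev, curr, cv_analytical, cv_biological):
    cv_total = math.sqrt(cv_analytical**2 + cv_biological**2)
    delta_pct = abs(curr - prev) / prev * 100
    # sqrt(2) because a difference of two results carries both variances.
    z = delta_pct / (math.sqrt(2) * cv_total)
    if z >= 3:
        return "VERY SIGNIFICANT"
    if z >= 2:
        return "SIGNIFICANT"
    return "NOT SIGNIFICANT"

# Cholesterol: analytical CV ~2%, within-subject biological CV ~6% (rough
# figures; published values vary). A drop from 240 to 190 mg/dL:
print(change_significance(240, 190, cv_analytical=2.0, cv_biological=6.0))
```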

Taking these steps will allow labs to provide not
only more accurate information but also comprehensive,
decision-supporting information that enables physicians to make better, more
informed decisions and, ultimately, provide better care to their patients.

Rami Jaschek is vice president of Technology for NeTLIMS, based in Jersey
City, NJ. Contact him at rami@netlims.com or visit www.netlims.com.