In an era of shrinking labor pools and reimbursement, auto-validation of normal results in the clinical laboratory is a generally accepted mechanism to reduce turnaround times, increase staff productivity, and consistently qualify patient values against a standard rule set. It is one of the methods a laboratory can use to improve result turnaround time and minimize the risk of missing a crucial result, condition, or event.
For auto-validation projects, hematology presents particular challenges related to the subjectivity of the data and the desire to incorporate instrument flagging into auto-validation rules logic. Hematology labs also need to consider auto-validation rates in conjunction with manual smear review and rerun rates in order to fully benefit from a rules-based project.
Auto-validation and smear review reduction translate into significant productivity gains for a busy lab when you calculate the average amount of time per sample result review multiplied by the number of samples per day that no longer require review.
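The calculation described above can be sketched in a few lines. The figures below (90 seconds per manual review, 1,200 samples per day, an 85% auto-validation rate) are hypothetical assumptions for illustration, not benchmarks from any specific laboratory.

```python
# Hypothetical illustration of the productivity calculation described above.
# All figures are assumptions, not measured benchmarks.
avg_review_seconds = 90       # average tech time per manual result review
samples_per_day = 1200        # daily hematology sample volume
auto_validation_rate = 0.85   # fraction of results released without review

# Each auto-validated result is one review a tech no longer performs.
reviews_avoided = samples_per_day * auto_validation_rate
hours_saved_per_day = reviews_avoided * avg_review_seconds / 3600

print(f"Reviews avoided per day: {reviews_avoided:.0f}")
print(f"Tech hours saved per day: {hours_saved_per_day:.1f}")
```

Under these assumptions, roughly 25 tech-hours per day are freed for other work, which is why even a few percentage points of auto-validation rate matter.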
Imagine the efficiencies if the lab could achieve auto-validation rates in the range of >85%, smear review rates near 10%, and rerun rates in single digits! A software product that provides hematology-specific rules and actions can drive those kinds of numbers.
Regardless of the department undertaking the auto-validation project, program effectiveness can degrade over time unless a process of strategic review and maintenance is instituted. Timely, accurate, reliable data is needed to measure and analyze current rules to provide milestones and identify opportunities for continuous process improvement.
What are your numbers?
Many labs don’t know. Trends suggest that auto-validation rates can vary widely, especially in labs that lack sophisticated middleware or LIS rules packages. Even with advanced rules, the software may not easily provide insight to key metrics and the underlying reasons for those rates.
It’s important to know your numbers. Rigorous attention to where you are today and careful analysis of the numbers behind that current state will guide changes to move you to where you want to be.
Knowing auto-validation rates also provides data to optimize staff scheduling. Kim Moser, Core Lab Supervisor at OhioHealth (a regional health system based in Columbus, OH), says, “We use the Sysmex middleware in our hematology department. The management reports help us to determine if our staffing is appropriate. If the average auto-validation rate is really high, even though the volume of work coming in is large, we know there is minimal tech intervention required for result and smear reviews, and schedule accordingly.”
But what numbers are important? And how can you use the data to advance your auto-validation project closer to your ultimate goals?
You have meticulously established your rule set, tested the rules according to CAP/CLSI guidelines, and are satisfied they represent the best practices of your laboratory. You are officially auto-validating results, and techs embrace the new process and report feeling less workload fatigue. You are proud of your rules and the progress you made. Congratulations!
Many laboratories stop at this point without considering whether the rules they enacted continue to optimize operations over time. But what do laboratories measure for this type of analysis? These are points to consider when planning a new or evaluating an existing auto-validation program:
Turnaround times: For laboratories implementing their first auto-validation program, result turnaround time (TAT) should be the initial statistic used to evaluate the outcome. Decreased TAT is the first indication of the efficacy of your program.
Baseline auto and manual validation rates: If the laboratory utilized a previous method of auto-validation (middleware or LIS rules), there should be baseline statistics available. Comparing the pre- and post-project auto-validation rates and TAT metrics will determine the gain.
Post-project auto and manual validation rates: Even a modest 2% to 5% change in your auto-validation rate can translate into significant productivity impact. Continuously measuring current rates is crucial to determine if the program is accommodating the current patient population, identifying critical results, and responding to rule changes.
Other key statistics: You can assess auto-validation and rules effectiveness by regularly measuring the following and identifying any variances:
- Rerun/reflex rate
- Smear review rate
- Rule utilization rate by rule type
- Patient corrected report rate
- Rules trigger reports (used to determine the number of times a rule has executed)
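The rate statistics above are all simple proportions over a period's result records. The sketch below shows one way to compute them; the record fields and sample distribution are hypothetical, and real middleware or LIS exports will differ.

```python
# Sketch: computing the key auto-validation statistics listed above from a
# period's result records. Field names and data are hypothetical.
from dataclasses import dataclass

@dataclass
class ResultRecord:
    auto_validated: bool
    rerun: bool
    smear_review: bool
    corrected_report: bool

def rates(records):
    """Return each metric as a percentage of total results."""
    n = len(records)
    def pct(field):
        return 100.0 * sum(1 for r in records if getattr(r, field)) / n
    return {
        "auto_validation": pct("auto_validated"),
        "rerun": pct("rerun"),
        "smear_review": pct("smear_review"),
        "corrected_report": pct("corrected_report"),
    }

# Hypothetical day: 85 auto-validated, 5 reruns, 10 smear reviews.
sample = ([ResultRecord(True, False, False, False)] * 85
          + [ResultRecord(False, True, False, False)] * 5
          + [ResultRecord(False, False, True, False)] * 10)
print(rates(sample))
```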
A rules trigger report is of particular importance when it’s time to “tweak” your rules. If a rule never triggers or triggers more than expected, it becomes a candidate for modification.
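A trigger report boils down to counting executions per rule and flagging outliers at both extremes. The sketch below assumes a simple log of rule-name entries and a hypothetical 50% threshold for "triggers more than expected"; real middleware reporting will be more sophisticated.

```python
# Sketch of a rules trigger report: count how often each rule fired and flag
# modification candidates. Rule names and thresholds are hypothetical.
from collections import Counter

def trigger_report(trigger_log, all_rules, total_samples, high_pct=50.0):
    """trigger_log: list of rule names, one entry per rule execution.
    all_rules: every rule defined, so never-fired rules are also flagged."""
    counts = Counter(trigger_log)
    report = {}
    for rule in all_rules:
        n = counts.get(rule, 0)
        pct = 100.0 * n / total_samples
        if n == 0:
            flag = "review: never triggered"
        elif pct > high_pct:
            flag = "review: high trigger rate"
        else:
            flag = "ok"
        report[rule] = (n, pct, flag)
    return report

log = ["hold_wbc_flag"] * 600 + ["delta_check_hgb"] * 12
all_rules = ["hold_wbc_flag", "delta_check_hgb", "unused_rule"]
for rule, (n, pct, flag) in trigger_report(log, all_rules, 1000).items():
    print(f"{rule}: {n} triggers ({pct:.1f}%) -> {flag}")
```

In this hypothetical output, `hold_wbc_flag` fires on 60% of samples (likely too non-specific) and `unused_rule` never fires; both become candidates for modification or removal.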
Tips to improve your numbers
Set target auto-validation rates: Establish an achievable auto-validation rate in the context of the laboratory patient population, acuity, and workflow. You should set levels that assure significant gains in productivity and turnaround time. Ms. Moser, for example, reports that her lab has a smear review rate of approximately 10% thanks to the analyzer technology and flag-specific rules provided by the middleware.
Review auto-validation rules regularly: Changes made without supporting data or a specific reason could negatively impact your auto-validation rate; conversely, even a small increase in that rate can have a significant positive impact on productivity.
When should you consider modifying a rule? Ask yourself the following questions:
- Is there a rule that has never triggered, or triggers infrequently? Is this an important rule? Does it represent any clinical or operational need?
- Does a rule to hold results for manual review have a higher than expected trigger rate? What is the effectiveness of a rule that has a high trigger rate? Why do we have this rule if it triggers on the majority of the patients? Is this rule non-specific?
- Do any rules conflict or cross ranges? If you have multiple rules for the same test, consider consolidating them to avoid overlap.
A word of caution: as your rule base grows, pay careful attention to assure that rules do not conflict or invalidate each other. The LIS or middleware may provide tools to proactively identify potential rule conflicts so that rules can be modified or eliminated to optimize auto-validation statistics.
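One common class of conflict, overlapping numeric ranges on the same test, can be detected mechanically. The sketch below is a minimal pairwise check; the rule structure shown is hypothetical, and real LIS or middleware rule stores are far richer.

```python
# Sketch: detecting overlapping numeric hold ranges among rules for the same
# test. The (name, test, low, high) structure is a hypothetical simplification.
def find_overlaps(rules):
    """rules: list of (rule_name, test, low, high) hold-for-review ranges.
    Returns pairs of rule names whose ranges intersect on the same test."""
    conflicts = []
    for i in range(len(rules)):
        for j in range(i + 1, len(rules)):
            n1, t1, lo1, hi1 = rules[i]
            n2, t2, lo2, hi2 = rules[j]
            # Two intervals intersect when each starts before the other ends.
            if t1 == t2 and lo1 <= hi2 and lo2 <= hi1:
                conflicts.append((n1, n2))
    return conflicts

rules = [
    ("wbc_low_hold", "WBC", 0.0, 2.0),
    ("wbc_review", "WBC", 1.5, 4.0),   # overlaps wbc_low_hold on 1.5-2.0
    ("hgb_critical", "HGB", 0.0, 7.0),
]
print(find_overlaps(rules))  # [('wbc_low_hold', 'wbc_review')]
```

Overlapping pairs like this are candidates for consolidation into a single rule, as suggested above.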
Establish consistent measurement policies and tools: The following are key to successfully tracking rules efficacy over time:
- Identify a test or profile that best represents your laboratory operation—for example, glucose, hemoglobin or smear review.
- Establish an auto-validation benchmark rate that is achievable and will have measurable impact on your operational efficiency.
- Measure the actual auto-validation rate at regular intervals and identify variances.
- Review your rule trigger report to identify rule usage patterns by test/profile across hourly and daily time points.
- Identify rules that are underutilized, overutilized or conflict with other rules.
- Make small rule adjustments and compare your auto-validation rate statistics at the next measurement period.
- DON’T make wholesale rule changes: you may not be able to identify what improved or eroded your auto-validation rate.
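The benchmark-and-variance steps above can be sketched as a simple periodic comparison. The monthly figures, benchmark, and tolerance below are hypothetical.

```python
# Sketch: tracking the auto-validation rate across measurement periods and
# flagging variances beyond a tolerance. All figures are hypothetical.
def variance_report(rates_by_period, benchmark, tolerance_pct=2.0):
    """rates_by_period: dict of period label -> measured rate (%).
    Returns only the periods whose variance exceeds the tolerance."""
    flagged = {}
    for period, rate in rates_by_period.items():
        delta = round(rate - benchmark, 1)
        if abs(delta) > tolerance_pct:
            flagged[period] = delta
    return flagged

monthly = {"Jan": 86.1, "Feb": 85.4, "Mar": 81.7, "Apr": 85.9}
print(variance_report(monthly, benchmark=85.0))  # {'Mar': -3.3}
```

A flagged period such as March's -3.3% drop is the cue to pull the trigger report for that interval and look for a rule, instrument, or patient-population change, rather than making wholesale rule edits.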
Know your numbers
In today’s healthcare environment, we see increasing workloads, staff shortages, and growing visibility into lab turnaround times. Savvy lab managers understand that it’s of utmost importance to know their numbers and use their data to make informed staffing and process decisions.
Anne Tate is Senior Product Manager at Sysmex America, Inc., a Mundelein, Illinois-based provider of laboratory equipment and information management systems.