Software systems—trust but verify

April 24, 2019

The laboratory landscape is exploding with software solutions that connect disparate systems across the enterprise. These solutions create and transfer clinical information to different endpoints and support pathways for clinical decision logic whose outputs can be harnessed as key analytical data.

The data generated by these lab software systems is fueling the growing demand for analytics and actionable informatics across the healthcare continuum. We know that lab software systems contain valuable raw descriptive data that can be transformed to support both diagnostic and predictive analytics as it moves up and down the information chain.

That means, now more than ever, we need to know that laboratory results and the digital logic used to support result delivery are accurate and reliable. With more and more laboratory results being managed by digital systems, how do we know that the logic behind those systems is correct? What is the lab relying on to ensure that the software that calculates, manipulates, and auto-verifies laboratory diagnostic information is, in fact, accurate? How can we be assured that the clinical data generated by these software systems does not cause patient harm?

The answer to these questions is that we must trust but verify these software solutions when they are implemented, when they are updated, and when any changes occur. Epner, Gans, and Graber, in their article "When Diagnostic Testing Leads to Harm: A New Outcomes Approach for Laboratory Medicine,"1 identify result inaccuracy as one of the five causes of patient harm related to analytical testing. We know that result accuracy can be affected by mechanical issues such as instrument calibration, but less conspicuous, and not as easily detected, are the errors that come from improperly configured data inputs, software rule syntax problems, and missing reference ranges on an EMR patient report. These omissions and errors lurk within laboratory software systems, causing insidious issues. They can manifest as a missing diagnostic comment, a critical value that never triggers a pop-up notification, or a test code omitted from an interface. What is not reported or not analyzed can impact patient care just as directly as a mechanical instrument or device error that affects a patient result.

Lab IT projects are stacking up, and continuous verification testing is now the norm rather than the exception. IT and LIS teams are tasked with perpetual software system verification projects to keep software current with vendor releases and clinically relevant to laboratory best practices. All of this takes time and resources to ensure that these various lab software systems are fully verified before they are placed into production.

Best practices

The following are general best practices for software verification testing of laboratory software. Refer to CLSI AUTO08 (Managing and Validating Laboratory Information Systems)2 and AUTO10 (Autoverification of Clinical Laboratory Test Results)3 for validation and verification documentation templates.

  • Maintain a test system. A separate test system or area for verifying new software or proposed changes will let you maintain production integrity and minimize downtime during software verification activities. The test system should be synchronized with the configuration of the production system so that what is verified in the test system matches what runs in production (a configuration-comparison sketch follows these lists).
  • Separate your testing into dry and wet testing. Determine your software system requirements for testing so you can verify your data input configurations. Can these configuration parameters be tested within the test system, or do the configuration variables require clinical samples to imitate the behavior of the production system? Evaluating your workflows and information transitions with a two-way diagram will give you the information you need to decide what can be tested in a closed environment and what should be tested across systems using simulated or clinical specimens.
  • Determine if cross testing is warranted. If data must cross systems, the validation plan should include the point of origin of the data and how it flows to the receiving system. The system inputs should be identified up front so that the expected output can be determined from your configuration. For example, if a new software system is being implemented, all test codes and profiles should be exercised across the software systems for both order placement and result retrieval. A full spectrum of results should be generated that represents the clinical values expected from your specific patient population, including numeric and alphanumeric data and structured and unstructured comments. These testable items should be saved as test case scenarios to be reused in future verification testing for comparison between testing events. Reusing these scenarios in subsequent testing events will reveal any issues introduced since the last testing project. This is commonly called regression testing and is used to verify that a software system is ready for deployment after software upgrades or system configuration changes (a regression-check sketch follows these lists).
  • Reduce human error with test automation. Many vendors offer software testing tools that can automate testing and verification. It is good practice to discuss with your vendor which testable items can be automated to reduce human error and standardize the process. Many vendors provide simulation or emulation tools that generate orders and results based on your specific inputs and configuration parameters. Use these tools to their fullest extent to augment and standardize your testing process (a simplified result-simulation sketch follows these lists).
  • Identify and document high-risk items. Good software practice emphasizes maintaining a risk analysis document that identifies your organization's software risk areas. You should maintain a current risk analysis for each of your lab software systems that identifies the software configuration and features at high risk of impacting patient care if configured incorrectly. A risk analysis matrix should be created to weigh severity against likelihood of occurrence to determine the level of risk (a minimal matrix sketch follows these lists). For example, if events are triggered by critical values or by multiple levels of reference ranges, these might be target areas of review to ensure each trigger event is configured correctly and spans the patient clinical range of interest.
  • Focus on data driven testing. Data driven testing means creating test case scenarios that use your specific data and have a directed outcome. If you use different units or test codes in one laboratory software system, but they are translated to a different nomenclature in a receiving system, then concentrate on these test case scenarios. Such scenarios should be designed to check a specific build or upgrade and should be included in a continuous testing program.

   The following are data driven testing recommendations to include in your testing suite (a combined sketch of these cases follows the list):

  • Positive test cases – entry of valid inputs and results to trigger a rule as expected.
  • Negative test cases – entry of invalid results or input values to verify that the rule does not trigger.
  • Boundary test cases – verification of maximum or minimum value entries that will trigger a rule or event based on the rule configuration.
  • Input variables and values – exercise a full range of input values. Narrowing your testing to a few items will not identify systematic or sentinel issues. 
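
To illustrate keeping a test system synchronized with production, here is a minimal sketch that compares two exported configurations and reports drift. It assumes both systems can export their settings as flat JSON key-value files; the file names and keys are illustrative, not any vendor's actual export format.

```python
# Sketch: detect configuration drift between test and production exports.
# Assumes each system can export its settings as a flat JSON object;
# "test_system.json" and "prod_system.json" are hypothetical file names.
import json

def config_drift(test_path: str, prod_path: str) -> dict:
    """Return every setting whose value differs between the two exports."""
    with open(test_path) as f:
        test_cfg = json.load(f)
    with open(prod_path) as f:
        prod_cfg = json.load(f)

    return {
        key: {"test": test_cfg.get(key, "<missing>"),
              "prod": prod_cfg.get(key, "<missing>")}
        for key in set(test_cfg) | set(prod_cfg)
        if test_cfg.get(key) != prod_cfg.get(key)
    }

if __name__ == "__main__":
    for setting, values in config_drift("test_system.json",
                                        "prod_system.json").items():
        print(f"DRIFT {setting}: test={values['test']!r} "
              f"prod={values['prod']!r}")
```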
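
The regression testing described above can be as simple as replaying saved scenarios and comparing current outputs against a stored baseline. A minimal sketch follows; the scenario structure and the run_scenario hook are hypothetical stand-ins for whatever interface your LIS or middleware test tools expose.

```python
# Sketch: replay saved test case scenarios and flag regressions since the
# last testing event. The scenario format and the stub rule are
# hypothetical; substitute your own test-system interface.
from typing import Callable

def regression_check(scenarios: list[dict],
                     run_scenario: Callable[[dict], dict]) -> list[str]:
    """Compare each scenario's current output to its saved baseline."""
    failures = []
    for sc in scenarios:
        result = run_scenario(sc["inputs"])
        if result != sc["baseline"]:
            failures.append(f"{sc['name']}: expected {sc['baseline']!r}, "
                            f"got {result!r}")
    return failures

if __name__ == "__main__":
    # Stand-in for driving the real test system.
    def run_stub(inputs: dict) -> dict:
        return {"flag": "HH" if inputs["K"] >= 6.0 else "N"}

    saved = [
        {"name": "critical potassium", "inputs": {"K": 6.8},
         "baseline": {"flag": "HH"}},
        {"name": "normal potassium", "inputs": {"K": 4.2},
         "baseline": {"flag": "N"}},
    ]
    for failure in regression_check(saved, run_stub):
        print("REGRESSION:", failure)
```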
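
Where a vendor emulation tool is not available for a given pathway, even a small script can generate simulated results spanning a clinical range. The sketch below builds simplified HL7 v2-style OBX segments; real interface testing requires full message framing (MSH, PID, OBR segments) per your interface specification, and the analyte, values, flags, and reference range shown are illustrative only.

```python
# Sketch: generate simulated results across a clinical value range, in the
# spirit of vendor order/result emulation tools. Simplified HL7 v2-style
# OBX segments only; values, flags, and reference range are illustrative.
def obx_segment(set_id: int, code: str, name: str, value: float,
                units: str, ref_range: str, flag: str) -> str:
    """Build one pipe-delimited OBX segment for a numeric result."""
    fields = ["OBX", str(set_id), "NM", f"{code}^{name}", "1",
              f"{value:g}", units, ref_range, flag]
    return "|".join(fields)

# Exercise low, normal, high, and critical values for one analyte.
simulated = [(2.4, "LL"), (3.0, "L"), (4.2, "N"), (5.6, "H"), (6.8, "HH")]
for i, (value, flag) in enumerate(simulated, start=1):
    print(obx_segment(i, "K", "Potassium", value, "mmol/L", "3.5-5.1", flag))
```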
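
A risk analysis matrix can be kept as lightweight as a severity-times-occurrence score per configuration item. Below is a minimal sketch; the 1-to-5 scales, thresholds, and example items are illustrative and should be aligned with your organization's risk policy.

```python
# Sketch: a severity-by-occurrence risk matrix for lab software
# configuration items. Scales, thresholds, and items are illustrative.
def risk_level(severity: int, occurrence: int) -> str:
    """Rate severity and occurrence from 1 (lowest) to 5 (highest)."""
    score = severity * occurrence
    if score >= 15:
        return "HIGH"
    if score >= 8:
        return "MEDIUM"
    return "LOW"

items = [
    ("Critical value pop-up trigger", 5, 3),
    ("Reference range display on EMR report", 4, 2),
    ("Interface test code mapping", 4, 3),
]
for name, severity, occurrence in items:
    print(f"{risk_level(severity, occurrence):6} {name} "
          f"(severity={severity}, occurrence={occurrence})")
```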
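
To make the positive, negative, and boundary cases concrete, here is a combined sketch run against a hypothetical critical-value rule. The rule, its 6.0 mmol/L threshold, and the expected outcomes are stand-ins for whatever your auto-verification or event-trigger logic actually implements.

```python
# Sketch: positive, negative, and boundary test cases for a hypothetical
# critical-value rule. Threshold and expectations are illustrative.
def critical_potassium(value: float) -> bool:
    """Hypothetical rule: flag potassium results at or above 6.0 mmol/L."""
    return value >= 6.0

test_cases = [
    # (description, input value, expected trigger)
    ("positive: clearly critical result", 7.2, True),
    ("negative: normal result must not trigger", 4.2, False),
    ("boundary: exactly at the threshold", 6.0, True),
    ("boundary: just below the threshold", 5.9, False),
]

for description, value, expected in test_cases:
    actual = critical_potassium(value)
    status = "PASS" if actual == expected else "FAIL"
    print(f"{status}: {description} (value={value}, triggered={actual})")
```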

Conclusion

The goal of software testing and verification is to ensure the accuracy and quality of the patient results generated by laboratory instruments and devices. Employing testing best practices when verifying software systems has never been more important than it is today, given the growth of laboratory intelligence and event-triggering software. The laboratory software systems of today are rich with the clinical knowledge and information that forms the informatic backbone of our healthcare system. We must take care to trust but verify their accuracy and integrity continuously to protect patient safety and support better healthcare outcomes.

REFERENCES

  1. Epner PL, Gans JE, Graber ML. When diagnostic testing leads to harm: a new outcomes approach for laboratory medicine. BMJ Qual Saf. 2013;22(Suppl 2):ii6-ii10. doi:10.1136/bmjqs-2012-001621. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3786651/
  2. CLSI. AUTO08: Managing and Validating Laboratory Information Systems. Clinical and Laboratory Standards Institute; 2006. https://clsi.org/standards/products/automation-and-informatics/documents/
  3. CLSI. AUTO10: Autoverification of Clinical Laboratory Test Results, 1st Edition. Clinical and Laboratory Standards Institute; 2006. https://clsi.org/standards/products/automation-and-informatics/documents/