A year of AI in the lab

Oct. 27, 2025
4 min read

Happy Thanksgiving! I can’t believe it’s the end of the year already. One theme I’ve noticed throughout 2025 is the growing presence of artificial intelligence (AI) in laboratory medicine. Across MLO articles — from digital pathology to microbiology — AI is increasingly becoming part of the story. In case you missed them, here is a sampling:

  • Reimbursement for laboratory services — Are you leaving money on the table? (January 2025) “Some laboratories have integrated AI with their laboratory information system (LIS), which can manage various functions including time-consuming repetitive tasks, processing of claims, and flagging errors before they are released.”
  • Department of Laboratory Medicine and Pathology, Mayo Clinic Florida: Innovation, collaboration, and commitment to excellence in patient care (April 2025) “The laboratory is in the process of digitizing its archived anatomic pathology slides…Future plans will use the large, diverse datasets to build powerful artificial intelligence models in pathology.”
  • Evolving paradigms in diabetes diagnosis and classification: ADA standards of care 2025 and the use of artificial intelligence (July 2025) “By analyzing data from blood sugar levels, medical history, and even retinal scans, AI tools can predict diabetes subtypes, identify high-risk patients, and tailor solutions to individual needs — with improved accuracy, reducing healthcare costs and addressing critical gaps in diagnosis, treatment, and daily management.”
  • Healthcare tech visionary (July 2025) “If a pathologist needs to research a particular finding or validate a complex diagnosis, they can use AI to find similar cases and information quickly.”
  • Modernizing microbiology: Achieving balance between comprehensive insights and efficiency (September 2025) “The introduction and advancement of artificial intelligence–enabled data interpretation tools is also helping increase consistency in culture media analysis.”

In response to AI’s rapid advancement, the Joint Commission and Coalition for Health AI (CHAI) published high-level guidance, The Responsible Use of AI in Healthcare, for the deployment and use of AI tools in healthcare organizations. Outlined in the guidance are seven elements of responsible use of AI. Below are some laboratory-specific considerations related to each:

  1. AI policy and governance structures: Lab leadership should be part of the governance committee that provides oversight of AI tools involved in patient care.
  2. Patient privacy and transparency: Lab data frequently contain protected health information, which must be safeguarded against unauthorized disclosure.
  3. Data security and data use protections: Laboratory information systems must have robust protection from data leaks and cyberattacks.
  4. Ongoing quality monitoring: Labs must continuously monitor the performance of AI tools to ensure they continue to deliver accurate, reliable, and safe results.
  5. Voluntary, blinded reporting of AI safety-related events: Labs should have mechanisms to flag, investigate, and report incidents in which AI tools contributed to wrong or missed results or adverse outcomes.
  6. Risk and bias assessment: Labs should determine whether AI tools have undergone bias detection during development and monitor tools as appropriate.
  7. Education and training: Education and training help ensure safe implementation and integration of AI tools into laboratory workflows.

As AI continues to evolve, these principles can help laboratories adopt new technologies responsibly — enhancing both quality and patient safety.

I welcome your comments and questions — please send them to me at [email protected].

About the Author

Christina Wichmann

Editor-in-Chief, Medical Laboratory Observer | Endeavor Business Media
