This month, we are sharing the results of our State of the Industry survey on Lab Data Analytics. Thank you to all who participated. For the article, Mike Hampton, chief commercial officer at Sapio Sciences, shared, “The labs making progress are embedding AI tools into workflows and applying platform-level intelligence, so context, governance, and decision-making remain connected. Disconnected tools can drive shadow AI and fragment workflows.”
I’ve noticed that shadow AI has become a more frequent topic of conversation this year. In the February 9th issue of our newsletter, LABline, we shared an interview on shadow AI with Alex Tyrrell, PhD, head of the Wolters Kluwer AI Center for Excellence.1 Dr. Tyrrell said that in 2025, they started to hear anecdotally about shadow AI becoming more prevalent. In Wolters Kluwer’s new survey of hospitals and health systems, 40% of respondents had encountered an unauthorized AI tool in their organizations, and nearly 20% had used one.
In healthcare, shadow AI refers to the unsanctioned use of artificial intelligence tools outside of an organization’s approved governance framework. Tools such as ChatGPT or transcription applications provide real value, but when they are used without organizational/IT oversight, cybersecurity experts warn they can introduce risk on a scale that most healthcare leaders and staff underestimate. These risks include compromised patient data privacy and inaccurate information generated by the tool.
Shadow AI uses
Labs can be a hotspot for shadow AI for the following reasons:
· Massive documentation burden: AI is incredible at documentation
· Regulatory language (CAP, CLIA, FDA): AI translates and summarizes instantly
· SOP writing and revising: AI does this in seconds
· Validation data: AI capably drafts summaries
· QC data analysis: AI can interpret trends
· Understaffing and burnout: AI feels like a secret assistant
· No formal AI policy: People assume “it’s probably fine”
Examples specific to the lab
Shadow AI in the lab looks like:
· A laboratory professional using ChatGPT to write a corrective action for a QC failure
· A supervisor pasting a CAP deficiency into AI to draft the response
· A manager asking AI to create a validation plan for a new analyzer
· An educator using AI to create competency questions
· A quality coordinator summarizing 30-page regulations
· A director using AI to write policies faster
Shadow AI risks
Shadow AI use doesn’t look like a potential attack; it looks like productivity. But when data is entered into an external AI platform, it effectively leaves the organization’s control. As explained by Fortified Health Security, “Anyone using shadow AI can unknowingly exfiltrate sensitive information to third-party systems where it becomes part of external models. Shadow AI doesn’t just leak data; it donates it to someone else’s model. Once uploaded, it cannot be retrieved or deleted.”2
I welcome your comments and questions — please send them to me at [email protected].
REFERENCES
1. Raths D. Navigating shadow AI: An interview with Wolters Kluwer’s Dr. Alex Tyrrell. MLO Online. February 9, 2026. Accessed February 10, 2026. https://www.mlo-online.com/information-technology/artificial-intelligence/news/55356115/navigating-shadow-ai-an-interview-with-wolters-kluwers-dr-alex-tyrrell
2. Fortified Health Security. 2026 Horizon Report. Accessed February 10, 2026. https://fortifiedhealthsecurity.com/horizon-report/2026-Horizon-Report.pdf
