Navigating shadow AI: An interview with Wolters Kluwer’s Dr. Alex Tyrrell

Feb. 9, 2026

How can healthcare leaders navigate the rising issue of “shadow AI”?

In this article, originally published by our sister publication, Healthcare Innovation, Contributing Senior Editor David Raths explores the rise of shadow AI in healthcare with Wolters Kluwer’s Alex Tyrrell, Ph.D. Their conversation highlights survey findings that reveal widespread awareness and use of unauthorized AI tools among clinicians, raising concerns about patient safety, data privacy, and the need for effective governance frameworks.

Read the full article from Healthcare Innovation below.

How Should Health System IT Leaders Respond to ‘Shadow AI’?

For years, IT leaders have warned about the risks of “shadow IT” — the unauthorized use of software or cloud services. A new subset of this issue is “shadow AI,” in which clinicians and other health system employees use unauthorized large language models. Healthcare Innovation recently spoke with Alex Tyrrell, Ph.D., head of advanced technology at Wolters Kluwer and chief technology officer for Wolters Kluwer Health, about the company’s new survey of healthcare professionals and administrators on this topic. 

Healthcare Innovation: Why did Wolters Kluwer want to ask about shadow AI in a survey and were there any surprising responses?

Tyrrell: In 2025, we started to hear anecdotally about shadow AI becoming more prevalent, but we didn't have any hard data to back it up, so we commissioned the survey. And yes, there were some results that were definitely notable. You're starting to see numbers like 40% of respondents are aware of some form of shadow AI. That isn't necessarily surprising given the conversations we're having, but a hard data point puts it in perspective. 

When you look across the range of risks, things like patient safety arise. Folks who have used these technologies are familiar with the fact that they hallucinate and can make errors. 

Another interesting point is the awareness that there's potential for de-skilling. That suggests an understanding that, over time, as these tools become more ubiquitous, they may simply begin to be trusted. There seems to be awareness of the future risks: as we begin to trust AI more and put more emphasis on AI tools in a clinical setting, that creates the potential for additional risk.

HCI: One survey item that interested me was that one in ten respondents said they had used an unauthorized AI tool for a direct patient care use case. That would seem to raise patient safety concerns for a health system's top executives.

Tyrrell: Yes, that particular data point is definitely concerning, as you suggest. I think the risk profile there is both the fact that unvetted AI could potentially introduce an error, but also there's the privacy concern. We think this is one of the concerns that is more difficult for people to understand initially when they interact with these tools. We use these tools in our everyday lives. We're familiar with the idea of a hallucination and how that can have an impact, but perhaps not with the idea that exposing protected and private data to these models is really an existential risk. We borrow the Las Vegas tagline — what happens in an LLM potentially stays in that LLM forever. It's difficult for people to understand that existential risk, and that's definitely a concern.

HCI: I have heard of two examples in the last week of academic medical centers’ efforts to put firewalls around the use of generative AI tools by clinicians and administrative staff, while still allowing people to experiment. Does that approach make sense? 

Tyrrell: Absolutely. I like the idea of creating a sandbox environment that can be rigorously controlled, audited, and monitored. One of the things that you have to understand is that creating a “culture of no,” where you basically attempt to block all access, is likely to create the very behaviors you're trying to control. People are going to seek out these tools. There's evidence of that. So, turning it around and conducting regular audits, understanding the use cases, and understanding some of the places where you can add value in a workflow is really important. You can identify a set of vendors and tools that can be properly vetted for due diligence risk, and then make those tools available. Then really it's about engagement and training. This is a great opportunity to raise awareness early on, during the pilot stage, with all stakeholders in the organization, and let them experience what well-governed AI looks like in the workplace, so they know the difference.

HCI: We often interview health system execs about the AI governance frameworks they're putting in place. From talking to your customers, do many of them still have a lot of work to do, and is it something that will continue to evolve?

Tyrrell: Absolutely. I think the pace of technology change and the regulatory landscape are constantly evolving, so you have to be prepared for it. You need to think about both the long term and the immediate need, and think about that balance. It's not just a list of approved tools. We go through this in my own organization. There are tools, but then there are also the use cases. What exactly is the intent and purpose of the application of this technology? There are probably certain types of things that just wouldn't be appropriate for Gen AI, even with the right risk profile. Even though the tool itself may not be harvesting private data or leaking content through the internet, and may have a good safety profile in the traditional sense, you also have to look at the use cases.

HCI: One of the findings of the survey is that the administrators are three times more likely to be actively involved in the policy development than providers. But when it comes to awareness, 29% of providers were aware of the main policies, versus just 17% of the administrators. What does this suggest? Should more providers be involved in the policy-making?

Tyrrell: That's a really interesting data point, right? In my organization at Wolters Kluwer, we definitely approach this thinking that everybody needs to be involved. A central governance function may be part of the overall approach, but it really is about engagement and awareness — having a proper training and engagement program for all stakeholders.

HCI: Are Wolters Kluwer’s UpToDate point-of-care tools starting to introduce AI features? Do you have to go through a process with health system AI governance committees to allow them to understand how AI is being used in your products, and let them ask you questions about how it's validated?

Tyrrell: We absolutely are introducing AI capabilities into a number of our products, depending on the nature and use case. Overall, as a vetted and established vendor in the enterprise, we work very closely with customers to adhere to whatever policies they have in place. So we're a very close and trusted partner in that regard.

HCI: Do you think that AI will reshape clinical decision support and best practice alerts as we’ve come to think of them over the past 10 or 15 years?

Tyrrell: Obviously, we've established evidence-based practice for a very long time, and I think it's still the key to successful outcomes. The fact that AI tools can help streamline this and improve access is important, but fundamentally it goes back to basics. When you look at the entire evidence-based lifecycle, that is always going to be alive and well, and these tools are going to be enablers. They are going to assist and augment clinical decision-making and judgment, but clinicians will continue to remain in the driver's seat. These tools will adapt and improve and help providers as well as other stakeholders in the healthcare system. But particularly around clinical decision support, we anticipate the core evidence-based approach will remain largely the same — it's really about improving clinical reasoning and judgment and having the tools be augmentative.

About the Author

David Raths

David Raths is a Contributing Senior Editor for MLO sister brand Healthcare Innovation, focusing on clinical informatics, learning health systems and value-based care transformation. He has been interviewing health system CIOs and CMIOs since 2006.

Follow him on Twitter @DavidRaths
