Factors influencing public confidence in AI-driven healthcare

A survey of 3,000 U.S. adults reveals that trust in medical AI increases with performance, regulatory approval, and clinician involvement, highlighting the importance of transparency and human oversight.
March 26, 2026

A recent study conducted by the University of Michigan and Michigan State University surveyed 3,000 U.S. adults on their opinions about the use of artificial intelligence (AI) in various medical scenarios. Details of the study are reported in a press release.

According to the results, factors such as performance, regulatory approval, credentials, data security, and whether a clinician accompanies the AI influence individuals' trust in and attitudes toward medical AI. Additional key findings:

  • Adults were 32.5% more likely to trust AI if it performed equal to or better than a medical professional.
  • Participants preferred human involvement, reporting they would be 18.4% more likely to book a visit involving AI if a clinician were present.
  • Food and Drug Administration (FDA) approval and certifications from various organizations heightened AI trust.
  • Participants were still wary of using AI to make medical decisions.

The study is published in JAMA Network Open. Regarding the study, Michigan Medicine said, “To build trust in medical AI, institutions must prioritize performance and transparency while balancing governance and the enduring importance of the clinician-patient relationship.”

About the Author

Erin Brady

Managing Editor

Erin Brady is Managing Editor of Medical Laboratory Observer.
