WHO’s move to regulate AI in healthcare highlights risks posed by AI tools, says GlobalData
The World Health Organization (WHO) has outlined several considerations for the regulation of artificial intelligence (AI) in healthcare.
The WHO recently released a new publication listing key regulatory considerations, which touch on the importance of establishing the safety and effectiveness of AI tools, making systems available to those who need them, and fostering dialogue among those who develop and use them. The move highlights the potential challenges associated with using AI tools in healthcare, says GlobalData.
The WHO recognizes the potential of AI in healthcare, as it could improve existing devices or systems by strengthening clinical trials, improving diagnosis and treatment, and augmenting the knowledge and skills of healthcare professionals.
AI technologies have been deployed quickly, and not always with a full understanding of how they will perform in the long run, which could harm healthcare professionals or patients. AI systems in medical or healthcare settings often have access to personal and medical information, so regulatory frameworks should be in place to ensure privacy and security. AI in healthcare also poses a number of other potential challenges, such as unethical data collection, cybersecurity risks, and the amplification of biases and misinformation.
A recent example of bias in AI tools comes from a study conducted by Stanford University, whose results revealed that some AI chatbots provided responses that perpetuated false medical information about Black people. The study ran nine questions through four AI chatbots, including OpenAI’s ChatGPT and Google’s Bard. All four chatbots used debunked race-based information when asked about kidney and lung function.
The WHO has outlined six areas for the regulation of AI for health, citing a need to manage the risks of AI amplifying biases in training data. The six areas are: transparency and documentation; risk management; validating data and being clear about the intended use of AI; a commitment to data quality; privacy and data protection; and fostering collaboration.