Ever wonder whether the latest and greatest artificial intelligence (AI) tool you read about in the morning paper is going to save your life? A new study published in JAMA, led by John W. Ayers, Ph.D., of the Qualcomm Institute at the University of California San Diego, finds that question may be difficult to answer, since AI products in healthcare do not universally undergo any externally evaluated approval process assessing how they might benefit patient outcomes before coming to market.
The research team evaluated the recent White House Executive Order that instructed the Department of Health and Human Services to develop new AI-specific regulatory strategies addressing equity, safety, privacy, and quality for AI in healthcare before April 27, 2024. However, team members were surprised to find that the order did not once mention patient outcomes, the standard metric by which healthcare products are judged before being allowed into the healthcare marketplace.
“The goal of medicine is to save lives,” said Davey Smith, M.D., head of the Division of Infectious Disease and Global Public Health at UC San Diego School of Medicine, co-director of the university’s Altman Clinical and Translational Research Institute, and senior author of the study. “AI tools should demonstrate clinically significant improvements in patient outcomes before they’re widely adopted.”
According to the team, AI-powered early warning systems for sepsis, a deadly acute illness among hospitalized patients that affects 1.7 million Americans annually, demonstrate the consequences of inadequate prioritization of patient outcomes in regulations. A third-party evaluation of the most widely adopted AI sepsis prediction model revealed that 67% of patients who developed sepsis were not identified by the system. Would hospital administrators have chosen this sepsis prediction system if trials assessing patient outcomes data were mandated, the team wondered, considering the array of available early warning systems for sepsis?
“We’re calling for a revision to the White House Executive Order that prioritizes patient outcomes when regulating AI products,” added John W. Ayers, Ph.D., who is deputy director of informatics at the Altman Clinical and Translational Research Institute in addition to his Qualcomm Institute affiliation. “Similar to pharmaceutical products, AI tools that impact patient care should be evaluated by federal agencies for how they improve patients’ feeling, function, and survival.”
The team points to its 2023 study in JAMA Internal Medicine on using AI-powered chatbots to respond to patient messages as an example of what patient outcome-centric regulations can achieve. “A study comparing standard care versus standard care enhanced by AI conversational agents found differences in downstream care utilization in some patient populations, such as heart failure patients,” said Nimit Desai, B.S., who is a research affiliate at the Qualcomm Institute, a UC San Diego School of Medicine student, and study coauthor. “But studies like this don’t just happen unless regulators appropriately incentivize them. With a patient outcomes-centric approach, AI for patient messaging and all other clinical applications can truly enhance people’s lives.”
The team recognizes that its proposed regulatory strategy could be a significant lift for AI and healthcare industry partners and may not be necessary for every flavor of AI use case in healthcare. Still, the researchers say, omitting patient outcomes-centric rules from the White House Executive Order is a serious oversight.