As artificial intelligence aims to transform health care, your doctor may soon consult an AI algorithm before deciding on your treatment
When doctors decide on a course of treatment, they have a lot of data to inform their decision-making. But they do not always have the time to interpret that data.
University of Washington Chair of Radiology Dushyant Sahani believes physicians are able to process only about 5% of the data available to them before deciding on a treatment.
“Physicians are overwhelmed with managing the data. And we want the physician to focus more time with the patient and provide them the best experience,” he said. “Health care is one of the best human endeavors, but it’s also a journey of data. And in the modern world, we have so much data, but we need a better way of using this data for appropriate decision-making.”
Sahani is a co-founder of UW’s Institute of Medical Data Science, which supports health care-related artificial intelligence initiatives. Founded last year in Seattle, the institute aims to offer research, education and funding to bring AI into hospitals and improve patient outcomes.
The technology promises to transform health care by synthesizing millions of pieces of data almost instantly – informing how a physician treats their patients and how care is prioritized. But as AI becomes permanently enmeshed in the process of healing, whether an algorithm works as intended can become a life-and-death proposition.
Appropriately used, an AI algorithm helps medical professionals sift through large amounts of data in a short time. Sahani pointed to the determination of whether a lesion in the body is potentially cancerous. Based on the risk profile of the lesion, a physician could decide to wait to see if the lesion grows, or test it through an invasive procedure that carries a small amount of risk.
An AI algorithm could be trained on many images of the same type of lesion and a host of other data. From this, the AI would estimate whether the lesion carries enough risk to merit further testing, supplementing the doctor’s own observations.
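To make that concrete, here is a minimal sketch of the kind of model such a tool could be built on. Everything in it is invented for illustration – the features, the labels and the biopsy threshold – but it shows the basic shape: the model produces a risk score, and the physician decides what to do with it.

```python
# Minimal sketch of a lesion risk classifier, for illustration only.
# Real clinical models train on curated, labeled medical images; here,
# synthetic "image features" stand in so the example runs on its own.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical dataset: each row is a feature vector extracted from a
# lesion image (size, texture, edge sharpness, ...); label 1 = malignant.
X = rng.normal(size=(1000, 16))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# The model outputs a probability, not a verdict: the physician sees a
# risk score and decides whether to watch the lesion or biopsy it.
risk = model.predict_proba(X_test)[:, 1]
BIOPSY_THRESHOLD = 0.7  # hypothetical cutoff, set by clinical policy
print(f"Flagged for further testing: {(risk > BIOPSY_THRESHOLD).sum()} of {len(risk)}")
```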
AI also can help triage and prioritize care. Clinicians often must decide which patients need care first, or need more of it. Sahani said these AI tools can interpret patient data and “pick selective patients who might benefit most appropriately for early interventions.”
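A hedged sketch of that kind of prioritization, with entirely hypothetical patients, scores and capacity: the model ranks cases, and staff decide who is actually seen first.

```python
# Sketch of AI-assisted triage: rank patients by a model's predicted
# risk and surface the top cases for early intervention. All patients
# and scores here are made up.
from dataclasses import dataclass

@dataclass
class Patient:
    patient_id: str
    risk_score: float  # e.g. a model's predicted probability of deterioration

patients = [
    Patient("A-101", 0.12),
    Patient("B-202", 0.87),
    Patient("C-303", 0.45),
    Patient("D-404", 0.66),
]

EARLY_INTERVENTION_SLOTS = 2  # hypothetical capacity constraint

# Highest predicted risk first; clinicians review the ordering, the
# model does not make the final call.
ranked = sorted(patients, key=lambda p: p.risk_score, reverse=True)
for p in ranked[:EARLY_INTERVENTION_SLOTS]:
    print(f"Prioritize {p.patient_id} (risk {p.risk_score:.2f})")
```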
More mundane uses of AI in health care involve large language models – in the vein of ChatGPT – that can assist medical providers with administrative tasks like writing notes after a doctor visit or helping patients schedule appointments.
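The shape of such a tool is simple to sketch. In the illustration below, the call_llm function is a hypothetical stand-in for whatever vetted model endpoint a health system would actually use; nothing here reflects a specific vendor’s API.

```python
# Sketch of the kind of administrative LLM use described here: drafting
# a visit note from a transcript, for a clinician to review and sign.
def call_llm(prompt: str) -> str:
    # Hypothetical: in practice this would call a vetted, HIPAA-compliant
    # model endpoint; here it just returns a placeholder.
    return "[draft note returned by the model]"

transcript = "Patient reports mild headache for three days..."

prompt = (
    "Summarize the following visit transcript as a clinical SOAP note. "
    "Flag anything you are unsure about for physician review.\n\n"
    f"{transcript}"
)

draft_note = call_llm(prompt)
print(draft_note)  # A clinician reviews and signs off before it enters the chart.
```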
“AI is collecting and integrating that information for us, which is very difficult for humans to manually apply. We need many staff and other resources to meaningfully apply data. AI can often make a more precise diagnosis quicker,” Sahani said. “With AI, we might be able to integrate clinical information with lab imaging and other information to come up with more personalized diagnostics that will help us with more appropriate decision-making for that patient.”
Though he believes AI is “not a panacea,” Sahani and his collaborators at the Institute of Medical Data Science hope the technology will improve the patient experience.
Despite the optimism, many of the issues plaguing AI in other sectors have much higher stakes in health care. An algorithm that does not work correctly could lead a doctor to misdiagnose their patient or incorrectly prioritize care.
Does AI create bias in health care?
A 2019 study conducted by the University of California, Berkeley found an algorithm used to triage care among 200 million patients a year was racially biased.
The AI analyzed in the study was used by hospitals to identify patients with complex health needs who may need specialized care. But researchers found the algorithm predicted health care costs rather than the severity of illness.
Because of existing racial disparities in health care, less money is spent on Black patients than white patients. As a result, the algorithm assumed Black patients were less in need of specialized care than white patients, even though that is not the case.
If corrected, the algorithm would flag 46.5% of Black patients as needing this additional help, compared with the 17.7% it actually flagged.
“Less money is spent on Black patients who have the same level of need, and the algorithm thus falsely concludes that Black patients are healthier than equally sick White patients,” the study reads.
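The mechanism is simple enough to reproduce in a toy simulation. In the sketch below, every number is invented: two groups have identical health needs, but historically less is spent on one of them, so an algorithm that flags the neediest patients by predicted cost under-selects that group, while flagging by need treats both groups equally.

```python
# Illustrative simulation of the proxy-label problem the Berkeley study
# describes: training on health care *cost* instead of health *need*.
# All numbers are made up; the point is the mechanism, not the magnitudes.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

# Two groups with identical underlying health need.
group = rng.integers(0, 2, size=n)          # 0 = group A, 1 = group B
need = rng.gamma(shape=2.0, scale=1.0, size=n)

# Historical disparity: less is spent on group B at the same level of need.
spending_factor = np.where(group == 1, 0.6, 1.0)
cost = need * spending_factor + rng.normal(scale=0.1, size=n)

# An algorithm that flags the top 10% by predicted *cost* (here, cost
# itself, as a perfect predictor) under-selects group B...
flagged_by_cost = cost >= np.quantile(cost, 0.9)
# ...while flagging by *need* selects both groups at the same rate.
flagged_by_need = need >= np.quantile(need, 0.9)

for label, flagged in [("cost label", flagged_by_cost), ("need label", flagged_by_need)]:
    rate_a = flagged[group == 0].mean()
    rate_b = flagged[group == 1].mean()
    print(f"{label}: group A flagged {rate_a:.1%}, group B flagged {rate_b:.1%}")
```

Swapping the label is the entire difference between the two runs, which is the study’s point: the bias comes not from the model but from what it was asked to predict.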
In a U.S. Senate hearing on the use of artificial intelligence in health care earlier this month, study author Ziad Obermeyer said his research showed how easily human bias could unintentionally find its way into AI and then be legitimized by the supposed impartiality of the technology.
“(AI) predicted – accurately – that Black patients would generate lower costs, and thus deprioritized them for access to help with their health. The result was racial bias that affected important decisions for hundreds of millions of patients every year,” he said in remarks to the Senate.
Though it was not analyzed in his study, Obermeyer noted AI algorithms can also create bias along lines other than race, such as gender, socioeconomic status or disability.
Obermeyer also said that many of these algorithms remain in use following his 2019 study, and that regulators should not “take algorithm developers’ word that it’s performing correctly.” Despite these criticisms, Obermeyer said AI has the potential to both improve health and reduce costs.
Speaking at a Senate Finance Committee hearing, committee Chair Sen. Ron Wyden, D-Ore., said that while AI is making health care more efficient, the technology is also “riddled with bias that discriminates against patients based on race, gender, sexual orientation and disability.”
In his efforts to expand AI in health care, Sahani hopes the technology can reduce disparity. But he acknowledged it can exacerbate bias as well.
“Clearly, bias is an important concern. I don’t think we have addressed that fully. We need to keep an open mind and constantly evaluate our algorithms to validate if they are true,” he said.
Sahani also noted it is critically important to be upfront with the public and patients about how AI is being used in their health care.
What about in Spokane?
Artificial intelligence tools are already in use in Spokane hospitals, although they may not yet be used in some of the expansive ways envisioned by AI’s champions.
Providence, the largest health system in Spokane, uses AI at Sacred Heart Medical Center and its other facilities to complete administrative tasks, assist medical professionals in diagnosis and in “other innovative methods.”
“Providence is always exploring ways to improve the patient experience. For the last several years, Providence has invested in technological advancements, including artificial intelligence, that allow us to pioneer new ways of delivering high-quality, compassionate care safely and responsibly,” Providence said in a statement.
At the beginning of this year, Providence CEO Rod Hochman said AI would be “one of the major drivers of transformation” for the health system in 2024.
“Having significantly invested in IT infrastructure, digital and cloud technology in recent years, health systems have laid the foundation for rapid AI innovation in 2024. Generative AI will fuel advances leading to personalized patient experiences, improved patient outcomes and clinical breakthroughs,” he said in January.
Providence also has partnered with Microsoft and AI company Nuance to implement an AI tool that assists physicians with data entry, which Providence said will allow more time with patients.
MultiCare, Spokane’s other large health system, also has implemented AI tools in recent years. The technology is used in the organization’s planning tool and electronic medical record to add “more patient time.” AI also is used for “inventory management, reducing waste and detecting anomalies,” according to a statement from MultiCare Chief Information Officer Bradd Busick.
The hospital system has launched an “ambient listening platform” that uses AI to automate the creation of clinical notes and medical charts. MultiCare facilities also use AI to refill patient prescriptions over the phone.
MultiCare Deaconess Hospital introduced several autonomous robots that use AI to traverse the hospital, delivering supplies and completing menial tasks. According to MultiCare’s statement, the four Moxi robots have made 35,000 deliveries, traveled 7,000 miles and saved over 23,000 staff hours.
Both hospital systems said only internal data is used to train their AI programs and that all private data is protected.
“MultiCare’s utilization of AI is trained on our own curated data. We do not use open source/off the shelf platforms but rather apply strict governance and provisions about the types of investments we make in AI,” Busick said in a statement.
Providence signed the “Rome Call for AI Ethics,” a 2020 document that seeks to create a framework for the ethical development of AI. Also signed by IBM and Microsoft, the letter states AI must be developed “with a focus not on technology, but rather for the good of humanity and of the environment.”
“Providence proactively set up an AI governance structure to align priorities and strategy, safeguard patient data and privacy, prevent bias and ensure access to promising innovations for all, especially underserved populations,” the hospital system said in a statement.