AI Must Supplement, Not Drive, Patient Care Decisions

At the recently concluded 2021 KLAS Digital Health Investment Symposium, Dr. Robert Bart, Chief Medical Information Officer of UPMC, stated that AI in healthcare should be referred to as augmented intelligence rather than artificial intelligence. His reasoning is that no physician should rely entirely on the output of an AI solution to dictate a course of treatment for a patient. Rather, AI should augment the physician’s education and experience in determining the best course of action for patient care.

AI can generate inaccurate or false results when training models are poorly designed or when certain data elements are weighted too heavily, biasing or corrupting the output. Such model flaws can lead to inaccurate, and potentially harmful, recommendations for patient treatment. These challenges are a large part of the reason many clinicians do not yet trust the AI solutions emerging in the healthcare market.
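To make that failure mode concrete, the minimal Python sketch below shows how over-weighting a single data element can cause a risk model to miss a patient it would otherwise flag. The features, weights, and alert threshold are entirely hypothetical and are used only to illustrate the point.

```python
# Minimal sketch: how over-weighting one data element can corrupt a model's
# recommendation. Feature names, weights, and the threshold are hypothetical.

ALERT_THRESHOLD = 0.5  # assumed cutoff for flagging a patient for review

def risk_score(features: dict, weights: dict) -> float:
    """Weighted sum of normalized (0-1) patient features."""
    return sum(weights[name] * value for name, value in features.items())

patient = {"heart_rate": 0.55, "lactate": 0.80, "recent_billing_code": 0.10}

balanced_weights = {"heart_rate": 0.4, "lactate": 0.5, "recent_billing_code": 0.1}
skewed_weights = {"heart_rate": 0.1, "lactate": 0.1, "recent_billing_code": 0.8}

print(risk_score(patient, balanced_weights) > ALERT_THRESHOLD)  # True  (~0.63): flagged
print(risk_score(patient, skewed_weights) > ALERT_THRESHOLD)    # False (~0.22): missed
```

The same clinical signals produce opposite recommendations purely because of how the model weights them, which is why clinicians are reluctant to act on outputs they cannot interrogate.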

Healthcare executives at both Google and Microsoft recently promoted the same position on using AI in healthcare. Both companies are focused on making existing healthcare data more usable by clinicians, improving data search engines that can inform clinicians of potential adjustments to care plans and support better treatment outcomes and patient safety. Google and Microsoft are clear leaders in developing AI technologies and solutions across multiple industries. Will they be disciplined enough to deliver on their AI positions?

The Federal Government Is Here to Help

While many people fear the statement, “The government is here to help,” it may be a good thing when it comes to managing AI development and use. The Biden administration is building on initial Trump administration actions to identify AI breakthroughs that can drive useful adoption of AI across all industries. The Office of Science and Technology Policy (OSTP) is working with the National Science Foundation (NSF) to lead a new National AI Research Resource Task Force. OSTP will also create a National AI Advisory Committee to provide recommendations on AI ethics, research and development, and AI’s impact on workforce efficiency.

The Food and Drug Administration (FDA) is also involved in managing AI solutions. The FDA currently regulates a portion of AI solutions in healthcare, but not all of them, and the framework for governing AI solutions is complex. The FDA reviews AI-enabled software based on its risk classification: Class I devices pose the lowest risk, Class II devices pose a higher risk, and Class III devices pose the highest risk. Manufacturers can also submit a De Novo request if the device’s safety risk and technology are well understood. While FDA regulation is necessary to protect patients, the 21st Century Cures Act of 2016 exempts clinical decision support solutions that make recommendations to clinicians, but do not replace a provider’s independent judgment, from the FDA’s risk classification processes. This appears to support the term “augmented intelligence.”

Augmented Intelligence Will Drive Higher Trust and Adoption by Clinicians

Clinicians will be more likely to adopt AI that is perceived to provide cognitive support for evaluating patient treatments rather than driving unattended changes to care plans. While clinical decision support systems have historically relied on standard rule sets backed by evidence-based medicine, AI can improve this process. Unlike rules that must be constantly updated, AI can use adaptive models that learn new recommendation patterns from the data feeds they receive. To drive higher levels of provider trust, the AI training models must be open to review by the clinicians who use the AI solution. Access to large data sets across many care settings will improve AI recommendations across all patient populations.
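As a rough sketch of that distinction, the Python example below contrasts a fixed alert rule with a deliberately simplified adaptive model that nudges its behavior based on clinician feedback in its data feed. The thresholds, features, and update rule are illustrative assumptions, not any vendor’s actual method; the point is that the adaptive model’s state and update logic are exactly the pieces clinicians should be able to inspect.

```python
# Sketch: static CDS rule vs. an adaptive model that adjusts from outcome
# feedback in its data feed. Thresholds, features, and the learning rule
# are illustrative only.

def static_rule_alert(systolic_bp: float) -> bool:
    """Fixed rule: alert if systolic BP exceeds 180 mmHg.
    Changing this behavior requires a manual rule update."""
    return systolic_bp > 180.0

class AdaptiveAlertModel:
    """Toy online model: nudges its alert threshold toward the values at
    which clinicians actually intervened (its 'data feed')."""

    def __init__(self, threshold: float = 180.0, learning_rate: float = 0.1):
        self.threshold = threshold
        self.learning_rate = learning_rate

    def alert(self, systolic_bp: float) -> bool:
        return systolic_bp > self.threshold

    def update(self, systolic_bp: float, clinician_intervened: bool) -> None:
        # Move the threshold down when clinicians intervene below it,
        # and up when they ignore alerts above it.
        if clinician_intervened and systolic_bp < self.threshold:
            self.threshold -= self.learning_rate * (self.threshold - systolic_bp)
        elif not clinician_intervened and systolic_bp > self.threshold:
            self.threshold += self.learning_rate * (systolic_bp - self.threshold)

model = AdaptiveAlertModel()
for bp, intervened in [(172, True), (168, True), (175, True)]:
    model.update(bp, intervened)
print(round(model.threshold, 1))  # drifts below 180 as feedback accrues
```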

The Players: Large Technology Vendors and Emerging Digital Health Companies

Augmented intelligence for healthcare will be provided by an array of established and emerging companies. Representative examples include the following:

  • Google – Working with Mayo Clinic to develop several AI solutions
  • Microsoft – Recent acquisition of Nuance, which uses AI to support clinical documentation
  • Olive.ai – Robotic process automation (RPA) solution
  • Oncora Medical – Oncology treatment AI
  • CloudMedX Health – AI to optimize patient outcomes and reduce administrative overhead
  • Butterfly Network – Remote patient imaging

Success Factors

  1. AI solutions should first be evaluated in an innovation center to prototype the solution and assess recommendation accuracy and the ability to improve clinician workflows.
  2. Providers evaluating AI to support clinicians in healthcare delivery should always include physicians who will be impacted by the AI process.
  3. Any AI solution evaluated should allow open access to the AI training models and update processes for review by clinician users.

Summary

Augmented intelligence solutions that provide recommendations to aid the cognitive process of clinicians treating patients will be more highly trusted and adopted. Clinicians need solutions that help them focus on the important aspects of a patient’s health status while reducing the overhead of working with multiple applications or data streams. AI solutions will also earn higher levels of trust if clinicians can review and be involved with AI training model updates. AI vendors that involve clinicians more deeply in reviewing their training models are likely to improve recommendation accuracy across all patient populations, which supports healthcare equity.

Interoperability with clinical applications will also impact AI solution adoption. AI solutions will need to receive patient data from EHRs, digital applications on smartphones, and remote monitoring devices to generate the best treatment recommendations. They will also need to interoperate with application workflows so that recommendations reach clinicians in a way that best supports the care delivery process.
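As one illustration of what that interoperability implies, the Python sketch below flattens an HL7 FHIR-style Observation (the kind of payload an EHR or remote monitoring device might send) into a simple record an AI model could consume. The payload is hand-written here, and the downstream model is assumed rather than shown.

```python
# Sketch: normalizing a FHIR-style Observation into a flat record for an
# AI recommendation model. The payload is a hand-written example; in
# practice it would come from an EHR's FHIR API or a device gateway.
import json

fhir_observation = json.loads("""
{
  "resourceType": "Observation",
  "status": "final",
  "code": {"coding": [{"system": "http://loinc.org", "code": "8867-4",
                       "display": "Heart rate"}]},
  "effectiveDateTime": "2021-11-03T14:05:00Z",
  "valueQuantity": {"value": 112, "unit": "beats/minute"}
}
""")

def to_model_input(observation: dict) -> dict:
    """Flatten the fields a recommendation model typically needs."""
    coding = observation["code"]["coding"][0]
    return {
        "loinc_code": coding["code"],
        "display": coding["display"],
        "value": observation["valueQuantity"]["value"],
        "unit": observation["valueQuantity"]["unit"],
        "timestamp": observation["effectiveDateTime"],
    }

print(to_model_input(fhir_observation))
```

In practice the same normalization step would need to handle many resource types and code systems, which is where a shared standard such as FHIR reduces the integration burden for both EHR vendors and AI solution providers.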