Thoughtful Implementation Needed when Considering AI in Patient Safety: Recommendations from an Expert Panel

A key component of preventing harm is the safe and thoughtful implementation of new technologies.

While there is considerable excitement about the potential benefits of artificial intelligence (AI) in healthcare, its use is not without serious risks. Without careful planning and long-term monitoring, AI could harm patients.

In January 2024, an expert panel convened by the Lucian Leape Institute at the Institute for Healthcare Improvement (IHI) reviewed three clinical use cases for AI:

  • Clinical documentation support

  • Clinical decision support

  • Patient support chatbots

The topic of AI is vast and complex, and it is important to note that this report did not address the potential impacts of AI on issues of equity, access, and patient or provider satisfaction; data security or privacy; revenue cycle and operations; or healthcare professional education.

What’s in the Report

  • Discussion of potential benefits, risks, and challenges of AI implementation in clinical care for three use cases

  • A detailed review of mitigation and monitoring strategies and expert panel recommendations

  • An appraisal of the implications of AI for the patient safety field

  • Specific considerations for seven key groups, including patients and patient advocates, safety and quality professionals, and healthcare systems
