Health | February 02, 2022

Does applying artificial intelligence in medicine exacerbate or lessen healthcare disparities?

It’s no surprise that the COVID-19 pandemic has accelerated the urgency within the medical community to explore and experiment with new technologies, such as virtual reality and artificial intelligence (AI), to address the ever-increasing demands in medicine and patient care.

But do these technologies put patients at risk through biased data collection?

In the October 2021 episode of the Wolters Kluwer Expert Insights Webinar Series, Vikram Savkar, Vice President and General Manager, Medicine Segment of Health Learning, Research & Practice at Wolters Kluwer, spoke with Dr. Safwan S. Halabi, Associate Chief Medical Informatics Officer, Vice Chair of Imaging Informatics, and Attending Physician in Medical Imaging (Radiology) at Ann & Robert H. Lurie Children's Hospital of Chicago, and Associate Professor of Radiology (Pediatric Radiology) at Northwestern University Feinberg School of Medicine.

Dr. Halabi discusses artificial intelligence and how it is being deployed, or investigated for deployment, in a wide range of capacities across medical specialties, as well as the challenges of ensuring that AI is implemented both ethically and legally. Here are some of the key insights from the series:

1. What are the current and future uses of AI tools in medicine?

The American Medical Association prefers the term “augmented intelligence” for healthcare, reflecting how human clinical decision-making is enhanced when combined with analytical methods and systems. During the pandemic, there have been major shifts toward virtual learning and telehealth, and a growing reliance on AI, particularly amid widespread clinician burnout.

An important factor has been identifying opportunities for practicing physicians to help develop, design, and validate the implementation of healthcare AI. This includes transparency in clinical decision-making, adherence to best practices, reproducibility, addressing bias, and safeguarding privacy.

Educating not only healthcare professionals but also patients about what AI can do, and what implications it may have, is essential when considering the future of AI and how it will be integrated into healthcare practice.

2. How is AI leading to healthcare disparities?

The Institute of Electrical and Electronics Engineers (IEEE), a major professional organization and trusted voice on technology, has published guidance on the pitfalls of artificial intelligence systems, since no standard for their use is currently in place. One of these pitfalls in healthcare is bias that disadvantages certain groups of people.

Much of the information that AI uses comes from data sets. If certain groups or ideas are underrepresented in those data sets, the unconscious assumptions of the programmers and designers who build products on them begin to surface in the results. AI makers need to account for race, ethnicity, gender, age, body type, and disability to ensure that tools used for tasks such as diagnosing disease are reliable for all groups.

One example of this disadvantage is cardiovascular risk scoring: white male patients have historically made up most of the underlying data, making the resulting tools, algorithms, and clinical decision support less precise for women and minorities.
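To make that mechanism concrete, here is a minimal sketch, not from the webinar, of how a model trained on a skewed data set can look accurate overall while underperforming for an underrepresented group. It uses Python with scikit-learn on fully synthetic data; the group labels, feature counts, and thresholds are illustrative assumptions, not a real risk model.

```python
# Minimal sketch (synthetic data): auditing a model's accuracy per
# demographic group. Assumes numpy and scikit-learn are installed.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def make_group(n, shift):
    # Two synthetic "clinical" features; 'shift' mimics a group-specific
    # risk pattern (the relationship between features and outcome differs).
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n) > shift)
    return X, y.astype(int)

# Group A dominates the training data, mirroring the historical skew
# toward one patient population; group B is underrepresented.
Xa, ya = make_group(5000, shift=0.0)
Xb, yb = make_group(250, shift=1.0)

X = np.vstack([Xa, Xb])
y = np.concatenate([ya, yb])
group = np.array(["A"] * len(ya) + ["B"] * len(yb))

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, y, group, test_size=0.3, random_state=0, stratify=group)

model = LogisticRegression().fit(X_tr, y_tr)

# Overall accuracy is dominated by group A and can mask a gap for group B.
print(f"Overall accuracy = {model.score(X_te, y_te):.2f}")
for g in ["A", "B"]:
    mask = g_te == g
    acc = model.score(X_te[mask], y_te[mask])
    print(f"Group {g}: accuracy = {acc:.2f} (n = {mask.sum()})")
```

A per-group audit like this (accuracy, error rates, calibration broken out by population) is one simple way to surface the kind of disparity described above before a tool reaches patients.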

3. What methods are being used to promote equity and inclusion when deploying AI in healthcare?

The Emory Group’s recommendations on protecting patients’ interests in the era of big data center on people’s right to define access to, and provide informed consent for, their own personal information. The Emory Group also states that individuals need mechanisms to help curate their own unique identity and personal data, along with policies and practices that make them aware of the consequences of the resale of their personal information and life experiences.

The medical community has an obligation to keep data-driven insights about sensitive health issues confidential, to prevent the reidentification of individuals from joined data sets, and to notify patients of health risks identified by AI. We must also ask whether patient data can be reused or sold to develop advanced analytic methods, and whether an app should be sold for profit if patient data was used in its creation.

Clinical users need guidance from medical societies and organizations on how AI algorithms are developed and how to use them. Physicians, scientists, and industry also need to collaborate on developing AI use cases and assembling publicly available data sets; sharing information and research across disparate locations, geographies, and patient populations; and advocating for and providing AI research funding. Established standards for how these algorithms are labeled and implemented, along with balanced regulation of AI technology, are crucial.

Dr. Halabi’s parting words were that AI is a powerful tool with many applications that can assist physicians in diagnostic tasks. Integrating AI models holds promise for improving healthcare delivery and patient outcomes, but more research is needed to evaluate AI in clinical settings, including its impact on workflow and the value of services. However AI is implemented in the workflow, physicians will play an important role in ensuring the accuracy, safety, and quality of algorithms.

Learn more in the full webinar recording, "Does Applying Artificial Intelligence in Medicine Exacerbate or Lessen Healthcare Disparities."

Watch The Webinar