Study warns that the use of AI in healthcare may unintentionally lead to unequal access.


Integration of AI in Healthcare Systems Raises Concerns of Uneven Access, Study Finds

By Pierre Herubel

Promising as it may be, the integration of artificial intelligence (AI) into healthcare systems could inadvertently lead to uneven access, warns a collaborative research study conducted by the University of Copenhagen, Rigshospitalet, and DTU. The study examined AI’s ability to identify depression risk across different demographic segments, highlighting the need for vigilance in algorithm implementation to curb potential biases. The researchers advocate for rigorous evaluation and refinement of algorithms before their release.

The study acknowledges AI's growing role in the healthcare sector, from improving MRI scans to aiding swift emergency room diagnoses and enhancing cancer treatment plans. Danish hospitals are among those testing AI's potential in these areas, and the Danish Minister of the Interior and Health, Sophie Løhde, envisions AI as a future cornerstone in alleviating strain on the healthcare system.

One of AI's most valuable capabilities in healthcare is risk analysis and resource allocation: it helps ensure that limited therapies reach the patients who stand to benefit most. Some countries already use AI to identify suitable candidates for depression treatment, a practice that may extend to Denmark's mental health system.

However, the researchers from the University of Copenhagen stress that AI must be deployed thoughtfully and cautiously so that it neither unintentionally exacerbates inequality nor becomes a tool for purely economic calculations. They caution that reckless implementation may hinder rather than help. Melanie Ganz, of the University of Copenhagen's Department of Computer Science and Rigshospitalet, emphasizes AI's potential but underscores the need for careful deployment to avoid unintended distortions in the healthcare system. The study also highlights how biases can subtly infiltrate algorithms designed to assess depression risk.

The study, co-authored by Ganz and her colleagues from DTU, establishes a framework for evaluating algorithms in healthcare and broader societal contexts. Its aim is to identify and rectify issues promptly, ensuring fair algorithmic practices before implementation. The researchers found that an algorithm's effectiveness can vary across demographic groups defined by factors such as education, gender, and ethnicity. These variations, of up to 15% between groups, indicate that even a well-intentioned algorithm designed to improve healthcare allocation can unintentionally skew efforts. The researchers therefore caution that algorithms must be scrutinized for hidden biases that could exclude or deprioritize specific groups.
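The study's actual evaluation framework is not reproduced here, but the core check it describes, comparing an algorithm's effectiveness across demographic groups before deployment, can be sketched in a few lines. Everything below is illustrative: the function names, the toy data, and the choice of recall (true-positive rate) as the effectiveness measure are assumptions for the example, not details from the paper.

```python
from collections import defaultdict

def recall_by_group(y_true, y_pred, groups):
    """Per-group recall: of the people in each group who truly need
    treatment (y_true == 1), what fraction does the model flag?"""
    hits = defaultdict(int)       # correctly flagged, per group
    positives = defaultdict(int)  # truly at risk, per group
    for truth, pred, group in zip(y_true, y_pred, groups):
        if truth == 1:
            positives[group] += 1
            if pred == 1:
                hits[group] += 1
    return {g: hits[g] / positives[g] for g in positives}

def max_gap(rates):
    """Largest between-group difference in recall -- the kind of
    disparity that would flag an algorithm for refinement."""
    values = list(rates.values())
    return max(values) - min(values)

# Toy data: 1 = at risk of depression; two demographic groups "A" and "B".
y_true = [1, 1, 1, 1, 1, 1, 1, 1]
y_pred = [1, 1, 1, 1, 1, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

rates = recall_by_group(y_true, y_pred, groups)
print(rates)           # {'A': 1.0, 'B': 0.5}
print(max_gap(rates))  # 0.5
```

In this toy case the model finds every at-risk patient in group A but only half of those in group B, so a resource-allocation system built on it would systematically deprioritize group B, precisely the kind of skew the researchers argue must be caught before release.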

The researchers also raise ethical concerns about AI implementation, particularly over who bears responsibility for the resource allocation and treatment decisions that follow from algorithmic outputs. They call for transparency in decision-making, especially when patients seek explanations for algorithm-driven decisions.

The study's co-author Sune Holm, from the Department of Food and Resource Economics, emphasizes the importance of weighing the benefits of AI in healthcare critically rather than accepting them blindly, and stresses that both politicians and citizens should be aware of the potential pitfalls of its use.

The study’s findings were presented at the 2023 ACM Conference on Fairness, Accountability, and Transparency.

© 2023 TECHTIMES.com All rights reserved. Do not reproduce without permission.