Warning: Popular chatbots propagate discredited medical concepts and racial prejudice, study cautions.

Article by Pierre Herubel:

Title: Study Reveals Popular Chatbots Perpetuate Racial Bias in Medical Information

In a recent study published in the academic journal Digital Medicine, researchers from the Stanford School of Medicine raised concerns that popular chatbots and large language models (LLMs) used in medical practice perpetuate racial bias and debunked medical ideas.

The study, obtained exclusively by The Associated Press, focused on chatbots such as ChatGPT and Google’s Bard, which answered medical queries, particularly those related to race, with disturbing and erroneous information. These AI models, trained on extensive text data from the internet, produced fabricated, race-based equations and promoted debunked medical beliefs about Black patients.

The researchers posed questions about supposed skin-thickness differences between Black and white patients and about race-based lung capacity calculations for Black individuals. Rather than correcting these unfounded claims, the chatbots responded with answers that reinforced them.

Additionally, the study examined how these AI models responded to questions about a discredited, race-adjusted formula for estimating kidney function (eGFR). ChatGPT and GPT-4 gave answers that repeated false assertions about Black individuals having different muscle mass and therefore higher creatinine levels. This not only perpetuates medical misinformation but also has real-world consequences, potentially leading to misdiagnoses and healthcare disparities.
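For context on the kind of race-based calculation at issue: the now-retired 2009 CKD-EPI creatinine equation multiplied the estimated kidney function (eGFR) of patients recorded as Black by a fixed coefficient of 1.159, an adjustment that was removed in the equation's 2021 revision. The sketch below is a minimal illustration of how that single coefficient shifts the estimate; the function name and example values are illustrative and are not drawn from the study itself.

```python
# Illustrative sketch of the retired race coefficient in the 2009 CKD-EPI
# creatinine equation (eGFR in mL/min/1.73 m^2). The 2021 revision dropped
# the race term entirely. Names and sample values are illustrative only.

def egfr_ckd_epi_2009(creatinine_mg_dl: float, age: int, female: bool, black: bool) -> float:
    """2009 CKD-EPI creatinine equation, including its race coefficient."""
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    ratio = creatinine_mg_dl / kappa
    egfr = (141
            * min(ratio, 1.0) ** alpha
            * max(ratio, 1.0) ** -1.209
            * 0.993 ** age)
    if female:
        egfr *= 1.018
    if black:
        egfr *= 1.159  # the race-based adjustment the researchers asked the chatbots about
    return egfr

# Same patient, same blood test; the only difference is the recorded race.
baseline = egfr_ckd_epi_2009(1.3, 60, female=False, black=False)
with_race_term = egfr_ckd_epi_2009(1.3, 60, female=False, black=True)
print(f"without race coefficient: {baseline:.1f}")
print(f"with race coefficient:    {with_race_term:.1f}")  # about 16% higher
```

Because a higher eGFR suggests healthier kidneys, the inflated estimate could delay specialist referral or transplant listing for Black patients, which is a key reason the race term was abandoned.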

Dr. Roxana Daneshjou, an assistant professor of biomedical data science and dermatology at Stanford University, expressed deep concern about the regurgitation of racially biased ideas by commercial language models used by physicians. OpenAI and Google, the creators of these AI models, have acknowledged the need to reduce bias in their systems and emphasized that chatbots are not substitutes for medical professionals. Google explicitly advised against relying on Bard for medical advice. However, the study highlights the challenges in addressing bias in AI models and the potential harm they can cause in healthcare.

This study’s findings align with concerns raised by healthcare professionals and researchers about the limitations and biases of AI in medicine. While AI can assist in diagnosing challenging cases, it is crucial to recognize that these models are not intended to make medical decisions. Dr. Adam Rodman, an internal medicine doctor, questioned the appropriateness of relying on chatbots for medical calculations.

The issue of bias in AI is not new: algorithms used by hospitals and healthcare systems have already been shown to systematically favor white patients over Black patients, resulting in disparities in care.

It is imperative for companies and researchers to address these biases and ensure that AI models used in healthcare are reliable and unbiased. As more physicians turn to these language models for assistance, delivering race-neutral, evidence-based medical information becomes even more critical.

Tech Times will continue to report on advancements and challenges in AI and healthcare.

