
Stay Healthy!

  • Posted July 9, 2025

AI Displays Racial Bias Evaluating Mental Health Cases

AI programs can exhibit racial bias when evaluating patients for mental health problems, a new study says.

Psychiatric recommendations from four large language models (LLMs) changed when a patient’s record noted they were African American, researchers recently reported in the journal NPJ Digital Medicine.

“Most of the LLMs exhibited some form of bias when dealing with African American patients, at times making dramatically different recommendations for the same psychiatric illness and otherwise identical patient,” said senior researcher Elias Aboujaoude, director of the Program in Internet, Health and Society at Cedars-Sinai in Los Angeles.

“This bias was most evident in cases of schizophrenia and anxiety,” Aboujaoude added in a news release.

LLMs are trained on enormous amounts of data, which enables them to understand and generate human language, researchers said in background notes.

These AI programs are being tested for their potential to quickly evaluate patients and recommend diagnoses and treatments, researchers said.

For this study, researchers ran 10 hypothetical cases through four popular LLMs: ChatGPT-4o, Google's Gemini 1.5 Pro, Claude 3.5 Sonnet, and NewMes-v15, a freely available version of a Meta LLM.

For each case, the AI programs received three versions of the patient record: one that omitted any reference to race, one that explicitly noted the patient was African American, and one that implied the patient's race through their name.
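The study does not publish its testing harness, but the design described above maps naturally onto a small evaluation loop. Below is a minimal Python sketch of that idea; the model names, case text, variant wording, and the query_model() placeholder are illustrative assumptions, not details from the study.

```python
# Minimal sketch of the three-variant study design (illustrative only;
# the actual harness, prompts, and API wrappers were not published).

MODELS = ["gpt-4o", "gemini-1.5-pro", "claude-3-5-sonnet", "newmes-v15"]

def make_variants(case: str) -> dict[str, str]:
    """Render one hypothetical case three ways: race omitted, race stated
    explicitly, and race implied by a name (all wording hypothetical)."""
    return {
        "omitted":  f"Patient record: {case}",
        "explicit": f"Patient record (African American patient): {case}",
        "implied":  f"Patient record for DeShawn Washington: {case}",
    }

def query_model(model: str, record: str) -> str:
    """Placeholder for a real API call to each vendor's LLM.
    Swap in actual client code; this stub just echoes its inputs."""
    return f"[{model}] recommendation for: {record[:40]}..."

def run_study(cases: list[str]) -> dict[tuple[str, str, int], str]:
    """Collect each model's recommendation for every case/variant pair,
    so recommendations can be compared across the three race framings."""
    results = {}
    for i, case in enumerate(cases):
        for variant, record in make_variants(case).items():
            for model in MODELS:
                results[(model, variant, i)] = query_model(model, record)
    return results

if __name__ == "__main__":
    cases = ["34-year-old presenting with new-onset auditory hallucinations"]
    for key, recommendation in run_study(cases).items():
        print(key, "->", recommendation)
```

Bias would then show up as systematic differences between the "omitted" answers and the "explicit" or "implied" answers for an otherwise identical record.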

Results show the AI programs often proposed different treatments when the records stated or implied that a patient was African American:

  • Two programs omitted medication recommendations for ADHD when race was explicitly stated.

  • Another AI suggested guardianship for Black patients with depression.

  • One LLM showed increased focus on reducing alcohol use when evaluating African Americans with anxiety.

Aboujaoude theorizes that the AIs displayed racial bias because they picked it up from the content used to train them, essentially perpetuating inequalities that already exist in mental health care.

“The findings of this important study serve as a call to action for stakeholders across the healthcare ecosystem to ensure that LLM technologies enhance health equity rather than reproduce or worsen existing inequities,” David Underhill, chair of biomedical sciences at Cedars-Sinai, said in a news release.

“Until that goal is reached, such systems should be deployed with caution and consideration for how even subtle racial characteristics may affect their judgment,” added Underhill, who was not involved in the research.

More information

The Cleveland Clinic has more on AI in health care.

SOURCE: Cedars-Sinai, news release, June 30, 2025

The health news service is provided to users of the Johnsons Pharmacy website courtesy of HealthDay. Johnsons Pharmacy and its employees, agents, and contractors do not review, control, or take responsibility for the content of these articles. Please seek medical advice directly from a pharmacist or your primary care physician.
Copyright © 2025 HealthDay. All rights reserved.