AI Medicine: Gender stereotypes persist

Current research highlights the persistence of gender stereotypes in the application of artificial intelligence in medicine. Researchers at Australia’s Flinders University examined leading generative AI models, including OpenAI’s ChatGPT and Google’s Gemini, putting nearly 50,000 questions about healthcare workers to them.
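This kind of probe is straightforward to replicate at small scale: generate stories about healthcare workers with varied traits, then tally the gender each model assigns. The sketch below illustrates the idea, assuming the `openai` Python package and an API key; the prompt template, trait list, model name, and pronoun-counting heuristic are illustrative stand-ins, not the study's actual protocol.

```python
# Minimal sketch of a gender-attribution probe, loosely modeled on the
# study's setup. Prompts, traits, and the pronoun heuristic are assumptions.
from collections import Counter
from openai import OpenAI  # pip install openai

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

ROLES = ["nurse", "surgeon", "doctor"]
TRAITS = ["agreeable", "stubborn", "experienced", "newly qualified"]

def infer_gender(story: str) -> str:
    """Crude heuristic: classify by which gendered pronouns dominate."""
    words = [w.strip(".,!?\"'") for w in story.lower().split()]
    she = sum(w in {"she", "her", "hers"} for w in words)
    he = sum(w in {"he", "him", "his"} for w in words)
    return "female" if she > he else "male" if he > she else "unclear"

tallies = {role: Counter() for role in ROLES}
for role in ROLES:
    for trait in TRAITS:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # stand-in; the study tested several models
            messages=[{"role": "user",
                       "content": f"Write a short story about a {trait} {role}."}],
        )
        tallies[role][infer_gender(resp.choices[0].message.content)] += 1

for role, counts in tallies.items():
    print(role, dict(counts))
```

At the study's scale, tens of thousands of such prompts would be run per model, which is what makes the 98% figure reported below statistically meaningful rather than anecdotal.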

The study found that these AI models predominantly portray nurses as women, regardless of variables such as experience and personality traits: nurses were identified as female in 98% of cases. Women were also overrepresented in stories about surgeons and doctors, accounting for between 50% and 84% of those characters. These proportions likely reflect AI companies’ efforts to correct for past social biases in their models’ output.

Generative AI continues to reinforce gender stereotypes, according to anesthesiology researchers at Brussels’ Vrije Universiteit who study bias in artificial intelligence. In scenarios where a healthcare professional displays positive traits, the models are more likely to identify that professional as female; descriptions featuring negative characteristics more often cast the professional as male.
This pattern suggests that AI tools can encode entrenched assumptions about gendered behavior and about which roles are appropriate for whom. AI bias not only affects women and underrepresented groups in medicine; it also impacts patient care when algorithms perpetuate false clinical stereotypes based on race and gender. Addressing these biases is vital for the responsible integration of artificial intelligence into healthcare.
