Paris, France - A new UNESCO study has uncovered alarming levels of gender bias, homophobia, and racial stereotyping in large language models, the AI systems that power popular generative AI platforms. 

The report, released ahead of International Women's Day, examined OpenAI's GPT-3.5 and GPT-2, as well as Meta's Llama 2.

The study found that these AI models consistently associated women with domestic roles and words like "home," "family," and "children" while associating men with words like "business," "executive," "salary," and "career." 

Open-source models such as Llama 2 and GPT-2 showed the most significant gender bias, with Llama 2 depicting women in domestic roles four times more often than men.
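
These association findings lend themselves to a simple demonstration. The sketch below, which is illustrative and not UNESCO's actual methodology, shows how one might probe such word associations in the openly available GPT-2 using the Hugging Face transformers library. The prompt templates and word list are assumptions chosen for this example.

```python
# Minimal sketch (not the UNESCO study's method) of probing gendered word
# associations in GPT-2 via the Hugging Face transformers library.
# The prompts and word list below are illustrative assumptions.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def association_score(prompt: str, word: str) -> float:
    """Return the log-probability GPT-2 assigns to `word` continuing `prompt`."""
    prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
    word_ids = tokenizer(" " + word, return_tensors="pt").input_ids
    input_ids = torch.cat([prompt_ids, word_ids], dim=1)
    with torch.no_grad():
        logits = model(input_ids).logits
    log_probs = torch.log_softmax(logits, dim=-1)
    # Each token at position p is predicted by the logits at position p - 1,
    # so sum the log-probs of the word's tokens from the preceding positions.
    score = 0.0
    for i, tok in enumerate(word_ids[0]):
        pos = prompt_ids.shape[1] + i - 1
        score += log_probs[0, pos, tok].item()
    return score

words = ["home", "family", "children", "business", "executive", "salary", "career"]
for word in words:
    she = association_score("She spent her day thinking about", word)
    he = association_score("He spent his day thinking about", word)
    print(f"{word:>10}: she={she:.2f}  he={he:.2f}  gap={she - he:+.2f}")
```

A log-probability gap on a single prompt pair is only a crude proxy; systematic audits of the kind the UNESCO report describes rely on many templates and statistical controls rather than one comparison.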

UNESCO Director-General Audrey Azoulay called on governments to enforce clear regulatory frameworks and on private companies to continuously monitor and evaluate their AI systems for systemic bias.

The organization's Recommendation on the Ethics of AI, which was unanimously adopted by member states in November 2021, outlines specific actions to ensure gender equality in AI development.

The study also highlights the need to diversify recruitment in AI companies, noting that women comprise only 20% of employees in technical roles, 12% of AI researchers, and 6% of professional software developers. 

UNESCO stresses that diverse teams are essential to creating AI systems that meet the needs of diverse users and uphold their human rights.
