Does Artificial Intelligence (AI) Support Diversity, Equity and Inclusion? by Patrice Caire



AI, Inclusivity and Diversity: How can AI systems support equality?

Today, women make up only 23% of the tech workforce, and this lack of gender diversity often leads to AI tools and systems reinforcing existing societal inequalities and stereotypes. This imbalance has propelled me, Dr. Patrice Caire, to focus on an essential question: "How can artificial intelligence systems support diversity, equity and inclusion?" In this article, I delve into these issues, share stories from the industry, and discuss potential solutions.

Real-life Anecdotes of Bias in Technology

Amazon and Facebook's Age Discrimination:

Consider the story of my friend Jane, a 60-year-old who was barred from even seeing a job advertisement because of an illegal age-targeting arrangement between Amazon and Meta (formerly Facebook). This age bias exemplifies the unethical side of artificial intelligence, and the problem is not isolated to this one case: bias can creep in through the many interactions between humans and AI systems, often culminating in ethical dilemmas.

The “Zebra Problem” – The Importance of a Diverse Dataset:

AI systems, through the intelligent behaviour they exhibit, analyse their surroundings to achieve their goals. These "goals" may be recognising objects, processing language, or performing other complex tasks, all underpinned by neural-network and machine-learning models. The decisive factor is the training data: the diversity and volume of the data used determine how precise the system's output can be. This is the essence of the "zebra problem": a model that has only ever seen horses will confidently mislabel a zebra, because it can only reproduce what its data contains. The dataset's importance cannot be overemphasised, as labelling errors can have dramatic impacts on people's lives.
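
To make the mechanism concrete, here is a minimal sketch (a synthetic illustration, not an example from the talk) of how an unbalanced training set yields a model that fails on the under-represented class: a toy classifier trained on 990 "horses" and only 10 "zebras" ends up answering "horse" almost every time.

```python
# Synthetic illustration of the "zebra problem": a classifier trained
# on heavily imbalanced data fails on the under-represented class.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score

rng = np.random.default_rng(0)

def make_data(n_horses, n_zebras):
    # Two synthetic groups with overlapping feature distributions.
    X_horses = rng.normal(loc=0.0, scale=1.0, size=(n_horses, 2))
    X_zebras = rng.normal(loc=1.5, scale=1.0, size=(n_zebras, 2))
    X = np.vstack([X_horses, X_zebras])
    y = np.array([0] * n_horses + [1] * n_zebras)
    return X, y

# Train on a skewed dataset: 990 "horses", only 10 "zebras".
X_train, y_train = make_data(990, 10)
model = LogisticRegression().fit(X_train, y_train)

# Evaluate on a balanced test set.
X_test, y_test = make_data(500, 500)
pred = model.predict(X_test)
print("Recall on horses:", recall_score(y_test, pred, pos_label=0))
print("Recall on zebras:", recall_score(y_test, pred, pos_label=1))
# Zebra recall comes out far lower: the model has mostly learned to
# answer "horse", because that is what its training data contained.
```

The numbers and class names are invented; the pattern is the point: whatever group the data under-represents is the group the system gets wrong.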

Mislabeling in Practice

The story of a Google Photos user whose girlfriend was wrongly categorised as a gorilla illustrates this problem. Although Google later removed the gorilla tag from its service, a more systematic solution is still required. Similarly, for years, image searches for "CEO" returned predominantly images of white men due to the lack of diverse data.

Researchers at the University of Washington set out to investigate this issue and were surprised to discover that even a slight variation in the search term drastically shifted the results, again excluding women and people of colour. All these instances underscore the vital but often overlooked role data plays in these systems.

Recommendations for Addressing AI Bias

Automated Decision Systems and the “Zebra Problem”:

If insufficient care is taken to ensure diverse datasets, further instances of bias emerge. For example, a couple discovered sizeable discrepancies between their individual credit limits on their Apple Cards, despite the wife having the superior credit history. The reason? An antiquated 1974 law underpinned the decision-making algorithm. Here the zebra problem surfaces again: incorrect data usage leads to discriminatory discrepancies.

NIST's Report – Bias in AI is the Tip of the Iceberg:

The National Institute of Standards and Technology (NIST), in a report published in March 2022, highlighted that only some statistical and computational biases are currently being addressed. Many human and systemic biases remain largely ignored, leading to the conclusion that the current focus on bias in AI merely scratches the surface of a much deeper problem.

Possible solutions:

  • Users must be vigilant and report any ethical issues they encounter with AI systems.
  • Companies must operate within existing legal frameworks and publicly disclose the datasets they use, ensuring these are diverse and gender-balanced (a simple audit sketch follows this list).
  • Diverse teams, representing society at large rather than a fraction of it, must be involved throughout the AI lifecycle.
  • Multi-factor and multi-stakeholder involvement is essential for testing and evaluating these systems.
  • Governments and relevant institutions must ensure companies operate within the law rather than in a legal void, setting AI standards and ensuring compliance through audits.
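
As a minimal sketch of the disclosure point above (the dataset, column name, and tolerance here are hypothetical, not drawn from any company's practice), a pre-release audit can be as simple as reporting each group's share of the data and flagging deviations from an even split:

```python
# Hypothetical pre-release audit of demographic balance in a dataset.
import pandas as pd

def audit_balance(df: pd.DataFrame, column: str, tolerance: float = 0.10):
    """Print each group's share of `column` and flag any group whose
    share deviates from an even split by more than `tolerance`."""
    shares = df[column].value_counts(normalize=True)
    expected = 1.0 / len(shares)
    for group, share in shares.items():
        flag = "OK" if abs(share - expected) <= tolerance else "IMBALANCED"
        print(f"{group}: {share:.1%} (expected ~{expected:.1%}) [{flag}]")

# Hypothetical usage, mirroring the 23% statistic from the introduction:
df = pd.DataFrame({"gender": ["F"] * 230 + ["M"] * 770})
audit_balance(df, "gender")
```

Real audits are of course broader, covering labels, proxies for protected attributes, and intersectional groups, but even a simple check like this makes a dataset's balance visible before the dataset is used.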

Conclusion

It is our collective decision whether we allow AI systems to perpetuate societal stereotypes and inequalities, or pivot and work towards AI that supports diversity, equity and inclusion. As we increasingly use these technologies, we must remain vigilant and strive to create a more inclusive and equitable AI landscape for all.

Questions

One question, posed by Paula Castro, was: "If there are no clear legal frameworks, how can we ensure AI is being regulated?" This is a common and serious concern, and it emphasises the need for robust frameworks to monitor and regulate AI. It is important to keep abreast of the laws governments publish and to join the discussions of bodies such as the IEEE, which are working to set standards and to address issues of trust and gender balance in AI.

In conclusion, we all have a role to play in making sure that AI is used fairly and equitably, which includes being vigilant and proactive in addressing and reporting issues that affect us.

