Overcoming Gender Bias in AI

Automatic Summary

Hello, my name is Tahmina Mahmud and I am currently working as a deep learning engineer at Nauto, developing AI-based algorithms to enhance external safety features and improve driver behaviour. Today, I'd like to tackle an important issue: gender bias in Artificial Intelligence.

Session Overview

My talk aims to:

  1. Illustrate the relationship between Artificial Intelligence, Machine Learning, and Data Science;
  2. Identify the biases found within AI algorithms;
  3. Provide examples of the tangible impacts of gender bias on our daily lives;
  4. Suggest best practices to avoid AI bias.

AI, Machine Learning, and Data Science

AI is a set of concepts and techniques used to simulate tasks usually performed by humans. Machine Learning, a subset of AI, teaches machines to simulate human intelligence by learning from data. Data Science, in turn, uses statistical and machine learning techniques to identify patterns within data.

The Importance of Data

Data, often referred to as "the new oil," is arguably the most valuable resource in the world today: although raw data is not valuable in itself, accurately processed and refined data has the power to influence an entire civilization. Unbiased, well-processed data is therefore essential for building effective AI models and should be a paramount concern for all AI developers.

Gender Bias in AI

Bias enters AI models when algorithms are trained on real-world data that carries inherent socio-economic inequalities and historical prejudices; models trained on such data learn and reproduce those prejudices.

Illustrations of Gender Bias in AI

  • In Computer Vision, image recognition services often label female politicians with terms related to their appearance, while their male counterparts are tagged as business people.
  • In Natural Language Processing, translations from languages with non-gendered pronouns show a clear bias in gender associations, reflecting conventions and societal norms (the embedding probe after this list illustrates the same effect).
  • Virtual assistant voices are most commonly women's, while IBM's Watson, billed as the world's most powerful computer, has a male voice, reinforcing existing stereotypes.
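
The same learned associations that skew these translations can be surfaced directly from pretrained word embeddings. The probe below is a minimal illustrative sketch, not part of the talk: it assumes Python with the gensim library, it downloads the public glove-wiki-gigaword-100 vectors, and the profession list is my own choice.

    # Minimal probe of gender associations in pretrained word embeddings.
    # Illustrative only: the model name and word list are assumptions,
    # not taken from the talk.
    import gensim.downloader as api

    # Downloads ~128 MB of public GloVe vectors on the first run.
    model = api.load("glove-wiki-gigaword-100")

    professions = ["doctor", "nurse", "engineer", "teacher", "ceo", "secretary"]

    for word in professions:
        # Positive score: the word sits closer to "he" than to "she" in the
        # embedding space; negative score: closer to "she".
        lean = model.similarity(word, "he") - model.similarity(word, "she")
        direction = "male-leaning" if lean > 0 else "female-leaning"
        print(f"{word:>10}: {lean:+.3f} ({direction})")

Professions historically dominated by one gender typically show a clear lean, and a translation model trained on the same kind of text inherits exactly these associations when it has to pick a gendered pronoun.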

Cause and Effect of AI Bias

AI bias stems from training data that reflects real-world gender discrimination. It's time we focus on ‘Responsible AI’ to ensure these technologies promote equality and fairness.

Creating a Bias-Free AI World

While developing AI systems, let's be mindful of the data we use, reviewing its context, limitations, and validity. Ensuring that data is representative of diverse demographics can help reduce bias, and auditing a model's accuracy separately for each demographic group can reveal gaps that a single aggregate metric hides (see the sketch below). Let's also aim for gender diversity when forming teams of AI developers; AI companies worldwide should lead by example, hiring more women across teams.
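
As a concrete version of the auditing point above, accuracy can be reported per demographic group instead of as one aggregate number. The sketch below is a minimal illustration, assuming Python with pandas and a hypothetical evaluation table that records each example's group, true label, and model prediction.

    # Minimal disaggregated accuracy audit: compare overall accuracy with
    # per-group accuracy. The column names and tiny dataset are hypothetical.
    import pandas as pd

    eval_df = pd.DataFrame({
        "gender":    ["female", "male", "female", "male", "female", "male"],
        "label":     [1, 0, 1, 1, 0, 0],
        "predicted": [0, 0, 1, 1, 1, 0],
    })

    correct = eval_df["label"] == eval_df["predicted"]
    print(f"overall accuracy: {correct.mean():.2f}")

    # The aggregate number can hide a large gap between groups.
    print(eval_df.assign(correct=correct).groupby("gender")["correct"].mean())

Here the aggregate accuracy of about 0.67 hides a model that is right every time for one group and only a third of the time for the other.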

Conclusion

While bias may be unavoidable in life, it should not be part of our technologies. We must strive for fairness and use AI to foster positive change. With men and women working together, we can shape the future of a bias-free AI world.

Please feel free to reach out via email or LinkedIn with any queries, comments, or further discussion of the topic. Thanks for joining this session. Together, let's make strides towards eliminating gender bias in AI.


Video Transcription

Hi, everyone. Thank you for joining this session. I'm Tahmina Mahmud, currently working as a deep learning engineer at Nauto, where I'm responsible for developing AI-based algorithms for external safety features and driver behavior improvement. The title of my talk is Overcoming Gender Bias in AI.

I would like to thank the WomenTech Network for giving me the opportunity to discuss this important issue on this platform. This is the overview of the session. First, I will briefly discuss the relationship between artificial intelligence, machine learning, and data science.

Then I'll go over different aspects of data which are crucial for introducing bias into AI algorithms. After that, I will share some examples of gender bias in AI and their impact on everyday lives. Next, I will try to pinpoint some causes of AI bias and propose best practices to avert them. Finally, I will conclude the session on a positive note. The goal of this session is not to criticize existing AI algorithms or models, but to start an important conversation, create awareness about the broad scope of the problem, and share some key takeaways with the audience. Artificial intelligence is a set of concepts or techniques to mimic human-like tasks, and machine learning is a subset of AI which teaches machines to simulate human intelligence. Data science is the science of identifying patterns by analyzing data. Now, machines do not have a mind of their own.

They mirror human experience and behavior, since AI models are trained with real-life data. Now, data is probably the most valuable resource in today's world. Some call it the new oil, because although raw data itself is not valuable, accurately processed and refined data has the power to influence an entire civilization. According to Andrej Karpathy, when building AI in the real world, the data is far more important than the model itself.

The model is only as good as the data it is trained on. Now, data is not always neutral or objective, since it is shaped by inherently unequal social, historical, and economic conditions. That's why more data is not always better, especially when it is not representative of the bigger picture. Given such extensive power inherent in the technology itself, it is the responsibility of the AI practitioner to ensure that AI makes the world a better place and not a worse one.

Now, bias is defined as a prejudice for or against an individual or a group, typically in a setting which is unfavorable, and gender bias means bias based on a specific gender. Gender bias in AI results in inferior algorithm performance for women, stemming from erroneous assumptions in the learning algorithm. Now let's go over some examples of gender bias in AI. Computer vision is an important branch of AI. An academic study and tests by Wired showed that popular computer-vision-based image recognition services tagged women lawmakers like California State Senator Cathleen Galgiani with labels related to their appearance, whereas male lawmakers like her colleague Jim Beall were tagged as "business person" or "elder".

Next is an example of gender bias in natural language processing. Here, I tried to translate some sentences from my native language, Bengali or Bangla, where the pronouns are non-gendered. Yet you can clearly see the inherent bias of the algorithm in how it associates the attributes in the translation.

Now, the training data is based on a large corpus of human language, which is heavily influenced by gender conventions around the world, and language is one of the most powerful means through which gender discrimination is perpetrated and reproduced. Gender gaps can actually widen when algorithms are misinformed.

Most AI-based virtual assistants have women's voices, yet the most powerful computer in the world, Watson, has a male voice. This reinforces the social reality in which a majority of personal assistants or secretaries are women. Now, let's discuss some life-threatening examples of gender bias in AI. For a really long time, seat belts, headrests, and airbags have been designed based on crash tests with dummies modeled on men's physique and sitting position. The standard dummy is basically a 50th-percentile man, with a height of 5 ft 9 in and a weight of 170 lb. So basically, it's a one-size-fits-men scenario, and that's why women are 47% more likely to be seriously injured and 17% more likely to die than a man in a similar car accident. Online health apps based on data mainly collected from men tell female users that pain in the left arm or back is due to depression and advise them to see a doctor in a couple of days; in contrast, a male user is more likely to be asked to immediately contact his doctor based on a diagnosis of a possible heart attack. Speech-to-text technology has been more accurate for taller speakers with longer vocal cords and lower-pitched voices.

Speech technology is more accurate for speakers with these characteristics, typically male, and far less accurate for those with higher-pitched voices, typically female. There are also algorithms which affect women's access to jobs and loans by automatically weeding out their applications.

For example, the algorithm may be trained on data based on previous employees' resumes; just because fewer women applied for the position previously, or women were less common in the sector in the past, doesn't mean that a woman is less fit for the position. Algorithm-based risk assessment in criminal justice systems could also work against women if the system didn't factor in that women are less likely than men to reoffend. Now, these are the primary causes of AI bias. The training data is based on gender discrimination from the real world.

So the data itself is skewed, which is known as the gender data gap. When data is labeled by humans, another layer of social bias contaminates it. Now, numbers cannot speak for themselves, especially when they derive from outdated values heavily influenced by an unjust status quo.

So unless you factor in the context, the representation is misleading. The lack of diversity in AI, or in tech in general, is a major issue, because a lot of these biases are unconscious, and unless you are on the discriminated-against side, you do not always think about them. Looking at the major impact of these biases in reshaping the future of humanity, it's high time we focused on responsible AI. Responsible AI isn't just good for business; it is good for the world in general. Acknowledging the problem is the first step. Meaningful data needs to be fed to the models, and AI practitioners need to start seeing beyond numbers, reviewing the context, limitations, and validity of data. We need to make sure that the training samples are as diverse as possible in terms of ethnicity, gender, race, age, and so on, and also that the people who are building the AI models come from different backgrounds. Data needs to be updated over time, since this is not a problem which you solve once and are done with. Also, asking the right questions, like what problem are you trying to solve, how are you collecting the data, and who is collecting and labeling the data, is key and crucial, as is auditing the accuracy levels of algorithms separately for different demographic categories.

If there are not enough women joining or contributing to the industry, there's always going to be a hole in the AI's knowledge, and that's why AI companies need to lead by example by hiring more women in product design, data science, and engineering. Cross-team collaboration is extremely important, because diversity of thought leads to better problem solving: machine learning engineers and scientists should work closely with user researchers and product managers to get better perspectives. Now let's conclude the session with some positive thoughts. Bias may be an unavoidable fact of life.

Let's not make it an unavoidable aspect of our technologies. AI practitioners and leaders have an obligation to create technology that is effective and fair for everyone. Let's use technology to bring positive change and start afresh: women, together with men, can play a vital role in shaping the future of a bias-free AI world. That was everything I wanted to share today. Thank you for your time. Let's connect through email and LinkedIn, and please feel free to ask any questions. So let me go to the chat window quickly and see if there are any questions. OK, I see different people saying that they experience the same thing in the translation in their native languages as well. OK, so there is a question: do you have any advice for bringing up concerns like this with your team in a work setting? Definitely. As I mentioned, as AI practitioners, and as women in general, because we are on the discriminated-against side, I would say that it is our major responsibility to bring up any concerns we find with our team and at least share our perspectives with them, right?

Because, as I mentioned, a lot of these biases are unintentional; it may be possible that they have just never thought about it. And if you have given it some thought, then it's better to share it and rectify the problem early, so that we don't end up with any adverse impact after the product is already finished. Another question is: how much emphasis should be given to setting up ethics and governance practices before we start building AI models? Yeah. So in general, to make AI responsible and make sure it brings good to the world and makes the world a better place, it is of the highest importance that we set up ethics and governance practices before we actually start building the AI models, right? Because a lot of the time, there is a lot of emphasis on the business side, or maybe on how accurate the model is, and so on.

But even if the model is only erroneous in 5% of the cases, if that 5% of cases shows a pattern based on a certain ethnicity, race, age, or gender, then that is something we need to take care of, right? OK. So thank you again, everyone, for your time. And as I mentioned, please feel free to connect and ask any follow-up questions, and have a great rest of your day. Thank you so much. Bye bye.