Does Artificial Intelligence (AI) Support Diversity, Equity and Inclusion? by Patrice Caire
The Power of AI: A Deep Dive into its Impact on Gender Equality
I am Dr. Patrice Caire, and today I invite you to join me as I explore the intriguing intersection of artificial intelligence (AI), robotics, and gender equality.
The Role of Women in the Tech Industry
In today's tech industry, women represent a mere 23% of the workforce. Consequently, female viewpoints are often excluded from the design and evolution of the AI tools and systems we use every day. In the absence of those viewpoints, these systems can unknowingly perpetuate societal inequities and stereotypes. Today, we will delve deeper into the question: how can AI systems support diversity, equality, and inclusion?
The Amazon and Facebook Scandal
To understand AI's potential role in promoting equality, it is crucial to scrutinize instances where it has failed to do so. One example is the case of Amazon and Facebook (now Meta). These tech giants allegedly configured their ad system so that people over 54 could not see certain job ads, thereby breaching equality laws.
Understanding AI and Machine Learning
In simple terms, AI systems mimic intelligent behavior: they analyze their environment and take actions to fulfill certain goals, with the help of techniques such as machine learning and deep learning. Their effectiveness relies heavily on the quality and diversity of the data sets used during training. For instance, if a system has never been trained on images of a zebra, it will fail to recognize a zebra when it sees one.
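To make this concrete, here is a minimal sketch using toy feature vectors and the scikit-learn library (an illustration added here, not code from the talk): a classifier trained only on "cat" and "dog" examples has no way to answer "zebra", whatever input it receives.

```python
# Minimal sketch: a model can only output the labels it was trained on.
# The two-number "features" are toy stand-ins for real image features.
from sklearn.neighbors import KNeighborsClassifier

X_train = [[0.9, 0.1],   # a cat image
           [0.8, 0.2],   # another cat
           [0.2, 0.9],   # a dog
           [0.1, 0.8]]   # another dog
y_train = ["cat", "cat", "dog", "dog"]

model = KNeighborsClassifier(n_neighbors=1)
model.fit(X_train, y_train)

zebra = [[0.5, 0.5]]            # an animal the model has never seen
print(model.predict(zebra))     # forced to say 'cat' or 'dog' -- never 'zebra'
```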
The Dilemma of Bias in AI
Unfortunately, biased data sets can lead to flawed and discriminatory results, such as the infamous incident where Google's image recognition labeled a Black woman as a gorilla. Similarly, not long ago, an image search for 'CEO' returned exclusively white men, reflecting the bias inherent in the underlying data.
Addressing AI Bias: Why It Matters
Violations of AI ethics go beyond offensive or inaccurate labels. They reach into parts of our lives where the impact can be distressing, such as automated decision systems for housing and loans. A classic example is the Apple Card scandal, in which a woman received a credit limit twenty times lower than her husband's despite having the better credit history.
In March of this year, the US National Institute of Standards and Technology (NIST) illustrated that the biases we are beginning to acknowledge only skim the surface, resembling the exposed tip of an iceberg. While we are aware of, and beginning to address, statistical and computational biases, human and systemic biases remain substantial and largely untouched.
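To give a sense of what a "statistical and computational" bias check looks like in practice, here is a minimal sketch in plain Python with invented numbers (an illustration, not material from the NIST report): it compares how often an automated system says yes to two groups.

```python
# Toy selection-rate comparison: the kind of statistical check that fairness
# toolkits automate. The decisions and group labels below are invented.
decisions = [1, 1, 1, 0, 1, 0, 0, 0, 1, 0]          # 1 = approved, 0 = rejected
groups    = ["men"] * 5 + ["women"] * 5

def selection_rate(group):
    picked = [d for d, g in zip(decisions, groups) if g == group]
    return sum(picked) / len(picked)

print(selection_rate("men"), selection_rate("women"))
# 0.8 vs 0.2 -- a gap this large is the visible, measurable tip of the iceberg;
# the human and systemic biases that produced the data are far harder to quantify.
```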
Moving Forward: Ensuring Ethical AI
Addressing bias in AI is not solely the duty of AI producers and regulators. As consumers and citizens, it is imperative that we stay vigilant, report ethical breaches in AI, and pressure companies to adopt transparent practices.
With more diversity in AI teams and multi-stakeholder testing, we can ensure fairer outcomes. It is also crucial that we push for laws and standards that create a robust framework for AI production.
Conclusion
So we face a choice: do we want to let AI systems reinforce the stereotypes and inequalities of our society, or do we want to work together to address these issues and build AI systems that support diversity, equality, and inclusion? The instruments of change are in our hands.
Let's Utilize AI to Promote Equality
With only 23% of the tech industry being women, we need more representation. We need to be aware of unconscious biases and take every opportunity to balance the scales, whether by staying vigilant or by speaking out for justice. With combined efforts, we can leverage artificial intelligence not only for technological advancement but for ethical progress as well.
Video Transcription
Thank you very much for having me. I'm very happy to be here with you today. I am Dr. Patrice Caire, and I do research in artificial intelligence and robotics. I have a PhD in computer science and I have worked extensively in Silicon Valley and in academia. As you know, only 23% of the people working in the tech industry today are women. No wonder, then, that the point of view of women is not included in the design and development of the artificial intelligence tools and systems that we use every day. Instead, these systems often reinforce the inequalities and stereotypes of our society. It is exactly for these reasons that I have decided to focus my work on this question: how can artificial intelligence systems support diversity, equality and inclusion? Let me start with a story. I have a friend, Jane, who was looking for a job. I saw an ad, I sent it to her, and she contacted me right back and said, "Patrice, I cannot see the ad." And I said, "But Jane, I'm looking at it right now, I have it right in front of me." So what was happening? Well, what was happening was that Amazon, which was trying to hire only people between the ages of 18 and 54, which is of course illegal, had arranged something with Facebook, now Meta, to bypass the law.
So how did they do that? Well, they displayed the ad only to users whose profiles indicated that they were between the ages of 18 and 54. This is illegal, this is a way to bypass the law, and this is not fair. My friend Jane was 60 at the time, so she could not see the ad; it was simply not shown to her. And I was below 54, so I could see it. So we can see that it is in the interactions between AI systems and humans that the ethical issues often take place. We use many different kinds of such systems today. For example, when we log on to our phone, we use image recognition. When we dictate, we use NLP, natural language processing. And of course, today there are more and more robots with embedded AI systems. But what are AI systems? Well, they are systems which display intelligent behavior, and they do that by analyzing their surroundings and taking actions to fulfill some goals. What is behind that? Well, today the main approaches are neural networks, machine learning, deep learning, and so forth.
For people who are not familiar with these topics, here is a very high-level example. Let's imagine that we have our system and there is a new piece of data, in this case a black cat, and we want to know what it is. The system outputs: it's a cat. How was that possible? Because in training we had used a data set containing lots of cats, labeled as cats, and lots of dogs, labeled as dogs. The bigger the data set, and also the more diverse the data set, the more refined the trained model will be and the more precise the output. So we see that this data set is important. Now let's consider that we feed in as new data another animal, say a zebra. What is our system going to do? Well, it is not going to recognize it, of course, because there was no zebra in the training data set. We see again the importance of the training data.
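Here is a small sketch of that zebra problem, using toy feature vectors and scikit-learn (added purely for illustration; it is not code from the talk): the same kind of classifier fails or succeeds on a zebra depending only on whether zebras were part of its training data.

```python
# The "zebra problem": the model recognizes a zebra only once zebra examples
# are part of the (bigger, more diverse) training set. Features are toy values.
from sklearn.neighbors import KNeighborsClassifier

cats_dogs_X = [[0.9, 0.1], [0.8, 0.2], [0.2, 0.9], [0.1, 0.8]]
cats_dogs_y = ["cat", "cat", "dog", "dog"]
zebras_X, zebras_y = [[0.5, 0.5], [0.55, 0.45]], ["zebra", "zebra"]

narrow  = KNeighborsClassifier(n_neighbors=1).fit(cats_dogs_X, cats_dogs_y)
diverse = KNeighborsClassifier(n_neighbors=1).fit(cats_dogs_X + zebras_X,
                                                  cats_dogs_y + zebras_y)

new_image = [[0.52, 0.48]]          # an unseen zebra
print(narrow.predict(new_image))    # wrong label: this model has no idea what a zebra is
print(diverse.predict(new_image))   # 'zebra' -- the more diverse data set gets it right
```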
So let's look at how this can play out in real life. In fact, I'm going to show you an image. There was this engineer who was looking at images and suddenly he saw his girlfriend, but, shockingly, she was labeled as a gorilla. This was terrible; of course this is not acceptable. So what was happening there? Well, the data had been wrongly labeled, and this happens quite often, because labeling is still done manually, by so-called Amazon Mechanical Turk workers, and sometimes wrong labels end up attached to images. So this is like our zebra problem from before: the data set was not of high quality. We can see that this was an issue. Of course, Google tried to address it. And how did they do that? Well, a couple of years ago, they just removed the gorilla label. That was a bit of a quick fix, but it was probably cheaper than addressing the issue systematically. But we have other examples. Image search, for instance, is very important, and a few years ago an image search for the term CEO would return only white men. How is that possible? Our zebra problem again? Yes. The data set did not include many women, probably not any, for the term CEO. Perhaps the team working on this did not come from a very diverse population, or did not have many women among them. But in any case, it was not addressed, and it appeared across all the different search engines. So what happened next was an effort to check on the issue, which had caused quite a scandal.
The University of Washington decided earlier this year to do a project to see what was happening. They decided to check what the issue was when you searched for CEO, and, very nicely, Google now displayed a more statistically representative picture of what a CEO can be today. Then the University of Washington decided to go further and to use a technique from security and privacy research, similar to the crash tests used for car safety: they challenge the computer systems with a number of examples and see how the systems hold up. So they decided to start with a very, very slight change; they just added a search term. With this very slight change, an enormous difference appeared in the output. And here is what we got: our zebra problem again. There were no women in this output, no people of color. Outdated data had been used to fix the issue on the first page, and nothing had been done to address the whole problem systematically.
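The perturbation idea can be sketched in a few lines of Python. This is a hedged illustration only: `image_search` and `appears_to_be_a_woman` are hypothetical stand-ins for whatever search API and annotation step the researchers actually used, and the added search term is likewise illustrative.

```python
# Hedged sketch of a perturbation test: run the same image search with a tiny
# change to the query and compare how often women appear in the top results.
def share_of_women(query, image_search, appears_to_be_a_woman, top_k=100):
    results = image_search(query)[:top_k]          # hypothetical search call
    hits = [img for img in results if appears_to_be_a_woman(img)]
    return len(hits) / max(len(results), 1)

# baseline  = share_of_women("CEO", image_search, appears_to_be_a_woman)
# perturbed = share_of_women("CEO United States", image_search, appears_to_be_a_woman)
# A large gap between the two shares suggests the earlier fix was patched onto
# one specific query rather than applied to the system as a whole.
```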
So these are a few examples, but there are also cases where the impact on our lives can be very dramatic. For example, automated decision systems are used very pervasively today for many things, such as housing and loans. There was this couple, a husband and wife, who were filing their tax return together, and the wife actually had the better credit history of the two.
They had a new project and wanted to check their credit line on the Apple Card. And what happened? They saw that the husband had 20 times the credit limit of his wife. How was that possible? They contacted Apple and requested an explanation, and the answer Apple gave was that the automated decision algorithm was based on a 1974 law, which of course produced a totally outdated output when the data set was used in that way, because the situation of women in 1974, especially their economic situation, was very different from today.
So, our zebra problem again: wrong data set, wrong outcome. We have now seen many different examples of biases and different ways of dealing with them. But how can we manage these biases? This is what NIST, the US National Institute of Standards and Technology, addressed this year. They published a report in March, and in that report they showed a picture of an iceberg. What is pretty interesting is that what we are only just starting to address is the tip of the iceberg, meaning the problems which are statistical and computational biases. The rest is not addressed at all, and the rest is pretty important. The tip of the iceberg that we are addressing today is mainly handled by companies providing toolkits, often Python code, which offer a number of fairness metrics as well as debiasing methods and visualization (a small sketch of one such method follows below). But what is below the surface, the human biases, the systemic biases, is not taken into account, as they say, and that work still remains to be done. But we can still do something.
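As one concrete example of the kind of debiasing method those toolkits offer, here is a minimal sketch of "reweighing" in plain Python (invented data; an illustration of the general technique, not any specific toolkit's API): each training example gets a weight so that group membership and the positive label become statistically independent.

```python
# Hedged sketch of "reweighing", a simple pre-processing debiasing method:
# weight each (group, label) combination by expected / observed frequency.
from collections import Counter

samples = [("men", 1), ("men", 1), ("men", 0),
           ("women", 0), ("women", 0), ("women", 1)]   # invented toy data
n = len(samples)

group_freq = Counter(g for g, _ in samples)
label_freq = Counter(y for _, y in samples)
pair_freq  = Counter(samples)

weights = {pair: (group_freq[pair[0]] / n) * (label_freq[pair[1]] / n)
                 / (pair_freq[pair] / n)
           for pair in pair_freq}

print(weights)
# Under-represented combinations (e.g. women with a positive label) get a weight
# above 1, so a model trained with these weights stops using the group as a
# proxy for the label.
```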
In fact, what we can do is be on the lookout: whenever we see an ethical issue with AI, we have to report it, we have to talk to each other, we have to try to find solutions and not let it go; we have to be aware of the situation. And of course, we have to ask and demand that companies be responsible and transparent. For example, they should give access to their data sets so that we know their composition, whether they are diverse, whether they are gender balanced. The responsibility of these companies is enormous, because the automated decision-making algorithms that we see more and more in our lives impact us directly. Companies should really state, right on their sites, that they are using these automated systems, so that we as users have the opportunity to contact them and say: there is an issue here, we have to fix something, because the result was not right. And of course, we have seen that women are only 23% of the tech workforce. So yes, we have to make sure that we have teams which are diverse and gender balanced across the whole AI life cycle: from the pre-design, through the design and development, to the deployment.
These teams have to be diverse and representative of society. We cannot have just a small percentage of society doing all this work on tools that we use all the time. We also have to make sure that we have multi-factor and multi-stakeholder testing to validate and evaluate all these systems. And finally, we have to make sure that companies are not operating in the legal void they operate in today. This is not acceptable, because companies are not there for philanthropic goals; they are there for business. We have to make sure that our governments and our institutions are really working for us, ensuring that there are laws and AI standards to give a framework to all the companies producing AI systems. And we have to make sure that all of this is enforced and that we also set up legal ways to audit algorithms, all algorithms. So we see we can do a lot. To conclude, I am going to let you read this sentence. What do we want to do? Do we want to let AI systems reinforce the stereotypes and inequalities in our society?
Or do we want to make sure that we work together, try to address these issues, and work towards AI systems which support diversity, equality and inclusion? This is our choice. Thank you very much for your attention, and I'll be taking questions now. So there is a question here from Paula Castro: if there are no clear legal frameworks, how can we make sure AI is being regulated? Well, that is really a good question, because governments are publishing results and laws all the time, so we have to follow the issue and see which bodies are actually working on this. That way we can also respond when petitions come to us; for example, a few years ago there were many petitions from scientists trying to get AI regulated in the autonomous weapons industry, which is a really critical area.
So we have to make sure that something is happening there, and maybe join these discussions, whether with groups like the Electronic Frontier Foundation or with the AI bodies which are working on this topic. There are some: the IEEE, for example, has been putting a report together, starting by looking at different areas such as trust, asking how trust can be a foundation for building more gender equality, and also at specific standards for that. Every country has actually started to set something like this in motion. So it is up to us, and that was my first point, to get informed and to make sure that we follow the advances being made in this area. If there are other questions, please don't hesitate. I mean, gender equality is a huge question. With only 23% of women in the tech industry,
of course we are not represented. And it's not to say that the white men who are working on these products and this software are consciously biased. We all have a lot of biases which are simply unconscious; we have internalized them, so we don't know. People often don't know that they are biased until they are shown that it comes at the expense of other people, other groups, other minorities, other parts of society. Often, when you are part of the dominant group, you don't see the problems that others have, because you don't have them yourself.
So this is why NIST was showing the importance of human biases and the problem of systemic biases: we all have biases. The interesting part with AI tools and systems is that they show us that we have biases, because they reinforce them, and we see them even more clearly because these systems are so powerful. There are many instances of this in natural language processing, when you translate one language into another. For example, it was shown with doctors and nurses: in some languages the word for doctor is neither feminine nor masculine, and when translated into a language where it has a gender, it was automatically rendered as male, even when the person was in fact a woman. So women are assumed to be nurses and men are assumed to be doctors. All these biases are harmful, because they also reinforce what is happening in our society.
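That translation behavior can be probed with a few lines of code. This is a hedged sketch: `translate` is a hypothetical stand-in for any machine-translation API, and Turkish is used as the example source language because its pronoun "o" carries no gender.

```python
# Hedged probe of gendered translation defaults. `translate` is hypothetical;
# plug in whatever translation API is available.
neutral_sentences = {
    "doctor": "o bir doktor",    # gender-neutral Turkish: "that person is a doctor"
    "nurse":  "o bir hemşire",   # gender-neutral Turkish: "that person is a nurse"
}

def audit_pronoun_defaults(translate):
    defaults = {}
    for job, sentence in neutral_sentences.items():
        english = translate(sentence, source="tr", target="en").lower()
        if english.startswith("he "):
            defaults[job] = "male"
        elif english.startswith("she "):
            defaults[job] = "female"
        else:
            defaults[job] = "neutral/other"
    return defaults

# Probes of this kind have repeatedly returned "he is a doctor" and
# "she is a nurse" -- the stereotype present in the training data.
```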
Another example: for the past 10 years we have had dictation and voice assistants on our phones, like Siri, like what we see on Amazon devices; everybody now has a digital assistant. Well, from the start, the digital assistants had female voices, and for some of them it is still a female voice even now. Do you know why that is? Simply because social science research has shown that female voices are perceived as more passive and more agreeable, and people prefer a digital assistant that seems passive rather than something they would perceive as more controlling, more dictatorial, more male-coded, if it had a male voice. In fact, it is only this year that Apple added a gender-neutral voice option to Siri. So we are making progress, because people have been complaining for ages for these voice assistants to be less gender-biased, and this has finally happened. But it takes a long time, and it takes people being on the lookout. That is what we have to do: be on the lookout whenever we see something that does not feel right, where we feel we are not being favored
but, on the contrary, are being discriminated against. Then we should speak out and demand that things change, because it is only in this way that they will change.