Ideshini Naidoo - Ethical AI for a better world

Automatic Summary

Understanding the Impact of AI on Society

In this rapidly evolving digital era, Artificial Intelligence (AI) is more pervasive than ever. Its power and potential are enormous - but how we harness it remains a topic of contentious debate. As technologists, we bear the weighty responsibility of shaping the world with intelligent systems that are ethical, safe, and beneficial to everyone.

Technology is neither good nor evil in and of itself. It's all in how people use it. The power of technology has always held a dichotomy. The same tools can be used for different, often contradictory, purposes. Yet, the difference today lies in the unprecedented speed at which technology is evolving. This makes it harder for societies and governments to keep up, making responsible use of AI more challenging, but also more crucial.

Getting to Grips with AI Ethics: The Three Big Challenges

AI ethics is a big topic. Notwithstanding its size and complexity, it is our responsibility to make a positive impact, no matter how small, in steering its direction. Three significant challenges arise when well-intentioned AI goes awry: feedback loops, bias, and disinformation.

Feedback Loops: The Dark Side of AI Efficiency

Feedback loops are one of the most common unintended consequences of AI algorithms. An AI system learns and improves its function based on outcomes from past data. A prime example of a feedback loop is YouTube's recommendation engine. On the surface, the algorithm successfully optimizes watch hours, with roughly 70% of content viewed on YouTube recommended by this engine. However, because the algorithm does not judge content as good or bad, it can inadvertently endorse questionable content. Consequently, with users exposed mostly to specific kinds of content, misinformation can proliferate and potentially radicalize viewers.
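To make the loop concrete, here is a minimal, hypothetical sketch (not YouTube's actual system; the categories and numbers are invented) of an engagement-maximizing recommender whose own slate becomes the data it learns from in the next round:

```python
import random
from collections import Counter

random.seed(7)

CATEGORIES = ["news", "music", "sport", "conspiracy"]

def recommend(history, k=10, exploit=0.8):
    """Fill most of the slate with the user's most-watched category,
    plus a few random picks (a crude exploit/explore split)."""
    top = Counter(history).most_common(1)[0][0]
    slate = [top] * int(k * exploit)
    slate += random.choices(CATEGORIES, k=k - len(slate))
    return slate

# The user starts with just one extra click on conspiracy content.
history = ["news", "music", "sport", "conspiracy", "conspiracy"]

for step in range(6):
    slate = recommend(history)
    # The user can only click what they are shown, so the slate itself
    # becomes the "past data" the system learns from in the next round.
    history += random.sample(slate, k=5)
    share = history.count("conspiracy") / len(history)
    print(f"round {step}: conspiracy share of watch history = {share:.0%}")
```

Even with the small explore step, the slight initial lean toward one category tends to dominate the watch history within a few rounds, which is the spiral described above.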

Rather than resist the inevitability of feedback loops, we have to monitor their potential side effects carefully and be deliberate in the design decisions that we make. One positive example comes from Evan Estola, the lead machine learning engineer at Meetup, who remedied gender bias by removing gender as a factor from the company's recommendation algorithm.
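A sketch of what that design decision can look like in practice follows; it uses an invented interaction log and scikit-learn, and is illustrative rather than a description of Meetup's actual system. The sensitive attribute is excluded from the feature list, and per-group scores are reported so that any imbalance is surfaced instead of silently reinforced.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical interaction log: one row per (user, event) pair, with a label
# recording whether the user actually attended the recommended meetup.
df = pd.DataFrame({
    "gender":        ["f", "m", "f", "m", "f", "m", "f", "m"],
    "topic_tech":    [1,   1,   0,   1,   1,   0,   1,   0],
    "past_attended": [3,   5,   1,   4,   2,   0,   6,   2],
    "attended":      [1,   1,   0,   1,   0,   0,   1,   1],
})

# Deliberate design decision: the sensitive attribute is never a model input.
EXCLUDED = {"gender", "attended"}
features = [c for c in df.columns if c not in EXCLUDED]

model = LogisticRegression().fit(df[features], df["attended"])

# Monitoring step: score everyone and compare average recommendation scores
# per group, so any imbalance shows up in a report rather than compounding.
df["score"] = model.predict_proba(df[features])[:, 1]
print(df.groupby("gender")["score"].mean())
```

Dropping the column is only a first step, since other features can act as proxies for the excluded attribute; that is why the monitoring report matters as much as the exclusion itself.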

Addressing Bias in AI

Bias is another central challenge for the responsible and ethical use of AI. While biased data is often blamed for skewing AI-based systems and algorithms, researchers at MIT have identified at least six points in the machine learning process where bias can be introduced.

Two of these, historical bias and representation bias, frequently make their presence felt, often stemming from existing societal biases. To address them, algorithm developers must use diverse and curated data sets for training and testing AI systems, and must also build trustworthy AI systems by increasing the transparency and explainability of their algorithms.
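One concrete way to check for these biases before deployment, sketched below with invented labels and predictions, is to break a model's error rates out per group rather than relying on a single aggregate accuracy number.

```python
import pandas as pd

# Hypothetical evaluation set: model predictions plus a group column, so the
# audit can report error rates per group instead of one aggregate number.
eval_df = pd.DataFrame({
    "group":     ["a", "a", "a", "a", "b", "b", "b", "b"],
    "label":     [1,   0,   0,   1,   1,   0,   1,   0],
    "predicted": [1,   1,   0,   1,   1,   0,   0,   0],
})

def rates(g: pd.DataFrame) -> pd.Series:
    """False positive and false negative rates for a single group."""
    fp = ((g["predicted"] == 1) & (g["label"] == 0)).sum() / max((g["label"] == 0).sum(), 1)
    fn = ((g["predicted"] == 0) & (g["label"] == 1)).sum() / max((g["label"] == 1).sum(), 1)
    return pd.Series({"false_positive_rate": fp, "false_negative_rate": fn})

print(eval_df.groupby("group").apply(rates))
```

In this toy data the two groups have the same overall accuracy, but one group absorbs the false positives and the other the false negatives, which is exactly the kind of disparity an aggregate metric hides.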

Disinformation: Shattering the Glass Box

Propaganda is not a new phenomenon; it is just more prevalent and impactful in today's hyperconnected world. Disinformation usually comprises half-truths and, more often than not, seeds of truth taken out of context, making it powerful and dangerous because it can confuse even the discerning mind.

In the face of disinformation, we must exercise discernment, verifying the sources of information before accepting them at face value.

Join the Fight for Ethical AI

In conclusion, the fight for ethical and responsible use of AI is one that we must all join. Making a difference begins with understanding the potential pitfalls and learning how to circumvent them.

At Thomson Reuters, we aim to play a crucial role in this global challenge and are always looking for like-minded individuals interested in making the world a better place through technology. If this resonates with you, consider joining our teams, where we tackle the great opportunities of our digital transformation program and delve into the realm of AI ethics.

Change begins with us - let's ensure AI technology is used to create a world that we'd be proud to have our names associated with.


Video Transcription

Welcome, everybody, and thank you so much for your time and your attention on this really important topic. As our world transitions to a state in which everything is running on AI capabilities, AI is moving from a space where it's pervasive to one where it is ubiquitous. A really important question rises up: what will this mean for society? What will it mean for all of us? As technologists, we play a pivotal role in this unique time in history, to use this power to shape a better world. And how are we going to do that? How are we going to shape the world that we want to have? We start by making sure that we understand what the power of this technology really is, and by using that knowledge to guide it into ethical use. It is a truism that technology is neither good nor evil in and of itself. It's all in how people use it. The same technology that enables us to protect our privacy and our security under repressive government regimes can also be used to conduct illegal transactions. The technology that is used by scientists to fight epidemics can also be used to create biological weapons. This dichotomy of the power of technology has been in existence throughout history.

Perhaps it goes all the way back to the very first technology, which is arguably fire. In ancient times, just as in modern times, fire can be used to cook our food or to cauterize a wound, or it can burn down the village. But there is something different about the technologies of antiquity, or even of recent history, and what we're experiencing right now. That difference is the unprecedented rate at which technology is changing: new applications of technology, whether good or evil, can come about far faster and far more easily than ever before, and governments and society can't actually keep up with this to figure out the ways in which we can protect ourselves. In the face of this flywheel of unstoppable momentum, we are wearing a very weighty cloak of responsibility. It is up to us to ensure that this powerful technology is used to create the world that we want to live in: a world in which we feel safe, a world we are comfortable with our children inheriting, and one that we will not feel ashamed to have our names associated with.

But then I go back to my earlier question: so what do we do about this? How do we make this possible? AI ethics is a big topic. It's a big challenge, and just thinking about the depth and breadth of the scope can be really daunting. But like the story of the little girl standing on the seashore and throwing back one starfish at a time while thousands lay upon the shore, every small difference that you make will help; every little bit of influence on any initiative, regardless of its size, will add up. I want to talk to you about three commonly known challenges where good and well-intended AI can go bad and create catastrophic effects. Those three areas are feedback loops, bias, and disinformation. Let's get stuck into that very first topic.

Feedback loops. In very simple terms, a feedback loop refers to a system that is designed to improve itself by learning from its past experience, and this happens when your algorithm is responsible for the next set of data that the user is going to see. Perhaps one of the most commonly known examples of a feedback loop is YouTube's recommendation engine. This engine is responsible for serving over 2 billion users who watch over a billion hours of video a day. The algorithm is designed to optimize exactly one metric, and that's the number of hours of video that is watched, and it does a fantastic job. Something like 70% of the content that is viewed on YouTube is recommended by this algorithm, and from a business performance perspective that is an outstanding result. But it has a dark side. This algorithm led to an out-of-control feedback loop, which led to the New York Times running the headline article "YouTube Unleashed a Conspiracy Theory Boom. Can It Be Contained?" New York Times author Kevin Roose tells the story of a YouTube star, Shane Dawson, whose documentary "Conspiracy Theories with Shane Dawson" hit a record 30 million views and then went on to gain an additional 20 million views, and in which he posits theories ranging from Apple devices recording every word spoken by their owners, to popular children's TV programs urging children to commit suicide, to the spate of wildfires in California being caused by homeowners looking to commit insurance fraud.

Mr. Dawson doesn't present any facts or any evidence to back up his stories, but his loyal teenage fan base eats it all up. Now, clearly, the algorithm is not responsible for Mr. Dawson's content. The algorithm is doing exactly what it was intended to do: it's predicting what people would like and it's recommending exactly that. But here's the rub. These algorithms have tremendous power to determine what people will even see and, more importantly and more scarily, what they won't see. It is very, very unlikely that Mr. Dawson's loyal teenage fan base would ever be shown content that would debunk his theories or present them with alternative perspectives. But please bear with me as I go through the next section of the story; it's about to get a little monotonous.

After all, the story is about a loop. You see, human beings, all of us, we like controversial content. What that means is that recommendation engines very often recommend conspiracy theories. And it turns out that people who like conspiracy theories also like watching online videos, which means that these people get drawn more and more to YouTube, and the more conspiracy theorists watching YouTube, the more conspiracy theories and other extremist content get recommended by YouTube.

The more extremists watching YouTube, the more extreme videos get recommended by YouTube. And here comes the very scary part: the more people watching YouTube develop extremist views, the more extreme videos get recommended by YouTube. There you see it, ladies: the toxic, out-of-control spiral in the age of AI's powerful feedback engines, which do exactly what they're intended to do. They match and reinforce whatever is already there; they don't judge the content as good or bad. Before this age, a would-be conspiracy theorist would have to go out into the world and curate their own content, and in doing so they would come across a wide variety of views, and maybe, just maybe, they could have been diverted off their intended path. But in the face of 50 million views affording these videos crowdsourced credibility, even the mildly curious can turn into an extremist. But this story is not all bad news; it's not all doom and gloom. There are shining examples out there that we can point to and learn from, like Evan Estola, the lead machine learning engineer at Meetup, who discovered that men were showing more interest in attending tech meetups than women. He realized that taking gender into account in Meetup's recommendation algorithm would result in the algorithm recommending fewer tech meetups to women, which would mean that women were seeing, responding to, and attending fewer tech meetups, which would reinforce the cycle.

So Evan and his team made the ethical decision to remove gender from that part of their model, and in so doing they broke the cycle of the recommendation engine and prevented the systemic imbalance from reinforcing itself. We can take a leaf out of Evan's book. We can learn from his example, and we can be very cautious about the design decisions we make and how we choose which features we will include and which we will leave out. And we can very carefully monitor our algorithms to ensure that there are no unintended side effects, edge cases that we did not predict, or unintended consequences from our work that we did not foresee and would not want our names associated with. Yes, it's possible that if we do this, we might affect business performance. But just as companies will choose to forego profits so that they can use only sustainable products or only work with an approved vendor list, so too we need to be the voice for the ethical use of algorithms. That brings me to my second topic, and that's bias. This particular topic has been brought sharply into public consciousness by the shocking outcomes from predictive policing and facial recognition algorithms.

And even though this is a very broad topic with many, many layers of social and technological complexity, when it comes to the world of AI the blame is often placed squarely upon the shoulders of biased data. And yes, biased data is a problem. But researchers at MIT discovered that there are at least six places in the machine learning process where bias can be introduced. I'm going to very briefly touch on two of those areas: historical bias and representation bias. Now, according to the researchers, historical bias is a fundamental structural issue affecting the first step of the data generation process, and it can exist regardless of how perfect your sampling or your feature selection is. If the data is about human beings, then the data has bias.

Bias in the data is pervasive because bias in us, in human beings, is pervasive, and it doesn't matter what the data is: medical data, housing data, political data, it all has bias in it. An example of this was recently brought to the public's attention by ProPublica when they exposed the flaws in a predictive policing algorithm, Northpointe's COMPAS algorithm, which, even though it has a 61% accuracy rate in predicting recidivism, has fatal flaws.

Black defendants are twice as likely to be rated as high risk of committing another offense but then not go on to commit another crime, and it makes the opposite mistake with white defendants, who are twice as likely to be rated as low risk of re-offense but then do go on to commit another crime. Did you wonder what happened with all of the press around the facial recognition algorithms, where there were 100 times more errors for dark-skinned people than for light-skinned people, and then a year later the error rates for dark-skinned and light-skinned people were almost the same? That's an incredible change in just a year. At the time when this news about the performance of facial recognition algorithms was in the press, a whole lot of people were responding by saying that the reason these algorithms perform so poorly is that computers can't tell the difference in the features of dark skin.

It turns out that the real issue is a systemic imbalance in the data sets used to train these algorithms. In very simple language, the developers of both these algorithms failed to train the models with enough dark-skinned faces or to test them with dark-skinned people. Yes, these stories are incredibly upsetting and deeply painful and hurtful, but we can learn from them and we can do better. Doing better starts with us making sure that when we build algorithms, we train them and test them with diverse and curated data sets. But please bear in mind that this addresses representation bias; it doesn't do anything about measurement bias or historical bias. Here we need to go deeper. All of the challenges that we're experiencing around racial, gender, and other forms of bias in our data and our algorithms have led to a public outcry for more AI transparency: less black box and more glass box. This means that the onus is on us to create trustworthy AI and to increase the transparency of our algorithms, and we do this by going in the direction of explainable AI. Explainable AI doesn't mean a model that can explain itself. No, what it is is an explicit design decision that developers make, that we make: we've got to make the decision to build what is required for an explanation into the design of the algorithm. And we do this, first of all, by understanding what explanations are going to be needed.

We figure that out by talking to the people who require the explanations, like the users of our algorithm, or the people that the results of the algorithm will affect, or the regulators. And the second popular option here is to make sure that the learning process produces interim results, so that those results, together with the end result, can be used to explain how we arrived at this point. That brings me to my last topic, and that is disinformation. Disinformation, like bias, has a long, long history. It has been used throughout the ages to spread discontent and disharmony and to get people to give up on seeking the truth. In short, it is propaganda. Many of us will be tempted to think that disinformation is fake news and false information, but in reality disinformation often contains seeds of truth, or half-truths taken out of context, and this makes it far more powerful and far more diabolical, because it can confuse even the most discerning mind and cause people to give up on finding the real truth. The story that you see referenced in this photograph, of the disinformation campaign led by Bell Pottinger in South Africa, is a story that hits really, really close to home for me.

I lived through this experience. I went through the riots and the unrest, and the pain and worry and fear that my country was descending into civil war. You see, Bell Pottinger preyed on South Africa's painful history and stirred up racial tensions in the country to a level that we had not seen since apartheid, bringing the country almost to the brink of civil war. And even though the miscreants who did this have actually been brought to justice, the side effects of their work are still being felt in South Africa today, and we're not sure exactly where that will go. Here's the very sad news: radicalization is increasingly happening in the online world, in our online communities and social groups, in groups where we can't tell the difference between real people expressing real opinions and bots auto-generating content based on some nefarious agenda.

So what does that mean? Does it mean that we need to distrust everything? No, that isn't the answer either; that doesn't create the society we want either. We do need to trust, but we need to verify. We need to check the sources of our information very carefully and make an explicit choice about whether we believe those sources. We need to look at information broadly and deeply, and take the time, invest the time, to inform ourselves, because we are worth knowing the truth.

Ladies, that brings me to the end of my talk for today, and if I have stirred up in you any interest in taking up the fight for AI ethics, then I urge you: please come and join me at Thomson Reuters, where we intend to play a significant role in this global challenge. Our teams here at the Labs and in Thomson Reuters Engineering are looking for people like you, people who are interested in making the world a better place. So please come and join us at our information booth. Talk to our recruiters, hear about all the great opportunities in our digital transformation program, and of course what we're doing in the world of AI ethics. Thank you.