Wearing the hat of an ethical AI solution architect by Vidhi Chugh

Automatic Summary

An Insightful Dive into the World of Ethical AI Solutions

Hello, friends! I'm thrilled that you're here to explore a fascinating realm of technology: Ethical Artificial Intelligence, where I wear the hat of an Ethical AI Solution Architect. I have over a decade of experience in AI and Machine Learning innovation, with numerous presentations at international forums like Predictive Analytics World and the Open Data Science Conference, to name a couple.

The Scope of the Subject

In our discussion today, our primary focus will be on the intricate concept of ethics in AI, the challenges encountered in operationalizing ethics in AI-powered solutions, understanding bias beyond mere technical terminology, and, most importantly, the effects of unethical AI solutions on their users.

Why Does Ethics Matter in AI Systems?

AI systems have seen rapid adoption in industries like healthcare, the judiciary, and banking. Because this rise has had a significant, sometimes life-changing impact on end users, it has become crucial to ensure our solutions abide by an ethical framework.

Ethics, albeit abstract and theoretical, comes into play in terms of fairness and explainability. Though there are guidelines in the market promoting the principles of ethics, they often fail to explain how to embed these principles into practice.

Understanding the Concept of Push-Pull in AI Ethics

The concept of push-pull arises from broadcast information (the push) colliding with the queries and questions inherent in implementation (the pull). In the AI industry, guidelines assumed a reasonable understanding of ethics among practitioners, and the principles were broadcast accordingly.

However, the successful implementation of these principles requires a pull-based system. Practitioners come across a spectrum of problems while following these guidelines; by raising questions, they enable us, in a collaborative effort, to better understand how to put these principles into action.

Bias and Its Implications in AI Systems

Bias in AI isn't just restricted to data or algorithms. It can arise anywhere in an AI project lifecycle: data collection, data quality maintenance, data transformation – all these stages are potentially vulnerable to bias.

It’s vital to document assumptions and constraints as they emerge and mitigate any risks as much as possible.
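
As a minimal sketch of what such a check might look like in practice (the column names, groups, and the 10% threshold below are all hypothetical), a simple representation audit run at each lifecycle stage can surface under-represented groups before they turn into model bias:

```python
import pandas as pd

def representation_audit(df: pd.DataFrame, group_col: str, min_share: float = 0.10):
    """Flag groups whose share of the dataset falls below a chosen threshold."""
    shares = df[group_col].value_counts(normalize=True)
    return shares, shares[shares < min_share]

# Hypothetical loan-applications dataset with a demographic column.
df = pd.DataFrame({
    "age_group": ["18-30"] * 70 + ["31-50"] * 25 + ["51+"] * 5,
    "approved": [1, 0] * 50,
})
shares, flagged = representation_audit(df, "age_group")
print(shares)
print("Under-represented groups:", list(flagged.index))  # ['51+']
```

Logging the flagged groups alongside the assumption that prompted them (for example, "older applicants are rare in this data source") gives the team a concrete artifact to revisit later in the lifecycle.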

Learning from History – The Role of Case Studies in Ethical AI

The ideal way to ensure ethical AI is through extensive case studies. Lessons learned from previous mishaps enable us to avoid the same pitfalls. Using each case study as a tool to develop an ethical lens to view and design AI solutions can help in mitigating bias and unethical practices.

The Impact of Unethical AI Solutions

Unethical AI can undoubtedly have adverse effects. It can lead to unfair access to opportunities based on characteristics such as race, age or gender. This form of discrimination can impact the quality of service and can take a toll on users’ emotional, medical and financial health.

Action Items for an Effective Ethical AI Ecosystem

Maintaining ethics in AI isn't a single path but involves multiple steps. Incentivising and sensitising employees through training and workshops, establishing multidisciplinary cross-functional governance teams, and monitoring KPIs are some steps that ensure high ethical standards.

Asking uncomfortable questions from the outset, maintaining an iterative approach, and fostering a collaborative team effort are the best practices to ensure an ethical AI ecosystem.

Remember, ethics evolves continuously and requires persistent effort and collaboration. Let's tread this journey together, promoting responsible and ethical AI practices. For any questions or further discussion, feel free to reach out via LinkedIn.


Video Transcription

Hello, I'm glad to have you all join here for the talk titled "Wearing the hat of an Ethical AI Solution Architect." Before I proceed with the agenda for the talk today, I'll quickly introduce myself. I'm an AI/ML innovator carrying over a decade of experience. I've spoken at multiple international forums, like Predictive Analytics World and the Open Data Science Conference, to name a few. My core focus has been on data-centric science, and I'm passionate about promoting responsible and ethical use of AI. I'm an AI architect, and I've also conducted many workshops demonstrating how to embed ethics in AI. Apart from that, I'm an advocate for D&I, and I have been associated with many projects promoting women in technology. So let me quickly take you through the agenda for today's talk. The title describes it a little, but I'll focus on ethics in AI and give you a definition of what we mean by it exactly, and the gaps and challenges we observe in operationalizing ethics in AI-powered solutions. There is a very intriguing concept of push and pull, which will come towards the middle of the talk, so stay tuned for that. Most of the time when we talk about ethics, bias is the first thing that comes to our mind.

So I'll explain what bias is, beyond the technical terminology, and what its implications are. And finally, what is the impact of an unethical AI solution on its users, and what are the action items that an ethical AI architect can take? So, what is ethics? Over the last few years, we have seen rapid adoption of AI systems in many industries, specifically in high-impact applications, for example the healthcare, judiciary, and banking domains; we can call it a rise of AI systems. With high-impact applications, the meaning is that they have a high, even life-changing, impact on the end user. And when that happens, we need to make sure there is an ethical framework that the solutions abide by. But what do we mean by ethics? The whole definition of ethics is very abstract and theoretical, and it is difficult to pin down. No matter how carefully we curate those ethical rules, there is always going to be someone on the other side. So who decides what is ethical, and who needs to comply with the code? For example, in an ML solution?

When there are predictions, there is a human expert who can always validate them. In the case of ethics, do we have an ethics expert who can validate that this particular prediction or output of the model is correct? So the definition of ethics is a little loosely defined, but the definition of AI is certainly not. So what does AI mean? AI is the capability of a machine to mimic human behavior so well that it does not need explicit guidance on how a human behaves; we don't need to explicitly code it out, right? And if it is tied to human behavior, we need to understand how we as humans function. So what are the challenges in the journey when we think of operationalizing ethics, and what does operationalization specifically mean in terms of ethics? It means that it's not tied to a particular project, and it is not specifically a data science or technology term. It means that within an organization there is a well-defined framework with which we can scale up our ethical solutions. We understand there is a definition of fairness. Do we have explainability modules that explain the model outcomes?

Is there a way that we can explain why a model is giving certain predictions? And when is the right time, when we cannot have enough confidence in a model's prediction, to go back and debug it? These are things that span the whole organization and come when ethics is adopted as a culture in the organization. Now, we have had these guidelines in the market for the last few years. For example, by around 2019 we had approximately 80 such guidelines and tools, but still a lot of unethical AI-powered solutions have hit the headlines. So why is that? What is the reason behind it? It's primarily because all those guidelines typically talk about ethics, or the principles of ethics, at a fairly high level. They definitely tell you what ethics is and what those principles are, but they don't focus on the "how" part of it, and the "how" part is what AI practitioners and the data science community need the most.
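
As one small example of turning a principle into a "how": the question above, of when we cannot have enough confidence in a model's prediction, can be operationalized by routing low-confidence outputs to a human reviewer instead of acting on them automatically. A minimal sketch, assuming a softmax-style probability output and an arbitrary 0.8 threshold:

```python
import numpy as np

def predict_or_escalate(probabilities: np.ndarray, threshold: float = 0.8):
    """Return class decisions, escalating low-confidence cases to human review."""
    decisions = []
    for p in probabilities:
        if p.max() >= threshold:
            decisions.append(int(p.argmax()))  # confident: act on the model output
        else:
            decisions.append(None)             # None means: send to a human expert
    return decisions

# Hypothetical predicted class probabilities for three loan applicants.
probs = np.array([[0.95, 0.05], [0.55, 0.45], [0.20, 0.80]])
print(predict_or_escalate(probs))  # [0, None, 1]
```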

How can we embed those principles into the implementation when we are developing our ML solution? I'll give you an example. There is a mathematical formulation: if there is a function that always takes two values, A and B, as input, and the function is to sum them up, the output will always be A plus B, right? But in ethics it doesn't happen like this; a principle might fulfill one context or scenario very well, but not fit another context at all. So it is largely context-driven and has some subjectivity entailed in it, due to which it becomes difficult to measure. Also, there are a lot of prior beliefs and rationales that are not well defined, which leads to difficulty in adopting ethics in AI solutions. At this point, we also need to understand that algorithms are not good or bad in themselves. We cannot say an algorithm was unethical or that this particular algorithm is ethical; these are just technology. It is the design choices we make while developing the solution that may be considered ethical or unethical. Now, to push and pull.
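
To make the contrast with A plus B concrete: the same set of predictions can pass one fairness definition and fail another, so the "right" check depends on context. A small illustration with made-up groups, labels, and decisions:

```python
import numpy as np

def add(a, b):
    return a + b  # deterministic: the same inputs always give the same output

def approval_rate(approved, mask):
    return approved[mask].mean()

def true_positive_rate(approved, qualified, mask):
    return approved[mask & (qualified == 1)].mean()

print(add(2, 3))  # always 5, in every context

# Hypothetical groups: 10 applicants each; 1 = qualified / approved.
group     = np.array(["A"] * 10 + ["B"] * 10)
qualified = np.array([1]*5 + [0]*5 + [1]*8 + [0]*2)
approved  = np.array([1]*4 + [0]*6 + [1]*4 + [0]*6)

a, b = group == "A", group == "B"
# Demographic parity: equal approval rates, so this check passes.
print(approval_rate(approved, a), approval_rate(approved, b))   # 0.4 0.4
# Equal opportunity: unequal rates among the qualified, so this check fails.
print(true_positive_rate(approved, qualified, a))               # 0.8
print(true_positive_rate(approved, qualified, b))               # 0.5
```

Demographic parity looks only at approval rates, while equal opportunity looks at approval rates among the qualified; which definition matters is a context-dependent design choice, not a property of the algorithm.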

So this is one paper that I read last year, which talks about operationalizing AI ethics. I highly recommend everyone give it a read. It mainly takes the whole theoretical approach to understanding ethics towards a more question-based scenario. A summary of this paper is what I've listed here, and why it is labeled push and pull. Why am I calling it push or pull? The reason is that when we initially laid down the guidelines, we assumed that AI practitioners in the industry already had a reasonable understanding of what those principles mean, and based on that understanding, certain guidelines were laid out. When we broadcast certain information like this, we are pushing information into the system; whether somebody is acting upon it or is full of questions, we don't really get to know. What we are promoting here instead is a pull-based model. So what does that do? The practitioners follow those guidelines, try to put them into practice, and whatever problems they face, they raise those queries and questions; that is a pull-based system. Together, we as a community come together and understand how those principles can be better put into practice.

What are those extra steps, or what are those barriers, that are not letting us effectively operationalize ethics in practice? One of the aims listed in this paper is first to surface the current understanding of the practitioners.

What do they understand by ethical principles? How are they translating these principles, these theoretical concepts, when they have to put them into practice? How are they able to apply them? And what is the basic motivation: why should anyone be bothered about ethical principles and building them into design practices? The results of the survey behind this paper address these questions; again, you are free to read it. One common answer is that it enables trust in the customers. Trust is a big word, and it is definitely a strong motivation to embed and think about ethics. But what could the possible barriers be? If trust is a big enabler for us to act on these principles, what are the barriers that stop us from putting the extra effort into translating those principles into practice? Time and resources.

A lot of organizations think that the time spent on this translation, on putting these principles into practice, erodes their competitive edge, because they become a little slower at innovation: the same effort goes into translating those principles and putting them into the design.

So how can we assist the practitioners and the whole data science community so that they are better able to understand and put these ethics into practice? The answer is case studies, because we already have historical examples. It should be enough for us to know where previous solutions and the architects of those solutions fell through, so that we do not make the same mistakes. But as we have seen empirically, those mistakes keep happening, and we have a lot of examples of us still repeating them. So the best way is to learn by practice, and that can be done through a case study. I'll give you an example of what a case study should look like. At the end of this talk, you should take it up and try to solve a case study for yourself, to see whether you are able to develop that ethical lens on your own or not. Now, bias and its implications. Whenever we talk about ethics, I think the first word that comes up is bias: the outcome was biased towards a particular group of people. But bias is not just restricted to data or an algorithm.

It could be in the way we infer the output; bias could be present at any step of an ML project lifecycle. It could be in data quality issues. It could be when we are collecting the data: if a particular age group, country, or class of people is not shown to the model during training time, that is a bias as well. Or if we are doing some data transformation and we see that the target label is not available, that it is missing in some of the records.

Sometimes it happens that we loosely eliminate all those records, assuming we have the luxury of dealing with large-scale data, and that leads to misrepresentation of a certain group; it could be a particular gender, race, or color. And if you are dealing with imbalanced data, are you taking enough measures to augment the data? Do you have enough examples for the model to learn the minority classes? Have you adjusted the class weights? These are all measures we need to take to make sure no bias is introduced during our implementation. But we understand that building an ML model is not an easy game and there are a lot of variables at play, so everything cannot be made available at a particular time; you have to keep going, and nobody is stopping you from that. What we expect is to have a standard process with which we can identify the assumptions or constraints under which we are building the model, document them as and when they arise, and try to mitigate those risks as much as possible, whether that happens today or later in the project lifecycle.
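
As a hypothetical sketch of the point about silently dropping records: comparing group shares before and after removing rows with missing labels makes the misrepresentation visible, and balanced class weights are one common way to compensate for label imbalance (the data below is made up):

```python
import numpy as np
import pandas as pd
from sklearn.utils.class_weight import compute_class_weight

# Hypothetical data: labels are missing far more often for one group.
df = pd.DataFrame({
    "group": ["X"] * 50 + ["Y"] * 50,
    "label": [1, 0] * 25 + [1] + [None] * 49,
})

before = df["group"].value_counts(normalize=True)
after = df.dropna(subset=["label"])["group"].value_counts(normalize=True)
print(before)  # X: 0.50, Y: 0.50
print(after)   # X: ~0.98, Y: ~0.02 -> group Y nearly vanishes after dropping

# For imbalanced labels, class weights make the model pay more attention
# to the minority class instead of ignoring it.
labels = df["label"].dropna().astype(int)
weights = compute_class_weight("balanced", classes=np.array([0, 1]), y=labels)
print(dict(zip([0, 1], weights)))
```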

If there is a workaround, and it is well documented, it becomes a collaborative effort: anyone, either you or your team, can always go back, refer to those records, and see whether they still hold now. Lastly, we talked about case studies. A case study is something where you pick a particular solution and think it through: when you are proposing a particular AI-powered solution, what is its objective? Who is going to be the end user, and what is going to be the impact on those users? What is the degree of impact: is it going to affect their lives, or the choices they will make based on the model output? Is there a particular way you are able to identify that? Are those risks all mitigated and covered? Is there any corner case that you need to document, so you can analyze those constraints if they are not well covered? Have you been able to communicate and act upon them in a timely manner? When you do this exercise project by project, one solution after another, you become able to deal with these issues. It is about developing that ethical lens with which you are able to encode your solutions. So what is the impact if something goes wrong? If we talk about fairness, it can definitely deprive someone of quality of service.
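
One lightweight way to make this per-project exercise repeatable (the fields below are just one possible shape, inferred from the questions above) is to capture each review as structured data that you or your team can go back and refer to:

```python
from dataclasses import dataclass, field

@dataclass
class EthicsReview:
    """A per-solution record of the 'ethical lens' questions from the talk."""
    solution: str
    objective: str
    end_users: list[str]
    impact: str                      # degree and reversibility of harm
    assumptions: list[str] = field(default_factory=list)
    corner_cases: list[str] = field(default_factory=list)
    risks_mitigated: bool = False

review = EthicsReview(
    solution="Loan-approval model",  # hypothetical example
    objective="Rank applications for manual review",
    end_users=["loan applicants", "credit officers"],
    impact="High: affects access to credit",
    assumptions=["Training data covers all age groups"],
    corner_cases=["Applicants with no credit history"],
)
print(review)
```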

A biased solution can lead to somebody not having appropriate access to goods and services like education, loans, or housing, which can take a toll on their medical, emotional, and financial health. To put it in these terms, it can lead to inappropriate, unequal access to opportunities for humans on the grounds of characteristics such as race, age, or gender, and this kind of discrimination is highly discouraged. So what are the action items on our plate?

There is no one single solution. The healthcare industry has set a gold standard for us: it has been around for decades, and it holds ethics to the highest standards. There, the developers on the ground have the skills with which they can identify and mitigate ethical risks, and accountability is the key: if they feel empowered enough and accountable for the solutions they are putting into the market, they feel the need to be able to explain the impact.

They should know why a black-box model outputs certain predictions. For this, we need to sensitize and incentivize the employees by giving them trainings, workshops, and toolkits, making them aware of all the guidelines that exist in the market and of the gold standards.

Cross-functional governance teams also play a critical role in this. A bias or an ethical issue can escape one pair of eyes, but if you have a multidisciplinary team consisting of legal, compliance, product, engineering, data science, and leadership, the chances are lower that it will pass through. So identify the right stakeholders: if you have found an issue, whom do you raise it to? There needs to be an escalation matrix, a decision about which KPIs you are going to monitor, and a quality assurance plan. Ethics is always evolving, so you can always refer to an AI ethicist who can help you identify the ethical problems. We need to always scrutinize what can possibly go wrong and take the worst-case scenario, before even putting the product out. During the development stage, we need to have a fair estimate of how we expect this solution to behave in the future. Is it our expectation, or are we merely preferring it to behave in a certain way? We need to know that there is a clear difference between the two. One cannot just sit and hope for the best, that either the issue will not come up, or, if it does, that it will get solved on its own. It doesn't happen.
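
As a minimal sketch of what monitoring such a KPI could look like (the demographic-parity gap, the 0.10 bound, and the alert path are all assumptions for your governance team to decide), a recurring job could compute a disparity metric on recent production decisions and escalate when it drifts past the agreed bound:

```python
import pandas as pd

FAIRNESS_KPI_BOUND = 0.10  # assumed threshold agreed with the governance team

def demographic_parity_gap(decisions: pd.DataFrame) -> float:
    """Largest gap in approval rate between any two groups."""
    rates = decisions.groupby("group")["approved"].mean()
    return float(rates.max() - rates.min())

def monitor(decisions: pd.DataFrame) -> None:
    gap = demographic_parity_gap(decisions)
    if gap > FAIRNESS_KPI_BOUND:
        # In practice this would feed the escalation matrix, not stdout.
        print(f"ALERT: approval-rate gap {gap:.2f} exceeds {FAIRNESS_KPI_BOUND}")
    else:
        print(f"OK: approval-rate gap {gap:.2f} within bound")

# Hypothetical batch of recent production decisions.
batch = pd.DataFrame({
    "group":    ["A"] * 40 + ["B"] * 40,
    "approved": [1] * 24 + [0] * 16 + [1] * 12 + [0] * 28,
})
monitor(batch)  # gap = 0.60 - 0.30 = 0.30 -> ALERT
```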

So it needs a lifecycle approach: asking uncomfortable questions right at the beginning, so that a lot of effort has not already gone into the solution and there is enough time to act on those issues and resolve them in a timely manner. It's not something you can do once; like an ML project, it is ever-evolving and requires a collaborative team effort. With this, I've come towards the end of the talk. If there are any questions, you can reach out to me via LinkedIn, and I'm happy to take questions offline as well. Thank you.