Building Robust AI Models: Demo on Generative AI and Adversarial Feedback
Usha Jagannathan
AI Innovation Leader
Video Transcription
Go ahead. Thank you all for joining. Today's topic is very interesting because we are living in a revolutionary AI era. For whatever models we build for our applications, we want to make sure they are built to be robust — robust not just in the sense of sturdy, but inclusive of fairness, accountability, transparency, explainability, and all the ethical considerations we can include, so that we minimize bias as much as possible.
That is what we are going to look at, and I'm going to do a walkthrough. If you have any questions, I'm happy to address them at the very end of the session. Okay, let's get started. We have all been hearing about generative AI, so we know what the goal of AI is: in general, AI is about mimicking human intelligence so that machines can perform tasks — automating mundane, repetitive tasks such as recognizing speech, identifying objects in images (which we first saw with ImageNet, the work Professor Fei-Fei Li started), and making predictions from data today.
But then we suddenly had a huge disruption with ChatGPT, one of the Gen AI tools we use today. Generative AI takes the inputs we give it and then generates its own new content.
That is the beauty of it, and we are all floored looking at it. So, using generative AI, how are we going to create the adversarial feedback and adversarial training we want to provide? That is what this topic is all about. How does machine learning learn from data? We know there are three main types: supervised, unsupervised, and reinforcement learning. Supervised learning uses labeled data. Unsupervised learning has no labels; the model finds groups or clusters in the unlabeled data and then predicts what each one is.
Then there is reinforcement learning, where the model keeps repeating an action and learns by doing. So today we have supervised, unsupervised, and reinforcement learning, and we know how they work. But there is also the generator, which can make plausible fakes — the deepfakes that can happen — because it takes content and produces an answer based on your prompt. We need to make sure the response we get is actually correct before we take it forward and apply it, whether we are using it in an application, writing an email, or treating the facts ChatGPT provided as true.
Without proof that the facts are right, you cannot just circulate that email. If you are adding statistical facts, remember that the data GPT was trained on only goes up to November 2021. You want the data you get from it to be authentic and reliable, and it is not always, because it is generated from the tons and tons of data out on the internet that the model was fed and trained on. So we want to see how we can provide adversarial training — the adversarial feedback part will come later — but the key point is to cross-check before you share anything.
That is very key when we talk about generative AI. So how does AI get biased? Because what we provide to the machines is what they take and spit back out as output. Where is this happening, and how does the bias get passed along? What you see here is from the World Economic Forum: bias shows up in practice from the data you feed in — preprocessing, cleaning, all of it. From the moment we ingest data, through the entire development life cycle, there is some amount of bias at every stage.
And why does that happen? You see the application injustices they talk about: discriminatory data, whether it is a sample group drawn out of a larger population, and the different types of bias shown here. You see how it plays out in decision making — in healthcare, in supply chains, and in many other industries, from the source to the destination, passing through many hands. What type of bias happens at each stage of the ML development life cycle is the key thing we need to look into. AI is often biased — and who is causing that bias? Let's take an example.
We see an operating theater. A man and his son were in an accident and rushed to critical care, and the doctor does not want to operate on the boy, saying, "He's my son." How could this be? You have probably seen this example a thousand times elsewhere, but I want to bring it to your attention again. The man and the boy arrive at critical care, but the doctor looks at the boy and exclaims, "I can't operate on this boy. He is my son." So who is the doctor? Is it the man? How could this be? What did we assume before we saw the slide?
If we have not seen this particular example before, many of us assume the doctor is the man who came in — the father operating on his own son — or some other doctor who is male. We assume the doctor could not be another gender. That is what the majority of test subjects do: they do not consider that the doctor is a she — the boy's mother — and that holds for men, women, and other genders alike. These are the assumptions that creep in while we are ingesting data. Okay, let's keep moving, because I also want to walk through the code.
Bias in the data: human data perpetuates human bias. It is all in the data; that is what causes the bias. There are said to be some 1,500 types of bias out there — we do not need to know all of them — but ML learns from that human data. As Dr. Andrew Ng put it, data is the new oil; without data, AI is nothing. ML learns from the data we feed in, and that transfer is what we call bias laundering, or the bias network effect. You might have seen the NIST standards, where they describe the different biases, and when we talk about different biases, we talk about examples of different model failures.
We know about Microsoft Tay, which in 2016 had to be taken down within 16 hours because it started spewing racial slurs and inappropriate words. Then Google Photos misclassified people of a particular ethnicity and labeled them in an offensive way. With Amazon, we saw a flaw in the hiring algorithm, and with Zillow, in the online home-buying program. All of these are examples of how and why AI failed to live up to its potential. We even saw it during the pandemic: in the UK, when the COVID app was released, there was a flaw in it.
Would you consider that the flaw happens not just because of what we feed in or how we develop the AI, but because there is bias in it? We get the picture. The unintended consequences we see today are not just on the individual side; they show up when organizations implement these systems, and at the societal level. And it starts with us, because each one of us — whether you are a data scientist, a developer, an ML engineer, or the business stakeholder — is behind it. We are all in this together, whoever is building that AI/ML application.
When you build that model, you want diversity in the group of stakeholders and diversity in the programming and data science teams — not all males, not all females, not just two genders, but different genders and different ethnicities, because different perspectives provide the outlook that can avoid or minimize bias. It is like hunger: you cannot say, "I have eradicated hunger." We can only minimize bias to the maximum extent, because we all carry unconscious bias. Some amount of bias will always be there, but minimizing it as far as possible is how we confront the risks of AI.
Where can bias get introduced? We talked about the ML development life cycle. Rooting out bias starts from planning itself, then collecting, ingesting, preparing, and cleansing the data. Then, when we train the model — even in testing and evaluation, even post-deployment — bias can be introduced by us. And when we derive actionable insights — when we say, okay, this is the decision the model made — the insights we deliver can carry bias too. Say I am the key person facing the client or the stakeholder: the insights I derive from the model and share with stakeholders could have some bias that I add myself.
Or consider what the model provides post-deployment: how it predicts an outcome and what exactly we convey about it. What we convey might not match what the model is actually saying, and what the model is saying might not be explainable because it is a black box. We want to make sure the message is interpreted correctly. Bias can happen everywhere, so how do we avoid it? That is why today you see roles like AI ethics officer or AI governance officer — someone like a scrum master, project manager, or agile coach — who is a main stakeholder right from the beginning of planning, through post-deployment and maintenance of the application. There is always somebody staying vigilant, so we know that proper governance practices are being followed at all times.
How do we control this? It starts with conceptualization and data management — what we covered earlier: where bias happens, how we can control it, how we can provide accurate data, and how we protect it, because data privacy and trust are crucial for any business. And how do we address the bias? We are heading into the demo now, so keep in mind that data absolutely matters; without data, nothing happens in AI. We have to understand the data: is it skewed, are there spurious correlations, is it only a small sample of the population? If so, we cannot trust how the model is being trained. Those are all things we need to take into account.
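As a quick, hedged illustration of what "understanding the data" can look like in practice, here is a minimal sketch. The file name hiring_data.csv and the gender and hired columns are assumptions made up for illustration; the point is simply to surface skew, small samples, and imbalanced groups before any training run.

```python
# Minimal sketch of a pre-training data check (hypothetical file and column names).
import pandas as pd

df = pd.read_csv("hiring_data.csv")  # hypothetical dataset
print("Total rows:", len(df))

# 1. Group representation: is any demographic group a tiny slice of the sample?
print(df["gender"].value_counts(normalize=True))

# 2. Label balance: a heavily skewed target can hide poor performance on the minority class.
print(df["hired"].value_counts(normalize=True))

# 3. Outcome rate per group: large gaps here are worth investigating before modeling.
print(df.groupby("gender")["hired"].mean())
```

None of this proves or disproves bias on its own, but it tells you where to look before the model ever sees the data.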
Just as we follow best coding practices when designing a web application — that is just one example, and it is what reviewers and leaders looking at your code expect — any AI/ML application you develop should follow inclusive design practices as well as good coding practices. You need inclusive design for the UI, and you need it for the model you are designing. That is very key, because it has to help make fairer decisions. As I mentioned, we can only minimize bias; "eradicate" may not be the right word, and we cannot claim to eliminate it.
Here you see six stages — six potential ways we can act. The slide speaks for itself, so I will not go into too much detail so that we have time for the demo. It describes potential actions for AI practitioners and business leaders to consider at each stage. This next part is from one of my papers, where I implemented an ethical framework with four key components. There are many other things to consider — security is very important too — but security approvals and protocols will already exist in every business you work with; in any organization, security is the top priority.
Within that, what are the four key areas we need to develop in order to create responsible AI products? I use the FEAT approach: fairness, ethics, accountability, and transparency. That way it becomes a glass-box model and no longer a black box. When we say black box versus white box, we are only talking about explainability, but there is more that needs to be included, and that is where this framework comes into the picture. I have expanded it here and laid out how the framework should work. Moving forward: we have now seen the introduction, the key terms, and why and how we need to build robust AI models.
We understood what bias in AI is and how to minimize it. Now we will talk about techniques for building robust AI models. These are the four techniques I have shared here: data cleaning and preprocessing, adversarial feedback, explainable AI, and continuous monitoring and improvement. I talked about explainable AI in a previous speaker session last year with Women in Tech, on AI for social good. Here, for adversarial feedback, we are going to see how we can improve the model's robustness. Robustness here means that when you test a good scenario, a bad scenario, or a worst-case scenario, the model should still respond the way you expect it to.
For example, the techniques we can use are adversarial training or model distillation. We can apply those, and we can also identify and remove biased stop words — certain flagged words — from the training data. Those are the key areas to start with. Then decide whether you want to build an interpretable model or an explainable one: do you want to provide the explainability up front, or add a post hoc explanation technique like LIME or SHAP after deployment? I am talking about post-model-deployment here. And then think about how we can continuously do model monitoring and improvement.
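To make the post hoc explanation point concrete, here is a minimal, hedged sketch using LIME on a toy text classifier (SHAP works in a similar spirit). The tiny dataset, the labels, and the class names are all made up purely for illustration, not taken from the demo.

```python
# Minimal sketch of post hoc explainability with LIME on a toy text classifier.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from lime.lime_text import LimeTextExplainer

# Toy job-description snippets with made-up labels (1 = potentially exclusionary wording).
texts = [
    "degree from a top school required",
    "strong communication and teamwork skills",
    "native English speaker only",
    "experience with Python and data analysis",
]
labels = [1, 0, 1, 0]

pipeline = make_pipeline(TfidfVectorizer(), LogisticRegression())
pipeline.fit(texts, labels)

explainer = LimeTextExplainer(class_names=["ok", "flagged"])
explanation = explainer.explain_instance(
    "top school degree and native English speaker preferred",
    pipeline.predict_proba,
    num_features=5,
)
print(explanation.as_list())  # which words pushed the prediction, and by how much
```

The value of a sketch like this is that it lets the team see which words are driving a decision after deployment, instead of trusting a black box.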
These are the techniques I would suggest for building robust AI models. It is not a one-shot fix — you cannot just say, "I minimized the bias in the training data by filtering certain biased stop words." There are many other areas; you also add explainability, and so on. Doing all of this is critical, which is why there are ethics teams in almost every organization that uses AI at scale, and that ethics committee looks across all four of these areas rather than just one. So let's move forward: what exactly is this adversarial feedback I mention in my title?
The title says building robust AI models with generative AI and adversarial feedback — so what exactly is adversarial feedback? As I mentioned, it is about how we improve robustness. But the other thing to know is that it also means adding variation to the training data so the model becomes more resistant to adversarial attacks. You may have heard of prompt injection attacks. If you have worked with databases — SQL Server, for instance — you know SQL injection attacks, and you know how we harden database code so hackers cannot break in.
In the same way, we want to make the model more resistant — adversarial-attack resistant — and that is what we are going to see. One of the best ways to go about it is adversarial training, and I have taken a very simple example to explain it. I will pause here for a minute, because this should be engaging. When you see this image, before I start my demo: "All cats have four legs. I have four legs. Therefore, I am a cat." What kind of bias is this? I can't see the chat right now — if I stop sharing, I won't be able to see it — but I'm hoping someone has mentioned it: it is a generalization. We are overgeneralizing.
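Before the walkthrough, here is a minimal, hedged sketch of the adversarial-training idea for text: generate perturbed variants of each training sentence and add them back into the training set, so the model sees "attacked" inputs during training. The perturbation rules below (a character swap and a dropped word) are deliberately crude stand-ins for a real attack, and the two training examples are invented.

```python
# Minimal sketch of adversarial training via data augmentation for text.
import random

def perturb(sentence: str, seed: int = 0) -> str:
    """Return a lightly corrupted copy of the sentence (drop one word, swap two
    adjacent characters), simulating a simple adversarial perturbation."""
    rng = random.Random(seed)
    words = sentence.split()
    if len(words) > 3:
        words.pop(rng.randrange(len(words)))      # drop a random word
    i = rng.randrange(len(words))
    w = list(words[i])
    if len(w) > 3:
        j = rng.randrange(len(w) - 1)
        w[j], w[j + 1] = w[j + 1], w[j]           # swap adjacent characters
    words[i] = "".join(w)
    return " ".join(words)

def augment(dataset):
    """Pair each (text, label) example with a perturbed copy that keeps the label."""
    augmented = []
    for text, label in dataset:
        augmented.append((text, label))
        augmented.append((perturb(text), label))
    return augmented

train = [("the service was excellent", "positive"),
         ("the product arrived broken", "negative")]
for text, label in augment(train):
    print(label, "->", text)
# The augmented set would then be fed to whatever training loop you already use.
```

A production setup would use stronger, attack-informed perturbations, but the principle is the same: the model is trained on the kinds of inputs an adversary might produce.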
So I am going to take a few lines of data, write something, and see how we can apply this adversarial feedback using Gen AI. Let's go to the screen — I hope everyone can see it. What I have done here is take a CSV file with a few lines in it, and you might notice that each and every line contains some kind of bias. "You are expected to attend the weekly brown bag session" — we no longer call it a brown bag session; you can call it a lunch-and-learn series.
Or you see phrases like "confined to your wheelchair," which is not the right way to put it. "All work and no play makes Jack a dull boy" mentions only one particular gender. "Degree from a top school with strong academic performance" — nowadays people without those credentials, even with no college degree, can apply. And it is not just the top tech giants: startups and many other organizations welcome and train people coming from community colleges, from any educational background, or changing careers through a boot camp, and bring them in as raw talent.
As long as someone has the passion and interest to do software development, or any area of technology, anyone can do it — and it is not just technology; in other disciplines too, it is not only people from a top school who can perform. These are some of the biased lines I have included: bias and lack of diversity with regard to the skin color of people in fashion magazines; "you make managing multiple tasks with tight timelines look like a cakewalk"; "excellent verbal and written English, native speaker"; "he or she will be a skilled marketing strategist"; "genuine care and interest in elderly and handicapped people." We no longer even want to use the word "disabled."
Today we say "differently abled," because their way of handling things is simply different, and how they approach certain tasks — mundane or routine, mental or physical — can genuinely feel like a gift.
How we phrase things matters, and those are the words I have used here. In the code, I load the data from the CSV file, I load the model — GPT-2, not GPT-3 — and then I define a list of words that indicate bias. I have taken the words I flagged: anywhere it says "retarded," anywhere it says "confined" or "wheelchair," and so on. Then I split the sentences, and I can generate text using the tokenizer. What you see here is the possible-bias detection.
If it finds a bias, it prints "possible bias detected"; if not, "possible bias not detected." That is how I strip down each sentence and check which sentences are biased and which are not. I loop through the dataset and check for bias, printing one message or the other for each line. It runs because I installed the transformers library with pip install. Then you can see a bunch of output showing where bias was detected and where it was not. You see "bias detected" here because the line contains "elderly and handicapped people," which I had already included as a flagged stop word.
So it is able to say, okay, this is detected as bias. But what this program also does is take the dataset and run it against GPT-2, and that is what you see here. In one example about a federal law from April 2016, it takes content generated by GPT-2, runs the check against it, and reports a possible bias — for instance, it flags a word like "manpower": why couldn't that be phrased differently? Having learned from the bias words we gave it, it is able to flag that as a biased word.
Likewise, you can see the rest of the output and everywhere else it detects bias. It is a very simple example, and you can take it and expand it with whatever dataset you want to feed in; for this session I just wanted to show how bias can be detected. How does this relate to adversarial feedback? We are trying to make the training data as robust as we possibly can by detecting all these different biases, because whatever data you feed in is what any conversational AI will learn from and respond with. Generative AI learns from the content you provide.
From that, it learns, and the program cross-checks against GPT-2 and the content it generates. Based on the flagged words, if a generated word looks biased, it points it out as "possible bias detected"; if not, it is not flagged. All we need is to know where the problems are, so we can go and tweak the data before we retrain. And remember, every time you retrain, you use quite a lot of compute resources.
Alongside the inclusive design practices I mentioned, we should also be cognizant of sustainable design practices: stop your cloud instances whenever you can while your program is not running, and don't retrain if nothing new has been added to the dataset — you can keep working with the same trained model.
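A minimal, hedged sketch of that last point — skipping retraining when the dataset has not changed — is to keep a content hash of the data file and compare it on each run. The file names and the retrain() call below are hypothetical placeholders.

```python
# Minimal sketch: retrain only when the dataset file has actually changed.
import hashlib
from pathlib import Path

DATA = Path("biased_sentences.csv")   # hypothetical dataset file
HASH_FILE = Path("dataset.sha256")    # where the last-seen hash is stored

def file_hash(path: Path) -> str:
    """Content hash of the dataset, so renames or timestamps don't trigger retraining."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

current = file_hash(DATA)
previous = HASH_FILE.read_text().strip() if HASH_FILE.exists() else None

if current != previous:
    print("Dataset changed; retraining...")
    # retrain()  # placeholder for your actual training routine
    HASH_FILE.write_text(current)
else:
    print("Dataset unchanged; reusing the existing model and saving the compute.")
```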
These are the kinds of best practices you can follow to avoid waste and stay cautious. That is what I wanted to emphasize here, so let me go back to the slides. That is the demo I wanted to show — I know it is a very small program, but anyone can take it and work on it to see how it behaves. I couldn't do it with GPT-3 because you need a specific account and a subscription, so I did it with GPT-2.
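Since the original script is not included in the transcript, here is a minimal, hedged reconstruction of the kind of check described above. The CSV name, the text column, and the flagged-word list are all assumptions made for illustration, not the speaker's exact code.

```python
# Minimal reconstruction of the demo: flag sentences containing flagged words,
# then run the same check on text generated by GPT-2.
import pandas as pd
from transformers import GPT2LMHeadModel, GPT2Tokenizer

# Hypothetical flagged-word list, echoing the examples discussed above.
BIAS_WORDS = {"manpower", "handicapped", "confined", "brown bag", "native speaker"}

def possible_bias(text: str) -> bool:
    lowered = text.lower()
    return any(word in lowered for word in BIAS_WORDS)

# 1. Check every sentence in the dataset (hypothetical CSV with a "text" column).
df = pd.read_csv("biased_sentences.csv")
for sentence in df["text"]:
    status = "Possible bias detected" if possible_bias(sentence) else "No bias detected"
    print(f"{status}: {sentence}")

# 2. Generate text with GPT-2 and run the same check on the output.
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

inputs = tokenizer("The company is hiring and", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40, do_sample=True, top_k=50,
                         pad_token_id=tokenizer.eos_token_id)
generated = tokenizer.decode(outputs[0], skip_special_tokens=True)
status = "Possible bias detected" if possible_bias(generated) else "No bias detected"
print(f"{status} in generated text: {generated}")
```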
You can adapt it however you like and see how it works. This is a stepping stone — a simple first step toward seeing how this kind of training can be provided and how we can largely avoid bias wherever possible, and then move on to making sure the model is robust. Sturdy enough, in this sense, means it can take good, bad, and worst-case scenarios and still handle them. So together, let us join hands to solve problems using AI — they do not have to be societal problems; they can be societal, organizational, or business problems. Here I have asked: which one are you — cynicism, pessimism, or optimism?
We are talking about responsibility in products: we have a problem — can I help solve it? That is how I see it, and each of us can bring an emotional quotient to it, because as a leader working with teams, emotional quotient is very important when we build responsible AI. It is not about what I alone think; it should be a holistic approach. Responsibility with an emotional quotient is how I see solving the problem. If I am a technologist, I should not say I can only give guidance and direct the team — I should also be able to jump in and help solve the problem. That is how I am and how I look at it. Each of you might think about it differently, but we can all join hands to solve problems using AI.
So this is my call to action: each one of us can take steps toward responsible and ethical AI product development — any product that has a component of AI — because not all products need AI.
Just because I am in the field of AI does not mean every product needs AI. Wherever you, your team, and your stakeholders think AI is genuinely needed, use caution when releasing datasets, ensure you are using explainable models, and avoid — minimize as much as possible — any unfair bias in the training data. I'm open to questions. Thank you so much, and you can connect with me on LinkedIn; if you search for my name, you should be able to find me. I'm really happy to be here, and I'm now happy to address any questions you have. Let me stop sharing. That's right, Sandra — yes, the different types of biases we talked about. Yes, Namita. I'm reading out your questions.
"If you try to weed out biases, how close can you get to rooting them all out?" Yes, Deepika. We cannot remove every bias. If you think something is a bias and you are going to tweak your data, make sure it is not a single person's decision — it has to be a team decision: is this going to affect our output? Reach consensus, then incorporate the change, and decide whether the bias needs to be eliminated or just adjusted based on what the output has to be. We also cannot tweak too much, because the model learns from whatever data is fed in — it learns from the input you give it, and that is what shapes its output.
Which brings us to the question about positive and negative biases — "sometimes a measure of negative bias may help; can you give an example of negative bias in decision making?" There will always be some: implicit, unconscious bias is going to be there, but even implicit bias has to be reduced to a certain extent. For example, I did a post on LinkedIn a week ago: assume ChatGPT is a technology leader, and ask it to provide constructive feedback for a male team member and a female team member, each coming back from parental leave while working on a critical project.
Based on the content it had been trained on, ChatGPT provided two different pieces of feedback. For the woman, it was noticeably more critical. For the man, it was more along the lines of, "You were doing well before you went on parental leave; we know you can do better; if you need any help, ask me or talk to your team." The feedback was a total contrast, even though the only words I changed in the prompt were the gender pronouns, he and she.
But if you keep providing that prompt and checking it every day, ChatGPT starts to pick up that there is a bias there, because we tell it, "This is not the response I expected." It takes that feedback from the user and then makes its language gender-neutral: you can observe that it starts referring to the person as "they" or "them" rather than "he" or "she," so that the feedback it provides is the same regardless of which gender pronoun you use.
That is what people mean when they say that responding with "great, thank you" — or with corrective feedback — matters: the model uses your prompts and reactions to adjust, learns from them, and tries to respond more appropriately.
It can also recognize which responses were received well and which were not, and that feedback loop helps improve its responses over time — it takes the feedback and then responds accordingly, which is the key thing. Are there any other questions coming from the chat? I see only four people now, so if you have any questions, please let me know — we have about three more minutes. In conclusion, building robust AI models is crucial for real-world scenarios. You can use techniques such as data cleansing, the adversarial feedback I talked about, explainable AI, and continuous model monitoring and improvement.
We can improve the quality and fairness of AI systems by adopting these techniques, and it is super important to keep in mind that AI is not a silver-bullet solution. It requires ongoing development and maintenance to remain effective. That is why I made this call to action for responsible and ethical AI development: we need to work together to build AI systems that benefit not only us but everyone, for a better future — whether you develop them for your organization or for societal impact. In each case, look at the purpose and the value you are creating, and make sure it is impactful.
And we want to make sure we are doing it with as much bias reduced as possible. Thank you all, and I will stay on for a few minutes for any other questions.