Not all that glitters is gold: what does it mean to use AI responsibly?
Understanding the Responsible Use of Data and Artificial Intelligence
Today, we delve into a crucial topic: the responsible use of data and artificial intelligence (AI). This area of concern is increasingly gaining attention in our digital age. As we understand more about the potential of AI, we also become more aware of the risks associated with its adoption. Let’s discuss how we can mitigate these risks and ensure a responsible use of AI.
A Tale of AI Gone Wrong
Imagine you decide to join the local gym, known for its revolutionary AI-based fitness program. You fill out the necessary online paperwork, entering your personal data and lifestyle habits, only to be met with an automated rejection message: "Sorry, we are not suited for you." Feeling frustrated, you ask the gym for an explanation, only to be told that the decision was made by their AI system and nothing can be done. This scenario is a prime example of irresponsible use of data and AI.
The Risks of AI and How to Mitigate Them
Performance Risk
Accuracy errors are one of the key concerns when using AI. These errors are tangible and frequently stem from poor data quality, hidden bias in the training set, inadequate testing procedures, or overfitting. Opaqueness and a poor understanding of how the AI system works also pose risks, leading to transparency and interpretability issues.
Security Risk
AI solutions often face cyber intrusion, privacy risks, and open-source software risks, traditional threats shared by most IT systems. However, AI also introduces a new breed of threats, such as adversarial attacks, in which deliberately crafted inputs mislead the model into wrong predictions.
Governance Risk
Governance risks emerge from reduced human agency, difficulties in establishing accountability, and misuse of AI solutions leading to unintended consequences, such as discrimination or increased inequality.
Establishing Responsible Use of AI: A Comprehensive Approach
Addressing these risks involves several crucial steps. To avoid irresponsible AI use, organizations should:
- Ensure data privacy requirements, including those of the General Data Protection Regulation (GDPR), are satisfied.
- Ensure input data is of high quality and representative of the population being modelled.
- Collect and analyse the final outcomes of the model to ensure fairness and the absence of bias (see the sketch after this list).
- Measure the accuracy and implement interpretability techniques.
- Document outcomes, key decisions, and ownership for transparency, and to ensure accountability.
- Integrate protocols and appoint an AI Ethics Officer to ensure future models comply.
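As a concrete illustration of the outcome-analysis step above, here is a minimal sketch in Python. The column names and the data are invented, and the 0.8 threshold is the common "four-fifths" rule of thumb rather than a legal standard.

```python
import pandas as pd

# Hypothetical application log: one row per applicant.
# Column names and values are assumptions for illustration only.
df = pd.DataFrame({
    "age_group": ["under_45", "under_45", "over_45", "over_45", "over_45", "under_45"],
    "accepted":  [1,          1,          0,         0,         1,         1],
})

# Acceptance (selection) rate per group.
rates = df.groupby("age_group")["accepted"].mean()

# Disparate impact ratio: worst-off group vs best-off group.
# The "four-fifths rule" treats values below 0.8 as a red flag.
ratio = rates.min() / rates.max()
print(rates.to_dict(), f"disparate impact ratio = {ratio:.2f}")
if ratio < 0.8:
    print("Warning: acceptance rates differ substantially across groups.")
```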
Aside from these steps, ethical principles should be defined, with guidelines and best practices for each. Establishing structures of responsibility and integrating these principles into current standards ensures the values are enforced.
Final Thoughts
The world is becoming increasingly digital, with AI integration becoming a norm rather than an exception. As such, organizations failing to consider AI risks face potential losses, legal violations and, more dangerously, societal impacts like increased inequality. Therefore, it's vital for everyone to understand these risks, advocate for responsible AI use, and raise the bar in their respective organizations. Whether you're a developer or an end-user, everyone can contribute towards responsible AI.
Video Transcription
Hello, everybody, and thanks for being here today and attending this brief talk. We'll start right away because we have a tight timeline. During this session, if you feel like asking any questions, please post them in the chat, and my colleague Cecilia will help me answer them at the end. Otherwise, we can keep talking in the chat afterwards. Today I would like to postpone my introduction and start with a small imagination game.
Imagine that you just woke up feeling so energetic that you decide today is the right day to activate that offer you received by email: a pass to that exclusive gym that has become so famous lately because it provides personal training programs based on data collected not only in the gym but also through your smartphone, from your everyday activities and your lifestyle. Your friend just started last month and she is really enthusiastic and satisfied. So you go to your email, follow the link, accept the cookies and the connection between the gym's app and some of the other apps on your phone. And after a few minutes, you receive a red message: "Sorry, we are not suited for you." What happened? You feel shocked and frustrated, so you decide to call customer support and ask for an explanation of what happened and who is responsible for this decision. But the operator cannot answer and can only say: "I'm sorry, I don't have any information. I don't know how it works. I just know that this is an automatic system and we cannot do anything about it." How would you feel about that? And what could the gym have done to avoid this? I mean, why you and not your friend?
This is an example of a bad, or not responsible, use of data and artificial intelligence, and it is exactly what I try to avoid when helping my clients in my work. So I can finally introduce myself: I am Sara Mancini, and I work with Cecilia at PwC Italy in the Responsible AI team, which aims at supporting our clients, mostly organizations in the public and financial sectors, in realizing and adopting solutions based on artificial intelligence that are aware of the risks this technology poses and that put the human at the center of the technology itself.
In fact, in the last few years we have all witnessed a technological and digital revolution that enables a broad set of applications reaching into our everyday lives: new products, new services, new apps. These kinds of solutions, and artificial intelligence with them, have been adopted not only by big tech companies like Google and Facebook, but also by organizations in many other sectors: banks, for example, but also pharmaceutical and insurance companies. The more we use artificial intelligence, the more we know it has a lot of benefits, but we are also more aware that there are risks related to its adoption. What are these risks, and how can we mitigate and avoid them? This is the scope of today's talk. If we start from the fundamental components of artificial intelligence, we find data and the analytical models that crunch the data and provide information to the people who make decisions and take actions.
These are the most important components, and they are subject to a set of different risks: performance risks, security risks, and governance risks. Starting from the performance risks: these are the ones that are more tangible, let's say, the ones that you can sometimes actually measure and concretely see, because you can assess the performance of the model, its accuracy, its errors, and so on.
Where can these errors come from? They can stem, for example, from poor data quality, hidden bias in the training set, inadequate testing procedures, or overfitting. For those less acquainted with this terminology: overfitting happens when a model cannot generalize what it learned during the training phase to new cases or instances, and errors arise from that.
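To make that concrete, here is a minimal, self-contained sketch in Python with synthetic data: an unconstrained decision tree memorizes its noisy training set, so training accuracy is near perfect while accuracy on held-out data is clearly lower. That gap is the telltale sign of overfitting.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic, noisy binary-classification data (flip_y injects label noise).
X, y = make_classification(n_samples=500, n_features=20, flip_y=0.2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# An unconstrained tree can memorize the training set, noise included.
model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

print("train accuracy:", model.score(X_train, y_train))  # close to 1.0
print("test accuracy: ", model.score(X_test, y_test))    # noticeably lower
```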
A very famous example, actually, is what Amazon did a few years back, when they developed a recruiting tool that scanned the CVs of applicants for technical jobs. After a few years of using it, they realized that no woman was being selected for these jobs. Why did that happen? The reason was that in the training set only men had historically been selected for these jobs, as men were the only ones applying for them. So the tool learned that only men can do that kind of job; but, as we are all here, we know that women are more than good for those kinds of roles.
Another kind of performance risk comes from opaqueness and a poor understanding, on the user's side, of how the model functions, which leads to poor interpretability and transparency. If we move on to the next kind of risk, the security risks, we know there are some usual types, like cyber intrusion, privacy risks, and open-source software risks, that are common to other IT systems as well. But artificial intelligence solutions exacerbate these risks and are also subject to a new typology of security risk: for example, adversarial attacks, which feed wrong examples to the model in order to teach it wrong predictions, as in the experiment where a model was taught to classify a banana as a toaster.
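As an illustration of the general idea (the banana/toaster result came from a printed adversarial patch, a different technique), here is a minimal sketch of the classic fast gradient sign method in PyTorch, on a toy untrained model: a small perturbation of the input, aligned with the gradient of the loss, is often enough to flip a prediction.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# A toy, untrained linear classifier standing in for a real model.
model = nn.Linear(4, 2)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(1, 4, requires_grad=True)  # input we want to perturb
y = torch.tensor([0])                      # its true label

# Forward/backward pass to get the gradient of the loss w.r.t. the input.
loss = loss_fn(model(x), y)
loss.backward()

# FGSM: step in the direction that *increases* the loss.
epsilon = 0.5
x_adv = x + epsilon * x.grad.sign()

# With a large enough epsilon, the predicted class usually flips.
print("original prediction:   ", model(x).argmax(dim=1).item())
print("adversarial prediction:", model(x_adv).argmax(dim=1).item())
```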
The last category of risks are the governance risks, which also arise from a combination of the previous ones. These include, for example, reduced human agency, which is the ability of a human to freely take a decision without being influenced or forced to choose what the machine, or the artificial intelligence, wants; difficulties in establishing clear responsibilities and accountability for the actions taken upon decisions based on artificial intelligence; and wrong use or misuse of the solutions. All of this also results in unintended consequences, such as the perpetration of discrimination or inequalities. A recent example is the one you can see here on the slide: an experiment in which participants were asked to interact with an AI system that suggested photos of online dating candidates or fictitious political candidates, pointing people to whom to message or whom to vote for. The study showed that the AI actually impacted people's choices, in two different ways: their choices of romantic partners were influenced when the suggestion was hidden, while, on the contrary, when the suggestions were explicit, they were induced to choose a certain political candidate. These risks remind us that artificial intelligence is indeed a tool in the hands of humans, who are the ones that can, first of all, decide, and, secondly, shape the technology to account for all the aspects we have just mentioned. Let me come back to our initial imagination game and see how this is true: what could the gym have done to avoid the risks we just saw? Well, first of all, they had to take my heavy complaints about the fact that they didn't want me, beg my pardon, and give me a big, big discount. But after that, they had to do something inside their organization.
So they asked the development team, first of all, to check whether all the data privacy requirements were satisfied: for example, whether the GDPR was respected, whether all the necessary consents had been given by the customers, and so on.
The second thing the development team did was to check the input data: whether the training set had good data quality, whether it hid any biases, and whether it was representative of the population being modelled. What did they find? They found that the training data was not representative enough of people older than 45, and especially of women. So they decided to collect more data on this group and retrain a specific model to improve the overall performance.
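A minimal sketch of such a representativeness check, comparing each segment's share of the training set against reference shares for the population being modelled (segment names and all numbers here are invented for illustration):

```python
import pandas as pd

# Training set with one demographic segment label per row (invented).
train = pd.DataFrame({
    "segment": ["under_45_m"] * 50 + ["under_45_f"] * 35 +
               ["over_45_m"] * 10 + ["over_45_f"] * 5,
})

# Invented reference shares for the population the gym actually serves.
population_share = {"under_45_m": 0.30, "under_45_f": 0.30,
                    "over_45_m": 0.20, "over_45_f": 0.20}

train_share = train["segment"].value_counts(normalize=True)
for segment, expected in population_share.items():
    observed = train_share.get(segment, 0.0)
    # Flag segments whose training share is far below the population share.
    flag = "UNDER-REPRESENTED" if observed < 0.5 * expected else "ok"
    print(f"{segment:12s} expected={expected:.2f} observed={observed:.2f} {flag}")
```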
Then they decided to present the output to a person who could take the final decision, instead of having a fully autonomous system. The next step was to check the final outcome of the model, that is, its behaviour: they wanted to verify whether the behaviour of the model was biased towards some sensitive group, whether it used sensitive information such as gender, race, or sexual orientation, and whether it relied on some proxies of these characteristics.
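One simple, admittedly crude way to hunt for such proxies (not spelled out in the talk) is to check how strongly each candidate feature correlates with the sensitive attribute. A sketch with invented data:

```python
import pandas as pd

# Invented feature matrix plus a sensitive attribute (1 = female).
df = pd.DataFrame({
    "weekly_steps": [9000, 11000, 4000, 4500, 10500, 3800],
    "shoe_size":    [36, 37, 44, 43, 38, 45],
    "resting_hr":   [62, 58, 70, 72, 60, 74],
    "is_female":    [1, 1, 0, 0, 1, 0],
})

# Absolute correlation of every feature with the sensitive attribute:
# a high value suggests the feature could act as a proxy for it.
sensitive = df.pop("is_female")
proxy_scores = df.corrwith(sensitive).abs().sort_values(ascending=False)
print(proxy_scores)  # here, shoe_size would stand out as a likely proxy
```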
After that, they measured not only the accuracy of the model but also some fairness measures. Finally, they wanted to increase the level of understanding of the model by identifying the most important features and applying some techniques of local or global interpretability. Just a quick note: local interpretability relates to understanding the functioning of the model on one single instance, or example, while global interpretability relates to the overall functioning of the model. In this case, the team implemented treatment equality, which is a measure that ensures the model has the same degree of correctness across different groups. This means they computed the ratio of false negative to false positive examples for the two groups, people younger than 45 and people older than 45, and then balanced the parameters of the model to ensure a good balance between accuracy and treatment equality.
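A minimal sketch of that treatment equality check, with invented labels and predictions: compute the ratio of false negatives to false positives within each group and compare; the closer the two ratios, the better.

```python
import numpy as np

def fn_fp_ratio(y_true, y_pred):
    """Ratio of false negatives to false positives."""
    fn = np.sum((y_true == 1) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    return fn / fp if fp else float("inf")

# Invented ground truth, predictions, and age-group labels.
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 1, 0, 1, 1, 0, 0])
group  = np.array(["<45", "<45", "<45", "<45", "<45",
                   ">=45", ">=45", ">=45", ">=45", ">=45"])

# Treatment equality holds when these ratios are (roughly) equal.
for g in np.unique(group):
    mask = group == g
    print(g, fn_fp_ratio(y_true[mask], y_pred[mask]))
```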
After that, they implemented SHAP values, a technique that breaks down a single prediction to show the impact of each feature of the model. In this case, they realized that the most important variables were age, weight, and whether the applicant smoked.
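A sketch of how SHAP values might be computed with the open-source shap package; the model, data, and feature names are invented stand-ins, and the exact return shape of shap_values varies across shap versions, as hedged in the code.

```python
import numpy as np
import shap  # pip install shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Invented features: age, weight, smoker flag.
X = np.column_stack([
    rng.integers(18, 70, 200),   # age
    rng.normal(75, 12, 200),     # weight (kg)
    rng.integers(0, 2, 200),     # smoker (0/1)
])
y = (X[:, 0] > 45).astype(int)   # toy target correlated with age

model = RandomForestClassifier(random_state=0).fit(X, y)

# TreeExplainer computes exact SHAP values for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Older shap versions return one array per class for classifiers;
# newer ones return a single (samples, features, classes) array.
vals = shap_values[1] if isinstance(shap_values, list) else shap_values
vals = vals[..., 1] if vals.ndim == 3 else vals

# Mean absolute SHAP value = global importance of each feature.
for name, score in zip(["age", "weight", "smoker"], np.abs(vals).mean(axis=0)):
    print(f"{name}: {score:.3f}")
```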
After that, all the outcomes of the analysis were documented and stored, together with the ownership of the key decisions around the model, to ensure transparency and accountability. But this is not enough. In the end, they decided to standardize this approach, in order to ensure that in the future the development of any model would be compliant with these procedures: they integrated their protocols and appointed an AI Ethics Officer. So we come to the end of this example, which is not a real one. What else could be done to avoid this kind of situation? First of all, we can define a code of conduct: a document that states the ethical principles the organization would like to follow and, for each of them, defines guidelines and best practices. We can also define roles and responsibilities to ensure that everybody is aware of these new principles and that the principles are respected, and then integrate them into current procedures, protocols, and standards, to actually verify that the principles are enforced. Now, if we consider that we are all working in a world that becomes more digital every day, and that many of us will deal with artificial intelligence solutions more and more in the future, we understand that we need to think about these risks and do something about them.
If a single organization does not take into account the possibility of these risks occurring and does not act on them, it will experience enterprise risks such as reputational damage, financial loss, or legal and compliance violations. But if all organizations overlook these principles and take no action upon them, then we can expect some societal impacts too, such as widening inequalities, an intelligence divide, or an increasing concentration of economic resources and power in a small group of players.
So, what can we do to avoid these risks? And what is your organization, for example, doing in this space? What is happening around you? Remember that even though you are not on the front line, even though you are probably not a developer or a person with responsibility over a solution, you can still make a change and raise the bar for the others. For example, you can start by informing yourself, or take a course and learn more about this topic. Or you can search for news about cases where AI went wrong in your context of application, maybe competitors that did something wrong. You can investigate how this topic is taken into account in your organization, what your company is doing, or whether it is even aware of it. And then advocate, and talk about it with everybody, friends and colleagues included, because together we can actually do something. So, this is all. Thank you for being here, and let's see if you have any questions.
No questions, just a couple of comments. OK, well, I think I also finished my time perfectly. Thank you all. If you have any other comments or questions, please keep talking in the chat. Thank you.