How to Lead in the Age of Superintelligence

Prerna Kaul
Senior Technical Product Manager
Automatic Summary

Acting as a Leader in the Age of Artificial Intelligence (AI)

Welcome to our discussion about leading during this exciting yet potentially challenging time of AI growth. I'm a product leader working on Alexa at Amazon, and I'm eager to discuss the potential and pitfalls of AI.

Understanding Superintelligence

The discussion of AI often brings up the concept of superintelligence. But what is superintelligence and why should we care about it? Superintelligence, often associated with terms like artificial general intelligence, collective intelligence, and augmented intelligence, is the notion of an intelligent system that surpasses human intelligence. This system is capable of ingesting copious amounts of information, generating new knowledge, and performing actions that were traditionally conducted by humans or partially automated by computers.

But why should leadership care? Leaders must consider superintelligence in their decision-making due to its potential benefits and risks.

The Benefits of AI and Superintelligence

  • Efficiency: AI can streamline processes, making them more efficient and productive.
  • Innovation: AI drives innovation across a variety of industries that were traditionally inefficient, including healthcare, education, and finance.

The Risks of AI and Superintelligence

Despite all the potential benefits, we must remember that AI can also pose existential threats if mishandled. The main issue lies in the alignment of AI with human values. Current AI development doesn’t inherently incorporate human values like fairness, independence, and the pursuit of happiness. If not considered carefully, the misuse of AI can lead to unfairness and inequality.

The Impact of AI on the Workforce

One of the areas where AI's impact is felt the most is in the workforce. By 2030, AI and related technologies are predicted to replace nearly 300 million full-time jobs. Yet, these same technologies may also create approximately 200 million new jobs.

Emerging opportunities include a growing need for skills in human-computer interaction, augmented reality, neural computing, quantum computing, and more. Traits such as critical thinking, emotional intelligence, empathy, and strategic agility, which are hard for AI to imitate, remain distinctly human and will still be in demand.

Developing an Ethical Framework

For leaders in the AI space, it's necessary to address ethical considerations in AI implementation. This is a vast topic that needs to be explored by asking questions around fairness in AI, representation in the workforce, processes for data sharing, collaboration, and trust.

Several companies, including Google, Microsoft, IBM, Salesforce, and Intel, have begun addressing ethics by forming AI Ethics Boards to promote AI transparency, fairness, and inclusivity. However, concerns remain about AI's potential to disrupt jobs significantly.

Without global collaboration and agreement on ethics in AI, successful ethical implementation could be challenging.

Call to Action for Leaders

Leaders need to consider two key steps:

  1. Develop ethical frameworks and governance systems for AI.
  2. Seize the opportunity for global collaboration and aim to use AI towards the betterment of humanity rather than for short-term gains.

Meeting these crucial challenges head-on will ensure that we can maximize the benefits of AI and superintelligence while mitigating their risks. Let's continue to explore these topics and work together to shape an AI-empowered future that aligns with our values and needs.


Video Transcription

Thanks, firstly, for joining this presentation. By way of background, I am a product leader at Amazon. I work in the space of AI/ML, and I'm a leader at Alexa at the moment. My background is in engineering and product. I had a chance to work with some very interesting folks back at the University of Toronto at the Creative Destruction Lab, which was an AI startup incubator, where I had a chance to meet Geoffrey Hinton, one of the three so-called godfathers of AI.

He's well known and renowned for his work in neural networks, and he's also a Nobel Prize winner. So that's where I'm coming from: I had early exposure to both the benefits and the risks of AI, and to how AI has evolved over the past six to seven years to commercialize and change the face of so many different industries. So I would love to share with you my thoughts on how leaders can act in the age of AI. (Give me one second, I'm having a technical issue, sorry.) Let me start off by asking: what is superintelligence, and why should we care about it in the first place? I think we're at a stage where there has been so much hype and so much buzz about artificial intelligence, talking about artificial general intelligence, where you have a super agent, a singular agent, that is all-knowing; it has all the information from past and present.

And it can not only assimilate that information but also generate new knowledge and new information in order to perform actions that previously humans could take on their own, and that were partially automated by computers as our history and our society evolved, and that are now being taken to another level, right?

So at a very base level, we're in a place where we're talking about a lot of different aspects of artificial intelligence and how it can be leveraged. But there's also a lot of hype and a lot of misinformation. So I feel that, taking a step back, it's important to clarify: what do we mean when we say superintelligence, and why should we really care about it? Superintelligence, in very simple terms, as it's been defined by researchers in this industry as well as by authors and thinkers who have given it a lot of good consideration, is an agent that possesses intelligence far surpassing human intelligence. The common terms used to refer to superintelligence are artificial general intelligence, collective intelligence, and augmented intelligence. And the reason leaders should care about this topic is that there are both benefits and risks associated with it. The benefits are that you are able to gain efficiency, higher productivity, and innovation across industries that have previously been very inefficient, to be very frank: healthcare, education, finance, manufacturing, transportation. From a personal standpoint, we have all experienced the pains of having to pay a bill on time in healthcare, of not having the right price information in front of you in order to make good healthcare decisions for yourself, or of trying to quantify your data from an education standpoint.

The fact that we all go through standardized testing and don't have a curriculum personalized to our own learning styles and our speed of learning has always been a pain point. There are so many ways in which AI can benefit humans there, and the same goes for the other industries. But the risk we're seeing is that it is now becoming very clear that there is an aspect of unfriendly AI that we need to think very consciously about. What that means is: not only is AI beneficial to society, but if not used in the right way, it could be an existential threat. A lot of researchers are now coming forth and saying we've not really built in the right controls to ensure that AI's values align with human values. Those human values include things like access to freedom, access to your own independence, the pursuit of happiness, and the idea of ensuring that people have opportunities to utilize their skill set.

How do you give equal access to everyone? These are human values, but these frameworks are not built in, per se, into an artificial intelligence. And that's really where the risk comes in. And feel free to stop me anywhere you'd like to really focus and understand more details; I'm sure we could have a very good conversation about that. So where we're seeing this evolve, as I talk about benefits and risks, is that AI could replace nearly 300 million full-time jobs in the industry today by 2030. And this is a widely cited projection, across all sorts of occupations. The combination of biotech, artificial intelligence, nanotechnology, and also just automation will replace around 300 million jobs, but it will also create 200 million jobs in the process.

And I think that's a very interesting consideration, right? So what kinds of job roles would be created? A few examples, and of course we are still discovering this: we are seeing opportunities emerge in the space of human-computer interaction, so folks who understand how humans and computers interact and are able to build AI-assisted technologies will have more scope. We're seeing the popularity of augmented reality already, with the metaverse and everything. We are now starting to talk about neural implants.

Neuralink is a very well-known startup that is exploring how brain-computer interaction will work when you're able to interface with the human brain and have a collective intelligence by using technology. So how do you operationalize that, and do we have the right skills for it? Quantum computing is another aspect, as well as, across the board and across these different industries, how we ensure the right governance and the right security controls. So these are the kinds of job roles we're starting to see emerge. And we think that the things that are differentiated, that are unique to humans, that AI would not be able to easily replace, are skills like critical thinking. If you use ChatGPT today, you will see that the thinking displayed within the AI system is pretty general: it will be able to assimilate and consolidate information, but to figure out what the critical parts are, how to prioritize information, and what the insights are, these are things where humans really excel, in addition to showing creativity, strong emotional intelligence, having empathy for people, and the ability to think strategically and be agile about it.

These are actually very human qualities that are not easy to replace through AI, and we think that's where the future of work is evolving. For leaders, I would like this to be a conversation rather than me listing the things that leaders can do. The areas where we need to really collectively think are, number one, what is our ethical framework? How will we align our work with human values, collectively, as leaders? This is an area I really want to emphasize: how will we enforce fairness in AI? What is the gold standard for representation in the workforce? How will we establish processes for data sharing, collaboration, and trust? And finally, what are our contingency plans? How are we baking in contingencies for unforeseen risks and failures? Rather than me saying this is what we need to do, I think this needs to be an open conversation where we ask each other these more nuanced and challenging questions and come up with ideas and solutions around them. A few case studies I'd like to mention, based on the information that we have: there are, in fact, AI Ethics Boards that companies are bringing to the fore to oversee that the development of AI has an underlying current of transparency, fairness, and inclusivity, and companies such as Google, Microsoft, IBM, Salesforce, and Intel are actually working with this mental model.

However, researchers still have concerns that there is a huge potential for AI to automate jobs, leading to significant job losses and social disruption. And for companies, just in terms of incentive structure, the priority will always be to maximize business value.

And how do we think about that through the new lens of AI? I see that one of the participants has a question: do you think that, without global rules and laws, there will be enforcement of those ethics implementations? I think that's a great question. And honestly, my answer, based on my reading so far, is no: I don't think that without global collaboration, a collaboration model of some sort, we will achieve a state where companies themselves, or even countries themselves, will be able to provide the right framework for ethical implementation. I am getting a note here to end the session, but I will end by talking about a few calls to action for leaders. The first is to ensure that we are establishing the right ethical frameworks, that we have a governance system, and that we are thinking about the questions I highlighted earlier. And second, as Stephanie was saying, we have the opportunity to collaborate, and we should use it; we should see this as a way to better humanity rather than to maximize short-term gains for individuals. I would really urge leaders to do that.

These are a few things that are top of mind. I hope this presentation was useful, and I look forward to your questions and feedback. Thanks.