AI and the Enterprise: Considerations for secure and responsible deployments of native, third-party and integrated models
Demystifying AI: Discussing its Role and Potential in Today's Enterprises
Are you searching for some insight into the capabilities of Artificial Intelligence (AI) today and how to use it in your enterprise? Look no further. This blog post discusses key generative AI concepts and shares various internal use cases deployed within organizations to assist senior leaders and others across various industries in their AI journey.
Understanding the Power of AI
AI is on everyone's mind these days.
Whether we're integrating AI into our products, utilizing its functions from products we purchase, or practising a combination of both, it's crucial to define our intended outcomes carefully. AI's potential is grand, but it needs to be used thoughtfully and responsibly, grounded on robust security and privacy models.
Generative AI holds a particularly exciting promise. With generative AI, we're speaking not only about models that can generate content that appears human-written – anything from emails and code to video – but also about models that can create new, original content based on learned patterns.
Generative AI's Rapid Pace of Adoption
What's perhaps most remarkable is the speed at which AI is being adopted. Usage of AI is outpacing even PC and smartphone usage in the first four years of true adoption. To maintain a competitive edge and provide up-to-date services and products in industries like hospitality, incorporating this technology is becoming increasingly crucial. However, enterprises need AI models that reflect their desired business outcomes.
These models need to be adaptable – capable of being fine-tuned and designed to deliver secure, quick, and relevant outputs that support these outcomes. Using retrieval-augmented generation, or RAG, you can increase the relevance of responses by connecting the AI model to a trusted knowledge source or database.
In essence, generative AI works using 'embeddings' – vector representations of your data – and the better the embedding model, the better equipped the AI model is to work on language problems.
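To make the idea concrete, here is a minimal sketch of how embeddings support semantic comparison. The three-number vectors and the word choices are invented for illustration; real embedding models produce vectors with hundreds or thousands of dimensions.

```python
import math

# Toy 3-dimensional "embeddings" (invented for illustration; real embedding
# models produce vectors with hundreds or thousands of dimensions).
EMBEDDINGS = {
    "queen":    [0.9, 0.8, 0.1],
    "princess": [0.8, 0.9, 0.2],
    "invoice":  [0.1, 0.0, 0.9],
}

def cosine_similarity(a, b):
    """Return how closely two vectors point in the same direction (max 1.0)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Semantically related words sit close together in the vector space,
# which is what lets semantic search outperform plain keyword matching.
related = cosine_similarity(EMBEDDINGS["queen"], EMBEDDINGS["princess"])
unrelated = cosine_similarity(EMBEDDINGS["queen"], EMBEDDINGS["invoice"])
```

The same distance calculation, applied to real embedding vectors, is what a vector database runs at scale to answer a semantic query.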
Key Enterprise Use Cases for AI
Once you grasp the basics of how AI works, the burning question everybody asks is, "How can we use AI effectively within our businesses?"
There are several practical AI applications for businesses such as:
- Summarization: With AI, you can create a summary of original text.
- Semantic search: Unlike a general keyword search, semantic search aims to understand the query's overall meaning.
- RAG: RAG can enhance semantic search and summarization by pulling data from your knowledge source or database to provide the most accurate information.
- Chat: Chatbots and other interactive conversation tools can solve a variety of problems, from simple to complex.
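As a concrete illustration of how RAG grounds a chat or search response, here is a minimal sketch of composing a prompt from retrieved passages. The function name, prompt wording, and knowledge-base snippet are all invented for the example; a production system would retrieve the passages from a vector database first.

```python
def build_grounded_prompt(question, retrieved_passages):
    """Compose a RAG-style prompt that instructs the model to answer only
    from the retrieved passages, reducing hallucinated answers."""
    context = "\n\n".join(
        f"[Source {i + 1}] {passage}"
        for i, passage in enumerate(retrieved_passages)
    )
    return (
        "Answer the question using only the sources below. "
        "If the sources do not contain the answer, say you don't know.\n\n"
        f"{context}\n\nQuestion: {question}\nAnswer:"
    )

# Hypothetical retrieved passage from an internal knowledge base.
prompt = build_grounded_prompt(
    "How do I reset my VPN password?",
    ["KB-101: VPN passwords can be reset from the self-service portal."],
)
```

Because the model is told to cite only the supplied sources, the answer stays tied to your enterprise data rather than to whatever the model absorbed from the open internet.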
Internal Use Case: Oracle Implementation of AI
The potential of generative AI is endless, and Oracle has made excellent progress in this space through their utilization of RAG in their internal enterprise search, .com search, AI service desk, and smart virtual contact centre agents.
Their primary internal site, MyOracle, receives around five million hits each month from employees searching for internal resources. With 500,000 queries per month being raised through the MyOracle Search feature, they found a 65-70% click-through rate on search results and received 75% positive feedback from users on the quality of the generated answers.
Oracle also built an AI service desk, which has achieved a 20-30% ticket deflection rate: tickets that are now resolved by the AI's accurate, step-by-step solutions rather than passed through to human agents.
Deciding on the Right AI Pathway for Your Business
The journey to AI doesn't have to be overwhelming. Oracle will soon release a fully managed AI service called the OCI GenAI Agent Service, which automates the entire process from understanding user prompts to data retrieval and taking necessary actions. The service aims to make AI more accessible for all businesses.
For those considering building their own models from scratch, OCI Superclusters are an excellent option. These platforms offer some of the highest-performance, lowest-cost GPU cluster technology in the world.
To Sum Up…
AI offers boundless opportunities for enterprises to deliver on their key scenarios, such as search support and contact centres. A clear end objective should guide the development of AI products. Include a diverse group of testers to ensure the model will meet your broader use cases and continuously analyse and monitor your model to ensure it performs well throughout its entire lifecycle.
Eager to learn more? Consider the resources provided. You could also reach out to Oracle's experts to discover more about AI and how it can help reshape your organization.
Conclusion
AI offers an invaluable toolset for transforming the way we work and the products we offer. By diving deep into AI and understanding how to best make use of it, we unlock the potential for more efficient, effective, and innovative products and services. Its capabilities are boundless, but it is imperative we wield it responsibly. For those embarking on the journey of AI, buckle up, the future is exciting.
Video Transcription
Alrighty. Well, if you've just seen the last presentation, we're gonna continue the trend on AI now. So thank you so much for joining the session, for hanging on for the last keynote of the day here. So as Margo mentioned, my name is Christine Saros. I'm the senior vice president of an organization called the Oracle Enterprise Engineering team. Use of AI is, you know, top of mind for most of us these days. Whether we're building AI into our own products, we're consuming its functions from products we buy, or we're practicing a bit of both via integrations, it's really important for us to think about defining our intended outcomes. So in my session today, what I'm hoping to do is discuss some key generative AI concepts. I have some slides to show you some graphs and other things that will hopefully help you with some visual aids.
And I'm hoping to introduce you to several internal use cases that we've deployed within my own organization, with the objective to really aid senior leaders and, you know, really folks across the industry to set yourselves up for a successful AI journey. So as we begin, as is our corporate policy, I just wanna quickly convey our safe harbor statement. The content I'm covering today is informational, and it's not any sort of formal commitment by the company. So as we get started, I just wanted to share a little bit about my experience with technology and why the subject is so near and dear to my heart. You know, as we've talked about, I've been working in the technology industry for 25 years.
I actually started my tech career as a software support tech answering the phones on the help desk. And now I run a 1,500-person global organization of engineers and operators who are driving strategic product innovation every day around the company, which is just really exciting. And at Oracle, very specifically, consuming AI as part of our employee experience is actually my responsibility. It's a big responsibility, a very important responsibility. So whether this entails using Oracle's own AI, embracing embedded AI in the products that we consume from other companies, whether this is a third party or others, or whether we're using some sort of hybrid model, or maybe we're using some of our data and integrating with other companies' systems, etcetera, my team and I have spent thousands of hours, literally thousands, maybe more than tens of thousands of hours, researching and implementing a lot of these AI services before we deploy them to, you know, our internal employee base globally, who are using them as part of their daily business workflows.
And I think that's, you know, really what this is all about: how do we begin that journey, and what are the things we can start with, you know, out of the gate? And the last thing I just wanted to quickly say is I believe the decisions we make now, especially as we're considering large data sets across nearly all of the industries that we all participate in, need to be thoughtfully based on security and privacy models.
I truly believe that artificial intelligence has the power to transform our worlds, and if we use it collectively to drive positive outcomes by understanding the basics, we're gonna have that much better results with the things that we design and deploy. So quickly, on my agenda, I just have three kind of general sections that I will discuss with you today. Firstly, I'd like to introduce just sort of some of the background and basic concepts behind generative AI. Then I'll discuss how we're applying generative AI here at Oracle within my team, and how we're leveraging some of the Oracle products and other capabilities. And then I'm gonna finish my talk with just a quick summary for you, with some good AI practices which hopefully will help you build your own AI solutions.
So as I'm sure you have heard many times today, if you've been listening to some of the presentations as I have, you know, with the topic of AI there's just so much excitement around it in the industry at large. And, you know, I wanna set the baseline by sort of defining generative AI. Interestingly enough, autonomous generation of new text was available in the 1960s in the form of chatbots, if you can believe it. And then we had a lot of early first-generation machine learning capabilities. A lot of those things were focused on structured data, often working with numbers, giving us the ability to do things like predictions and classifications, and finding ways to not only recognize patterns but to operationalize things against them. An interesting example of a sort of early AI application was sorting the mail in the post offices by recognizing handwritten ZIP code digits.
So lots of very powerful techniques, still very relevant today. But generative AI itself is really new and different. And what it brings to the table now is models that can actually create content that looks and feels as if a human had created that content. This could be something like email, it could be, you know, writing code, creating videos, many, many applications. And these models are getting much larger, they're getting really powerful, and they can do all kinds of interesting things. But there are sort of two distinct characteristics I wanted to talk about in particular with generative AI. One is sort of the generative function. That's its ability to create things based on patterns that it's seen: you train on large sets of data and then leverage those observations or learnings from those patterns to create content based on them.
The other part of it, also significant, is the generalization. And that's, based on those patterns, it can be asked to do things or to create content that's new and not like something it's seen before. So it can generalize that data itself to create new content. So what's really exciting is this really rapid adoption of AI. On this graph, which you can see here, I've got time represented on the x-axis, and I've got US users on the y-axis. AI usage is outpacing even PC and smartphone usage in the first four years of adoption, like true adoption, which is crazy because everybody has a phone.
Right? Most people have PCs. So we're seeing an even quicker, more accelerated pace of adoption than in those products. So from an industry standpoint, say things like the service or sort of hospitality-type industries, it's becoming increasingly crucial to incorporate this technology just to even maintain a competitive edge and to evolve products and services. But what businesses or enterprises need are really models that reflect business outcomes. These are models that can be really fine-tuned, adapted to a specific organization's data, specific data being the keyword there, and designed to deliver secure, fast, and relevant outputs that support business outcomes, which can be providing that automated service or simply providing an automated report that allows a human to make a data-driven decision.
And this is sort of a lot different than what you see in some of the consumer spaces, where, you know, some things are consuming just general internet data as a whole. So with all of these advances in adoption and capabilities also comes a rapid influx of investment, with the expectation that AI is going to contribute $16 trillion, that's with a T, to global GDP by 2030. But let's take a step back just for a minute and really understand exactly what is going on with generative AI. Let's look behind the curtain and really understand sort of the magic that's happening behind the scenes. So let's start with the data. We have a variety of data out there in the world.
Maybe 20% is structured, things like database records, and then you've got a whole lot of unstructured data, things like text, videos, social media content. These are big opportunities if businesses can figure out how to use all of these millions of points of data. So to process this data, AI represents items as vectors. And these vectors allow us to understand the underlying relationships of the data. The vectors are used to represent the semantic content, or the meaning, of those items. So in this 3D graph that I have represented here on the right-hand side, my graph is considering features such as age, gender, and royalty. In this particular example, queen and princess are semantically close. They're both female and they're both royalty, and they have similar representations. Same thing with sort of king and prince on this chart.
So at its core, a vector is a sequence of numbers known as dimensions, and the dimensions are designed to capture the essential features of the data. These vectors are generated using deep learning embedding models. So we take this knowledge, we sort of use this concept of data representation, and we leverage the entire web to build AI models. We can think about large language models. LLMs are models that are trained on a large internet corpus, and they crawl public data that exists in the public domain, things like Wikipedia, social media data, or general web data, web pages on the internet. And these models can do some really cool, neat, interesting things.
For example, they can pass the bar exam, they can write term papers, and they can do relatively complex tasks like comparing two philosophers and their philosophies, or they can simply give you a bit more contextual information than the things you used to do with just a general Google search, as an example.
But here's the thing, the most important thing. Most businesses are not trying to write term papers about philosophers. They're not likely to ask questions of the general internet and that huge corpus to make critical business decisions. What businesses are looking for are models that reflect things that are specific to the business. They're models that can be, again, fine-tuned or adapted to that specific data, and that deliver results that are very relevant. So at the end of the day, generative models recognize patterns in embeddings to understand language and to predict things like the next best word in a sentence. And it may sound kind of complicated, but it's actually pretty simple. The main thing that you really have to understand is that generative AI works by using embeddings.
And these are those vector representations of your data. So the better the embedding model, the better able the model is to work on language problems. So the question everyone's asking is, okay, great, that's nice, I understand the science, but how do businesses actually use AI and these concepts to leverage these models? And so, you know, as I mentioned earlier, businesses are not looking to just go scrape the internet and use all of that data to run their business. Enterprises need to use these models in a very trustworthy way that provides data privacy and protection against things like data leakage, bias, and hallucinations. There's a lot of innovation in this space, including the idea of retrieval-augmented generation, or RAG. The general principle is that generative AI is great.
It does amazingly cool things, but it still has limitations, things like hallucinations. And hallucinations are just what you think they are: they're incorrect or misleading results. And those incorrect results reduce the reliability and accuracy of your answers. So RAG is a specific method designed to resolve that. And what you do, very simply, is you take that model and you connect it to a database or some sort of source of trusted knowledge. You then let that model query and pull back retrieved documents to be used as part of its response. And what that really does is it gives you a few things. The model is then citing known, trusted knowledge sources, which is improving the context and the reliability of the response. And the model is also relevant.
It has relevance to the known private knowledge that you define, and, equally as important, it's kept up to date. So you're not pulling data that could have been published at any particular point in time; it's just as relevant as the data that you're providing it. So by using RAG to incorporate your specific domain and your enterprise data, the results become much more useful to you as the recipient of that data. So now that we've covered, you know, sort of the basics at a high level, let's put all these pieces together and just take a look at a very simplified GenAI workflow. Starting on the left-hand side of this slide, at number 1 here, the user is issuing a query; that query is vectorized and it's passed to the vector DB we talked about.
In step 2, the vector DB is processing the query, and it's looking to find the closest proximity results to the question that was posed. In step 3, the user's input query plus the results obtained from the vector DB are passed to the LLM, which generates the answer. And finally, in step 4, that final answer is sent to the user. So this approach, known as the RAG approach, is an excellent one when you're doing things like Q&A, chatbots, summarization, or semantic search to find the best answer based on this enterprise data that you've connected to. So on my next slide here, my intent, sort of shifting gears just for a moment, is just to highlight why it's important to define your use cases before you design your solution. For enterprise use cases, different types and different sizes of LLM models are available, which depend on the order of magnitude of the parameters that you want your model to really consider.
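The four steps just described can be sketched roughly as follows. This is an illustrative sketch only: the tokenizer stands in for a real embedding model, the word-overlap lookup stands in for a vector DB's nearest-neighbour search, and the lambda stands in for an actual LLM call; the documents and function names are invented for the example.

```python
import re

def tokenize(text):
    # Crude stand-in for step 1's embedding model: a set of lowercase words.
    return set(re.findall(r"[a-z]+", text.lower()))

def rag_answer(query, documents, llm):
    # Step 1: "vectorize" the user query (here, just tokenize it).
    query_tokens = tokenize(query)
    # Step 2: retrieve the closest document (largest word overlap, standing
    # in for a vector DB's nearest-neighbour similarity search).
    best = max(documents, key=lambda doc: len(query_tokens & tokenize(doc)))
    # Step 3: pass the query plus retrieved context to the LLM.
    answer = llm(f"Context: {best}\nQuestion: {query}")
    # Step 4: return the final answer to the user.
    return answer

docs = [
    "HR-42: Annual leave requests are submitted in the HR portal.",
    "IT-7: VPN access requires multi-factor authentication.",
]
# A trivial stand-in "LLM" that just echoes the context line it was given.
reply = rag_answer("How do I request annual leave?", docs,
                   llm=lambda prompt: prompt.splitlines()[0])
```

Even with these crude stand-ins, the query about annual leave retrieves the HR document rather than the VPN one, which is the behaviour the real pipeline delivers with proper embeddings.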
So on the left, you have small or medium-sized LLM models, which have on the order of billions of parameters, which require a smaller hardware footprint and are often more cost effective if you're looking to just get started and you don't have, you know, trillions of parameters to consider, as an example. And they also may have lower latency in some cases. On the right are these really large models that have trillions of parameters. They're a lot more powerful, and they require a larger hardware footprint. So you've gotta make sure that whatever systems you're using behind the scenes that are powering that LLM are highly tuned and account for things like latency considerations.
So the net here is, really, for enterprise use cases, not only is it important to consider, you know, the way this all works and the LLMs, but it's important to consider cost and latency as far as which LLM you're looking to actually choose. So, again, we've talked about the basic concepts and the models, so I wanna talk about a couple of key use cases for enterprises. The first I'll talk about is summarization, and this is really creating a summary of original text. So it could include things like, you know, "Please summarize my project documents, my legal contract, a policy," potentially. Semantic search, instead of using just keywords, is really aiming to understand more around the meaning of the query, that semantic context of the query.
RAG is this model that improves the relevance of responses, which you can layer on top of that. And then chat is really, you know, allowing for interactive conversations with our users to solve, you know, simple to complex problems. I'm gonna jump into a couple of things that we've done here at the company. So, a couple of the use cases where we've already implemented, you know, a RAG model are with our internal enterprise search as well as our search on oracle.com, our AI service desk, which is something that supports all of our internal employees as an AI search service, and our smart contact center virtual agents.
So I had a nice little video here on the right-hand side, but PDFs didn't work for that, so my apologies, but I've got just a snapshot here for you of MyOracle, which is our main intranet site for Oracle employees, which allows access to all internal resources. It receives 5,000,000 hits monthly from our employees. It's sort of like the main landing page for employees for access to any and all internal content related to anything employees need to do to complete their daily jobs. And one of the most important features of this portal is something we call MyOracle Search. What that is, is an enterprise search service, and that service itself, off of this page, gets 500,000 queries per month. So we used to use lexical search to retrieve the results from our enterprise search.
To improve relevance, we moved from this lexical sort of search to a semantic search, which allowed employees to get more relevant results. And we saw about a 65 to 70% click-through rate on search results. We also received 75% positive feedback from the users on the quality of the generated answers, and we're just getting started. The other thing that you may not be able to sort of see on this slide is, with RAG, we've also combined semantic search with LLMs to provide a simple summarized answer. You don't have to click through and just get a whole bunch of links, click through this link, read this thing; we're actually summarizing the most relevant answer and providing that in a very crisp, clear response, which actually becomes conversational. And if it doesn't answer the question, or if it requires a next step, we have the opportunity to then take an action for the employee, to say, you know, click here and we'll make this change on your system, etcetera.
So it's much more conversational, again, in nature than just giving somebody a link or just giving somebody sort of a flat response. The next scenario I wanna talk about is our AI service desk. So this is, you know, the customer support scenario, which is common to likely all of our industries. All of our support today is done through a chatbot. Our old bot was a rule-based system that matched keywords and provided initial answers from them. And as a result, unfortunately, because we didn't have great knowledge behind it, in many cases more than 90% of our bot engagements resulted in tickets that actually needed to be passed through. So what we were able to do is we invested in GenAI.
We built this AI service desk, and we were actually able to clean up our data and provide accurate step-by-step solutions to user inquiries. And thus far, we've just been live since February on this, and we've seen ticket deflection rates of 20 to 30%, meaning 20 to 30 percent of the things we used to have to pass through before are no longer being passed through, because we're actually providing the right information for either the employee to solve the problem themselves, or we're solving it for them through the bot.
And we are anticipating a further 20% deflection by the end of May, because we're doing a lot of model tuning, which is, you know, an important part of this. Right? You've gotta start somewhere, and then you've gotta continue to tune your model as you go along. Moving along, so we've had all these applications built on top of our Oracle AI stack, and I just wanted to provide a quick bird's-eye view of it, just to illustrate that we have capabilities that range from infrastructure to data platforms to a number of AI services. And one of the key differentiators that I'll just quickly mention is all of these things are integrated now with our Oracle SaaS applications. So if you're already using some of the Oracle SaaS applications as part of your portfolio, many of these AI services I'm talking about are already, you know, part of that. So just something for you to consider.
In addition, to accelerate building AI applications, Oracle's gonna be releasing very soon a new managed service called the OCI GenAI Agent Service. It's a fully managed service that actually automates the end-to-end process, and it includes the whole process from understanding the user prompts to the data retrieval that we've talked about and taking necessary actions. So the retrieved data itself is actually used to trigger execution of the contextually correct action in this case. In the example I'm showing you here, the user query is booking a flight, and it's processed with the final output being the confirmation of the booking itself. So this is a really nice option, whether it's the OCI service or some other service, for enterprises who are looking to go to market with AI but maybe don't have all the in-house skills to make it happen, or don't have the time. There are managed services coming to market in this space.
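The flight-booking example can be sketched as a simple dispatch from a model-selected intent to an application action. This is an illustrative sketch of the general agent pattern, not the actual service's API; the action names and arguments are invented for the example.

```python
# Illustrative sketch of the agent pattern: the model interprets a user prompt
# into a structured intent plus arguments, and the application executes the
# matching action. Action names and arguments are invented for the example.

def book_flight(args):
    return f"Booking confirmed: {args['origin']} to {args['destination']}"

ACTIONS = {
    "book_flight": book_flight,
}

def run_agent(intent, args):
    """Dispatch the action the model selected from the user's prompt."""
    handler = ACTIONS.get(intent)
    if handler is None:
        return "Sorry, that action isn't supported yet."
    return handler(args)

# In a real agent service, the intent and arguments would come from the LLM's
# interpretation of a prompt like "Book me a flight from SFO to JFK".
confirmation = run_agent("book_flight", {"origin": "SFO", "destination": "JFK"})
```

Keeping the action table explicit like this is also what lets an application constrain the agent to a vetted set of safe operations.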
And then if you're interested in building your own models from scratch, say you really, you know, have the time, you have the skills, you've got, you know, some great ideas or a use case that's very, very unique, you may wanna consider OCI Superclusters. They provide one of the highest performance and lowest cost GPU cluster technologies in the world, with really great networking, large local storage, and bare metal compute. So with that being said, I think I'm just about out of time here, so I'm gonna quickly wrap up. I just wanted to say, you know, GenAI provides an excellent opportunity for enterprises to deliver on their key scenarios, like search, support, and contact center, as well as many other cases that you have. Realizing this potential requires adoption and execution of good AI practices; I'm just gonna call out a quick few.
So when you're building your product, you wanna carefully define your end objective and how it supports your business goals. You know, use data that is representative of your scenario specifically, and keep it updated to ensure the models work well in practice. You wanna set up introspection and visualization tools to understand the data pipeline, and you wanna make sure you understand that end-user experience. Regarding the people, it's critical for you to identify the key stakeholders who have domain knowledge and the customers who use the service, and to have diverse groups of testers who are representative of your broader use cases; the more diverse, the better, to consider your use cases.
And you wanna use that feedback to continually learn from your user experience so you can drive the right improvements in the right areas. And then from a technology standpoint, you wanna leverage the right foundation: understand the foundational models, leverage the right foundational models, or consider managed services from cloud providers, and fine-tune them with your data. There's lots of options out there now. What's most important and imperative in the technology space is ensuring that you understand the data security and privacy of anything that you're using. And then you wanna do continuous analysis and monitoring to ensure that your model is performing well throughout its entire life cycle. So to close out here, I've got a couple of resources that I'm providing; my email and our product leads' emails are also included on these slides if you have interest in any of our products in particular.
I'd like to say thank you so much for your time today. I've really enjoyed the session, and I hope you're able to collect maybe even a few snippets of information that will help you along with your AI journey. It was a pleasure to discuss AI with all of you today, and thank you for your participation.