AI-Generated Transcript
It is my pleasure to welcome Dr. Andrew Ng tonight. Andrew is the Managing General Partner of AI Fund, founder of DeepLearning.AI and Landing AI, chairman and co-founder of Coursera, and an adjunct professor of Computer Science here at Stanford.
Previously, he started and led the Google Brain team, through which he helped Google adopt modern AI. He was also director of the Stanford AI Lab. About 8 million people, one in 1,000 persons on the planet, have taken an AI class from him.
And through both his education and his AI work, he has changed numerous lives. Please welcome Dr. Andrew Ng.
Thank you, Lisa. It’s good to see everyone. So what I want to do today is chat to you about some opportunities in AI.
So I’ve been saying AI is the new electricity. One of the difficult things to understand about AI is that it is a general purpose technology, meaning that it’s not useful only for one thing, but it’s useful for lots of different applications, kind of like electricity. If I were to ask you, what is electricity good for? It’s not any one thing.
It’s a lot of things. So what I’d like to do is start off sharing with you how I view the technology landscape, and this will lead into the set of opportunities. So a lot of excitement about AI.
And I think a good way to think about AI is as a collection of tools. So this includes a technique called Supervised Learning, which is very good at recognizing things or labeling things, and Generative AI, which is a relatively new, exciting development. If you're familiar with AI, you may have heard of other tools.
But I’m going to talk less about these additional tools. And I’ll focus today on what I think are currently the two most important tools, which are Supervised Learning and Generative AI. So Supervised Learning is very good at labeling things or very good at computing input to output or A to B mappings.
Given input A, give me an output B. For example, given an email, we can use Supervised Learning to label it as spam or not spam. The most lucrative application of this that I've ever worked on is probably online advertising, where given an ad, we can label whether a user is likely to click on it and therefore show more relevant ads. For self-driving cars, given the sensor readings of a car, we can label it with where the other cars are. One project that my team at AI Fund worked on was ship route optimization, where given a route that a ship is taking or considering taking, we can label that with how much fuel we think the ship will consume, and use this to make ships more fuel efficient. We also did a lot of work in automated visual inspection in factories, where you can take a picture of a smartphone that was just manufactured and label whether there is a scratch or any other defect in it.
Or if you want to build a restaurant review reputation monitoring system, you can have a little piece of software that looks at online restaurant reviews and labels them as positive or negative sentiment. So one nice thing, one cool thing about supervised learning is that it's not useful for just one thing; it's useful for all of these different applications and many more besides.
Let me just walk through concretely the workflow of one example of a supervised learning, labeling-things kind of project. If you want to build a system to label restaurant reviews, you then collect a few data points, or collect a data set, where you say "The pastrami sandwich is great," that's positive; "Service was slow," that's negative.
"My favorite chicken curry," that's positive. And here I've shown three data points, but if you're building this, you may get thousands of data points like this, or thousands of training examples, as we call them.
And the workflow of a machine learning project, of an AI project is you get labeled data, maybe thousands of data points. Then you have an AI engineering team train an AI model to learn from this data. And then finally you would find maybe a cloud service to run the trained AI model.
And then you can feed it "Best mochi I've ever had," and it tells you that's positive sentiment.
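To make this workflow concrete, here is a minimal sketch of the collect-labeled-data, train, then label-new-text loop described above. The library choice (scikit-learn) and the tiny dataset are illustrative assumptions, not something from the talk.

```python
# Minimal sketch of the supervised learning workflow described above:
# collect labeled examples, train a model, then label new text.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Step 1: labeled data (in practice, thousands of training examples).
reviews = [
    "The pastrami sandwich is great",
    "Service was slow",
    "My favorite chicken curry",
]
labels = ["positive", "negative", "positive"]

# Step 2: have an AI team train a model on the labeled data.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(reviews, labels)

# Step 3: run the trained model (e.g., behind a cloud service) on new reviews.
print(model.predict(["Best mochi I've ever had"]))  # likely prints ['positive']
```

In practice the model and the serving stack would be far more involved; the point is just the A-to-B mapping from review text to a sentiment label.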
And so I think the last decade was maybe the decade of large-scale supervised learning. What we found, starting about 10 to 15 years ago, was that if you were to train a small AI model, so a small neural network or small deep learning algorithm, basically a small AI model, maybe not on a very powerful computer, then as you fed it more data, its performance would get better for a little bit, but then it would flatten out, it would plateau, and it would stop being able to use the data to get better and better. But if you were to train a very large AI model, with lots of compute on powerful GPUs, then as we scaled up the amount of data we gave the machine learning model, its performance would kind of keep on getting better and better. So this is why, when I started and led the Google Brain team, the primary mission that I directed the team to pursue at the time was: let's just build really, really large neural networks that we then feed a lot of data to.
And that recipe fortunately worked. And I think that idea of driving large amounts of compute and large-scale data, that recipe has really helped drive a lot of AI progress over the last decade. So if that was the last decade of AI, I think this decade is turning out to be about also doing everything we had in supervised learning, but adding to it the exciting tool of generative AI.
So many of you, maybe all of you, have played with ChatGPT and Bard and so on. Just given a piece of text, which we call a prompt, like "I love eating," if you run this multiple times, maybe you get "bagels with cream cheese," or "my mother's meatloaf," or "out with friends." And the AI system can generate output like that.
Given the amount of buzz and excitement about generative AI, I thought I'd take just half a slide to say a little bit about how this works. So it turns out that generative AI, at least this type of text generation, at its core is using supervised learning, that input-to-output mapping, to repeatedly predict the next word. And so if your system reads on the Internet a sentence like "My favorite food is a bagel with cream cheese and lox," then this is translated into a few data points, where if it sees "My favorite food is a," it tries to guess that the right next word was "bagel"; or given "My favorite food is a bagel," it tries to guess that the next word is "with"; and similarly, if it sees "My favorite food is a bagel with," the right guess for the next word would have been "cream." So by taking text that you find on the Internet or other sources, and by using this input-output supervised learning to repeatedly predict the next word, if you train a very large AI system on hundreds of billions of words, or in the case of the largest models now more than a trillion words, then you get a large language model like ChatGPT.
And there are additional important technical details. I talked about predicting the next word; technically, these systems predict the next subword, a part of a word called a token.
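To make the next-word idea concrete, here is a rough illustration (not the actual training pipeline of any particular model) of how a single sentence becomes several input-to-output training examples. Real systems split text into subword tokens and train on hundreds of billions of them; this sketch just uses whole words.

```python
# Rough illustration: turn one sentence into (context, next word) training
# pairs for next-word prediction. Real LLMs use subword tokens, not words.
sentence = "My favorite food is a bagel with cream cheese and lox"
words = sentence.split()

training_pairs = []
for i in range(1, len(words)):
    context = " ".join(words[:i])   # input A: the text seen so far
    next_word = words[i]            # output B: the word to predict
    training_pairs.append((context, next_word))

for context, next_word in training_pairs[:3]:
    print(f"{context!r} -> {next_word!r}")
# 'My' -> 'favorite'
# 'My favorite' -> 'food'
# 'My favorite food' -> 'is'
```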
And then there are other techniques like RLHF for further tuning the AI output to be more helpful, honest, and harmless. But at the heart of it is this use of supervised learning to repeatedly predict the next word; that's really what's enabling the exciting, really fantastic progress on large language models. So while many people have seen large language models as a fantastic consumer tool, where you can go to a website like ChatGPT's or Bard's or other large language models and use it, and I think they're fantastic for that too, there's one other trend I think is still underappreciated, which is the power of large language models not just as a consumer tool, but as a developer tool. So it turns out that there are applications that used to take me months to build that a lot of people can now build much faster by using a large language model. Specifically, the workflow for supervised learning, building the restaurant review system, say, would be that you need to get a bunch of labeled data, and maybe that takes a month to get a few thousand data points, and then have an AI team train and tune and really get optimized performance on your AI model.
Maybe that’ll take three months. Then find a cloud service to run it, make sure it’s running robustly, make sure it’s ruggedized. Maybe that’ll take another three months.
So a pretty realistic timeline for building a commercial-grade machine learning system is like six to twelve months. Teams I've led have often taken roughly six to twelve months to build and deploy these systems, and some of them turned out to be really valuable. But this is a realistic timeline for building and deploying a commercial-grade AI system.
In contrast with prompt based AI, where you write a prompt, this is what the workflow looks like. You can specify a prompt that takes maybe minutes or hours, and then you can deploy it to the cloud, and that takes maybe hours or days. So there are now certain AI applications that used to take me literally six months, maybe a year to build that many teams around the world can now build in maybe a week.
And I think this is already starting, but the best is still yet to come. This is starting to open up a flood of a lot more AI applications that can be built by a lot of people. So I think many people still underestimate the magnitude of the flood of custom AI applications that I think is going to come down the pipe.
Now, I know you probably were not expecting me to write code in this presentation, but that's what I'm going to do. So it turns out this is all the code I need in order to write a sentiment classifier. Some of you will know Python, I guess: import some tools from OpenAI, and then I have this prompt that says classify the text delimited by three dashes as having either a positive or negative sentiment.
I don’t know. Learned a lot and also made great new friends. All right, so that’s my prompt, and now I’m just going to run it.
And I've never run it before, so I really hope... thank goodness, we got the right answer. And this is literally all the code it takes to build a sentiment classifier. And so today, developers around the world can take literally maybe ten minutes to build a system like this.
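The transcript does not capture the code itself, so here is a sketch of what such a prompt-based classifier could look like using OpenAI's current Python SDK. The model name, exact prompt wording, and delimiter handling are assumptions for illustration, not the speaker's original code.

```python
# Sketch of a prompt-based sentiment classifier in the spirit of the demo.
# Assumes the OPENAI_API_KEY environment variable is set; the model choice
# is an assumption, not the one used in the talk.
from openai import OpenAI

client = OpenAI()

prompt = """Classify the text delimited by three dashes as having either
a positive or negative sentiment.
---
Learned a lot and also made great new friends.
---"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)  # e.g., "Positive"
```

Compared with the supervised learning workflow earlier, there is no labeled data set and no training step; the prompt alone specifies the task, which is why this can be built in minutes rather than months.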
And that's a very exciting development. So one of the things I've been working on was trying to teach online classes about how to use prompting not just as a consumer tool, but as a developer tool. Having talked about the technology landscape, let me now share my thoughts on some of the AI opportunities I see. This shows what I think is the value of different AI technologies today, and I'll also talk about three years from now.
The vast majority of financial value from AI today is, I think, supervised learning, which for a single company like Google can be worth more than $100 billion a year. And there are also millions of developers building supervised learning applications. So it's already massively valuable, and also with tremendous momentum behind it, just because of the sheer effort going into finding and building applications.
And generative AI is the really exciting new entrant, which is much smaller right now. And then there are the other tools, which I'm including for completeness. If the size of these circles represents the value today, this is what I think it might grow to in three years.
So supervised learning, already really massive, may double, say, in the next three years, from truly massive to even more massive. And generative AI, which is much smaller today, I think will much more than double in the next three years, because of the amount of developer interest, the amount of venture capital investment, and the number of large corporations exploring applications. And I also just want to point out that three years is a very short time horizon.
If it continues to compound at anything near this rate, then in six years it'll be vastly larger still. But this light shaded region in green or orange, that light shaded region is where the opportunity is for either new startups or for large incumbent companies to create value and to capture value. One thing I hope you take away from this slide is that all of these technologies are general purpose technologies.
So in the case of supervised learning, a lot of the work that had to be done over the last decade, and that is continuing into the next decade, is to identify and to execute on the concrete use cases. And that process is also kicking off for generative AI. So for this part of the presentation, I hope you take away that general purpose technologies are useful for many different tasks.
A lot of value remains to be created using Supervised learning. And even though we’re nowhere near finishing figuring out the exciting use cases of Supervised learning, we have this other fantastic tool of generative AI which further expands the set of things we can now do using AI. But one caveat, which is that there will be short term fads along the way.
So I don’t know if some of you might remember the app called Lenza. This is the app that would let you upload pictures of yourself and they’ll render a cool picture of you as an astronaut or a scientist or something. And it was a good idea and people liked it.
And its revenue just took off like crazy through last December, and then it fell off just as quickly. And that's because Lensa, it was a good idea and people liked it, but it was a relatively thin software layer on top of someone else's really powerful APIs. And so even though it was a useful product, it wasn't a defensible business.
And when I think about apps like Lensa, I'm actually reminded of when Steve Jobs gave us the iPhone. Shortly after, someone wrote an app that I paid $1.99 for to do this: to turn on the LED, to turn the phone into a flashlight. And that was also a good idea, to write an app to turn on the LED light. But it also wasn't defensible long term; it didn't create very long-term value, because it was easily replicated and underpriced, and eventually incorporated into iOS.
But with the rise of iOS, with the rise of the iPhone, someone also figured out how to build things like Uber and Airbnb and Tinder, the very long-term, very defensible businesses that created sustaining value. And I think with the rise of generative AI, with the rise of these new AI tools, what really excites me is the opportunity to create those really deep, really hard applications that hopefully can create very long-term value. So the first trend I want to share is that AI is a general purpose technology.
And a lot of the work that lies ahead of us is to find the very diverse use cases and to build them. There’s a second trend I want to share with you which relates to why AI isn’t more widely adopted yet. It feels like a bunch of us have been talking about AI for like 15 years or something.
But if you look at where the value of AI is today, a lot of it is still very concentrated in consumer software Internet companies. Once you go outside tech, outside consumer software Internet, there's some AI adoption, but a lot of it is still very early. So why is that? It turns out that if you were to take all current and potential AI projects and sort them in decreasing order of value, then to the left of this curve, the head of this curve, are the multibillion-dollar projects, like advertising or web search, or e-commerce product recommendations at a company like Amazon.
And it turns out that about 10 to 15 years ago, various of my friends and I figured out a recipe for how to hire, say, 100 engineers to write one piece of software to serve more relevant ads, and apply that one piece of software to a billion users and generate massive financial value. So that works. But once you go outside consumer software Internet, hardly anyone has 100 million or a billion users they can write and apply one piece of software to.
So once you go to other industries, as we go from the head of this curve on the left over to the long tail, these are some of the projects I see and am excited about. I was working with a pizza maker that was taking pictures of the pizza they were making, because they needed to do things like make sure the cheese is spread evenly. So this is maybe a $5 million project, but that recipe of hiring 100 engineers, or dozens of engineers, to work on a $5 million project doesn't make sense.
Or another example: working with an agriculture company, we figured out that if we use cameras to find out how tall the wheat is (wheat is often bent over because of wind or rain or something), and if we can chop off the wheat at the right height, then that results in more food for the farmer to sell and is also better for the environment. But this is another $5 million project.
That old recipe of hiring a large group of highly skilled engineers to work on this one project doesn't make sense. And similarly, materials grading, cloth grading, sheet metal grading, many projects like this. So whereas to the left, in the head of this curve, there's a small number of, let's say, multibillion-dollar projects, and we know how to execute on those and deliver the value, in other industries
I'm seeing a very long tail of tens of thousands of, let's call them $5 million projects, that until now have been very difficult to execute on because of the high cost of customization. The trend that I think is exciting is that the AI community has been building better tools that let us aggregate these use cases and make it easy for the end user to do the customization. So specifically, I'm seeing a lot of exciting low-code and no-code tools that enable the user to customize the AI system.
What this means is instead of me needing to worry that much about pictures of pizza, we're starting to see tools that can enable the IT departments of the pizza-making factory to train an AI system on their own pictures of pizza to realize this $5 million worth of value. And by the way, those pictures of pizza, they don't exist on the Internet, so Google and Bing don't have access to these pictures.
We need tools that can be used by the pizza factory themselves to build and deploy and maintain their own custom AI system that works on their own pictures of pizza. And broadly, the technology for enabling this, some of it is prompting, text prompting, visual prompting, really large language models and similar tools like that; or a technology called data-centric AI, whereby instead of asking the pizza factory to write a lot of code, which is challenging, we can ask them to provide data, which turns out to be more feasible.
And I think this second trend is important because I think it is a key part of the recipe for taking the value of AI, which so far still feels very concentrated in the tech world and the consumer software Internet world, and pushing it out to all industries, really to the rest of the economy, which, it's sometimes easy to forget, is much bigger than the tech world. So, the two trends I shared: AI is a general purpose technology, with lots of concrete use cases to be realized, as well as low-code, no-code, easy-to-use tools enabling AI to be deployed in more industries. How do we go after these opportunities? So, about five years ago, there was a puzzle I wanted to solve, which is: I felt that many valuable AI projects were now possible.
I was thinking, how do we get them done? And having led AI teams at Google and Baidu, in big tech companies, I had a hard time figuring out how I could operate a team in a big tech company to go after a very diverse set of opportunities, in everything from maritime shipping to education to financial services to healthcare and on and on. It's just very diverse use cases, very diverse go-to-markets, and really very diverse customer bases and applications. And I felt that the most efficient way to do this would be if we could start a lot of different companies to pursue these very diverse opportunities.
So that's why I ended up starting AI Fund, which is a venture studio that builds startups to pursue a diverse set of AI opportunities. And of course, in addition to lots of startups, incumbent companies also have a lot of opportunities to integrate AI into existing businesses. In fact, one pattern I'm seeing for incumbent businesses is that distribution is often one of the significant advantages of incumbent companies. If they play their cards right, that can allow them to integrate AI into their products quite efficiently. But just to be concrete, where are the opportunities? So this is what I think of as the AI stack. At the bottom level is the hardware, semiconductor layer.
Fantastic opportunities there, but very capital intensive, very concentrated. So it needs a lot of resources, relatively few winners. So some people can and should play there.
I personally don’t like to play there myself. There’s also the infrastructure layer, also fantastic opportunities, but very capital intensive, very concentrated. So I tend not to play there myself either.
And then there's the developer tool layer. What I showed you just now, I was actually using OpenAI's API as a developer tool. And I think the developer tool sector is hypercompetitive; look at all the startups chasing OpenAI right now.
But there will be some mega-winners. And so I sometimes play here, but primarily when I think there is a meaningful technology advantage, because I think that earns you the right, or earns you a better shot, at being one of the mega-winners. And then lastly, even though a lot of the media attention and the buzz is in the infrastructure and developer tooling layers, it turns out that those layers can be successful only if the application layer is even more successful.
And we saw this with the rise of SaaS as well. A lot of the buzz excitement is on the technology, the tooling layer, which is fine, nothing wrong with that. But the only way for that to be successful is if the application layer is even more successful so that frankly they can generate enough revenue to pay the infrastructure and the tooling layer.
So actually, let me mention one example, Amorai. I was actually just texting the CEO yesterday. Amorai is a company that we built that uses AI for romantic relationship coaching. And just to point out, I'm an AI guy, and I feel like I know nothing really about romance. And if you don't believe me, you can ask my wife.
She will confirm that I know nothing about romance. But when we wanted to build this, we wound up getting together with the former CEO of Tinder, Renate Nyborg, and with my team's expertise in AI and her expertise in relationships (because she ran Tinder, she knows more about relationships than I think anyone I know), we were able to build something pretty unique using AI for romantic relationship mentoring.
And the interesting thing about applications like these is, when we look around, how many teams in the world are simultaneously expert in AI and in relationships? And so at the application layer, I'm seeing a lot of exciting opportunities that seem to have a very large market, but where the competition is very light relative to the magnitude of the opportunity. It's not that there are no competitors, it's just much less intense compared to the developer tool or the infrastructure layer.
And so because I've spent a lot of time iterating on a process of building startups, what I'm going to do is just very transparently tell you the recipe we've developed for building startups. After many years of iteration and improvement, this is how we now build startups. My team always has access to a lot of different ideas, internally generated ideas and ideas from partners.
And I want to walk through this with one example of something we did, which is a company called Bearing AI, which uses AI to make ships more fuel efficient. So this idea came to me a few years ago when a large Japanese conglomerate called Mitsui, which is a major shareholder in and operator of major shipping lines, came to me and said, hey, Andrew, you should build a business to use AI to make ships more fuel efficient. And the specific idea was, think of it as a Google Maps for ships, where you can suggest to a ship, or tell a ship, how to steer so that you still get to your destination on time, but using, it turns out, about 10% less fuel.
And so what we now do is spend about a month validating the idea. We double-check: is this idea even technically feasible? And we talk to prospective customers to make sure there's a real market for it. So we spend up to about a month doing that.
And if it passes this stage, then we will go and recruit a CEO to work with us on the project. When I was starting out, I used to spend a long time working on the project myself before bringing on a CEO. But after iterating, we realized that bringing on a leader at the very beginning to work with us reduces a lot of the burden of having to transfer knowledge, or of having a CEO come in and having to revalidate what we discovered. So the process, we've learned, is much more efficient.
We just bring on the leader at the very start. And so in the case of Bearing AI, we found a fantastic CEO, Dylan Keil, who's a repeat entrepreneur with one successful exit before. And then we spend three months, six two-week sprints, working with them to build a prototype as well as do deep customer validation. If it survives this stage, and we have about a two-thirds, 66%, survival rate, we then write a first check, which gives the company the resources to hire an executive team, build the key team, get the MVP, the minimum viable product, working, and get some real customers. And then after that, hopefully, it successfully raises additional external rounds of funding and can keep on growing and scaling. So I'm really proud of the work that my team was able to do to support Mitsui's idea and Dylan Keil as CEO.
And today there are hundreds of ships on the high seas right now that are steering themselves differently because of Bearing AI. And 10% fuel savings translates to, rough order of magnitude, maybe $450,000 in fuel savings per ship per year, and of course it is also, frankly, quite a bit better for the environment.