FIS Toronto 2024

How to think about the economics of AI

Ajay Agrawal. Photo: Jack Smith

The most underrated area of innovation in artificial intelligence is not in computing, nor is it in the development of algorithms or techniques for data collection. It is in the human ability to recast problems in terms of predictions. 

Leading economist and academic Ajay Agrawal told the Fiduciary Investors Symposium in Toronto that it helps to think of AI and machine learning as “simply a drop in the cost of prediction”. 

Agrawal serves as the Geoffrey Taber Chair in Entrepreneurship and Innovation at the University of Toronto’s Rotman School of Management, as well as being a Professor of Strategic Management. 

“AI is computational statistics that does prediction,” Agrawal said. 

“That’s all it is. And so, on the one hand, that seems very limiting. On the other hand, the thing that’s so remarkable about it is all the things we’ve discovered that we can do with high fidelity prediction.” 

Agrawal said prediction is, in simple terms, “taking information you have to generate information you don’t have”. And it’s “the creativity of people to recast problems, that none of us in this room characterised as prediction problems, into prediction” that underpins developments in and the potential of AI, he said. 

“Five years ago, probably nobody in this room would have said driving is a prediction problem,” he said.

“Very few people in the room would have said translation is a prediction problem. Very few of you would have said replying to email is a prediction problem. But that’s precisely how we’re solving all those things today.” 

Whether it’s predictive text when replying to an email or enhancing investment performance, the supporting AI systems are “all implementations of statistics and prediction”, Agrawal said. 

These prediction models reached a zenith in large language models (LLMs), where machines were trained on how to predict the next most likely word in a sequence of words that made up sentences, paragraphs and whole responses. 

“If you think about language, let’s say English, every book, every poem, every scripture that you’ve ever read, is a resequencing of the same…characters: 26 letters, a few punctuation marks just re-sequenced over and over again makes all the books. What if we could do that with actions?” Agrawal said. 
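Agrawal did not walk through the mechanics, but the "next most likely word" idea can be illustrated with a toy bigram model: count which word most often follows each word in some text, then "predict" by looking up that count. The corpus here is an invented example, and real LLMs use vastly richer context than a single preceding word.

```python
from collections import Counter, defaultdict

# Toy illustration: a bigram model "predicts" the next word as the one
# that most often followed the current word in its training text.
corpus = (
    "the cost of prediction falls and the cost of prediction "
    "keeps falling as the model sees more data"
).split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(predict_next("cost"))  # -> "of"
print(predict_next("of"))    # -> "prediction"
```

The same lookup-and-predict loop, scaled up enormously and conditioned on long sequences rather than one word, is what generates whole sentences and paragraphs.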

From LLMs to LBMs 

The principles of LLMs (next most likely word) are now being applied to large behavioural models – robots – by training them to predict the next most likely verb or action. 

“In that case, we could take all the tasks – think about everyone that you know, every job they do, and every job probably has 30 or 40 different tasks, so there’s hundreds of thousands of tasks. But what if all those tasks are just really sequences of a small number of verbs?  

“So what they’re doing is they’re training the robots to do a handful of verbs – 50, 80, 120 verbs. Then you give the robot a prompt, just like ChatGPT. You say to the robot, ‘can you please unpack those boxes and put the tools on the shelf?’ The robot hears the prompt, and then predicts what is the optimal sequence of verbs in order to complete the task.”

It is, Agrawal said, “another application of prediction”. 

Agrawal said that businesses and industries are now facing a “tidal wave of problems that have been recast as prediction problems”.  

“So we now are pointing machine intelligence at many of these.  

“The problem is, it has come so hard and so fast, that people seem to be struggling with where do we start? And how do we actually point this towards something useful?” 

Agrawal said it pays to be very specific about the metric or the performance measure that needs to be improved, and then “[point] the AI at that”.  

“AIs are mathematical optimisers, they have to know what they’re optimising towards,” he said. 

“If the problem is a tidal wave of new solutions available, and the problem is we don’t know how to harness it, here is a way to think about the solution – a short-term and a long-term strategy.” 

Agrawal said short-term strategies are basically productivity enhancements. They’re deployable within a year, aim for 20 per cent productivity gains, and have a payback period of no more than two years.  

“And here’s the key point, no change in the workflow,” he said. 

“In other words, it’s truly a technology project where you just drop it in, but the rest of the system stays the same.” 

Genuine game-changers 

Long-term strategies take longer to deploy but they’re genuine game-changers, offering gains 10 times or more greater than short-term deployments. Critically, though, they require a redesign of workflows. Agrawal said AI, like electricity, is a general-purpose technology, and a useful analogy is when factories were first electrified and began to move away from steam-powered engines.

In the first 20 years after electric power became available, take-up was very low – less than 3 per cent of factories used electricity, and when they did, “the main value proposition…was it will reduce your input costs” by doing things like replacing gas lamps.

“Nobody wanted to tear apart their existing infrastructure in order to have that marginal benefit,” Agrawal said. 

“The only ones that were experimenting with electricity were entrepreneurs building new factories, and even then, most of them said, ‘No, I want to stick with what I know’” in terms of factory design. 

But a few entrepreneurs realised there was a chance to completely reimagine and redesign a factory that was powered by electricity, because no longer was it dependent on transmitting power from engines outside the factory via long steel shafts to drive the factory machinery. 

When the shafts became obsolete, so did the large columns inside the factories to support them. And that opened the door to lightweight, lower-cost construction, and factory design and layout changed to having everything on one level. 

“They redesigned the entire workflow,” Agrawal said.

“The machines, the materials, the material handling, the people flow, everything [was] redesigned. Some of the factories got up to 600 per cent productivity lift.” 

Agrawal said initially, the productivity differences between electrified and non-electrified factories were very small. 

“You could be operating a non-electrified factory and think those guys who want the newfangled electricity, it’s more trouble than it’s worth,” he said. 

“But the productivity benefits of electricity just started taking off.

“Now we’re seeing the same thing with machine intelligence [and] the adoption rate of AI.” 

This one learns from us 

However, Agrawal said the characteristic that “makes AI different from every other tool we’ve ever had in human history is that it’s the only one that learns from us”.

He said this explains the headlong development rush and the commitment of so much capital to the technology. 

“The way AI works is that whoever gets an early lead, their AI gets better; when their AI gets better, they get more users; when they get more users, they get more data; when they get more data, the AI’s predictions improve,” he said.

“And so, once they get that flywheel turning, it gets very hard to catch up to them.” 
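The flywheel Agrawal describes is a compounding feedback loop, which a toy simulation can make concrete. The parameters below (user growth, data per user, quality gain per unit of data) are entirely hypothetical, chosen only to show the reinforcing dynamic, not to model any real AI business.

```python
# Illustrative only: a toy model of the data flywheel with made-up
# parameters. Better prediction quality attracts users, users generate
# data, and more data improves prediction quality.
def flywheel(quality=0.5, users=1_000, rounds=5):
    history = []
    for _ in range(rounds):
        users = int(users * (1 + quality))         # better AI draws more users
        data = users * 10                          # each user contributes data
        quality = min(0.95, quality + data / 1e7)  # more data -> better quality
        history.append((users, round(quality, 3)))
    return history

for users, quality in flywheel():
    print(users, quality)
```

Because each round's growth rate depends on the quality built up in earlier rounds, user numbers compound, which is why an early lead becomes hard to overtake.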

Agrawal said AI and machine learning are developing so quickly that it’s virtually impossible for companies and businesses to keep up, let alone implement and adapt.

“The thing I would pay attention to is not so much the technology capability, because obviously that’s important and it’s moving quickly,” he said. 

“But what I’m watching are the unit economics of the companies who are first experimenting with it, and then putting it into production,” he said. 

“Cost just keeps going down because the AI is learning and getting better. And so my sense there is, just pay laser-close attention to the unit economics of what it costs to do a thing.

“And you can go right down the stack of every good and service watching how, when you start applying these machine intelligence solutions to that thing, do the unit economics change?” 
