Computing power has advanced to the point that the once-impractical process of reinforcement learning is now a viable tool for asset owners, the Top1000funds.com Fiduciary Investors Symposium has heard. 

Reinforcement learning trains software to make decisions through trial and error, and is used in investment decision making to generate the best possible result. 

John Hull, Maple Financial chair in derivatives and risk management at the Joseph L. Rotman School of Management, told the symposium that reinforcement learning has several advantages and outperforms simpler modelling approaches. 

“It gives you the freedom to choose your objective function – it’s a danger with some of the simpler hedging strategies and so on that you’re just assuming good outcomes are as bad as bad outcomes,” he said. 

“You can choose your time horizon, tests indicate that it’s robust… and gives good results during stress periods and there’s a big saving in transaction costs. Why are we talking about it now? Well, because computers are now fast enough to make it a viable tool.” 

Hull said reinforcement learning techniques can reduce transaction costs by as much as 25 per cent compared with traditional hedging approaches. 

“It’s a way of generating a strategy for taking decisions in a changing environment – you’re not just taking one decision, but a sequence of decisions,” he said. 

“Perhaps you’re taking a decision today and then you take another decision tomorrow, and so on. Let’s suppose you’re interested in a strategy for investing in a certain stock and say what’s a good strategy for this stock – I think it’s going to work out okay, but it may not. What strategy should I use over the next three months? What do you do?” 

Hull said normally a stochastic process – which assesses different outcomes based on changing variables – would be used to assess a stock. 

“It’s uncertain how the stock price is going to evolve and you might use a mathematical stochastic process, you might use historical data on the stock price behaviour, something like that. You have some model for how the stock price behaves,” Hull said. 

“Then your problem is defined by what we call states/actions/rewards.” 

Hull said the aim is quite simply to decide what action should be taken in each possible state to maximise the expected reward.  

“You’d say okay, we don’t know how this stock price is going to evolve but it will evolve in some way, and so there will be certain states we find ourselves in. We should take a certain action, and that’s what we’re trying to determine, and there will be a certain reward,” Hull said.  

“In other words, you’ll make a profit or a loss. The way I think about it, it’s just sophisticated trial and error.” 

This means starting off with “no idea at all” about what a good action to take is, and then trying different actions and observing the hypothetical outcomes. 

“It works well or it doesn’t work well, then you try a different action and so on and then eventually you come up with what seems to be the best action to take when a particular state is encountered,” Hull said. 
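Hull’s “sophisticated trial and error” over states, actions and rewards can be sketched with a toy tabular learner. Everything below – the discretised states, the three actions, the assumed price model – is an illustrative assumption, not FinHub’s actual implementation, and the update uses immediate rewards only (a one-step simplification of full reinforcement learning).

```python
import random

random.seed(0)  # fixed seed so the toy run is reproducible

STATES = ["price_down", "price_flat", "price_up"]   # assumed market states
ACTIONS = ["hold", "buy", "sell"]

# Q[state][action]: running estimate of the expected reward
Q = {s: {a: 0.0 for a in ACTIONS} for s in STATES}

alpha, epsilon, steps = 0.1, 0.2, 5000

def environment(state, action):
    """Toy model: reward and next state under an assumed price process."""
    drift = {"price_down": -1.0, "price_flat": 0.0, "price_up": 1.0}[state]
    move = drift + random.gauss(0, 0.5)
    reward = {"buy": move, "sell": -move, "hold": 0.0}[action]
    next_state = ("price_up" if move > 0.3
                  else "price_down" if move < -0.3
                  else "price_flat")
    return reward, next_state

state = "price_flat"
for _ in range(steps):
    # trial and error: mostly exploit the best known action, sometimes explore
    if random.random() < epsilon:
        action = random.choice(ACTIONS)
    else:
        action = max(Q[state], key=Q[state].get)
    reward, state_next = environment(state, action)
    Q[state][action] += alpha * (reward - Q[state][action])  # nudge the estimate
    state = state_next

# The learned policy: the best-looking action in each state
policy = {s: max(Q[s], key=Q[s].get) for s in STATES}
print(policy)
```

With these assumed drifts, the learner should settle on buying in up states and selling in down states – the point is not the toy result but that the strategy emerges from repeated trials rather than from a closed-form model.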

Hull said reinforcement learning has traditionally been computationally expensive and “data hungry”, but that’s not the case these days. 

“But fortunately, the other thing that’s happened that makes this a viable tool… is that we can now generate unlimited amounts of synthetic data that’s indistinguishable from historical data,” he said. 

“You collect some historical data… maybe a couple of thousand items of historical data [and] you can generate as much synthetic data as you want to that is indistinguishable from that historical data.” 
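As an illustration of stretching “a couple of thousand items” of history into effectively unlimited training data, the sketch below simply resamples historical returns with replacement. FinHub’s actual generators are far more sophisticated (the goal being synthetic data statistically indistinguishable from history); plain resampling is only a stand-in for the idea, and the “historical” returns here are themselves simulated.

```python
import random

random.seed(1)

# Stand-in for ~2,000 observed daily returns (assumed, not real data)
historical_returns = [random.gauss(0.0005, 0.01) for _ in range(2000)]

def synthetic_path(returns, length, start_price=100.0):
    """Build one synthetic price path by resampling historical returns."""
    price, path = start_price, []
    for _ in range(length):
        price *= 1 + random.choice(returns)  # draw a past return at random
        path.append(price)
    return path

# Generate as many paths as the learner needs -- effectively unlimited
paths = [synthetic_path(historical_returns, 60) for _ in range(1000)]
print(len(paths), len(paths[0]))
```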

Hull said that while his experience has mostly been in applying reinforcement learning to the hedging of derivatives, there are many other areas where it can also be applied. 

“Because really it can be applied in any situation where the goal is to develop a strategy for achieving a particular objective in a changing market,” he said. 

“There’s something out there that’s going to change in a way you don’t know, and you have to model that.” 

Financial Innovation Hub, or FinHub for short, carried out the research that Hull presented to the symposium. 

Hull said one of the distinctive features of FinHub is that it’s not just academics within the Rotman School of Management that work on its projects, but also practitioners and the university’s engineering faculty. 

Reinforcement learning is just one of the projects FinHub has been working on, with Hull explaining the centre has also been doing work on natural language processing, amongst other initiatives. 

“We’ve worked with the Bank of Canada on monetary policy uncertainty,” Hull said. 

“We’ve done work on modelling volatility surfaces and using natural language processing to forecast different market variables.” 

A rigorous governance process needs to be at the top of investors’ minds if they wish to have a portfolio management approach that is versatile enough to adapt to changing investment environments and still provides sufficient accountability, according to three prominent pension investors from Canada, the US and the UK. 

Despite pension markets’ varying levels of maturity, the Top1000funds.com Fiduciary Investors Symposium in Toronto has heard the goal of combining portfolio resilience with meeting fund objectives is the same, and it can be achieved through different manifestations of governance structures. 

The State of Wisconsin Investment Board (SWIB) head economist and asset and risk allocation chief investment officer Todd Mattina said governance plays an important role in setting the overall fund objective. 

SWIB has close to $150 billion in assets under management, of which the majority is the Wisconsin Retirement System, which funds approximately 667,000 participants – or one in 10 Wisconsinites, to put the number into perspective. 

The fund’s liability is dividend-based – a pensioner receives a benefit on retirement, then accrues dividends over time – and the dividends are a function of SWIB’s average rate of return, Mattina said. 

“There’s a risk-sharing that involves the pensioners and the system. This has allowed us to keep the system fully funded over time, which is quite unique Stateside,” he said. 

“To the extent that we make average returns over a discount rate that’s set in the law, our pensioners receive dividends. To the extent that we have average returns below that key threshold, we actually claw back benefits.” 

To that end, SWIB has the objective of achieving a long-run rate of return of 6.8 per cent a year to keep the system fully funded and provide a stable dividend. 

“Our asset allocation has an explicit allocation to policy leverage, which is currently 12 per cent of the fund. That’s approved by our board just like the allocation to public equities and private markets. 

“What that allows us to do, essentially, is achieve our 6.8 per cent target rate of return while [being] able to leverage up a more efficient portfolio, which includes a significant amount of fixed income – we have 19 per cent allocation to TIPS [Treasury Inflation-Protected Securities] – and gives us some of the resilience factors.” 
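The mechanics Mattina describes can be shown with back-of-the-envelope arithmetic. Only the 6.8 per cent target and the 12 per cent policy leverage come from his remarks; the unlevered return and financing cost below are assumptions for illustration.

```python
# Illustrative only: how modest policy leverage can lift a more diversified,
# lower-returning portfolio toward the fund's target return.
target = 0.068
leverage = 0.12           # 12% policy leverage, per SWIB's asset allocation
unlevered_return = 0.063  # assumed return of the more efficient portfolio
borrow_cost = 0.030       # assumed cost of financing the leverage

# Holding 112% of assets, financed by borrowing 12%
levered_return = (1 + leverage) * unlevered_return - leverage * borrow_cost
print(f"levered return: {levered_return:.4f} vs target {target}")
```

Under these assumed numbers the levered portfolio earns roughly 6.7 per cent, close to the target, while holding a sizeable fixed-income allocation for resilience.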

Accountability first 

Meanwhile, the UK’s Local Pensions Partnership Investments (LPPI) chief investment officer Richard Tomlinson said for him, one of governance’s crucial roles is providing accountability in investment teams.  

LPPI is a part of the bigger Local Government Pension Scheme (LGPS). In 2015, the UK government began a process that saw individual LGPS funds (state and local authority pension funds) gathered into larger pools for purposes including cost reduction, and created seven consolidation vehicles, of which LPPI is one. 

LGPS collectively has £360 billion ($400 billion) in assets and 91 underlying funds, and LPPI manages approximately £25 billion ($32 billion) and has three clients. 

“We have a formal way [of governance] in the UK… which is the three lines of defence model so that the risk function does not report to me as CIO, it’s segregated through a CRO [chief risk officer] and we try and work in partnership,” he said. 

“The governance very much runs through how we operate from the way we’re structured. 

“For our clients, they know who’s accountable for their portfolio performance, which is basically myself and the investment team, as opposed to some other pension structures where you have this fragmented governance structure where it’s not always clear where accountability sits.” 

Tomlinson said he has been enjoying the different lenses this governance structure brings to the portfolio. 

“I really like having a segregated risk function, who think completely differently to me, because I’ve worked in places where the biggest risk factor has been the grouping of the lead PM or the CIO,” he said. 

“You find they’ve built all the models, they’ve built the risk architecture, and there’s a glaring commonality of the way they’ve built it. 

“It leaves you open to a certain assumption, so having that diversity of thought and model to me is really, really important.” 

Tomlinson said of the seven consolidated vehicles, LPPI is the one closest to the Canadian model – this is in relation to internalisation, access to private markets and the fiduciary model. 

Forms of independence 

However, even within the Canadian model there are differences in governance models. British Columbia Investment Management Corporation (BCI) vice president and head of investment risk Samir Ben Tekaya said the fund, unlike its Maple 8 peers, doesn’t have a standalone CIO. 

BCI has C$233 billion in assets under management and 80 per cent of that is from pension clients; the rest consists of insurance company money. Over the past eight years, Ben Tekaya said BCI has been internalising more assets and moving into more complex strategies. 

“One of the bases [of the strategy] is investment process governance, but also risk management,” he said. 

“We don’t have a CRO per se. We have me – I’m head of investment risk – and I report to the head of strategy and risk, which includes portfolio construction. Our CEO is the CIO.” 

BCI operates with a dual-accountability model which makes it accountable to both clients and the BCI board. The seven-member BCI board consists of four directors appointed by its four largest pension plan clients and three appointed by the Minister of Finance (two of whom need to be client representatives). 

“So some people ask, governance-wise, how independent we are,” Ben Tekaya said. 

“But I think at the same time, if you have the right governance, the independence is there – we don’t need to be independent in terms of reporting, et cetera. 

“You have the governance, you have the exposure to the board of the client, and the BCI board. But what helps is we are the same group as the portfolio construction and asset allocation, and it really helps to have the culture of risk there.” 

For asset owners to stay the course of a long-term investing view, the trick is not just getting their own investment teams behind the objective, but also making sure their board and external asset managers are aligned.  

Otherwise, the Fiduciary Investors Symposium heard, pension investors might find themselves fighting an uphill battle in a market where short-termism is increasingly prevalent. 

Mario Therrien, head of investment funds and external management at Caisse de dépôt et placement du Québec (CDPQ), said while it’s easy to outline long-term investment goals in a mandate, the challenging part is making sure that managers stay on track over time.  


CDPQ is one of Canada’s Maple 8 pension funds and has C$434 billion ($315 billion) assets under management. 

“We try to outline the investment policies, risk appetite, benchmark, post investments and everything [in our mandate], but how do we execute on it? How do we make it live?” Therrien said. 

“And also as asset allocators, [we need to decide] what is our tolerance for pain. Because especially in the last 15 years, we’ve seen… really smart teams underperforming markets. 

“And we’re kind of forgetting the thesis of first of all, why did we invest [with these managers]? In which environment were they supposed to add value or detract value? Our role, when we go in front of investment committees, is making sure that everybody around the table understands what this is all about.” 

CFA Institute chief executive Margaret Franklin said the total portfolio approach, “in its broadest, most philosophical sense” is also an important driver of long-term visions.  

“What I call ‘systems thinking’ really manifests itself in a total portfolio approach, putting all the pieces together rather than heuristics or embedded systems that we have – that were developed 30 years ago, partly because between technology, modern portfolio theory, and CAPM [capital asset pricing model], we could put those into place efficiently and cost effectively,” she said. 

“Those systems were designed for the previous 30 years’ problems, so 60 years later, we need a new way of thinking about these things in a much more complex world where we don’t have the playbooks. 

“I think what it [TPA] does is allow for innovation, allow for purpose, and it has to necessarily have a long-term view, but it also recognises the importance of the short-term.” 


FCLTGlobal chief executive Sarah Williamson said the difference between long-term and short-term investors is that the former thinks about the disruptive forces in the future, and does not make the poor assumption that “the future will be like the past”. FCLTGlobal describes itself as a not-for-profit organisation whose mission is to focus capital on the long term to support a sustainable economy. 

“Our shorthand for thinking about this [long-term investing view] is the five Ds of disruption,” she said, these being de-leveraging, demographics, decarbonisation, de-globalisation and digitisation.  

There are questions worthy of asking if asset owners wish to evaluate whether they are a long-term focused organisation, she said, such as whether they are formally separated from political cycles, whether senior staff are accountable for the total fund’s multiyear performance, whether they engage with portfolio companies on long-term issues, and whether they use internal charges for key unpriced externalities like carbon. 

Keith Ambachtsheer, a pioneer of the Canadian pension model and University of Toronto Rotman School of Management executive in residence, said asset owners also need to generally articulate their investment methods in a more understandable way, which could encourage more long-term practices.  

He said organisations should use toolkits such as the Integrated Reporting model, which can help articulate key aspects including purpose, governance, business model, results and strategy of the organisation in a concise way (in that order, notably).  

“We have a lot of half sentences about this thing and that thing… it goes on and on,” he said.  

“I think what we need to do and practice is an understandable way of describing how you actually invest.” 

The rise of artificial intelligence as an actually useful business tool presents multiple issues for asset owners. They must take stock of the impact of AI on the businesses they invest in on the one hand, while at the same time assessing the implications of AI for their own businesses, including making investment decisions. 

The Fiduciary Investors Symposium in Toronto earlier heard from Ajay Agrawal, Geoffrey Taber Chair in Entrepreneurship and Innovation and Professor of Strategic Management at the University of Toronto’s Rotman School of Management, that at its current stage of development, AI is typically applied in one of two ways. 

The first is a short-term, specific use-case approach that enhances productivity by improving an existing process, but otherwise leaves the process largely unchanged; and the second is more systems focused, where entire workflows are reimagined and re-engineered with AI at their core. 

APG Asset Management global head of digitalisation and innovation, Peter Strikwerda, said that “the true answer is it’s a bit of a mix”. 

“In practice, what you see is sometimes just very small problems in a process on automation, on specific information gathering or analysis or whatever we’re trying to fix, that typically fits the use-case driven approach,” he said. 

“We take small areas, but I think increasingly you see that bigger areas, and maybe that you could call that a system-type of approach, are being addressed. 

“One example…is the whole process of information gathering, organising, standardising, analysing, predicting [and] decision making in private markets, because it’s very different from public markets in terms of data availability, standardisation, quality, et cetera. I’m not really sure if you could call that ‘systematic’, but what I see there is that the width of the usages is broader.” 


OPTrust director of total fund completion portfolio strategies Jacky Chen said there are “a few things that I would recommend people to think about” when applying AI to systems and processes in the short term. 

“One is how to get started,” Chen said. 

“If you don’t get started, you’re never going to be able to accumulate the knowledge to discover what are some of the key workflows. Inaction at this point is not an option because you really have to think about what are some of the early wins. You have to get started in order to accumulate the knowledge, get some skin in the game, in the short term.  

“There is already some low-hanging fruit that you can pick to improve operational efficiency. 

“You need to get your hands dirty in order to start doing that.” 

Chen said that when considering the long-term applications of AI, it is important for asset owners to consider carefully who they’re working with. He said it is unlikely asset owners will have “a whole division that’s just building this type of technology”. 

“A lot of time you’re going to be buying, and who are the partner[s] that you’re going to work with?” Chen said. 

“There’s a bit of competition going on, and once there’s established a first mover advantage, we need to think about who’s going to be the second and the third mover. A lot of time, you have to find a proven winner who has the ability to continue to pivot.  

“Internally, you have to remain very nimble and agile in your approach, and externally, if you’re working with a partner on this, you have to remain very cautious about who you’re working with, and continue to pick the right people that you believe, as it continues to evolve…will be the provider that can help you to reach there.” 

PSP Investments managing director of digital innovation and private market solutions, Ari Shaanan, said that PSP, like other asset owners, is currently focused on short-term applications of AI but, echoing what APG’s Strikwerda suggested, is finding the application of AI becoming broader. 


“The applications are growing both in breadth and in what you’re able to do. 

“And also in size and scope, it just feels like it’s more and more accessible now,” Shaanan said, which is in part a function of more readily available data. 

“Clearly there’s just more data available just being, practically speaking, sold by third parties, vendors that we could all now leverage,” he said. “[It’s] much more practical, easier to get in the door these days.” 

Shaanan said there’s a second aspect of AI applications relevant to asset owners focused on generative AI and both large and small language models.  

Small language models manifest as agents that can carry out specific tasks, while large language models can be developed to undertake tasks such as research on specific industries, sectors or geographies.  

“You can build in an LLM internally to do something like that, and…then run an analysis on fundamentals. And you could run an analysis on how that fits in the portfolio. And you could actually stitch together now four or five or six different agents, and have those working together. 

“And I think that’s more and more the world we’re going to head in where it’s not just one answer for everything in one model running, call it portfolios, but it’s many agents that can be stitched together that can be leveraged by analysts and our PMs.” 
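Shaanan’s picture of “many agents that can be stitched together” can be sketched schematically. Each “agent” below is an ordinary function standing in for an LLM-backed component; all names, scores and outputs are hypothetical.

```python
def research_agent(ticker):
    """Stand-in for an LLM agent that gathers sector research."""
    return {"ticker": ticker, "summary": f"sector notes for {ticker}"}

def fundamentals_agent(note):
    """Stand-in for an agent scoring fundamentals (placeholder score)."""
    note["score"] = 0.72
    return note

def portfolio_fit_agent(note, current_weights):
    """Stand-in for an agent checking fit against current holdings."""
    note["fits_portfolio"] = note["ticker"] not in current_weights
    return note

def run_pipeline(ticker, current_weights):
    """Stitch the agents together: research -> fundamentals -> portfolio fit."""
    note = research_agent(ticker)
    note = fundamentals_agent(note)
    return portfolio_fit_agent(note, current_weights)

result = run_pipeline("ABC", {"XYZ": 0.05})
print(result["fits_portfolio"])  # True: "ABC" is not already held
```

In a real deployment each function would wrap a model call rather than fixed logic; the point is the orchestration pattern, where several narrow agents hand results along a chain for analysts and PMs to leverage.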

APG’s Strikwerda said the starting point for the organisation’s adoption of AI is its broader business strategy, and while it’s willing to test AI applications internally it’s also fully prepared to kill off a test if it does not achieve the expected result. 

“We look at the application of AI as a means, we judge it, as a means to these ends,” Strikwerda said. 

“If you’re an alpha strategy, we look at AI as an opportunity to generate alpha, always combined with data.  

“When you look at running index products, it’s maybe not about alpha, it’s about having a more efficient operation to support that.” 

“We never approach it from the AI; we approach it from what we are for, our purpose as a company, and then see how we can apply it…and then try to gather proof points, support that and expand from that,” Strikwerda said. 

“Or kill it, if it goes south. That’s also what’s happened.” 

Strikwerda said APG’s strategy also includes being leaders in responsible investing, and there are obvious opportunities there for the application of AI because of the state of available data. 

“That’s where I see a lot of growth potential, and not yet a level playing field,” Strikwerda said. 

“And so the commoditisation…in capital markets, you see that data is very much commoditised to a large extent, [but] in responsible investing that’s still growing.” 

The hype and near-hysteria around AI and its potential to revolutionise businesses and industries can be difficult to see past and presents asset owners with a difficult decision: to invest in new, rapidly evolving technology and run the risk of backing something that doesn’t work out in the long term; or wait and see, and run the risk of missing some stellar investment returns.

Identifying the specific impact of AI on businesses is vexing at a time when US companies are already “as profitable as they’ve ever been”, Employees Retirement System of Texas chief investment officer David Veal told the Fiduciary Investors Symposium in Toronto.

“That makes sense at some level,” Veal said.

“The question is, okay, does that mean revert, or is there another leg to this? That’s where this really starts to factor in that some of that thinking is, look, you can do new things or enhance margins even further, potentially. That’s great news for corporate America, which is generally good for the stock market.”

But there will also be industries and businesses that are negatively affected – the benefits of the technology will not be evenly distributed economy-wide, “and it’s worth thinking through what that looks like, as well”, Veal said.

Veal said ERS has an internal public equities team that has been considering these questions in depth for some time, and so far so good.

“We’ve owned Nvidia at size, we’ve gotten this trend right, which is one of the reasons our performance has been as good as it has been,” he said.

“But the question is, how do you sustain that? How do you stay on top of these trends?”

Impact on internal practices

Veal said he also worries about the impact on ERS’s internal business practices.

“A paper out of the University of Chicago last week talked about the fact that AI, ChatGPT, is actually better at predicting earnings than human analysts,” he said.

“Okay, how do you think about that? One of the other conclusions from that paper was the fact that ChatGPT plus a human analyst is actually better than either one individually. That’s something we can work with.”

From an investment perspective, seeking to capitalise on the potential of AI presents a series of risks, not least of which is concentration risk, because recent AI development, at least, is being driven by a very small number of very large organisations, such as Google, Microsoft and, of course, Nvidia.

It also presents the downside risk of backing the wrong horses in the AI race. And it presents very clear and present career risk for investment professionals who consciously avoid AI opportunities, take a wait-and-see approach and miss the potentially enormous upside.

“The hardest part is, where do you make your commitments?” Veal said.


“Do you change the way you commit capital? We don’t have a lot of exposure to venture capital, for example, [but] that’s been by design – we didn’t feel like we have the scale. But does that need to change? Is it too late to change? Something that we are really wrestling with is: have we missed the boat in some way?”

Jennison Associates managing director Nick Rubinstein told the symposium that AI is “at a seminal moment” from an investment perspective.

“AI essentially takes all of the [enabling technology] pieces that we’ve put into place, along with the predictive learning element, and enables us to essentially make predictions, streamline businesses, and potentially augment both top-line growth and cost efficiencies within organisations, as it democratises access to all of this data that we’ve created for decades so far,” Rubinstein said.

“And also it will add incredible amounts of efficiency to processes that previously were incredibly inefficient.”

Take-up will accelerate

The symposium heard earlier that initially the take-up of AI across businesses and industries has been limited, but it will accelerate exponentially as a wide range of use-cases are validated and results become tangible.

“So far, we’ve basically put the building blocks in place, and you’ve seen the growth, especially in companies like Nvidia and what we like to call the picks and shovels providers,” Rubinstein said.

“The cloud companies build the infrastructure, but then you need the applications to run on them. So the way we think about it is looking across industries. Where can these efficiencies be distributed?”

Rubinstein said there are practical applications of AI emerging across economies, often in unexpected areas such as agriculture, where it builds on already existing automated practices.


“But now the next wave will be in the farming industry,” he said.

“How do you do predictive farming? How do you take inputs of weather patterns of past years? Which areas of your farm crops did better? How do you see those crops and embed all of that intelligence into an industry that used to be an incredibly manual process?”

Other industries such as healthcare, travel and customer service were candidates for AI-driven enhancements, Rubinstein said.

“There’s going to be a lot of diagnosis that goes on. We’re in the early days of measuring returns. And I think [Ajay Agrawal’s] example is very good, which is, can you get a 20 per cent productivity advancement in two years? If you can, you’ll probably make that investment regardless,” he said.

“But if you look over a longer-term framework, and suddenly the impact of that return multiplies and essentially goes geometric, then I think that will knock down the walls of mass adoption across industries.”

AI is even being applied to AI itself, Rubinstein said. Nvidia has used the technology to cut its own product cycles.

“Product cycles that for Nvidia used to take two and a half years suddenly became two-year cycles,” he said.

“Within the past few years, that’s gone down to a year and that’s because they take the building blocks of what they had done for prior product cycles, applied them to go forward, and suddenly, their time to market was essentially cut by more than half.”

The most underrated area of innovation in artificial intelligence is not in computing, nor is it in the development of algorithms or techniques for data collection. It is in the human ability to recast problems in terms of predictions. 

Leading economist and academic Ajay Agrawal told the Fiduciary Investors Symposium in Toronto that it helps to think of AI and machine learning as “simply a drop in the cost of prediction”. 

Agrawal serves as the Geoffrey Taber Chair in Entrepreneurship and Innovation at the University of Toronto’s Rotman School of Management, as well as being a Professor of Strategic Management. 

“AI is computational statistics that does prediction,” Agrawal said. 

“That’s all it is. And so, on the one hand, that seems very limiting. On the other hand, the thing that’s so remarkable about it is all the things we’ve discovered that we can do with high fidelity prediction.” 

Agrawal said prediction is, in simple terms, “taking information you have to generate information you don’t have”. And it’s “the creativity of people to recast problems, that none of us in this room characterised as prediction problems, into prediction” that underpins developments in and the potential of AI, he said. 

“Five years ago, probably nobody in this room would have said driving is a prediction problem,” he said.  

“Very few people in the room would have said translation is a prediction problem. Very few of you would have said replying to email is a prediction problem. But that’s precisely how we’re solving all those things today.” 

Whether it’s predictive text when replying to an email or enhancing investment performance, the supporting AI systems are “all implementations of statistics and prediction”, Agrawal said. 

These prediction models reached their zenith in large language models (LLMs), where machines are trained to predict the next most likely word in a sequence of words that makes up sentences, paragraphs and whole responses. 

“If you think about language, let’s say English, every book, every poem, every scripture that you’ve ever read, is a resequencing of the same…characters: 26 letters, a few punctuation marks just re-sequenced over and over again makes all the books. What if we could do that with actions?” Agrawal said. 
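Agrawal’s “next most likely word” framing can be reduced to a toy: a bigram model that counts which word follows which in a training text, then predicts the most frequent successor. Real LLMs do this with vastly more context and parameters, but the prediction framing is the same; the training text below is made up.

```python
from collections import Counter, defaultdict

# Tiny made-up corpus; real models train on billions of words
text = ("the stock price went up the stock price went down "
        "the stock price went up again")
words = text.split()

# Count, for each word, which words follow it and how often
successors = defaultdict(Counter)
for current, nxt in zip(words, words[1:]):
    successors[current][nxt] += 1

def predict_next(word):
    """Return the most likely next word seen in training."""
    return successors[word].most_common(1)[0][0]

print(predict_next("stock"))   # "price" always follows "stock" here
print(predict_next("went"))    # "up" (seen twice) beats "down" (once)
```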

From LLMs to LBMs 

The principles of LLMs (next most likely word) are now being applied to large behavioural models – robots – by training them to predict the next most likely verb or action. 

“In that case, we could take all the tasks – think about everyone that you know, every job they do, and every job probably has 30 or 40 different tasks, so there’s hundreds of thousands of tasks. But what if all those tasks are just really sequences of a small number of verbs?  

“So what they’re doing is they’re training robots to do a handful of verbs – 50, 80, 120 verbs. Then you give the robot a prompt, just like ChatGPT. You say to the robot, ‘can you please unpack those boxes and put the tools on the shelf?’ The robot hears the prompt, and then predicts what is the optimal sequence of verbs in order to complete the task.” 

It is, Agrawal said, “another application of prediction”. 

Agrawal said that businesses and industries are now facing a “tidal wave of problems that have been recast as prediction problems”.  

“So we now are pointing machine intelligence at many of these.  

“The problem is, it has come so hard and so fast, that people seem to be struggling with where do we start? And how do we actually point this towards something useful?” 

Agrawal said it pays to be very specific about the metric or the performance measure that needs to be improved, and then “[point] the AI at that”.  

“AIs are mathematical optimisers, they have to know what they’re optimising towards,” he said. 

“If the problem is a tidal wave of new solutions available, and the problem is we don’t know how to harness it, here is a way to think about the solution – a short-term and a long-term strategy.” 

Agrawal said short-term strategies are basically productivity enhancements. They’re deployable within a year, aim for 20 per cent productivity gains, and have a payback period of no more than two years.  

“And here’s the key point, no change in the workflow,” he said. 

“In other words, it’s truly a technology project where you just drop it in, but the rest of the system stays the same.” 

Genuine game-changers 

Long-term strategies take longer to deploy but they’re genuine game-changers, offering gains 10 times or more greater than short-term deployments. But critically, they require a redesign of workflows. Agrawal said AI, like electricity, is a general-purpose technology. 

A useful analogy is when factories were first electrified and started to move away from steam-powered engines. 

In the first 20 years after electricity was invented, there was very low take-up – less than 3 per cent of factories used electricity, and when they did, “the main value proposition…was it will reduce your input costs” by doing things like replacing gas lamps. 

“Nobody wanted to tear apart their existing infrastructure in order to have that marginal benefit,” Agrawal said. 

“The only ones that were experimenting with electricity were entrepreneurs building new factories, and even then, most of them said, ‘No, I want to stick with what I know’” in terms of factory design. 

But a few entrepreneurs realised there was a chance to completely reimagine and redesign a factory that was powered by electricity, because no longer was it dependent on transmitting power from engines outside the factory via long steel shafts to drive the factory machinery. 

When the shafts became obsolete, so did the large columns inside the factories to support them. And that opened the door to lightweight, lower-cost construction, and factory design and layout changed to having everything on one level. 

“They redesigned the entire workflow,” Agrawal said.  

“The machines, the materials, the material handling, the people flow, everything [was] redesigned. Some of the factories got up to 600 per cent productivity lift.” 

Agrawal said initially, the productivity differences between electrified and non-electrified factories were very small. 

“You could be operating a non-electrified factory and think those guys who want the newfangled electricity, it’s more trouble than it’s worth,” he said. 

“But the productivity benefits from electricity just started taking off. 

“Now we’re seeing the same thing with machine intelligence [and] the adoption rate of AI.” 

This one learns from us 

However, Agrawal said the characteristic that “makes AI different from every other tool we’ve ever had in human history is that it’s the only one that learns from us”. 

He said this explains the headlong development rush and the commitment of so much capital to the technology. 

“The way AI works is that whoever gets an early lead, their AI gets better; when their AI gets better, they get more users; when they get more users, they get more data; when they get more data, then the AI’s prediction improves,” he said. 

“And so, once they get that flywheel turning, it gets very hard to catch up to them.” 

Agrawal said AI and machine learning are developing so quickly it’s virtually impossible for companies and businesses to keep up, let alone implement and adapt. 

“The thing I would pay attention to is not so much the technology capability, because obviously that’s important and it’s moving quickly,” he said. 

“But what I’m watching are the unit economics of the companies who are first experimenting with it, and then putting it into production,” he said. 

“Cost just keeps going down because the AI is learning and getting better. And so my sense there is, just pay laser-close attention to the unit economics of what it costs to do a thing.  

“And you can go right down the stack of every good and service watching how, when you start applying these machine intelligence solutions to that thing, do the unit economics change?”