Equities

The future is quant

The pace of technological change and advances in machine learning and quantitative methods will result in a “shake-out” in investment management, according to Campbell Harvey, Professor of Finance at Duke University.

Harvey, who is well respected for his extensive research work on factors, says that even discretionary managers cannot deal with the amount of data now available and need to use machine learning to help inform their decision making.

“The future of finance will be much more quantitative than it is today. We are moving much more in that direction; whether it is systematic or discretionary trading, machine learning is here to stay,” he said in a podcast conversation with Michael Kollo [see below].

“However, there’s a big spread in competence in terms of applying it. These small firms running machine learning will be defeated by firms which have been around for at least five years, which have PhDs in mathematics, statistics, and machine learning, and which know the best way to do the processing and validation. They are the firms that will win.”

Harvey has developed a set of due diligence questions that investors should ask managers about the research process in terms of machine learning and big data.

“Right now it’s the wild west and people are playing the hype of machine learning. This is something I believe in, but there is a lot of snake oil out there.”

Harvey acknowledges that in some scientific fields, such as genetic research, data mining is not necessarily a bad thing, but in economics and finance it’s a completely different story.

“We have models from first principles that have had a significant impact especially in capital markets, and that allows us to layer on something else in terms of our process of discovery. There’s a paper floating around that looks at 2.4 million trading strategies based on balance sheet information of companies and correlates that with future stock returns. The results showed that the second-best strategy is a measure of EPS divided by rental payments the firm owes four years in the future. It makes no sense, but that variable knocks the ball out of the park. What I’m saying is simple: that variable pops up in the top 10 for a machine learning algorithm, but no one in their right mind would consider trading it. It is a false factor and would be discarded. The number one thing is we need to use the theory we have, and that guides us.”
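It is easy to see how a brute-force search on that scale produces a “winner”: try enough uninformative signals and one will look spectacular purely by chance. The sketch below is an illustrative Python simulation (a hypothetical example, not the methodology of the paper Harvey refers to) that correlates thousands of random “balance sheet” signals with random future returns and reports how good the best one looks.

```python
import numpy as np

# Illustrative simulation (not the study Harvey cites): brute-force many
# purely random signals against purely random future returns.
rng = np.random.default_rng(0)
n_months, n_signals = 240, 10_000            # 20 years of monthly data, 10,000 candidate signals

returns = rng.normal(0.0, 0.05, n_months)    # simulated future stock returns (pure noise)
t_stats = np.empty(n_signals)

for i in range(n_signals):
    signal = rng.normal(size=n_months)       # a made-up ratio with no real information
    r = np.corrcoef(signal, returns)[0, 1]
    t_stats[i] = r * np.sqrt(n_months - 2) / np.sqrt(1 - r**2)

print(f"best |t|-statistic found: {np.abs(t_stats).max():.2f}")
print(f"signals clearing the two-sigma bar: {(np.abs(t_stats) > 1.96).sum()}")
# The best signal typically shows |t| above 4 and hundreds clear two sigma,
# even though every signal is noise by construction.
```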

Harvey also warns that machine learning is complicated and that there are hundreds of different machine learning approaches, so a potential problem needs to be matched with the right approach, with inference built in.

Factor research

Harvey is well-known for his prolific work on factors, including a paper that sought to collate all the factors that had been identified in academic finance – the result was more than 400 factors.

One of his substantial papers, co-authored with Wayne Ferson from the University of California, was on the variation of risk premia.

“We made the economic case that risk premia change through time. If I wrote that paper today it would be called ‘Factor timing’,” he says.

Harvey now has some work in progress on matching risk with investment horizons.

“Some people have longer horizons than others and that creates some opportunities. There is a reason Warren Buffett is sitting on $150 billion of cash: he’s waiting for the recession… and he’s also long negative skew,” he says. “A lot of this depends on the horizon. I wish pension plans had these longer horizons. Buffett makes it happen even though he has shareholders, while pension funds seemingly have long horizons but don’t act like it. They should be long illiquidity and welcome negative skew because they can ride out a GFC.”

Harvey, whose PhD showed that inverted yield curves predict recessions, was editor of the Journal of Finance for six years and has published more than 125 scholarly articles. But despite his success as an academic, he is critical of the motivations behind academic finance.

“The problem with finance research is that it looks for an effect or factor that is statistically significant. The usual rule is 95 per cent confidence, which is two standard deviations. However, that only applies to a single test,” he says. “If you’re trying 20 different factors and one appeared to work and 19 didn’t, that’s no big deal; that’s what you would expect purely by luck. It’s got nothing to do with the factor being true…. Declaring a factor to be a real factor with two-standard-deviation confidence is just false.”
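The arithmetic behind that warning is simple to reproduce. The snippet below (an illustrative calculation, not code from Harvey’s research) shows the probability of at least one lucky “factor” when 20 independent tests are run at the conventional 5 per cent level, and how much stricter each individual test would need to be to compensate.

```python
# Illustrative arithmetic: the chance that at least one of 20 unrelated factors
# clears the conventional 5% (two standard deviation) hurdle purely by luck.
alpha, n_tests = 0.05, 20

p_lucky_hit = 1 - (1 - alpha) ** n_tests
print(f"P(at least one lucky 'factor' in {n_tests} tries): {p_lucky_hit:.0%}")  # about 64%

# A Bonferroni-style correction shows how much stricter each individual test
# would have to be to keep the chance of any false positive near 5%.
print(f"per-test significance level needed instead: {alpha / n_tests:.4f}")     # 0.0025
```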

[He previously received the Bernstein Fabozzi/Jacobs Levy Award for Best Article from the Journal of Portfolio Management for his research on distinguishing luck from skill.]

But this problem is not confined to factor research, according to the Canadian; it extends to empirical research in finance more broadly. And much of it stems from the structure of academic finance as an industry and the importance of publication for academics.

“To get published in a top journal you have to have a good idea, and it’s almost always the case that the result is significant. Journal editors are not enthusiastic about publishing non-results, because they don’t get cited and it doesn’t advance the journal’s profile,” he says.

In addition, an academic at one of the top universities who is published in a top journal can look forward to a job for life, a reward system Harvey says leads to data mining.

“This leads to a massive data mining exercise where many things are tried until something works in the sample, you spin a story around it, and it gets published. Many years later it might be found not to hold up out of sample, but you’re already done because you already have tenure. Academically the incentive is to publish at whatever cost. It’s not like you’re falsifying or fabricating… but it is basically a data mined result.”

In terms of factor research, the same data sets have been used and analysed since the 1960s.

“The low-hanging fruit has already been picked,” he says. “Because that is the case, if you find something new and exciting I’m very sceptical. The barrier to finding something new is very high.

“We need to be very careful about this. I have no problem with smart beta products based on risk premia that have been known about and tested for a long time. I do have a problem with ETFs wrapped around an academic paper published in 2015.”

According to Harvey, a factor is a source of a risk premium that is expected to persist in the long term. This appears in two ways.

The first is structural, such as equities achieving higher returns than treasury bills, or illiquid assets returning more than liquid assets.

And the second is an alpha source that is purely a trading strategy.

“You find some information that is not quickly incorporated into the market and design a strategy around it. Once people figure it out and use an algorithm to process the information faster, those are the types of alphas that will fade quickly. Investors need to distinguish between those two, and time horizon is the defining thing.”


The Curious Quant series, hosted by Michael Kollo, is published by Conexus Financial, the publisher of conexust1f.flywheelstaging.com. It is a discussion between technically-minded professionals in the financial services, technology and data science fields that examines the application of new data and new methodologies to common problems in financial markets. The aim is to promote better discussions about these emerging areas, and a better understanding of new technologies.

Michael Kollo has a PhD in Finance from the London School of Economics, where he lectured in quantitative finance, as well as at Imperial College and the University of New South Wales. He has created models and led quantitative research teams at Blackrock, Fidelity and Axa Rosenberg in the UK before moving to Australia, where he established the quantitative team for the $50 billion industry superannuation fund HESTA.

