Scientific Beta has just released a study claiming that there is “no evidence that ESG strategies outperform” after controlling for risk. Their research has been seen as paradigm-shifting because, previously, there was apparently clear evidence that ESG delivers alpha. Several journalists and practitioners have since contacted me asking for my views on the study, so I thought I would summarise them here. I am very grateful to the Financial Times for publishing my letter on the study; this post goes into more detail.
What Does the Paper Actually Do?
The paper constructs portfolios based on (1) A company’s ESG scores from MSCI – either in aggregate or focusing on E, S, or G independently; (2) Changes in ESG scores (which they call ESG Momentum); (3) A mix of the two. They show that, relative to the Capital Asset Pricing Model (i.e. controlling only for the market beta), ESG portfolios outperform, replicating the findings of prior research. They then go beyond prior research with three main findings:
1. ESG No Longer Outperforms When Controlling For Risk
The authors add risk exposures beyond the market beta. Three are from the famous Carhart (1997) model: exposure to the size, value, and momentum factors. They also add volatility, profitability, and investment factors to form a 7-factor model, and find that the alpha (= outperformance) goes away for all portfolio types.
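To see what “controlling for risk” means in practice, here is a minimal sketch of a factor regression. This is my own illustration on synthetic data, not the authors’ code or data: the portfolio’s returns are generated entirely by factor exposures plus noise, so its true alpha is zero. A multi-factor regression recovers essentially no alpha, while a CAPM regression that omits the other factors can report a spurious one.

```python
import numpy as np

def estimate_alpha(excess_returns, factors):
    """Regress portfolio excess returns on factor returns via OLS.
    The intercept of the regression is the alpha."""
    T = len(excess_returns)
    X = np.column_stack([np.ones(T), factors])  # prepend a constant
    coefs, *_ = np.linalg.lstsq(X, excess_returns, rcond=None)
    return coefs[0]

# Synthetic data: 20 years of monthly factor returns (illustrative).
rng = np.random.default_rng(0)
T = 240
mkt, smb, hml, mom = rng.normal(0.005, 0.04, (4, T))

# A portfolio whose returns come entirely from factor exposures
# plus noise -- by construction its true alpha is zero.
portfolio = 1.0 * mkt + 0.3 * smb + 0.2 * hml + rng.normal(0, 0.01, T)

capm_alpha = estimate_alpha(portfolio, mkt.reshape(-1, 1))  # can look positive
carhart_alpha = estimate_alpha(
    portfolio, np.column_stack([mkt, smb, hml, mom])
)  # close to zero
```

The same logic extends to the authors’ 7-factor model by adding volatility, profitability, and investment factor columns to the regression.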
2. ESG Does Not Provide Protection From Downside Risk
If ESG does not improve average returns, perhaps it provides protection from downside risk? Indeed, this is a common claim from ESG advocates. However, the authors construct a “downside risk factor” and find that ESG portfolios are not negatively correlated with it.
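The logic of that test can be illustrated with a toy sketch of my own (the paper’s downside risk factor is constructed differently; the simple truncated-market measure below is just an assumption for illustration). A portfolio that protects against downside risk should be negatively correlated with such a measure; one with ordinary market exposure will not be.

```python
import numpy as np

rng = np.random.default_rng(2)
T = 240
mkt = rng.normal(0.005, 0.04, T)

# A crude downside measure: the market's return truncated at zero,
# so it is negative exactly in the months the market falls.
# (The paper's downside risk factor is defined differently.)
downside = np.minimum(mkt, 0.0)

# A portfolio with ordinary market exposure and no hedge.
esg = 1.0 * mkt + rng.normal(0.0, 0.01, T)

corr = np.corrcoef(esg, downside)[0, 1]
# A portfolio offering downside protection would show corr < 0;
# here corr is strongly positive, i.e. it falls when the market falls.
```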
3. The ESG Alpha Is Partly Due to Investor Demand
The authors suggest that the outperformance of ESG may be due to SRI funds buying into ESG companies and raising their prices even without any performance improvements. If so, (a) this is inconsistent with common arguments that ESG improves performance, e.g. due to greater employee satisfaction and customer trust, and (b) the outperformance is unlikely to persist going forwards since fund preferences will be priced in.
The authors find that ESG outperformance is much higher in periods where SRI funds received large inflows, suggesting that it is partly due to demand. Thus, they stress that studies claiming an alpha need to be based on long time periods – otherwise their conclusion might stem from studying a short window where SRI funds enjoyed large inflows.
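The flow-based point can be made concrete with a toy example (entirely synthetic; not the authors’ data or method, and the flow coefficient is exaggerated for illustration). If returns contain a price-pressure component driven by contemporaneous fund flows but no fundamental alpha, measured “outperformance” will be concentrated in high-inflow periods:

```python
import numpy as np

rng = np.random.default_rng(1)
T = 240
flows = rng.normal(0.0, 1.0, T)  # standardised SRI fund flows (synthetic)

# Returns with a price-pressure component but zero fundamental alpha;
# the 0.02 flow coefficient is deliberately exaggerated.
esg_returns = 0.02 * flows + rng.normal(0.005, 0.03, T)

high = flows > np.median(flows)
mean_high = esg_returns[high].mean()   # apparent "alpha" in inflow periods
mean_low = esg_returns[~high].mean()   # little or none otherwise
```

A researcher who happened to study only a high-inflow window would conclude there is alpha; over a long sample, the mean-zero flows average the price-pressure effect away.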
What Does This All Mean?
I believe that the three tests that Scientific Beta conducts are valuable, and I fully agree with their conclusion in the introduction that “claims of positive alpha in popular industry publications are not valid because the analysis underlying these claims is flawed. Omitting necessary risk adjustments and selecting a recent period with upward attention shifts enables the documenting of outperformance where in reality there is none.”
However, I’m surprised by the blockbuster reaction that the paper has had in the practitioner community, suggesting that the paper has overturned prior findings of unambiguous outperformance. This, in turn, has led to ESG sceptics rubbing their hands in glee and claiming victory, and ESG advocates defensively alleging that the authors must have an ideological bias against ESG. Some emailed me asking me to write an article explaining why the study is wrong, without being open to the possibility that it might be right.
My surprise is because none of the checks that Scientific Beta performs will be news to any academic or practitioner who understands research methodologies. Controlling for omitted variables and checking for robustness over a long time series are simply essential aspects of any rigorous paper. (I ignore point 2, on downside risk, because many studies focus on alpha and do not make claims about downside risk). No top academic journal should publish a paper that fails to do this. The five papers that Scientific Beta reference include two that are not published in any journal, and three published in practitioner journals, which have a much lower bar for rigour than academic journals. Thus, their analysis attacks a straw man.
Certainly, the straw man needs to be attacked, because many people treated those studies as gospel – potentially due to confirmation bias, because ESG advocates wanted their findings to be true and ignored their serious methodological limitations. Indeed, it was disappointing how ESG advocates crowed about the outperformance of ESG as little as one month into the pandemic, since ESG is about the long term. If ESG had underperformed in the first month, an ESG advocate would claim that one month is far too short to draw inferences.
Are the Findings New?
The Scientific Beta study is therefore useful. However, it should have never been needed in the first place. If readers evaluated studies based on their scientific rigour, rather than whether or not they like the findings, then we would not have such black-and-white thinking, swinging from one extreme to the other – first believing that ESG definitely leads to outperformance, and then claiming that no ESG strategy works.
Indeed, a prior study already suggested that ESG outperformance goes away after controlling for other factors. Even though it was by serious academics, it got less attention, potentially since their claims were less sweeping – they only claimed that ESG did not outperform in the COVID pandemic (not in general). If there is one criticism of the otherwise valuable Scientific Beta study, it is that their conclusions are much more sweeping than their actual analysis merits. For example, the headline of their press release is “no evidence that ESG strategies outperform”. However, they have only overturned five studies, none of which were published in top academic journals. This does not mean that there is no evidence at all that ESG strategies outperform.
Research published in top academic journals does address Scientific Beta’s valid concerns. For example, in my paper showing the outperformance of employee satisfaction, I control for the Carhart 4-factor model, industry performance, and many other characteristics such as dividend yield, the stock price, and sales performance. I did not control for quality as the quality factor had not been discovered at the time, and I took my control variables off-the-shelf from a highly-cited paper (Brennan, Chordia, and Subrahmanyam (1998)) to address concerns that I was choosing whatever control variables worked. However, an asset management firm recently asked me to rerun my analysis controlling for quality. I did so, and the results remain robust.
In addition, the study was based on 26 years of data, and a follow-up used 28 years. This addresses concerns that the inferences were based on short time periods. Moreover, I explicitly test the explanation that the alpha was due to SRI fund demand. I hand-collected a list of all SRI funds that used labour or employment screens, calculated the change in their stock ownership of companies with high employee satisfaction, and calibrated the effect of the increased demand on the stock price. I found that demand factors could only explain 0.02% of the annual outperformance of 2.1-3.5%.
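The calibration in that final test is essentially back-of-the-envelope arithmetic. The sketch below shows the mechanics of translating fund demand into a return contribution; the ownership change and price elasticity are made-up inputs for illustration, not figures from the paper.

```python
# All numbers below are illustrative assumptions, not the paper's
# estimates.
ownership_change = 0.001   # SRI funds buy 0.1% of shares outstanding per year
price_elasticity = 0.2     # assumed % price move per 1% demand shock

# Return contribution from demand, in percent per year:
demand_alpha = (ownership_change * 100) * price_elasticity
# demand_alpha is a few hundredths of a percent per year -- orders of
# magnitude too small to explain an annual alpha of 2-3%.
```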
The point of the above example is not to focus on my own paper specifically, but to make a general point. Publication in a top academic journal requires authors to demonstrate robustness and to address alternative explanations. This highlights the importance of focusing on papers in top academic journals, as I explain in Evaluating Research, a cut-out-and-keep guide on how to know what research to trust.
The Lessons We Can Learn
I have been disappointed by the defensive reactions to the Scientific Beta study, which seem to be based not on its analysis but on an ideological view that it must be wrong – or that the authors have some hidden agenda – because the respondent believes that ESG must work. See, for example, this FT reader comment:
FT is probably publishing this knowing it will be crushed in the coming weeks. I’m certain the methodology and research will be exposed as flawed and faulty and probably grossly selective. Long term, it will actually strengthen the credibility of ESG one it’s been refuted. Kapoor is evidently some sort of anti-ESG troll who has his own vested interest in trashing ESG.
There are dozens of reputable studies from independent academics, to equity research and rating agencies that support the link between sustainability and long-term stable returns going back to before the financial crisis, https://onlinelibrary.wiley.com/doi/abs/10.1111/jacf.12347 it all depends on what you measure and how you measure it.
Companies perceived as more sustainable are seen as more resilient, have better risk management and are trusted more than those that aren’t and are therefore more likely to outperform over the long term. A company’s social capital acts as a kind of insurance during downturns and crises. It’s a no-brainer, really.
The reader speculates (hopes?) that it will be “crushed” in the coming weeks. S/he is “certain the methodology and research will be exposed as flawed and faulty and probably grossly selective” based on no evidence. If the reader is so certain that there are flaws, then s/he should point them out. The second paragraph hand-picks one paper which s/he claims shows outperformance (when it actually shows that ESG does not outperform in general, only in crisis times), and the third makes a “no-brainer” statement without backing it up with evidence.
There was a similar reaction on LinkedIn. One senior investor wrote “Where is the angry emoji on LinkedIn?”, implying an emotional reaction to the study. A leading ESG advocate pointed out that it was “One academic study out of many”, when in fact it points out problems with several studies. Moreover, had the study found the opposite result, it would likely have been heralded despite being only a single study.
What the Scientific Beta study has highlighted is that we should not react to research based on emotion. We should apply the same scepticism to a study that reinforces our view of the world as one that contradicts it, and have the same openness to a study that finds an “inconvenient truth” to one that supports what we would like to be true. They should not have had to make this point, but it is valuable that they have.
Who Are Scientific Beta?
In Evaluating Research, I highlight the importance of taking note of who the authors are. Of course, the main basis for evaluation is the research itself. But, because getting into the weeds of a paper’s methodology requires time and specialist academic expertise, a time-pressed reader may not be able to do this. Instead, s/he can quickly look at the credentials of the authors and any bias that they may have.
Bias is particularly likely for a topic such as ESG, on which there are strong pre-existing views. Thus, a consultancy would be unlikely to publish a paper showing that diverse companies, purposeful companies, or long-term-oriented companies perform worse, because doing so would worsen its brand. Instead, it would have a strong incentive to run many possible methodologies and choose the one that shows that these companies perform better.
Similarly, expertise is important for a topic such as ESG, which might be correlated with many other factors that can explain returns. For example, high ESG companies may be disproportionately in the tech industry, and may already be profitable to begin with (which is why they can spend on ESG). Research expertise is important to be able to control for these factors.
While I have to admit that I had not heard of Scientific Beta before, they are transparent about who they are. They provide a smart beta platform, and so there is no clear reason to think they may be biased against ESG: if anything, claiming that ESG outperformed would increase the popularity of smart beta products based on ESG factors. They were also established by the EDHEC-Risk Institute, a strong academic institution, and two of the three authors have PhDs in finance. Of course, this does not automatically mean the paper will be correct, and so most of this article scrutinised what the paper actually did, but it does assuage immediate concerns of bias and expertise.
Does the Research Actually Exist and Is It Based on Actual Data?
These are the first two questions in my Evaluating Research guide. They might seem obvious, but many authors have issued press releases on research that they have not made available in the public domain – or it is in the public domain but it does not explain its methodology. In both cases, it is impossible to scrutinise whether the methodology is robust. (Worryingly, respected newspapers have written articles about non-existent studies).
I was first alerted to the Scientific Beta study by a Financial Times article. There were several comments asking for the study, and the journalist (who wrote an otherwise excellent article) replied “Thanks for your interest, but as I mention in the story, the paper has not been published yet, hence there’s no webpage to link to”. This is not quite correct. There are two senses of publication: (1) the paper being released in working paper form, i.e. being publicly available, and (2) it being published in a peer-reviewed academic journal. The paper has indeed not been published in sense (2) (because the peer-review process takes a long time), but it is publicly available. I was able to find it in a few seconds by Googling, and was surprised that no-one, including the journalist, appeared to have tried. It’s here; once again, the authors have been transparent. In addition, the paper is very transparent about its methodology.