While environmental, social, and governance (ESG) ratings provide useful information to stakeholders, it’s unclear whether firms care about them. On the one hand, ESG activities may not align with the traditional goal of maximizing shareholder wealth. Further, ESG ratings often disagree, so much so that firms may disregard them. Finally, the number of ESG raters has swelled to over 100 in recent years, so the ratings of any single one may be immaterial. On the other hand, many investors rely on ESG ratings to guide their decisions. To the extent that firms want to attract these investors, they have an incentive to pursue flattering ESG ratings.
In a new paper, we examine the sensitivity of firms’ behavior to criteria underlying ESG ratings. When an ESG rater adjusts its methodology for producing ratings, do firms adjust their behavior in response?
Data from Sustainalytics, a leading ESG rater, provide an opportunity to address this question. Sustainalytics’ ratings are weighted sums: a firm’s overall ESG rating is a weighted sum of its E, S, and G scores, and each of those scores is, in turn, a weighted sum of dozens of underlying criteria scores. Our basic approach tests whether, and over what time horizon, firms improve their raw score on a given criterion when Sustainalytics places greater weight on that criterion.
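The weighted-sum structure described above can be sketched in a few lines of code. All criteria names, weights, and scores below are illustrative assumptions for exposition, not actual Sustainalytics data or methodology:

```python
# Hypothetical sketch of a Sustainalytics-style weighted-sum rating.
# Every criterion, weight, and score here is an invented example.

def weighted_sum(scores, weights):
    """Combine raw scores using weights that sum to one."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(scores[c] * weights[c] for c in scores)

# Raw scores on underlying criteria within the E pillar (illustrative).
e_scores  = {"emissions_policy": 70, "toxic_releases": 55, "energy_use": 60}
e_weights = {"emissions_policy": 0.5, "toxic_releases": 0.3, "energy_use": 0.2}
e_pillar  = weighted_sum(e_scores, e_weights)  # 0.5*70 + 0.3*55 + 0.2*60 = 63.5

# Pillar scores roll up into the overall ESG rating the same way.
pillar_scores  = {"E": e_pillar, "S": 58.0, "G": 72.0}
pillar_weights = {"E": 0.4, "S": 0.35, "G": 0.25}
overall = weighted_sum(pillar_scores, pillar_weights)
print(round(overall, 2))  # 63.7
```

The sketch makes the mechanism in our tests concrete: if the rater raises the weight on `emissions_policy`, a firm can lift its overall rating either by genuinely improving that raw score or by finding a faster route to a higher score on the newly emphasized criterion.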
Our main finding is that a 1 percentage point increase in the weight on a given criterion is associated with an increase in the raw score equal to about 14 percent of a typical raw-score change. This response occurs in the same month as the change in the criterion’s weight.
We consider several explanations for how firms change their raw ESG scores so quickly in response to changes in criteria weights. First, firms may legitimately and nimbly adjust their ESG behavior. To test this possibility, we incorporate data about ESG news from RepRisk. We begin by breaking down firms’ raw criteria scores into two components, one related to the criteria weights applied by the rater and the other a residual component. We focus on the first component. If the portion of raw scores associated with criteria weights reflects real firm behavior, then this component should predict a decrease in the likelihood that firms receive negative media coverage for their involvement in ESG mishaps. We find no evidence of this effect.
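The decomposition step can be sketched as a simple linear projection of raw scores on criteria weights, with fitted values as the weight-driven component and the rest as the residual. This is a minimal illustration on simulated data; the paper’s actual specification may include additional controls and fixed effects:

```python
# Illustrative decomposition of raw criterion scores into a weight-driven
# component (fitted values from a projection on criteria weights) and a
# residual component. The data below are simulated, not from the paper.
import numpy as np

rng = np.random.default_rng(0)
n = 200
weights = rng.uniform(0.01, 0.10, n)            # criterion weights
raw = 50 + 150 * weights + rng.normal(0, 5, n)  # simulated raw scores

# Project raw scores on weights (with an intercept) via least squares.
X = np.column_stack([np.ones(n), weights])
beta, *_ = np.linalg.lstsq(X, raw, rcond=None)

weight_driven = X @ beta        # component explained by criteria weights
residual = raw - weight_driven  # residual component

# The two components sum back to the raw score by construction.
print(np.allclose(weight_driven + residual, raw))  # True
```

In the tests described above, the `weight_driven` component is the object of interest: if it reflected real behavior, it should predict fewer ESG mishaps in the RepRisk news data.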
We also incorporate data from the U.S. Environmental Protection Agency’s Toxic Releases Inventory (TRI) Program. This analysis focuses on firms’ environmental criteria. Again, we break down firms’ raw environmental criteria scores into weight-driven and residual components. We find scant evidence that the weight-driven component has predictive power for the toxic chemicals firms release, recycle, recover, treat, or transfer over horizons of up to two years.
Another explanation is that Sustainalytics caters to firms, not the other way around. We test this possibility by focusing on instances where Sustainalytics stops using certain criteria in the creation of its ESG ratings. If Sustainalytics deemphasizes criteria when firms struggle to perform well under them, then raw scores should decline in the period leading up to criteria terminations. We find no evidence of this pattern. If anything, raw scores associated with terminated criteria exhibit a slight upward trend through the month of termination. We also find that raw scores during the month of termination are higher than raw scores among criteria that are newly introduced.
A third explanation is that firms manage their ESG ratings by pressuring ESG raters. This opportunity could present itself when ESG raters incorporate feedback from firms during the rating process. For example, in a 2017 methodology brief, Sustainalytics indicates that the last step of its rating process is to solicit feedback from the rated company:
“A draft report is sent to every company that we research for feedback. In our company contact process our goal is to gather feedback as well as additional and updated information from the company.” (Sustainalytics, 2017).
Simon MacMahon, head of ESG research at Sustainalytics, describes it as follows:
“Once we have completed our ratings process, we send the profile to the company for feedback. During those conversations, we’re looking for any additional information or clarification that can enhance our analysis. New information doesn’t always lead to a change in our rating, but we do listen. As ESG rating outcomes become more important, we certainly hear from people inside firms who forcefully argue for their point of view.”
The results are consistent with this explanation. Sustainalytics’ criteria measure one of three things: preparedness, disclosure, or performance. We find the relation between criteria weights and raw scores is greatest among criteria designed to measure preparedness. Many of these criteria involve the drafting of policies on topics such as money laundering, conflict minerals, and data privacy. Presumably, it is more difficult to quickly and credibly improve on disclosure- or performance-related criteria, such as establishing a corporate foundation or increasing the racial diversity of a board of directors.
A consequence of the ratings management hypothesis is that firms’ incentive to manage ESG ratings should vary with monitoring by ESG-focused stakeholders. For example, firms with more ESG-focused institutional investors should have greater incentive to manage their ESG ratings. We examine Form 13F filings and find that the relation between criteria weights and raw scores is more pronounced for firms with high ESG investor ownership. We also find that the effect is more pronounced among firms that derive more revenue from ESG-conscious customers.
Overall, the results show how firms influence their ESG ratings when they participate in the rating process. We conclude that firms “manage” their ESG ratings to appeal to ESG-focused stakeholders.
This practice remains in place as of 2021 under Sustainalytics’ new Risk Ratings. URL accessed July 11, 2023: https://hbr.org/2020/09/the-challenge-of-rating-esg-performance
Sustainalytics, 2017, “Sustainalytics’ ESG Rating Research Methodology”.
Sustainalytics, 2021, “ESG Risk Ratings – Methodology Abstract”.
This post comes to us from professors Jess Cornaggia and Kimberly Cornaggia at Pennsylvania State University. It is based on their recent article, “ESG Ratings Management,” available here.