AI Might Make a Great Proxy Adviser

Proxy advisers exercise immense influence through their voting recommendations to shareholders. Corporations, aware of the importance of those recommendations, expend considerable resources to ensure that the recommendations align with management’s preferences, often by hiring governance consultants to sway the outcome. These consultants may be independent third parties who lobby the adviser on the firm’s behalf, or they may be the consulting arm of the adviser itself. In the latter case, a regular criticism by practitioners, academics, and regulators is that the proxy adviser acts as both paid consultant to and judge of the corporation. Given these dual roles, concerns have emerged about the objectivity of proxy advisers’ recommendations. Measuring this influence empirically has remained elusive, however, because of data limitations and the often-hidden identity of consulting clients.

Proxy advisers broadly dismiss these concerns by pointing to their annually released voting guidelines, which they claim provide an objective framework for recommendations. They further assert that their consulting services are wholly independent from their recommendation functions. Nonetheless, these assurances do little to alleviate suspicion, and firms remain motivated to hire consultants who can shape proxy adviser recommendations.

In a new paper, we introduce a novel approach: using artificial intelligence (AI) to serve as an unbiased proxy adviser. We use an AI model to issue binary (“For” or “Against”) recommendations on shareholder proposals, relying exclusively on publicly available guidelines from ISS, the largest and most influential proxy adviser, alongside textual information from proxy filings. To arrive at a binary recommendation, the model must interpret ISS guidelines that are often subjective and conditional. Because the AI has no financial relationship with the firms it evaluates and follows the stated guidelines, its recommendations provide a unique benchmark for evaluating potential bias in traditional proxy adviser decisions.
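To make the setup concrete, the sketch below shows one way a large language model could be asked to return such a binary recommendation from a guideline and the proposal text. It is a minimal illustration under our own assumptions (the OpenAI client, the model name, and the prompt wording are placeholders), not the implementation used in the paper.

```python
# Illustrative sketch only: a minimal way to elicit a binary "For"/"Against"
# recommendation from a large language model. The OpenAI client, model name,
# and prompt wording are assumptions for illustration, not the paper's setup.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def ai_recommendation(proposal_text: str, guideline_text: str) -> str:
    """Return 'For' or 'Against' for a shareholder proposal."""
    prompt = (
        "You are a proxy adviser. Using only the ISS voting guideline and the "
        "proxy filing text below, recommend how shareholders should vote.\n\n"
        f"ISS guideline:\n{guideline_text}\n\n"
        f"Proposal and statements from the proxy filing:\n{proposal_text}\n\n"
        "Answer with a single word: For or Against."
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # hypothetical model choice
        messages=[{"role": "user", "content": prompt}],
        temperature=0,   # deterministic output for reproducibility
    )
    answer = response.choices[0].message.content.strip()
    return "For" if answer.lower().startswith("for") else "Against"
```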

The AI in our study makes recommendations that align closely with ISS in most cases. These recommendations appear to broadly capture shareholder preferences; controlling for the effect of the ISS recommendation, a positive recommendation from the AI is associated with significantly higher shareholder support. We then examine the role of third-party governance consultants, which typically represent management’s interest in opposing these shareholder proposals. We find evidence that these consultants influence proxy adviser recommendations: in cases where the ISS recommendation opposes a proposal, the AI is significantly more likely to support the proposal if the firm has disclosed employing an external governance consultant. While our analysis cannot definitively identify the source of influence involving these consultants, it supports concerns about the dual roles and incentives of proxy advisers and their affiliates. Our findings therefore speak directly to regulatory debates about the transparency and independence of proxy advisers.

Our study focuses exclusively on shareholder proposals for several reasons. Most important, these proposals generally include both a supporting statement from the shareholder sponsor and an opposing statement from management, so the AI can evaluate both sides’ arguments and how they fit ISS’s policies. Second, ISS policies on these proposals are explicitly stated, and the statements contain the information those policies consider; the proposal can therefore be evaluated based solely on the statements, without the additional data that other proposal types would require. Finally, shareholder proposals are almost universally opposed by managers, so we can cleanly evaluate whether hiring an external consultant reduces the likelihood that a management-opposed proposal passes.

Our study begins with an analysis of the annual voting guidelines released by ISS for shareholder proposals. In these guidelines, ISS states whether it will recommend a vote “For” or “Against” each type of proposal. In many cases, ISS lists various conditions that qualify for a “For” (“Against”) vote, or it states that it will make a recommendation case by case. We group these conditional and case-by-case recommendations into a “Case-by-case” category. We find that, over our 2009-2021 sample, ISS guidelines have increasingly relied on more subjective case-by-case decisions. In 2021, nearly 60 percent of guidelines on shareholder proposals were classified as “Case-by-case,” compared with less than 20 percent in the earlier years of the sample.

We next examine the AI’s voting recommendations, how they align with ISS’s, and whether alignment improves when the AI is provided with additional information. We test the AI in two progressive scenarios. First, we provide only the text of the proposal from the shareholder sponsor (“P”), its supporting statement (“S”), and the opposition statement from management (“O”). Second, we add the publicly available ISS voting guideline (“G”). We find that the AI’s alignment with ISS improves significantly when the guideline is provided, reaching approximately 79 percent consistency when all information is incorporated in our “PSOG” model.
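As a stylized illustration of the two scenarios, the sketch below assembles the model input from the proposal (“P”), supporting statement (“S”), and opposition statement (“O”), optionally appends the guideline (“G”), and measures alignment as the share of proposals where the AI and ISS recommendations match. The field names and the stubbed model call are placeholders rather than the paper’s code.

```python
# Stylized sketch of the "PSO" vs. "PSOG" comparison. Record fields and the
# stubbed model call are placeholders; the paper's actual inputs and model differ.
from typing import Dict, List

def build_input(record: Dict[str, str], include_guideline: bool) -> str:
    """Concatenate the proposal (P), supporting statement (S), opposition
    statement (O), and optionally the ISS guideline (G) into one text block."""
    parts = [
        record["proposal"],
        record["supporting_statement"],
        record["opposition_statement"],
    ]
    if include_guideline:
        parts.append(record["iss_guideline"])
    return "\n\n".join(parts)

def ai_recommendation(text: str) -> str:
    """Placeholder for the model call sketched earlier; returns 'For' or 'Against'."""
    return "For"  # substitute an actual LLM call here

def alignment_rate(records: List[Dict[str, str]], include_guideline: bool) -> float:
    """Share of proposals where the AI recommendation matches ISS's."""
    matches = sum(
        ai_recommendation(build_input(r, include_guideline)) == r["iss_recommendation"]
        for r in records
    )
    return matches / len(records)

# alignment_rate(records, include_guideline=False)  # "PSO" scenario
# alignment_rate(records, include_guideline=True)   # "PSOG" scenario
```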

A proxy adviser is responsible for representing shareholder interests. Our next tests evaluate the AI adviser’s performance in this regard by examining how closely its recommendations align with shareholder support. Our empirical design controls for the recommendations of both ISS and management, shown in prior work to be significant determinants of shareholder support, along with a variety of other firm and proposal characteristics. In this setting, we find that a positive recommendation from the AI proxy adviser is associated with significantly higher shareholder support, suggesting that the AI recommendations capture shareholder preferences beyond what a positive ISS recommendation alone conveys.

Considering the immense influence of proxy advisers, firms have an incentive to shape recommendations by hiring external governance consultants. Notably, both major proxy advisers, ISS and Glass Lewis, provide consulting services to corporations. Neither discloses the names of its clients, however, presenting a significant obstacle to researchers. We address this problem by identifying firms that voluntarily disclose hiring a governance consultant. In these disclosures, the name of the consultant is rarely provided.

We acknowledge that this approach does not perfectly align with the clients of the proxy advisers’ consulting businesses, and we make no claim that our results are driven entirely by proxy adviser consulting. However, we do note that both ISS and Glass Lewis have significant market share in the governance consulting industry. And while numerous other governance consultants exist, one of their key tasks often involves lobbying ISS or Glass Lewis. Our tests therefore aim to determine whether employing a governance consultant, regardless of whether that consultant also serves as a proxy adviser, influences the likelihood of alignment between ISS recommendations and managerial preferences.

Our empirical design focuses on the likelihood of a specific outcome: an ISS recommendation opposing the proposal (and therefore aligned with management preferences) when the AI proxy adviser issues a recommendation “For” the proposal based on ISS policies. We find that this outcome is significantly more likely when a governance consultant is present. This result is consistent with the consultant successfully influencing ISS to side with management’s preference to reject the proposal when the proposal may have otherwise earned support based solely on the guidelines.
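For readers who want a concrete sense of this test, a minimal version of the specification could look like the sketch below, estimated on the subsample where the AI recommends “For.” The variable names, controls, and data file are our own placeholders, not the paper’s.

```python
# Minimal sketch of the consultant-influence test described above. Variable
# names, controls, and data file are illustrative, not the paper's specification.
import pandas as pd
import statsmodels.formula.api as smf

# df: one row per shareholder proposal.
# iss_against    = 1 if ISS recommends against the proposal (management-aligned)
# has_consultant = 1 if the firm discloses hiring a governance consultant
df = pd.read_csv("proposals.csv")  # hypothetical data file
sample = df[df["ai_recommendation"] == "For"]  # cases where the AI supports the proposal

model = smf.logit(
    "iss_against ~ has_consultant + log_market_cap + C(proposal_type) + C(year)",
    data=sample,
).fit()
print(model.summary())  # a positive coefficient on has_consultant matches the pattern described
```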

Our analysis provides empirical support for longstanding concerns about the objectivity of proxy adviser recommendations, while also highlighting the potential of AI tools to introduce transparency and consistency into a process often criticized for opacity and subjectivity.

This post comes to us from professors Choonsik Lee at the University of Rhode Island and Matthew E. Souther at the University of South Carolina’s Darla Moore School of Business. It is based on their recent paper, “Beyond Bias: AI as a Proxy Advisor,” available here.
