How Artificial Intelligence Could Transform Proxy Advisory Practices

Proxy advisers play a pivotal role in corporate governance by providing institutional investors with recommendations on how to vote at shareholder meetings. These firms influence key corporate decisions, including the election of directors, executive compensation, and governance policies, thereby exerting a substantial impact on the management of publicly listed companies around the world. Given the magnitude of their influence, it is essential to scrutinize the methodologies behind their recommendations and to assess whether these recommendations are produced objectively and without bias.

Indeed, the process by which proxy advisers develop their recommendations remains opaque, giving rise to concerns regarding accountability, undue influence, and conflicts of interest. In a recent paper, I focused on one particularly salient aspect of those concerns: the actual and potential use of artificial intelligence (AI) by proxy advisers and the implications such use may have for corporate governance.

Existing Concerns and Global Regulatory Responses

There is limited disclosure about how proxy advisers formulate their voting recommendations. Moreover, there is little empirical evidence demonstrating that these recommendations consistently have a positive impact on shareholder value or corporate strategy, and assessments are divided on whether such advice aligns with long-term firm performance.

In addition, the potential conflicts of interest inherent in the proxy advisory business model – arising from the coexistence of advisory and consulting services and from structural incentives whereby the market value of recommendations increases as controversies expand – have raised fundamental doubts about the neutrality and reliability of proxy advice.

In response to these concerns, countries have adopted varying institutional approaches. In the United States, the initial regulatory focus was on institutional investors who use proxy advisers, requiring them to manage conflicts of interest and to independently assess the quality of the advice they receive. In 2020, a rule was introduced to directly regulate proxy advisers by classifying their advice as “solicitations” and mandating conflict-of-interest disclosures and opportunities for companies to respond.[1] However, key parts of this rule were rescinded in 2022,[2] and voluntary practices have since taken precedence.

In contrast, the European Union introduced a legal framework under the 2017 Shareholder Rights Directive II (SRD II), which requires proxy advisers to disclose their methodologies and evaluation criteria and to ensure transparency in managing conflicts of interest.[3] This has resulted in a dual-layered system combining oversight by the European Securities and Markets Authority (ESMA) and industry self-regulation through the Best Practice Principles Group (BPPG).

In Asia, regulatory responses vary. In Japan and South Korea, the main approaches are soft-law guidelines based on national stewardship codes,[4] while in India, a relatively comprehensive legal framework has been established under the Securities and Exchange Board of India (SEBI), including a registration system, comment procedures, and a grievance mechanism.[5] While the approaches differ, the question of how to position and regulate proxy advisers remains a shared policy challenge.

AI in Proxy Advisory Services – Risks, Benefits, and Preconditions

In recent years, the introduction of AI has begun to influence the institutional framework for proxy advisers. Currently, AI is primarily used in preliminary stages such as information extraction, classification, and scoring,[6] but there is a growing possibility that it will be used in the formulation of proxy recommendations.

Such use entails serious risks – including the black-box nature of decision-making processes, unjustified convergence within a single proxy adviser’s recommendations alongside unjustified divergence across different advisers, and the institutional embedding of conflicts of interest through training on historical data – which could exacerbate opacity, undue influence, and conflicts of interest. Moreover, AI obscures who is involved in making judgments and where accountability lies, potentially undermining the institutional basis for contesting recommendations or demanding explanations.

Despite these potential concerns, AI can, if properly designed and employed, offer two advantages: faster information processing and more consistent judgments. AI can quickly and efficiently process large volumes of unstructured data, enabling faster analysis of complex proposals and legal documents. It also allows evaluation criteria to be formulated in advance, which can reduce subjective variability and arbitrariness, thereby facilitating more impartial assessments.

Accordingly, the introduction of AI should not be rejected, but rather reconsidered so that its characteristics – such as efficiency and consistency – can be effectively harnessed within a regulatory framework. Four requirements should be emphasized: (1) ensuring transparency in data collection, analysis, and output generation; (2) establishing institutional mechanisms that ensure accountability and allow plausibility checks of AI-generated outputs, even when trade secrets limit detailed disclosure of model structure or logic; (3) designing and operating AI models so that they do not reproduce structural conflicts of interest embedded in past recommendations or client relationships; and (4) clearly assigning institutional responsibility to the people involved in producing recommendations. When these conditions are met, AI may justifiably be integrated as a core component of proxy advising.

Requirements for using AI are set forth clearly in the ESG Rating Regulation adopted by the European Union in 2024.[7] It requires detailed disclosure not only of evaluation models, input data, and conflict-of-interest policies, but also of the extent to which AI technologies are used. It thus demonstrates a clear institutional commitment to ensuring transparency and explainability in AI-based evaluations of ESG-related information.

Importantly, these principles are not unique to ESG assessments; rather, they represent a general regulatory framework for AI-assisted decisions and should be considered equally applicable to proxy advisory services.

Whistleblower Systems as Core Infrastructure for AI Governance

To make such institutional requirements function effectively, it is essential to adopt complementary measures that reflect the specific characteristics of AI technology. In particular, when the judgment-formation process is automated by algorithms, its structure is difficult to observe.

Even when model architectures and input data are disclosed, there are institutional and technical limits to third-party evaluations of the validity of outputs or the presence of underlying biases. Given this black-box nature of AI, the establishment of whistleblower systems – including protection mechanisms and monetary incentives – is essential for detecting and correcting misconduct or bias in AI-generated judgments.

This is especially important in the initial phases of model design, data selection, and the setting of weightings, where discretion or organizational pressure may be introduced, and internal reports can play a decisive role. A notable example is the proposed AI Whistleblower Protection Act, introduced in the U.S. Senate in 2025.[8] The bill protects employees and contractors who report violations of AI-related laws or substantial risks – including disclosures to external authorities and internal reports to supervisors or designated compliance personnel.

In the age of AI, whistleblower systems are not optional safeguards but essential infrastructure for ensuring the integrity of AI-supported advisory functions.

Conclusion

AI is poised to become an integral part of proxy advisory services, offering efficiency and consistency but also amplifying concerns over transparency, undue influence, and conflicts of interest. Given the global reach of major providers, international coordination and regulatory coherence are needed. At the same time, responses will and should reflect each jurisdiction’s legal and market structures. Institutional frameworks should ensure accountability in data processing and judgment formation and clarify the locus of decision-making and any embedded conflicts of interest. The challenge is not whether to promote or restrict AI, but how to integrate it responsibly into the governance of proxy advice.

ENDNOTES

[1] Proxy Voting Advice, Exchange Act Release No. 34-89372, 85 Fed. Reg. 55082 (Sept. 3, 2020).

[2] Proxy Voting Advice, Exchange Act Release No. 34-95266, Investment Advisers Act Release No. IA-6068, 87 Fed. Reg. 43168 (July 19, 2022) (rescinding in part the 2020 rule).

[3] Directive (EU) 2017/828 of the European Parliament and of the Council of 17 May 2017 amending Directive 2007/36/EC.

[4] For Japan, see Council of Experts on the Stewardship Code, Principles for Responsible Institutional Investors [Japan’s Stewardship Code], March 24, 2020 (originally published February 26, 2014; revised May 29, 2017 and March 24, 2020), available at: https://www.fsa.go.jp/en/refer/councils/stewardship/index.html; for Korea, see Korea Stewardship Code Council, Principles on the Stewardship Responsibilities of Institutional Investors [Korea Stewardship Code], December 16, 2016, available at: https://sc.cgs.or.kr/eng/about/sc.jsp.

[5] Currently, proxy advisers in India are regulated under the SEBI (Research Analysts) Regulations, 2014 and the Procedural Guidelines for Proxy Advisors. For the latter, see SEBI, Procedural Guidelines for Proxy Advisors, SEBI/HO/IMD/DF1/CIR/P/2020/147 (Aug. 3, 2020).

[6] ESMA, Artificial Intelligence in EU Securities Markets, ESMA50-164-6247 (2023), available at: https://www.esma.europa.eu/sites/default/files/library/ESMA50-164-6247-AI_in_securities_markets.pdf.

[7] Regulation (EU) 2024/3005 of the European Parliament and of the Council of 27 November 2024 on the transparency and integrity of Environmental, Social and Governance (ESG) rating activities, and amending Regulations (EU) 2019/2088 and (EU) 2023/2859, OJ L 2024/3005, 12.12.2024.

[8] AI Whistleblower Protection Act, S. 1792, 119th Cong. (introduced May 15, 2025).

This post comes to us from Masaki Iwasaki, an associate professor at Seoul National University School of Law. It is based on his recent paper, “Proxy Advisors under Artificial Intelligence: Unverified Reasoning in Shareholder Voting Recommendations,” published in Asian Journal of Law and Economics and available here.
