Should private social media companies have a free hand in deciding what content to allow on their platforms? Governments from the European Union to India to the U.S. states of Texas and Florida are increasingly answering no, even as their regulatory efforts must coexist with a range of self-regulatory initiatives created by the platforms. Meta – formerly Facebook – has gone furthest in establishing a self-regulatory mechanism, creating an appeals process for Facebook and Instagram users who disagree with its content moderation decisions in the form of a quasi-judicial Oversight Board. This body of independent experts, established in late 2020, determines whether content submitted for its review is consistent with the social media company’s Community Standards and Values, as well as with international human rights law.
In a new paper, we consider the impact that the Oversight Board is likely to have on Meta and on the content moderation landscape more broadly. While many commentators have dismissed the board because of its limited authority and enforcement powers, we argue that it is having an impact. Drawing on the experience of international human rights tribunals such as the European Court of Human Rights, the Inter-American Commission on Human Rights, and UN human rights treaty bodies, we argue that the Oversight Board is gradually expanding its ability to hold Meta accountable and generating decisions that clarify how international human rights law shapes the responsibility of social media companies when moderating online content.
It is important to highlight several key facts about the board. First, it operates entirely independently of Meta; it selects its own members, deals with the company at arm’s length, and is funded by an irrevocable trust. Second, the precise function of the board is still contested. It describes itself as a company-based grievance mechanism, but it hears only 20 or so cases each year (out of more than 1 million appeals), and most of its work is focused on creating legal norms. Meta’s articulated purpose in creating the board was to have it answer difficult legal questions, while civil society wanted the board to function as a check on Meta. Third, the board applies both Meta’s internal rules and international human rights law, giving primacy to the latter.
Our paper first discusses why it is appropriate to analogize the board to an international human rights tribunal. For example, tribunals and the board are tasked with holding their creators to account, which explains why they are often delegated only limited authority. The board and human rights tribunals also issue binding and non-binding guidance, have promotional and quasi-judicial functions, face overwhelming demand, and have limited ability to enforce their decisions. Of course, there are also differences – most importantly, the board is the creation of a private company, not of states. But we think that the similarities provide a basis for looking to the experiences of international human rights tribunals for insight into how the board can have an impact despite the limitations on its authority.
Some might argue that comparing the two suggests the Oversight Board will not have much impact, given the many limitations of international human rights tribunals. Yet those tribunals have (1) incrementally expanded their authority to hold states accountable for violations and (2) developed international norms in response to new and emerging human rights problems. These achievements are especially noteworthy given that the tribunals lack the authority to compel compliance by sovereign states.
The Oversight Board is already beginning to chart a similar trajectory. First, it is asserting its authority to hold Meta accountable for following its content moderation policies and for bringing those policies into conformity with human rights law. Second, it is developing a fact-specific jurisprudence that explains how the norms in human rights treaties apply to private social media companies. Most decisions involve freedom of expression, but the board has also considered cases involving privacy, nondiscrimination, and economic and social rights.
On the first issue – accountability – the board is already following strategies that human rights tribunals have developed. For example, it has refused to defer to Meta’s determination that an appeal is rendered moot by the removal or restoration of the content after the board selects the case for review. It is also pushing to narrow the exception to its jurisdiction over content Meta removes in response to a “valid report of illegality” by a state. With respect to compliance, the board is carefully tracking Meta’s responses to its binding rulings and policy recommendations. Its recently released first annual report creates metrics to grade the company’s responses and establishes a monitoring team to follow up on its decisions and policy guidance.
On the second issue – norm generation – the Oversight Board’s more than two dozen decisions to date have tackled a range of unresolved issues concerning how international human rights law applies to corporations. The cases are highly diverse, involving disputes relating to speech in support of the Russian opposition politician Alexei Navalny, LGBT users’ reclaiming of anti-gay slurs in Arabic, and images of breast cancer survivors, to name just a few. The decisions discuss a range of novel issues relevant to content moderation. For example, the Oversight Board held that the human rights responsibilities of social media platforms are heightened when the platforms are the dominant source of news in a country or region, during armed conflicts, and when minorities face government suppression. The board has also provided guidance on corporate due diligence assessments, identifying the risks that Meta should consider with respect to issues such as automated decision-making, machine learning, technical design, and transparency. These decisions are likely to influence initiatives by national governments and the European Union to regulate internet platforms.
We argue that the board should continue this trajectory, developing strategies similar to those that international human rights tribunals have used to promote accountability and develop international norms. However, this sanguine vision of the Oversight Board’s future is not assured. The board’s efforts to expand its authority may provoke resistance by Meta, and there is a risk of negative spillover if states or companies misuse its jurisprudence to weaken human rights standards. Nonetheless, as it continues to provide Meta with interpretive guidance and recommendations, we expect that the board will gradually shape the company’s content moderation policies and its culture, as well as international human rights law writ large.
This post comes to us from professors Laurence R. Helfer at Duke University School of Law and Molly K. Land at the University of Connecticut School of Law. It is based on their recent article, “The Facebook Oversight Board’s Human Rights Future,” available here.