The advisory group that reviews Facebook and Instagram content moderation decisions published its first annual report on Wednesday, marking the end of its first year of operation.
In 2021, the Oversight Board received over a million appeals from Facebook and Instagram users. Most of these appeals asked the board to restore content that Meta's apps had removed for violating rules against hate speech, violence, and bullying. The board issued rulings and clarifications on 20 cases, which it called "significant." In 70 percent of the cases it reviewed, the board overturned Meta's original decision.
"There was clearly a huge pent-up demand among Facebook and Instagram users for some way to challenge Meta's content moderation decisions to an entity independent of Meta," the board said in the report.
The Oversight Board's most notable decision to date concerned the potential reinstatement of former President Donald Trump, who was suspended from Facebook after supporting the U.S. Capitol riot. The board responded to that decision by asking Meta to clarify the rules under which the former president was removed from the platform. "In enforcing this penalty, Facebook did not follow a clear published process," the board wrote at the time, adding that Facebook had no rule providing for an "indefinite" suspension like the one imposed on Trump.
Beyond its decisions, which set a precedent of sorts for future policy enforcement, the board also makes broader recommendations to Meta on how the company should approach specific aspects of content moderation and the rules it should set.
In less high-profile cases, the board has recommended that Meta tighten Facebook and Instagram's rules against doxing, demanded that the company publish a transparency report on how well it enforces its COVID-19-related rules, and asked it to prioritize fact-checking for governments that spread health misinformation through official channels.
The Oversight Board made 86 policy recommendations in its first year of operation. Meta has implemented several of the board's suggestions for improving moderation transparency, including giving users more information when they violate the platform's hate speech rules and telling them whether AI or a human moderator made an enforcement decision, while ignoring others entirely. These outcomes are tracked in the annual report, which sheds some light on how much influence the group really has and how often Meta implements its recommendations, in full or in part.
The Oversight Board reviews content moderation cases from around the world, sometimes weighing linguistic and cultural nuances that Meta itself has failed to factor into its moderation decisions, automated or otherwise. Facebook whistleblower Frances Haugen has repeatedly raised concerns about the company's ability to moderate its social platforms in non-English-speaking markets. According to the report, half of the board's decisions concerned countries in the Global South, including parts of Latin America and Africa.
Initially, the board only considered cases in which users requested the restoration of content on Instagram and Facebook, but after a few months it expanded its scope to include requests to remove content. However, the board's decision-making remains limited to questions about individual posts, not the many other features people use on Instagram and Facebook.
The board says it wants to expand its scope to advise Meta on moderation issues affecting accounts and groups on its platforms, not just individual posts. It is currently in a "dialogue" with the company, which still has the final say over what the semi-independent advisory group can actually do.
Credit: techcrunch.com