Meta’s independent appeals system received about 1.1 million cases in its first year challenging the company’s decisions to remove content from Facebook and Instagram.
The controversial posts, most of which originated in the US, Canada or Europe, had been removed for violence, hate speech or bullying. Of the 20 cases on which the Oversight Board published decisions, it ruled against Meta in 14.
One case concerned the removal of images of a woman’s breasts in a post about breast cancer.
Others involved an image of a dead child accompanied by text asking whether retaliation against China for its treatment of Uighur Muslims was justified, and the decision to ban Donald Trump after the Capitol Hill riots. The board overturned Meta’s decision to remove the first two posts but backed its decision to ban Mr Trump – although it criticised the “uncertain” duration of the ban.
The board has just released its first annual report covering October 2020 to December 2021.
Anyone who disagrees with a decision to remove content, including Meta itself, can appeal. Of the 1.1 million cases received over the 14 months, only 47 came from the firm.
On average, around 2,600 cases were reported per day.
However, with Facebook’s more than two billion users worldwide, those appeals represent a tiny fraction of the platform’s vast content. It is also worth noting that relatively few complainants were from outside Western countries.
Of all matters referred to the Board:
1% related to Instagram posts; the rest concerned Facebook content
94% asked for content to be reinstated, while only 6% wanted content removed – and most appeals concerned the appellant’s own posts rather than someone else’s
49.4% came from the US and Canada, while only 1.7% came from Sub-Saharan Africa and 2.7% from Central and South Asia
The Oversight Board, sometimes dubbed Facebook’s “Supreme Court”, was created by Meta boss Mark Zuckerberg. It operates independently, although Meta covers its salaries and other costs. Its members include journalists, human rights activists, lawyers and academics.
Thomas Hughes, the board’s director, described the relationship between the board and Meta as “constructive but important”.
The board has also made 86 recommendations to the tech giant, including translating its policies into more languages and being more specific in explaining why content has been removed under its hate speech rules.