Washington Institute For Defence & Security
Washington, DC 20001

Facebook has yet again come under fire for failing to stem anti-vaccination disinformation on its platform. But this failure is nothing new: Facebook has repeatedly hosted disinformation campaigns of all kinds, from far-right extremist merchandise pages to paid adverts, placed by authoritarian governments, that are filled with false information. No action taken against Facebook has yet been enough to convince the tech giant to combat the rise of extremist content on its platform.

During the Burmese military’s ongoing genocide of the Rohingya, Facebook proved to be a pivotal tool for the military, amplifying tensions in northern Rakhine State and allowing citizens to publish incendiary posts, many of which encouraged others to commit acts of violence. A United Nations report found that Facebook played a “determining role” in spreading anti-Muslim hate speech in Burma. For example, the anti-Muslim Buddhist monk Ashin Wirathu used Facebook to post videos claiming that a young Buddhist girl had been raped by a Muslim man, sending an enraged mob after the accused. A later investigation found that the accusations were false. Wirathu’s account was removed, but the videos were not taken down. Social media, and Facebook in particular, remains a popular way to grow support for movements, ideologies, and individuals. By failing to curb growing support for extremists like Wirathu, these social media companies become part of the problem and ensure the continuation of hate speech, dehumanising rhetoric, and propaganda, all of which fuels further polarisation and extremism.

Facebook’s community standards define hate speech as a “direct attack against people on the basis of what we call protected characteristics.” Unfortunately, this narrow definition means that attacks on concepts, or on any entity other than people, are fair game. Following international outrage and calls for action, Facebook did expand its definition, acknowledging that online hate can lead to offline violence, but hate speech remains on the site because the company fails to remove it fully. It may, for example, remove or suspend individual accounts while failing to take down pages or videos related to or produced by those individuals. On a post about Wirathu’s anti-Rohingya rhetoric published by The Straits Times, comments calling Muslims and Rohingya ‘vermin’ are still up, four years later. Even when users report posts or comments containing hate speech, they often receive messages from Facebook saying that the content did not violate the community guidelines. The issue, then, is not that Facebook cannot define hate speech, but that it is ill-equipped to identify it in practice.

Whilst Facebook has taken steps to change its approach with the Rohingya, it has yet to adequately address the Chinese state’s use of the platform to spread disinformation about its Uyghur population. Despite Facebook being banned in China since 2009, the Chinese government is still able to pay for adverts and maintain accounts for its officials and affiliated news outlets. In fact, according to Social Blade, a website that tracks the most-liked pages on Facebook, four Chinese state-affiliated news outlets (CGTN, China Daily, China Xinhua News, and People’s Daily, China) rank among the twenty most-liked pages. These outlets have exploited this access by encouraging users to like their pages through innocuous posts, and then later publishing videos or posts containing incorrect or misleading information, slowly inculcating users with what the outlets want them to believe.

These outlets often stage news reports, producing sanitised videos alleging that Uyghurs live comfortable lives, and employ state officials to aggressively respond to any comments or posts disagreeing with China’s official statements on Xinjiang, all of which takes place on Facebook without any firm pushback from the company. A video posted by CGTN on 15 July shows a clip of Zhao Lijian, a Chinese Foreign Ministry spokesperson, accusing the United States of ‘committing forced unemployment and forced poverty’ through its Uyghur Forced Labor Prevention Act. Although Facebook labels CGTN as China state-controlled media, it allows CGTN to post similar videos denouncing the West for taking action against China.

Facebook’s response has been entirely too slow. It has said it will monitor the situation in Xinjiang and wait for the assessments of organisations such as the United Nations before acting on public concerns over human rights violations. Rather than take immediate action, Facebook avoids holding itself accountable for how its platform has been repeatedly abused and weaponised against minorities. When the Wall Street Journal reported that several Facebook employees had raised concerns about ads taken out by Chinese organisations depicting an idyllic life for Uyghurs in China, a Facebook spokesperson noted that those ads “did not violate our current policies,” despite evidence from a number of organisations, including the United Nations, stressing China’s habit of using ads to promote false information and government propaganda.

Moreover, the burden of identifying hate speech and extremist content has fallen mostly upon Facebook’s users. Instead of banning the Chinese government from buying ad space to promote disinformation, Facebook relies on unverified, arbitrary individuals to flag potentially harmful posts. Users are encouraged to report anything they find abusive; however, this method does not guarantee that the content will be removed quickly, or removed at all. By leaving problematic posts up even for a few hours, Facebook risks exposing thousands of its users to disinformation and increasing the chances of further radicalising people who are already vulnerable.

Western governments are attempting to hold Beijing accountable for its maltreatment of the Uyghurs; these governments should also hold social media and tech companies accountable for their role in allowing countries like China and Burma to use their platforms to spread disinformation. If governments are serious about combatting disinformation campaigns, they must take a firmer line with social media companies like Facebook and impose serious consequences on those that continue to allow abuse and disinformation to proliferate on their platforms.
