Hate on social networks: great power, great responsibility


Massive anti-racist protests, nearly 150,000 deaths from coronavirus, and an ever-widening social divide stoked from the political arena: these are turbulent times in the United States. And in many of the debates in which the country looks at itself in the mirror and asks what is happening, one name keeps appearing: Facebook, probably the company that best exemplifies the dynamics of social and commercial communication in the twenty-first century.

The company created by Mark Zuckerberg – founder, CEO and largest shareholder – is in the spotlight above all for its alleged role as an amplifier of hate messages, both political and racial. This is not the first time Facebook has faced such accusations: the debate has been alive since the 2016 presidential election, which carried Donald Trump to the White House. However, the death of George Floyd at the hands of the police, and the massive anti-racism demonstrations it sparked and continues to fuel, have increased the pressure on Facebook.

In mid-June, three weeks after Floyd’s death, civil rights activist groups launched the #StopHateForProfit campaign, accusing Facebook of profiting economically from the publication of materials that incite violence and hatred. 

Little by little the campaign gained momentum, and by July 1 it had already persuaded 600 companies to pull their advertising off Facebook. Some of them are extremely powerful advertisers, among them Pfizer, Starbucks and Unilever.

The trigger for the protest was, in large part, Facebook’s handling of a post by Trump. In connection with the protests over Floyd’s death, the U.S. president wrote: “When the looting starts, the shooting starts”. Twitter hid the message, considering it a glorification of violence. Facebook left the post up without any kind of warning.

A more reputational than economic impact

Several weeks after the start of the campaign, it is clear that its impact has been more reputational than economic: over the past month and a half, the stock, which has been at historic highs since the company went public in 2012, has fallen by barely 2%.

One reason is that Facebook has a highly diversified advertising portfolio, with significant weight from small and medium-sized businesses. According to data from MoffettNathanson, the 100 largest advertisers in the United States account for 60% of advertising revenue on general-interest television and nearly 50% on cable television. In the case of Facebook, they account for only 20% of its advertising revenue.

Accordingly, if Facebook takes steps to head off these accusations of spreading hate on social networks, it will not be because of economic pressure. It will do so because it is convinced of its social responsibility, or because regulation makes it obligatory. 

Zuckerberg’s company has spent years touting its fight against hate messages while walking a fine line in defence of freedom of expression. When are certain messages part of a legitimate political debate, and when are they an incitement to hatred?

Facebook has to answer this question constantly, and insists that it takes the fight against hate speech very seriously, claiming that its automated systems detect 89% of harmful messages before they are published. But, as Jonathan Greenblatt, CEO of the Anti-Defamation League, founded in 1913, says, “Ford couldn’t sell their cars if they said that 89% have working seat belts”.

Whether or not Facebook’s commitment to healthier political and social discourse is credible and honest, the pendulum seems to be swinging towards greater regulation, especially if, as an article in the British weekly The Economist suggests, Democrat Joe Biden wins the November presidential election. At stake is the regulation of social and political debate according to basic rules in which everyone acknowledges their responsibility. And the responsibility of powerful technology companies is very large indeed.