
Holocaust denial hate speech lawsuit against Elon Musk’s Twitter in Germany

Elon Musk, the owner of Twitter and a self-described “free speech absolutist,” is facing legal action in Germany over how the social media site handles antisemitic hate speech.

HateAid, a group that fights hate speech, and the European Union of Jewish Students (EUJS) filed a lawsuit in the Berlin regional court yesterday, alleging that Musk-owned Twitter is failing to uphold its own policies against antisemitic content, including Holocaust denial.

Germany has strict laws prohibiting antisemitic hate speech, making the Berlin court an appealing venue to hear such a challenge. Holocaust denial is a crime in Germany.

“Despite the fact that Twitter’s Rules and Policies forbid antisemitic hostilities, the platform leaves a great deal of such material online, even when users report it,” the plaintiffs claim. According to a study by the Center for Countering Digital Hate, 84% of posts containing antisemitic hate speech were not reviewed by social media platforms. The plaintiffs argue that Twitter therefore knows Jews are routinely the targets of public abuse on its platform, that antisemitism is pervasive in our culture, and that the platform’s response is far from adequate.

Musk has repeatedly asserted that Twitter will abide by all applicable laws in the nations in which it conducts business (including European speech laws). Nevertheless, he hasn’t yet addressed this particular lawsuit in the media.

Since the Tesla CEO took over Twitter at the end of October, he has significantly reduced Twitter’s headcount, cutting staff in regional offices across Europe, including in Germany, as well as in key safety functions like content moderation. Additionally, he has completely disbanded Twitter’s Trust and Safety Council and reinstated a large number of accounts that had previously been banned for breaking the social media platform’s rules, which appears to be the perfect environment for hate speech to proliferate unchecked.

Anecdotal reports and some studies have suggested that hate has increased on the platform during Musk’s roughly three-month tenure as CEO of Twitter, and many former users have attributed their departure since his takeover to a rise in hate and abuse.

Notably, according to Bloomberg, which first reported on the litigation, the lawsuit is concentrated on instances of hate speech that have been posted to Twitter over the past three months while Musk has been in charge.

The lawsuit looks like an interesting legal test for Musk, as it applies an external lens to how the platform is enforcing its anti-hate speech policies in an era of erratic (and drastic) operational reconfiguration under the new owner’s watch.

Although the wealthy libertarian generally tries to dispel accusations that he’s steering Twitter into toxic waters through a combination of denial, fishing for boosterism, targeted attacks on critics, and ongoing self-aggrandizement (couching his Twitter speech “reforms” as a quasi-neo-enlightenment effort to “free the bird” and serve as a handmaiden to the future of human civilization), he did acknowledge an early “spike” in hate speech, as he christened the sudden uptick seen in the period directly after his takeover of Twitter.

At the time, he tweeted a chart to support his claim that Twitter engineers had reduced hate speech impressions to a third below “pre-spike levels.” He added, however, that the spike was related only to a select few accounts, not to a more general decline in the effectiveness of content moderation since he took over and began tearing up the previous rulebook.

Although Musk appears to enjoy creating the impression that he is a “free speech absolutist,” the reality, as always with the space cowboy, appears to be far less cut-and-dried.

He has, for instance, unbanned Kanye West (aka Ye) and then re-banned him for tweeting an image of a swastika merged with a Star of David (the latter a symbol of Judaism, the former a Nazi emblem). He has also made a number of other seemingly unilateral and arbitrary decisions about whether to censor (or not) specific posts and/or accounts.

He also restored former US president Donald Trump’s account, which had been suspended following the violent assault on the US Capitol by Trump supporters. Yet Musk has steadfastly refused to reinstate InfoWars’ hate preacher Alex Jones, apparently taking issue with Jones’ infamous conspiracy theory that the Sandy Hook school shooting victims were actors.

Other choices made by Musk regarding Twitter content moderation seem to be motivated solely by self-interest, like his decision to block a user account for tweeting the coordinates of his private jet, which he dubbed “assassination coordinates.” A number of journalists who covered the incident were also suspended by him last year after he claimed their reporting jeopardized his personal safety. He later changed his mind after facing a barrage of criticism for allegedly censoring the free press.

Musk has also invited a number of hand-picked hacks in to sift through internal documents and publish what he’s dubbed the “Twitter Files,” in what appears to be a naked (but very tedious) attempt to shape the narrative about how the platform’s former leadership handled content moderation and related issues, such as inbound takedown requests from state agencies. In doing so, he has added fuel to conservative conspiracy theories claiming that the platform’s former leadership suppressed right-wing views.

(That runs contrary to what actual research conducted by Twitter, prior to Musk, found when it examined its algorithmic amplification of political tweets: the mainstream political right enjoyed higher algorithmic amplification than the mainstream political left in six of the seven countries studied.) But who’s interested in non-cherry-picked data?

Musk is also quite capable of dishing out abuse and hate on Twitter himself, megaphoning trolling and mockery of “wokism” to feed his right-wing base at the expense of groups disproportionately vulnerable to abuse, such as the trans and non-binary people whose pronouns he has purposefully mocked.

Musk has also triggered abusive pile-ons by his followers by tweeting and/or amplifying targeted attacks on specific people, such as the one that compelled Yoel Roth, Twitter’s former head of trust and safety, to flee his own house. So much for his professed concern about threats to personal safety.

Even a casual observer of Musk-Twitter would undoubtedly come to the conclusion that the Chief Twit lacks consistency in his decision-making. If this arbitrariness results in patchy and partial enforcement of platform policies, it will be bad for Twitter users’ trust and safety (and RIP for any notion of “conversational health” on the platform).

We’ll have to wait and see whether Musk’s contradictions also lead to a German court ordering Twitter to remove unlawful hate speech in response to the HateAid-EUJS lawsuit.

According to Josephine Ballon, head of legal for HateAid, “Twitter’s actions are based solely on its own, opaque rules, relying on the fact that users have no chance to appeal — for example, when it comes to the non-deletion of incitements to hatred.”

“The authorities have never brought a case against a social network for this. That is why civil society must get involved and find ways to press for the deletion of such content. As an NGO, we represent the communities affected by incitement to hatred and hostility on a daily basis. In this way, we can gradually increase the pressure on the platforms.”

Intriguingly, it doesn’t appear that the lawsuit is being brought under Germany’s long-standing hate speech takedown law, known as NetzDG, which, at least in theory, gives regulators the authority to fine platforms up to tens of millions of dollars if they don’t promptly remove illegal content that has been reported to them.

However, as Ballon points out, there have been no NetzDG prosecutions for content takedown violations (although messaging app Telegram was recently fined a small amount for breaches related to not having proper reporting channels or legal representation in place).

One local attorney we spoke to, who is not directly involved in the HateAid-EUJS case, suggested there has been some sort of tacit agreement between federal authorities and social media firms that Germany won’t enforce NetzDG on the content moderation issue, with an eye on incoming EU digital regulation, the Digital Services Act, which starts to apply to larger platforms later this year and harmonizes governance and content reporting rules across the bloc under a single standard.

The parties involved in this hate speech lawsuit against Twitter say they seek legal clarification on whether people (and advocacy groups) can file a lawsuit to have “punishable, antisemitic and inciting content,” such as Holocaust denial, removed from the site even if they are not personally offended or threatened by it.

They clarify [emphasis theirs] the following in a FAQ on a webpage outlining their arguments:

The court will determine whether we have the right to make this demand. In light of Twitter’s Rules and Policies, it is currently unclear to what extent users have the right to request the removal of such content when they are not personally impacted. We contend that Twitter must uphold the self-imposed regulations it brags about in its contract terms in order to remove antisemitic posts and ensure that Jews can feel secure using the service.

By taking this action, we are enforcing Twitter’s contractual obligations. In our view, platforms should be compelled to actually remove antisemitic content.

If they are successful, they say they hope it will be simpler for users to demand the removal of illegal content from other significant platforms as well. Therefore, if the lawsuit succeeds, there might be wider repercussions.

“With this fundamental procedure, we want the courts to establish clearly that platforms like Twitter are already required by their own user agreements to protect users from antisemitic digital violence,” they continue. “Such a ruling will make it simpler for users in the future to defend their rights against the major platform operators. The idea behind it is straightforward: if the contract’s terms state that hate speech is prohibited, then Twitter owes users the removal of such content. Enforcing that, for instance through NGOs like HateAid, could then make the Internet more secure.”

Twitter was contacted for comment on the lawsuit but, since Musk took control, the company no longer has a regular external communications function and has not responded to any of TechCrunch’s inquiries. (But we persisted.)

It’s important to remember that, prior to Musk, Twitter wasn’t receiving a lot of praise for its ability to combat unlawful hate speech either.

According to the most recent EU report monitoring the bloc’s anti-hate speech code, a voluntary agreement that Twitter and a number of other social media platforms have been signed up to for years, Twitter was performing comparably poorly to other signatories prior to Musk’s takeover when it came to promptly responding to reports of unlawful hate speech: the Commission reported that it removed just 45.4% of such content within 24 hours (vs an average of 75% for signatories). Twitter also logged just under 1,100 reports of unlawful hate speech during the monitored period from March 28 to May 13 (Facebook received the most reports). So it appeared to be both hosting a sizable amount of illegal hate speech relative to peer platforms and lagging behind competitors in how quickly it removed such content.

So it will be interesting to see where those metrics stand later this year when (or if) Musk-owned Twitter submits a new set of data to the Commission.
