The European Union’s top diplomat has criticized Twitter owner Elon Musk’s controversial move to end free access to the platform’s APIs by February 9, warning that it could threaten researchers’ ability to study disinformation at a crucial time, as Russia aggressively weaponizes disinformation to try to provide cover for its war in Ukraine.
High Representative Josep Borrell urged more investigation into how social media platforms are being used to spread Kremlin propaganda, in a speech today detailing how the bloc has responded to Russia stepping up online disinformation campaigns since its invasion of Ukraine last February.
“We need to learn more about social media platforms. Study how misinformation spreads, where it originates, and what the effects are,” he urged in his speech to the EU’s diplomatic service, the European External Action Service (EEAS).
The EU is concerned by the news that Twitter plans to restrict free access to its APIs, a move Borrell warned would be “a serious step back from early commitments.” He singled out Twitter — and Musk as its owner — for naming and shaming.
🔴 LIVE NOW: Keynote speech by HR/VP @JosepBorrellF at the EEAS Conference "Beyond disinformation – EU responses to the threat of foreign information manipulation" ⤵ https://t.co/EjFog6rGMo
— European External Action Service – EEAS 🇪🇺 (@eu_eeas) February 7, 2023
“Early commitments” most likely refers to the EU’s voluntary initiative encouraging social media platforms to address the “fake news” crisis, which the Commission unveiled in 2018. Twitter was a signatory of the EU’s Code of Practice on Online Disinformation (as it was then sometimes known) from the start.
Separately, since March last year, Twitter has been legally obligated to ensure its platform does not host the Kremlin-backed media outlets Russia Today (RT) and Sputnik, as the EU has banned certain Russian state media (plus any subsidiaries), a ban that would be meaningless if the outlets could still distribute content in the EU via social platforms.
Since the bloc’s 2018 Disinformation Code was established, disinformation research has advanced significantly. Much is now known about the various methods and strategies employed to distort the truth, spread false information, and amplify it, whether to influence public opinion, undermine faith in democratic institutions, or sway the outcome of free and fair elections.
But, as with cybersecurity, threats to the integrity of information never stand still. Borrell also brandished a copy of the EU’s first report on foreign information manipulation and interference, which he said showed “clear trends in the threats against our information space,” and stressed the need for Western democracies to take additional precautions to defend themselves against malicious foreign info ops by investing more in the study of information threats.
The report’s main finding confirms “a new wave of disinformation techniques,” which it says Russia is using to create false images and videos in an effort to spread and amplify anti-Ukrainian propaganda. It also warns of what it calls “worrisome” cooperation between threat actors such as Russia and China.
According to Borrell, “diplomatic accounts and state-controlled channels manipulate perceptions about the European Union, blaming the West for all the effects of the conflict in Ukraine and amplifying lies about military-led Western biolabs in Ukraine aiming to target its neighbors. It is something that needs to be addressed.
“We must foresee and prevent such activities with real-world actions and measures. We must continue to assist Ukraine… Finally, we need to work harder at developing resistance to authoritarian regimes that try to sway public opinion and manipulate the information.”
While propaganda as a tactic is nothing new in human history, Borrell claimed that the Internet and other digital tools that speed up information distribution have clearly increased the threat associated with information manipulation. He argued that liberal Western democracies must therefore organize a commensurately serious response to such a rapidly growing disinformation risk.
It’s fair to say that EU legislators still haven’t found a convincing “front foot” for effectively combating online disinformation, despite some recent high-level attention to the issue, with the high representative seeking to build on existing efforts to raise awareness of Kremlin propaganda around the Ukraine war (such as the EU vs Disinformation campaign).
The problem is complicated because stronger action risks handing bad-faith actors ammunition for false claims that efforts to protect the integrity of information amount to censorship of free speech. As Borrell noted in his speech, it is the weaponization and systematic mass production of false speech by authoritarian regimes that drowns out the voices of real people, making a flood of manufactured disinformation the real threat to democratic free speech.
“We have observed the creation of fake networks to disseminate [Kremlin] misinformation. They have been oversaturating the information space so that no other voice can be heard. People have been misled and confused with constantly changing narratives and versions of events. The goal is to get everyone to stop believing in news reports,” he claimed.
“How can I tell the difference between the lies and the truth? To make people think that everything is ultimately a lie. And they want to undermine public confidence in all forms of media as well as in our institutions. And I want to stress that we need to take this very seriously right now. It’s not just a specialist’s problem. It affects all parties, not just those involved in the information system. The public needs to be informed about it, and we need to address it politically at the highest level.”
The EU’s Code of Practice on Disinformation is still not legally binding. Unsurprisingly, then, the nearly five-year-old initiative has failed to stop successive waves of propaganda, whether related to coronavirus disinformation or Ukraine.
This failure has at least been partly acknowledged by the bloc. It was announced last year that the Code would be strengthened and, most significantly, that observance of the Code would be linked to compliance with the Digital Services Act (DSA), which entered into force last year and begins to apply to a subset of larger platforms later this year (with the bulk of digital services expected to be covered in 2024).
Therefore, if the DSA acts as a stick to force platforms to take countering disinformation more seriously, EU lawmakers will be hoping for better times in the future.
However, for the time being, it appears that a significant gap exists between what is actually happening online and the EU’s efforts to date to combat disinformation.
The disconnect is also becoming embarrassing.
Twitter formally remains a signatory to the EU Code under Musk. In practice, though, its new owner has made a number of decisions that plainly run counter to the initiative. For example, he dismantled the platform’s COVID-19 misleading information policy after taking over last year, and he immediately caused chaos around account verification by letting anyone pay for a blue check mark, which promptly triggered a flood of malicious impersonation.
And despite their claims to be at the forefront of digital regulation, EU lawmakers have so far been unable to accomplish much more than warning Twitter to uphold its “commitments”. Or cautioning it that “huge work” will be required if Twitter is to be able to abide by the DSA — whenever it may apply to the platform.
Musk has frequently been charged with personally amplifying Kremlin propaganda, which is even more embarrassing for the EU’s standing as a digital rule-maker.
In one infamous incident last year, the president of Ukraine himself intervened, tweeting a sarcastic poll asking his followers which @elonmusk they “liked more”: the “one who supports Russia” or the “one who supports Ukraine.”
Naturally, the option that supported Ukraine won the poll. But Western democratic institutions continue to look like the big losers when it comes to the spread of disinformation, because they seem unable to stop the likes of Musk, who now literally runs Twitter, from willingly (or, at best, credulously) spreading it.
Borrell’s attack on “Twitter’s owner” today may be the closest the EU has yet come to criticizing Musk directly. More generally, it may be the closest it has come to recognizing the need for a more systematic approach to tackling the growing threat of authoritarian disinformation.
Which @elonmusk do you like more?
— Володимир Зеленський (@ZelenskyyUa) October 3, 2022
Meanwhile, Musk keeps amplifying Kremlin misinformation using the platform he purchased with billions of dollars in debt last year.
Only this week, Twitter users who haven’t already quit the service were denouncing another instance of the “Chief Twit” doing the Russian regime’s bidding, after Musk credulously responded to fake metrics claiming high rates of Ukrainian casualties with an uncritical mention of a “tragic loss of life.”
After being called out for promoting the Kremlin’s Ukrainian war propaganda, Musk did not remove his rubber-stamping response to the false claims, which had been posted by an account using a picture of Russian President Vladimir Putin wearing a halo; instead, he suggested using Twitter’s Community Notes feature to “correct the numbers.”
As Twitter user David Rothschild astutely observed, you can’t fix a big lie with a “small correction”; engaging with it at all makes you complicit in disseminating a massive lie that paints a false picture of Russia’s war in Ukraine and undermines support for Ukraine’s ongoing resistance.
Borrell said today that “we need more transparency and accountability, not less,” urging Twitter and Musk to uphold their earlier pledges to combat disinformation. “I call on Twitter — and on its owner — to ensure that all obligations that they have taken will be honored,” he added.
In his speech, he continued to call for greater organization among those working to combat information manipulation, encouraging them to develop interoperable systems for exchanging analysis and best practices. He also announced that the EU would be doing more by creating a new central resource that would be used for gathering data on threats from disinformation and encouraging the sharing of intelligence.
“This is a battle at a distance. It won’t be won overnight,” he cautioned. “We must have the equipment. Additionally, this information exchange and analysis hub will improve our responses and make it possible for us to better defend our democracies.”
In light of Borrell’s comments, we contacted Twitter for a response and to ask whether the company intends to reconsider ending free access to its APIs for researchers.
Musk recently announced an arbitrary reprieve for bots providing “good content that is free” — whatever “good” means in that context — in response to criticism that his plan to end free API access would likely kill off a large number of useful Twitter bots. But so far he doesn’t appear to have said anything about the researcher API issue, nor about the dangers that the sort of “bad content” he enjoys disseminating poses to democratic interests.
We’re not expecting a reply, though: Musk shut down Twitter’s external communications division, and the company now routinely ignores media requests for comment.