Foreign Influence Operations Infiltrated, Polarized Online Communities

The ongoing scandals over data privacy, in which social media giants were found to be (among other things) flooding selected readers with hypertargeted and corrosive political advertising, are still unfolding, and they will continue to evolve for some time as new pieces of evidence and data are unearthed and assembled into a true picture of their scope.

Although the broad brushstrokes of how disinformation and propaganda affect elections are already known, subsequent reports flesh out the attempts to mislead Americans in the lead-up to the 2016 elections: campaigns that worked to disintegrate social bonds within the country, rattle trust in its national institutions, and, finally, dissuade people from voting en masse.

The organization Stop Online Violence Against Women (started by tech nonprofit founder Shireen Mitchell to strengthen laws and policies that protect women online) released a study in October 2018 detailing how Black and Latino Americans were particularly targeted by the now-notorious Internet Research Agency (IRA) out of St. Petersburg, Russia.

The preliminary report, which details methodology and examples of how the dark ads worked, pulls no punches:

Facebook’s statement, in Politico’s 2017 article, in reference to the ads was misleading:

“The vast majority of ads run by these accounts didn’t specifically reference the U.S. presidential election or voting for a particular candidate. Rather, the ads and accounts appeared to focus on amplifying divisive social and political messages across the ideological spectrum — touching on topics from LGBT matters to race issues to immigration to gun rights.”

Our report shows that the depth and breadth of the ads specifically targeting Black voters led to criminal acts of voter suppression. Our interactive data visualization reveals that if we focus on volume, the order of identity targets and issue areas was: 1. Black Identity, 2. Chicano Identity, 3. Policing, 4. Second Amendment Concerns, and 5. Immigration. Those categories would be followed by religious ads citing Christianity and Islam, and then Texas. Texas is the only state specifically targeted in these ads.

The ads were initially intended to build up “a trusted network of Black and Latino voters,” the report says. “These were highly active and intense voter suppression campaigns targeting Black and Latino voters.” The report also indicates that voter suppression efforts were being built as early as 2013.

Twitter, meanwhile, released its own data on messaging attempts from Russian and Iranian interests just a few days later, saying in a blog post:

In line with our strong principles of transparency and with the goal of improving understanding of foreign influence and information campaigns, we are releasing the full, comprehensive archives of the Tweets and media that are connected with these two previously disclosed and potentially state-backed operations on our service. We are making this data available with the goal of encouraging open research and investigation of these behaviors from researchers and academics around the world.

These large datasets comprise 3,841 accounts affiliated with the IRA, originating in Russia, and 770 other accounts, potentially originating in Iran. They include more than 10 million Tweets and more than 2 million images, GIFs, videos, and Periscope broadcasts, including the earliest on-Twitter activity from accounts connected with these campaigns, dating back to 2009.

The release was quickly analyzed and picked over by experts:

A study conducted by the Digital Forensics Lab (a project of the Atlantic Council, a nonpartisan American think tank focused on international issues) concluded that much of Russia’s effort, as with Facebook’s dark ads, was aimed at infiltrating and then polarizing American communities to deepen and widen societal divisions, particularly regarding issues of racism and ethnicity, though there were other goals as well:

One main purpose was to interfere in the U.S. presidential election and prevent Hillary Clinton’s victory, but it was also aimed at dividing polarized online communities in the U.S., unifying support for Russia’s international interests, and breaking down trust in U.S. institutions.

Iranian efforts seemed more focused on pushing that government’s messaging than on breaking down trust in American institutions, but each operation inflames the public discourse in its own way:

The Russian and Iranian troll farm operations show that American society was deeply vulnerable, not to all troll farm operations, but to troll accounts of a particular type. That type hid behind carefully crafted personalities, produced original and engaging content, infiltrated activist and engaged communities, and posted in hyper-partisan, polarizing terms.

Content spread from the troll farm accounts was designed to capitalize on, and corrupt, genuine political activism. The trolls encapsulated the twin challenges of online anonymity — since they were able to operate under false personas — and online “filter bubbles,” using positive feedback loops to make their audiences ever more radical.

Despite efforts by social media giants to stop disinformation, propaganda, and threats, trolls and bots continue to evolve in sophistication and messaging, leaving vulnerable people and communities open to online attacks. “Identifying future foreign influence operations, and reducing their impact, will demand awareness and resilience from the activist communities targeted,” says the Digital Forensics Lab, “not just the platforms and the open source community.”

These studies indicate that unity and coöperation are not just high-minded utopian ideals; they are also anathema to corrosive trolling tactics.

The Stop Online Violence Against Women report recommends further solutions from tech companies:

These current measures will not address the misinformation techniques and population targeting described herein. In particular, Facebook’s new policy to require submitters of ads deemed by Facebook to be political to present valid identification will miss the majority of malicious ads as executed in effect and form in previous election cycles. The tech and social media solutions offered in response to the post-election questions from Congress fail to adequately address either voter suppression or hate speech. More sophisticated social media and online ad monitoring measures must be developed and deployed.
