The Unpublished Project: Part I

The White Helmets Project 5 Years Later

Photo by Towfiqu barbhuiya on Unsplash

Since 2017 we’ve worked on numerous projects that, for reasons of confidentiality, have not been published. We will now feature five of those previously unpublished projects in edited form. Looking back, this is also a reflection on what’s the same, what’s changed, and what we learned. Our goal is to bring some of our past efforts out of the shadows.

_________________________________________________________________

The original report was delivered in March 2019.

Bottom Line Up Front (BLUF)

This client was the target of a long-running disinformation campaign. The project supported the client’s Media, Research, and Monitoring (MRM) efforts to counter the disinformation directed against it.

The White Helmets have been victimized by a vicious, sustained disinformation campaign since at least 2014. This report tells the story of 9,262 unique Twitter profiles driving the online conversation about the White Helmets. It analyzes the Twitter activity of thirteen public figures and well-known proponents of pro-Kremlin narratives, along with the bots amplifying their messages. From August 2018 until March 2019, data from a variety of Twitter sources (audience engagement with specific pro-Kremlin profiles and with the hashtag #WhiteHelmets) was collected and analyzed. The primary project goal was to identify automated behavior connected to the amplification of messages and the manipulation of platform metrics contributing to a campaign of global disinformation.

_________________________________________________________________

As the key goal of this project was to identify Twitter profiles that are most likely bots, it is important to start with two definitions (see source #1):

  1. Bots are defined as “pieces of software designed to automate repetitive tasks, such as posting content online. On social media, bots often purport to be genuine human agents, with the intent to deceive both humans and algorithms.”
  2. If a live event is an organic experience, then a programmatic event can be defined as a synthetic one. A synthetic event is meant to imitate a natural product, making synthetic social participation an act of manipulation. While social bots represent only one tool in the information operators’ toolkit and a small percentage of the Twitter audience, this report will present evidence suggesting the White Helmets are victims of a concerted campaign of targeted, manipulated disinformation.

Synthetic manipulation combines message amplification (collective volume) with engagement metrics (retweets, likes, replies). It is intentional behavior that over time can:

  • Normalize perception: read the same narrative often enough, see the same memes over and over, and they can come to be perceived as fact or truth
  • Censor: flooding an online conversation constitutes a form of censorship, either by drowning out organic points of view or by silencing organic voices through harassment
  • Game algorithms: this activity also influences or “games” the algorithms driving search engine results, further amplifying disinformation by allowing manipulated content to disproportionately dominate the online conversation

Disinformation tactics and campaigns erode trust in public discourse and institutions while crowding out truthful content and debate. Human-like profiles imitating organic engagement constitute “Triple P” (Pervasive, Persistent, Partisan) information threats, which actively erode truthful online discourse.

This report’s findings present an important case study of a much bigger socio-political problem, one that requires a response from policymakers. The results raise important questions about a variety of suspect Twitter audience behaviors. Establishing identification methods, classifications, and definitions specific to bots has been, and will remain, an evolving process as the tools and tactics of information warfare adapt to automated detection methods and to policy changes by social media platforms.

The report is not:

  • A firm attribution
  • Proof of impact
  • An endorsement of harassment of profiles engaged in suspect activity

The report is:

  • Specific to Twitter
  • A foundational dataset provided to the client to support counter-disinformation monitoring, analysis, and response

Research Methodology (edited):

Analysis A: The hashtag #whitehelmets represents a highly relevant conversation. Using the Mentionmapp scheduling tool, profile data connected to those using the hashtag was exported from the Twitter API at random moments each day during two time periods.

Analysis B: Focused on mentions or retweets of “The Dirty Dozen” (see source #2). As of February 27, 2023, all but two of the thirteen profiles are still active on Twitter.

Unlike the hashtag #whitehelmets, which can be deemed “the conversation” and as such could attract a wider and potentially more diverse range of audience participants, the profiles choosing to engage directly with the Dirty Dozen could have different motivations and intentions. By collecting profile data from both the hashtag and those engaging with the thirteen specific accounts, it was agreed this approach could provide a broader audience whose activity would be analyzed over the duration of the project.

Ten weeks of analysis was based on segmenting the audience into three categories by their seven-day average daily tweet volume (see source #3):

  • Cyborgs (72+ tweets/day)
  • Moderates (36–71 tweets/day)
  • Low-volume (1–35 tweets/day) (see source #4)

The combined data returned 9,262 unique profiles.
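The volume-based segmentation above can be sketched as a simple classifier. This is a minimal illustration, not Mentionmapp’s actual tooling; the function name and the “inactive” fallback for zero-volume profiles are assumptions.

```python
from statistics import mean

def classify_by_volume(daily_tweet_counts):
    """Classify a profile by its seven-day average daily tweet volume,
    using the report's thresholds (cyborg 72+, moderate 36-71, low 1-35)."""
    avg = mean(daily_tweet_counts[-7:])  # average over the last seven days
    if avg >= 72:
        return "cyborg"
    if avg >= 36:
        return "moderate"
    if avg >= 1:
        return "low-volume"
    return "inactive"  # assumed fallback; not a category named in the report

# A profile tweeting roughly 100 times a day lands in the cyborg band.
print(classify_by_volume([95, 110, 102, 98, 120, 89, 105]))  # -> cyborg
```

Because the average is recomputed over a rolling seven-day window, a profile can drift between bands week to week, which is exactly the “modulation” behavior discussed in the findings.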

Mentionmapp further segmented the data to analyze a dataset of 1,770 unique profiles for which there is reasonable evidence to classify as bots/bot-like, based on the following considerations:

  • Profiles with 0–2 replies
  • A skewed following-to-follower ratio, such as following twice as many profiles as follow back
  • Time on platform
  • Manual profile review to identify evidence of pro-Kremlin narratives
  • The Low-volume group was included to account for fluctuations in the seven-day tweet average
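The first two considerations lend themselves to a mechanical screen. A hedged sketch follows; the field names and dictionary shape are assumptions, and the manual narrative review remains a separate human step that no code replaces.

```python
def looks_bot_like(profile):
    """Screen a profile against the report's first two considerations:
    0-2 replies, and a skewed following-to-follower ratio (following at
    least twice as many profiles as follow back). Field names are
    illustrative, not the actual Mentionmapp schema."""
    few_replies = profile.get("replies", 0) <= 2
    followers = profile.get("followers", 0)
    following = profile.get("following", 0)
    skewed_ratio = followers > 0 and following >= 2 * followers
    return few_replies and skewed_ratio

suspect = {"replies": 1, "followers": 40, "following": 2100}
organic = {"replies": 250, "followers": 900, "following": 400}
print(looks_bot_like(suspect), looks_bot_like(organic))  # -> True False
```

A screen like this only produces candidates; time on platform and the narrative review determine which candidates are retained in the 1,770-profile dataset.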

Findings:

Mentionmapp benchmarked the Twitter activity of four distinct categories of synthetic and/or suspicious activity:

  • Bot_Cyborgs: profiles that consistently exhibited highly suspicious behaviors with every scan
  • Bot_Modulators: profiles that modulate their tweet volumes between cyborg and non-cyborg levels
  • Bot_Moderates & Low-volume: there can be modulation between these two groups
  • The attrition class: profiles that have been suspended, returned “user not found” or “not authorized,” or gone dormant. Mentionmapp continues to track these profiles because they may reappear

In a sample of profiles, other forms of suspect behavior were noted, such as:

  • A decline in the number of profiles being followed (in one example, a profile showed a 76% reduction in the number of profiles being followed; another showed a sudden 80% decline, suggesting the lost contacts were the result of closed networks of profiles)
  • High-volume cyborgs going dormant
  • Suspicious profiles with near-identical screen names

Mentionmapp’s experience monitoring suspect profiles suggests that a number of them (the modulators and the “not authorized”) are being operated so as to avoid detection for violating the platform’s terms of service, and the suspension or deletion that would follow.

Questions, Observations, and Conclusions (then)

In many ways, this project has been the exploration of an uncharted ecosystem. This descriptive analysis rests on a sound but preliminary foundation of methods and processes, but the questions still far outweigh the answers.

This project contributes to the analysis of the complexities of the digital information ecosystem, attempting to define and describe the problem while considering models of adversarial intent.

From a Mentionmapp Analytics perspective, we reflect on the following implications for further research:

  • Models are needed to understand how state actors and their proxies operate and manipulate bots, and how this might differ from other bad actors. A behavioral model of adversarial intent has yet to be developed.
  • Bot scores provide indicators or signals, which allow analysts to track noticeable fluctuations in scores over time. Further research is required to understand if adjustments are also automated or if they are facilitated by human profile operators themselves.
  • Confidence classifying bot profiles declines as tweet volume declines. Low tweet volume starts to appear more human-like, evading automated detection and removal. This could also reflect programmatic augmentation such as using scheduling tools (Buffer or Hootsuite), which are often operated without nefarious intent.
  • Further research and analysis are required to refine and agree on a common classification of bot types. Classification definitions must take into account volume, manipulation of metrics, coordination of multiple accounts, chatbots, and more.

In conclusion, the collection of bot/bot-like profiles suggests there is enough synthetic behavior specifically targeting the White Helmets and promoting pro-Kremlin narratives to cause significant concern.

Reflection

As per the client’s requirements, this project was Twitter-centric and bot-focused, and clearly, the volume of Tweets was an attribute that was given prime consideration. In retrospect, it would have been interesting to dedicate resources to examining the behavior, attributes, design, connections, and narrative patterns of the low-volume profiles. Profiles like these fly under the radar, yet over time a large enough collection could cumulatively support a campaign of strategic disinformation narratives.

Beyond the scope of the project, it would have been valuable to have analyzed links and documented the websites the audience was being directed to. We’re not claiming any correlation, but we noted recently that as of February 2022, RT is the fifth most visited website by Syrian audiences, behind Google, YouTube, Facebook, and Wikipedia, with SputnikNews ranked fourteenth.

DIGITAL 2022: SYRIA

Five years later, eleven of the thirteen “Dirty Dozen” profiles are still actively advancing and amplifying pro-Assadist, pro-Kremlin, anti-imperialist, and anti-Western narratives, along with a variety of corrosive conspiracy theories, into the information ecosystem.

_________________________________________________________________

Sources

#1 European Think Tank article “Polarisation and the use of technology in political campaigns and communication”

#2 The “Dirty Dozen” was a list of profiles provided by the client based on their internal research. The group represents the most influential key profiles engaged in negative discourse about the White Helmets

#3 DFRLabs is a leader in disinformation research, specifically related to automated (bot) activity. DFRLab’s (December 2016) definition of “suspicious”: “For the purposes of this analysis, a level of activity on the order of 72 engagements per day over an extended period of months — in human terms, one tweet or like every 10 minutes from 7 am to 7 pm, every day of the week — will be considered suspicious. Activity on the order of 144 or more engagements per day, even over a shorter period, will be considered highly suspicious. Rates of over 240 tweets a day over an extended period of months will be considered as “hyper-tweeting” — the equivalent of one post every three minutes for 12 hours at a stretch.”

#4 Given that bots are assets, it’s fair to suggest their operator may adjust and change their activity as a form of countermeasure. At some point, the deletion or suspension of assets is a cost. As well, by focusing on only high-volume profiles (cyborgs) we run the risk of missing other programmatic behaviors that in the aggregate are eroding and damaging the information landscape.
________________________________________________________________

Contact admin@mentionmapp.com to discuss our contract threat intelligence research, analysis, and reporting. Our focus is on disinformation, misinformation, and influence operation threats, risks, and vulnerabilities.
