The Unpublished Project: Part V

#Kremlin: Using Hashtags to Analyze Russian Disinformation Strategy and Dissemination on Twitter. #MH17 Twitter Cyborgs Three Years Later.

Photo by Towfiqu barbhuiya on Unsplash

Since 2017 we’ve worked on numerous projects that, for reasons of confidentiality, have not been published. We are now featuring five of those projects, edited for release. Looking back, this is also a reflection on what’s the same, what’s changed, and what we learned. Our goal is to bring some of our past efforts out of the shadows. Part I Part II Part III Part IV

________________________________________________________________

The original report, #Kremlin: Using Hashtags to Analyze Russian Disinformation Strategy and Dissemination on Twitter, was a collaborative project with Sarah Oates (Merrill College of Journalism, University of Maryland), and we co-presented the paper at the 2019 APSA conference.

Bottom Line Up Front (BLUF)

Out of curiosity, in September 2022 we returned to the original set of 340 Twitter profiles classified as cyborgs (see note #1) and connected to the hashtag #MH17, to see what, if anything, had changed about their activity.

Account attrition was noteworthy: 181 accounts were suspended, inactive, or no longer existed. Additionally, we categorized 62 accounts as having pro-Kremlin feeds.

______________________________________________________________

Image From EUvsDisinfo “SIX YEARS OF MH17-LIES: THE KREMLIN IS LOSING ITS OWN GAME”

Background

The following are excerpts from the original paper.

This original paper leverages both social science and data science by using traditional content analysis and Twitter analytics to trace how key aspects of Russian strategic narratives were distributed via #skripal, #mh17, #Donetsk, and #russophobia in late 2018. This work will define how key Russian international communicative goals are expressed through strategic narratives, describe how to find hashtags that reflect those narratives, and analyze user activity around the hashtags. This tests both how Twitter amplifies specific information goals of the Russians as well as the relative success (or failure) of particular hashtags to spread those messages effectively.

This research makes an important assumption and accepts a deliberate limitation in scope. It assumes without further investigation (based on previous media and analytical reports) that the Russian government is choosing to pursue a disinformation strategy on Twitter. Nor does this paper attempt to measure the specific effect of the disinformation on the target population, i.e., whether it is causing any change in attitude or behavior. What this paper does is analyze the presence and behavior of Twitter users linked to hashtags that promote Kremlin strategic narrative priorities. By doing this, we can provide critical evidence to identify those who are creating or amplifying disinformation in our media ecosystem. By combining the identification of hashtags linked to key Kremlin narrative goals, Twitter users who are amplifying those hashtags, and the behavior of Twitter users over time, we can identify more precisely how online communities of disinformation function. This allows the discussion to evolve beyond identifying whether a Twitter user is a bot or cyborg, which is useful but only one element of disinformation activity. Rather, we are more broadly interested in how the Kremlin’s strategic narratives are amplified, over time and via different hashtags, on Twitter.

Our research posits that, in the case of Russia, strategic narratives can be defined and linked to specific Twitter hashtags. This is a plausible approach for the Russian case for the following reasons:

1. Russia has strategic narratives that can be readily identified through speeches and documents from the central government, specifically policy concepts and key addresses by President Vladimir Putin. As an authoritarian regime that closely controls key communication nodes (particularly state-run television and foreign broadcasts), the Kremlin provides coherent and consistent messages that can be identified in both its domestic news and its foreign propaganda.

2. There is no viable political opposition in Russia (unlike the deeply divided voices of current U.S. elites), allowing for a unified signal of national strategic messages.

3. Russia has demonstrated consistent and pervasive dedication to promoting its views on social media sites, efforts that have been fairly closely documented due to Russian interference in U.S. election campaigning.

4. Russia has a strong history of a state role in constructing (as opposed to just amplifying) strategic narratives. As a successor to the Soviet regime, Russia was forced to create symbols and narratives in the 1990s, which gave the state latitude to dictate a vision of the state from above, as opposed to reflecting on the reality from below. At the same time, the Russian state could take advantage of both the powerful images and methods of the Soviet propaganda system, which it eventually did after a relatively brief experiment with a more democratic society (Oates, 2006).

5. Russians can also hijack hashtags in an attempt to control narratives, such as when the Russian Foreign Ministry tweeted #UnitedforUkraine in 2014. This was a small-scale, human effort (see source #1); automated efforts are more sustainable and could arguably be more effective.

______________________________________________________________

#MH17 Cyborgs. September 2022

This was a curiosity project, motivated by an interest in seeing “who” was still active on Twitter almost four years after starting the original research. The attrition is noteworthy, with over 50% of the original group of cyborgs now ‘inoperable.’ Of course, this could already have changed. It is fair to ask which suspended accounts became active again in the spring of 2023, and which previously inactive accounts have sprung back to life. Small projects for another day.

Borrowing from Heraclitus, the Twitter feed is like his river: “no man ever steps in the same river twice, for it’s not the same river and he’s not the same man.”

______________________________________________________________

Revisiting our research limitations and conclusions

One of our limitations is reconciling our classification system with the use of average tweet volume over a week. We identified three types of accounts above: cyborgs, moderates, and low-volume accounts. Mentionmapp used average activity over a time period (typically a week), but this average hides peaks and valleys in the activity of some accounts. For example, a very active account may fall silent, or slide from cyborg to moderate levels of engagement and back again. There are also problems of suspended accounts, deleted profiles, deleted feeds of tweets, and protected tweets that can obscure the findings. Mentionmapp cannot download or analyze private accounts, although it is questionable how effective a private account can be in information warfare.
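The three-tier classification described above can be sketched as a simple thresholding function over average daily tweet volume. The cutoffs below are illustrative assumptions for the sketch, not the exact values used in the original study, and the sketch inherits the limitation just noted: a weekly average flattens bursts and lulls.

```python
def classify_account(weekly_tweets: int, days: int = 7) -> str:
    """Classify a Twitter account by its average daily tweet volume.

    Thresholds are illustrative assumptions: 72+ tweets/day is treated
    as cyborg-level activity, 20-71 as moderate, and below 20 as
    low-volume. A real study would justify these cutoffs empirically.
    """
    avg_per_day = weekly_tweets / days
    if avg_per_day >= 72:
        return "cyborg"
    if avg_per_day >= 20:
        return "moderate"
    return "low-volume"

# Weekly totals for three hypothetical accounts
for handle, weekly_total in [("@acct_a", 630), ("@acct_b", 210), ("@acct_c", 35)]:
    print(handle, classify_account(weekly_total))
```

Because the function averages over the whole window, an account that tweets 600 times on Monday and nothing afterward classifies the same as one tweeting steadily all week, which is exactly the peak-and-valley problem described above.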

Conclusions

The challenges posed by the spread of disinformation via social media create a significant threat to democratic discourse. We deployed two approaches that can be replicated by other scholars and analysts. First, we relied on several studies of narrative to take a broader view of media messages, narrowing down to examples of Russian strategic narratives that could reasonably be identified by a single hashtag. In this way, we were able to isolate critical signals from the vast noise of social media, although we concede that a study of Twitter alone is relatively narrow. Second, this study would not have been possible without a robust retrieval and measurement tool for Twitter that had been developed and tested in a range of situations. This research was a partnership between an academic and a technology entrepreneur, an attempt to leverage different types of expertise. We found this partnership quite fruitful. Even when considering the limitations, this study provided three key findings:

1. It is highly useful to know what you’re looking for and what you’re looking at when approaching social media analytics. The pre-selection of hashtags that we feel confident represent Kremlin strategic narratives makes this a much more useful exercise.

2. The Twitter activity identified by Mentionmapp demonstrates the value of moving beyond dichotomies such as bot/human and using a wider range of user behaviors to establish relative roles specific Twitter users play in synthetic audience engagement.

3. It is possible to find disinformation influencers by analyzing user behavior across multiple hashtags linked to Russia’s disinformation priorities. This is an approach that can be applied in a range of situations beyond Russian information operations.

______________________________________________________________

Notes/Sources

Note #1 — A cyborg is a Twitter user that appears to use both automated and human activity (which will be discussed in more detail below). A bot is an autonomous program on a network that is designed to respond or post in response to programmed cues online.

Source #1 — https://www.rferl.org/a/ukraine-us-russia-twitter-trolling/25362157.html

The Unpublished Project: Part IV

Canada’s 2019 Federal Election: Assessing the Information Ecosystem

Photo by Towfiqu barbhuiya on Unsplash

Since 2017 we’ve worked on numerous projects that, for reasons of confidentiality, have not been published. We are now featuring five of those projects, edited for release. Looking back, this is also a reflection on what’s the same, what’s changed, and what we learned. Our goal is to bring some of our past efforts out of the shadows. Part I Part II Part III

________________________________________________________________

The original report was a collaborative confidential project and was delivered in June 2020.

Bottom Line Up Front (BLUF)

Five months before the 2019 Federal Election, 71 percent of Canadians expressed worry that foreign governments would use social media to affect the outcome of the election (see source #1), and 74 percent of Canadians were concerned that the same manipulative tactics would be harnessed by domestic special interest and partisan groups (see source #2).

Despite recording a number of suspicious digital phenomena, the researchers could not confidently attribute any of these events to the operations of a foreign government. This could be due to broader changes in the tactics of foreign actors or a determination by adversarial governments that coordinated interference did not justify the commensurate risks and costs. The growth of counter-disinformation initiatives and public awareness since 2016 has led foreign troll farms and other such entities to turn to more covert and clandestine methods (see source #3). While the researchers have made attributions under similar circumstances (see 2019’s Operation Secondary Infektion), doing so requires access to corroborating evidence and technical backend data that were not available in this case. Instead, the clearest signs of “foreign” interference come in the evidence of coordinated political trolling by Canadian and U.S. far-right activists.

The report provides a general discussion of mis- and disinformation during the 2019 Federal Election. It examines specific community discussions on Reddit, Pinterest, and Facebook — platforms selected for their political influence and relative lack of study in the Canadian context — as well as amplification by domestic Canadian actors of Russian state propaganda.

______________________________________________________________

Background

2019 was a contentious and polarizing election cycle. The Liberals were forced to form a minority government, becoming the governing party with the lowest share of the popular vote in Canadian history. Voter turnout was lower than it had been in 2015, dropping by 2.6 percent.

This report was informed by periodic surveys of the Canadian information ecosystem before and after the 2019 Federal Election. This initiative was focused on determining the extent of both attributable and suspected foreign influence efforts that targeted the elections process and on better understanding how the domestic information environment contributed to these efforts.

An open-source examination of the Canadian digital landscape demonstrated that negative content targeted parties and party leaders across the political spectrum, but the researchers observed that a disproportionate volume of that negative content was directed at Trudeau and the incumbent Liberal government. None of the evidence pointed to any other party or party leader’s direct involvement or endorsement of the negative campaigns against Trudeau. On Twitter, anti-Trudeau hashtags such as #TrudeauMustGo greatly exceeded the volume and intensity of hashtags targeting any political figure associated with the other political parties. On Facebook, some far-right extremist groups went so far as to propose fantasy scenarios to assassinate Trudeau (see source #4). On Pinterest, anti-Trudeau messages mixed with virulent anti-immigration and anti-Muslim memes, sown by inauthentic accounts and boosted by Pinterest’s own algorithms. In general, the election witnessed ample cases of viral misinformation and coordinated inauthentic activity.

Selected Cases of Dis- and Misinformation During the 2019 Federal Election

During their survey of the Canadian information ecosystem before and after the 2019 Federal Election, the researchers observed numerous instances of political dis- and misinformation, most of which fell outside the regulatory authority of the Canadian government. The vast majority of this content was almost certainly domestic in origin. Beyond the activity of Russian state media broadcasters, the researchers could not make any further attribution.

Keyword trend analysis of Canadian media from January 2019 through the elections suggested the topics that received the most attention during the Canadian elections were Environment, Economy, Indigenous, and Immigration. “Environment” stories were the most popular with about 1,400 stories per day; “Economy” had about 1,060 stories per day; “Indigenous” had almost 800 stories per day; and “Immigration” had roughly 500 stories per day. Although “Immigration” received the least mainstream coverage, it still received many keyword hits on social media. In particular, there were about 33,000 “Immigration”-related mentions on Twitter from January 1, 2019 to October 22, 2019. About 14 percent of these mentions were deemed to be negative, while out of 18,000 mentions of “Economy,” about 10 percent were negative. While sentiment analysis is not a perfect tool, it does offer insight into the makeup of the Canadian information ecosystem.
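The negative-share figures above reduce to simple proportions over labeled mention counts. A minimal sketch, using the counts reported in the paragraph (the per-mention sentiment labels themselves would come from an upstream sentiment classifier, which is assumed here):

```python
# Twitter mention counts and negative shares reported for Jan 1 - Oct 22, 2019
mentions = {
    "Immigration": {"total": 33_000, "negative_pct": 14},
    "Economy": {"total": 18_000, "negative_pct": 10},
}

# Convert each topic's negative share back into an approximate mention count
for topic, m in mentions.items():
    negative = round(m["total"] * m["negative_pct"] / 100)
    print(f"{topic}: ~{negative:,} of {m['total']:,} mentions negative")
```

Run as written, this puts roughly 4,600 negative “Immigration” mentions against roughly 1,800 negative “Economy” mentions, which is the asymmetry the paragraph is pointing at.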

Over the course of the election, the issue most frequently associated with mis- and disinformation was immigration. In particular, refugee quotas remained a significant wedge issue during the election and a favorite talking point for Twitter trolls, who spread false stories about immigrant crime rates and who frequently engaged in anti-immigrant and anti-Muslim hate speech (see source #5). As the election neared, public polling found that 63 percent of Canadians believed the nation should lower its immigration quotas, as it was reaching a “limit” in its ability to integrate newcomers. This was a marked shift from 2014, when only 36 percent of Canadians had expressed a desire to decrease immigration numbers (see source #6).

Interestingly, actions taken by the Canadian government to safeguard and improve election processes also sowed the seeds of viral misinformation. The EMA (Elections Modernization Act) was a frequent target. False stories alleged that the bill would allow non-resident Canadians to vote in the general elections — an obvious distortion of the bill’s restoration of voting rights to overseas citizens (see source #7).

Over the course of its analysis, the researchers focused on two case studies. The first regards the interchange of virulent, anti-immigrant hate speech over multiple platforms and online communities. The second regards the opportunism shown by Russian state media in its Canadian election coverage. These cases, evidencing coordinated trolling around nativist rhetoric and amplifying domestic political scandal by foreign media, most resembled the Russian influence operations conducted against the United States in 2016.

Overlapping Anti-Immigrant Narratives and Communities

As online discourse increasingly revolved around immigration during the 2019 Canadian Federal Election, some far-right communities became “echo forums,” which are dedicated to a single topic of discussion and which endeavor to speak with a single ideological voice. This behavior was especially evident on Reddit through the rapid growth of r/MetaCanada, a subreddit founded in 2011 and initially featuring general, off-color humor before coming to focus exclusively on nativist posts and memes.

In time, r/MetaCanada also appeared to associate closely with r/The_Donald, a Reddit community that became the locus of digital influence efforts for then-candidate Donald Trump in 2015 and 2016. After the 2016 U.S. election, r/MetaCanada’s membership rose sharply, from 6,500 in November 2016 to 31,000 by October 2019 (see source #8). r/MetaCanada’s tone — racist, misogynistic, and Islamophobic — came to match that of r/The_Donald closely. Like r/The_Donald, r/MetaCanada became a gathering place for far-right troll mobilization as election day drew near.

A post on r/MetaCanada repurposed the derogatory slang the U.S. president used to describe Haiti, El Salvador, and African countries while questioning immigration from them to the United States, adding a Canadian spin.

The use of “shithole” by U.S. President Donald Trump is echoed back in MetaCanada but from a Canadian perspective. (Source: Reddit.com/MetaCanada)

Using an open-source tool to analyze commenting activity in r/MetaCanada, the DFRLab found significant user overlap with majority-American subreddits: r/HillaryForPrison, r/DrainTheSwamp, r/TuckerCarlson, r/DebateAltRight, and, of course, r/The_Donald. The existence of a common user base demonstrates strong ideological cross-germination between Canadian and American far-right communities. It also suggests that right-wing U.S. political activists engaged directly in attempts to influence the Canadian elections, just as they engaged in the 2017 French elections on behalf of the candidacy of far-right presidential candidate Marine Le Pen (see source #9).


The visualization shows how r/MetaCanada is a potentially similar match to r/The_Donald. The algorithm looks for similar users in the subreddits. (Source: Anavka.github.io/MetaCanada)
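The user-overlap comparison behind that visualization can be approximated with a Jaccard similarity over each subreddit’s set of commenters. The commenter sets below are hypothetical stand-ins; a real analysis would collect usernames from comment listings via the Reddit API.

```python
def jaccard(a: set, b: set) -> float:
    """Fraction of users active in both communities, out of all users in either."""
    union = a | b
    return len(a & b) / len(union) if union else 0.0

# Hypothetical commenter sets (a real analysis would pull these from the Reddit API)
commenters = {
    "r/MetaCanada": {"u1", "u2", "u3", "u4", "u5"},
    "r/The_Donald": {"u3", "u4", "u5", "u6", "u7"},
    "r/hockey":     {"u8", "u9"},
}

base = commenters["r/MetaCanada"]
for sub, users in commenters.items():
    if sub != "r/MetaCanada":
        print(sub, round(jaccard(base, users), 2))
```

With these toy sets, r/The_Donald scores far higher than the unrelated control subreddit, which is the kind of signal the overlap analysis uses to argue for a shared user base.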

Furthermore, keyword analysis of r/MetaCanada found that “refugee” and “blackface” (a reference to Trudeau’s September 2019 blackface scandal) were two of the most commonly used terms of discussion. This suggests a clear interest in immigration policy and Canadian electoral politics. It lends credence to the view of r/MetaCanada as a hub for political trolling, a departure from its original goal to provide a home for “sardonic humour, revisionist histories, memes, speculative fiction, and satire to analyze and undercut prevailing, dominant attitudes and misconceptions about Canadian life and politics.”

A website that seemed to be dedicated to the MetaCanada movement, explaining what it means and asking for contributions. (Source: MetaCanada.com)

However, the vocalization of anti-immigrant and Islamophobic sentiments was not limited to echo forums. These views were expressed through memes, messages, videos, and other content formats that spread across numerous social media platforms — including ones that did not typically host political content. For instance, on Pinterest, clusters of anti-Trudeau memes were automatically grouped alongside other racist, bigoted content, thanks to the power of the Pinterest recommendations algorithm (see source #10). If users briefly explored this galaxy of anti-Trudeau content, they were steered toward other memes that assailed American politicians like Hillary Clinton and Alexandria Ocasio-Cortez, as well as those that attacked the #MeToo movement (see source #11).

Pinterest’s recommendation pathway via the Pinterest content algorithm, leading a user from an anti-Trudeau meme to an anti-Hillary meme to an anti-#MeToo meme to an anti-Alexandria Ocasio-Cortez meme. (Source: DFRLab)

Where Pinterest’s algorithms passively pushed users toward increasingly virulent political content, Twitter’s algorithms — long the focus of far-right troll brigades — were actively gamed and manipulated in the months leading up to the 2019 Federal Election. In one case in September 2019, Canadian and U.S. political activists coordinated in order to amplify the hashtag #TrudeauMustGo until it trended internationally, helping elicit roughly 34,000 tweets from approximately 5,000 accounts (see source #12).
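One simple coordination signal in those figures is the ratio of tweets to participating accounts: roughly 34,000 tweets from roughly 5,000 accounts is about 6.8 tweets each. A sketch of that check follows; the flagging threshold is an illustrative assumption, not a published cutoff for organic trends.

```python
def tweets_per_account(tweets: int, accounts: int) -> float:
    """Average tweets per participating account for a trending hashtag."""
    return tweets / accounts

# Figures reported for the September 2019 #TrudeauMustGo surge
ratio = tweets_per_account(34_000, 5_000)

# A threshold of ~3 tweets/account is an assumed heuristic for this sketch;
# broad organic trends tend to have many one-off participants.
flag_for_review = ratio > 3
print(f"{ratio:.1f} tweets per account; flag for review: {flag_for_review}")
```

A high per-account rate by itself proves nothing about automation — as the Twitter policy response described below the campaign shows, coordinated human activists can produce the same shape — but it is a cheap first filter for deciding which hashtags merit closer inspection.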

Disinformation researchers reported the activity to Twitter, alleging that the hashtag campaign showed evidence of automation and inauthentic coordinated activity. Twitter’s policy team replied, noting that this was coordination between human activists and therefore permitted under Twitter’s terms of service. All the while, Canadian citizens and journalists who saw the trending hashtag were left with the impression that this was an organic expression by Canadian voters (see source #13).

Opportunism by Russian State Broadcasters

In the United States, RT and Sputnik International are registered as foreign agents under the Foreign Agents Registration Act (FARA), a law that requires agents representing the interests of a foreign government to disclose information about their activities and finances in the interest of transparency. Canada lacks such a law.

When the Trudeau blackface scandal erupted in September 2019, the news went instantly viral, becoming one of the biggest political headlines in the final weeks before the election (see source #14). The reporting was generally accurate and balanced, including reporting by international media and foreign state broadcasters. By contrast, Russian state media leaned heavily into editorialization. One RT headline, “The many faces of Justin Trudeau: Canadian PM memed mercilessly after brownface debacle,” walked a thin line between international and partisan reporting (see source #15). It appeared to be the latest move in a concerted anti-Trudeau editorial campaign that had gained steam since RT had named Trudeau a year earlier to its list of “Top 10 Russophobes of 2018” (see source #16).

The RT headline was cross-pollinated across several social media platforms, as well as multiple Reddit communities. (Source: Reddit)

In another instance of sensationally slanted coverage, Sputnik International published a story about Alberta separatism — “Birth of the Republic of Western Canada is a Cry of Our Heart — Wexit Alberta Founder” — on October 20, 2019, one day before the federal election (see source #17). While the story was ostensibly a profile of a pro-secession Albertan community leader, it focused almost exclusively on the alleged failings of the Liberal government. It used coded language — Trudeau’s “globalist” agenda, Trudeau’s climate change “rhetoric” — popular among far-right political activists. This was a transparent attempt to circumvent political advertising restrictions imposed by the EMA on foreign media outlets.

As a final example, on October 22, 2019, RT published its first post-election article: “Losing majority with hysterical dignity? Trudeau’s ‘victory speech’ turns into scandal, as he jumps on stage interrupting rival” (see source #18). As with most Russian coverage of Canadian elections, the headline was an example of extreme anti-Trudeau, anti-Liberal editorialization. The “scandal” allegation, in this case, was extrapolated from a quote from a Global News anchor, who called Trudeau’s interruption of Scheer’s speech “unprecedented.” U.S. fringe media outlet InfoWars took things a step further, writing an even more hyperbolic story based on the “unprecedented” reference.

InfoWars embedded the Global News tweet in order to legitimize its reporting, all the while flooding the narrative with further hyperbole. (Source: Infowars)

That rhetoric was picked up by CTV News, too. On Facebook, the outlet shared an article with the title: “‘Losing majority with hysterical dignity? Trudeau’s victory speech turns into scandal, as he jumps on stage interrupting rival,’ reads an international headline.” In so doing, CTV News was likely aiming for the maximum possible audience engagement. In the process, however, it pushed RT’s coverage further into the mainstream, ensuring a large and new readership was exposed to the Russian broadcaster’s editorial positions.

This sort of inadvertent information laundering — a foreign state media headline, repackaged by alternative media and subsequently amplified by mainstream broadcasters — has become endemic in cases of foreign influence. The process typically functions without the need for active coordination. Rather, it works through the complementary incentives of the foreign state media (which seeks to share its content free of cost) and the alternative media (which seeks to make money by way of contrarian or conspiratorial content). When a mainstream outlet subsequently covers or shares the alternative media story, it is the foreign state media that benefits most as it watches its seeds bear fruit.

Unintentional spread of the information that accelerated Russian talking points on Canadian national affairs. (Source: CTVNews/Facebook)

Compared to other actors in the 2019 Federal Election — notably large, unregulated partisan Facebook groups — Russian state broadcasters ultimately commanded only a small amount of direct digital influence. But the gamesmanship and clever marketing of propagandists show how readily they can adapt to new regulations or exploit traditional broadcasters in unconventional ways. Should the diplomatic relationship between Canada and Russia grow more contentious, these Russian influence efforts will likely grow more aggressive, better resourced, and more sophisticated.

___________________________________________________________

Reflection

In retrospect, a broader scope of work that included China’s posture would have been invaluable.

Russian interference in the 2016 U.S. election demonstrated that information operations begin well in advance of the election cycle and do not end when the last votes are tallied. Today, such foreign influence efforts continue to grow in sophistication, joined by a rising tide of domestic social media manipulation that utilizes many of the same tools and tactics.

We said this in 2020, and still in 2023 believe these are a few steps that will help safeguard Canada’s democracy in the future:

  • Increase funding commitment to securing the online landscape. The 2019 Federal budget earmarked $2.1 million CAD over three years to support the G7 Rapid Response Mechanism. Given Canada’s role in the defense of digital democracy, this level of funding is insufficient to meet Canada’s growing multilateral obligations.
  • Consider new laws to designate foreign agents, particularly during elections. An electoral democracy must take great care before considering any measures that might proscribe or limit journalistic activity — even if that activity is conducted by agents of an adversarial foreign power. However, it is the case that Russian (and Chinese) state media in particular has begun to use its coverage of Canadian politics toward aggressive ends. It is also the case that the United States — with strong, constitutionally enshrined journalistic protections — has nonetheless had a system in place for decades to designate foreign agents. Canada should consider adopting a similar model.
  • Revisit the definition of “foreign interference.” Canada has established, and successfully tested, a number of government initiatives intended to mitigate foreign influence efforts by state actors. In the case of the 2019 Federal Election, however, the clearest indications of “foreign” interference came in political trolling coordinated between far-right U.S. and Canadian political activists. Do such nonstate, transnational influence efforts fall within the remit of the Ministry of Democratic Institutions or even the Critical Election Public Protocol? As the nature of foreign influence continues to evolve, so must governmental definitions and procedures.

Just as the tactics of influence operations and social media manipulation are continually evolving, so must appropriate government responses.

_____________________________________________________________

Sources

#1 #2 Insights West, “Canadians Alarmed Over the Influence of Social Media in Upcoming Elections,” May 10, 2019, https://www.insightswest.com/news/canadians-alarmed-over-the-influence-of-social-media-in-upcoming-2019-elections/

#3 Darren Linvill, Patrick Warren, “Russian Trolls Can Be Surprisingly Subtle, and Often Fun to Read,” Washington Post, March 8, 2019, https://www.washingtonpost.com/outlook/russian-trolls-can-be-surprisingly-subtle-and-often-fun-to-read/2019/03/08/677f8ec2-413c11e9-9361-301ffb5bd5e6_story.html

#4 https://election.ctvnews.ca/truth-tracker-how-does-anti-scheer-sentiment-stack-up-against-anti-trudeau-talk-online-1.4643010 and Patrick Cain, Jeff Semple, “Closed Facebook Groups Where Extremists Thrive, ‘Would Curl Your Inards,’ Expert Says” Global News, October 29, 2019, https://globalnews.ca/news/6091196/facebook-investigating-19000-member-anti-muslim-group

#5 Teresa Wright, “Majority of Canadians Think Immigration Should be Limited: Poll,” Global News June 16, 2019, https://globalnews.ca/news/5397306/canada-immigration-poll/

#6 https://www.cbc.ca/news/politics/canadians-favour-limiting-immigration-1.5177814

#7 Kaleigh Rogers, Andrea Bellemare, “Misinformation Circulating Online Stokes Fears of Voter Fraud Ahead of Federal Election,” CBC News, August 30, 2019, https://www.cbc.ca/news/technology/voter-fraud-confusion-misinformation-1.5264689

#8 https://subredditstats.com/r/metacanada

#9 Nicholas Vinocur, “Marine Le Pen’s Internet Army,” Politico, February 3, 2017, https://www.politico.eu/article/marine-le-pensinternet-army-far-right-trolls-social-media/

#10 #11 Kanishk Karan, John Gray, “Trudeaus and Trudeaun’ts — memes polarize in Canadian elections,” DFRLab, November 19, 2019, https://medium.com/dfrlab/trudeaus-and-trudeaunts-memes-have-an-impact-during-canadian-elections-4c842574dedc

#12 Nicole Bogart, “Truth Tracker: Are Bots Amplifying #TrudeauMustGo? Twitter Says No,” CTV News, September 26, 2019, https://election.ctvnews.ca/truth-tracker-are-bots-amplifying-trudeaumustgo-twitter-says-no-1.4612390

#13 Elizabeth Dubois, Anatoliy Gruzd, Jenna Jacobson, “When Journalists Report Social Media as Public Opinion,” Policy Options, September 28, 2018 https://policyoptions.irpp.org/magazines/september-2018/when-journalists-report-social-media-as-publicopinion/

#14 Anna Purna Kambhampaty, Madeleine Carlisle, Melissa Chan, “Justin Trudeau Wore Brownface at 2001 ‘Arabian Nights’ Party While He Taught at a Private School,” Time, September 19, 2019, https://time.com/5680759/justin-trudeau-brownface-photo/

#15 “The many faces of Justin Trudeau: Canadian PM memed mercilessly after brownface debacle,” RT, September 19, 2019 https://www.rt.com/news/469123-trudeau-brownface-scandal-memes/

#16 "Top 10 Russophobes of 2018: See who made RT's prestigious list this year," RT, October 16, 2018, https://www.rt.com/news/441417-top-10-russophobes-2018/

#17 Denis Bolotsky, “Birth of the Republic of Western Canada is a Cry of Our Heart — Wexit Alberta Founder,” Sputnik News, October 20, 2019 https://sputniknews.com/world/201910201077102166-birth-of-the-republic-of-western-canada-is-a-cry-of-our-heart/

#18 “Losing majority with hysterical dignity? Trudeau’s ‘victory speech’ turns into scandal, as he jumps on stage interrupting rival,” RT, October 22, 2019, https://www.rt.com/news/471506-canada-trudeau-election-minority/

#19 Tiffany Hsu, Ian Austen, “Canada Says Facebook Broke Privacy Laws With ‘Superficial’ Safeguards,” New York Times, April 25, 2019, https://www.nytimes.com/2019/04/25/technology/facebook-canada-privacy.html

________________________________________________________________

Contact admin@mentionmapp.com to discuss our contract threat intelligence research, analysis, and reporting. Our focus is on disinformation, misinformation, and influence operation threats, risks, and vulnerabilities.

The Unpublished Project: Part III

Unmasking the Oriental Review

Photo by Towfiqu barbhuiya on Unsplash

Since 2017 we've worked on numerous projects which, for reasons of confidentiality, have not been published. We will feature five previously unpublished projects, now edited. Looking back, this is also a reflection of what's the same, what's changed, and what we learned. Our goal is to bring some of our past efforts out of the shadows. Part I Part II

________________________________________________________________

The original report was delivered in June 2020.

Bottom Line Up Front (BLUF)

Oriental Review is a well-established disinformation site that shows strong cross-pollination of authorship and content with other key players in the disinformation ecosystem. Its content is also often propagated and promoted by sites with far greater established reach.

The site describes itself as “an international e-journal focusing on current political issues in Eurasia and beyond. The initiative is launched in February 2010 by a group of freelance bloggers and political analysts concerned with the aggravating security situation in the world.” Since its inaugural post on May 2, 2010 — which republished a New York Times op-ed (see source #1) “Russian Advice on Afghanistan” by Boris Gromov and Dmitry Rogozin — the Oriental Review has published approximately 3,300 articles. The website’s IP address (31.31.203.8) is geolocated to Russia and its content typically supports Russia’s strategic narratives and includes a long history of spreading disinformation.

From CIA conspiracies like a 2011 story titled "Gaddafi's African 'Mercenary' Story is a Disinformation Ploy by the CIA" (see source #2) to an April 2020 COVID-19 conspiracy titled "Bill Gates, Vaccinations, Microchips, And Patent 060606" (see source #3), Oriental Review has advanced a broad variety of stories related to color revolution conspiracy theories, anti-EU and anti-NATO themes, Christian traditional values and orthodoxy, historical revisionism (such as in the case of WWII), claims of Russian innocence (with regard to events like the downing of MH17, the Olympic doping scandal, and the Skripal poisoning), and COVID-19 disinformation. The site is also listed as a partner site to One World (see source #4), which was recently removed by Twitter after EU DisinfoLab highlighted its connections to Russian disinformation (see source #5).

Background

The Oriental Review domain was registered by one Gennadiy Georgievich Kovtunov, with a documented creation date of January 23, 2010 (see Figure A-1). Kovtunov is listed as both the registrant and admin (with contact details, address, phone, and personal email: gennady.kovtunov@yandex.ru). While Kovtunov can be credited with registering the domain, there are no references to him on the website. Rather, Andrei (or Andrey) Fomin is identified as the "project pioneer" (see below). He was still listed on the site as of January 11, 2011, but as of March 1, 2011, he is no longer referenced.

via Internet Archive — Time Machine

However, Fomin’s social media accounts also point to his involvement in Oriental Review, where he lists himself as its “founding editor” on his LinkedIn and Facebook pages.

Fomin does not appear to have any bylines at Oriental Review but has been published by many other fringe sites, including Global Research (see source #6), The Duran (see source #7), Free21 (see source #8), Off-Guardian (see source #9), Veterans Today (see source #10), Information Clearinghouse (see source #11), VoltaireNet (see source #12), and Fort Russ News (see source #13). His author pages at Free21, Off-Guardian, Veterans Today, Information Clearinghouse, VoltaireNet, and Fort Russ News all tie him to Oriental Review.

In 2016, Fomin and Oriental Review published an article titled "Does Turkey Need Patriarch Bartholomew?", falsely attributed to Arthur Hughes, a former U.S. ambassador, which claimed that the Ecumenical Patriarchate in Istanbul was involved in a coup attempt (see source #14). The article was eventually removed.

Reviewing a list of 43 of Fomin's public Facebook friends, it is clear that Fomin has connections with many individuals in Russian disinformation circles, several of whom have ties to the Russian government. This network of friends includes:

  • Vyacheslav Nikonov: Russkiy Mir, Chairman of the Management Board
  • Israel Shamir: Reportedly WikiLeaks’s representative in Russia. (Shamir’s son, Johannes Wahlström, is a spokesperson for WikiLeaks in Sweden.) (see source #15)
  • Maxim Grigoriev: Director of non-profit Foundation for the Study of Democracy (see source #16)
  • Vladimir Rodzyanko: Co-founder and Managing Director at The Duran (see source #17)
  • Alex Christoforou: President and Chairman at The Duran (see source #18)
  • Modest Kolerov: Chief editor at REGNUM news agency (see source #19)
  • Sergey Nalobin: Director of the Digital Diplomacy Unit at the Department of Information and Press for the Russian MFA (see source #20)
  • Maria Zakharova: Director of the Information and Press Department of the Ministry of Foreign Affairs of the Russian Federation (see source #21)
  • Alexander Ionov: President of the Rodina Party-tied Anti-Globalization Movement of Russia (ADR) (see source #22); hosted the “Dialogue of Nations” in 2016, which brought together representatives from separatist movements around the globe; raised awareness and support for Maria Butina as her “official representative” (see source #23)
  • Ajamu Baraka: American political activist and former Green Party nominee for Vice President of the United States in the 2016 election (see source #24)
  • Mnar A. Muhawesh: Founder and Editor in Chief of Mint Press News (see source #25)
  • Viktor Olevich: Lead expert at Moscow-based think tank Centre for Actual Politics (see source #26)

Since 2014, Andrew Korybko, based in Moscow, has become Oriental Review's most prolific contributor (see source #27). Korybko, who is the suspected founder/manager of the disinformation site One World, appears to have subsequently taken on a significant role related to the publication's editorial output. His Sputnik News bio (see source #28) notes that he "is a political analyst, journalist, and a regular contributor to several online journals, as well as a member of the expert council for the Institute of Strategic Studies and Predictions at the People's Friendship University of Russia." It appears that, with regard to Oriental Review, Korybko began his ascendance while Gennadiy Georgievich Kovtunov faded.

Research on the website Kontrus (see source #29) shows that "Kovtunov Gennady Georgievich was registered as an individual entrepreneur on May 28, 2010," months after he registered the Oriental Review domain earlier that year. The site also indicates his main line of business was "Activities of news agencies." Most importantly, in 2014 his business holdings were "appropriated by the tax authority," and he apparently liquidated his intellectual property (see source #30). This suggests that Kovtunov's business was under duress. That same year, in January 2014, Korybko published his first post for Oriental Review, titled "Coup in Western Ukraine: the Arab Spring unleashed in Europe" (see source #31).

While there is no information about Kovtunov since his cessation of entrepreneurial activity on June 25, 2014, Korybko has gone on to publish 703 articles (as of June 18, 2020) for Oriental Review. Notably, Kovtunov's exit corresponds with Korybko's ascendance, and both occurred in 2014, just as Russia was taking an overtly hostile posture toward Ukraine.

Leonid Savin is also a contributor (see source #32) to Oriental Review. Savin (see source #33) was previously editor-in-chief at Katehon, the St. Petersburg-based think tank with ties to the far-right. He was also previously editor-in-chief at Geopolitica.ru and has contributed to the Strategic Culture Foundation.

Overall, Oriental Review consistently delivers content aligning with key Russian strategic narratives and themes, such as:

  • Ukraine as a failed or unreliable state
  • U.S. and NATO aggression or interference in other countries
  • European divisions and weakness
  • Global elections
  • Immigration
  • Russia’s doping scandals in sporting competitions
  • Turkey as an aggressive, destabilizing force
  • Defending Russia and its government
  • Traditional values and orthodoxy
  • Historical revisionism

As of February 28, 2023, the Oriental Review is still going strong with many of the same contributors as in June 2020.

_____________________________________________________________

Sources

#1 https://www.nytimes.com/2010/01/12/opinion/12iht-edrogozin.html

#2 https://orientalreview.org/2011/04/04/gaddaffi%e2%80%99s-african-%e2%80%9cmercenary%e2%80%9d-story-is-a-disinformation-ploy-by-the-cia/

#3 https://orientalreview.org/2020/04/29/bill-gates-vaccinations-microchips-and-patent-060606/

#4 http://oneworld.press/?module=partners&action=list&page=2

#5 https://twitter.com/DisinfoEU/status/1272447204150689797?s=20

#6 https://www.globalresearch.ca/author/andrey-fomin

#7 https://theduran.com/members/andre-fomin/

#8 http://www.free21.org/author/andrey-fomin/?lang=en

#9 https://off-guardian.org/2018/03/13/fatal-quad-who-is-assassinating-former-mi6-assets-on-british-soil/

#10 https://www.veteranstodayarchives.com/2015/07/22/the-new-red-menace-and-natos-plans-in-the-arctic/

#11 http://www.informationclearinghouse.info/article43998.htm

#12 https://www.voltairenet.org/auteur125416.html?lang=en

#13 https://www.fort-russ.com/tag/oriental-review/

#14 https://www.rand.org/content/dam/rand/pubs/perspectives/PE200/PE278/RAND_PE278.pdf, https://www.euractiv.com/section/global-europe/opinion/will-ankara-take-aim-at-patriarch-bartholomew/

#15 https://www.theguardian.com/commentisfree/andrewbrown/2010/dec/17/wikileaks-israel-shamir-russia-scandina

#16 https://russiaun.ru/en/news/sideevent_syria, https://www.thedailybeast.com/trumps-new-favorite-network-oann-embraces-russian-propaganda

#17 https://ru.linkedin.com/in/vladimir-rodzianko-71aa4010

#18 https://ru.linkedin.com/in/alexchristoforou

#19 https://www.rferl.org/a/Russian_Journalist_Not_Allowed_To_Lithuania/1789154.html

#20 https://twitter.com/snalobin?lang=en

#21 https://www.buzzfeednews.com/article/konstantinbenyumov/maria-zakharokva-profile-russian-foreign-ministry

#22 http://aionov.ru/biografiya/

#23 https://tass.com/society/1085168

#24 https://www.gp.org/ajamu_baraka

#25 https://www.mintpressnews.com/author/mnarmuhawesh/

#26 https://www.aljazeera.com/programmes/insidestory/2019/12/peace-eastern-ukraine-191208201628985.html

#27 https://orientalreview.org/author/ak/

#28 https://sputniknews.com/authors/andrew_korybko/

#29 https://translate.google.com/translate?hl=en&sl=ru&tl=en&u=https%3A%2F%2Fkontrus.ru%2F&prev=search

#30 https://translate.google.com/translate?hl=en&sl=ru&u=https://kontrus.ru/businessmans/9240-01/person/310774614800478-kovtunov-gennadii-georgievich&prev=search

#31 https://orientalreview.org/2014/01/24/coup-in-western-ukraine-the-arab-spring-unleashed-in-europe/

#32 https://orientalreview.org/2018/03/29/the-death-of-the-liberal-world-order/

#33 https://www.b92.net/eng/news/world.php?yyyy=2018&mm=06&dd=08&nav_id=104363

________________________________________________________________


The Unpublished Project: Part II

PRC Media Interference & Influence: Target Canada

Photo by Towfiqu barbhuiya on Unsplash

Since 2017 we've worked on numerous projects which, for reasons of confidentiality, have not been published. We will feature five previously unpublished projects, now edited. Looking back, this is also a reflection of what's the same, what's changed, and what we learned. Our goal is to bring some of our past efforts out of the shadows. Part I

________________________________________________________________

The original report was delivered in August 2022.

Bottom Line Up Front (BLUF)

The client wanted the basis for a report focused on the People's Republic of China's (PRC) social media activity. This final report relates to overt and covert PRC attempts to undermine the people, businesses, government, and national interests of Canada.

The research took place from June to August 2022 and showed no evidence of any PRC malign activities targeting Canada's information ecosystem. To ensure the absence of evidence wasn't connected to user inexperience with the primary technology platform, additional information sources were included, such as a review of Chinese state media, cross-referencing of two recent Mandiant reports, and a survey of Canadian Alt-Left media and academics.

The absence of malign PRC activity does not represent a reduction of risk or the absence of vulnerabilities Canada faces from an increasingly hostile PRC. During their respective meetings in June 2022 (see source #1), NATO and the G7 recognized the PRC's growing threat.

_________________________________________________________________

Image credit: Margaret Trudeau on 1973 trip to China (Maclean’s)

Despite the absence of signals in the broad media ecosystem indicating potential malign activity, there are still lessons to be applied when considering the PRC's interests in Canada. Over the course of 80 years, the Dr. Norman Bethune mythos has grown to include how his "service to the CCP earned him the respect of Mao Zedong, who wrote a eulogy dedicated to Bethune when he died in 1939" (see source #2). Prime Minister Pierre Elliott Trudeau's pathos for China is equally important, as "the establishment of diplomatic relations between Canada and China in 1970 was a highly controversial political decision for Canada in the context of the times" (see source #3). As this report will highlight, in 2022 it is very clear that the PRC has influenced Canada into being a welcoming nation of fellow travelers.

This report will highlight how the PRC's efforts to capture Canada's political, business, and academic interests have left the country complicit, co-opted, corrupted, and complacent. As ASPI's report The Party Speaks for You: Foreign Interference and the Chinese Communist Party's United Front System indicates, "Premier Zhou Enlai (pictured above with Prime Minister Trudeau in 1973), one of the PRC's founding revolutionaries and a pioneer of the CCP's United Front, advocated 'using the legal to mask the illegal; deftly integrating the legal and the illegal' (利用合法掩护非法,合法与非法巧妙结合), 'nestling intelligence within the United Front' (寓情报于统战中) and 'using the United Front to push forth intelligence' (以统战带动情报)."

The PRC’s leadership and state media has been elevating its strategic narrative and propaganda global posturing for the last ten years. It is important to note that “the phrase “telling China’s story well” ((讲好中国故事)), introduced by Xi Jinping within the first year of his administration, in August 2013, encapsulates the notion that Party-state media and even quasi-private actors must work internationally to strengthen and innovate external propaganda, thereby enhancing China’s “international discourse power” (国际话语权) as a key aspect of its “comprehensive national power” (综合国力).”

Furthermore, Miburo Solutions offers an accurate assessment of today’s Propaganda and Disinformation Landscape, writing that “the Chinese Communist Party (CCP) is not copying Russia’s playbook when it comes to propaganda and disinformation — they’re authoring their own… China has grown its audience share globally, maintaining centralization and control, through a different multi-pronged approach combining well-funded, overt state-run print, radio, and television media; a network of public-private partnerships; and a new generation of social media influencers softening the CCP’s image worldwide.”

In the recent Canadian context, with the right issue, perceived slight, or opportunity to manipulate public opinion, the PRC clearly has the resources to target the broader Canadian media ecosystem.

For instance, DisinfoWatch (September 2021) detailed Chinese State Interference in Canada’s 2021 Election, leading with how “an article published September 9, 2021, on the CCP-owned tabloid platform, Global Times, questions the credibility of Conservative leader Erin O’Toole and his party’s foreign policy platform on China. The article appears to include a threat — that if Canada elects a Conservative government and that government adopts the policies in the Conservative’s platform, “China will pay back with a strong counter strike and Canada will be the one to suffer.”

According to DFRLab analysis (July 2021), China weaponizes discovery of graves at Canadian residential schools to avoid Xinjiang criticism: Chinese state media, in response, published 85 articles between 18 June 2021 and 11 July 2021 about the 1,100 unmarked graves that had been found at four former Canadian residential schools for indigenous children. Those articles were promoted by at least 24 different Chinese state-affiliated Twitter accounts, which referenced Canada in more than 270 tweets, compared to only 146 mentions of the US in June 2021.

From Facebook to the street, the National Post reports (January 23, 2020) in Protesters at Meng Wanzhou trial claim they were offered money: "Many of the protestors claim they were offered payment to be there under false pretences. Julia Hackstaff, an actor in Vancouver, says she was offered $100 on Facebook to attend. She said she thought she was appearing for a film shoot."

Events during June, July, and early August 2022, particularly then US House Speaker Nancy Pelosi's Taiwan visit, appear to have pushed Canada off the PRC's information operations agenda.

China State Media

To ensure the lack of signals detected was not researcher-related, Chinese state media, including Xinhua, Global Times, China Global Television Network (CGTN), China Daily, and China News, were monitored and reviewed for all Canada-related activity, along with the Twitter accounts of Lijian Zhao 赵立坚 @zlj517 (Spokesman & DDG, Information Department, Foreign Ministry, China, and a key "wolf warrior") and the Chinese Embassy in Ottawa. These findings help support the assertion that Canada was not a target of PRC information operations during this time period.

Mandiant

Mandiant released two reports between June and August 2022 illustrating the PRC’s lack of attention paid to Canada.

Pro-PRC “HaiEnergy” Information Operations Campaign Leverages Infrastructure from Public Relations Firm to Disseminate Content on Inauthentic News Sites (August 4, 2022)

“Mandiant has identified an ongoing information operations (IO) campaign leveraging a network of at least 72 suspected inauthentic news sites and a number of suspected inauthentic social media assets to disseminate content strategically aligned with the political interests of the People’s Republic of China (PRC). The sites present themselves primarily as independent news outlets from different regions across the world and publish content in 11 languages… Narratives promoted by the campaign criticize the U.S. and its allies, attempt to reshape the international image of Xinjiang due to mounting international scrutiny, and express support for the reform of Hong Kong’s electoral system — a change that gave the PRC more power over vetting local candidates.”

A review of all 72 sites, cross-checking for content targeting Canada, found that 34 returned the same three articles. In particular, the July 12, 2022 article "New evidence Canada's Genocide !" fits with the effort to reshape the narrative related to Xinjiang.

Pro-PRC DRAGONBRIDGE Influence Campaign Targets Rare Earths Mining Companies in Attempt to Thwart Rivalry to PRC Market Dominance (June 28, 2022)

“we observed additional DRAGONBRIDGE activity begin to target the Canadian rare earths mining company Appia Rare Earths & Uranium Corp and the American rare earths manufacturing company USA Rare Earth with negative messaging in response to potential or planned rare earths production activities involving those companies… The campaign also promoted content criticizing the Biden administration’s decision to invoke the Defense Production Act on March 31, 2022, to expedite the domestic production of critical minerals to end U.S. reliance on China for its supply.”

This report informed the Globe and Mail's reporting (June 28, 2022): Chinese bots spread disinformation about Canadian rare earths company in targeted attack, report alleges

Of the twelve Twitter profiles noted in the report, two no longer exist and ten have been suspended; none of them referenced Canada or Appia Rare Earths & Uranium Corp. It was not possible to cross-reference Facebook activity related to this campaign. Searching the hashtag #Lynas on August 16, 2022 did capture evidence of the campaign (four profiles), but nothing directed at Canada.
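Re-checks like the one above reduce to a simple tally of account statuses. The following is a minimal sketch of that bookkeeping, not the actual tooling used for this report; the handles and status labels are hypothetical placeholders.

```python
from collections import Counter

def tally_account_status(accounts):
    """Tally re-checked (handle, status) pairs -- e.g. 'active',
    'suspended', 'not_found' -- into attrition counts for reporting."""
    return Counter(status for _, status in accounts)

# Hypothetical re-check results mirroring the twelve profiles above:
checked = [("user_a", "not_found"), ("user_b", "not_found")] + \
          [(f"user_{i}", "suspended") for i in range(10)]

counts = tally_account_status(checked)
print(counts["suspended"], counts["not_found"])  # → 10 2
```

The same tally generalizes to larger follow-ups, such as the 340-profile #MH17 re-check described in the BLUF above.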

Canada’s Fellow Travelers

The PRC’s investment or recent lack-there-of in targeted information operations of Canada can to some degree be supported by the history, strength, and activities of its Canadian fellow travelers working in the Alt-Left media space and a cadre of academics.

Canada’s Alt-Left Media

This is not a full survey of the Canadian Alt-Left media space but highlights the echoes of Beijing’s propaganda.

The Canada Files describes itself as “a platform for critical investigation and analysis of Canadian foreign policy and the military-industrial complex. We are a proudly socialist, anti-imperialist news organization.”

This July 31, 2022 tweet is noteworthy: https://twitter.com/TheCanadaFiles/status/1553852705583628288

Cooperation between TCF & @socialist_china continues! TCF Contributing Editor @Arnold_August was interviewed on Press TV on its record of countering disinformation against China, also: China-Iran 25 cooperation agreement, BRICS + Belt & Road Initiative.

https://socialistchina.org/2022/07/10/press-tvs-record-countering-disinformation-against-china/?fbclid=IwAR0ALzzLGJ_D-thIl54bBOxW5U3L2qFdmihup0k89kmz8suHyA7Lbph4xxE

*Note > @Arnold_August is not only a Contributing Editor to The Canada Files but also an active contributor to the disinformation ecosystem. https://www.arnoldaugust.com/

Arnold August has an extensive body of work via Global Research: https://www.globalresearch.ca/author/arnold-august-2

He has published nothing since 2020 via CounterPunch: https://www.counterpunch.org/author/frevas3111/

This July 27th tweet further supports The Canada Files' affinity with @socialist_china: "We appreciate the republication of this great article by @socialist_china!"

https://twitter.com/TheCanadaFiles/status/1552359523087785985

Canadian labour activists oppose AUKUS, a new NATO in the Pacific

*Note > Friends of Socialist China also has an advisory group, composed of members such as:

Radhika Desai — Convenor, International Manifesto Group; Professor, University of Manitoba (Canada)

Alan Freeman — Co-director, Geopolitical Economy Research Institute (Canada)

John Riddell — Founding Director, Communist International Publishing Project (Canada)

The Canadian Dimension — Asia section describes itself as “the longest-standing voice of the left in Canada. For more than half-a-century, CD has provided a forum for lively and radical debate where red meets green, socialists take on social democrats”

Decolonizing Canadian Foreign Policy (8 part series)

Amid the wreckage of wars in Iraq and Afghanistan, the United States and its allies have turned their sights on China. University of Victoria professor emeritus and historian John Price examines the rise of the coalition of Anglo settler colonial states of Canada, the United Kingdom, the US, Australia, and New Zealand, and how they are today fomenting conflict in the Asia Pacific.

Alternatives in Canadian foreign policy and the racism of ‘The National’

As noted with the names on Friends of Socialist China's advisory group and John Price's work for Canadian Dimension, there is clearly cross-pollination between Canada's Alt-Left media and its cadre of academics.

Cadre of Canadian Academics

This is not a comprehensive survey of the Canadian academic space but highlights the echoes of Beijing’s propaganda.

Examples of Pro-China commentary:

Anti-China sentiment is becoming anti-Chinese prejudice in Canada (June 21, 2021), an op-ed written by Paul Evans, HSBC Chair in Asian Research at the University of British Columbia, and Yuen Pau Woo, a senator for British Columbia.

Why Alberta must rethink its ban on Canada-China university collaborations (June 2021), by John Price, Professor Emeritus of Asian and Pacific history, University of Victoria.

Simon Fraser University President's Faculty Lecture: Dr. Yuezhi Zhao, China's "Belt and Road Initiative:" A Critical Communication Perspective (January 31, 2018). *Note: The report author attended this event; it was neither "critical" nor did it offer testimony to China's intellectual property theft or its practice of "debt capture" diplomacy, for example. The professor played a 10-minute video, with subtitles, of President Xi Jinping "telling China's story well."

APSI’s recent report, Assessing the impact of CCP information operations related to Xinjiang, shows the effort to influence the academy is pernicious. It is interesting how information from Mandiant’s “HaiEnergy” report, and the July 12, 2022 article New evidence Canada’s Genocide ! connects with “the CCP’s Xinjiang-related narratives, the party is increasing its funding of academic research on influencing international perceptions of Xinjiang and other ideological topics sensitive to the CCP… evolving CCP information operation pipeline in which academic activities flow into online propaganda and engagements with international organizations offline such as the UNHRC.”

Ripped from the Headlines

While potentially of interest to a local audience, these recent news articles do not appear to be resonating in the broader information ecosystem of PRC trolls.

Former journalist on Hong Kong ‘wanted’ list receives ‘friendly’ visit from CSIS agents (August 17, 2022)

Ex-Vancouver newspaper editor of Sing Tao on Hong Kong’s wanted list (August 16, 2022)

SFU prof targeted by China for groundbreaking Uyghur research (August 11, 2022)

While Canada's Alt-Left media and academics work in support of the PRC's worldview, capturing the country's political elite is not a new phenomenon, and most importantly it has to be keenly and continually observed.

Canadian MPs hope for trade visit to Taiwan this fall despite tensions with China (August 17, 2022)

MPs, senators should consider the consequences of Taiwan visit, Trudeau says (August 19, 2022)

Regarding the book "Wilful Blindness," Canadian Senator Yuen Pau Woo, who has himself been criticized for supporting the Beijing party line, leads the criticism: Chinese Canadian leaders say journalist Sam Cooper did "not check his facts" about them (May 2021). Three examples from Cooper's closing chapters highlight why this book has been the subject of derision from the "friends of Beijing":

  • China’s fentanyl represents hostile state activity.” (p. 306)
  • “Vancouver was becoming a global technology node for narcos, state actors and cyber criminals.” (p. 312)
  • "Canadian politicians, community leaders and business leaders need to be aware of the threat of 'elite capture' and espionage… At these United Front events, Chinese consulates will place agents looking for 'talent' to cultivate Xi's foreign interference plans." (p. 345)

Conclusion & Recommendations

Between House Speaker Nancy Pelosi's visit to Taiwan, Sino-Russian relations related to the war in Ukraine, and a contracting Chinese economy, Canada clearly didn't warrant the PRC's attention (wrath) during the summer of 2022.

When the geopolitical table stakes are high enough, such as human rights or the Meng Wanzhou case, the PRC is clearly willing to invest in information operations targeting Canada. Yet, as this report suggests, the PRC has an established network of operations and operators influencing all of Canada's key institutions. As Sam Cooper concludes in 'Wilful Blindness', "this is 30 years of a crime trend that has entrenched itself into our upper levels of Canadian society. It's frightening. The connectivity onto our business elites and corridors of power in Ottawa. It will take a long time to turn the tide." (p. 388)

There will be no legislative solution to ridding the Canadian media ecosystem of PRC propaganda. But three steps the Canadian government can take to reduce the negative impact of PRC malign influence are: instituting a FARA-style law (modeled on the US Foreign Agents Registration Act), enacting RICO-style laws (Racketeer Influenced and Corrupt Organizations), and closing the remaining Confucius Institutes still operating in the country (see source #4).

We also need to research further and ask: does the PRC perhaps engage more in microtargeting than mass media, using information collected through cyber breaches, WeChat, and TikTok?

_________________________________________________________________

Sources

#1 ‘Systemic challenge’ or worse? NATO members wrangle over how to treat China

#2 https://en.wikipedia.org/wiki/Norman_Bethune

#3 The Canadian Policy Context of Canada’s China Policy since 1970 (p. 33)

#4 Universities, school boards across Canada defend ties with China’s Confucius Institute
________________________________________________________________


The Unpublished Project: Part I

The White Helmets Project 5 Years Later

Photo by Towfiqu barbhuiya on Unsplash

Since 2017 we've worked on numerous projects which, for reasons of confidentiality, have not been published. We will feature five previously unpublished projects, now edited. Looking back, this is also a reflection of what's the same, what's changed, and what we learned. Our goal is to bring some of our past efforts out of the shadows.

________________________________________________________________

The original report was delivered in March 2019.

Bottom Line Up Front (BLUF)

This client was the target of a long-running disinformation campaign. The project supported the client's MRM (Media, Research, and Monitoring) efforts, which attempted to counter the disinformation directed against it.

The White Helmets have been victimized by a vicious, sustained disinformation campaign since at least 2014. This report tells the story of 9,262 unique Twitter profiles driving the online conversation about the White Helmets and analyzes the Twitter activity of thirteen public figures and well-known proponents of pro-Kremlin narratives, along with the bots amplifying their messages. From August 2018 until March 2019, data from a variety of Twitter sources (audience engagement with specific pro-Kremlin profiles and the hashtag #WhiteHelmets) were collected and analyzed. The primary project goal was to identify automated behavior connected to the amplification of messages and the manipulation of platform metrics contributing to a campaign of global disinformation.

_________________________________________________________________

As the key goal of this project was to identify Twitter profiles that are most likely bots, it is important to start with two definitions: (see source #1)

  1. Bots are defined as “Pieces of software designed to automate repetitive tasks, such as posting content online. On social media, bots often purport to be genuine human agents, with the intent to deceive both humans and algorithms”
  2. If a live event is an organic experience, then a programmatic event can be defined as a synthetic one. Because a synthetic event is meant to imitate an organic one, synthetic social participation is an act of manipulation. While social bots represent only one tool in the information operators’ toolkit and a small percentage of the Twitter audience, this report will present evidence suggesting the White Helmets are victims of a concerted campaign of targeted, manipulated disinformation.

Synthetic manipulation combines message amplification (collective volume) with engagement metrics (retweets, likes, replies); this intentional behavior can, over time:

  • Normalize perception: read the same narrative enough, see the same memes over and over and it can be perceived as fact or truth
  • Censor: Flooding an online conversation constitutes a form of censorship, either by drowning out organic points of view or silencing organic voices following harassment
  • Gaming: This activity also influences or “games” algorithms driving search engine results, further amplifying disinformation by allowing manipulated content to disproportionately dominate the online conversation

Disinformation tactics and campaigns erode trust in public discourse and institutions while crowding out truthful content and debate. Human-like profiles imitating organic engagement constitute “Triple P” (Pervasive, Persistent, Partisan) information threats, which actively erode truthful online discourse.

This report’s findings present an important case study representing a much bigger socio-political problem that requires policymakers to respond. The results presented in this report raise important questions related to a variety of suspect Twitter audience behaviors. This has been a process to establish identification methods, classifications, and definitions specific to bots, which will be an ever-evolving process as the tools and tactics for information warfare adapt to automated detection methods and policy changes by social media platforms.

The report is not:

  • A firm attribution
  • Proof of impact
  • An invitation to harass profiles engaged in suspect activity

The report is:

  • Specific to Twitter
  • A foundational dataset to support the client’s counter-disinformation monitoring, analysis, and response

Research Methodology (edited):

Analysis A: The hashtag #whitehelmets represents a highly relevant conversation. Using the Mentionmapp scheduling tool, profile data connected to accounts using the hashtag was exported from the Twitter API at random moments each day during two time periods.

Analysis B: Focused on mentions or retweets of “The Dirty Dozen” (see source #2). As of February 27, 2023, all but two of the thirteen profiles were still active on Twitter.

Unlike the hashtag #whitehelmets, which can be deemed “the conversation” and as such could attract a wider and potentially more diverse range of participants, profiles choosing to engage directly with the Dirty Dozen could have different motivations and intentions. By collecting profile data from both the hashtag and those engaging with the thirteen specific accounts, it was agreed this approach could provide a broader audience whose activity would be analyzed over the duration of the project.

Ten weeks of analysis was based on segmenting the audience into three categories by their seven-day average daily tweet volume (see source #3)

  • Cyborgs (72+ tweets/day)
  • Moderates (36–71 tweets/day)
  • Low-volume (1–35 tweets/day) (see source #4)

All of the combined data returned 9262 unique profiles.
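The three-way segmentation above can be sketched as a simple threshold function. This is an illustrative reconstruction, not the project’s actual code; the profile records shown are made up:

```python
# Hypothetical sketch of the volume-based segmentation described above.
# Thresholds follow the report's categories; the profile data is illustrative.

def classify_by_volume(avg_daily_tweets: float) -> str:
    """Bucket a profile by its seven-day average daily tweet volume."""
    if avg_daily_tweets >= 72:
        return "cyborg"
    if avg_daily_tweets >= 36:
        return "moderate"
    return "low-volume"

# Made-up examples standing in for exported Twitter profile data
profiles = [
    {"screen_name": "example_a", "avg_daily_tweets": 150.0},
    {"screen_name": "example_b", "avg_daily_tweets": 40.5},
    {"screen_name": "example_c", "avg_daily_tweets": 3.2},
]

segments = {p["screen_name"]: classify_by_volume(p["avg_daily_tweets"])
            for p in profiles}
```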

Mentionmapp further segmented the data to analyze a dataset of 1770 unique profiles for which there is reasonable evidence to classify as bots/bot-like based on the following considerations:

  • Profiles with 0–2 replies
  • Skewed following/followed ratio, such as following twice as many accounts as follow back
  • Time on platform
  • Manual profile review to identify evidence of pro-Kremlin narrative
  • The Low-volume group was included to account for seven-day tweet average fluctuations
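The screening considerations above might be combined roughly as follows. The field names, the one-year account-age cutoff, and the two-signal rule are all assumptions for illustration, not the report’s actual method:

```python
# Hypothetical sketch of the bot/bot-like screening criteria listed above.
# Field names, the 365-day cutoff, and the two-signal rule are assumptions.

def is_bot_like(profile: dict) -> bool:
    few_replies = profile["replies"] <= 2
    # Following at least twice as many accounts as follow back
    skewed_ratio = profile["following"] >= 2 * max(profile["followers"], 1)
    young_account = profile["days_on_platform"] < 365  # assumed cutoff
    # Manual narrative review is represented here as a pre-set flag
    pro_kremlin = profile.get("pro_kremlin_narrative", False)
    signals = sum([few_replies, skewed_ratio, young_account, pro_kremlin])
    return signals >= 2  # assumed: two or more signals warrant classification

suspect = is_bot_like({
    "replies": 1, "following": 5000, "followers": 300,
    "days_on_platform": 90, "pro_kremlin_narrative": True,
})
```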

Findings:

Mentionmapp benchmarked the Twitter activity of four distinct categories of synthetic and/or suspicious activity:

  • Bot_Cyborgs: these profiles consistently exhibited highly suspicious behaviors with every scan.
  • Bot_Modulators: this category reflects profiles that modulate their tweet volumes between cyborg to non-cyborg
  • Bot_Moderates & Low-volume: there can be modulation between these two groups
  • The attrition class: profiles that have been suspended, not found, not authorized, or dormant. Mentionmapp continues to track these profiles because they may reappear
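One way to sketch how these categories could be assigned across repeated scans; the cyborg threshold follows the report’s definition, while the labeling logic is an assumption for illustration:

```python
# Hypothetical sketch of labeling a profile from a sequence of per-scan
# daily tweet volumes, plus an account status, per the categories above.

CYBORG_THRESHOLD = 72  # tweets/day, per the report's definition

def scan_history_label(daily_volumes: list, status: str = "active") -> str:
    """Label a profile from repeated scans of its daily tweet volume."""
    if status in ("suspended", "not_found", "not_authorized", "dormant"):
        return "attrition"
    flags = [v >= CYBORG_THRESHOLD for v in daily_volumes]
    if all(flags):
        return "bot_cyborg"       # highly suspicious on every scan
    if any(flags):
        return "bot_modulator"    # moves between cyborg and non-cyborg volumes
    return "moderate_or_low"

label = scan_history_label([120, 95, 40, 130])  # dips below 72 on one scan
```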

In a sample of profiles, other forms of suspect behavior were noted, such as:

  • A decline in the number of profiles being followed (in one example, a profile demonstrated a 76% reduction; another showed a sudden 80% decline, suggesting the lost contacts were the result of closed networks of profiles)
  • High-volume cyborgs going dormant
  • Suspicious profiles with near-identical screen names

Mentionmapp’s experience monitoring suspect profiles suggests that a number of these profiles (modulators and the “not authorized”) are being operated to avoid being detected for violating the platform’s terms of service and subsequent suspension/deletion.

Questions, Observations, and Conclusions (then)

In many ways, this project has been the exploration of an uncharted ecosystem. This descriptive analysis rests on a sound but preliminary foundation of methods and processes, but the questions still far outweigh the answers.

This project contributes to the analysis of the complexities of the digital information ecosystem, attempting to define and describe the problem while considering models of adversarial intent.

From a Mentionmapp Analytics perspective, we reflect on the following implications for further research:

  • Models are needed to understand how state actors and their proxies operate and manipulate bots, and how this might differ from other bad actors. A behavioral model of adversarial intent has yet to be developed.
  • Bot scores provide indicators or signals, which allow analysts to track noticeable fluctuations in scores over time. Further research is required to understand if adjustments are also automated or if they are facilitated by human profile operators themselves.
  • Confidence classifying bot profiles declines as tweet volume declines. Low tweet volume starts to appear more human-like, evading automated detection and removal. This could also reflect programmatic augmentation such as using scheduling tools (Buffer or Hootsuite), which are often operated without nefarious intent.
  • Further research and analysis are required to refine and agree on a common classification of bot types. Classification definitions must take into account volume, manipulation of metrics, coordination of multiple accounts, chatbots, and more.

In conclusion, the collection of bot/bot-like profiles suggests there is enough synthetic behavior targeting the White Helmets specifically and promoting pro-Kremlin narratives to cause significant concern.

Reflection

As per the client’s requirements, this project was Twitter-centric and bot-focused, and clearly, the volume of Tweets was an attribute that was given prime consideration. In retrospect, it would have been interesting to dedicate resources to examining the behavior, attributes, design, connections, and narrative patterns of the low-volume profiles. Profiles like these fly under the radar, yet over time a large enough collection could cumulatively support a campaign of strategic disinformation narratives.

Beyond the scope of the project, it would have been valuable to have analyzed links and documented the websites the audience was being directed to. We’re not claiming any correlation, but noted that as of February 2022, RT was the fifth most visited website by Syrian audiences, behind Google, YouTube, Facebook, and Wikipedia, with SputnikNews ranked fourteenth.

DIGITAL 2022: SYRIA

Five years later, eleven of the thirteen “Dirty Dozen” profiles are still actively advancing and amplifying pro-Assadist, pro-Kremlin, anti-imperialist, and anti-West narratives, along with a variety of corrosive conspiracy theories, into the information ecosystem.

_________________________________________________________________

Sources

#1 European Think Tank article “Polarisation and the use of technology in political campaigns and communication”

#2 The “Dirty Dozen” was a list of profiles provided by the client based on their internal research. The group represents the most influential key profiles engaged in negative discourse about the White Helmets

#3 DFRLabs is a leader in disinformation research, specifically related to automated (bot) activity. DFRLab’s (December 2016) definition of “suspicious”: “For the purposes of this analysis, a level of activity on the order of 72 engagements per day over an extended period of months — in human terms, one tweet or like every 10 minutes from 7 am to 7 pm, every day of the week — will be considered suspicious. Activity on the order of 144 or more engagements per day, even over a shorter period, will be considered highly suspicious. Rates of over 240 tweets a day over an extended period of months will be considered as “hyper-tweeting” — the equivalent of one post every three minutes for 12 hours at a stretch.”

#4 Given that bots are assets, it’s fair to suggest their operator may adjust and change their activity as a form of countermeasure. At some point, the deletion or suspension of assets is a cost. As well, by focusing on only high-volume profiles (cyborgs) we run the risk of missing other programmatic behaviors that in the aggregate are eroding and damaging the information landscape.
________________________________________________________________


The Lights are Still On

Photo by Thomas Bjornstad on Unsplash

We have chosen to discontinue supporting our Twitter network visualization application… Mentionmapp. Keeping it going from 2015 until September 30, 2022, was a good run. Thanks to everyone who appreciated its utility and value.

However, the lights are still on at Mentionmapp Analytics Inc. We have developed bespoke tools to research and analyze the social media ecosystem. As well, our code library can be leveraged for a variety of custom data visualization projects.

We focus our contract work on investigating and delivering threat assessment analyses of the complexities of networked narratives and the flow of disinformation. Our clients include information security organizations; media organizations/journalists; NGOs; academics; and global think tanks.

Recent projects include:

  • Research, literature review & feedback for a forthcoming book (Oxford University Press) about Russian Strategic Narratives being written by Sarah Oates (University of Maryland)
  • Research, literature review & chapter for a forthcoming Information Warfare monograph for the Foundation for Defending Democracy.
  • Research, analysis, and report about China information operations targeting Canada (confidential platform client)

Contact admin@mentionmapp.com about working with us.

#Kremlin: Using Hashtags to Analyze Russian Disinformation Strategy and Dissemination on Twitter

Working in collaboration with Prof. Sarah Oates (Fellow, Woodrow Wilson International Center for Scholars, 2018–19; Professor and Senior Scholar Philip Merrill College of Journalism University of Maryland) we presented our paper “#Kremlin: Using Hashtags to Analyze Russian Disinformation Strategy and Dissemination on Twitter” on August 30th at APSA 2019 in Washington D.C. (August 29-September 1). Ours was one of four papers presented during the session: Online Disinformation: Actors, Platforms, and Users.

Abstract

Reports of Russian interference in U.S. elections have raised grave concerns about the spread of foreign disinformation on social media sites, but there is little detailed analysis that links traditional political communication theory to social media analytics. As a result, it is difficult for researchers and analysts to gauge the nature or level of the threat that is disseminated via social media.

This paper leverages both social science and data science by using traditional content analysis and Twitter analytics to trace how key aspects of Russian strategic narratives were distributed via #skripal, #mh17, #Donetsk, and #russophobia in late 2018.

This work will define how key Russian international communicative goals are expressed through strategic narratives, describe how to find hashtags that reflect those narratives, and analyze user activity around the hashtags. This tests both how Twitter amplifies specific information goals of the Russians as well as the relative success (or failure) of particular hashtags to spread those messages effectively.

This research uses Mentionmapp, a system co-developed by one of the authors (Gray) that employs network analytics and machine intelligence to identify the behavior of Twitter users as well as generate profiles of users via posting history and connections.

This study demonstrates how political communication theory can be used to frame the study of social media; how to relate knowledge of Russian strategic priorities to labels on social media such as Twitter hashtags; and to test this approach by examining a set of Russian propaganda narratives as they are represented by hashtags.

Our research finds that some Twitter users are consistently active across multiple Kremlin-linked hashtags, suggesting that knowledge of these hashtags is an important way to identify Russian propaganda online influencers. More broadly, we suggest that Twitter dichotomies such as bot/human or troll/citizen should be used with caution, and that analysis should instead address the nuances in Twitter use that reflect varying levels of engagement or even awareness in spreading foreign disinformation online.

Download and read the full paper via SSRN

#digitalsherlocks 360/OS: Berlin Unbound — Mentionmapp Analytics — Medium

“To be ignorant of what occurred before you were born is to remain always a child. For what is the worth of human life, unless it is woven into the life of our ancestors by the records of history?” ― Marcus Tullius Cicero

It’s meaningful to have the opportunity to reflect on my two days in Berlin (June 22–23) attending the 360/OS conference. The stated purpose: this “cross-sectoral network of ‘Digital Sherlocks’ will create and cultivate techniques needed to expose falsehoods and fake news, document human rights abuses, and report the actuality of global events in real-time.” Being part of it makes the past two years of work feel worthwhile. Yet it reconfirmed the complexity of the issues and the enormity of the challenge of defusing disinformation in a hyper-connected world.

Here are some highlights and a few musings. An omission isn’t an indication of disinterest or having a personal hierarchy of importance. I learned something from every speaker. The conference held true to its name, highlighting the importance of having a 360-degree point of view along with the potential of Open Source Intelligence (data collected from publicly available sources).

I’ve suggested 2018 is the year of “Post-Trust,” and noted @DFRLab Managing Editor and Director Graham Brookie’s comment that “Open Source equates directly with trust.” Regardless of one’s organization (media, civic, technology, or government), his saying “don’t assume your own credibility” is also worth heeding.

With our online public space floundering in untrustworthy headlines, images, and videos, Bellingcat’s Eliot Higgins, added that Open Source intelligence is operating to “identify, verify, and amplify.” The frustrating quandary is knowing the production, distribution, and amplification of disinformation exponentially outstrips current resources and capacity of the human intelligence needed to dispel the lies.

Eliot Higgins

There is no shortage of questions that need framing and asking. A two-day conference wasn’t meant to deliver all of the answers. Again, we’re still coming to terms with “speed”: the speed of human deliberation versus the speed at which lies and disinformation spread.

We need to consider the implications connected to the speed of response. For instance, how do we respond even with proof that a state or non-state actor’s disinformation campaign is an attempt to influence or is an act of interference? We’re also forced to ask, who do we respond to? What shape or form would a remedy take? Can we set rules or codes of conduct in cyberspace? If so, who sets them? Can we even talk of rules? Who’d play by the rules? And, we also have to ask ourselves — who are the arbiters of truth? Everyone I spoke with acknowledged that the complexity of the disinformation problem only galvanizes our resolve to find solutions.

Having tools at one’s disposal doesn’t ensure the ability to communicate a solution. Getting digital forensics right was well illustrated when Nick Waters (Bellingcat) presented, Verification in Person: Finding Bana ( read the story). In this case, the safety of Bana’s family far outweighed Bellingcat going to press with the proof they uncovered. It was a moving moment.

Nick Waters talking with Bana

We “played” table games including the Geolocation Challenge, & Aggressors and Atrocities. We worked as small teams comparing visual evidence while trying to verify the actual sources. It was a humbling exercise.

Time ticking on a “table game”

It became clear that we’re also facing the increasing danger of platform-enabled machine censorship. The social media platforms (Facebook, YouTube, and Twitter) play key distribution and amplification roles while at the same time being under increasing pressure to mediate publicly generated content. Machines act on what they are programmed to do. They do not discern between jihadist propaganda and potential evidence of human rights atrocities.

Sam Dubberley (Manager, Digital Verification Corps, Amnesty International) talked about using open source research to verify human rights abuses around the world. He spoke of a particular change to Google Earth Pro (satellite imagery) that cost them 12 years of archived geolocation data.

Hadi Al-Khatib (Founder & Director, Syrian Archive) described how they lost 400,000 YouTube videos. In both cases, these algorithmic content “take-downs” or significant changes to a tool took place with no discussion, no collaboration, and no regard for forensic value. Platforms can show us today’s atrocities while their technologies could ensure the perpetrators never face tomorrow’s justice.

I noted an important comment (but not who deserves the attribution) about how it’s one thing “to have intelligence versus having the intelligence of how to use it.”

There’s my permanent record of Ben Nimmo’s Four D’s (dismiss, distract, distort, dismay) for responding to truths deemed inconvenient, which now goes with me everywhere. I’m also holding onto his advice: “don’t feed the trolls.”

Andrea Bruce (National Geographic Photojournalist) — Storytelling: Power in the Proof, reconfirmed it’s “not about looking.” For Bruce, it’s about what her lens sees of a subject rather than looking for a story. For us, this is how we approach analyzing data. The data tells the story, we don’t fit the data to a story.

Former US Secretary of State Madeleine Albright started the final day of 360/OS, and her observation that “People talk with 21st Century tools; Government listens via 20th Century media; and then delivers 19th Century policy,” is scratched into my notebook.

While the event was about disinformation, it was also about our connection with history. In particular, three modern post-war periods (WWI, WWII, and the Cold War) influence our world today. Regardless of how uncomfortable any past is, willfully ignoring it solves nothing. Even more concerning, denying or attempting to erase history only ensures its repetition.

The collective absorption of immediacy has a hand in creating today’s distorted realities. While exploring Berlin, I was also juxtaposing images of yesterday and today. I don’t see history as an exercise of pining for the “good old days,” but one of understanding how our yesterdays will define the tomorrows to come.

Dealing with the disinformation problems to come will require (as 360/OS represents) a coalition of resources and participants, including government, technology companies, civic organizations, and citizens.

  • From government — we need access to resources (human, data, and funding), not laws or legislation.
  • From technology companies — we need design that understands human intentions, not more technocratic utopianism.
  • From civic organizations — we need new solutions designed with ground-up thinking, not yesterday’s top-down models of charity.
  • From citizens — we need collective civility and a rejection of the construct of the consuming individual. The value of being informed is eroding. The harvesting of attention now puts a premium on amusement and outrage.

The solutions to disinformation entail more than new cyber-security or Artificial Intelligence technologies and better tuned “black box” algorithms.

Inoculation from information operations that are influencing, interfering, and manipulating public opinion or perception might start with us imagining a society where all citizens have access to a home, to food, and the opportunity to be fully literate. Delivering these basics can forge a foundation of human cognitive security.

____________________________

Recommended reading:

Nothing Is True and Everything Is Possible by Peter Pomerantsev

Churchill and Orwell: The Fight for Freedom by Thomas E. Ricks

The Kremlin Ball by Curzio Malaparte

I owe special thanks to the Atlantic Council, Jay Brown, Mike Edwards, Bosco Anthony, Jan Enns, Kim Bowie, Elliot Funt, Dave Davies, Randall Lucas, Sam Sullivan and Travis Truong for helping make my attending 360/OS happen.

__________________________________________________________________

To help you see through the complexities of this rapidly evolving landscape, we’ve written the four-volume eBook series, Ecosystem of Fake: Bots, Information & Distorted Realities. We invite you to learn more about today’s information battlefield, proposed solutions, and further reading resources. Let’s work towards making cyberspace a more human place.


Originally published at https://medium.com on June 15, 2019.

Programmatic Propaganda In Action: #OperationOliveBranch

“And if all others accepted the lie which the Party imposed — if all records told the same tale — then the lie passed into history and became truth. ‘Who controls the past,’ ran the Party slogan, ‘controls the future: who controls the present controls the past.’”
George Orwell, 1984

In January 2018, Turkey launched a military operation, code-named Operation Olive Branch, in northern Syria, against the Kurdish-led Democratic Union Party in Syria (PYD), its armed wing, the People’s Protection Units (YPG), and Syrian Democratic Forces (SDF) positions surrounding the Syrian city of Afrin.

Operation Olive Branch and its predecessor, Operation Euphrates Shield (2016–2017), were the first invasions by Turkish ground forces into a sovereign nation since the invasion of Cyprus in 1974.

This research highlights the incorporation of the cyberspace dimension into that conflict. Managed through forums and social media, noncombatant proxies of the state joined the fight, disseminating propaganda in the battle to control the narrative.

We analyzed Twitter activity (tweets and retweets) and profiles connected with the hashtag #operationolivebranch and identified activities of the Turkish state or pro-Turkish-state elements intended to influence audience perceptions around the world.

  • Two high-volume programmatic profiles (which we’d classify as cyborgs, a combination of human and machine): one posing as a journalist/blogger, @PelinCiftek, and the other positioned as a parody profile, @AkPartiNet. At the time of our research, these two profiles were tweeting at a rate of 465 tweets per day (seven-day average) and sharing the same content from 22 other profiles, which also displayed suspicious behaviors.

(In line with the DFRLab position: “For the purposes of this analysis, a level of activity on the order of 72 engagements per day over an extended period of months — in human terms, one tweet or like every 10 minutes from 7 am to 7 pm, every day of the week — will be considered suspicious. Activity on the order of 144 or more engagements per day, even over a shorter period, will be considered highly suspicious.”)

We also note these two profiles were created within one month of each other and have a nearly identical tweet-to-like ratio.

@PelinCiftek (translated)
@AkPartiNet (translated)
  • These two seemingly unrelated (not directly connected) profiles are highly programmatic and serve as the hubs of a tightly connected network that is clearly amplifying the Turkish government’s message on the war in Afrin and other issues. This serves as an example of computational propaganda in action.

Our research highlights how this network operates in a programmatic manner, as a mass amplifier, and with the intent to manipulate or influence public perception.

Findings from our research point to an orchestrated campaign connected with the hashtag #operationolivebranch. Again, these two profiles (@PelinCiftek and @AkPartiNet) at first glance have nothing in common and do not appear to be connected. Yet we found they were operating in concert, with identical volume, timing, and tweet content (a small sample is noted below).
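A minimal sketch of the kind of check that can surface two accounts operating in concert — posting identical content within near-identical time windows. The data shapes, the five-minute window, and the scoring rule are illustrative assumptions, not the method actually used:

```python
# Hypothetical coordination check: fraction of profile A's tweets matched by
# an identical tweet from profile B within a short time window.

from datetime import datetime

def coordination_score(tweets_a, tweets_b, window_seconds=300):
    """Tweets are (text, timestamp) pairs; returns the matched fraction of A."""
    matched = 0
    for text_a, time_a in tweets_a:
        for text_b, time_b in tweets_b:
            if text_a == text_b and abs((time_a - time_b).total_seconds()) <= window_seconds:
                matched += 1
                break
    return matched / len(tweets_a) if tweets_a else 0.0

# Made-up sample data for illustration
a = [("Afrin update", datetime(2018, 3, 16, 10, 0)),
     ("Operation news", datetime(2018, 3, 16, 11, 0))]
b = [("Afrin update", datetime(2018, 3, 16, 10, 2)),
     ("Unrelated post", datetime(2018, 3, 16, 12, 0))]

score = coordination_score(a, b)  # one of A's two tweets is matched
```

A score near 1.0 over a large sample would be consistent with the identical volume, timing, and content observed between the two hub profiles.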

The 22 profiles they amplify share content that undoubtedly met with Turkish government approval in relation to the conflict in Afrin.

Notes and network visualizations:

*Red-avatar profiles present bot-like characteristics.

Map I

Map I: This is the network visualization of #operationolivebranch that started this investigation. The symmetry between @AkPartiNet and @PelinCiftek raises too many questions not to explore the nature of these two profiles and their connections.

Map II

Map II: shows, from @AkPartiNet’s last 200 tweets on 03/16/2018, the five most mentioned profiles and the two most used hashtags. The thickness of the lines indicates the volume of tweets. Profiles in grey have been mentioned by the profiles in @AkPartiNet’s network. It’s interesting to see the network connectivity between the first-degree and second-degree profiles.

Map III

Map III: from @AkPartiNet’s last 200 tweets on 03/16/2018, these were the 22 profiles mentioned and the two most used hashtags.

Map IV

Map IV: from @PelinCiftek’s last 200 tweets on 03/16/2018, these were the five most mentioned profiles and the two most used hashtags. Profiles in grey have been mentioned by the profiles in @PelinCiftek’s network. Again, we were interested to see the network connectivity between the first-degree and second-degree profiles.

Map V

Map V: from @PelinCiftek’s last 200 tweets on 03/16/2018, these were the 22 profiles mentioned and the two most used hashtags. Note the identical pattern of activity between these two profiles.


In summary:

1) We have documented two programmatic Twitter accounts posing as journalists/bloggers. They served as the key network hubs and amplifiers (as illustrated above):

The “Journalist” — @PelinCiftek (no links to personal sources or professional bylines)

The Parody — @AkPartiNet → website linked to the account, akparti.com is unreachable.

2) Additional programmatic and/or cyborg Twitter accounts amplifying the Turkish government’s message about the war in Afrin. We conclude these are likely government-approved proxies or messengers (if not, it’s highly improbable this content and network would exist) whose tweets are relayed by the two accounts above. At the time of our research, these ten profiles had daily tweet averages high enough to classify them as cyborgs:

PopulerGundem = 229 tweets/day

BoraDemiraslan = 200 tweets/day

Enesicoo = 172 tweets/day

EsmaUyumlu = 172 tweets/day

FetoGercekler = 140 tweets/day

SiyasiKulis = 115 tweets/day

TCsonBasbakan = 97 tweets/day

AkPartiNoktaOrg = 86 tweets/day

BestepeCB = 86 tweets/day

SaameetDeemiir = 80 tweets/day

The 12 other profiles that made up this network (cumulatively accounting for another 285 tweets/day) are:

AsliAyDincer, Akparticom, CankayaBasbakan, ErdoganFotograf, DevletBaskaniCB, abdullahciftcib, enesiovic, mahmutovur, MevlutCavusoglu, metinhocaefendi, memlktmeselesi, EmreUslu


This research and project was done in collaboration with Flavius Mihaies


While 2017 is behind us, many of the past year’s troubling themes are not. We’ve seen investigations into Russia’s interference in the US Presidential Election unfold, CEOs of digital platforms questioned about how they’re contributing to the information crisis, and media outlets and information itself deemed untrustworthy. With few solutions in sight, 2018 is giving us more of the same.


Sincerely

John (CEO & Co-founder)

Mentionmapp Analytics

Mentionmapp Investigates: See if your information landscape is at risk or explore collaborative research opportunities with us. Contact: john [at] mentionmapp [dot] com

#TrumpKnew: Seeing the Cyborgs Run

“Liberty does not exist in the absence of morality.” Edmund Burke

It’s been like global political hashtag roulette since early June: #G7Summit, #NATOSummit, #TrumpVisitUK, #HelsinkiSummit, #TrumpRussia, #TreasonSummit, #TrumpPutin. Unlike playing Pokemon, we’re not in a position to capture (and analyze) them all. Our curiosity and resources led us to cast a lens on the hashtag #TrumpKnew.

We captured six moments of #TrumpKnew on July 19th (17:20, 18:00, 18:20, 18:40, 19:00, 19:20 PST), and what we found might come as a surprise. This collection of data gave us 577 total unique profiles, which we reduced to 300 profiles (in the order in which we collected the tweets) for this report. As one profile was a business and seven were media-related, we ended up breaking down the activity of 292 profiles.

It’s fair to call this couple of hours of “conversation” polarized, hyper-partisan, and unbalanced.

Separating the data into a Blue versus Red binary is where the “conversation” gets unbalanced.

Out of 292 profiles 45 are Red.

Of these 45 we classify 21 as cyborgs or social bots.

The seven-day average number of tweets of these 21 breaks down as:

400+ tweets/day = 2 profiles

300–399 tweets /day = 2 profiles

200–299 tweets/day = 7 profiles

100–199 tweets/day = 7 profiles

70–99 tweets/day = 3 profiles

The two most voluminous profiles with 458 and 428 tweets/day respectively are:

Additionally, 34 of the 45 profiles are anonymous; with that lack of veracity, it’s fair to suggest there’s an accompanying lack of credibility.


Out of 292 profiles 247 are Blue.

Of these 247 we classify 104 as cyborgs or social bots.

The seven-day average number of tweets of these 104 breaks down as:

400+ tweets/day = 6 profiles

300–399 tweets/day = 8 profiles

200–299 tweets/day = 19 profiles

100–199 tweets/day = 40 profiles

70–99 tweets/day = 31 profiles

The two most voluminous profiles with 465 and 461 tweets/day respectively are:

Additionally, 152 of the 247 profiles are anonymous; (again) with that lack of veracity, it’s fair to suggest there’s an accompanying lack of credibility.
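The bucketing above can be sketched as a short helper. This is a minimal illustration, assuming we have each profile's raw tweet count over the prior seven days; the bucket boundaries (with 70 tweets/day as the lowest flagged band) are the ones used in this report, and the sample counts are hypothetical:

```python
from collections import Counter

# Rate bands used in the report; profiles averaging 70+ tweets/day over
# the previous seven days fall into a cyborg/social-bot review bucket.
BUCKETS = [(400, float("inf"), "400+"), (300, 400, "300-399"),
           (200, 300, "200-299"), (100, 200, "100-199"), (70, 100, "70-99")]

def bucket_profiles(weekly_tweet_counts):
    """Map {profile: tweets in last 7 days} to tweets/day rate buckets."""
    tally = Counter()
    for profile, total in weekly_tweet_counts.items():
        rate = total / 7  # seven-day average tweets/day
        for lo, hi, label in BUCKETS:
            if lo <= rate < hi:
                tally[label] += 1
                break
    return tally

# Hypothetical sample: profile "d" averages 10/day and stays unbucketed.
counts = {"a": 458 * 7, "b": 150 * 7, "c": 80 * 7, "d": 10 * 7}
print(bucket_profiles(counts))
```

A high sustained tweet rate alone doesn't prove automation, which is why the report pairs this volume heuristic with manual review of each feed.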

Seeing 292 mostly anonymous profiles (186 in total) connected to the hashtag #TrumpKnew (profiles that are or appear real need verification, as some are clearly not who/what they say they are) points to participation not as a social act (good luck finding a feed with food porn, fun times, or non-anthem-related sports references) but as fervently hyper-partisan political amplification or metric manipulation.

We did find a few real people in the conversation, such as Ned (while highly political, at least he’s real)

And the same with

In this case, we found they even shared at least one non-political tweet.


As in Shakespeare’s Julius Caesar, Mark Antony was clear: “I come to bury Caesar, not to praise him.” These maps clearly highlight a conversation that’s not pouring out the praise.

20 profiles tweeting #trumpknew July 19, 2018 at 17:20 PST. Thick lines equate to the volume of tweets. The grey profiles are being mentioned by those highlighted in blue.
The next 20 profiles tweeting #trumpknew July 19, 2018 at 17:20 PST.

This pattern of network visualization repeats itself for each moment of #TrumpKnew that we have archived.
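The mention maps above are, at their core, directed graphs: an edge runs from each tweeting profile to each profile it mentions, and edge weight (line thickness) tracks tweet volume. A minimal stdlib sketch of that structure, with hypothetical author/mention pairs standing in for the captured #TrumpKnew data (the actual maps were produced with Mentionmapp's own tooling):

```python
from collections import defaultdict

def build_mention_map(pairs):
    """Directed edges author -> mentioned; weight = number of tweets."""
    edges = defaultdict(int)
    for author, mentioned in pairs:
        edges[(author, mentioned)] += 1  # thicker line = higher weight
    return dict(edges)

# Hypothetical sample of (author, mentioned_user) pairs from tweets.
mentions = [("blue1", "target"), ("blue1", "target"), ("blue2", "target"),
            ("blue1", "other"), ("blue3", "target")]
edges = build_mention_map(mentions)

# Grey nodes in the maps are mentioned-only profiles (never authors).
authors = {a for a, _ in edges}
mentioned_only = {m for _, m in edges} - authors
print(edges[("blue1", "target")])  # 2
print(sorted(mentioned_only))      # ['other', 'target']
```

The repeating pattern in each capture shows up here as a hub-and-spoke shape: many authoring profiles, weighted edges converging on a handful of mentioned targets.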

Arguably there are Progressives (organizations, communication professionals, individual resistance patriots) playing the game of operating cyborgs (machine-meets-human profiles), social bots, and sockpuppet accounts. No doubt they’ll argue “the other side is doing it as well,” yet this can’t justify adding more toxicity to the information ecosystem.

The other, and equally significant, concern is the impact of people and organizations misreading the false signals. Falling prey to the belief that if a profile is flying the team colors then all is good means you’re being gamed. We’ve long held that for every #MAGA Bot there’s a #Resist Bot; information operations aren’t about picking sides, but are all about sowing doubt and distrust.

Like Shakespeare’s Comedy of Errors, before believing it’s worth heeding this echo of the past — “I see two husbands, or mine eyes deceive me.”

