Month: March 2023

The Unpublished Project: Part V

#Kremlin: Using Hashtags to Analyze Russian Disinformation Strategy and Dissemination on Twitter. #MH17 Twitter Cyborgs Three Years Later.

Photo by Towfiqu barbhuiya on Unsplash

Since 2017 we’ve worked on numerous projects that, for reasons of confidentiality, have not been published. We will feature five previously unpublished projects, now edited. Looking back, this is also a reflection on what’s the same, what’s changed, and what we learned. Our goal is to bring some of our past efforts out of the shadows. Part I Part II Part III Part IV

________________________________________________________________

The original report, #Kremlin: Using Hashtags to Analyze Russian Disinformation Strategy and Dissemination on Twitter, was a collaborative project with Sarah Oates (Merrill College of Journalism, University of Maryland); we co-presented the paper at the 2019 APSA conference.

Bottom Line Up Front (BLUF)

Out of curiosity, in September 2022 we returned to the original set of 340 Twitter profiles classified as cyborgs (see note #1) and connected to the hashtag #MH17 to see what, if anything, had changed about their activity.

Account attrition was noteworthy: 181 accounts were suspended, inactive, or no longer existed. Additionally, we categorized 62 accounts as having a pro-Kremlin feed.

______________________________________________________________

Image From EUvsDisinfo “SIX YEARS OF MH17-LIES: THE KREMLIN IS LOSING ITS OWN GAME”

Background

The following are excerpts from the original paper.

This original paper leverages both social science and data science by using traditional content analysis and Twitter analytics to trace how key aspects of Russian strategic narratives were distributed via #skripal, #mh17, #Donetsk, and #russophobia in late 2018. This work will define how key Russian international communicative goals are expressed through strategic narratives, describe how to find hashtags that reflect those narratives, and analyze user activity around the hashtags. This tests both how Twitter amplifies specific information goals of the Russians as well as the relative success (or failure) of particular hashtags to spread those messages effectively.

This research makes two important assumptions. First, it assumes without further investigation (based on previous media and analytical reports) that the Russian government is choosing to pursue a disinformation strategy on Twitter. Second, it does not attempt to measure the specific effect of the disinformation on the target population, i.e. whether it is causing any change in attitude or behavior. What this paper does is analyze the presence and behavior of Twitter users linked to hashtags that are promoting Kremlin strategic narrative priorities. By doing this, we can provide critical evidence to identify those who are creating or amplifying disinformation in our media ecosystem. By combining the identification of hashtags linked to key Kremlin narrative goals, Twitter users who are amplifying those hashtags, and the behavior of Twitter users over time, we can identify more precisely how online communities of disinformation function. This allows the discussion to evolve beyond identifying whether a Twitter user is a bot or cyborg, which is useful but only one element of disinformation activity. Rather, we are more broadly interested in how the Kremlin’s strategic narratives are amplified, over time and via different hashtags, on Twitter.

Our research posits that, in the case of Russia, strategic narratives can be defined and linked to specific Twitter hashtags. This is a plausible approach for the Russian case for the following reasons:

1. Russia has strategic narratives that can be readily identified through speeches and documents from the central government, specifically policy concepts and key addresses by President Vladimir Putin. As an authoritarian regime that closely controls key communication nodes (particularly state-run television and foreign broadcasts), the Kremlin provides coherent and consistent messages that can be identified in both its domestic news and its foreign propaganda.

2. There is no viable political opposition in Russia (unlike the deeply divided voices of current U.S. elites), allowing for a unified signal of national strategic messages.

3. Russia has demonstrated consistent and pervasive dedication to promoting its views on social media sites, efforts that have been fairly closely documented due to Russian interference in U.S. election campaigning.

4. Russia has a strong history of a state role in constructing (as opposed to just amplifying) strategic narratives. As a successor to the Soviet regime, Russia was forced to create symbols and narratives in the 1990s, which gave the state latitude to dictate a vision of the state from above, as opposed to reflecting the reality from below. At the same time, the Russian state could take advantage of both the powerful images and methods of the Soviet propaganda system, which it eventually did after a relatively brief experiment with a more democratic society (Oates, 2006).

5. Russians can also hijack hashtags in an attempt to control narratives, such as when the Russian Foreign Ministry tweeted #UnitedforUkraine in 2014. This was a small-scale, human effort (see source #1); automated efforts are more sustainable and could arguably be more effective.

______________________________________________________________

#MH17 Cyborgs. September 2022

This was a curiosity project, motivated by an interest in seeing “who” was still active on Twitter almost four years after starting the original research. The attrition is noteworthy, with over 50% of the original group of cyborgs ‘inoperable.’ Of course, this could already have changed. It is fair to ask which suspended accounts are now active in the spring of 2023, and which previously inactive accounts have sprung back to life. Small projects for another day.

Borrowing from Heraclitus, the Twitter feed is like his river: “no man ever steps in the same river twice, for it’s not the same river and he’s not the same man.”

______________________________________________________________

Revisiting our research limitations and conclusions

One of our limitations is reconciling our classification system with the use of average tweet volume over a week. We identified three types of accounts above: cyborg, moderate, and low-volume. Mentionmapp used average activity over a time period (typically a week), but this average hides peaks and valleys in the activities of some accounts. For example, an account that is very active may fall silent or slide from cyborg to moderate levels of engagement or back again. There are also problems of suspended accounts, deleted profiles, deleted feeds of tweets, and protected tweets that can obscure the findings. Mentionmapp cannot download or analyze private accounts, although it is questionable how effective a private account can be in information warfare.
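The classification and its averaging blind spot can be sketched as follows. The thresholds are invented for illustration; the report does not publish its exact cutoffs, so treat these numbers as placeholders:

```python
from statistics import mean

# Hypothetical thresholds (average tweets per day over a week); the
# original classification's actual cutoffs are not published here.
CYBORG_MIN = 72      # assumed lower bound for "cyborg" volume
MODERATE_MIN = 10    # assumed lower bound for "moderate" volume

def classify(daily_tweet_counts):
    """Classify an account from a week of daily tweet counts."""
    avg = mean(daily_tweet_counts)
    if avg >= CYBORG_MIN:
        return "cyborg"
    if avg >= MODERATE_MIN:
        return "moderate"
    return "low-volume"

# The averaging problem described above: both accounts average 72/day,
# but one is steady while the other alternates a burst with silence.
steady = [72] * 7
bursty = [504, 0, 0, 0, 0, 0, 0]
print(classify(steady), classify(bursty))  # both come out "cyborg"
print(max(steady), max(bursty))            # yet peaks differ: 72 vs 504
```

The second pair of lines shows why an average-based label can mask very different behavioral profiles, which is exactly the limitation noted above.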

Conclusions

The challenges posed by the spread of disinformation via social media create a significant threat to democratic discourse. We deployed two approaches that can be replicated by other scholars and analysts. First, we relied on several studies of narrative to take a broader view of media messages, narrowing down to examples of Russian strategic narratives that could reasonably be identified by a single hashtag. In this way, we were able to isolate critical signals from the vast noise of social media, although we concede that a study of Twitter alone is relatively narrow. Second, this study would not have been possible without a robust retrieval and measurement tool for Twitter that had been developed and tested in a range of situations. This research was a partnership between an academic and a technology entrepreneur, an attempt to leverage different types of expertise. We found this partnership quite fruitful. Even when considering the limitations, this study provided three key findings:

1. It is highly useful to know what you’re looking for and what you’re looking at when approaching social media analytics. The pre-selection of hashtags that we feel confident represent Kremlin strategic narratives makes this a much more useful exercise.

2. The Twitter activity identified by Mentionmapp demonstrates the value of moving beyond dichotomies such as bot/human and using a wider range of user behaviors to establish relative roles specific Twitter users play in synthetic audience engagement.

3. It is possible to find disinformation influencers by analyzing user behavior across multiple hashtags linked to Russia’s disinformation priorities. This is an approach that can be applied in a range of situations beyond Russian information operations.

______________________________________________________________

Notes/Sources

Note #1 — A cyborg is a Twitter user that appears to use both automated and human activity (which will be discussed in more detail below). A bot is an autonomous program on a network that is designed to respond or post in response to programmed cues online.

Source #1 — https://www.rferl.org/a/ukraine-us-russia-twitter-trolling/25362157.html

The Unpublished Project: Part IV

Canada’s 2019 Federal Election: Assessing the Information Ecosystem

Photo by Towfiqu barbhuiya on Unsplash

Since 2017 we’ve worked on numerous projects that, for reasons of confidentiality, have not been published. We will feature five previously unpublished projects, now edited. Looking back, this is also a reflection on what’s the same, what’s changed, and what we learned. Our goal is to bring some of our past efforts out of the shadows. Part I Part II Part III

________________________________________________________________

The original report was a collaborative confidential project and was delivered in June 2020.

Bottom Line Up Front (BLUF)

Five months before the 2019 Federal Election, 71 percent of Canadians expressed worry that foreign governments would use social media to affect the outcome of the election (see source #1), and 74 percent of Canadians were concerned that the same manipulative tactics would be harnessed by domestic special interest and partisan groups (see source #2).

Despite recording a number of suspicious digital phenomena, the researchers could not confidently attribute any of these events to the operations of a foreign government. This could be due to broader changes in the tactics of foreign actors or a determination by adversarial governments that coordinated interference did not justify the commensurate risks and costs. The growth of counter-disinformation initiatives and public awareness since 2016 has led foreign troll farms and other such entities to turn to more covert and clandestine methods (see source #3). While the researchers have still made attributions under these circumstances (see 2019’s Operation Secondary Infektion), it requires access to corroborating evidence and technical backend data that was not available in this case. Instead, the clearest signs of “foreign” interference come in the evidence of coordinated political trolling by Canadian and U.S. far-right activists.

The report provides a general discussion of mis- and disinformation during the 2019 Federal Election. It examines specific community discussions on Reddit, Pinterest, and Facebook — platforms selected for their political influence and relative lack of study in the Canadian context — as well as amplification by domestic Canadian actors of Russian state propaganda.

______________________________________________________________

Background

2019 was a contentious and polarizing election cycle. The Liberals were forced to form a minority government, becoming the governing party with the lowest share of the popular vote in Canadian history. Voter turnout was lower than it had been in 2015, dropping by 2.6 percent.

This report was informed by periodic surveys of the Canadian information ecosystem before and after the 2019 Federal Election. This initiative was focused on determining the extent of both attributable and suspected foreign influence efforts that targeted the elections process and on better understanding how the domestic information environment contributed to these efforts.

An open-source examination of the Canadian digital landscape demonstrated that negative content targeted parties and party leaders across the political spectrum, but the researchers observed a disproportionate volume of that negative content directed at Trudeau and the incumbent Liberal government. None of the evidence pointed to any other party or party leader’s direct involvement or endorsement of the negative campaigns against Trudeau. On Twitter, anti-Trudeau hashtags such as #TrudeauMustGo greatly exceeded the volume and intensity of hashtags targeting any political figure associated with the other political parties. On Facebook, some far-right extremist groups went so far as to propose fantasy scenarios to assassinate Trudeau (see source #4). On Pinterest, anti-Trudeau messages mixed with virulent anti-immigration and anti-Muslim memes, sown by inauthentic accounts and boosted by Pinterest’s own algorithms. In general, the election witnessed ample cases of viral misinformation and coordinated inauthentic activity.

Selected Cases of Dis- and Misinformation During the 2019 Federal Election

During their survey of the Canadian information ecosystem before and after the 2019 Federal Election, the researchers observed numerous instances of political dis- and misinformation, which lingered outside the regulatory authority of the Canadian government. The vast majority of this content was almost certainly domestic in origin. Beyond the activity of Russian state media broadcasters, the researchers could not make any further attribution.

Keyword trend analysis of Canadian media from January 2019 through the elections suggested the topics that received the most attention during the Canadian elections were Environment, Economy, Indigenous, and Immigration. “Environment” stories were the most popular with about 1,400 stories per day; “Economy” had about 1,060 stories per day; “Indigenous” had almost 800 stories per day; and “Immigration” had roughly 500 stories per day. Although “Immigration” received the least amount of mainstream coverage, it still received many keyword hits on social media. In particular, there were about 33,000 “Immigration”-related mentions on Twitter from January 1, 2019 to October 22, 2019. About 14 percent of these mentions were deemed to be negative, while out of 18,000 mentions of “Economy,” about 10 percent were negative. While sentiment analysis is not a perfect tool, it does offer insight into the makeup of the Canadian information ecosystem.
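The negative-share tally described above can be sketched minimally as follows. The mention data and sentiment labels are invented for illustration; the report’s actual sentiment tool and labeling pipeline are not specified:

```python
# Toy mention stream: (keyword, sentiment label) pairs. In the real
# analysis these labels would come from a sentiment model, not by hand.
mentions = [
    ("Immigration", "negative"), ("Immigration", "neutral"),
    ("Immigration", "neutral"), ("Economy", "neutral"),
    ("Economy", "negative"), ("Economy", "positive"),
]

def negative_share(mentions, keyword):
    """Fraction of mentions of `keyword` labeled negative."""
    total = sum(1 for k, _ in mentions if k == keyword)
    neg = sum(1 for k, s in mentions if k == keyword and s == "negative")
    return neg / total if total else 0.0

for kw in ("Immigration", "Economy"):
    print(kw, round(negative_share(mentions, kw), 2))
```

At scale, the same ratio over ~33,000 “Immigration” mentions yields the roughly 14 percent negative share reported above.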

Over the course of the election, the issue most frequently associated with mis- and disinformation was immigration. In particular, refugee quotas remained a significant wedge issue during the election and a favorite talking point for Twitter trolls, who spread false stories about immigrant crime rates and who frequently engaged in anti-immigrant and anti-Muslim hate speech (see source #5). As the election neared, public polling found that 63 percent of Canadians believed that the nation should lower its immigration quotas, as it was reaching a “limit” in how it could integrate them. This was a marked shift from 2014, in which only 36 percent of Canadians had expressed a desire to decrease immigration numbers (see source #6).

Interestingly, actions taken by the Canadian government to safeguard and improve election processes also sowed the seeds of viral misinformation. The Elections Modernization Act (EMA) was a frequent target. False stories alleged that the bill would allow non-resident Canadians to vote in the general elections — an obvious distortion of the bill’s restoration of voting rights to overseas citizens (see source #7).

Over the course of its analysis, the researchers focused on two case studies. The first regards the interchange of virulent, anti-immigrant hate speech over multiple platforms and online communities. The second regards the opportunism shown by Russian state media in its Canadian election coverage. These cases, evidencing coordinated trolling around nativist rhetoric and amplifying domestic political scandal by foreign media, most resembled the Russian influence operations conducted against the United States in 2016.

Overlapping Anti-Immigrant Narratives and Communities

As online discourse increasingly revolved around immigration during the 2019 Canadian Federal Election, some far-right communities became “echo forums,” which are dedicated to a single topic of discussion and which endeavor to speak with a single ideological voice. This behavior was especially evident on Reddit through the rapid growth of r/MetaCanada, a subreddit founded in 2011 and initially featuring general, off-color humor before coming to focus exclusively on nativist posts and memes.

In time, r/MetaCanada also appeared to associate closely with r/The_Donald, a Reddit community that became the locus of digital influence efforts for then-candidate Donald Trump in 2015 and 2016. After the 2016 U.S. election, r/MetaCanada’s membership rose sharply, from 6,500 in November 2016 to 31,000 by October 2019 (see source #8). r/MetaCanada’s tone — racist, misogynistic, and Islamophobic — came to match that of r/The_Donald closely. Like r/The_Donald, r/MetaCanada became a gathering place for far-right troll mobilization as election day drew near.

A post that repurposed the vulgar language used by the U.S. president when he disparaged Haiti, El Salvador, and African countries while questioning immigration from them to the United States. The post on r/MetaCanada took the U.S. language and added a Canadian spin.

The use of “shithole” by U.S. President Donald Trump is echoed back in MetaCanada but from a Canadian perspective. (Source: Reddit.com/MetaCanada)

Using an open-source tool to analyze commenting activity in r/MetaCanada, the DFRLab found significant user overlap with majority-American subreddits: r/HillaryForPrison, r/DrainTheSwamp, r/TuckerCarlson, r/DebateAltRight, and, of course, r/The_Donald. The existence of a common user base demonstrates strong ideological cross-germination between Canadian and American far-right communities. It also suggests that right-wing U.S. political activists engaged directly in attempts to influence the Canadian elections, just as they engaged in the 2017 French elections on behalf of the candidacy of far-right presidential candidate Marine Le Pen (see source #9).


The visualization shows how r/MetaCanada is a potentially similar match to r/The_Donald. The algorithm looks for similar users in the subreddits. (Source: Anavka.github.io/MetaCanada)
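The user-overlap measure can be approximated with Jaccard similarity over sets of commenter usernames. The usernames and subreddit rosters below are invented for illustration, and the actual open-source tool’s algorithm is not specified in the report:

```python
# Jaccard similarity: shared commenters / all commenters across two subs.
def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

# Invented commenter rosters; real rosters would be scraped per subreddit.
commenters = {
    "MetaCanada": {"u1", "u2", "u3", "u4"},
    "The_Donald": {"u2", "u3", "u4", "u5"},
    "aww":        {"u6", "u7"},
}

base = commenters["MetaCanada"]
overlaps = {sub: jaccard(base, users)
            for sub, users in commenters.items() if sub != "MetaCanada"}

# Rank other subreddits by shared commenter base with r/MetaCanada.
for sub, score in sorted(overlaps.items(), key=lambda kv: -kv[1]):
    print(sub, round(score, 2))
```

A high score for a majority-American subreddit is the kind of signal that supports the cross-germination claim above; a near-zero score (as for the generic community here) is the expected baseline.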

Furthermore, keyword analysis of r/MetaCanada found that “refugee” and “blackface” (a reference to Trudeau’s September 2019 blackface scandal) were two of the most commonly used terms of discussion. This suggests a clear interest in immigration policy and Canadian electoral politics. It lends credence to r/MetaCanada as a hub for political trolling, a departure from its original goal to provide a home for “sardonic humour, revisionist histories, memes, speculative fiction, and satire to analyze and undercut prevailing, dominant attitudes and misconceptions about Canadian life and politics.”

A website that seemed to be dedicated to the MetaCanada movement, explaining what it means and asking for contributions. (Source: MetaCanada.com)

However, the vocalization of anti-immigrant and Islamophobic sentiments was not limited to echo forums. These views were expressed through memes, messages, videos, and other content formats that spread across numerous social media platforms — including ones that did not typically host political content. For instance, on Pinterest, clusters of anti-Trudeau memes were automatically grouped alongside other racist, bigoted content, thanks to the power of the Pinterest recommendations algorithm (see source #10). If users briefly explored this galaxy of anti-Trudeau content, they were steered toward other memes that assailed American politicians like Hillary Clinton and Alexandria Ocasio-Cortez, as well as those that attacked the #MeToo movement (see source #11).

Pinterest’s recommendation pathway via the Pinterest content algorithm, leading a user from an anti-Trudeau meme to an anti-Hillary meme to an anti-#MeToo meme to an anti-Alexandria Ocasio-Cortez meme. (Source: DFRLab)

Where Pinterest’s algorithms passively pushed users toward increasingly virulent political content, Twitter’s algorithms — long the focus of far-right troll brigades — were actively gamed and manipulated in the months leading up to the 2019 Federal Election. In one case in September 2019, Canadian and U.S. political activists coordinated in order to amplify the hashtag #TrudeauMustGo until it trended internationally, helping elicit roughly 34,000 tweets from approximately 5,000 accounts (see source #12).

Disinformation researchers reported the activity to Twitter, alleging that the hashtag campaign showed evidence of automation and inauthentic coordinated activity. Twitter’s policy team replied, noting that this was coordination between human activists and therefore permitted under Twitter’s terms of service. All the while, Canadian citizens and journalists who saw the trending hashtag were left with the impression that this was an organic expression by Canadian voters (see source #13).
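One crude signal of the coordination described above is hashtag volume per participating account, using the figures reported (roughly 34,000 tweets from about 5,000 accounts). The organic baseline below is an assumption for illustration, not a threshold Twitter or the researchers actually applied:

```python
# Average hashtag tweets contributed per participating account.
def tweets_per_account(total_tweets, total_accounts):
    return total_tweets / total_accounts

campaign = tweets_per_account(34_000, 5_000)  # #TrudeauMustGo figures above
ORGANIC_BASELINE = 1.5                        # assumed typical organic rate

print(round(campaign, 1))                # ~6.8 tweets per account
print(campaign > ORGANIC_BASELINE)       # elevated relative to assumption
```

A ratio several times an organic baseline is consistent with a small, highly active group driving a trend, though on its own it cannot distinguish automation from coordinated human activists, which was exactly Twitter’s point.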

Opportunism by Russian State Broadcasters

In the United States, RT and Sputnik International are registered as foreign agents under the Foreign Agents Registration Act (FARA), a law that requires agents representing the interests of a foreign government to disclose information about their activities and finances in the interest of transparency. Canada lacks such a law.

When the Trudeau blackface scandal erupted in September 2019, the news went instantly viral, becoming one of the biggest political headlines in the final weeks before the election (see source #14). The reporting was generally accurate and balanced, including reporting by international media and foreign state broadcasters. By contrast, Russian state media leaned heavily into editorialization. One RT headline, “The many faces of Justin Trudeau: Canadian PM memed mercilessly after brownface debacle,” walked a thin line between international and partisan reporting (see source #15). It appeared to be the latest move in a concerted anti-Trudeau editorial campaign that had gained steam since RT had named Trudeau a year earlier to its list of “Top 10 Russophobes of 2018” (see source #16).

The RT headline was cross-pollinated across several social media platforms, as well as multiple Reddit communities. (Source: Reddit)

In another instance of sensationally slanted coverage, Sputnik International published a story about Alberta separatism — “Birth of the Republic of Western Canada is a Cry of Our Heart — Wexit Alberta Founder” — on October 20, 2019, one day before the federal election (see source #17). While the story was ostensibly a profile of a pro-secession Albertan community leader, it focused almost exclusively on the alleged failings of the Liberal government. It used coded language — Trudeau’s “globalist” agenda, Trudeau’s climate change “rhetoric” — popular among far-right political activists. This was a transparent attempt to circumvent political advertising restrictions imposed by the EMA on foreign media outlets.

As a final example, on October 22, 2019, RT published its first post-election article: “Losing majority with hysterical dignity? Trudeau’s ‘victory speech’ turns into scandal, as he jumps on stage interrupting rival” (see source #18). As with most Russian coverage of Canadian elections, the headline was an example of extreme anti-Trudeau, anti-Liberal editorialization. The “scandal” allegation, in this case, was extrapolated from a quote from a Global News anchor, who called Trudeau’s interruption of Scheer’s speech “unprecedented.” U.S. fringe media outlet InfoWars took things a step further, writing an even more hyperbolic story based on the “unprecedented” reference.

InfoWars embedded the Global News tweet in order to legitimize its reporting, all the while flooding the narrative with further hyperbole. (Source: Infowars)

That rhetoric was picked up by CTV News, too. On Facebook, the outlet shared an article with the following title: “‘Losing majority with hysterical dignity? Trudeau’s “victory speech” turns into scandal, as he jumps on stage interrupting rival,’ reads an international headline.” In so doing, CTV News was likely aiming for the maximum possible audience engagement. In the process, however, it pushed RT’s coverage further into the mainstream, ensuring a large and new readership was exposed to the Russian broadcaster’s editorial positions.

This sort of inadvertent information laundering — a foreign state media headline, repackaged by alternative media and subsequently amplified by mainstream broadcasters — has become endemic in cases of foreign influence. The process typically functions without the need for active coordination. Rather, it works through the complementary incentives of the foreign state media (which seeks to share its content free of cost) and the alternative media (which seeks to make money by way of contrarian or conspiratorial content). When a mainstream outlet subsequently covers or shares the alternative media story, it is the foreign state media that benefits most as it watches its seeds bear fruit.

Unintentional spread of the information that accelerated Russian talking points on Canadian national affairs. (Source: CTVNews/Facebook)

Compared to other actors in the 2019 Federal Election — notably large, unregulated partisan Facebook groups — Russian state broadcasters ultimately commanded only a small amount of direct digital influence. But the gamesmanship and clever marketing of propagandists show how readily they can adapt to new regulations or exploit traditional broadcasters in unconventional ways. Should the diplomatic relationship between Canada and Russia grow more contentious, these Russian influence efforts will likely grow more aggressive, resourced, and sophisticated.

___________________________________________________________

Reflection

In retrospect, having a broader scope of work to include China’s posture could have been invaluable.

Russian interference in the 2016 U.S. election demonstrated that information operations begin well in advance of the election cycle and do not end when the last votes are tallied. Today, such foreign influence efforts continue to grow in sophistication, joined by a rising tide of domestic social media manipulation that utilizes many of the same tools and tactics.

We said this in 2020, and in 2023 we still believe these are a few steps that will help safeguard Canada’s democracy in the future:

  • Increase funding commitment to securing the online landscape. The 2019 Federal budget earmarked $2.1 million CAD over three years to support the G7 Rapid Response Mechanism. Given Canada’s role in the defense of digital democracy, this level of funding is insufficient to meet Canada’s growing multilateral obligations.
  • Consider new laws to designate foreign agents, particularly during elections. An electoral democracy must take great care before considering any measures that might proscribe or limit journalistic activity — even if that activity is conducted by agents of an adversarial foreign power. However, it is the case that Russian (and Chinese) state media in particular has begun to use its coverage of Canadian politics toward aggressive ends. It is also the case that the United States — with strong, constitutionally enshrined journalistic protections — has nonetheless had a system in place for decades to designate foreign agents. Canada should consider adopting a similar model.
  • Revisit the definition of “foreign interference.” Canada has established, and successfully tested, a number of government initiatives intended to mitigate foreign influence efforts by state actors. In the case of the 2019 Federal Election, however, the clearest indications of “foreign” interference came in political trolling coordinated between far-right U.S. and Canadian political activists. Do such nonstate, transnational influence efforts fall within the remit of the Ministry of Democratic Institutions or even the Critical Election Incident Public Protocol? As the nature of foreign influence continues to evolve, so must governmental definitions and procedures.

Just as the tactics of influence operations and social media manipulation are continually evolving, so must appropriate government responses.

_____________________________________________________________

Sources

#1 #2 Insights West, “Canadians Alarmed Over the Influence of Social Media in Upcoming Elections,” May 10, 2019, https://www.insightswest.com/news/canadians-alarmed-over-the-influence-of-social-media-in-upcoming-2019-elections/

#3 Darren Linvill, Patrick Warren, “Russian Trolls Can Be Surprisingly Subtle and Often Fun to Read,” Washington Post, March 8, 2019, https://www.washingtonpost.com/outlook/russian-trolls-can-be-surprisingly-subtle-and-often-fun-to-read/2019/03/08/677f8ec2-413c11e9-9361-301ffb5bd5e6_story.html

#4 https://election.ctvnews.ca/truth-tracker-how-does-anti-scheer-sentiment-stack-up-against-anti-trudeau-talk-online-1.4643010 and Patrick Cain, Jeff Semple, “Closed Facebook Groups Where Extremists Thrive ‘Would Curl Your Innards,’ Expert Says,” Global News, October 29, 2019, https://globalnews.ca/news/6091196/facebook-investigating-19000-member-anti-muslim-group

#5 Teresa Wright, “Majority of Canadians Think Immigration Should be Limited: Poll,” Global News June 16, 2019, https://globalnews.ca/news/5397306/canada-immigration-poll/

#6 https://www.cbc.ca/news/politics/canadians-favour-limiting-immigration-1.5177814

#7 Kaleigh Rogers, Andrea Bellemare, “Misinformation Circulating Online Stokes Fears of Voter Fraud Ahead of Federal Election,” CBC News, August 30, 2019, https://www.cbc.ca/news/technology/voter-fraud-confusion-misinformation-1.5264689

#8 https://subredditstats.com/r/metacanada

#9 Nicholas Vinocur, “Marine Le Pen’s Internet Army,” Politico, February 3, 2017, https://www.politico.eu/article/marine-le-pensinternet-army-far-right-trolls-social-media/

#10 #11 Kanishk Karan, John Gray, “Trudeaus and Trudeaun’ts — memes polarize in Canadian elections,” DFRLab, November 19, 2019, https://medium.com/dfrlab/trudeaus-and-trudeaunts-memes-have-an-impact-during-canadian-elections-4c842574dedc

#12 Nicole Bogart, “Truth Tracker: Are Bots Amplifying #TrudeauMustGo? Twitter Says No,” CTV News, September 26, 2019, https://election.ctvnews.ca/truth-tracker-are-bots-amplifying-trudeaumustgo-twitter-says-no-1.4612390

#13 Elizabeth Dubois, Anatoliy Gruzd, Jenna Jacobson, “When Journalists Report Social Media as Public Opinion,” Policy Options, September 28, 2018 https://policyoptions.irpp.org/magazines/september-2018/when-journalists-report-social-media-as-publicopinion/

#14 Anna Purna Kambhampaty, Madeleine Carlisle, Melissa Chan, “Justin Trudeau Wore Brownface at 2001 ‘Arabian Nights’ Party While He Taught at a Private School,” Time, September 19, 2019, https://time.com/5680759/justin-trudeau-brownface-photo/

#15 “The many faces of Justin Trudeau: Canadian PM memed mercilessly after brownface debacle,” RT, September 19, 2019 https://www.rt.com/news/469123-trudeau-brownface-scandal-memes/

#16 “Top 10 Russophobes of 2018: See who made RT’s prestigious list this year,” RT, October 16, 2018, https://www.rt.com/news/441417-top-10-russophobes-2018/

#17 Denis Bolotsky, “Birth of the Republic of Western Canada is a Cry of Our Heart — Wexit Alberta Founder,” Sputnik News, October 20, 2019 https://sputniknews.com/world/201910201077102166-birth-of-the-republic-of-western-canada-is-a-cry-of-our-heart/

#18 “Losing majority with hysterical dignity? Trudeau’s ‘victory speech’ turns into scandal, as he jumps on stage interrupting rival,” RT, October 22, 2019, https://www.rt.com/news/471506-canada-trudeau-election-minority/

#19 Tiffany Hsu, Ian Austen, “Canada Says Facebook Broke Privacy Laws With ‘Superficial’ Safeguards,” New York Times, April 25, 2019, https://www.nytimes.com/2019/04/25/technology/facebook-canada-privacy.html

________________________________________________________________

Contact admin@mentionmapp.com to discuss our contract threat intelligence research, analysis, and reporting. Our focus is on disinformation, misinformation, and influence operation threats, risks, and vulnerabilities.