Misinformation

Information Operations: Berlin Bound

via Pixabay

“Facts do not cease to exist because they are ignored.” 
― Aldous Huxley, Complete Essays 2, 1926–29

Manipulating information, media, and news has always been a human problem. Since the first printed words (and probably before), people with good and bad intentions have practiced the art and science of bending perception and twisting opinion.

“The fake news hit Trent, Italy, on Easter Sunday, 1475.” (Politico)

“In 1772 the Reverend Henry Bate (he was chaplain to Lord Lyttleton) founded The Morning Post, a newspaper that piled paragraph upon paragraph, each one a separate snippet of news, much of it fake.” (New York Review of Books)

I’ve spent a lot of time, especially these past two years, thinking about how information operates and how information operatives are effectively distorting realities. I’m packed (both physically, with luggage, and metaphysically, with a brain full of questions) and heading to Berlin for two days of the 360/OS Conference.

“At 360/OS our cross-sectoral network of “Digital Sherlocks” will create and cultivate techniques needed to expose falsehoods and fake news, document human rights abuses, and report the actuality of global events in real-time.”

Visiting Berlin for the first time, it’s easy as a boomer (born at the height of the Cuban Missile Crisis) to think about Berlin in the context of the Cold War. But I don’t see Berlin in a John le Carré sense. Rather, I think about Germany in terms of Gutenberg’s impact, or how philosophers like Schopenhauer and Nietzsche and the Bauhaus design and architecture movement have influenced me.

I’m also aware of being in the shadow of arguably one of the most engaged information operators, but I have to recognize that Russia isn’t the only state player in the game. I also acknowledge it’s trite to suggest there are no simple solutions to complex human problems. It’s all about the questions we ask. I’ll be framing my questions around these ideas:

  • Technology alone isn’t the solution to the complexities of disinformation.
  • Tools create what users intend. We need to design for intentionality.
  • We need to see these issues in a more qualitative context than solely a quantitative one.
  • It takes a network to defeat a network.

I’ll be asking a few people at the conference:

  • Why are you doing this?
  • What keeps you awake?
  • What helps you sleep?
  • What’s one thing everyday citizens need to think about?

Here are a few quick hits that have given me valuable ideas to think about. For fiction, instead of John le Carré, give Ian McEwan’s The Innocent a read.

Consider what we can do to elevate the idea, conversation, and implementation of Cognitive Security. Read Rand Waltzman’s “The Weaponization of Information.”

Add to your reading list:

The Square and the Tower by Niall Ferguson — “a restless tour through power.”

Alexander Klimburg’s The Darkening Web

Because my core interest is network visualization and analytics, I’ll recommend watching this video:

Bletchley Park: Code-breaking’s Forgotten Genius

“Gordon Welchman was a World War II codebreaking hero, without him the top secret German Enigma codes might never have been broken. The war as a result may have lasted a further two extra years and tens of thousands more would have died. Gordon Welchman should be famous, his contribution to the war was as great as Alan Turing’s so why is it that we have never heard of him?”

https://www.youtube.com/watch?v=4CSD-jqbg2Y

“The longer you can look back, the farther you can look forward” — Winston Churchill

I’ll look forward to sharing my next dispatch from Berlin.

___________________________________________________________________

To help you see through the complexities of this rapidly evolving landscape, we’ve written the four-volume eBook series, Ecosystem of Fake: Bots, Information & Distorted Realities. We invite you to learn more about today’s information battlefield, proposed solutions, and further reading resources. Let’s work towards making cyberspace a more human place.

Sincerely

John (CEO & Co-founder)

Mentionmapp Analytics

Mentionmapp Investigates: See if your social reputation is at risk or explore collaborative research opportunities with us. Contact: john [at] mentionmapp [dot] com

Crooked Crypto? The Hip Hop Bots.

Images via Pixabay

Letter from Mark Twain to a snake oil peddler: “You, sir, are the scion of an ancestral procession of idiots stretching back to the Missing Link”

Seeing news about today’s virtual currency gold rush is nearly unavoidable.

Undoubtedly there are people making boatloads of real money in the space, just as some have probably lost their life savings. Somewhere in between there’s a bunch of people saying, “WTF, explain this witchcraft to me again. I still don’t get it.”

Add Blockchain to the cryptocurrency conversation and the confusion factor increases. I first wrote about this subject and a Canadian startup building a virtual currency exchange in 2013 (I imagine they could be happy about not being in business today, considering the above WSJ headline), and I’m convinced that separating the fakes from the facts is more important than ever.

We started exploring the hashtag #blockchain in October 2017. Our curiosity led us from one tweet, to one profile, to a bunch of questions. We’re now able to highlight a pattern of questionable behavior that seems to be inflating an image of expertise, with a network of Hip Hop Bots pumping out the volume.

The one tweet:

The one profile:

The questions: Why is this guy retweeting that? For all this tweeting, where are all of his tweets about music and the music business? Why the curious ratio of tweet volume to likes? Why is he following almost double the number of profiles that follow him? And who else retweeted this one tweet for Anthony Schmitt Financial Services?

Here’s who:

Shared music interests, plus 8 of these 9 profiles are following nearly the identical number of profiles and show similar tweet-to-like ratios. We noted these profiles with interest in October, and knew this was worth investigating further.
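For the technically inclined: this kind of lockstep similarity is easy to screen for in code. Below is a minimal sketch of one way to do it, assuming the profile metrics have already been pulled into plain Python dicts; the handles, numbers, and 5% tolerance are illustrative assumptions, not our actual data or pipeline.

```python
from itertools import combinations

# Illustrative profile records: handles, counts, and fields are hypothetical,
# not pulled from Twitter's API or from our actual dataset.
profiles = [
    {"handle": "bot_a", "following": 4980, "followers": 2510, "tweets": 41200, "likes": 40100},
    {"handle": "bot_b", "following": 4991, "followers": 2488, "tweets": 39800, "likes": 39050},
    {"handle": "human_c", "following": 310, "followers": 780, "tweets": 5200, "likes": 1900},
]

def similar(a, b, tolerance=0.05):
    """True if two values are within `tolerance` (here 5%) of each other."""
    return abs(a - b) <= tolerance * max(a, b, 1)

def coordination_pairs(profiles):
    """Yield pairs of handles with near-identical following counts and tweet-to-like ratios."""
    for p, q in combinations(profiles, 2):
        ratio_p = p["tweets"] / max(p["likes"], 1)
        ratio_q = q["tweets"] / max(q["likes"], 1)
        if similar(p["following"], q["following"]) and similar(ratio_p, ratio_q):
            yield p["handle"], q["handle"]

for pair in coordination_pairs(profiles):
    print("near-identical metrics:", pair)  # prints ('bot_a', 'bot_b')
```

A pair flagged this way isn’t proof of automation on its own; it’s simply a cue that two profiles deserve the kind of human review described throughout this piece.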

We also looked closer at @BourseetTrading, and characterize this as a profile making lots of noise but lacking detail — no actual website; not much evidence of a real Anthony Schmitt (camera shy; we’ve not investigated the veracity of his LinkedIn endorsements); no calls to action or invitations to do business with him; no white papers, original content, or guest expert articles; no evidence of public speaking engagements. All of this leaves us asking: what’s Anthony Schmitt’s deal?

Reviewing more tweets like these we also see a consistent ratio pattern of retweets to likes:

All of this leads to uncovering another 14 profiles, all fitting the Hip Hop Bot “personality,” like —

and there’s @WALKIN_ZAN, @scoolascoola4, @v_nise215, @Ronvista_, @WatchEm_FoldUp, @JohsonTrey, @PointBlankOCR, @SwiftcoEntInc, @ALWAYZ3000, @iAm_MrPearson

From these 23 profiles we collected every profile mentioned and hashtag used in their previous 200 tweets. Up to November 6, 2017, our data points to @BourseetTrading being the Hip Hop Bots’ best customer, as 20 of them retweeted him a total of 435 times (the next most active client was @VeganYogaDude, with 21 bots generating 126 retweets).
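For readers who want to try something similar, here is a rough sketch of how mentions and hashtags might be tallied from a profile’s recent tweets. It assumes the Tweepy library (3.x) and valid Twitter API credentials; the placeholder keys and the two handles are only examples, and this is not necessarily the tooling we used.

```python
from collections import Counter

import tweepy  # assumes Tweepy 3.x and valid Twitter API credentials

# Placeholder credentials: substitute real keys before running.
auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")
api = tweepy.API(auth, wait_on_rate_limit=True)

def mentions_and_hashtags(screen_name, count=200):
    """Tally every @mention and #hashtag in a profile's most recent tweets."""
    mentions, hashtags = Counter(), Counter()
    for tweet in api.user_timeline(screen_name=screen_name, count=count, tweet_mode="extended"):
        for user in tweet.entities.get("user_mentions", []):
            mentions[user["screen_name"]] += 1
        for tag in tweet.entities.get("hashtags", []):
            hashtags[tag["text"].lower()] += 1
    return mentions, hashtags

# Aggregate hashtags across a couple of the suspected profiles (example only).
totals = Counter()
for handle in ["WALKIN_ZAN", "scoolascoola4"]:
    _, tags = mentions_and_hashtags(handle)
    totals.update(tags)
print(totals.most_common(10))
```

Summing the counters across all of a network’s profiles is what surfaces the kind of hashtag-frequency and best-customer patterns reported below.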

A small sample of the tweet content and volume:

*We have more tweets for each profile archived.

We noted a number of global brands mentioned by @BourseetTrading and subsequently by the Hip Hop Bots. We can’t speculate about the impact or potential risks these other brands and businesses have been exposed to by being connected with this information flow. It’s fair to consider all of this as empty engagement metrics that have been skewed by non-human interference.

From the overall patterns of behavior, there’s little room to debate that Anthony Schmitt Financial Services was using the Hip Hop Bot network to significantly amplify his tweets and falsely inflate his overall engagement metrics.

The Hip Hop Bots were also pumping up the hashtag volume, with the 23 of them averaging 357 hashtags per 200 tweets. These were the noteworthy hashtags for this research:

  • #MakeYourOwnLane: used 3,580 times
  • #Defstar5: used 3,465 times
  • #Mpgvip: used 2,684 times
  • #Fintech: used 999 times
  • #AI: used 726 times
  • #Blockchain: used 610 times
  • #BigData: used 521 times
  • #Bitcoin: used 338 times
  • #Cryptocurrency: used 269 times
  • #Ethereum: used 144 times
  • #Banking: used 99 times
  • #HipHop: used 42 times

Seems like a pattern and volume of hashtags promising days of unicorns, rainbows, pots of gold, but not much time for dancing.

Two other profiles, whose volume pales in comparison to @BourseetTrading’s, are also connected with the crypto conversation via the Hip Hop Bots:

11 bots retweeted @kryondo4 55 times.

11 bots retweeted him 21 times.

Of the 23 bots in this network, 20 have essentially gone silent since November 6th. The three active bots are retweeting as a service for a collection of “experts” and organizations who have no connection to the business of Hip Hop. All warrant future public exposure for their complicity in the ecosystem of fake.

We also see value in researching conversations that aren’t all about the Russian Bot narrative. In this regard, the hashtag #blockchain is likely to be a gift that keeps on giving, as the amount of programmatic, bot-like behavior keeps leading us to a space that’s a vibrant ecosystem of fake. These two snapshots highlight how the bot-like profiles outweigh the non-bot-like ones:

Red icons indicate bot-like behavior patterns.

In terms of separating the fakes from the facts in crypto-space, we’re working on a follow-up to this research. The next report will feature the shady online behavior of a company that positions itself as “a decentralized cloud computing blockchain platform.”

Instead of finding a good quote to close with, this tweet says it all!

https://twitter.com/rlucas/status/955924604396093440


Find Out Why Social Bots Are So Dangerous. The Complete 4-Volume eBook Is Available Now on Patreon Only.

While 2017 is behind us, many of the past year’s troubling themes are not. We’ve seen investigations into Russia’s interference in the US Presidential Election unfold, CEOs of digital platforms questioned about how they’re contributing to the information crisis, and media outlets and information itself deemed untrustworthy. With few solutions in sight, 2018 is likely to see more of the same.

To help you see through the complexities of this rapidly evolving landscape, we’ve written the four-volume eBook series, Ecosystem of Fake: Bots, Information & Distorted Realities. We invite you to learn more about today’s information battlefield, proposed solutions, and further reading resources. Let’s work towards making cyberspace a more human place.

Sincerely

John (CEO & Co-founder)

Mentionmapp Analytics

Unraveling Truth while Upholding Fake

via Pixabay

“Facts do not cease to exist because they are ignored.” 
― Aldous Huxley, Complete Essays 2, 1926–29

Post-Truth was chosen as Oxford Dictionaries’ Word of the Year for 2016. The adjective is defined as ‘relating to or denoting circumstances in which objective facts are less influential in shaping public opinion than appeals to emotion and personal belief’.

It feels like the specter of Post-Truth has been looming over us for all of 2017. With every Twitter hashtag we’ve researched looking for SocialBot and sockpuppet activity, the fingerprints of fake seem to mark every conversation.

It’s as if fake is an eraser rubbing out truth. Separating who or what is real from all that’s fake in cyberspace is an act teetering on the edge of impossibility. We’ve documented too many Twitter profiles that are not who or what they claim to be not to harbor deep concerns about the state of online global discourse.

Our research this year hasn’t been limited to exploring the relationship between Bots and the spread of misinformation on Twitter. Connecting a broad range of subjects from diverse sources has been instrumental in building our body of knowledge about media, democracy, communication technologies, cognitive science, cyber-security, and information warfare. We connect our reading and conversations into our thinking about the complex relationships between citizen and society, between trust and blind faith, between truth and obfuscation.

Somewhere between not buying 100% into Nietzsche’s position that “there are no facts, only interpretations,” and having equal reticence about the existence of absolute truths, we see merit in Carl Bernstein’s notion of the “pursuit of the best obtainable version of the truth.”

With great interest, our ongoing pursuit led us to The Future of Truth and Misinformation Online from Pew Research. Authors Janna Anderson and Lee Rainie preface the article noting:

When BBC Future Now interviewed a panel of 50 experts in early 2017 about the “grand challenges we face in the 21st century” many named the breakdown of trusted information sources. “The major new challenge in reporting news is the new shape of truth,” said Kevin Kelly, co-founder of Wired magazine. “Truth is no longer dictated by authorities, but is networked by peers. For every fact there is a counter-fact and all these counter-facts and facts look identical online, which is confusing to most people.”

We wanted to highlight a small selection (from the many) of insightful and thoughtful perspectives, and encourage you to read the Pew research article in its entirety.

Framing the research:

The question: In the next 10 years, will trusted methods emerge to block false narratives and allow the most accurate information to prevail in the overall information ecosystem? Or will the quality and veracity of information online deteriorate due to the spread of unreliable, sometimes even dangerous, socially destabilizing ideas?

Some 1,116 responded to this nonscientific canvassing: 51% chose the option that the information environment will not improve, and 49% said the information environment will improve.

More specifically, the 51% of these experts who expect things will not improve generally cited two reasons:

The fake news ecosystem preys on some of our deepest human instincts: Respondents said humans’ primal quest for success and power — their “survival” instinct — will continue to degrade the online information environment in the next decade. They predicted that manipulative actors will use new digital tools to take advantage of humans’ inbred preference for comfort and convenience and their craving for the answers they find in reinforcing echo chambers.

Our brains are not wired to contend with the pace of technological change: These respondents said the rising speed, reach and efficiencies of the internet and emerging online applications will magnify these human tendencies and that technology-based solutions will not be able to overcome them. They predicted a future information landscape in which fake information crowds out reliable information. Some even foresaw a world in which widespread information scams and mass manipulation cause broad swathes of public to simply give up on being informed participants in civic life.


We fall into the 51% school of thought, acknowledging the complexity of the problem and harboring no naivety about near-term solutions, while remaining hopeful we can find ways to reclaim cyberspace as a human place.

“There’s a world of difference between truth and facts. Facts can obscure truth.” ― Maya Angelou


Here is a selection of comments from the survey experts:

Tom Rosenstiel, author, director of the American Press Institute and senior fellow at the Brookings Institution, commented, “Whatever changes platform companies make, and whatever innovations fact checkers and other journalists put in place, those who want to deceive will adapt to them. Misinformation is not like a plumbing problem you fix. It is a social condition, like crime, that you must constantly monitor and adjust to. Since as far back as the era of radio and before, as Winston Churchill said, ‘A lie can go around the world before the truth gets its pants on.’”

While propaganda and the manipulation of the public via falsehoods is a tactic as old as the human race, many of these experts predicted that the speed, reach and low cost of online communication plus continuously emerging innovations will magnify the threat level significantly.

A professor at a Washington, D.C.-area university said, “It is nearly impossible to implement solutions at scale — the attack surface is too large to be defended successfully.”

Serge Marelli, an IT professional who works on and with the Net, wrote, “As a group, humans are ‘stupid.’ It is ‘group mind’ or a ‘group phenomenon’ or, as George Carlin said, ‘Never underestimate the power of stupid people in large groups.’ Then, you have Kierkegaard, who said, ‘People demand freedom of speech as a compensation for the freedom of thought which they seldom use.’ And finally, Euripides said, ‘Talk sense to a fool and he calls you foolish.’”

A large number of respondents said the most highly motivated actors, including those in the worlds of business and politics, are generally not motivated to “fix” the proliferation of misinformation. Those players will be a key driver in the worsening of the information environment in the coming years and/or in the lack of any serious attempts to effectively mitigate the problem.

Alex “Sandy” Pentland, member of the U.S. National Academy of Engineering and the World Economic Forum, commented, “We know how to dramatically improve the situation, based on studies of political and similar predictions. What we don’t know is how to make it a thriving business. The current [information] models are driven by clickbait, and that is not the foundation of a sustainable economic model.”

A vice president for public policy at one of the world’s foremost entertainment and media companies commented, “The small number of dominant online platforms do not have the skills or ethical center in place to build responsible systems, technical or procedural. They eschew accountability for the impact of their inventions on society and have not developed any of the principles or practices that can deal with the complex issues. They are like biomedical or nuclear technology firms absent any ethics rules or ethics training or philosophy. Worse, their active philosophy is that assessing and responding to likely or potential negative impacts of their inventions is both not theirs to do and even shouldn’t be done.”

Many respondents expressed concerns about how people’s struggles to find and apply accurate information contribute to a larger social and political problem: There is a growing deficit in commonly accepted facts or some sort of cultural “common ground.”

A share of respondents said a lack of commonly shared knowledge leads many in society to doubt the reliability of everything, causing them to simply drop out of civic participation, depleting the number of active and informed citizens.

Kenneth Sherrill, professor emeritus of political science at Hunter College, City University of New York, predicted, “Disseminating false rumors and reports will become easier. The proliferation of sources will increase the number of people who don’t know who or what they trust. These people will drop out of the normal flow of information. Participation will decline as more and more citizens become unwilling/unable to figure out which information sources are reliable.”

What is truth? What is a fact? Who gets to decide? And can most people agree to trust anything as “common knowledge”? A number of respondents challenged the idea that any individuals, groups or technology systems could or should “rate” information as credible, factual, true or not.

Alejandro Pisanty, a professor at UNAM, the National University of Mexico, and longtime internet policy leader, observed, “Overall, at least a part of society will value trusted information and find ways to keep a set of curated, quality information resources. This will use a combination of organizational and technological tools but above all, will require a sharpened sense of good judgment and access to diverse, including rivalrous, sources. Outside this, chaos will reign.”

Many who see little hope for improvement of the information environment said technology will not save society from distortions, half-truths, lies and weaponized narratives.

Paul N. Edwards, Perry Fellow in International Security at Stanford University, commented, “Many excellent methods will be developed to improve the information environment, but the history of online systems shows that bad actors can and will always find ways around them.”

Many of those who expect no improvement of the information environment said those who wish to spread misinformation are highly motivated to use innovative tricks to stay ahead of the methods meant to stop them. They said certain actors in government, business and other individuals with propaganda agendas are highly driven to make technology work in their favor in the spread of misinformation, and there will continue to be more of them.

John Markoff, retired journalist and former technology reporter for The New York Times, said, “I am extremely skeptical about improvements related to verification without a solution to the challenge of anonymity on the internet. I also don’t believe there will be a solution to the anonymity problem in the near future.”

Jason Hong, associate professor at the School of Computer Science at Carnegie Mellon University, said, “Some fake information will be detectable and blockable, but the vast majority won’t. The problem is that it’s *still* very hard for computer systems to analyze text, find assertions made in the text and crosscheck them. There’s also the issue of subtle nuances or differences of opinion or interpretation. Lastly, the incentives are all wrong. There are a lot of rich and unethical people, politicians, non-state actors and state actors who are strongly incentivized to get fake information out there to serve their selfish purposes.”

Peter Lunenfeld, a professor at UCLA, commented, “For the foreseeable future, the economics of networks and the networks of economics are going to privilege the dissemination of unvetted, unverified and often weaponized information. Where there is a capitalistic incentive to provide content to consumers, and those networks of distribution originate in a huge variety of transnational and even extra-national economies and political systems, the ability to ‘control’ veracity will be far outstripped by the capability and willingness to supply any kind of content to any kind of user.”

danah boyd, principal researcher at Microsoft Research and founder of Data & Society, wrote, “What’s at stake right now around information is epistemological in nature. Furthermore, information is a source of power and thus a source of contemporary warfare.”


Eras moving at internet speed don’t last long. If 2016 gave us Post-Truth, then it’s fair to suggest 2017 has been the era of Post-Fact, which we forecast gives way to 2018 being the era of Post-Trust.

Please learn more about how non-human actors are impacting online conversation by downloading Volumes I & II of our eBook — “The Ecosystem of Fake: Bots, Information, and Distorted Realities”

The eBook, like everything we write, is free, but making a pledge on Patreon will help support our ongoing research and investigation into the workings of Bots & Sockpuppets and the spread of misinformation.

“In a time of deceit telling the truth is a revolutionary act.”

― George Orwell

Sincerely

John (CEO & Co-founder)

Mentionmapp Analytics


The Bullshitters & The SocialBots (Vol. II): Selling Snake Oil

photo courtesy StillWorksImagery

Snake Oil hasn’t always been connected with fake remedies. But seeing the #AsthmaCures hashtag in this tweet made me think, “What bullshit Snake Oil is being sold here?”

https://twitter.com/healthwomeninfo/status/894234135329251328

At MentionMapp, researching social bots is part of our business, and sometimes that takes us deeper than the usual skim of profile characteristics. Biographic inconsistencies, lack of detail, following/follower ratios, tweet volume, and other patterns can suggest bot activity. It’s a complex and not totally scientific process, and it takes a combination of human and machine intelligence to discern the human from the non-human in our online conversations.
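To make that concrete, here is a minimal sketch of how a handful of those signals might be combined into a simple screening pass. The field names and thresholds are illustrative assumptions, not our production model, and a flagged profile still needs a human look.

```python
def bot_signals(profile):
    """Return heuristic flags for a Twitter profile, given a plain dict of metrics.

    Field names and thresholds are illustrative screening values only,
    not a validated bot-detection model.
    """
    flags = []
    followers = max(profile.get("followers", 0), 1)
    following = profile.get("following", 0)
    tweets = profile.get("tweets", 0)
    age_days = max(profile.get("account_age_days", 1), 1)

    if following / followers > 2.0:
        flags.append("follows far more accounts than follow it back")
    if tweets / age_days > 100:
        flags.append("implausibly high daily tweet volume")
    if not profile.get("bio"):
        flags.append("empty or missing bio")
    if profile.get("default_avatar", False):
        flags.append("default profile image")
    return flags

# A made-up profile that trips several flags and would get a human review.
suspect = {"followers": 900, "following": 4900, "tweets": 61000,
           "account_age_days": 400, "bio": "", "default_avatar": True}
print(bot_signals(suspect))
```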

Using network analysis to map conversations (via hashtags) and retweet patterns, we have been studying the flow of misinformation. It’s important to identify non-human agents, but it’s invaluable to detect them as they work in concert to manipulate a conversation or influence public perception. Seeing specific tweets being amplified by a collection of social bots is one way we zero in on a Twitter profile to determine how connected it is to the ecosystem of fake.
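A rough sketch of the underlying idea, using the networkx library: build a directed retweet graph and see which authors draw the heaviest amplification. The retweet pairs below are hypothetical, and this is a simplification of the analysis, not the actual BotMapp code.

```python
import networkx as nx  # assumes the networkx package is installed

# Hypothetical retweet records as (retweeting profile, original author) pairs.
retweets = [
    ("bot_01", "BourseetTrading"),
    ("bot_02", "BourseetTrading"),
    ("bot_01", "VeganYogaDude"),
    ("human_a", "BourseetTrading"),
]

# Build a directed graph where the edge weight counts how often A retweeted B.
G = nx.DiGraph()
for retweeter, author in retweets:
    if G.has_edge(retweeter, author):
        G[retweeter][author]["weight"] += 1
    else:
        G.add_edge(retweeter, author, weight=1)

# Authors amplified by many accounts retweeting in lockstep stand out
# by weighted in-degree; they are candidates for a closer look.
for node, score in sorted(G.in_degree(weight="weight"), key=lambda x: x[1], reverse=True):
    if score:
        print(node, score)
```

Once the graph exists, the same structure supports the retweet maps shown throughout these case studies: clusters of accounts that only ever amplify the same handful of authors are the ones worth investigating.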

There are real people and organizations who behave questionably by using fake profiles to manipulate and inflate their social engagement metrics. But what’s more disconcerting is seeing fake profiles (SocialBots) being put to work to drive clicks and real people to fake news sites, fake businesses, or fake people. This is why we’ve been researching the Twitter profile Women’s Health for the past few months. They’re chronic abusers of SocialBots, using them to amplify questionable content like the tweets we’ve highlighted below:

Their website went from our suspected ad fraud list to having “stepped out for a bit.” With this use of such a misleading hashtag — #AsthmaCures — @healthwomeninfo has earned its place as our Bullshitter of the Week.

redirect via: health-womens.com

Women’s Health has used the hashtag #AsthmaCures almost 20 times in the first two weeks of August.

https://twitter.com/healthwomeninfo/status/894529828887691264
https://twitter.com/healthwomeninfo/status/894881398141538304
https://twitter.com/healthwomeninfo/status/894892207068643328

The volume of tweets connected to this hashtag and the number of non-humans pushing out the message is concerning. For instance, we can verify a snapshot of 100 retweets generated by 100 individual fake profiles (red icons on the network maps below are verified SocialBots). It’s our suspicion that this isn’t organically occurring behavior.

The Mayo Clinic clearly states that asthma can’t be cured. This makes Women’s Health a reckless Twitter feed. By using and abusing this specific hashtag in combination with a small army of SocialBots manipulating the social metrics, they represent the dark side of social.

One of our main focuses is highlighting bad bot behaviors. Bot use like Women’s Health’s undermines human engagement every day. Yet there’s another questionable layer of behavior connected with this specific example. While we don’t have the answer, we’re curious about the connection between these tweets and the 19 subsequent links to this YouTube video:

Somewhere, clicks are turning into cash for someone.

As SocialBot creators and network operators are always working to stay ahead of those working to detect, expose, and ultimately dispose of them, we’re guarded about sharing our process publicly. We combine human intelligence (along with persistence, and the humility of knowing we’re not always right), machine intelligence, and network analysis, which gives us the confidence to keep investigating and reporting what we uncover.


Mentionmapp Investigates: See if your social reputation is at risk. Contact: john [at] mentionmapp [dot] com for a Bot investigation and network analytics risk assessment. As Used By –

https://www.thebureauinvestigates.com/
https://firstdraftnews.com/ and http://www.neoncentury.io/

Data analytic support and insights are courtesy of our partners at Plot+Scatter.

From John’s pen (cofounder).

Please visit Mentionmapp — see who’s talking with whom, and who’s talking about what. Discover more. #digitalmarketing #networkanalysis

The Doppelgänger Bots. How They’re Using Stolen Social Identities.

image from Pixabay

They’re amplifying misinformation, sprinkling fake news headlines through Twitter feeds, and more. We’re documenting, tracking, and archiving the everyday (bad) behaviours of fake Twitter Bot profiles. Recently we’ve uncovered fake profiles that have been forged, hijacked… stolen. It’s like witnessing social media identity theft. We’ve noted profiles of organizations, businesses, and people that appear abandoned, possibly dormant, or in a few cases still operating. Here are seven examples of profiles we’re labeling as stolen, all operating as Bot-manipulated content feeds.

News organization — compromised.

Established organization — compromised.

Everyday citizen — compromised.

The last four stolen profiles share a distinct pattern in their “Join Date”. As well, the retweeted content has connected themes (fake news; investing; gambling…) from specific profiles we’ve seen common to this Bot network.

Everyday citizen — compromised.

Everyday business — compromised.

Non-profit organization — compromised.

Corporation — compromised.

Our focus is observing and documenting Bots operating in ways that misrepresent, mislead, and misinform. In this case, these “Doppelgänger Bots’” use of stolen social media identities exposes the reputations of real people, organizations, institutions, and businesses to considerable risk.

Here too it’s masquerade, I find: 
 As everywhere, the dance of mind.
I grasped a lovely masked procession,
And caught things from a horror show…

Johann Wolfgang von Goethe

Mentionmapp Investigates: Discover if your social reputation is at risk. Contact: john [at] mentionmapp [dot] com for a Bot investigation and network analytics risk assessment. As Used By –

https://www.thebureauinvestigates.com/
https://firstdraftnews.com/
http://www.neoncentury.io/
http://www.ij.no/english

Data analytic support and insights are courtesy of our partners at Plot+Scatter.

From John’s pen (cofounder).

Please visit Mentionmapp. Come to know the flow of Info & Misinfo

Observances: Fake News, Bots and Bait

image from Pixibay

“Fake news is frequently used to describe a political story which is seen as damaging to an agency, entity, or person… it is by no means restricted to politics, and seems to have currency in terms of general news.” — Merriam-Webster, The Real Story of ‘Fake News’

Claiming a site or source is intentionally misleading can be difficult to substantiate. Suggesting the modus operandi is the pursuit of financial or political gain, less so. For this case study we’ve observed a network of Bots amplifying news headlines. Specifically, our focus is on two sites operating as purveyors of fake news where financial or political gain seems evident.

Catching our attention are CitizenSlant (no masthead, no writer attribution, and no source citations lend it little credibility) and FakeNewsMatters (the name says it all). While labelling them both “fake news” sites, we see them primarily operating as click-bait feed filler.

First: CitizenSlant’s tweets show Bot-related activity. In terms of content, the overall anti-President Trump theme is evident. There’s also a pattern in terms of volume, ranging between 70–115 retweets per post.

Looking at a 200 retweet snapshot we’ve highlighted that the significant majority of retweets are generated by non-human agents. This is Bot amplified content at work.

Each profile highlighted in red indicates Bot-like behaviour/activity and warrants further investigation.
Greeted with an overlay like this (free clicks!); when it’s closed, a redirect process is initiated.

Second: FakeNewsMatters presents a very different content “fingerprint”. News organization logos are very evident, and no doubt are an attempt to project credibility. Yet every connected link will send the unwitting would-be visitor to a highly questionable website (we advise you not to visit the site).

Unlike CitizenSlant’s retweet-volume-to-Bot ratio, this Twitter feed seems more about the volume of content flowing than about amplifying it. We’ve not found an individual tweet that’s achieved double-digit retweets.

Here are 100 profiles retweeting FakeNewsMatters, and like CitizenSlant, the BotMapp lights up like a Christmas tree.

These Bots are “broadcasting” in a broader context beyond fake news. It’s more like a numbers game. This network operates by throwing out a mix of salacious headlines (fake news), pornography, pills & therapy (highlighting the reputedly high-anxiety culture), point spreads or Ponzi schemes (gambling or investing), or tugs at the heart-strings (charities); sooner or later someone will act and unfortunately give up personal information, fall for a scam, or click on an ad. This is clearly a network of Bots programmed to turn clicks into cash or information currency.

There’s no reason to think these two cases are examples of a Russian plot against Western Democracy. We’ve noted Bot feeds with no political affinity tweeting content that’s both for and against the 45th President. People have to own the responsibility for today’s polarized political landscape. Blaming Bots doesn’t cut it.

Yet we do believe there are undeniable and warranted concerns about the role non-human agents play in the amplification of misinformation and disinformation. By manipulating social signals, inflating metrics (such as likes, retweets, views, comments or up-votes), and encouraging “push a winning trend” and “jump on the bandwagon” behaviour, maybe Bots are in fact “botifying” the unwitting.

This intersection of people, massive communication platforms, psychology, and software automation reveals the good and the bad of our human experience. Disingenuous behaviour and fake news have a long history: “Other thinges are in this Court at a good price, or to say it better, very good cheap: that is to wit, cruel lies, false news, vnhonest women, fayned friendship, continuall enimities, doubled malice, vaine words, and false hopes, of whiche eight things we haue suche abundance in this Courte, that they may set out bouthes, and proclayme faires.”
 — Antonio de Guevara, The Familiar Epistles of Sir Anthony of Gueuara (trans. By Edward Hellowes), 1575

“The past is never dead. It’s not even past.” ― William Faulkner


Mentionmapp Investigates: See if your social reputation is at risk. Contact: john [at] mentionmapp [dot] com for a Bot investigation and network analytics risk assessment. As Used By –

https://www.thebureauinvestigates.com/
https://firstdraftnews.com/
http://www.neoncentury.io/
http://www.ij.no/english

Data analytic support and insights are courtesy of our partners at Plot+Scatter.

From John’s pen (cofounder).

Please visit Mentionmapp and know about the flow of Info & Misinfo

R.I.P Minister of #bcpoli Bots

image from Pixabay

Dust in the digital breeze: delete. You’re gone; Twitter says you don’t exist. We barely knew you, @ReverendSM, but you still gave us so much. Of course we’re curious whether disappearing over the Easter weekend was in any way a symbolic act. Could there be a resurrection of some form? So many questions that’ll simply remain mysteries.

Were we to eulogize the now-deleted Twitter profile @ReverendSM, we’d first have to give thanks for the archive of Tweets and the collection of Tweet-amplifying Bots. We observed enough questionable behaviour to share a pair of case studies related to the hashtag #bcpoli (number I and number II). It shows that the use of Bots (those deliberately masquerading as real profiles) to propel political messages is happening in Canada too.

It’s not just a Trump thing, a Russian thing, a right thing or a left thing; it’s flat out a spurious thing. Bots can’t vote, yet they’re being injected into and undermining the credibility of a very human conversation.

We also appreciate earning an opportunity to speak with Jon McComb about this global issue.

https://omny.fm/shows/the-jon-mccomb-show/how-bots-can-influence-social-chatter

We reflect on some of our process, and share a selection from our archive connecting the Bots of #bcpoli and the Shepherd of this flock, @ReverendSM.

https://youtu.be/zao5I-YzF5Q

“The time will come when diligent research over long periods will bring to light things which now lie hidden.” — Seneca

From John’s pen (cofounder).

Please visit Mentionmapp and know about the flow of Info & Misinfo

How Twitter Bots are Amplifying Global Issues & Influencing Elections

image from Pixabay

The BotMapp project is observing how Bots operate on Twitter and influence human conversations.

Observances: The Bot Propelled Tweets of #bcpoli

On March 18th a single tweet caught our attention, leading us to write The #bcpoli Bots. The same questionable pattern of behaviour persists. We also have to acknowledge that one Twitter profile and a network of Bots amplifying select tweets isn’t likely to change the outcome of the May 9th BC Election. In fact, it’s something that’ll be greeted with an uncaring shrug at best, and at worst an event that won’t register on most voters’ radar at all. We have this on our radar because it’s not a foreign political event, but one that’s happening in our own backyard.

We’re doing this BotMapp project with a deep commitment to discovering and observing how Bots operate on Twitter. Non-human agents acting as message amplifiers is a global issue. Seeing the flow of information and misinformation as it’s being injected into networks of human conversation is an important first step in mitigating the problem.

Our focus is on Bot profiles classified as Fleshmasks – “deceptively and consistently portray themselves as humans” – and more specifically the Chimera – “pull traits from many different sources: a profile picture from google images, a description from another user.” [1]

Example of the type of profiles we’re focused on identifying.

We’ve continued to take note of the Twitter profile @ReverendSM, and are mapping the Bot profiles retweeting content connected to the hashtag #bcpoli. This retweet map (.gif) highlights profiles exhibiting bot-like behaviour, and sets the foundation for our research.

ReTweet Mapp from April 8, 2017 at 4:45pm (PST). From @ReverendSM’s previous 200 tweets, 90 profiles had retweeted.

https://twitter.com/JimVidmarWizard/status/845411024517787648

We’re “students” of the Bot world and are always looking to learn more. We spoke with Jim Vidmar, who was recently featured in the 60 Minutes segment about “Fake News.”[2] More importantly, Vidmar’s 15 years of experience developing software automation tools and his knowledge of the Twitter ecosystem made for an invaluable conversation.

We agreed that Bots can be an extremely effective messaging propellant. Like a small drop of nitroglycerin, Bots quickly inflate the number of retweets and likes. Combine “talking like a 13 year old girl on Twitter” with the fact that “people like to retweet crazy shit,” and Vidmar knows the end result is content that climbs the search rankings. It’s all about turning clicks into cash.

Cash-making, not king-making, is the key motivation behind deploying a network of Bots, according to Vidmar. From his own experience he suggested “no one is buying retweets to take over narratives.” He pointed to commercially available tools like Automatic Retweet as an example of why it’s a pipe dream to believe there’s a “Wizard of Oz” with an army of bots shifting the course of our democratic process.


We won’t speculate about @ReverendSM’s connection to this Bot network. However, we think it’s important not to dismiss the role these Bots are playing as message amplifiers. We have looked at two more recent tweets to further highlight this issue. For instance, from this April 6th tweet, 21 of the 26 retweets were delivered by the type of Bots we see deliberately, “deceptively and consistently portraying themselves as humans.”

https://twitter.com/ReverendSM/status/850180019313823744

We’ve also established and documented a timeline pattern of retweets from the Bot network, starting 4 minutes after the original post and all flowing into the feed between 7:58pm and 7:59pm (PST).

Of these 59 retweets we’ve identified 43 Bots engaged in the same type of broadcast pattern (starting at 4:22pm and ending at 4:28pm) as the previous tweet.

https://twitter.com/ReverendSM/status/849401377780617216
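For the curious, here is a rough sketch of the kind of timing check behind these observations: measure the lag to the first retweet and how much of the retweet volume lands inside a narrow burst window. The timestamps below are made up for illustration; they are not the archived #bcpoli data.

```python
from datetime import datetime, timedelta

# Fabricated timestamps for illustration only; not the archived #bcpoli data.
original_posted = datetime(2017, 4, 6, 19, 54)
retweet_times = [datetime(2017, 4, 6, 19, 58, s) for s in range(0, 50, 2)]
retweet_times.append(datetime(2017, 4, 6, 21, 12))  # one straggler, likely human

def burst_ratio(times, window=timedelta(minutes=2)):
    """Fraction of retweets that land inside the densest window of the given width."""
    times = sorted(times)
    best = 0
    for i, start in enumerate(times):
        best = max(best, sum(1 for t in times[i:] if t - start <= window))
    return best / len(times)

print("lag to first retweet:", min(retweet_times) - original_posted)
print(f"share inside a 2-minute burst: {burst_ratio(retweet_times):.0%}")
```

A very high burst share with a short, uniform lag is the machine-like signature described above; organic sharing tends to trickle in over hours.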

@ReverendSM’s choice to be an active yet anonymous participant and commentator in the #bcpoli feed should bring their credibility into question. Yet the fact that real citizens are also choosing to share this content further illustrates the type of problem that today’s new media platforms are exacerbating. Sadly, it’s the authorship that now has more currency than the author. Call it “post-credibility.”

On one hand this is an example of non-human agents amplifying and subsequently undermining the socio-political landscape. But, we’re also concerned this behaviour is potentially exposing everyone participating in the #bcpoli conversation to the worst elements of the internet.

We’re doing more research into this particular Bot network. There are many reasons to suspect it’s a parade of fake people, fake news, fake businesses, and fake charities. We’ve now connected 280 Bot profiles to the retweets of @ReverendSM, and looking at those feeds we anticipate finding highly questionable content that poses a threat to personal privacy and security.

Working with our friends at Plot and Scatter, the next project in the queue will observe a Bot network connected with two anti-Trump “fake news” websites.

“What is morally wrong can never be advantageous, even when it enables you to make some gain that you believe to be to your advantage. The mere act of believing that some wrongful course of action constitutes an advantage is pernicious.” ― Marcus Tullius Cicero

[1] Gregory Maus and Onur Varol. 2017. A Typology of Socialbots. In Proceedings of the ACM Web Science Conference, Troy, NY, June 2017 (WebSci17). 8 pages.

[2] http://www.cbsnews.com/news/how-fake-news-find-your-social-media-feeds/

Mentionmapp Investigates: See if your social reputation is at risk. Contact: john [at] mentionmapp [dot] com for a Bot investigation and network analytics services quote. As Used By –

https://www.thebureauinvestigates.com/
https://firstdraftnews.com/
http://www.neoncentury.io/
http://www.ij.no/english#0

Data analytic support and insights are courtesy of our partners at Plot+Scatter.

From John’s pen (cofounder).

Please visit Mentionmapp and explore the Twitterverse soon!

On the Air and in Conversation: Bots & Misinformation

On the heels of attending MisInfoCon (chronicled in Part I, Part II, and Part III), it was nice being invited to Vancouver’s Roundhouse Radio 98.3 to talk about Bots and misinformation. I spoke with host Minelle Mahtani and Jon Winebrenner, with the conversation framed by acknowledging that “bad bots are accounting for 28.9% of today’s global web-site traffic.”

Given the short amount of time and the complexity of the issue, we barely scratched the surface. Nonetheless, it was encouraging to see the interest in this subject in Vancouver and good to get the conversation started. It will be even more encouraging to see this conversation evolve, because it’s an issue that’s simply not going to fade away.

http://cirh2.streamon.fm/listen-pl-8415

For additional context, the First Draft News team does a great job of distilling the complicated matrix of misinformation into the following —

Hopefully this conversation leaves you with a few things to think about before clicking on that next sensational or hysterical headline. Even better, consider this Feynman principle before that next click.

From John’s pen (cofounder).

Please visit Mentionmapp and explore the Twitterverse soon!

Observances: Bots Behaving Badly

We’ve been using our new BotMapp* prototype feature for identifying, mapping, and researching Bot activity and behaviour on Twitter for the past couple of months. First of all, we don’t see all Bots as bad Bots. But the ones we’re interested in revealing and investigating are those we see as today’s “weapons of mass amplification.”

Mentionmapp visualizes Twitter’s connections and conversations. The BotMapp will be like a compass, helping guide you through the complex network of non-human agents infecting our sociopolitical discourse with misinformation.

We’re focusing our attention on a few hashtags related to this year’s European elections. While looking at activity connected to the hashtag #geertwilders during a tiny sliver of time on March 7th, we noticed a pattern of profile type, design, and behaviour that led us to create this first Mentionmapp Observance. As patterns emerge and distinct cases of non-human agents amplifying misinformation are discovered, we’ll highlight what’s being observed.

https://youtu.be/J2jH8GfxKz4

  • *Note re: the BotMapp prototype – it’s not yet publicly available, which makes us excited about starting to partner with a small group of journalists and news organizations. These professionals will be using the BotMapp prototype and, most importantly, sharing their insights and feedback to advance this endeavour from the project it is today into a more robust product tomorrow.

We have a limited number of BotMapp prototype partnership opportunities available; please email admin (at) mentionmapp (dot) com if you’d like to learn more.

“Falsehood flies, and truth comes limping after it, so that when men come to be undeceived, it is too late; the jest is over, and the tale hath had its effect.” — Jonathan Swift

From John’s pen (cofounder).

Please visit Mentionmapp and explore the Twitterverse soon!
