
Misinformation, Disinformation, and the Need for Narrative Intelligence

Narrative Analysis of Fake News


In the twenty-first century, the internet has become an essential part of daily life for many of us. It allows us to connect with others regardless of distance and enables us to communicate and share content. The internet can even act as a virtual marketplace, allowing millions to buy, sell, and conduct business without leaving their homes. However, just as there are those who use the internet to enrich their lives, there are also those who exploit it to harm others. Often the most dangerous of these are the most subtle.

With the rise of fake news, we have seen how institutions and even whole ideologies can be called into question because of something someone saw on the internet. Although the term took on a primarily political connotation after the 2016 US presidential election, fake news can be defined as any fabricated information that mimics media content in form, but not in organizational process or intent. And despite that political context, fake news is also being disseminated about topics such as nutrition, vaccination, and finances. Making it all the more dangerous, fake news spreads extremely easily on social media sites like Twitter and Facebook. In fact, Facebook insights show that even after the US elections, fake news remained a problem, with roughly 60 million engagements per month on fake news articles. An additional study found that on Twitter, fake news spreads far more rapidly, and to far more people, than genuine news does. While the size and scope of the fake news and clickbait problem has become apparent, its causes and treatments have yet to be found.

Social Media & Fake News

While both Facebook and Twitter are aware of their fake news and clickbait problems, their attempts at mitigating them have not been particularly successful. Even after multiple changes to Facebook’s algorithm in early 2018, many articles that had been reported as false by major fact-checking organizations had not been flagged as such, and major fake news sites saw little or no decline in Facebook engagements. Furthermore, Facebook’s now-discontinued strategy of flagging stories as “Disputed” seemed to backfire: data suggests that fake news Facebook had not flagged came across as far more reputable simply because it carried no flag. Fake news and clickbait outlets have remained one step ahead of Facebook’s ability to suppress them, with many media commentators fearing that misinformation overall is “becoming unstoppable”. And you don’t have to be a politician or celebrity to find yourself the victim of one of these attacks.

Misinformation scams, whether in the form of clickbait or fake news articles, afflict a wider range of people and industries than most would think. Malicious actions (as opposed to accidental or unintentional ones) by internet agents seeking financial information, medical information, or personally identifiable information make up nearly 60% of all cyber crime incidents. Furthermore, of those victims who have attempted to take legal action, over 50% have pursued private civil actions in the courts, with only 17% of cases being criminal actions. Given the novelty of cyber crimes, it is unlikely that most of these perpetrators will ever face their day in court. And when examining the demographics of these incidents, certain conclusions can be drawn about the industries that face the greatest risk. One study found that the retail, information, manufacturing, and finance and insurance industries consistently face the greatest risk of cyber incidents. Due to the nature and scale of their businesses, these industries face significant threats from coordinated misinformation attacks, clickbaiting, and phishing scams.

The presence of fake news and clickbait scams has grown over the years, driven by a variety of motivations. Some actors seek to spread misinformation on topics such as politics, nutrition, and finances, while others seek to extort sensitive information from web users. And although there has been a marked rise in malicious or misleading content on social media, the platforms’ efforts to stop its spread have been far from successful.

Anonymity and Information Sharing in Online Communities

Prior to the internet age, retrieving information was a far more laborious and involved process. For any esoteric information, one would have to journey to the library and find the resources oneself. And for matters of opinion, politics, and pop culture, the marketplace of ideas would generally weed out fringe beliefs. However, since the dawn of the internet and social media, we have created environments where it is possible to share and consume false and malicious information while remaining insulated from the truth.

It is argued that social media has created a unique feedback loop in which, for the sake of platform engagement, users are continually suggested content similar to what they already consume. Consider YouTube, and the strange avenues of videos it can lead us down. To maximize engagement, YouTube offers recommended videos to its users, and over 70% of content watched on the platform is recommended content. While this may be a harmless way to find new music, problems arise with reactionary news stories and conspiracy content. Many videos sharing false claims and conspiracy theories generate high engagement and are therefore likely to be recommended. While YouTube claims to have made an effort to curb this problem, those claims are impossible to verify, as the details of its recommendation algorithm are not open to the public. Meanwhile, as of August 2019, the FBI identified fringe conspiracy theories as a domestic terrorism threat due to the increasing number of violent incidents motivated by such beliefs. The communities that share and promote conspiracy theories operate by feeding off a variety of sociological factors, such as groupthink.
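The feedback loop described above can be illustrated with a minimal sketch. This is not any platform's actual algorithm (real recommendation systems are proprietary and vastly more complex); the item fields, tag-overlap similarity, and engagement weighting below are all invented for illustration. The point is the structure: when recommendations are scored by similarity to past consumption multiplied by raw engagement, high-engagement fringe content that matches a user's history keeps resurfacing.

```python
# Illustrative sketch of an engagement-driven recommendation loop.
# All field names and scoring choices are hypothetical assumptions,
# not a description of any real platform's system.

def recommend(user_history, catalog, top_n=5):
    """Rank catalog items by similarity to the user's watch history,
    weighted by each item's overall engagement score."""

    def similarity(item, history):
        # Fraction of the item's topic tags the user has already consumed.
        seen_tags = {tag for watched in history for tag in watched["tags"]}
        if not item["tags"]:
            return 0.0
        return len(seen_tags & set(item["tags"])) / len(item["tags"])

    scored = [
        (similarity(item, user_history) * item["engagement"], item)
        for item in catalog
        if item not in user_history
    ]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [item for score, item in scored[:top_n] if score > 0]

history = [
    {"title": "Moon landing 'questions'", "tags": ["conspiracy"], "engagement": 9.0},
]
catalog = [
    {"title": "Flat-earth 'evidence'", "tags": ["conspiracy"], "engagement": 8.5},
    {"title": "Gardening basics", "tags": ["hobby"], "engagement": 3.0},
]
print(recommend(history, catalog))
```

Under this toy scoring, the high-engagement conspiracy video is the only recommendation the user sees, while unrelated content is filtered out entirely: the insulation the article describes emerges from the scoring rule itself, not from any malicious intent in the code.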

With the dawn of the internet and social media platforms, the way we share information has changed forever, and the same holds true for fringe thinking. Before the internet, most conspiratorial thinking was kept in check by public norms and values; thanks to the ability to share information and media anonymously and secretly, fringe groups have become insulated from the rest of society. These communities are especially susceptible to the sociological effects of groupthink and confirmation bias, where any new information or viewpoint that challenges presupposed beliefs is immediately thrown out. These issues are especially prominent in conspiracy-theory groups, where conspiracies operate on a paradigm of plausible deniability and coincidence. Any dissimilar information or unrelated event may be used as proof hinting at the “truth” hidden by conventional ideologies. In fact, conspiracy theorists cast those who uphold conventional ideologies, such as academic experts, corporate media, and professionals in their trades, as the enemy. And as has already been demonstrated, conspiracy theories entertain audiences very well and are likely to be recommended by media-sharing algorithms.

Dubious Conspiracies and Malicious Content

This environment of mass sharing of dubious media content and a reactionary sentiment against corporate media and academic expertise characterize what has been dubbed by one scholar as an Information Dark Age. This Information Dark Age is characterized by the viral spread of unsubstantiated, unverified information (which seems plausible enough on the surface) through unauthorized channels, combined with a general reaction against corporate media and academic expertise, sometimes referred to by far-right bloggers as “The Cathedral”. 

In this environment, individuals do not receive information online from professional sources, but rather turn to opinion sites masquerading as news sites, and to social media, as their primary mechanisms for retrieving information, sharing viral memes, and spreading “fake” news propaganda. In this headspace, and on these social media forums, the fact-checked corporate media outlets cannot be trusted because they are cast as elitists out to manipulate the public. Instead, social media forums circulate what appear to be articles written by independent journalists on a crusade to spread “the truth”. However, the viewpoints these so-called independent journalists share are often entirely false, exploitative, and manipulative.

As discussed above, there is a variety of internet media traps and scams, such as clickbait and fake news. These dangers can best be summarized as intentionally misleading content written on controversial subjects with the intention of misinforming or exploiting the reader for personal, political, or financial gain. To better understand the dangers the circulation of this media represents, let us consider the following examples and their repercussions.

Vaccine Misinformation

Many fake news amplifiers and outlets try to increase their credibility by mimicking official news outlets. In Figure 1 below, the website Off Guardian imitates the popular British liberal news outlet The Guardian. In fact, the founders of Off Guardian admit that the name comes from the fact that they had all been banned from, or had their comments removed from, the official Guardian. Many other articles circulating on the imitation site concern conspiracy theories, anti-government sentiment, and fringe content, which strongly implies that the founders were banned from the official outlet for sharing similar content.
Figure 1: Vaccine Misinformation from Off Guardian
The author of this article, Kit Knightly, is one of the founders of Off Guardian and, by their own admission, was banned from the official Guardian. In the article, Knightly criticizes the advertising and public approval the COVID vaccine has received. Instead of presenting medical evidence on the potential risks of the vaccine, Knightly complains about the celebrities publicly endorsing vaccination and about the negative public image of anti-vaccine advocates. The author concludes the article with their own 4.9-out-of-5-star rating, a request for money (or bitcoin), and a subtle dig at The Guardian and Bill Gates.

While upon closer inspection this content obviously lacks the editorial fact-checking and review process that goes into corporate news, the article can convincingly pass as a reliable source of information to those who do not know better. This kind of controversial, anti-government media also tends to generate a great deal of engagement online. Figure 1 shows over 851 comments on the article, the vast majority of them expressing support for the article’s “findings”. However, it is unclear whether those engaging with the article do so because they share the author’s sentiment, or because they hope to use the article to misinform others. EdgeTheory’s narrative intelligence platform shows that in the 160 hours after the article was written, it was shared 14 times online, with all 14 shares coming from Russian users or Russian bots. And while coronavirus and vaccines have been controversial topics attracting a variety of opinions, they are by no means the limit of what malicious internet content can target.

Fake "Dreamers Day" Campaign

Malicious content and misinformation can also target a business, as was evident in an attack on Starbucks in 2017. In early 2017, just two days after President Donald Trump’s executive order placing a temporary travel ban on travelers from Syria and six other Muslim-majority countries, Starbucks publicly announced plans to hire over 10,000 refugees over the next five years. This announcement sparked backlash from many Trump supporters toward the coffee company and earned the ire of many fringe internet users on the far right.
Figure 2: 4Chan message board and fake promotional material
A group of anonymous users on the message board 4chan planned a fake Starbucks promotion offering discounted coffee to undocumented immigrants, and circulated coupons for the event. The fictitious promotion, dubbed “Starbucks Dreamers Day”, was created with the intention of luring undocumented immigrants into Starbucks stores so they could be caught or exposed. Users even photoshopped and circulated their own coupons and ads for the event around the web. Due to the convincing quality of the fake promotional images, and the scale at which they were shared, Starbucks had to scramble to debunk the fake event.

The success of the fake Dreamers Day promotion can be attributed, in part, to an integral aspect of internet use: anonymity. Anyone or anything can be on the other side of the keyboard, and their credibility can never be fully confirmed. Corporate media accounts may be convincing pranksters, teenage girls may be Chris Hansen, and concerned American patriots may be international agents with ulterior motives. Internet content can be created or shared from anywhere, yet target specific groups or communities of people who would normally never interact with these agents.

Satire Turned Into Propaganda

One common instance of misused and misappropriated internet media is the popular practice of repurposing satire pieces as disinformation. Figure 3 depicts one such article, which yet again mimics the design and layout of an official news outlet. The article, published by the Babylon Bee, warns of a dangerous spread of freedom following the lifting of mask mandates in Texas and Mississippi. The Babylon Bee is a conservative, Christian-leaning satire website, much like its more liberally aligned counterpart, The Onion.

Figure 3: babylonbee.com
Much credibility can be lent to these satirical pieces because the article in Figure 3 never expressly states that it was written for satirical purposes, and its professional format mimics a legitimate news outlet. While the article may have been written to provide a laugh for politically aligned readers, it is all too common to see such articles misappropriated as disinformation, with their quotes and “findings” used as evidence in propaganda pieces. Furthermore, using EdgeTheory’s narrative intelligence platform, a web of similar agents sharing these parody articles has been found.

EdgeTheory’s narrative intelligence platform found that a large number of shares of the Babylon Bee article in Figure 3 were made by Russian-linked internet operatives. Given the scope and accessibility of the internet, one cannot forget how easy it is for anyone to connect with anyone else on the web. In the same insulated communities forged by common interests online, anyone can gain entry and spread their own influence. It is also common for a group of agents to infiltrate and manipulate an insulated group. Just as a parasite attaches itself to a host and manipulates it to further its own self-interest, these groups attach themselves to a larger host community and influence it.

These “parasite” members often serve to forward the interests of other “parasite” members and help falsely reinforce the validity of self-interested claims or comments. Within the comments of a seemingly innocent and sarcastic opinion piece, readers are always only one click away from being whisked off to a much seedier Russian clickbait site. Any commenter or contributor to the page could hide ulterior motives and easily embed bait-and-switch links in a comment, and given the size of the internet, there is potential for a near-infinite number of shares and contributions. With what seems to be a modern-day Trojan horse around every corner, one must exercise a greater sense of caution and suspicion to navigate the modern web safely.

Disinformation Messaging Tactics

Many disinformation narratives and outlets follow specific patterns that make them successful as disinformation. There is a handful of tactics that fringe media outlets and fringe media agents use that are both common and effective. These strategies often involve dovetailed messaging, degrees of disinformation, narrative trials, and full-spectrum messaging to extend the reach and believability of their narratives.

Dovetailed Messaging

Dovetailed messaging involves consistently alternating messaging themes between official media sources and fringe amplifiers. While the media outlet reports on one specific topic, fringe users continually push a narrative when interacting with that outlet. After a while, the outlet notices the engagement and begins to pick up and amplify those comments and sentiments. This is especially effective with anti-American messaging and news coverage around the world. For example, if a Chinese media outlet criticizes the United States for refusing to recognize Beijing’s authority over Taiwan, the commenters and amplifiers will argue that China has a legitimate right to be there. Soon the news cycle picks up on this new sentiment and engagement and takes up coverage of China’s legitimacy in Taiwan, while the commenters in turn condemn the United States. This tactic is especially effective because readers are exposed to the full range of propaganda from both fringe amplifiers and media outlets. And by not promoting the same issue simultaneously, more credibility is lent to these narratives. By mixing official or legitimate news media with fringe agendas and propaganda, fringe media outlets are able to strengthen their reach and authenticity.

Degrees of Disinformation

By using varying degrees of disinformation, advanced persistent manipulators are able to push fringe viewpoints while still holding some degree of credibility on other subjects. While national media outlets have their journalistic integrity to protect, fringe amplifiers are bound by no such rules. Furthermore, advanced persistent manipulators operate on social media and through social media-linked articles rather than news sites, which often sit behind paywalls for non-subscribers. These manipulators are free to publish more sensationalized and exaggerated content on social media, where it can freely reach a wider audience and generate more engagement. By splicing in the occasional legitimate news source or article, advanced manipulators reinforce their credibility, while their other fringe content is promoted for their financial interest. Another advantage of publishing on social media is that publishers can instantly delete or edit their content, whereas a news agency would have to go to the trouble of publishing a retraction or correction. Fringe media has taken full advantage of this by editing content based on the reaction it generates.

Narrative Trials

The narratives that these advanced persistent manipulators push often have the advantage of being given a “trial run” before being fully developed. Social media provides tools and metrics to track the performance and engagement of articles, and advanced manipulators learn from and adapt to these engagement metrics. Frequently, when certain narratives have picked up enough steam, they are picked up and “softened” by more traditional media outlets. This is especially true for topics such as Covid-19 and vaccination, which are so new that fringe narratives are easily picked up and examined by more official channels. The best fringe content amplifiers know how to optimize their content so that it attracts the most attention from the widest variety of people, regardless of journalistic integrity.
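The trial-run process described above resembles an A/B test run on an audience. The sketch below is a hypothetical illustration of that logic, not a description of any real tool: the metric names, the engagement-rate formula, and the amplification threshold are all invented for the example. Several framings of the same narrative are posted, their engagement rates are compared, and only the variants that clear the threshold are kept for amplification.

```python
# Hypothetical sketch of a "narrative trial": post several variants of
# a story, measure engagement, and amplify only the best performers.
# Field names and the threshold are assumptions for illustration.

def run_narrative_trial(variants, min_engagement=0.05):
    """Given trial posts with impression and interaction counts, return
    the headlines whose engagement rate clears the amplification
    threshold, best performer first."""
    survivors = []
    for post in variants:
        # Engagement rate = interactions per impression (guard against 0).
        rate = post["interactions"] / max(post["impressions"], 1)
        if rate >= min_engagement:
            survivors.append((rate, post["headline"]))
    survivors.sort(reverse=True)  # highest engagement first
    return [headline for rate, headline in survivors]

trial = [
    {"headline": "Calm, factual framing", "impressions": 1000, "interactions": 20},
    {"headline": "Outraged, sensational framing", "impressions": 1000, "interactions": 120},
]
print(run_narrative_trial(trial))
```

In this toy trial, the calm framing (2% engagement) is discarded while the sensational framing (12%) survives, mirroring the article's point that optimizing purely on engagement metrics systematically selects for inflammatory content.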

Full Spectrum Messaging

Because advanced persistent manipulators are not bound by the traditional rules of journalism, they publish a much broader, and often contradictory, range of content. Fringe content amplifiers are chiefly interested in maximizing their financial gain and prioritize it over the norms and practices of professional writing or research. To capture the widest possible audience and maximize readership, engagement, and profit, these fringe agents will publish contradictory and inflammatory content. While this should make them obviously unreliable sources, it instead positions the manipulator so that people on both sides of an issue are consuming the content.


As integral as the internet has become to our daily lives, it is paramount that we understand the dangers lurking beneath the surface of the web. With the rise of clickbait and fake news across social media, one has to be extremely careful where one clicks and what one believes. These threats are disguised as media links designed to trick users into entering a site, only to monetize the visit, steal their information, download malware onto their device, or attempt to manipulate their thinking. And following the 2016 U.S. presidential election, fake news has taken on a far more political connotation.

In the run-up to the 2016 presidential election, fake news was widely discussed, and the integrity of many information and news outlets was questioned. Even after the election, fake news articles on topics ranging from finance to nutrition and vaccination have continued to multiply across social media. And despite their best efforts, social media moderators have been powerless to stop the flow of fake news and clickbait links.

In recent years, misinformation has exploded across social media, becoming ubiquitous across the internet. From healthcare workers to Starbucks baristas, anyone can become a target: advanced, persistent fringe media publishers have found ways to exploit the engagement mechanics of every corner of the internet. These fringe publishers abuse their knowledge of social media engagement to maximize their reach and influence for financial gain. And by taking advantage of the psychological factors at work in more isolated communities, they can encourage a paranoid groupthink effect that sustains their content and beliefs.

It is therefore more important now than ever to teach and empower individuals to navigate the web safely. These advanced fringe content publishers all rely on human error as the linchpin of their schemes, and a smarter content consumer can easily spot the pitfalls to avoid. By studying content trends and common fringe publishing tactics, the average web user will be better equipped to avoid clickbait and disinformation altogether. Although disinformation is not going anywhere, that does not mean we cannot rise up and outsmart it.
