While the boom in misinformation may have had the biggest spotlight during the 2020 U.S. election, the issues were only amplified further at the height of the COVID-19 pandemic. But the spread of misinformation online isn’t a new pattern, and has always been a point of contention when considering the free flow of information within open communities.
Many people only think of the big headlines when it comes to misinformation, but in reality, bad-faith campaigns can affect individuals or businesses at any level, whether as viral social media posts or as strategic negative SEO campaigns designed to hurt specific people or entities. Intentional negative SEO campaigns aren't the only way audiences are misled, however: even the most well-intentioned content can be rife with misinformation.
No matter the subject matter of the misinformation, one thing remains true — there may never be a way to fully contain the spread of false information online. However, by understanding how both social media and SEO rankings change in the wake of trending topics, individuals and businesses alike will know how to not only discern false information from the truth, but also potentially protect themselves if they ever find themselves in the aftermath of a bad-faith campaign.
How Social Media Contributes
While misinformation online has been common since the birth of the internet, mainstream attention on the issue exploded around the 2020 U.S. election. Though every social media hub was a breeding ground for misinformation, Twitter and Facebook seemed the most overrun. Even with efforts to track and stop the spread of false and potentially dangerous information around the election, the same issues cropped up without warning and with mass appeal.
Anyone who has spent time on Twitter, Facebook, Instagram, or any other popular social media site knows that once a trending topic gains traction, few things can slow it down.
Unfortunately, this same rule applies to misinformation and false rumors, along with anything else that naturally instigates arguments and discourse. Social media is built around shared conversations, and the conversations that stir the strongest feelings tend to catch the largest wave in the public eye. Further, social media bots may latch onto trends on their own, or may even be specifically designed by bad-faith actors to spread false information. This is a common problem, and one that violates many social media platforms' terms of service (TOS) agreements, but bots have become too widespread for the issue to be tackled merely by blocking accounts and limiting post behavior.
While bots themselves may be against general social media rules, the rules are blurrier when it comes to actual people spreading misinformation. Quoted in Forbes, Roger Entner of Recon Analytics explains: “...the platforms profit from it because the more outrageous the content the more people interact with it.”
He continues, “this type of 'engagement' is what the platforms are looking for; people reacting to things.” The same can be said for “hateful” content, in combination with blatant misinformation.
On top of many social media platforms allowing misinformation to spread, it’s also becoming increasingly difficult for viewers to determine what information is accurate vs. what is misconstrued or blatantly false.
What Platforms Are Doing
While social media platforms may benefit in some ways from the objective rate of engagement that comes with “enticing” and “controversial” misinformation, many have determined that the risks outweigh the benefits of letting such patterns fester. In response to such patterns, many larger social media platforms have implemented (or attempted to implement) tools to stop the spread of such harmful misinformation.
- Twitch, the streaming platform, promised to begin permanently banning streamers considered “chronic” spreaders of misinformation on its platform.
- Spotify, the popular audio streaming service, said it would begin attaching informational messages regarding COVID-19 to content on its platform to deter misinformation, though many critics claimed that would do little to curb the rampant misinformation coming from some of its biggest content creators.
- YouTube announced it would be “cracking down” on vaccine misinformation spread by content creators on its platform through new, stricter policy updates.
- Facebook, the social media giant, is perhaps most known for its rampant spread of misinformation during the 2020 election. In response, they also released a statement claiming they would be making changes to their policies to help stop the spread.
- Instagram, also owned by Facebook, released a similar statement as its counterpart.
Meanwhile, other social media platforms are facing scandals claiming that not only are they blatantly ignoring the spread of false and harmful information, but they are also actually advocating for it.
TikTok, for example, has been accused of having an algorithm that purposefully directs visitors to misinformation about the ongoing 2022 Russia-Ukraine conflict, as well as to content containing COVID-19 vaccine misinformation. This is particularly alarming considering the app's age demographics: in 2021, approximately 25% of users were reported to be between the ages of 10 and 19, and 22% between the ages of 20 and 29.
Where and How SEO Gets Involved
Any good SEO strategy relies on keyword-focused content, along with trending topics with broader appeal to earn links. Unfortunately, however, this can lead to exacerbating issues of misinformation, especially if the writers of that content aren’t performing adequate research into the information they’re presenting.
Worse, this misinformation can grow exponentially once you consider the amount of spam content created by bots online, which chase inorganic traffic and dishonestly boost rankings by scraping information off the internet to create as much content as possible.
If Google's algorithms are unable to recognize this content as dishonest and misleading, it becomes a tidal wave of false information that is then discussed, reported, and represented by what appears to be a large number of sites online, and so the trending topic continues to grow and fester. This issue only risks becoming worse as search trends and tools continue to evolve and become more accessible, as well as more open to manipulation by bad-faith actors.
Further, there are several tactics bad-faith actors use to further their misinformation. According to Search Engine Journal, some of these tactics include:
- Ambiguation: the act of intentionally flooding the web with incorrect information.
- Google Bombing: the attempt to “redefine” a term or phrase by publishing and drawing traffic (and links) to alternative content, retraining how Google's algorithms understand related queries and rank results.
- 302 Hijacking: a now largely defunct means of redirecting a visitor from one website to another, incorrect, and potentially malicious website.
- Typosquatting: registering misspellings of common domains and/or the names of well-known people to trick visitors into believing the information presented comes from those sources.
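Typosquatting in particular can be screened for programmatically. The following is a minimal, illustrative sketch (not a tool referenced in this article) that flags domains suspiciously similar, but not identical, to a hypothetical watchlist of legitimate domains, using Python's standard difflib module:

```python
from difflib import SequenceMatcher
from typing import Optional

# Hypothetical watchlist; a real brand-monitoring setup would use
# its own list of legitimate domains to protect.
KNOWN_DOMAINS = ["wayfair.com", "reuters.com", "youtube.com"]

def likely_typosquat(domain: str, threshold: float = 0.85) -> Optional[str]:
    """Return the legitimate domain this one appears to imitate,
    or None if it is either the real site or an unrelated domain."""
    for known in KNOWN_DOMAINS:
        if domain == known:
            return None  # exact match: the real site, not a squat
        # Similarity ratio in [0, 1]; near-identical strings score high
        if SequenceMatcher(None, domain, known).ratio() >= threshold:
            return known
    return None

print(likely_typosquat("wayfiar.com"))   # flags the misspelling
print(likely_typosquat("example.com"))   # unrelated, not flagged
```

A real monitoring pipeline would also check common letter swaps, added hyphens, and lookalike characters, but a simple similarity threshold like this catches many plain misspellings.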
Not unlike how social media platforms ambiguously benefit from controversial content, search engines across the board benefit in the same ways. Controversial topics get more clicks — which means they’re more likely to appear in SERPs for the next visitor when searching for a similar topic.
Many users may not be aware of Google's team dedicated specifically to finding and neutralizing these threats and others that evolve as the web does. In a published statement, the team explains its work: “We look for high-impact interventions, where focusing on helping a specific group of people — journalists, civil society, or activists, for example — makes the internet and society stronger and safer for everyone.”
In this case, “open society” can refer to the internet as a whole, as well as potentially those smaller “societies” burgeoning on social media platforms, within online message boards, and other populated corners of the internet.
In online hubs where there might be established rules but a lack of oversight, open societies are prone to misinformation and group-think, and can evolve into misinformation machines if the people involved are passionate enough about what they believe to be the truth.
But because these open societies are exactly that, controlling the wave of misinformation that stems from them entails more than simply adding new rules to social media TOS agreements and relying on Google's misinformation team, especially when SEO is actively being used to spread such information, whether purposefully or simply out of ignorance.
How Content Strategy Can Help
Misinformation campaigns don’t have to be as big as those around the U.S. election and COVID-19 pandemic; they can center on smaller, more targeted topics, especially businesses and personal reputations. There is, after all, a reason PR firms exist and continue to thrive in the online world.
For example, as further explained in Search Engine Journal, the online furniture distributor and marketplace Wayfair experienced a huge surge in search volume seemingly out of nowhere, driven by a dangerous and false rumor circulating online.
In essence, online threads popped up claiming that random Wayfair product listings could be associated with missing people and human trafficking. Reuters later fully debunked the conspiracy, but, at least for a short period, the unchecked misinformation circulating online forced Wayfair to face huge and unexpected backlash.
Recovering From a Misinformation Campaign
Whether it’s a massive misinformation campaign around a controversial event, a sudden rush of negative attention due to misplaced fearmongering, or simply random accusations meant to hurt a reputation, using content strategy to shift the leaning of the SERPs might feel like an uphill battle, but it isn’t impossible. And while PR isn’t only for when a business or person is hit with negative press, it is an important part of the puzzle when it comes to negative trends.
Due to the nature of trending topics, those attempting to recover from negative SEO or social media attention should target long-tail keywords around the issue, or use verbiage similar to the claims themselves, and create content that either discredits those claims or explains the situation.
Others may also choose to engage in a wider PR campaign that includes content on other websites to increase the spread of the explanation (such as Wayfair’s statement made with Reuters). Further, SEOs and business owners should be aware of self-inflicted SEO mistakes that could disrupt the flow of sharing information and potentially make the situation worse.
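As a rough illustration of the long-tail targeting step described above, the sketch below filters a hypothetical export of search queries down to long-tail phrases mentioning the rumor, producing a list of queries to write clarifying content against. The query list and issue terms are invented for the example:

```python
# Illustrative issue terms; a real campaign would derive these
# from the actual rumor's wording.
ISSUE_TERMS = {"rumor", "conspiracy", "scam"}

def long_tail_targets(queries, min_words=4):
    """Keep long-tail queries (min_words or more words) that mention
    the issue, so clarifying content can target each one."""
    targets = []
    for q in queries:
        words = q.lower().split()
        if len(words) >= min_words and ISSUE_TERMS & set(words):
            targets.append(q)
    return targets

sample = [
    "is the wayfair rumor true",
    "wayfair",
    "furniture sale",
    "wayfair conspiracy explained by reuters",
]
print(long_tail_targets(sample))
```

In practice these queries would come from a keyword research tool or search console export; the point is simply that long, specific phrasings of the rumor are the ones worth answering directly.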
As a victim of a misinformation trend, the most important thing to keep in mind is that you will likely never overcome the rumors or slander entirely. There will always be corners of the internet that clarifications, explanations, and even apologies won’t reach. But by maintaining an honest and ethical response to the issue in all available public spaces, you establish a source of truth to fall back on.
Avoiding the Further Spread of Misinformation
A secondary responsibility of content creators goes beyond simply defending themselves against false claims online; it also extends to the creation of new content itself. You must ensure everything published, whether on a site blog, on social media, in newsjacks, in newsletters, and so on, is accurate and honest.
Intentionally or not, it’s easy to fall into the trap of spreading misinformation when the proper research isn’t done beforehand to verify legitimacy with reliable sources. In that same vein, it’s important to remember that most people don’t actively set out to spread false information; they are simply victims of the algorithms and the disinformation machine that emerges naturally online.
To lessen the chances of both coming across misinformation and sharing it, research project leader Kristin Lerman of USC suggests a healthy, “varied information diet” that draws information from multiple different sources. Ideally, these sources should present the same information from different perspectives, or at least not share the same backgrounds, so information arrives from more than one angle.
However, even with the best intentions, everyone falls victim to misinformation at one time or another. For individuals, this might be no more of a headache than deleting the post or social media share; but when the information is spread by your business, addressing the issue right away may save your reputation and head off future ire from customers or visitors to your site.
When such instances occur, Forbes contributor Anne Marie Malecha suggests: “Correct the information immediately and work quickly to determine the source of the misinformation.” This also includes potentially reaching out to any other bad-faith sites that may have linked to yours, and requesting takedowns of mentions or harmful backlinks.
From there, the next step is curating an honest and authentic content strategy to regain the trust of your audience, along with considering further tactics such as new content creation and link building campaigns to refresh your rankings in the SERPs.
With misinformation online running as rampant as it does, it’s impossible to avoid sharing or engaging with all of it. But by learning how to determine false claims, how to seek legitimate and neutral truths from multiple perspectives, as well as how to address when misinformation is shared, the spread can be slowed while highlighting legitimacy across the web.