Google’s Jigsaw unit sponsors a RAND report that recommends infiltrating and subverting online conspiracy groups from within while planting authoritative messaging wherever possible.

May 6, 2021

With a focus on online chatter relating to alien visitations, COVID-19 origins, white genocide, and anti-vaccination, the Google-sponsored RAND report published last week shows how machine learning can help detect and understand the language used by “conspiracy theorists.”

While much of the 108-page report is highly technical, describing machine learning approaches for identifying and making sense of conspiracy language online, we’re not going to focus on any of that here.

Instead, we will zoom in on the report’s “Policy Recommendations for Mitigating the Spread of and Harm from Conspiracy Theories” section and attempt to see how those recommendations might be received in the real world.

“Conspiracists have their own experts on whom they lean to support and strengthen their views […] One alternative approach could be to direct outreach toward moderate members of those groups who could, in turn, exert influence on the broader community” — RAND report

Diving into the report’s policy recommendations, they all have one thing in common — they all seek to plant authoritative messaging wherever possible while making it seem more organic, or to make the messaging more relatable to the intended audience at the very least.

The four policy recommendations are:

  1. Transparent and Empathetic Engagement with Conspiracists
  2. Correcting Conspiracy-Related False News
  3. Engagement with Moderate Members of Conspiracy Groups
  4. Addressing of Fears and Existential Threats

The original narrative from authoritative sources always stays the same, but the message is usually filtered through intermediaries that act like marketing, advertising, and PR firms.

What follows doesn’t have anything to do with the validity of any conspiracy theory, but rather focuses on the Google-sponsored RAND report’s messaging strategy through the following lens:

Are ‘conspiracy theorists’ more likely to believe an authoritative message when it comes from someone else?

Or

Are they more likely to focus on the validity of the message itself without placing all their trust in the messenger?

The Google-sponsored RAND report recommends that the government bet on the former.

But could such a move actually encourage the latter?

It’s a message versus messenger type of debate.

Let’s dig in.

“A common thread among all the conspiracy groups was distrust of conventional authority figures” — RAND Report

To begin, Jigsaw’s latest collaboration with the RAND Corporation reveals that across the board “conspiracy theorists” show a high distrust of “conventional authority figures” while preferring “their own experts on whom they lean to support and strengthen their views.”

The idea of distrust in conventional authority will be a major theme throughout this story as the RAND report promotes subversion from within, planting conventional authority messaging among certain members of the community and hoping it will spread.

The report suggests that conspiracy theorists won’t listen to conventional authority, but they’ll listen to leaders in their groups, so the plan is to target potential influencers in online conspiracy groups who are somewhat on the fence and could toe the conventional authority line.

For example, the report recommends infiltrating and subverting online conspiracy chatter by singling out the more “moderate members” of the group who could become social media influencers in their own right.

“Evidence suggests that more than one-quarter of adults in North America believe in one or more conspiracies” — RAND report

According to the report, “Conspiracists have their own experts on whom they lean to support and strengthen their views, and their reliance on these experts might limit the impact of formal outreach by public health professionals. [all emphases are mine]

“Our review of the literature shows that one alternative approach could be to direct outreach toward moderate members of those groups who could, in turn, exert influence on the broader community.”

So the logic goes:

  • Problem – Conspiracists have their own experts
  • Solution – Direct outreach toward moderate members
  • Purpose – Exert influence on the broader community

In other words, they want to turn those who aren’t completely on board with the entirety of the conspiracy into social media influencers for their authoritative marketing campaigns.

But what would be the incentive to flip?

“Commercial marketing programs use a similar approach when they engage social media influencers (or brand ambassadors)” — RAND report

The report goes on to say, “Commercial marketing programs use a similar approach when they engage social media influencers (or brand ambassadors), who can then credibly communicate advantages of a commercial brand to their own audiences on social media.”

Incentivizing social media influencers to become ambassadors for a specific brand means the influencers benefit by getting paid, and the companies benefit by reaching a wider audience.

It’s a deal driven by financial incentives in order to gain more influence.

But again, what’s the incentive for “moderate members” of so-called conspiracy groups to flip?

What would a moderate member gain by not only denouncing their former beliefs, but also by becoming a continuous bullhorn, shouting at people as one who has seen the folly of their ways?

Would it be for moral reasons, or for some other type of gain?

“It might be possible to convey key messages to those who are only ‘vaccine hesitant,’ and these individuals might, in turn, relay such messages to those on antivaccination social media channels” — RAND report

Remembering that all four chatter groups studied have a distrust of conventional authority figures, RAND suggests using the more easily persuaded in the group (moderates who aren’t fully convinced) to carry out the messaging of conventional authority figures on their behalf.

With regards to “anti-vax” groups the report suggests, “it might be possible to convey key messages to those who are only ‘vaccine hesitant,’ and these individuals might, in turn, relay such messages to those on antivaccination social media channels.”

This tactic of being sneaky about where the messaging is coming from may be one of the reasons why people don’t trust conventional authority in the first place — a lack of transparency.

The Google-backed RAND report attempts to balance its infiltration and subversion technique by recommending another approach: transparency via “transparent and empathetic engagement with conspiracists.”

“Instead of confrontation,” the report reads, “it might be more effective to engage transparently with conspiracists and express sensitivity. Public health communicators recommend engagements that communicate in an open and evidence-informed way—creating safe spaces to encourage dialogue, fostering community partnerships, and countering misinformation with care.”

In any case, all efforts at “mitigating the spread and harm from online conspiracy theories” are aimed at directing users to accept the very sources they trust the least — conventional authority.

“An additional technique beyond flagging specific conspiracy content is facilitated dialogue, in which a third party facilitates communication (either in person or apart) between conflict parties” — RAND report

Another example of transparent and empathetic engagement suggested in the report has to do with outsourcing the authoritative messaging to third parties.

“An additional technique beyond flagging specific conspiracy content is facilitated dialogue, in which a third party facilitates communication (either in person or apart) between conflict parties,” the report suggests.

This third party approach “could improve communication between authoritative communities (such as doctors or government leaders) and conspiracy communities.”

Again, the logic goes:

  • Problem: Conspiracy communities neither trust nor interact with authoritative communities
  • Solution: Third party facilitates communication
  • Purpose: To improve communication between authoritative communities and conspiracy communities

Alternative Avenues for Authoritative Messaging

So far, we’ve discussed two of the four recommendations made in the report:

1. Engaging moderate members of conspiracy groups
2. Facilitating third party dialogues

Both of these recommendations are about finding ways to disseminate authoritative messaging.

The remaining two recommendations have the same purpose:

3. Providing corrections to conspiracy-related false news
4. Intervening to address fears and limit potential societal harms

With respect to correcting “false news,” the report suggests that public health practitioners use their positions of authority to “correct instances of misinformation using such tools as real-time corrections, crowdsourced fact-checking, and algorithmic tagging.”
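The report doesn’t specify how “algorithmic tagging” would be implemented, but the general idea can be sketched as a pipeline that matches posts against claim patterns and attaches a correction label when something matches. Everything below — the function names, the keyword patterns, and the label text — is my own hypothetical illustration, not anything from the report or from any real platform’s system.

```python
# Hypothetical sketch of "algorithmic tagging": score a post against a
# list of claim patterns and, on a match, attach a correction label.
# The patterns, labels, and correction text are illustrative only.
import re

CLAIM_PATTERNS = {
    r"\bmicrochip(s)? in (the )?vaccine": "vaccine-microchip",
    r"\b5g\b.*\bcovid\b": "5g-covid",
}

CORRECTION = "Context: public health agencies have addressed this claim."

def tag_post(text: str) -> dict:
    """Return the post plus any correction flags that matched."""
    flags = [name for pattern, name in CLAIM_PATTERNS.items()
             if re.search(pattern, text, re.IGNORECASE)]
    return {
        "text": text,
        "flags": flags,
        "label": CORRECTION if flags else None,
    }

print(tag_post("They put microchips in the vaccines!")["flags"])
```

Real platforms presumably use trained classifiers rather than keyword rules, but the shape is the same: detect, flag, attach authoritative context.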

On the addressing fears front, this tactic is a means of persuasion by “using the intended audience’s values rather than the speaker’s values” to get the authoritative message across.

I came away from this report with a couple of observations:

  1. The authors recognize that conspiracy theorists don’t trust conventional authority
  2. Despite this recognition, the authors don’t try to alter the message — just the messenger

This led me to the inference that the authors don’t see the problem as being the authoritative message, which makes sense since it’s coming from them, but rather the authoritative messenger itself.

Therefore, all of their suggestions are about staying the course on the narrative while filtering it through anybody who isn’t them. It’s a marketing thing.

Message vs Messenger

Which do you believe is more important for discerning theories of any type — the message or the messenger?

A good message can fall on deaf ears if the messenger isn’t trusted, and a bad message can negatively influence audiences when the messenger is blindly trusted.

I don’t place any judgment on what basically boils down to pure marketing tactics in the recommendations, but I do question:

Are ‘conspiracists’ more likely to place their trust in the message over the messenger, or vice versa? 

From what I see, the authorities are betting on the belief that if they can just gain influence over the messenger, then their message might prevail.

Analysis: Possible Future Outcomes

I see multiple possible outcomes from taking this bet and applying the report’s recommendations in real life:

  • People will blindly follow whatever influencers in their group have to say:
    • If authoritative messaging is successful, moderate members flip to become influencers and help guide the flock to greener pastures as ‘brand ambassadors’ for the common good, teaching others the errors of their ways.
    • If authoritative messaging is unsuccessful and the subversion fails, the moderate member is elevated to the status of anti-establishment influencer, will be positively seen by the group as ‘not selling out,’ and the group still won’t trust conventional authority.
  • The authorities ask third parties to do their talking:
    • If third party dialogues are successful, the conspiracy theorists will have all of their doubts answered and backed by claims from authoritative sources that are presented in a way that resonates with them, so they can better understand the overall picture and reject conspiracies. Both sides are willing to cede some ground.
    • If third parties don’t succeed in addressing all of the group members’ concerns, the authoritative message will be remembered by conspiracists, and every time they hear the same authoritative rhetoric, they will immediately distrust it, no matter who it comes from.
  • Algorithms will identify and flag any messaging that goes against the mainstream narrative and provide alternative context (something big tech already does):
    • Some conspiracy group members will be persuaded by the bombardment of content flagged by algorithms, and they will slowly come around to believing that the fact-checkers are right by the sheer volume of evidence and/or peer pressure to conform.
    • Conspiracy group members already don’t trust authority, so the warning labels will do nothing but strengthen their resolve.
  • Authorities engage directly in civil conversation with conspiracy theorists:
Adversaries come together, and they find some common ground. Both sides acknowledge where they’ve made mistakes while respecting each other’s differences, so long as nobody is causing harm. Agree to disagree on some points while conceding others. There may not be a consensus where one side is an obvious winner, but some level of understanding is gained and can be incorporated into future dialogues.
    • Adversaries come together, and they can’t agree on anything. Two versions of reality exist, and no one can establish a basic set of ‘facts’ that would form the basis of any rational argument. Agree to disagree on everything. Nothing is gained.

With the above scenarios, which are by no means exhaustive, I attempted to see how each recommendation could theoretically play out in the real world while trying to take both sides’ points of view into account.

As long as the authorities don’t call for infringing on the rights of individuals (including conspiracy theorists), is there anything wrong with some of their more subversive tactics if they’re for the greater good and done with the best intentions?

And shouldn’t any theory, conspiratorial or not, collapse when presented with irrefutable evidence to the contrary?

The reality is that the strongest arguments don’t always win out, and humans are stubborn creatures. It takes a lot to knock down long-held beliefs without some type of profound revelation taking place within the individual.

“Removal of sarcastic discourse could reduce the signal-to-noise ratio between conspiracy and non-conspiracy discussions, providing a much clearer view of the characteristic stance found in conspiracies propagated on social media” — RAND report

Personally, I think the rift between conventional authorities and conspiracy groups is too great.

Authoritative messages may get through to some conspiracy theorists, but overall, I don’t think either side is going to be persuaded in any meaningful way that would effect real change.

Trying to infiltrate groups and subvert certain members seems like a tactic that would be perceived as an intrusion, furthering the divide and leading to even less trust, but we shall see how it all plays out.

Next on the horizon, they’ll be going after sarcasm.

If you’re looking for some background information on the report, below are a few snippets about its origins, data collection methods, and other findings. All bullet points are direct quotes from the report.

Report Origins:

  • Google’s Jigsaw unit asked RAND Corporation researchers to conduct a modeling effort to improve machine-learning technology for detecting conspiracy theory language by using linguistic and rhetorical theory to boost performance.
  • This research was sponsored by Google’s Jigsaw unit and conducted within the International Security and Defense Policy (ISDP) Center of the RAND National Security Research Division (NSRD).
  • NSRD conducts research and analysis for the Office of the Secretary of Defense, the US Intelligence Community, US State Department, allied foreign governments, and foundations.
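The modeling effort mentioned above — machine learning to detect conspiracy theory language — is not something the report reduces to a simple recipe, but the basic idea of classifying text by its word patterns can be sketched with a toy bag-of-words Naive Bayes classifier. The training examples, labels, and scoring below are entirely my own invention for illustration; the report’s actual models and linguistic features are far more sophisticated.

```python
# Toy sketch of detecting "conspiracy language" with a bag-of-words
# Naive Bayes classifier. Training data and labels are invented for
# illustration; this is not the report's methodology.
from collections import Counter
import math

TRAIN = [
    ("they are hiding the truth from us", "conspiracy"),
    ("the elites control everything in secret", "conspiracy"),
    ("the weather was nice today", "other"),
    ("i enjoyed the new restaurant downtown", "other"),
]

def train(examples):
    """Count word frequencies per class label."""
    counts = {"conspiracy": Counter(), "other": Counter()}
    for text, label in examples:
        counts[label].update(text.split())
    return counts

def classify(counts, text):
    """Pick the label with the highest smoothed log-probability."""
    scores = {}
    for label, words in counts.items():
        total = sum(words.values())
        vocab = len(set(words))
        # Add-one smoothing so unseen words don't zero out the score.
        scores[label] = sum(
            math.log((words[w] + 1) / (total + vocab))
            for w in text.split()
        )
    return max(scores, key=scores.get)

model = train(TRAIN)
print(classify(model, "the truth is hiding in secret"))
```

A real system would use far larger corpora and richer features (the report leans on linguistic and rhetorical theory), but the detect-by-word-statistics core is the same.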

Data Collection Methods:

  • Data collection was conducted through the social media tracking company Brandwatch.
  • Social media sources were Twitter, Reddit, and a large selection of online forums and blogs. We also used one-off sources, such as the transcript of the ‘Plandemic’ viral video (2020).
  • Report studied four specific conspiracy theory topics: alien visitation, anti-vaccination content, COVID-19 origins, and White Genocide (WG).
Source: RAND

Findings on Conspiracy Theorists:

  • A common thread among all the conspiracy groups was distrust of conventional authority figures.
  • Evidence suggests that more than one-quarter of adults in North America believe in one or more conspiracies.
Pro-conspiracy theorists also find themselves wading deeper into social media–based echo chambers with decreasing exposure to non-conspiracy viewpoints.
    • These echo chambers contribute to a deepening polarization of viewpoints, and the posts disseminated within such echo chambers can reach and influence the broader internet.

Removing Sarcasm on the Horizon

One particular data quality issue is the contamination of conspiracy discourse through sarcasm or quotation.
  • Determining whether certain social media comments are sarcastic can be particularly confusing even for humans, especially without context of the greater conversation.
  • Removal of sarcastic discourse could reduce the signal-to-noise ratio between conspiracy and non-conspiracy discussions, providing a much clearer view of the characteristic stance found in conspiracies propagated on social media.
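The report doesn’t publish how it would identify and remove sarcastic or quoted discourse, but the preprocessing step it describes can be sketched with a naive heuristic filter. The markers and rules below are my own illustration and would be far too crude for real use.

```python
# Naive sketch of filtering sarcastic or quoted posts before analysis,
# to reduce contamination of the conspiracy-discourse corpus.
# The markers and heuristics are illustrative, not the report's method.
SARCASM_MARKERS = ("/s", "yeah right", "oh sure")

def is_probably_sarcastic(text: str) -> bool:
    lowered = text.lower()
    return any(marker in lowered for marker in SARCASM_MARKERS)

def is_quotation(text: str) -> bool:
    # Treat posts that open with quote syntax as quoting someone else's
    # stance rather than stating their own.
    return text.strip().startswith(('"', ">"))

def filter_corpus(posts):
    """Keep only posts that look like genuine first-person discourse."""
    return [p for p in posts
            if not is_probably_sarcastic(p) and not is_quotation(p)]

corpus = [
    "The lab-leak story deserves scrutiny.",
    "Oh sure, the aliens built it /s",
    '"they are hiding the truth" - some forum post',
]
print(filter_corpus(corpus))  # keeps only the first post
```

As the report notes, even humans struggle to spot sarcasm without conversational context, which is why a keyword heuristic like this would misfire constantly — the hard part is the detection, not the removal.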

I recommend downloading the full Google-sponsored RAND report here because it goes into great detail about the data collection methods and inferences, along with the studies that helped the authors formulate their recommendations.
