“Don’t Share That!” An Expert on How to Protect Yourself From Racist Disinformation Campaigns

Online disinformation has once again taken a dangerous turn, due to an anti-immigrant rumor being amplified by the Republican candidates for president and vice president.

This article was originally published by The Emancipator.

The original falsehood comes from a woman in Springfield, Ohio, whose weeks-old social media post claimed that Haitian immigrants were eating cats. The story’s audience grew with each reshare, and the claim wound up being highlighted in the recent presidential debate.

The Black population of Springfield is now under siege, navigating everything from bomb threats to school closures. However, the online damage began weeks before Donald Trump and his running mate, JD Vance, doubled down on the false claims.

This fallout is why it is imperative for social media users to be aware of the dangers posed by digital influence machines, and to actively seek credible information so they can make informed decisions before engaging with and sharing messages on social media.

With platforms like Instagram and Facebook hosting billions of posts daily, it was all too easy for the anti-immigrant rhetoric to be picked up and pushed out by social media algorithms that curate content for users based on their prior reading histories. Users need to take a more active and critical stance when evaluating claims made online, where there is little human oversight or mediation of the content being shared.

A business model that harnesses digital surveillance to hijack users’ attention and reasoning allows false rumors to spread faster on social media than truthful information, and users can unwittingly promote harmful content.

On social media, algorithms analyze user data — including search terms, time spent on-site, media viewing habits, and even scrolling patterns — to create profiles that target users with sequences of posts designed to increase engagement.

The Data & Society Institute describes these systems as “digital influence machines,” highlighting their role in a cycle in which digital surveillance tools are used to optimize content that hijacks a user’s time and attention.

TikTok, for instance, is particularly effective at this, with an industry-leading 95 minutes of daily use per user.

While these digital influence machines make feeds feel more personalized, relying on them leads content creators and readers alike to prioritize the platform’s goals over their own. This makes users more vulnerable to manipulation, especially people from marginalized groups.

Malicious actors weaponize these groups’ identities and culture to drive polarization and further disenfranchisement. The same systems that drive people to engage on social media also enable these bad actors to promote harmful messages.

Examples of these bad-faith digital actors include the Facebook group Williams and Kevin, whose operators pose as Black influencers promoting Black identity and culture but later share messages that push conspiracy theories and encourage boycotting presidential elections.

On platforms like X (formerly Twitter), accounts have been found spreading targeted disinformation against marginalized populations through AI-generated or misidentified images and videos used to provoke outrage and deepen divisions.

By prioritizing engagement above all else, social media algorithms and their owners have shown themselves to be irresponsible stewards of online information. Mark Zuckerberg of Meta has repeatedly appeared before Congress to apologize for allowing harmful disinformation on his platforms.

In contrast, X owner Elon Musk positions himself as a “free speech absolutist,” despite his record of defending only speech that aligns with his views. These examples show how nefarious these influence machines can be, and the failure of their leaders to limit them.

Social media users are more than just consumers; they are also producers and promoters of the content on these platforms. Here are a few key news media literacy concepts that can help you and your networks resist the pull of disinformation.

First, we have to understand the term disintermediation: the removal of human intermediaries, such as editors and publishers, who make content relevant and useful for audiences by holding it to certain standards.

For example, The Emancipator is an intermediated outlet where editors select content that informs and engages readers through insightful commentary and reporting. Conversely, the largest social media platforms, including TikTok and Facebook, are largely disintermediated, with algorithms prioritizing engagement over credibility.

It’s critical for users to sharpen how they evaluate content consumed on social media, especially content labeled as “news.”

Experts like me define “news” as information of public interest that is subject to a journalistic process of verification, and for which an independent individual or organization is directly accountable.

Verification is crucial for news. It ensures that published claims are supported by direct, methodically collected evidence. Credible sources are transparent about what they know and how they know it. Applied to the aforementioned lie about Haitian immigrants in Springfield, the claim fails this test: no such evidence has been offered to support it.

Independence involves transparency about a media producer’s motivations and any conflicts of interest. Independent sources disclose relationships with their subjects, whether business-related, familial, or otherwise, and operate under organizational oversight to ensure fairness.

In this case, the source — “The Calvin Coolidge Project” on X — fails this test. The account only identifies itself as “delivering breaking news…” and offers no organizational backing or standards, just extreme right-wing commentary.

Accountability refers to taking responsibility for published content. Credible information is tied to identifiable authors and publication dates. Consumers need to seek out bylines with real names and verifiable backgrounds.

In the viral claim about Haitian immigrants, there are no names attached to any of the sources: just the vague label of “neighbor” or the name of a group. This makes it impossible to verify who is providing this information or to check their credentials.

This “viral” claim clearly fails all three of these tests, yet it gained traction online and in the public sphere. Once it did, repetition made the unfounded rumor difficult to dislodge.

The three principles of verification, independence, and accountability are key to cultivating a more critical relationship with the media you consume.

Applying them regularly, along with other news media literacy tools, can help you and others navigate this complex and often misleading information landscape.
