When Algorithms Amplify Division: Lessons from Charlie Kirk’s Death
Haileleol Tibebu / Sep 18, 2025

As more details of the assassination of conservative activist Charlie Kirk at Utah Valley University emerge, including material from the social media feeds of 22-year-old Tyler Robinson, who is charged with aggravated murder, the national and global online reaction already points to something bigger.
Social feeds don’t just reflect what individuals feel; they steer it.
Within minutes of the news of the shooting, social feeds split into mirrored worlds: tributes and anger in one, taunts and conspiracy theories in another.
Modern feeds on Facebook, Instagram, YouTube, TikTok, and X are ranked by machine-learning systems that predict what will keep users engaged. The inputs are mostly behavioral: watch time, dwell time, clicks, shares, replies, "rewatches," and, occasionally, measures of satisfaction.
Because platforms chase engagement, the most intense content tends to spread more widely. That’s the logic behind engagement-based feeds. A study from the Knight First Amendment Institute at Columbia University shows that these systems often amplify posts that stir anger, especially toward people with different beliefs or values.
Compared to a simple, time-ordered feed, engagement-based ranking is far more likely to reward division. That doesn’t mean algorithms cause extremism. It means they often learn from our pull toward heated content and then serve us more of it.
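To make the contrast concrete, here is a minimal, hypothetical sketch of the difference between engagement-based and chronological ranking. The signal names, weights, and scoring function are illustrative assumptions, not any platform's actual system; the point is only that when replies and reshares are weighted heavily, the most provocative post tends to rise to the top.

```python
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    age_hours: float           # time since posting
    p_click: float             # predicted probability of a click
    p_reshare: float           # predicted probability of a reshare
    p_reply: float             # predicted probability of a reply
    pred_dwell_seconds: float  # predicted dwell time on the post

# Hypothetical weights: replies and reshares (the "heated" behaviors)
# count far more than quiet reading. Real systems tune weights constantly.
WEIGHTS = {"p_click": 1.0, "p_reshare": 4.0, "p_reply": 6.0, "pred_dwell_seconds": 0.05}

def engagement_score(post: Post) -> float:
    """Weighted sum of predicted behavioral signals (illustrative only)."""
    return (WEIGHTS["p_click"] * post.p_click
            + WEIGHTS["p_reshare"] * post.p_reshare
            + WEIGHTS["p_reply"] * post.p_reply
            + WEIGHTS["pred_dwell_seconds"] * post.pred_dwell_seconds)

def rank_by_engagement(posts):
    """Engagement-based feed: highest predicted engagement first."""
    return sorted(posts, key=engagement_score, reverse=True)

def rank_chronologically(posts):
    """Time-ordered feed: newest first, no behavioral prediction involved."""
    return sorted(posts, key=lambda p: p.age_hours)

if __name__ == "__main__":
    feed = [
        Post("calm_update", age_hours=0.5, p_click=0.10, p_reshare=0.01,
             p_reply=0.01, pred_dwell_seconds=20),
        Post("outrage_bait", age_hours=6.0, p_click=0.25, p_reshare=0.12,
             p_reply=0.20, pred_dwell_seconds=45),
    ]
    print([p.post_id for p in rank_by_engagement(feed)])    # outrage_bait first
    print([p.post_id for p in rank_chronologically(feed)])  # calm_update first
```

The only difference between the two functions is the sorting key, yet the older, angrier post wins whenever the key is predicted engagement.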
Confirmation bias pulls in the same direction. Feeds are heavy with like-minded sources, and even though their causal impact on polarization is complex, in day-to-day use they can harden a person's certainty and make it easier to dehumanize opponents, especially during breaking events like this tragedy.
A growing body of research reveals that emotionally charged content — especially anger and moral outrage — travels farther and sticks longer. Experiments and platform-scale studies continue to find that outrage and conflict drive replies and “rage-clicks,” the very behaviors engagement-hungry models reward. Even attempts to graft corrective context, such as community fact-checks, can heighten anger in replies. This suggests that design choices can influence emotional cascades.
The US Surgeon General, in a 2023 report, warned of the "profound mental health risk" to youth from social media use. A 2024 meta-analysis published in JAMA Pediatrics finds a positive association between social media use and internalizing symptoms, including anxiety and depression, and a 2025 study links higher use to "increased depressive symptoms over time."
Certainly, correlation isn't causation, but at population scale, small effects multiplied across billions of sessions become policy problems. Layer onto that a feed architecture built for continuous affirmation: a steady diet of grievance-confirming posts, enemy caricatures, and outrage rewarded with visibility. The result is a set of conditions that can incubate aggression.
Most people experiencing anxiety or depression are not violent; the danger arises when distress, grievance, and algorithmic reinforcement converge on a small subset.
On the positive side, social platforms also connect families, elevate marginalized voices, host lifesaving information, and give new businesses a chance to compete. And a human, not an algorithm, killed Charlie Kirk. All of that is true. But when attention is the master metric, feeds lean toward whatever keeps us tuned in, not what serves our well-being. That is legal, but not necessarily ethical.
The crucial question is whether to align design with human flourishing rather than with rage retention.
That is possible without censorship: make systems more legible, give users real choices, and stop optimizing for anger. Regulators should require useful transparency and genuine user control over ranking systems.
The European Union's Digital Services Act already pushes very large platforms to disclose their recommender logic and offer a non-profiling option. Platforms should also offer genuine choice: clearly labeled alternatives beyond "For You," including chronological and friends-first feeds, along with visible "Why am I seeing this?" explanations. TikTok's new "Manage Topics" feature is a small step in that direction.
Most of all, it is essential to keep teaching people how these systems work. Algorithm literacy should be as common as nutrition labels. That means explaining “watch time,” “dwell,” and “reshare velocity” in schools, orientations, and in-app explainers. Independent evaluations have shown that digital citizenship curricula can improve students’ ability to navigate social media feeds.
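As one example of the kind of in-app explainer such literacy efforts could use, here is a toy definition of "reshare velocity." The name, window, and formula are assumptions for illustration; each platform defines its internal metrics differently.

```python
def reshare_velocity(reshare_timestamps, window_minutes=60.0):
    """Toy metric: reshares per minute within a recent window (illustrative definition)."""
    if not reshare_timestamps:
        return 0.0
    latest = max(reshare_timestamps)
    window_start = latest - window_minutes * 60  # timestamps are in seconds
    recent = [t for t in reshare_timestamps if t >= window_start]
    return len(recent) / window_minutes

# A post reshared 120 times over the past hour has a velocity of 2.0 reshares/minute;
# a ranking system that rewards fast-spreading posts would push it higher in feeds.
print(reshare_velocity([i * 30.0 for i in range(120)]))  # -> 2.0
```

Even a simple worked example like this makes clear why a post that spreads quickly, whatever the reason for its spread, gets shown to more people.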
Did an algorithm “kill” Charlie Kirk? No, a person did.
But research indicates that many attackers immerse themselves in online communities that validate grievances and lionize violence, communities that platform design makes easier to find and faster to spread. The National Institute of Justice published a study documenting how mass public shooters use social media to consume, perform, and signal violent identities, evidence that design choices can shape exposure.
The very algorithms that reward conflict helped make Kirk famous, giving his supporters the affirmation they craved and his critics the outrage they expected. That turns a medium built for connection into a loop that scales devotion and division in equal measure. The loop has to break if the country is serious about preventing the next death from gun and political violence.