Algorithms Don't Make the Rules
Gabriele de Seta, P. Kerim Friedman / Apr 25, 2025

If you are still a user of X, the platform formerly known as Twitter, you probably see a lot of posts by Elon Musk. This is, in part, because Musk has dramatically ramped up his tweeting since purchasing the platform in 2022, averaging 67 posts per day and apparently sleeping very little. But there's another reason why Musk keeps popping up in your feed: he has repeatedly changed how X's recommendation system works, altering its algorithms to boost the reach of his own posts. Even users who block him are still forced to see his posts. He has also been accused of altering the system in the months after his endorsement of Trump to boost right-wing content. Musk likes to show off his power over “the algorithm,” regularly making fun of those who criticize it, or asking X users to give him personal feedback to improve it.
Algorithms seem to rule every aspect of our online lives. They decide what shows up at the top of our search results, which videos we watch, and whose posts we see on social media. Because we don't quite understand how such algorithms work, we start treating these mysterious processes as capricious gods. Tech companies have an interest in cultivating this sense of awe and confusion, creating technical smokescreens that they can hide behind as they extract profit from our data. This is why we need to demystify “the algorithm,” demand that Big Tech firms embrace greater transparency, and give ordinary users the tools and skills necessary to control their own feeds.
Ultimately, an algorithm is just a set of rules, like the steps you follow when you bake a cake. These rules are arranged in step-by-step procedures and can be used to solve problems or achieve specific goals. Because computers can execute instructions much faster, and more accurately, than humans, algorithms have gotten more complex over time. These algorithms also have much more data to work with than ever before, allowing them to make surprising connections, such as linking taste in cars to one’s political views.
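To make this concrete, here is a minimal sketch in Python of a feed-ranking “recipe.” Everything in it (the post fields, the recency rule, the follow bonus) is invented for illustration; no platform works exactly this way. The point is that every step is a rule a person wrote down, and could have written differently.

```python
# A toy feed-ranking "recipe": each step is an explicit, human-written rule.
# All fields and numbers are invented for illustration, not any platform's logic.
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    age_hours: float  # how long ago it was posted
    followed: bool    # whether you follow this account

def score(post: Post) -> float:
    """Rule 1: newer posts score higher. Rule 2: followed accounts get a bonus."""
    recency = 1.0 / (1.0 + post.age_hours)       # newer -> closer to 1.0
    follow_bonus = 2.0 if post.followed else 0.0
    return recency + follow_bonus

posts = [
    Post("alice", age_hours=1.0, followed=False),
    Post("bob", age_hours=8.0, followed=True),
]

# The whole "algorithm": score every post, sort by score, show the top.
for post in sorted(posts, key=score, reverse=True):
    print(post.author, round(score(post), 3))
```

Change the follow bonus from 2.0 to 0.0 and the feed becomes purely recency-driven; that one edit is the whole difference between two very different experiences of the same posts.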
Our networked societies have also made us increasingly dependent on algorithms. We have too many choices and too many online connections, and never enough time to process all of them. So, many people let search engines dynamically rank their result pages, trust dating services to suggest potential matches, and follow the optimal routes maps provide based on live traffic information. We offer up our shopping and search history in exchange for the promise of finely tuned algorithmic recommendations. It can sometimes seem like our apps understand us better than we know ourselves.
When Twitter founder and former CEO Jack Dorsey says, “the algorithms are definitely programming us, and it's very, very hard to tell what an algorithm is going to do” or when Elon Musk responds by asking “Will we actually choose the algorithm?”, alarm bells should be ringing. Yes, it is scary to cede so much power to algorithms, as it means giving up control over key aspects of our lives to processes and systems we don't fully understand. We are right to be concerned and suspicious. But algorithms don't make the rules, people do. In a time when powerful figures peddle the inevitability of algorithmic rule, we should not confuse our delegation of decision-making with the inevitability of automation. Other rules, other algorithms, are always possible.
Algorithms are not as complex and inscrutable as the tech industry makes them out to be. Companies know very well what kinds of rules they are setting up, and what effects their algorithms are likely to have. For example, internal documents reveal how Facebook’s introduction of new reactions besides the iconic ‘Like’ turned out to be more than a mere cosmetic change. The new options—including ‘Wow’, ‘Sad’ and ‘Angry’—were given a higher recommendation score, presumably because stronger emotions drove user engagement and advertising revenue alongside it. That lasted until Facebook's own research revealed that this policy resulted in people seeing a lot of misinformation and clickbait, at which point the company flipped the algorithm, demoting such posts instead. It is unclear what effect more recent adjustments to Meta policies and content moderation systems will have on its platforms, but the company says its goal is to intervene less.
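The mechanics of a change like this are mundane. Reporting on the leaked documents suggested reactions were at one point weighted several times higher than a ‘Like’, with ‘Angry’ later cut to zero; the Python sketch below uses invented weights of roughly that shape to show how a one-line policy change rewires what a recommendation score rewards.

```python
# A hedged illustration of reaction weighting: the weight tables are invented,
# shaped loosely on what reporting described, not Facebook's actual values.
WEIGHTS_EMOTION_BOOSTED = {"like": 1, "wow": 5, "sad": 5, "angry": 5}
WEIGHTS_AFTER_DEMOTION = {"like": 1, "wow": 1, "sad": 1, "angry": 0}

def recommendation_score(reactions: dict[str, int], weights: dict[str, int]) -> int:
    """Sum each reaction count times its weight: a tiny table, huge downstream effects."""
    return sum(weights.get(name, 0) * count for name, count in reactions.items())

post_reactions = {"like": 120, "angry": 40}

print(recommendation_score(post_reactions, WEIGHTS_EMOTION_BOOSTED))  # 320
print(recommendation_score(post_reactions, WEIGHTS_AFTER_DEMOTION))   # 120
```

The same post, with the same audience reaction, falls from 320 to 120 the moment someone edits the table. Nothing about “the algorithm” is inevitable; the weights are a policy choice.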
While algorithms may not be all that mysterious once you understand them, a lack of transparency fosters conspiracy theories and undermines people's faith in public discourse. We become unable to distinguish between rules designed to keep us watching cat videos and rules designed to suppress speech on controversial topics. To regain a sense of agency, people embrace folk theories about how algorithms work: theories that may sometimes be presciently spot-on, and other times have little to no connection with reality. For example, Instagram creators started using AI-generated texts advertising fictional car models as descriptions for their Reels, believing that this would lead the platform's recommendation algorithms to favor their content. And it is not only users who respond to this lack of transparency through speculation: governments also latch onto the opacity of algorithms to push back against Big Tech or promote their vision of digital sovereignty.
The Chinese app TikTok (launched as an international version of ByteDance's Douyin short video platform) is a paradigmatic example of this. While TikTok is, at the time of writing, available in most of the world, it is banned in India, Iran, and North Korea, while countries like Russia and China implement separate moderation policies for their users. In the US, the December 2022 ANTI-SOCIAL CCP Act explicitly referred to “algorithmic learning” as a threat, alongside surveillance, censorship, and influence. Public discourse focused on the threat of Chinese manipulation, but as a US Senator recently suggested, TikTok's alleged promotion of pro-Palestine content was also a factor for some lawmakers who supported recent legislation requiring ByteDance to divest its ownership of TikTok.
If algorithms make things so complicated, wouldn't it be best to just avoid them altogether? Even on Facebook, you can still find a hidden chronological feed of all your friends' posts. The decentralized, open-source social network Mastodon only offers such a ‘reverse chronological’ feed, though you can filter it by searching for posts tagged with keywords that interest you (both options are, in fact, kinds of algorithms as well). But if you follow more than a handful of people without any kind of algorithmic curation, you will probably miss a lot of content likely to interest you. It would be like reading a newspaper where the front page is organized in the order that the reporters filed their stories, not by the importance given to those stories by the paper's editors. While one could conceivably read a newspaper cover-to-cover, the high frequency of social media posts makes it impossible to know what has been missed.
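It is worth spelling out why the ‘no algorithm’ options are themselves algorithms: a reverse-chronological timeline is a sort, and a keyword filter is a filter. The Python sketch below shows both, with invented posts and field names.

```python
# Both "non-algorithmic" options are algorithms: a sort and a filter.
# Posts and field names are invented for illustration.
from datetime import datetime

posts = [
    {"author": "ana", "text": "New paper on #algorithms", "time": datetime(2025, 4, 24, 9, 0)},
    {"author": "ben", "text": "Lunch photo", "time": datetime(2025, 4, 25, 12, 30)},
    {"author": "carla", "text": "Thread on #algorithms", "time": datetime(2025, 4, 23, 18, 15)},
]

# A Mastodon-style home timeline: newest first, no ranking beyond the clock.
chronological = sorted(posts, key=lambda p: p["time"], reverse=True)

# A keyword filter: still a rule, just a transparent one the user chose.
tagged = [p for p in chronological if "#algorithms" in p["text"]]

for p in tagged:
    print(p["time"].isoformat(), p["author"], p["text"])
```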
Until recently, it seemed like users were left with a stark choice: letting an opaque algorithm decide what they see, or leaving it up to the passage of time to show them whatever was most recent. This is a false dichotomy. Bluesky, originally a side project of Jack Dorsey when he was Twitter's CEO (though he is no longer affiliated with it), was built with the idea of giving users control over algorithms. Using third-party tools, you can create your own feeds, or follow feeds made by other users. For those who experiment with different algorithms or even make their own, it offers a unique playground to learn what tech companies already know: how algorithmic decisions affect what you see online. To take one colorful example, the "Fucking replies" feed by user @xunlingau.bsky.social shows any reply with the word "fuck" in it—across multiple languages. Not to be outdone, Meta’s Threads has added a custom feed tool as well. It is easier to use, but far less transparent about how it works.
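The editorial logic of such a feed can be surprisingly small. A real Bluesky feed generator is a service speaking the AT Protocol; the sketch below models only the filtering rule of a feed like the one above, with an invented word list and invented post records.

```python
# A hedged sketch of a custom feed's filtering rule: keep only replies whose
# text contains a listed word. The word list and posts are invented examples.
SWEARS = {"fuck", "fck", "merde", "scheisse", "cazzo"}  # illustrative, multilingual

def matches_feed(post: dict) -> bool:
    """The feed's entire 'editorial policy', as one readable rule."""
    if not post.get("is_reply"):
        return False
    words = post.get("text", "").lower().split()
    return any(swear in word for word in words for swear in SWEARS)

sample = [
    {"text": "What the fuck is this?", "is_reply": True},
    {"text": "Lovely sunset!", "is_reply": True},
    {"text": "Fuck yeah, shipped it", "is_reply": False},  # not a reply: excluded
]

print([p["text"] for p in sample if matches_feed(p)])
# -> ['What the fuck is this?']
```

Whether such a rule counts as curation, moderation, or comedy is up to whoever writes it, which is exactly the point: on an open platform, the rule-maker can be you.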
Where some people see chaos, anthropologist AbdouMaliq Simone sees hope. Looking at urban life in places like Jakarta, Old Delhi, and Phnom Penh, AbdouMaliq argues that the most mundane human interactions work together, in the aggregate, to forge new possibilities. He calls this concept “people as infrastructure.” We find a glimmer of hope in this concept, at a time when algorithmic systems are making collective life ever more vulnerable to manipulation. When they work best, algorithms boost the signals made by millions of users as they engage with posts: liking, sharing, commenting, or perhaps even blocking the people who posted them. But algorithms can also be manipulated to work against this human infrastructure, showing us instead only what some oligarch wants us to see. This has led some people to try to reject algorithms altogether. We feel that this is a mistake. We want, instead, to empower ordinary people to make the rules that shape our informational lives. We should work together to demystify algorithms, unmask their creators’ claims, and, most importantly, realize that we already are—as readers, viewers, commenters, creators, curators, and makers of our own feeds—our best informational infrastructure.