Perspective

AI Isn’t Responsible for Slop. We Are Doing It to Ourselves

José Marichal / Jul 15, 2025

José Marichal is a professor of political science at California Lutheran University and author of the forthcoming book You Must Become an Algorithmic Problem from Bristol University Press (October 2025).

Isolation by Kathryn Conrad & Digit / Better Images of AI / CC by 4.0

Our social media feeds are increasingly being overrun by AI slop, so much so that Fast Company’s Mark Sullivan has dubbed it the “AI Slop Summer.” Critics are drawing attention to the looming dangers of AI-driven content, as in a memorable John Oliver segment. Google’s new AI video generator, Veo 3, is being used to produce racist and antisemitic videos that are then posted to social media. YouTube has taken notice of this phenomenon; on July 15 it announced that the YouTube Partner Program will exclude AI slop from monetization.

This content isn’t only banal—it can make our toxic public sphere even worse. But while the very real dangers of AI slop are often framed as large tech companies imposing dangerous tools on an unsuspecting public, perhaps we should also consider why we are so receptive to low-quality content.

I suspect that most of the people reading this feel they are immune to AI slop—that our cultural tastes and critical faculties are too developed to fall for simplistic propaganda. But this is the wrong way to think about the purposes of AI slop. In the New York Times, Benjamin Hoffman defines slop as “shoddy or unwanted AI content in social media, art, books and, increasingly, in search results.” His updated definition connects modern AI slop to the historic use of the term, which has traditionally meant something easy to digest, if not especially tasteful or refined.

When we think of slop more like fast food, our attraction to it becomes clearer. Sometimes we want food that is easy to digest and not particularly refined. Otherwise, fast food restaurants would cease to exist. We can think of slop as “optimized” food. It addresses our desire for quick, familiar food that satisfies our hunger. Food that is “elevated” or “challenges us” may be a memorable gastronomical experience for those with more sophisticated palates. But, for many, “food as art” might be too unfamiliar to be enjoyed. From the perspective of a Michelin-rated chef, descriptors like “complex” or “visionary” are plaudits; but the average diner does not want their food to vex them.

What happens when we exclusively opt for slop in our diets? We know that a diet consisting of comfort food is bad for our physical health. Yet we are becoming less receptive to content that challenges us and are increasingly opting for “slop” in our cultural consumption—choosing fare that might amuse, legitimate, or engross, but does not challenge.

Comfort food culture

I’ve recently started watching movies from the early seventies (pro tip: most of them are available for free at the Internet Archive). This was an era of social, cultural, and political upheaval. Directors were grappling with weighty issues surrounding masculinity, racism, and the loss of belief in the unquestioned goodness of America. These movies consistently unsettle me, be it the sexual violence in Sam Peckinpah films like Straw Dogs or The Getaway, the sepia-toned bleakness of Robert Altman’s anti-Western McCabe & Mrs. Miller, or the provocative explorations of masculinity in Steven Spielberg’s Duel and Mike Nichols’s Carnal Knowledge. These films were not intended to be easily digestible; they were intended to provoke, to make you ask, “what did I just watch?”

In the interest of marital harmony, I generally spare my wife my newfound penchant for 50-year-old movies and stick to whatever looks good on Netflix or Hulu. But when we watch contemporary films or TV series, I notice an emerging trend—I’m often entertained but seldom challenged. Take, for example, Apple TV shows. Some of them are genuinely inventive and/or challenging (Severance, Bad Sisters and The Studio come to mind for me). But there are others that I enjoy while I’m watching, then soon forget once the series is done. Recent Apple TV shows like Palm Royale, Bad Monkey, or Your Friends and Neighbors have great casts, beautiful cinematography and engaging dialogue—all the elements of a great series. But to me, they have a “slop-like” feel, as if they were conceived by AI to be as unthreateningly pleasant as possible.

Zach Schonfeld notes in a recent article in The Guardian that the quality of celebrity documentaries has declined in recent years for a similar reason. In the most well-known case, Netflix decided to shelve a documentary about Prince because of its nuanced and partly unflattering portrait of the artist, scrapping the project under threat of lawsuit by Prince’s estate. This case reflects a broader trend toward gauzy, pleasingly banal portrayals of public figures that keep their beloved images intact at the expense of telling an accurate, and likely more interesting, story. The result is less space for portraying celebrities as the complex people they are or were. An example is the recent documentary Becoming Led Zeppelin on Netflix. The archival concert footage made for a delightful watch, but the film notably declines to engage with the more unsavory aspects of the band’s behavior chronicled in the biography Hammer of the Gods.

Our increased discomfort with outliers

Why are we increasingly being served non-threatening cultural fare? In my forthcoming book, You Must Become an Algorithmic Problem, I argue that after years of exposure to algorithmic recommendation engines and platform capitalism models that promise to give us “control” over our information and entertainment diet, we are losing our taste for “outliers.” I define outliers here as cultural content that doesn’t comport with our algorithmically curated views of the world. We’ve slowly become habituated to a deeply illiberal optimization ethic that rejects “outlier” perspectives. Rather than seeing deviations from the “algorithmic models in our heads” as opportunities to grow, we increasingly see outliers as dangerous anomalies to be ignored or ridiculed.

In AI and machine learning, outliers that defy explanation present challenges. In prediction models, outliers in the training data make it harder to reduce the model’s cost function (i.e., its prediction error). The outlier case creates a dilemma: if you fit the model to the outlier in your training data, you run the risk that the outlier will undermine your ability to predict future cases (a phenomenon known as overfitting). But if you ignore the outlier, your model retains a high cost function.
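The dilemma can be seen in a small numerical sketch (my own illustration, not from any particular system): fit an ordinary least-squares line to clean data, then inject a single extreme outlier and refit. The synthetic data and the outlier value are assumptions chosen purely to make the effect visible.

```python
import numpy as np

# Clean, roughly linear data: y = 2x + 1 plus small noise.
rng = np.random.default_rng(0)
x = np.arange(20, dtype=float)
y = 2.0 * x + 1.0 + rng.normal(0, 0.5, size=20)

def fit_and_mse(x, y):
    """Least-squares line fit; return mean squared error (the 'cost')."""
    slope, intercept = np.polyfit(x, y, deg=1)
    residuals = y - (slope * x + intercept)
    return float(np.mean(residuals ** 2))

mse_clean = fit_and_mse(x, y)

# Inject one extreme outlier and refit: the model can lower the
# outlier's residual only by tilting away from every other point.
y_outlier = y.copy()
y_outlier[10] = 200.0
mse_outlier = fit_and_mse(x, y_outlier)

print(mse_clean, mse_outlier)  # the outlier keeps the cost high
```

Either way the model loses: chasing the outlier distorts the fit for every ordinary point, while ignoring it leaves a large irreducible error, which is exactly the trade-off described above.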

Optimization has been part of our cultural ethos for a long time. As a young academic back in the 2000s, I had a penchant for “productivity” books like Timothy Ferriss’s The 4-Hour Workweek and David Allen’s Getting Things Done. These books promised “systems” that would increase my productivity and help me more efficiently achieve my goals. We’ve since graduated from the 2000s era of optimization as productivity. The engagement algorithms of the 2010s ushered in a kind of optimization geared to individual preference satisfaction rather than productivity. This form of optimization is more about consuming than creating. Platform economics became an exercise in collecting enough data on users to more efficiently cater to their preferences. This meant driving users through recommendation algorithms toward content the algorithm has pre-determined to fit each user’s aesthetic. This has led to the slopification of culture.

In a fascinating 2021 article by Jeremy Wade Morris, Robert Prey, and David B. Nieborg, the authors catalog the ways creators must engage in what they call “cultural optimization” to get their content picked up by recommendation algorithms. These optimization practices go beyond manipulating titles or thumbnails and extend to the actual creation of the content itself. In a particularly revealing passage, they describe the ways in which musicians have to adapt to the logics of platform capitalism to be successful:

To optimize a track for streaming requires the listener to be hit early and hard with a succession of repeated hooks. It is also common knowledge among musicians that a track only registers as a “stream” on Spotify once it is played for 30 seconds. Only then does it begin to generate royalty payments and count on music charts.

This need to adapt to the dictates of the algorithm encourages artists to see their work not as artistic expression but as data that must be recognized by the algorithm in order to be seen. Expression that is too innovative or challenging might go unseen because it doesn’t fit the algorithm’s categories. This is far different from the “long tail effect” that then-Wired editor Chris Anderson observed in the emerging Internet of 2004. He theorized that increased storage capacity meant the Internet could make obscure and challenging artistic content more easily available to a mass public, especially compared to going to a record store to find an obscure album. Presumably, this would encourage more novel and challenging artistic expression because it could be created, stored and retrieved with greater ease.

Twenty years later, it’s hard to argue that we’re living in a “long tail” culture. If anything, the storage capacity and download speeds are exponentially greater. However, optimization logic has made it such that “outlier” artifacts are hiding in plain sight. This manifests as a consumer preference for the familiar. Music critic Ted Gioia notes in The Atlantic that 70 percent of the US platform streaming market now consists of old songs, reflecting a decline in new music production.

This impact extends to video content as well. One could argue that artists have always had to modify their content to median tastes, but the algorithm adds a new dimension. When Blake Hallinan analyzed 200 YouTube videos, they found that creators weren’t simply modifying titles and thumbnails; they were also engaging in what Hallinan calls “value optimization” (i.e., changing their content by employing rhetorical strategies that aligned with the normative values of the platform). This transformed social critique into “aestheticized consumption”: instead of provocatively offering ideas or putting them in dialogue with one another, the content turned values and ideologies into consumer choices.

Increasingly, our real-life built environments are becoming homogeneous. In Kyle Chayka’s brilliant 2024 book, Filterworld, he observes that travel platforms like Yelp and TripAdvisor are contributing to a homogenization of physical space toward a Western, minimalist, non-threatening interior design that scores highly on recommendation algorithms. This style, which he calls “AirSpace,” caters more to Western tourists than to local taste. These spaces have the effect of turning cities into “non-places” that increasingly lose what makes them distinctive.

This slopification isn’t restricted to culture. Our politics have become slop as well. We’re living in an era where we have endless exposure to ‘news’ about world events that makes us angry in predictable ways. We live in an optimization culture where our emotions are gamified to relentlessly confirm our preferences. Our outrage is aestheticized and turned into a commodity that can be packaged. Politics shifts from being a rational discussion about arriving at shared notions of the good to another instance of “aestheticized consumption.” Politics and policy-making become an exercise in catering to, but not necessarily addressing, our anger.

The politics of slop

Our inability or unwillingness to have our attitudes and beliefs challenged isn’t just an aesthetic problem; it is a political one. It would seem that we live in a society in which everyone is “asking questions” and challenging conventions. But this is happening within one’s algorithmically curated zone of comfort. Liberalism demands that we strive to become curious people, to develop the critical and intellectual tools necessary to be a questioning person in the world.

The lack of critical reflection has been the core subject for critical theorists for the last 75 years. Herbert Marcuse’s One Dimensional Man depicts citizens in a liberal democracy who are too content with creature comforts to challenge their oppression. The post-structuralist philosopher Jean Baudrillard famously argued that we prefer the sterile, predictable “simulacra” of a Disney-fied version of an Italian bistro to the real-world alternative. Our collective desire for efficiency and predictability outweighs our desire for novelty and spontaneity.

Today, these tendencies are algorithmically amplified. With the increasing infiltration of AI, we face a less visible and more dehumanizing danger: social media allows for the removal of the ‘middlemen’ content creators and makes it easier for us to rely on synthetically generated slop that may seem to satisfy our need to consume content. However, like fast food, it fails to inspire us, to elevate our taste, or to challenge our preferences. Becoming an algorithmic problem and remaining open to understanding outliers gives us the best chance to hold onto our humanity and grow as citizens.
