
For Personalized AI, How Agreeable Is Too Agreeable?

Ruchika Joshi / Feb 28, 2024

As information is increasingly mediated by personal AI assistants, users must decide where to set the dial between agreeableness and exposure to diverse perspectives, says Ruchika Joshi.

Each time I start a conversation with Pi, an AI chatbot developed by the Silicon Valley startup Inflection AI, it responds by inquiring about my day. Soon I find myself warming up to it, asking it to help edit an email, explain a complex political issue, or advise me on how to confront my loud neighbor. Pi’s current functionality gives a glimpse into where custom AI tools are headed. Industry leaders predict that hyper-personalized AI assistants will eventually become ubiquitous, taking on roles once reserved for other humans, from life coach to soulmate.

A key feature of many personal AI assistants is their programmed agreeableness. Pi is crafted to be “useful, friendly, and fun,” while Replika, the “AI companion who cares,” seeks to “always [be] on your side.”

This approach likely makes business sense: users prefer AI tools they perceive as polite, and they value the warmth that service robots offer. And since humans seek confirmation of their existing beliefs, user attitudes toward anthropomorphized AI assistants are tied to how like-minded those assistants appear.

In contrast, being confronted with perspectives that challenge those beliefs creates cognitive dissonance. It also creates more work: understanding the logic of new ideas, assessing their validity, and integrating them into our existing worldview all require time and energy. So when I ask my AI assistant, “Does God exist?”, its business incentive is to provide only one right answer: the one I paid for.

But it’s not hard to imagine that such agreeableness may have a dark side. If personal AI assistants are optimized to avoid confronting users with challenging ideas, they may isolate them within a filter bubble of one. Users who rely heavily on personal assistants designed to appease them risk limiting their exposure to diverse perspectives and the opportunity to exercise critical reasoning to arrive at their own beliefs. An overly agreeable AI assistant may polarize users, influence their decision-making, and even undermine their autonomy. At scale, such dynamics could erode any foundation for constructing a shared collective truth and maintaining social cohesion. Even if such extremes do not materialize, marginal effects could accumulate considerably across a large user base.

So who decides where the balance lies? Although businesses can change how agreeableness is programmed within AI tools, encoding exposure to a greater diversity of ideas or even occasional dissent may not be profitable.

Governments, in contrast, could theoretically nudge businesses to expose their users to multiple perspectives, but mandating such exposure would impinge on freedom of speech. Alternatively, regulators may consider such an intervention within the ambit of emerging AI regulations that prohibit “cognitive behavioural manipulation.” Even so, enforcing accountability would be a Herculean task for two reasons.

First, information is not merely exchanged between a user and their AI assistant but co-created, which makes it difficult to hold the AI tool solely responsible for injecting diversity into the interaction. For instance, a user may ask their AI assistant to collaborate on a children’s story: the user starts with a basic description of a protagonist, the chatbot adds challenges for that protagonist to face, and as the story progresses both contribute ideas, characters, and plot twists. The narrative becomes a blend of the user’s creativity and the AI’s responses, making it impossible to fully separate each party’s contribution. Holding the chatbot alone responsible for content diversity therefore becomes impractical for policymakers.

Second, these interactions also tend to be deeply intimate, making attempts at content moderation highly contentious. Users are more likely to share vulnerable details in a dialogue with a personal AI assistant that mimics human interaction. For instance, when instructing their AI assistant to schedule appointments, a user may end up sharing detailed personal routines, intimate preferences, and sensitive priorities – information they might otherwise keep private. Auditing for perspective diversity in such an intimate context would raise significant privacy challenges for regulators.

Given the lack of business incentives and the limits of direct government intervention to curtail hyper-agreeable AI, it will fall to users to demand a diversified information diet. Users will need to make the tough choice of how much agreeableness they are willing to forgo in pursuit of the genuine plurality of perspectives in our world.

It’s not that other actors have no role in combating personalized, AI-driven filter bubbles. Policymakers should require businesses to meet minimum thresholds for best practices, as exemplified by the EU AI Act’s transparency requirements and its prohibition on manipulating human behavior. Proposed legislation like the AI Literacy Act should also be expanded to educate users on the trade-offs between agreeableness and perspective diversity. And when users demand more diversity, businesses must provide customizable tools that enable it. Scholars have long argued for serendipity-driven recommender systems to challenge filter bubbles, and some products are experimenting with giving users more control over their recommendations, or even letting them turn off personalized recommendations altogether, which has been shown to increase content diversity. Businesses behind personalized chatbots should pursue similar experimentation.
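To make the idea of a user-facing control concrete, here is a minimal sketch in Python of a hypothetical “diversity dial” that re-ranks candidate responses by blending relevance with how much each item challenges the user’s inferred stance. The Candidate fields, the scoring weights, and the rerank function are all illustrative assumptions, not any product’s actual implementation.

```python
# A hypothetical "diversity dial": re-rank candidate items so that, as the
# dial moves away from pure agreeableness, items expressing stances farther
# from the user's inferred position are mixed in. All names and scores here
# are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class Candidate:
    text: str
    relevance: float   # how well the item matches the query (0.0 to 1.0)
    stance: float      # -1.0 (opposes user's view) to +1.0 (agrees with it)


def rerank(candidates: list[Candidate], diversity_dial: float) -> list[Candidate]:
    """Order candidates by a blend of relevance and stance diversity.

    diversity_dial = 0.0 reproduces the agreeable default (favor items that
    agree with the user); 1.0 favors items that challenge the user's view.
    """
    def score(c: Candidate) -> float:
        agreeable = (c.stance + 1) / 2              # 1.0 if it fully agrees
        challenging = 1 - agreeable                 # 1.0 if it fully disagrees
        blend = (1 - diversity_dial) * agreeable + diversity_dial * challenging
        return 0.5 * c.relevance + 0.5 * blend

    return sorted(candidates, key=score, reverse=True)


if __name__ == "__main__":
    pool = [
        Candidate("An argument reinforcing your current view", 0.90, 0.8),
        Candidate("A well-sourced counter-argument", 0.85, -0.7),
        Candidate("A neutral explainer laying out both sides", 0.80, 0.0),
    ]
    for c in rerank(pool, diversity_dial=0.7):
        print(f"{c.relevance:.2f} {c.stance:+.1f} {c.text}")
```

In this sketch, a dial of 0.0 keeps the agreeable default, while higher values surface relevant items that push back on the user’s view; the same trade-off could just as plausibly be expressed through a system prompt or a retrieval filter rather than a re-ranker.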

Ultimately, however, the dial between convenience and plurality will rest in the hands of users, who must decide how much they truly want their personal AI assistant to challenge what they know and believe.

Authors

Ruchika Joshi
Ruchika Joshi is a policy candidate at the Harvard Kennedy School specializing in AI safety and governance.