Will Democracy Die in AI’s Black Box? Not If These Shareholders Can Help It

Jessica Dheere / Dec 7, 2023

Microsoft Vice Chair and President Brad Smith testifies to the US Senate Judiciary Subcommittee on Privacy, Technology, and the Law, September 12, 2023.

Today, at Microsoft’s annual general meeting, shareholders will vote on a resolution that recommends that the company disclose more about the material risks to its business and to public welfare of mis- and disinformation powered by generative AI. Material risks are those that have the potential to compromise a business’s strategy, operations, legal compliance, or reputation, and thus its value. Investors rely on companies’ disclosure of such risks and their potential financial and reputational costs to determine whether the business will yield an adequate return on investment.

The resolution, published in the company’s proxy statement as Proposal 13, asks Microsoft to address shareholders’ concerns that its involvement in generative artificial intelligence applications will accelerate the creation and dissemination of false information, compromising the systems and institutions that enable effective governance worldwide, all of which depend on ready access to accurate information. Think financial markets, public health and safety protocols, environmental well-being, and, of course, fair elections, dozens of which will take place in 2024, including the US presidential election. A strong vote signals shareholders’ preference that the company act on the recommendation, but the company is not obliged to do so, even if the vote earns a majority. Preliminary results must be reported to the SEC within four business days.

In a preview of what is likely to come, rival parties in Argentina used generative AI tools last month to fabricate propaganda about their opponents in an attempt to sway voters. Another threat comes from the even more precise audience targeting that generative AI (and the lack of federal US privacy legislation) enables. Nearly half of all advertising globally depends on AI. Some analysts also anticipate that, in malicious hands, generative AI chatbots will soon enable more sinister attempts to manipulate voters. Others note that generative AI can also amplify the misrepresentation of public opinion, since it makes it easier to generate false correspondence or comments at scale in public-input processes. As we know from the social media era, misinformation and disinformation are not new, but generative AI drops “the cost of generating believable misinformation by several orders of magnitude.”

As the unending tide of reporting on generative AI tells us, shareholders aren’t alone in their worries. Eurasia Group ranked generative AI the third-highest political risk confronting the world, warning that new technologies “will be a gift to autocrats bent on undermining democracy abroad and stifling dissent at home.” Even Sam Altman, OpenAI’s CEO and the technology’s so-called poster child, has said that he is “particularly worried that these models could be used for large-scale disinformation.”

Specifically, shareholders are asking the company to expand on its voluntary “Responsible AI” commitments and on the reporting it is already required to do under European and Australian codes of practice on disinformation by publishing an annual report “assessing the risks to the Company’s operations and finances as well as risks to public welfare presented by the company’s role in facilitating misinformation and disinformation disseminated or generated via artificial intelligence, and what steps, if any the company plans to remediate those harms, and the effectiveness of such efforts.” The resolution was filed by Arjuna Capital, with support from Azzad Asset Management, Ekō, and my organization, Open MIC. It has the support of Norway’s sovereign fund, New York City’s five pension funds, and California’s public pension giant, CalSTRS. Similar resolutions have been filed at Alphabet and Meta, whose annual general meetings will be held in spring 2024.

Microsoft, which holds a 49 percent stake in OpenAI’s for-profit arm and is one of the companies competing for global dominance in the AI race, recommends voting against our resolution. Despite its own acknowledgment of the risks of generative AI, the company asserts that its “multi-faceted program to address the risks of misinformation and disinformation is longstanding and effective.” It says its commitment, starting next summer, to publish an annual transparency report on its AI governance practices will “cover [its] approach to mitigating the risk of AI-generated misinformation and disinformation” and make the report we want to see “unnecessary.”

Shareholders disagree. The promised transparency report will likely do little more than outline general AI policies and practices. Proposal 13 goes beyond this generic type of report, urging the company to disclose information of particular concern to investors. Microsoft should show its work and, ideally, quantify the costs to the company and to society of pursuing generative AI at such a breakneck pace. If the company can project profits from generative AI, surely it can also estimate the costs.

Further, identifying the risks is not enough. Investors want to know what concrete steps the company will take to mitigate them. And finally, as with any good business strategy, the company must monitor and evaluate its effectiveness in doing so, reinvesting what it learns into future AI development. This level of detail remains undisclosed, yet it is essential for investors and the public to determine whether the company has appropriately or effectively considered the potential impact of generative AI on its revenues, its governance and operations, or its reputation, not to mention on the rest of the world.

In fact, Microsoft’s recent actions point in the opposite direction. The company’s lightning-fast response to the recent firing and reinstatement of Altman as CEO of OpenAI—which included an immediate offer to set up his own AI research lab at Microsoft—seemed to ignore entirely any possibility that Altman’s ouster was justified by the safety concerns that led OpenAI’s board to remove him. Instead, Altman’s global prominence and Microsoft’s $13 billion (and climbing) investment in OpenAI appear to have eliminated any remaining opportunity, however slight, for a more thoughtful, measured pace of AI development not only at Microsoft, but at any of the tech titans competing to dominate the AI marketplace. AI is already too big to fail.

Preceding the Altman episode, and perhaps obscured by it, Microsoft announced in November the first of what it says will be five “new steps to help protect elections.” These include the release of a digital watermarking technology, the deployment of a new team to “advise and support campaigns as they navigate the world of AI,” and “authoritative election information on Bing.”

Given our experience with social media in 2016 and 2020, it is hard not to see these efforts as disingenuous, or at best an extension of Big Tech’s power to conduct business as it sees fit. Rather than focus its efforts on demonstrating responsible development and stewardship of AI technologies before they are released for general use, Microsoft, like its competitors, has opted to base its business model and stake its reputation on solving problems of its own creation. In the process, along with its corporate peers, it is normalizing distrust in online information and democratic institutions, while it claims to be part of the solution to the very same problem.

Even if the outcomes of next year’s votes are not directly compromised by generative AI-powered disinformation, the doubts these technologies have already sown in voters’ minds about who and what to believe will continue to erode trust in information, whatever its source. We may reach the point where people cannot discern and verify what is real quickly enough to make informed decisions about the direction they want their countries, their investments, or their lives to take.

Without that trust, no system or institution that relies on the accurate communication and assimilation of fact—not democracy, not financial markets, not health or the environment, not small business, not policy advocacy, not human rights—will survive as we know it.

Before we reach the tipping point, which I and many of my colleagues think may well be next year’s elections, it would behoove us to remember that other axiom about moving fast: Speed kills.

Authors

Jessica Dheere
Jessica Dheere is advocacy director at Open MIC, which leverages capital strategies to foster greater corporate accountability in the media and technology sectors. She previously led Ranking Digital Rights and co-founded the Beirut-based digital rights organization SMEX. Connect with her at jdheere ...
