
Why the Hype Around AI is Not an Honest Educator

Maura Colleton Corbett / Jun 30, 2023

Maura Colleton Corbett is the CEO of the Glen Echo Group.

Alina Constantin / Better Images of AI / Handmade A.I / CC-BY 4.0

AI, for most people, happened slowly and then all at once. The turning point was arguably OpenAI’s release of ChatGPT last November, which fueled a collective global frenzy. There are many reasons for this; one of them is clearly the real, lasting, and complicated consequences, both good and bad, of an AI-enabled world. Another may very well be the contagion of crisis and hype.

We have worked at the intersection of communications and public policy since the dawn of the commercial internet, and among the many things we’ve learned is that words and messages matter. In those early days, for example, everyone was full of hope and optimism, convinced that this new revolution would help solve the most difficult issues facing humanity. Now, three decades later, the mood is the exact opposite: instead of optimism, there is a collective fear that AI will sow the seeds of humanity’s demise and enable the rise of our cyborg overlords.

Of course, both worldviews are false. Whether it’s unbridled hope or existential dread, neither serves the real issues at hand. They only paralyze us or lead us to jump off cliffs before we know what’s at the bottom.

We are in the middle of an arms race to act and react. We are acting first and understanding second, which is precisely backwards.

Screaming headlines give the impression that we have no tools at all to deal with AI, which is not only untrue but a disservice to decades of hard and thoughtful work by dedicated experts on the digital world’s most challenging issues. AI is the real Web 3. It is transformative and consequential, and it moves so fast that any attempt to regulate it will be an imperfect tool.

That is not to say that we don’t need new regulations to deal with the comprehensive issues raised by AI, particularly generative AI systems, but we shouldn’t knee-jerk our way into them. Claiming that we are completely unprepared to address AI is the wrong message. We have tools. We have knowledge. We can apply our existing legal authority and regulations to the most immediate AI challenges while we figure out the more complex, longer-term governance and regulatory models. We can walk and chew gum at the same time.

Of course, nature abhors a vacuum, and unfortunately, the confusion, misunderstanding, and hype surrounding AI are filling it. The headlines rarely capture that AI isn’t even one thing. They don’t explain that referring to AI might mean systems using more rudimentary machine learning, or generative AI, or artificial general intelligence. They don’t show us that AI can be narrow, general, or super, and that it can be reactive, limited, or self-aware. This is why the words matter so much. Policymakers cannot make good decisions about what they don’t understand, and hype is not an honest educator.

AI has already been established in the national and global conversation as something bad. Existential. Out of control. There is little discussion of what AI might do to make our world better. Could it be the thing that helps reverse climate change? Might it help cure cancer or Alzheimer’s? Could it program out bias and discrimination? Or even make the ninth circle of Hell known as customer service a more pleasant and helpful experience?

AI, like all technological innovations, is not inherently good or bad. That comes instead from what humans do with it, what they put into it, and how they use it. Can we still build in ethics by design, or have those efforts already been overtaken by the corporate race to dominance? This is where asking the right questions can be more important than rushing toward quick and easy answers.

We have over three decades of laws and regulation to help us address AI’s most immediate challenges, including privacy, copyright, and cybersecurity, in addition to laws that protect against bias, hate, and discrimination. At the very least, those laws can inform us and shine a light on the path ahead. And those three decades have also taught us how to better explain those laws, and how to educate non-engineers and non-lawyers on how innovation and technology will change the world we live in.

Opacity benefits no one, and with AI especially, it does us a great disservice. We must all commit to effectively translating the legal and technical complexities of AI so that those making decisions on the rules of the road can steer AI toward its best and highest uses instead of its most dire.

The words we use, and how we use them, shape our perceptions of the world around us and how we choose to participate in it, including the coming AI-enabled one. The world, we can fairly say at this point, depends upon them.

Authors

Maura Colleton Corbett
Maura Colleton Corbett is the CEO and Founder of the Glen Echo Group, a Washington-based policy communications and public affairs firm. Corbett is a board member of Public Knowledge and the Chamber of Progress. She is a frequent speaker and commentator on Internet public policy, coalition building, ...
