How an AI Hotline Could Help AGs Effectively Govern AI
Kevin Frazier / Jan 14, 2026
Raul Labrador, Idaho's attorney general, center left, and John McCuskey, West Virginia's attorney general, outside the Supreme Court in Washington, DC, on Jan. 13. (Kent Nishimura/Bloomberg via Getty Images)
Like every new technology, artificial intelligence has the potential to solve problems as well as to create them. This puts policymakers in a tough spot. Hastily passed laws may become an unintended barrier to realizing some of AI’s best use cases, yet the absence of timely regulations could expose consumers to fraud, scams and abuse with few remedies.
Compounding the challenge: like every new commercial market, the AI market includes both good and bad actors. AI companies have already faced allegations of exaggerating the accuracy of their tools, advertising capabilities that don’t exist or promising results that are unachievable. Yet a growing number of AI companies are also working around the clock to offer the most dependable and transformative AI tools possible.
How to separate the two is no easy task. It’s especially difficult for state attorneys general, who are tasked with enforcing consumer protection laws and thus cannot afford to sit on the sidelines and merely hope a new technology works as intended.
Thankfully, there’s a tried-and-true tactic to aid with this difficult enforcement task, one that, if expanded and built out nationwide, could help enforcers better grapple with the ever-shifting AI landscape: asking consumers for more specific and timely information about how they are using AI and to what ends (good, bad and otherwise).
In practice, this would mean creating a dedicated online consumer complaint portal that could serve as a one-stop shop for consumers to share their experiences with different AI companies and tools. An AI hotline of this sort would enable enhanced information collection and reduce the odds of state AGs paternalistically imposing their own views about whether a certain AI use case is good or bad, or prematurely labeling certain business practices as unfair, deceptive or abusive. Sometimes the best response to a fork in the road is to ask others who have been down each trail.
Using feedback to target bad actors without burdening innovators
A dedicated AI hotline can partially fill three information gaps.
There’s the obvious gap of identifying bad AI actors and bad AI use cases as soon as possible. Consumers will always be the first to know when AI goes wrong or when an AI company engages in anti-competitive behavior. At this early stage in AI development and adoption, it’s pivotal that consumers share this information accurately and promptly.
There’s another potential gap between what appears in news headlines and the actual day-to-day lived experience of consumers. Governance should not be steered by sensationalistic and unrepresentative stories, such as highly questionable reports about AI water usage. For better and for worse, we’re often susceptible to placing too much weight on compelling anecdotes, especially those that pull on our heartstrings or touch on our core policy priorities. Policymaking and law enforcement demand a more rigorous approach.
An AI hotline won’t entirely meet that need; the people most likely to share their AI experiences may not reflect the general public. Still, some information is better than none. Nor does this hotline need to be solely a place to complain: a well-designed AI hotline, one that actively solicits positive and neutral feedback as well, can mitigate this skew and provide a more balanced picture.
Why a dedicated hotline is key
This is precisely why a standalone AI hotline is merited.
Though there are other state and federal hotlines, such as the Consumer Financial Protection Bureau’s mechanism for filing complaints about financial services and products, there are a few flaws with merely tacking an AI extension onto a pre-existing tool.
First, it’s important that AI-specific information be collected as precisely as possible and shared with the appropriate actors in a timely fashion. There’s a reason you call 811, not 911, when you want to dig a hole: the former gets you to the right people faster. Given that AI regulation is top of mind for legislators and regulators, it’s important to have an AI-specific line, though it could perhaps merge with pre-existing channels in the future.
Second, it’s key that people are fully aware of this mechanism for sharing AI information. If AI becomes just one of several technologies covered by a single hotline, it may be harder for consumers to find the right forum.
The final gap is learning more about who is using AI tools, for what purpose and to what ends. While some public polling has been done on this topic and labs often share user surveys, it’s again worth supplementing this information as policymakers try to chart the best path forward.
On the whole, a more disciplined approach to tracking consumer experiences can inform future, more targeted studies into good, bad and ambiguous AI use cases and actors.
Trust as a catalyst for AI competition and growth
Consumer protection initiatives like this AI hotline proposal and the desire to maintain competitive markets are too frequently pitted against each other. The truth of the matter is that all this information can foster innovation. The sooner state AGs hold bad actors accountable for their behavior, the easier it will be for responsible and innovative AI companies to compete on a level playing field. Likewise, every percentage-point increase in consumer confidence in the AI market and related AI tools will increase demand for the latest technology, opening more markets for more actors to compete in.
This information can also serve as a bulwark against hasty or unnecessary laws that impede economic growth. It may turn out, for instance, that most consumer issues reported through the hotline are addressable under existing laws, or that the AI incidents covered by the press are less common than some suspect. The challenge is finding ways to act swiftly and effectively without unintentionally discouraging responsible innovation or relying on incomplete information.
Ideally, this hotline would be a unified effort by all 50 states — potentially housed under the auspices of the National Association of Attorneys General. Such a collaborative, well-resourced approach would help increase consumer awareness of the tool, ensure standardized submissions and generate a more complete understanding of consumer experiences. This information can then be shared with and analyzed by other AI stakeholders.
For example, state legislatures may find this information valuable when determining whether new legislation is warranted. AI companies may rely on user reports to improve their products and customer experiences. And, of course, state AGs will turn to the information to issue guidance and bring enforcement actions as necessary.
This hotline is not about punishment or panic — it’s about building a fuller picture of how AI is shaping people’s lives. By inviting the public to share their stories in a targeted manner, we can make smarter, faster and fairer decisions.
This sort of initiative is overdue. Still, critical details must not be overlooked. Consumers must have assurances that any personal information shared via an AI hotline will not appear in summaries or public-facing documents. Submissions should also be vetted to reduce the odds of the hotline filling up with fake reports. And the adequacy of these and other safeguards should be reviewed regularly to see if there’s room for improvement.
But now is not the time to get bogged down by these details. It’s the time for action.