DOGE Should Look to States For How to Implement Effective AI

Amanda Renteria / Apr 7, 2025

Amanda Renteria is the CEO of Code for America.

The Connecticut State Capitol in Hartford. Shutterstock

As the Department of Government Efficiency (DOGE) moves ahead with efforts to shrink US federal agencies, access massive amounts of data, and replace core functions with automated systems, the Trump administration is rushing headlong into an AI-first agenda that more closely mirrors a Silicon Valley startup than effective government.

For instance, in its haste to centralize information to look for potential cuts to the federal workforce and services, the General Services Administration (GSA) accelerated the rollout of a new AI chatbot that has raised concerns about the government’s use of the technology. “It’s about as good as an intern,” one employee who used the product told Wired. “Generic and guessable answers.” Reports suggest one purpose of the app is to substantiate further reductions in GSA staff.

This is a dangerous mistake. If a product flops in Silicon Valley, maybe a few investors take a hit, a team has to pivot, or a handful of customers are inconvenienced. But the government doesn’t get to fail fast and move on—because when public systems break, real people suffer. A bad AI decision isn’t just a bug to fix in the next release. It’s a mom losing access to food assistance because of a faulty algorithm. It’s a veteran getting the wrong healthcare recommendation from a chatbot. The government has a responsibility to get things right. AI needs thoughtful guardrails, not reckless disruption—because the stakes aren’t just financial; they’re human lives.

That doesn’t mean that AI has no place in making government more effective and efficient. While current AI tools are poorly suited as predictable sources of information, they can be very useful at turning massive amounts of data into actionable information. The government should focus on AI applications that help public servants quickly process, extract, and summarize vast amounts of data to expedite their work.

States show the way

The question isn't whether AI can improve government services—it absolutely can (and has). The real question is whether policymakers have the will and patience to do it the right way. That means ensuring AI implementation is transparent, responsible, accountable, and centered on the people it serves.

In fact, this work has already been done. The current administration should learn from red and blue states that have used responsible AI to better serve their residents while navigating shrinking budgets.

Take GetCalFresh, for example. The digital application assistant for California’s version of the Supplemental Nutrition Assistance Program saw soaring demand during the pandemic. An AI chatbot was carefully deployed to respond to growing food insecurity while lessening the burden on overwhelmed public servants. The result: a reduced backlog of requests for help, more time spent on complex situations, and an improved customer experience.

Or consider a case in Utah, where the government used AI-powered entity resolution to de-duplicate criminal records across disparate systems, enabling more efficient clearance of hundreds of thousands of eligible records. The result was greater economic opportunity for residents whose records had previously prevented them from pursuing a degree, getting a job, or opening a small business.
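For readers unfamiliar with the term, "entity resolution" means identifying records that refer to the same person despite formatting differences across systems. The sketch below illustrates the basic idea in Python with entirely hypothetical fields and a deliberately simple exact-match rule after normalization; production systems (including whatever Utah deployed, whose details are not public) use far more sophisticated probabilistic or machine-learned matching.

```python
# Toy illustration of entity resolution: grouping records that refer to the
# same person across systems. Fields and matching rules are hypothetical.
from dataclasses import dataclass


@dataclass
class Record:
    name: str
    dob: str      # date of birth, e.g. "1980-04-02"
    case_id: str  # identifier from the originating system


def normalize(r: Record) -> tuple:
    # Collapse trivial formatting differences (case, stray whitespace)
    # before comparing records.
    return (r.name.strip().lower(), r.dob.strip())


def deduplicate(records: list[Record]) -> dict[tuple, list[Record]]:
    # Group records by their normalized key; each group is treated as
    # one person whose records can be reviewed together.
    groups: dict[tuple, list[Record]] = {}
    for r in records:
        groups.setdefault(normalize(r), []).append(r)
    return groups


records = [
    Record("Jane Q. Public", "1980-04-02", "A-17"),
    Record("jane q. public ", "1980-04-02", "B-03"),  # same person, messier entry
    Record("John Doe", "1975-11-09", "C-88"),
]
groups = deduplicate(records)
print(len(groups))  # prints 2: the two Jane records resolve to one person
```

The point of the example is the workflow, not the matching rule: once records from disparate systems resolve to one person, a clearance that applies to that person can be processed once instead of hundreds of times.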

And Connecticut demonstrated how to use AI to empower state workers instead of replacing them. Government leaders in the state implemented text message reminders to make it easier for families to find and maintain the benefits they were eligible for, reducing unnecessary work for state employees whose job is to evaluate and renew benefits.

Lessons learned

So, what lessons on AI implementation can be gleaned from these states and others like them?

First, transparency is non-negotiable. AI systems used to make decisions affecting people's lives—whether for taxes, criminal justice, or safety net benefits—must be explainable and open to scrutiny. A government that relies on opaque algorithms risks further eroding trust and deepening public skepticism. Government agencies should communicate openly and often about how AI is being used—whether on the backend of service delivery or in interactions with the public.

Second, strong, responsible AI frameworks are necessary to prevent bias and protect privacy. AI is only as good as the data it learns from, and history has shown that biased data leads to biased outcomes. Without strict safeguards, government AI could reinforce inequalities rather than eliminate them. In the digital age, data is power, and people have a right to understand how their data is protected from bad actors.

Third, true public accountability must be a cornerstone of AI deployment in government. Oversight cannot be left solely to the private sector or to internal agencies with limited transparency. Citizens and independent watchdogs must have a voice in shaping and evaluating these systems. Sunlight is the best disinfectant.

Fourth, AI should augment human judgment, not replace it. Current generative AI tools are capable of turning massive amounts of data into actionable information, but they are poorly suited as predictable sources of knowledge. Government should therefore focus on augmented intelligence that helps employees quickly process, extract, and summarize vast amounts of data to expedite their work.

Finally, AI must be designed with an approach that puts people first. Technology should be designed to serve people, not the other way around. That means involving those who understand the government’s complexity—civil servants who work on the front lines, technologists who grasp AI’s risks and opportunities, and, most importantly, the communities these systems will impact.

Take the long view

If we want lasting change, we need to take a step back and consider the downstream impacts of AI in government. Deploying new tools for their own sake, with little deliberation, is not a recipe for building tools that work or for earning public trust.

We’ve seen what happens when we move fast and break things. And we’ve seen what happens when government innovates responsibly. Only one path leads to a government that is more effective, efficient, and trusted. The choice is clear. Now we need the will to get it right.
