A Doritos Bag, a Police Response, and an AI Accountability Crisis
Nicholas E. Stewart / Dec 8, 2025

One afternoon in October at Kenwood High School in Baltimore County, 16-year-old Taki Allen was eating Doritos after football practice when an AI gun-detection system misread his snack bag as a firearm. The alert triggered the arrival of police cars, a prone search, and handcuffs. The officers eventually realized Allen was not holding a weapon; he was holding chips. The system failed, but so did the decision-making around it. The episode shows what happens when untested AI tools move into schools and policing without accuracy standards, bias audits, or clear and effective protocols.
Despite these risks, adoption is accelerating. Districts and departments often buy systems on marketing promises rather than evidence. Few ask who validated the model or how it performs across racial groups. Even fewer consider how a single false positive can trigger force, especially against Black students.
Policy is only beginning to catch up. California’s SB 53, signed in 2025, is the first law requiring advanced AI developers to publicly disclose safety practices and report critical incidents. It is modest but meaningful: an acknowledgment that transparency can’t depend on voluntary statements.
Now some in Congress are attempting to follow suit with the AI Civil Rights Act, a federal proposal focused on preventing discriminatory algorithms in high-stakes settings. At the bill’s reintroduction this week, Damon Hewitt, President and Executive Director of the Lawyers’ Committee for Civil Rights Under Law, warned that “algorithms make decisions about all aspects of our lives, determining who gets bail, who gets a job, who can buy a house, who can rent a home, where we can go to school. Over and over again, promises of innovation end up yielding discrimination.” His remarks are reflected in the Lawyers’ Committee’s statement on the bill, which aims to impose independent bias audits, limit harmful deployments, and create accountability standards before these systems are used on the public.
Lawmakers framed the issue in the same terms. Rep. Pramila Jayapal (D-WA) argued that “If we do not develop AI with fairness, transparency, and accountability, then AI will give us bias, exclusion, and discrimination,” adding that the technologies shaping opportunity cannot be allowed to “harden the injustices of the past.” The Grio’s coverage of the bill’s reintroduction captured those concerns. Sen. Edward Markey (D-MA) emphasized that algorithms now influence decisions across employment, banking, health care, criminal justice, public accommodations, and government services. Rep. Yvette Clarke (D-NY), chair of the Congressional Black Caucus, said the country has entered “the heart of a new technological revolution” and must ensure old forms of discrimination do not become “irreversibly entrenched in the technology of the future.”
These debates point to a potential consensus emerging in the states and in Washington, DC: AI used in policing, schools, and courts cannot rely on trust in technology companies or the good intentions of vendors. It requires enforceable standards, independent oversight, and mechanisms for civil rights protection before deployment, not after harms are discovered, because the burden falls on the people who live with these systems every day.
Generation Z understands those stakes directly. We are the most surveilled generation in history. At the Justice Education Project, the first national Gen Z–led criminal justice reform nonprofit, we’ve spent years analyzing the risks of these tools and briefing civil rights groups and legal organizations on their implications. Our forthcoming book, Next Steps into Criminal Justice Activism: Technology, Ethics, and the Future of Justice, features Columbia Law professors Daniel Richman and Amber Baylor and Penn Law professor Colleen Shanahan, who all point to the same gap: rapid deployment with little governance. As Shanahan told us, “Technology alone will not deliver justice. Without investment in relationships, conversations, and problem-solving, the tools fall short.”
That gap is widening as AI-generated evidence becomes easier to fabricate. Tools like OpenAI’s Sora can create photorealistic video that is nearly indistinguishable from real footage. Courts and investigators are entering an era in which genuine evidence can be dismissed as fake and fabricated evidence can be passed off as real. Without rules for authentication and training for judges and attorneys, the justice system faces a credibility problem that no technical patch will solve.
What these developments point to is a need for upstream accountability. Responsibility cannot rest solely on end users like principals or officers. It must extend to developers building the systems and institutions procuring them. Accuracy benchmarks, independent testing, bias evaluations, override requirements, and meaningful liability should be prerequisites for any AI tool used in policing, schools, or courts.
The Baltimore County case is not a glitch. It is a preview. AI in criminal justice will not slow down. The choice is whether the systems shaping high-stakes decisions operate under enforceable standards or under trust in vendors. If policymakers want to prevent the next Taki Allen incident, the answer has to be accountability from the top, not blame assigned at the bottom.