Texas Just Created A New Model for State AI Regulation
Matthew Ferraro, Anna Z. Saber / Jul 17, 2025
Austin, Texas—The interior dome of the Texas State Capitol. Justin Hendrix/Tech Policy Press
Texas Governor Greg Abbott (R) last month signed into law the Texas Responsible Artificial Intelligence Governance Act, or TRAIGA (HB 149), joining Colorado as only the second state to adopt a comprehensive AI governance law.
TRAIGA charts a new approach to AI regulation, both by limiting Texas’ ability to punish companies to prohibitions on only a few intentional harms and by expanding the state’s investigatory powers. When the law takes effect on January 1, 2026, the net effect will likely be to subject many private enterprises to regulatory review but few to actual punishment. Given Texas’ size and economic heft, the law will likely have substantial ramifications for the AI industry.
A new standard for AI regulation?
With the recent demise of the state AI moratorium provision in the federal budget reconciliation bill, the flurry of AI-related lawmaking will likely continue apace. But the direction in which that movement will go remains uncertain.
On one hand, several jurisdictions appear to be retrenching from comprehensive AI laws. For example, Colorado Governor Jared Polis (D) and other state leaders publicly called on the state legislature in May to delay the February 2026 implementation of the Colorado AI Act, which will impose substantial obligations on developers and deployers.
On the other hand, states have enacted a wave of sector-specific AI laws that regulate the use of AI in particular domains, such as chatbots, healthcare, insurance, nonconsensual intimate imagery, and political advertisements. In New York, Governor Kathy Hochul (D) is considering whether to sign the Responsible AI Safety and Education (RAISE) Act, which would be the first law in the nation to impose safety standards on cutting-edge frontier AI models.
TRAIGA forges something of a middle path. It focuses on AI uses (by both the public and private sectors), not the power of the models themselves. It limits enforcement to a small number of enumerated harms, provides affirmative defenses to violators and opportunities for them to cure defects, and imposes some limits on the collection of biometric data. At the same time, TRAIGA vests in the attorney general broad investigatory authority and establishes a Sandbox Program that provides companies with a safe harbor to test AI systems without having to comply with some regulations; the law’s core prohibitions, however, will apply even to Sandbox participants.
Austin’s appropriately Texas-sized entry into the AI-lawmaking fray may become a model for other jurisdictions that, in the words of TRAIGA, seek to “facilitate and advance the responsible development and use” of AI while working to protect individuals and groups “from known and reasonably foreseeable risks.”
Pared-back but still far-reaching
Lawmakers introduced TRAIGA in December 2024 (as HB 1709) with provisions to extensively regulate the use of “high-risk artificial intelligence systems,” but the Texas legislature reduced the scope of the now-enacted law.
While narrower than what was originally proposed, TRAIGA still imposes significant regulatory burdens on a broad range of private entities for several reasons:
- Broad jurisdiction: The law applies to any person or entity who “promotes, advertises, or conducts business” in Texas; produces a product or service “used” by Texans; or “develops or deploys” an AI system in Texas. Few entities will fall outside this jurisdiction.
- Expansive definition of AI: TRAIGA’s definition of “artificial intelligence system” is not limited to generative AI but includes “any machine-based system” that “infers from the inputs the system receives how to generate outputs.”
- Inclusion of both developers and deployers: TRAIGA applies both to a “developer” of an AI system that is “offered, sold, leased, given, or otherwise provided in” Texas, and a “deployer.” These definitions cover not just large technology companies building AI models but run-of-the-mill Texas companies using them or providing them to customers.
Key prohibitions and duties
TRAIGA imposes specific prohibitions on developers and deployers. First, it bars the development or deployment of an AI system that “intentionally” aims to incite or encourage a person to commit physical self-harm, harm another person, or engage in criminal activity. Second, it prohibits the development or deployment of an AI system “with the sole intent” for the AI system to “infringe, restrict, or otherwise impair” an individual’s rights under the US Constitution. Third, it prohibits the development or deployment of an AI system with the “intent to unlawfully discriminate against a protected class in violation of state or federal law.” Fourth, it makes illegal the development or “distribut[ion]” of an AI system with the “sole intent” of “producing, assisting, or aiding in the production or distribution” of child pornography, sexually explicit “deep fake[s]” of nonconsenting adults, and chatbots that imitate children engaging in sexually explicit conversations. It also specifically requires private health care providers to disclose to patients when they are interacting with an AI system.
TRAIGA imposes additional duties on Texas government agencies, including mandating that agencies disclose to consumers when they are interacting with an AI system; barring agencies from deploying or developing an AI system for the purpose of identifying a specific individual with biometric data without the individual’s consent; and restricting agencies from using an AI system that evaluates or classifies a person or group based on social behavior or personal characteristics with the intent of assigning a social score to that person that may result in detrimental treatment, often referred to as social scoring.
By requiring that violators act with intent or sole intent to harm, TRAIGA narrows its scope and insulates developers and deployers who inadvertently engage in prohibited conduct, at least from liability.
Amendment to biometric law
The Texas Capture or Use of Biometric Identifier Act (CUBI) governs the collection and use of biometric data in Texas. It generally requires informed consent before capturing or using an individual’s biometric identifiers for commercial purposes. TRAIGA amends CUBI to clarify that individuals are not considered to have consented to the capture and storage of their biometric data merely because an image of them exists online, unless they made that image publicly available. The law also creates several carve-outs from this requirement, including for the training of AI systems that will not be used to identify individuals. This exception likely will be helpful to companies training large AI models.
Investigation and enforcement
The Texas attorney general has exclusive enforcement authority.
TRAIGA directs the attorney general to establish a website through which a consumer may submit a complaint about an AI system. On the basis of a single complaint, and without establishing reasonable suspicion or probable cause, the attorney general may issue a civil investigative demand (CID) “to determine if a violation has occurred.”
The attorney general may request information on an AI system from either a developer or deployer, including a high-level description of the purpose, intended use, deployment context, and associated benefits of the AI system; a description of the type of data used to program or train the AI system; any metrics used to evaluate the performance of the AI system; any known limitations of the AI system; a high-level description of the post-deployment monitoring and user safeguards used for the AI system; or “any other relevant documentation reasonably necessary for the attorney general to conduct an investigation.”
To understand the breadth of the attorney general’s prerogatives, consider this thought experiment: Company A, a Texas company, makes available in Texas a large language model developed by Company B, an out-of-state firm. If a single Texan files a complaint — that need not be sworn or verified — alleging that the large language model censors a user’s speech in violation of the First Amendment, the attorney general will be able to subject both companies to CIDs for all “relevant documentation reasonably necessary” to investigate whether they violated TRAIGA.
The Texas attorney general has already demonstrated a keen interest in investigating companies for alleged AI malfeasance. In June 2024, the attorney general launched an initiative to “protect Texans’ sensitive data from illegal exploitation by tech, AI, and other companies.” And in September 2024, the attorney general secured a settlement with an AI healthcare company over allegations the company made false and misleading statements about its products. If past is prologue, the attorney general will readily wield TRAIGA’s authorities.
Opportunity to cure
While the law makes it easy for the attorney general to investigate an alleged AI harm, it makes it difficult to penalize a wrongdoer for several reasons. First, following a written notice from the attorney general, the violator has 60 days to cure violations before a formal enforcement action can begin. Second, developers and deployers may assert the affirmative defense that they discovered violations through various mechanisms, like user feedback, or substantially complied with the latest version of the National Institute of Standards and Technology’s “Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile” (NIST AI RMF) or a similarly recognized risk-management framework. Finally, the attorney general may only bring an action if the AI system has been deployed.
Penalties
The attorney general can seek scaled penalties that can reach as high as $12,000 for each curable violation and $200,000 for each uncurable one, and an additional $100,000 penalty and the loss of a license for entities that hold state-issued registrations.
Regulatory Sandbox program
TRAIGA establishes a Sandbox Program, a state-administered testing environment to promote AI’s use, particularly in healthcare, finance, education, and public services. TRAIGA prohibits the attorney general and other state agencies from pursuing charges or actions against participants. But TRAIGA does not exempt participants from its prohibitions on developing or deploying AI systems that manipulate human behavior, violate constitutional rights, discriminate unlawfully, or create certain sexually explicit content. Thus, the Sandbox Program provides only a shallow safe harbor.
Texas Artificial Intelligence Council
TRAIGA creates the Texas AI Council, an advisory body to study and make recommendations regarding AI systems operating in Texas.
How to prepare
To get ready, AI developers and deployers should substantially comply with the NIST AI RMF or a similarly recognized risk management framework and document such compliance; create a record of the intended purposes of developed and deployed AI systems to insulate against allegations they intend to violate TRAIGA’s core prohibitions; ensure alignment between a company’s collection and use of biometric data and the updated provisions of CUBI; and maintain and regularly update answers for the AI-system information that the attorney general is empowered to seek through a CID.