ASAP
Texas Joins the Fray and Enacts AI Legislation
At a Glance
- Texas joins a growing number of states regulating artificial intelligence (AI).
- TRAIGA 2.0 places most of the compliance burdens on governmental entities and very little on private-sector businesses or employers.
- TRAIGA 2.0 expressly eliminates disparate impact as a viable theory of discrimination under the Act.
Following in the footsteps of an increasing number of states, Texas just joined the growing movement to regulate artificial intelligence (AI). On Sunday, June 22, 2025, Governor Greg Abbott signed HB 149, the Texas Responsible Artificial Intelligence Governance Act (“TRAIGA 2.0”), into law. TRAIGA 2.0, so called as it is the second attempt to pass such a law this session, establishes a patchwork framework for AI development and deployment focused on government transparency, consumer rights, the responsible use of AI systems, and procedural safeguards to encourage entrepreneurship and innovation in the AI space. TRAIGA 2.0 is scheduled to take effect on January 1, 2026.
Background
Early in the 2025 legislative session, State Representative Giovanni Capriglione (R) proposed a bill, HB 1709 or TRAIGA 1.0, that imposed reporting obligations and requirements on AI developers and mandated that deployers produce a Risk Identification and Management Policy. Developers, distributors, and deployers—including employers—of “high-risk AI systems” would owe a duty of reasonable care to avoid “algorithmic discrimination,” which was to include political viewpoint discrimination and any use of AI that “limits an individual’s ability to express or receive others’ beliefs or opinions based solely on the individual’s political beliefs, opinions or affiliations.” TRAIGA 1.0 also would have required a deployer to conduct an impact assessment of a high-risk AI system prior to deployment and, similar to the AI law enacted in Colorado in 2024, to adopt and implement a risk management policy to govern the deployment of high-risk AI systems. The rejected bill also called for the development of the Artificial Intelligence Workforce Development Grant Program, which was intended to support Texas AI companies and educational institutions in preparing for future AI uses and their impacts on workers, students, and society.
Among other criticisms, opponents contended that the bill imposed unnecessary government oversight and bureaucratic requirements with steep administrative penalties that would stifle innovation and entrepreneurship, especially for small businesses and open-source developers. They further argued that the bill could raise First Amendment concerns and potentially create political affiliation as a protected category.
Ultimately, the proposed bill died in committee, prompting Rep. Capriglione to propose TRAIGA 2.0, a simplified and less burdensome version of its predecessor.
Key Provisions of TRAIGA 2.0
Coverage and Governance: TRAIGA 2.0 defines key terms such as “artificial intelligence system,” “consumer,” “developer,” and “deployer” and broadly covers: (1) any individuals or entities that do business in Texas or otherwise produce products or services consumed by Texas residents; and (2) any individuals or entities involved in the development, distribution, or deployment of AI systems in Texas. As part of the governance framework, the law establishes a seven-member council appointed by the governor, lieutenant governor, and the speaker of the house of representatives. The council is charged with issuing reports to the legislature on the use of AI systems in Texas and providing training for state agencies and local governments. Importantly, the Act places the vast majority of the compliance burdens on governmental entities, not on private-sector businesses and employers.
Discrimination Prohibitions: TRAIGA 2.0 prohibits persons from developing or deploying AI systems “with the intent to unlawfully discriminate against a protected class in violation of state or federal law,” including race, color, national origin, sex, age, religion, or disability. Importantly, the law expressly eliminates disparate impact as a viable theory of discrimination in the AI context. This language choice is deliberate: lawmakers wanted to create safeguards against discrimination while avoiding penalizing AI developers and users who unintentionally develop or deploy discriminatory AI systems.
Mandated Disclosures: TRAIGA 2.0 imposes disclosure requirements mandating that governmental agencies and health care providers and/or their representatives provide clear, conspicuous, and easy-to-understand notifications to consumers about interactions with AI systems before or during the interaction. However, the definition of “consumer” specifically excludes individuals “acting in a commercial or employment context.” Disclosures in medical settings are subject to additional requirements.
Prohibitions of Certain Developments and Deployments: Additionally, TRAIGA 2.0 prohibits governmental entities, with the exception of public hospital districts or higher education institutions, from developing or deploying AI systems that:
- Manipulate human behavior to incite physical self-harm, harm to another, or other criminal activity;
- Create a social scoring program based on social behavior or personal characteristics (regardless of whether such characteristics are known, inferred, or predicted) in an attempt to score or otherwise value individuals that could result in detrimental or unfavorable treatment; and/or
- Create deepfake videos or impersonate a minor engaging in sexual conduct.
Public Enforcement: Under TRAIGA 2.0, the attorney general is granted exclusive authority to enforce the statute, with state agencies permitted to levy additional sanctions on the recommendation of the attorney general. Private citizens may submit complaints to the attorney general’s office through the attorney general’s website but may not bring a private cause of action under the statute. If the attorney general finds a violation, the alleged violator will be given 60 days’ notice to cure the violation before facing civil and other penalties.
Penalties: Administrative penalties vary depending on the type and duration of the alleged violation. In addition to administrative penalties, violators can face injunctive relief and attorneys’ fees and costs. Alleged violators who believe in good faith that they have not violated the statute may also seek immediate relief via expedited and declaratory actions.
Sandbox Program: TRAIGA 2.0 provides a procedural safe harbor in which an individual can obtain legal protection and limited access to the Texas market to test innovative AI systems. To qualify, individuals must first secure approval from the Texas Department of Information Resources. This protected testing period is capped at three years, unless the Department determines there is good cause for an extension. Participants are required to coordinate with any applicable agencies, and to report quarterly.
Preemption Clause: Finally, the law prohibits local and municipal governments from enacting alternative AI regulations. TRAIGA 2.0 frequently emphasizes the concerns of risk, ethics, and civil liberties, and the preemption clause appears to focus on a unified, state-wide approach to these concerns. This preemption clause is also consistent with the contested Texas Regulatory Consistency Act passed during the 88th Texas Legislative session.
Additional AI Bills Regarding Deepfakes and AI-Generated Sexual Materials Signed into Law
Governor Abbott also signed into law two additional bills clarifying criminal penalties associated with deepfakes and AI-generated sexual materials. Both laws go into effect on September 1, 2025.
SB 441 increases the criminal penalty for threatening with or creating sexually explicit deepfakes, or so-called revenge pornography created by AI, to a Class A misdemeanor. Under certain circumstances, including if the aggrieved party is a minor, the penalty rises to a felony.
HB 581 restricts minors’ access to AI-generated sexual materials on the internet. The law requires website operators offering publicly accessible AI tools either to implement age-verification access requirements or to expressly prohibit and remove such content altogether.
Additional AI Bills Regulating AI Systems for State Agencies and Local Governments
The Texas Legislature also enacted a suite of forward-looking bills that underscore the state’s stated commitment to responsible and transparent AI governance, particularly within public institutions. These measures, also effective September 1, 2025, are poised to shape how state agencies and local governments adopt and manage AI systems and technologies.
SB 1964 amends the Texas Government Code to create an AI system code of ethics and minimum standards governing state agencies’ and local governments’ use of AI systems in decision-making processes that affect individuals’ rights, benefits, or privileges. While not explicit, this could include employment-related decisions. State agencies and local governments must publicly disclose their AI use policies on their websites, provide clear notice to users if an AI system is used in a decision that affects the individual, and conduct impact assessments on the use of AI systems, regardless of whether they are deployed internally or in conjunction with a third-party vendor. Furthermore, the attorney general has enforcement authority with the ability: (1) to void a contract if a vendor violates or causes the state to violate the law; or (2) to subject the state agency or local government to administrative review or legislative oversight.
HB 2818 creates an AI division within the Department of Information Resources. This new division is charged with assisting state agencies and public institutions with implementing generative AI technology, along with other projects the division finds appropriate.
HB 3512 complements these other legislative efforts by requiring AI training for designated public employees and officers and expanding on existing cybersecurity training mandates. It also defines state-certified AI training programs and obligates agencies to adopt written policies for evaluating the fairness, accuracy, and effectiveness of AI systems, again reinforcing the requirement to publish these policies online for public transparency.
Employer Take-Aways
Like any bill attempting to regulate a still-developing technology, TRAIGA 2.0 and the other AI-related bills have stirred up controversy among those involved in the development of this new technology. Texas opted for lean AI regulations focusing almost entirely on state government’s development and use of AI systems. As opposed to the original version of the bill, TRAIGA 2.0 imposes no requirement that private employers disclose their use of AI to make or aid in employment-related decisions to employees or job applicants. In fact, the law generally is silent on the duties, requirements, and expectations in employment and commercial contexts.
Relatedly, in Governor Abbott’s signing statement accompanying TRAIGA 2.0, he noted that the federal government is also considering an AI-related bill, which includes a moratorium on states and other political subdivisions enforcing AI-related laws or regulations for 10 years. He then forecasted that if federal law ultimately prohibits state enforcement of rules in the AI space, he will instruct executive agencies to take the action necessary to ensure that federal funding is not compromised.
Regardless of the future role of federal regulation, at this time, TRAIGA 2.0 and the other AI bills passed in the state of Texas are set to shape the direction of AI regulation, and employers in both the public and private sectors should remain vigilant when operating in Texas.