The Texas Responsible AI Governance Act and Its Potential Impact on Employers
On 23 December 2024, Texas State Representative Giovanni Capriglione (R-Tarrant County) filed the Texas Responsible AI Governance Act (the Act), adding Texas to the growing list of states seeking to regulate artificial intelligence (AI) in the absence of federal legislation. The Act imposes distinct obligations on developers, deployers, and distributors of certain AI systems in Texas. Although the Act covers a broad range of areas, this discussion focuses on its potential impact on employers.
The Act’s Regulation of Employers as Deployers of High-Risk Artificial Intelligence Systems
The Act seeks to regulate employers’ and other deployers’ use of “high-risk artificial intelligence systems” in Texas. High-risk systems are generally AI tools that determine or influence “consequential decisions.” In the employment context, such decisions may include hiring, performance, compensation, discipline, and termination. Notably, the Act exempts a number of common technologies from its scope, including tools intended to detect decision-making patterns, anti-malware and antivirus software, and calculators.
Under the Act, covered employers would have a general duty to use reasonable care to protect against algorithmic discrimination, including a duty to withdraw, disable, or recall any noncompliant high-risk AI system. To satisfy this duty, the Act sets out the following measures for covered employers and other deployers:
Human Oversight
Ensure human oversight of high-risk AI systems by individuals with sufficient competence, training, authority, and organizational support to oversee the consequential decisions the system makes.
Prompt Reporting of Discrimination Risks
Promptly report discrimination risks by notifying the Artificial Intelligence Council, a new body the Act would create, no later than 10 days after discovering such risks.
Regular AI Tool Assessments
Conduct regular assessments of high-risk AI systems, including an annual review of each system to confirm it is not causing algorithmic discrimination.
Prompt Suspension
If a deployer knows or has reason to believe that a system does not comply with the Act’s requirements, suspend use of the system and report the concern to the system’s developer.
Frequent Impact Assessments
Complete an impact assessment semi-annually and within 90 days after any intentional or substantial modification to the system.
Clear Disclosure of AI Use
Disclose to any individual in Texas who interacts with an AI system that they are interacting with AI, before or at the time of the interaction.
Takeaways for Employers
The Act is likely to be a key topic of debate in Texas’s upcoming legislative session, which begins on 14 January 2025. If enacted, the Act would establish a consumer protection-focused framework for AI regulation. Employers should monitor the Act’s progress and any amendments to the proposed bill while preparing for its potential enactment. For example, employers that use, or plan to adopt, high-risk AI systems in Texas may benefit from aligning their practices with the anticipated requirements now.