Labor Organizing and AI: The Employer Perspective
At a Glance
- AI presents both challenges and opportunities for labor-management relations.
- This article outlines some key labor positions on AI and offers practical guidance for employers navigating this evolving landscape.
Artificial intelligence (AI) presents both a significant opportunity for employers and a potential source of reputational risk, depending on how its adoption is handled. As AI transforms the workplace, unions are responding with a mix of concern, advocacy, and strategic adaptation. For employers, especially those in unionized environments, understanding these responses is essential to effective labor relations and to compliance with emerging legal and regulatory frameworks.
This article outlines some key labor positions on AI and offers practical guidance for employers navigating this evolving landscape.1
“AI” here refers to an emerging definition of “AI system,” set down in EU law and recently adapted in a proposed AI law for New York. The definitions below center on a concept of “inference,” historically a function performed by human workers (emphasis added):
European Union AI Act definition of AI system:
“AI system” means a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.2
New York Senate Bill 1169 definition:
“Artificial intelligence system” or “AI system” means a machine-based system or combination of systems, that for explicit and implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Artificial intelligence shall not include any software used primarily for basic computerized processes, such as anti-malware, anti-virus, auto-correct functions, calculators, databases, data storage, electronic communications, firewall, internet domain registration, internet website loading, networking, spam and robocall-filtering, spellcheck tools, spreadsheets, web caching, web hosting, or any tool that relates only to internal management affairs such as ordering office supplies or processing payments, and that do not materially affect the rights, liberties, benefits, safety or welfare of any individual within the state.
How Unions Are Responding to AI
Unions have historically approached technological change with a focus on job security, retraining, and perceived fairness, both in decisions to eliminate certain roles and in the process and pace of change. The rise of AI has prompted a similar response, but with some new dimensions. Recent labor conferences and policy initiatives reveal several themes:
- Job protection and bargaining: Unions are seeking to bargain over the implementation of AI tools, particularly when these tools affect job duties, performance evaluations, or employment status. For example, the Communications Workers of America (CWA),3 the AFL-CIO Technology Institute,4 and UNI Global Union5 have each advocated in favor of collective bargaining over AI deployment.
- Transparency and oversight: Labor organizations are calling for greater transparency in how AI systems are used, especially in hiring, scheduling, and disciplinary decisions. They argue that algorithmic decision-making must be subject to human oversight to prevent bias and ensure accountability.6
- Reskilling and workforce development: Rather than necessarily opposing AI adoption, some unions are seeking employer-funded upskilling programs to help workers transition into new roles. UNI Global Union, which represents over 20 million workers globally, is arguing in favor of digital retraining as a labor right in the AI era.7
- Digital trade agenda: Unions are seeking to influence comprehensive digital trade agendas, including requests for consultation as part of AI implementation, privacy protections, and guarantees that workers will not be disciplined by automated means.8
These positions perhaps reflect a shift in labor strategy, away from resisting automation and toward shaping its implementation in ways that the labor movement views as protective of workers.
Impact of Recent NLRB Leadership Changes
The recent changes in leadership at the NLRB under the second Trump administration have already begun to reshape the agency’s strategies—particularly in ways that may affect labor organizing and workplace protections. President Trump’s firing of NLRB General Counsel Jennifer Abruzzo and Board Member Gwynne Wilcox—both of whom were seen as pro-labor—has left the Board with only two active members, effectively paralyzing its ability to issue decisions.9
As of the time of writing, Wilcox had been reinstated by the U.S. District Court for the District of Columbia on March 6,10 had that reinstatement stayed by a three-member panel of the D.C. Circuit on March 28, only to be reinstated when the full D.C. Circuit vacated the stay on April 7,11 and then had the reinstatement stayed again by Chief Justice Roberts on April 9.12
NLRB Acting General Counsel William Cowen rescinded an extensive list of policy memoranda that had been perceived as pro-union. The rescinded memoranda include those covering:
- Electronic monitoring and algorithmic management of employees that interferes with bargaining rights.
- The rights of student athletes.
- Educational privacy.
- Non-compete agreements.
- The rights of immigrant workers.13
With the NLRB’s enforcement capacity diminished, employers may face fewer immediate legal challenges to their labor practices in the short term. However, this also increases the risk of inconsistent enforcement and legal uncertainty, particularly as to when the Board will regain a quorum and whether courts will ultimately reverse the administration’s actions.
Strategic Guidance for Employers
For employers, a proactive and transparent approach to AI implementation will help to reduce reputational risk and promote constructive labor relations. Strategies include:
- Be clear on your use cases: Not every situation is necessarily enhanced by the adoption of AI tools. Be clear about the expected gain and about what the company is committing to.
- Conduct AI impact assessments: Before deploying AI tools, obtain legal advice on the application of emerging AI laws. Assess the tools’ potential impact on job functions, employee rights, and workplace dynamics. This will help identify areas where labor engagement is likely to be needed.
- Develop clear AI governance policies: Establish internal policies that govern the use of AI in employment decisions. These should address permitted uses, protection of personal information, bias mitigation, and human oversight.
- Support workforce development: Investment in employee training in AI tools may mitigate the need for reductions in headcount as roles evolve.
- Monitor legal and regulatory trends: Employers should stay informed about new laws and guidance at the federal, state, and local levels, and consult legal counsel.
- Be prepared to bargain over AI adoption: If AI tools could materially affect working conditions, unionized employers should be prepared to bargain with unions. Employers should carefully consider communication strategies and manage the adoption of AI tools as a corporate relations issue.
AI Law Reform in New York: What Employers Need to Know
Employers operating in New York should be aware of several legislative developments that could significantly impact how AI is deployed in the workplace.
Several relevant bills are currently under consideration in the New York State Legislature:
- Senate Bill S1169: This bill would in effect replicate New York City’s existing Local Law 144 of 2021 (the “Automated Employment Decision Tools” law) at the state level, with a greater focus on regulating “high-risk” AI systems as defined. It explicitly covers employment. It would, among other things:
- Require independent audits of "high-risk" AI systems.
- Prohibit algorithmic discrimination.
- Establish enforcement mechanisms through the attorney general and a private right of action for individuals alleging harm from AI misuse.
While still in committee, this bill has garnered attention due to its broad scope and civil rights implications. It does not yet enjoy bipartisan support14 and will be subject to lobbying efforts.
- Senate Bill S934: This bill would require conspicuous warnings on generative AI systems, alerting users that outputs may be inaccurate or inappropriate.15 It has already passed the Senate and is now in the Assembly’s Science and Technology Committee. Given its narrow focus and consumer protection framing, this bill may have a higher likelihood of passage in 2025.
These are just two of a raft of current AI-related legislative proposals in New York.16
A Path Forward
AI presents both challenges and opportunities for labor-management relations. Some employers and tech firms are already working with unions to co-develop training programs, obtain feedback on AI tools as they are developed, and establish joint policy positions.17 These initiatives can serve as models for balancing innovation with worker protections.
Unions are responding to AI with a mix of vigilance and engagement, seeking to ensure that technological change benefits workers rather than displacing them. For employers, the key to navigating this transformation lies in proactive planning, transparent communication, and a commitment to fair and lawful implementation. By anticipating labor concerns and aligning AI strategies with legal and ethical standards, employers can harness the benefits of AI while reducing reputational risk.
Conclusion
Originally from Newcastle, Australia, the author witnessed first-hand the impact of major technological and industrial change on a community. The city was known as the “Steel City”: its large steelworks employed tens of thousands of workers at its peak. It had been a fixture in the community since 1915, and the city’s economy centered on it. The steelworks shut down in 1999, leaving thousands of employees without the careers they thought they would have forever.18
This experience, replicated in countless communities across the American heartland, underscores the importance of managing technological transitions thoughtfully, to avoid a perception of individuals having been “thrown on life’s scrap heap.” Employers therefore have a vested reputational interest in ensuring that the adoption of AI systems is handled in a way that is mindful of perceived impacts on individuals.