Employers Beware: AI Tools May Lead to Labor Force Friction and Strikes

July 28, 2023

According to recent studies, 83% of large employers surveyed rely in some form on artificial intelligence (AI) in employment decision-making, and 86% of employers that use AI admit that it is becoming a mainstream technology at their company. (See Chicago Tribune’s “Do Robots Care About Your Civil Rights?” and Harvard Business Review’s “AI Adoption Skyrocketed Over the Last 18 Months.”) The potential uses and benefits of AI are powerful, but the risks are, in some regards, less obvious.

To date, the literature has tended to focus on the potential discrimination and bias associated with the use of AI in employment decision-making. (See Keith E. Sonderling et al., The Promise and The Peril: Artificial Intelligence and Employment Discrimination, 77 U. MIA. L. REV. 1 (2022).) However, a new area of risk is emerging for employers: potential alienation of the labor force as “generative AI” is used to replace traditional forms of labor.

Generative AI, specifically large language models, can be trained on large quantities of text data and, in response to prompts, generate new text by repeatedly predicting the “best” next word. The use of generative AI for creative works, scripts and guidance has opened a new avenue for automation. While many welcome automated machines replacing dangerous tasks in heavy industry, far fewer welcome AI replacing the role of creative professionals.
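
For readers less familiar with the mechanics, the sketch below illustrates that predict-and-append loop in Python. It is purely illustrative: real large language models learn these probabilities with neural networks trained on enormous text corpora, and the toy word-pair counts and function names here are invented for the example.

    # Toy illustration: generate text by repeatedly predicting the most
    # likely next word. Real LLMs learn next-word probabilities with
    # neural networks; these hand-written word-pair counts are invented.
    TOY_COUNTS = {
        "the": {"writer": 3, "script": 2},
        "writer": {"drafts": 4, "revises": 1},
        "drafts": {"a": 2},
        "a": {"script": 2},
        "script": {"ends": 1},
    }

    def generate(prompt: str, max_words: int = 6) -> str:
        words = prompt.split()
        for _ in range(max_words):
            followers = TOY_COUNTS.get(words[-1])
            if not followers:
                break  # no known continuation; stop generating
            # Append the "best" (most frequent) next word.
            words.append(max(followers, key=followers.get))
        return " ".join(words)

    print(generate("the"))  # -> "the writer drafts a script ends"

(Production systems sample from a learned probability distribution rather than always taking the single most frequent word, but the core loop is the same: predict, append, repeat.)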

While assessing the strength of their AI policies, employers can learn from three recent instances of labor force friction involving Hollywood, the National Eating Disorders Association and an online coding forum.

  1. Hollywood WGA Strike and DGA Negotiations; Use of Generative AI Is a Pain Point

Many writers and directors in Hollywood are anticipating the risks of generative AI and taking proactive measures to secure their roles. On May 2, 2023, the Writers Guild of America (WGA) went on strike against the Alliance of Motion Picture and Television Producers (AMPTP), which negotiates on behalf of several large networks and streaming platforms, while the Directors Guild of America (DGA) pressed similar concerns in its own negotiations with the AMPTP.

Alongside more traditional demands for higher pay and stricter rules regarding streaming platform revenue, the perceived threat of generative AI tools like ChatGPT emerged as a major pain point among the grievances listed.

WGA’s official statement requests the regulation of “… artificial intelligence on [Minimum Basic Agreement] MBA-covered projects: AI can’t write or rewrite literary material; can’t be used as source material; and MBA-covered material can’t be used to train AI.” This request comes after multiple media organizations have cut costs in both media production and their labor forces.

Negotiations between the AMPTP and the WGA broke down, with each side rejecting the other’s proposals. The AMPTP and the major Hollywood studios have stated they do not plan to return to the negotiating table before late October 2023. The DGA appears to have been more successful, with Jon Avnet, the chair of the DGA’s negotiating committee, stating that the AMPTP and DGA have come to a “groundbreaking agreement confirming that A.I. is not a person and that generative A.I. cannot replace the duties performed by members.”

By June 26, 2023, the DGA had voted to ratify a new labor contract with the AMPTP. Although the DGA has accepted the proposed terms, the WGA has intimated it will continue holding out for a better response from the AMPTP on this and other grievances. On July 14, 2023, the WGA gained additional support on the picket line when the largest actors’ union, the Screen Actors Guild – American Federation of Television and Radio Artists (SAG-AFTRA), joined the strike, putting even more pressure on the AMPTP. SAG-AFTRA had previously expressed public support for the WGA strike, and its own strike raises similar issues of AI use in the film and television industry.

Many media projects at leading studios have already been halted, conjuring memories of the 2007 writers’ strike, which cost an estimated $2.1 billion over the 100 days the picket lines remained standing. Time will tell whether strikes related to generative AI will cause similar impacts in the media industry. Stay tuned…

  2. National Eating Disorders Association Chatbot Woes

The potential labor friction caused by generative AI extends even to the health and wellness industry. On March 31, 2023, the National Eating Disorders Association (NEDA) scaled back several of its services, including a long-running telephone helpline through which those suffering from eating disorders could seek help from live staff. The employees who staffed the helpline, among others, were terminated, with the layoffs occurring soon after NEDA employees unionized.

NEDA then announced the introduction of a chatbot named “Tessa” that could “chat” with visitors to NEDA’s website. Reports began surfacing on social media soon after Tessa’s launch, with screenshots showing the chatbot offering users weight loss tips as a solution to body-image issues, even after the users stated outright that they were suffering from an eating disorder.

NEDA has since pulled Tessa from the site for auditing, and NEDA CEO Elizabeth Thompson has released a statement that Tessa was never meant to serve as a replacement for the helpline and that the views expressed by the chatbot do not reflect the organization’s mission. Thompson later emailed the chatbot’s developer, X2AI (since renamed Cass), asking how the problematic language entered the chatbot’s closed program; the developer responded that the messages at issue were the 0.1% of cases in which the chatbot “did not stick to guidelines.” To date, the chatbot has not returned to NEDA’s site.

  3. Moderators of Stack Overflow Strike Over AI Presence

On June 5, 2023, the moderators of the popular coding forum Stack Overflow posted a statement on the site announcing that they would stop performing volunteer moderation work until the company addressed its stance on AI-generated content. The site, part of the larger Stack Exchange network, provides a hub for programmers, developers and other tech industry professionals to ask questions and find resources.

In December 2022, the site announced a temporary ban on all ChatGPT-generated content, before rescinding the ban and leaving enforcement to the moderators who run individual forums. On May 30, 2023, a Stack Overflow staff member disclosed a third change in direction, imposing new standards on how moderators identify AI-generated content and suspend user accounts that improperly post such content. The moderators, who volunteer to manage the site’s forums, say the new standards prevent them from acting quickly to remove AI-generated content, which in turn leaves the forums inundated with “spam.”

In contrast, Stack Overflow’s vice president of community, Philippe Beaudette, responded to the strike by stating that the previous ChatGPT detection tools had been overbroad in banning accounts suspected of using generative AI. Beaudette went on to state that only a small percentage of moderators, around 22%, were involved in the strike. It remains to be seen how AI tools might cause further friction among the company, its moderators and its user base.

Practical Takeaways for Employers

These three examples of labor friction offer valuable case studies and practical considerations for employers implementing AI tools. Employers can mitigate the risk of employee disenfranchisement by developing thoughtful AI policies and safeguards while promoting transparency and committing to developing and using AI responsibly.

Employers should also stay apprised of pending and enacted AI regulation and other government actions that could directly affect their operations and labor force. Given the current patchwork of regulation, companies are increasingly turning to self-regulation — evaluating how, why and where they use AI and emphasizing their standards for “ethical” or “responsible” use of AI. Companies are also publishing comprehensive AI reports, policies and blueprints to help set standards and guardrails for the AI industry.

Ultimately, as employers consider implementing AI technology, they should carefully evaluate the developers and programs behind the tools to ensure compliance with relevant laws, keep pace with emerging standards for the responsible use of AI, and reduce the potential for labor force friction.

 
