Author : Siddharth Yadav

Expert Speak Digital Frontiers
Published on Mar 04, 2024

Generative AI's impact on creative industries is unfolding through economic shifts and legal challenges, affecting job markets and redefining artistic originality

Artificial imagination: Balancing innovation and rights in the era of generative AI

The launch of the generative AI tool ChatGPT in November 2022 and the subsequent proliferation of AI tools have taken the global economy by storm. Over the past year, several institutions have published reports forecasting the impact of generative AI on the global job market, on specific sectors like education and the creative industries, and on the distribution of wealth between developed and developing economies.

Although there is an ongoing debate about whether such predictions are products of marketing hype and sensationalism, creative industries have already started wrestling with the prospect of AI replacing human labour. In the United States (US), the Screen Actors Guild-American Federation of Television and Radio Artists (SAG-AFTRA) and the Writers Guild of America (WGA) launched strikes in 2023 that halted production in major film studios. A major issue of contention for the unions was the use of generative AI tools like ChatGPT for scriptwriting, and of image/audio generation tools like Midjourney and DALL-E for generating digital voices and digital likenesses of performers. Objections to using AI in filmmaking are not a uniquely Western phenomenon, as members of other creative enclaves like Bollywood have also begun acting on this concern. The vulnerability felt by creative industries has legal ramifications, particularly in copyright and Intellectual Property (IP) law, that could significantly shape the trajectory of AI development in the near future.


Union actions and lawsuits against generative AI

As early as 2022, AI tools for creating deepfakes were making headlines for their potential to cause socio-political and economic disruption. The 2023 union action in Hollywood is a prime example of the vulnerability of creative industries to the incursion of AI. The primary complaints against the use of generative AI by SAG-AFTRA and WGA were:

  • Since applications like ChatGPT generate outputs by training on data created by other humans on the internet, they cannot—by definition—produce original content.
  • Since generative AI tools cannot create original content, their outputs cannot be used as source material, as doing so risks breaking copyright laws.
  • Since audio and video products protected by SAG-AFTRA and WGA union contracts are protected under copyright law, they cannot be used as training data for generative AI tools.

Lawsuits have already been filed in the US against AI companies like OpenAI, Meta, Microsoft and various AI art generators by artists, though such efforts have not been successful so far. In 2023, US judges dismissed two cases of copyright infringement against AI companies because the plaintiffs were unable to substantiate claims that the AI tools in question were creating output identical to the plaintiffs’ works. Legal battles are ongoing: The New York Times, along with several authors, filed a lawsuit against OpenAI in December 2023, and the future of creative work is being interrogated in the United Kingdom (UK) as well.


After the leak of a list of roughly 4,700 artists whose artwork was to be used to train Midjourney, British artists have begun working with US lawyers to sue Midjourney and other AI art generation companies like Stability AI, Runway AI and DeviantArt. There is a crucial difference between the lawsuits filed against these companies and those against text generation tools like ChatGPT. The artists allege that Midjourney encourages users to generate output in the likeness of a particular artist’s style. A similar argument was made against ChatGPT, but OpenAI argued in a blog post that the duplication of text in a particular writer’s style was a rare bug, often the result of cherry-picking from several outputs. In the case of Midjourney, artists allege that duplication of an artist’s style is not a bug but an advertised feature of the tool. At the heart of the lawsuits against generative AI tools is the issue of IP rights and copyright infringement.

Generative AI and Copyright Law

The lawsuits and union actions mentioned previously hinge on the use of copyrighted data on the internet as training data for generative AI, and the court decisions in these cases will likely prove decisive for the developmental trajectory of the technology. A notable issue that has risen to the surface amid the copyright and attribution controversy is whether a work generated by AI from user prompts can itself be copyrighted. In 2022, a US-based artist, Kris Kashtanova, created a graphic novel using Midjourney. However, their request to copyright the graphic novel was rejected by the US Copyright Office on the grounds that it is “not a product of human authorship.” The decision stated that when using generative AI, the output and the process “is not controlled by the user because it is not possible to predict what Midjourney will create ahead of time.” Furthermore, the information in the user prompt “may ‘influence’ generated image, but the prompt text does not dictate a specific result.” Similarly, US-based computer scientist Stephen Thaler sought to patent inventions reportedly created by his ‘Device for the Autonomous Bootstrapping of Unified Sentience’ (DABUS) system; however, his application was rejected on the same reasoning as in the Kashtanova case.


In the lawsuits against AI companies, the issue of authorship may prove crucial. The charge against them is that the training data for generative AI models is extracted from human-created works that are protected by copyright law. However, a strong counter-argument is that using copyrighted data should be considered ‘fair use’ because the AI is “only borrowing the work to extract statistical signals from it, not trying to pass it off as their own.” Using synthetic data instead of real data is another way AI companies can avoid the copyright minefield. Synthetic data is defined as “artificially created information that mimics the characteristics of real-world data but does not correspond to real-world events.” However, the primary drawback of synthetic data is that if the source data contains biases or inaccuracies, “the resulting synthetic data may only magnify the particular biases in the original data.” If the output of a generative AI grows increasingly biased and inaccurate, it defeats the point of using AI in the first place. As the capabilities of generative AI tools expand alongside their investment potential, so does the value of their training data. The outcomes of the lawsuits against generative AI companies are therefore certain to have a crucial impact on the entire sector.

Going forward

The risks associated with the adoption of generative AI can be minimised by encouraging AI companies to adopt responsible policies towards their training data and the outputs of their generative AI tools. Firstly, in terms of training data, companies can be required to offer opt-out options so that writers and artists can ensure their work is not used to train generative AI models. OpenAI has made an opt-out option available, although it has been criticised for not being user-friendly and for shifting the burden from big companies onto individual artists. Nevertheless, cross-sector collaborations and legal mandates can help ensure the availability of robust and user-friendly opt-out options. Secondly, regulators can implement labelling and certification frameworks so that AI companies are green-lit only if they acquire permission to use copyrighted data. This mechanism is being championed by Ed Newton-Rex, who formerly led Stability AI’s audio division and has recently founded the company Fairly Trained to tackle the issue of copyright infringement. Thirdly, in terms of output, one suggested approach to reduce the risk of digital artwork theft is embedding digital watermarks in AI-generated works.


Since its arrival, generative AI has positioned itself as the most exciting technological platform of the decade so far. However, hopes of economic and productivity gains have quickly been followed by the looming threat of job displacement, large-scale economic disruption and, in the most extreme cases, the obsolescence of human labour. Beyond the West, Indian actors have already started establishing legal protections against the replication and simulation of their digital likenesses. However, the picture is not all doom and gloom. As a case in point, in January 2024, SAG-AFTRA announced an AI voice agreement with Replica Studios, which will allow actors to license digital replicas of their voices. Several visual artists internationally have also started to embrace generative AI in their process. In organisational settings, the benefits of integrating generative AI into creative processes like storyboarding and initial draft creation are being observed. These developments highlight a crucial juncture in technological evolution, where the need for responsible and ethical development of AI is paramount.


Siddharth Yadav is a PhD scholar with a background in history, literature and cultural studies.

The views expressed above belong to the author(s).
