A legal brawl is brewing over how, and whether, copyright law should apply to the use of the artificial intelligence chatbot ChatGPT. In one corner are authors who own copyright in their work; in the other are those who claim there is a public-interest benefit in exempting content (or ‘output’) from AI such as ChatGPT from copyright restrictions.
ChatGPT operates by harvesting the work of others to produce content for its users. The problem is that this output may breach the copyright of the original author.
ChatGPT’s process works like this: the chatbot’s machine learning algorithm mines vast datasets, including texts, websites, news articles and books to respond to users’ prompts. This process uses billions of parameters to statistically analyse complex language structures and patterns, from which it can produce sophisticated and surprisingly accurate content.
To unpick this issue, we first need to ask who authors the text generated by ChatGPT. Is it the person who entered the prompt? Or is it OpenAI, the company that released ChatGPT onto the market in November 2022?
Under the Copyright Act 1994, the author of a computer-generated work is “the person by whom the arrangements necessary for the creation of the work are undertaken”. Arguably, as the person inputting the prompts into ChatGPT, the user becomes an author of the output created by the AI.
Can it be said that the person inputting a prompt into ChatGPT exerted any skill or labour in producing the output? It is an arguable point: some consultants are forging new careers in creating the best prompts. Even if this is so, will some output end up being so similar to the work of others that it becomes a copy of another’s work?
Chances are that your average AI user has not reflected on whether his or her use of ChatGPT might infringe copyright. It is likely that many users intuitively think the AI is the author of any output. However, rightly or wrongly, the narrow description of who (and what) can be an author for the purposes of the Copyright Act probably means the AI itself is not the author.
The million-dollar question is whether a ChatGPT user risks inadvertently infringing someone else’s copyright. For example, this could occur where the output reproduces a portion of copyright-protected material. This raises several policy questions. Does the author of a copyright work have a legitimate interest in that work being protected from a ChatGPT user’s inadvertent infringement? Is there a public interest in the benefits of AI sufficient to make such output an exception to the infringement regime in the Copyright Act?
The Copyright Act provides for some exceptions, including, for example, the fair dealing exception. This means copyright in a work is not infringed “if such fair dealing is accompanied by a sufficient acknowledgement”. The trouble is that ChatGPT’s output does not (at least for now) provide any acknowledgement of the works trawled from the internet to produce it. What are ChatGPT users to do? Do they run the gauntlet and hope their output does not infringe someone else’s copyright work?
Some work on this issue has already been done in the UK and in the European Union. Their laws have been updated to offer guidance around how copyright protections work in respect of AI-produced output.
They provide two models for AI exceptions, allowing what otherwise would be a copyright infringement, and give some idea of what reform of the Copyright Act, in respect of AI, could look like in New Zealand. The EU has two copyright exceptions for text and data mining which specifically permit the use of data to train LLMs (the large language models that power ChatGPT). Broadly speaking, these exceptions provide for:
- Text and data mining for the purposes of scientific research, innovation and education. This enables “non-commercial” research organisations and cultural heritage institutions to data mine (eg, universities, museums, etc.); and
- A general exception from copyright infringement for anyone, including commercial enterprises, using text and data mining.
However, a rights-holder can contract out of this latter exception, meaning that the copyright holder can reserve the right to carry out text and data mining of their work, and thereby prevent others from doing so. The UK has so far taken a narrower approach. Its Copyright, Designs and Patents Act 1988 provides a limited exception to copyright infringement, similar to the first category of EU exceptions.
The exception provides that text and data mining does not infringe copyright where it is done for the purpose of computational analysis for non-commercial research. Further, for the exception to apply, sufficient acknowledgement of the mined work must be given.
Some commentators have noted that these restrictive approaches do not allow the full potential benefits of AI to be exploited. That is why the UK government confirmed its intention to expand the scope of that exception to allow text and data mining of works protected by copyright and database rights for any purpose, citing the benefits to artificial intelligence and wider innovation in the UK. However, the minister responsible for the policy faces strong opposition to the proposal.

While this article has focussed on the text generated by AI, another open question relates to images created using prompts. Should that be considered more as “art”, attracting different protection?
Where to next?
There are myriad problems coming our way, along with new policy issues around the legitimate interests of authors who create content mined by AI versus societal interest in the advantages that AI data mining can contribute to the common good.
We’ll leave it for readers to weigh up how that tension should be settled. And if all else fails, we can always ask ChatGPT how it should be solved. ■
Steven Moe & Alex Summerlee are partners at Parry Field Lawyers in Christchurch ■