US court rules AI-generated art cannot be copyrighted
Additionally, the use of generative AI to create synthetic data, which can in turn be used to train other AI models, raises concerns about the privacy of individuals who may be represented in that data. Generative AI holds tremendous promise for a wide range of applications, but it also raises important questions about the applicability of intellectual property rights and the ownership of generated works. As this technology continues to evolve, it will be important to develop new ownership models and to explore where the infringement risks lie. The PIRT suggested that TDM should be permitted for any purpose, including the use of publicly available content protected by intellectual property rights (including databases) as an input to TDM. It also recommended a code of practice, a requirement for altered images to be labelled as generated or assisted by AI, and the use of technological solutions for ensuring attribution and recognition, such as watermarking.
It has also been reported that a major UK newspaper is preparing for a legal battle with Google over the alleged unauthorised use of thousands of online news stories to train Bard, its ChatGPT rival. Other lawsuits include one filed earlier this year by Sarah Silverman and two other authors against OpenAI and Meta over their data-scraping practices. The MLC Letter is just one of many ways in which the Copyright Office is addressing topics related to AI and AI-generated works. Earlier this year, the Copyright Office launched an AI initiative, which includes four public listening sessions focused on the use of AI to generate works in the creative industries: ‘Literary Works, Including Software’ (held on April 19, 2023), ‘Visual Arts’ (May 2, 2023), ‘Audiovisual Works’ (May 17, 2023) and ‘Music and Sound Recordings’ (May 31, 2023).
Whose Images Train AI Models?
This is the second article in our AI 101 series, where the team
at Lewis Silkin, Ius Laboris’s UK firm, unravel the legal
issues involved in the development and use of AI text and image
generation tools. In the first article of the series, we looked at
how generative AI tools are trained and why disputes have arisen
involving some of the major AI companies in both the UK and US. In the ever-evolving world of technology, the boundaries of law are constantly being tested.
In section 178 of the CDPA 1988, computer-generated works are defined as works “generated by computer in circumstances such that there is no human author of the work”, thus acknowledging the possibility of works without human authors. It is hard to deny the crux of the artists’ argument, which is that the current IP and creative-rights legal frameworks, designed to protect artists from having their work used without permission or compensation, are simply not adequate for the AI era.
If copyright does not subsist in AI-generated works (e.g. images, articles, music), they can be freely copied by anyone without risk of copyright infringement liability. Stephen Thaler, an artist and program developer based in the United States, created an AI image generator dubbed ‘Creativity Machine’, claiming that it had generated a piece of art ‘on its own accord’, according to recent court documents. Thaler then tried to copyright the image under his own name, arguing that, as the creator of Creativity Machine, he could be credited with the AI’s output.
This means there is a very good chance that the training data contains material protected by copyright, and whether OpenAI has the right to use this data, let alone to monetise it, is highly questionable. The MLC Letter is an example of the numerous ways in which AI-generated works may invoke aspects of the Copyright Act. The MLC Letter also highlights that the line between a human-generated work that uses AI (and is protectable with a copyright) and an AI-generated work (where copyright protection is not available) requires a ‘case-by-case’ analysis and may not always be clear. We expect that as uses of AI become more sophisticated, these questions will become more complicated.
Regardless of whether or not the use of ChatGPT amounts to “fair use”, the content created with it, if agreed to be a derivative work, is subject to copyright law. Judge Howell sided with the Copyright Office, saying human authorship was a “bedrock requirement of copyright”. It is crucial that businesses have an accurate and up-to-date understanding of any generative AI used by their staff within the organisation.
- Since developers have no creative input in the end product and may not even have any intention to create any kind of artwork, it is arguable that attributing authorship to them runs contrary to the basic premise of copyright laws.
- Dr Trapova explains that the three benchmark requirements needed to be mapped onto the four pillars individually in order to determine if they were protected by copyright – at each stage and in the final output.
- As generative AI models ingest more copyright-protected material, the landscape of intellectual property rights becomes increasingly complex.
Inside the music industry, that astonishment is coupled with existential concerns for the future of the industry. Although a growing number of artists are already using the fast-developing technology to support and speed up the creative process, the unlicensed use of recorded-music catalogs to “train” AI systems to generate new songs has thrown up some very serious copyright issues. Oliver Lock, Owen O’Rorke, and Ethan Ezra Howard at the law firm Farrer & Co have kindly provided Omdia with their thoughts on the legal considerations that are set to unfold in the next few years. With the recent relaxation of copyright laws in the UK, AI developers now have the freedom to utilise a vast array of copyrighted content as training data for machine learning models.
“In the Office’s view, it is well-established that copyright can protect only material that is the product of human creativity,” the statement reads. It cited both a previous rejection of Thaler’s A Recent Entrance to Paradise image and an unrelated graphic novel whose imagery was generated from prompts written by a person. Let’s say, for example, that the European Union passes legislation mandating that AI must not replace humans in certain jobs. If a multinational company based in the US uses AI for customer services in its business call centre, European customers may be unable to access that service.
OpenAI, for example, attracted investment of $10 billion from Microsoft based on its potential to generate revenue in the future. To mitigate these risk areas, I recommend that companies implement and operationalize policies that demonstrate clear and consistent human involvement, creativity, artistry, and originality in the development of AI-generated content. The Copyright Office confirmed that AI-generated works are unprotectable where human involvement is limited to providing a prompt that generates content. However, the Copyright Office also recognized that human selection, arrangement, and/or modification of AI-generated works may constitute human expression warranting copyright protection.
Artist Eva Toorent is one of the founding members of the European Guild for Artificial Intelligence Regulation (EGAIR). She believes that AI companies should have to obtain artists’ opt-in consent before using their work to train algorithms that can create other works. DeepMind, Google’s artificial intelligence (AI) division, allegedly harvested a vast cache of around 1 million news articles from the Daily Mail and CNN websites to help develop its chatbot, Bard. Biases in generative AI can also lead to legal and ethical issues, including the potential risk of discrimination.
The IPO is to produce a code of practice by the summer that will provide guidance to support AI firms in accessing copyright-protected works as an input to their models. The upshot is that the CDPA 1988 does not make clear who should be considered the author of art created using generative AI tools. There are likely to be disputes over who the author is, particularly given that several people and/or companies could claim to be responsible. The logical extension of the debate is whether the AI tool itself could ever be considered the author of a work and therefore a legal person, in the same way that a company has legal personality with the ability to own and enforce rights. Ethan advises clients on a variety of intellectual property (both contentious and non-contentious), commercial contracts, and information law matters. His clients include higher education institutions, cultural organisations, businesses, and schools.