What is the issue with Intellectual Property?
Intellectual property (IP) rights, such as copyright, have proved extremely adaptable over the years in accommodating new technologies, but probably the greatest challenge IP now faces is accommodating advances in artificial intelligence (“AI”), much of it built on techniques known as “machine learning”.
What is AI?
AI covers many things. In essence, it encompasses a range of algorithm-based technologies, which in some forms seek to copy human thought processes to solve complex tasks.
An AI system usually involves the creation of an algorithm, which uses a collection of data to model some aspect of the world. It then applies this model to a new data set, in order to make predictions, recommendations or classifications.
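The train-then-apply pattern described above can be illustrated with a minimal sketch. This is purely illustrative: the data and the simple linear model are assumptions for the example, not a representation of any particular AI system.

```python
def fit_linear(xs, ys):
    """'Training': fit a model y = slope*x + intercept to a data set
    by ordinary least squares, modelling some aspect of the world."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

def predict(model, new_xs):
    """Apply the trained model to a new data set to make predictions."""
    slope, intercept = model
    return [slope * x + intercept for x in new_xs]

# Build the model from training data...
model = fit_linear([1, 2, 3, 4], [2, 4, 6, 8])
# ...then apply it to inputs it has never seen.
print(predict(model, [5, 6]))  # -> [10.0, 12.0]
```

Real AI systems replace the simple linear fit with far more complex models, but the legal questions below arise from this same structure: a model is formed from (possibly protected) input data and then generates new output.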
‘Generative AI’ refers to AI capable of generating text, images, or other multimedia, using models formed by inputting training data. Readers may be aware of popular Generative AI models such as the language model chatbot ChatGPT, or the text-to-image art model Midjourney.
Some AI systems are fully automated when put into use, so that the AI output, and any decisions based on it, are implemented without any human intervention or oversight.
In other systems, the AI output is combined with other information and then considered by a human, who makes a decision based on it (often referred to as having a “human in the loop”).
Relevant Intellectual Property Rights
The most relevant IP rights when considering AI are copyright and patents.
There are two main issues when it comes to considering the interaction between AI and copyright: (i) copyrightability; and (ii) copyright infringement.
Taking copyrightability first, a case involving a monkey may seem a strange place to start when considering IP and a machine, but it is informative.
The so-called “monkey selfie case” (Naruto -v- Slater) in the USA was concerned with the authorship of a photograph. The camera equipment had been set up by Mr Slater (shutter speed, lens, aperture, etc) near a troupe of monkeys. One of them pressed the shutter button and took a photograph of himself.
After a protracted series of cases, with the monkey being represented by PETA (an organisation for the ethical treatment of animals), the US courts decided that a non-human could not be the author of the copyright work.
A similar result would have occurred under English law. Under the Copyright, Designs and Patents Act 1988 (CDPA) (s 11), the author of a work is, generally speaking, the first owner of the copyright in it.
While a human photographer would normally be the author and first owner of a photographic work taken by him, if an animal (or machine?) “took” a photograph, when the photographer had set everything up, who would own the copyright?
Section 9(3) of the CDPA provides that if a photograph (an artistic work) is “computer generated”, the author will be the person by whom the arrangements necessary for the creation of the work were undertaken. “Computer generated” means that the work is generated by a computer in circumstances such that there is no human author of the work.
It is worth noting that s 9(3) applies only in the UK, and very few other countries have similar legal provisions. This means that copyright protection for AI-generated works may not exist in other jurisdictions.
Computer generated works have a shorter period of protection than other works – 50 years from the end of the year when the work was made.
This leaves open the question of what the position would be with AI machines which are capable of making their own judgments and decisions, with no human input.
Under the current law, it is likely that the author and first owner of any copyright work created by such a machine would be the legal person who created the AI machine – under s 9(3), they would be the person who made the arrangements necessary for the work to be created.
The second crack in the UK’s copyright framework concerns copyright infringement. Generative AI is currently the focus of intense debate in the media, in governments, and in courtrooms across the world. That debate centres on whether creators whose works are used to train Generative AI models are – or should be – afforded sufficient protection under the existing IP framework.
A spate of claims has arisen in court systems worldwide alleging copyright infringement in the training of such Generative AI models.
One high-profile matter in London’s High Court concerns allegations that Stability AI has copied millions of images belonging to Getty Images without a licence (Getty Images (US) Inc. –v- Stability AI Inc.). The images were used to train an AI model product, Stable Diffusion, to generate accurate images based on text prompts from its users.
The UK’s Intellectual Property Office (IPO) proposed in a recent consultation that an apt solution could be to carve out a specific exception to copyright infringement for text and data-mining (TDM) purposes.
At present, there is an existing TDM exception in the UK, but this is limited to non-commercial research use only. Having previously announced the intention to extend this to encompass commercial use, the UK Government has recently scrapped this idea due to concerns over the impact on the UK’s creative industry.
There remains uncertainty, therefore, as to how the UK Government will approach this issue in future. As AI continues to evolve, the situation will only become more fraught, and further litigation on the issue is expected. Indeed, at the time of writing, a group of prolific authors, including George RR Martin and Jodi Picoult, have brought a lawsuit in New York against OpenAI, claiming that its ChatGPT AI model has been fed data from their respective publications without permission.
There are two aspects of patentability to consider. First, the AI machine itself and second, the output from an AI machine.
Certainly, some AI machines have been patented, but there are a number of hurdles to overcome. One problem is that it may be difficult or impossible to describe how the invention works – because it all occurs in a “black box”. An essential requirement of a patent is that its specification must teach an ordinarily skilled person how to perform the invention.
This problem can sometimes be overcome by setting out the algorithms, source code or other description of how the invention works. However, that has its own issues, because a patent cannot be granted for such things as a mathematical method, a method for performing a mental act, pure business methods and programs for computers “as such”.
One possible solution would be if it can be claimed successfully that the invention makes a technical contribution or has a technical effect – perhaps a new method or approach derived from using the AI machine. Alternatively, the AI machine creator may try to keep the algorithms or machine logic confidential and protect it as a trade secret.
Turning to the output issue, it is conceivable that an AI machine could make or develop a patentable invention, but can it own that invention and any resulting patent?
Section 7 of the Patents Act 1977 provides that a patent for an invention will be granted primarily to the inventor. An “inventor” is defined as the actual deviser of the invention. The devising will most likely relate to the development of the programming logic or of the algorithms. Since devising an invention is a human activity, and patents are property rights which can only be held by a legal person, logically the “deviser” cannot be the AI machine.
There is currently a gap in the law where an AI machine makes an invention without any human intervention. There is no equivalent to s 9(3) CDPA, which applies only to copyright works. No one (except perhaps the devisers of the underlying algorithms) can claim to be the inventor, and so no one can apply for a patent. (Some applicants for patents have tried to name their AI machines as inventors, leading the UK Intellectual Property Office to issue a statement: “An AI inventor is not acceptable as this does not identify a ‘person’, which is required by law. The consequence of failing to supply this information is that the application is taken to be withdrawn.”)
In the past few years, a Dr Stephen Thaler has applied for patents in several countries across the world, claiming that the inventor was an AI machine called DABUS (Device for the Autonomous Bootstrapping of Unified Sentience). Dr Thaler also claimed that he had acquired the right to the grant of the patents by “ownership of the creativity machine, DABUS”. This has prompted a raft of litigation, some continuing today, regarding whether an AI machine can legally own a patent.
In the UK, the case came before the High Court (Stephen Thaler –v- Comptroller General of Patents Trade Marks and Designs). The Court at first instance had little difficulty in confirming the decision of the Intellectual Property Office. In essence:
- Since DABUS was a machine and not a natural person, it could not be regarded as an inventor for the purposes of the Patents Act 1977.
- There could be no transfer of DABUS’ rights to Dr Thaler. Putting to one side the point that DABUS could not be an inventor and so had no “rights”, a machine could not transfer rights to a legal person.
- Dr Thaler was not entitled to the grant of a patent as the owner of DABUS. There was no law which allowed the transfer of ownership of an invention from the inventor to owner in this case, as the “inventor” could not own any intellectual (or other) property.
Interestingly, the High Court Judge added a postscript to the judgment. He made the point that the question of whether the owner/controller of an AI machine that “invents” something can be said, himself, to be the inventor had not been raised in the case. He went on to say: “I in no way regard the argument that the owner/controller of an AI machine is the ‘actual deviser of the invention’ as an improper one… It would be wrong to regard this judgment as discouraging an applicant from at least advancing that contention, if so advised.”
The Court of Appeal has subsequently considered Dr Thaler’s appeal on the issue, and ultimately agreed with the High Court and the Intellectual Property Office. Unsurprisingly, Dr Thaler has appealed the decision again, to the Supreme Court, and is currently awaiting judgment.
The German Federal Patents Court has taken a perhaps more pragmatic approach to the issue, holding that the human with the closest nexus to the AI responsible for the invention can be named the inventor. Dr Thaler has appealed this decision to the German Federal Supreme Court.
In Australia, the Federal Court decided that an AI machine (DABUS again) could be an “inventor” for the purposes of a patent application (Thaler -v- Commissioner of Patents, 30 July 2021). This was overturned a year later by the Full Court of the Federal Court.
As at the time of writing, the only jurisdiction in which Dr Thaler has made a successful application is South Africa – although the South African system is essentially that of registration, and it has not examined the validity of the application. It is now open to challenge based on lack of novelty or inventiveness.
The UK’s Intellectual Property Office has published a consultation in which it proposes no material change in the UK law for now, opining that the current patent rules on inventorship are enough to protect AI-assisted inventions. There was, however, an indication that this may change as AI technology advances.
What does the future hold for Intellectual Property?
With increasing levels of sophistication, the creative link between the creator of the AI machine and the work created by the AI machine will inevitably reduce. This may give rise to the argument that a truly autonomous and “free thinking” AI machine should be entitled to own the copyright in creative works made by it. The same arguments would apply to patents and possibly also the one outlined in the postscript to the High Court’s DABUS judgment.
However, this in turn would create further questions. For example, who would be entitled to any royalty stream or other revenue from the exploitation of the AI created work or patented invention? Should it be the owner of the AI machine at the time of the creation? Should the rewards be shared with the original manufacturer of the AI machine? How could an AI machine effectively assign any IP rights?
The EU has recently announced what it describes as “the world’s first comprehensive AI law”. Although many hoped that the law would iron out the inadequacies of the EU’s current IP framework with regard to AI, the proposed law actually gives IP a rather light touch.
The only express provision dealing with IP would require the ‘provider’ of a foundation model used by a Generative AI system to “document and make publicly available a sufficiently detailed summary of the use of training data protected under copyright law” before it is “put into service”. What this means for AI companies is unclear – what constitutes “sufficient detail”? Will revealing this information to the public leave them vulnerable to legal challenge?
The EU’s draft legislation adopts a strongly risk-averse approach to dealing with AI. It has prompted alarm from some of Europe’s largest companies, concerned that its enactment will damage competition and technological progress, yet simultaneously fail to deal with AI’s real issues.
Although the UK Government has previously endorsed a pro-AI approach, the revocation of its plans to extend the TDM exception to commercial use, and the recent IPO recommendation that no changes should be made to patent law, may demonstrate a change in attitude. The outcome of the DABUS litigation in the Supreme Court could prove to be the prompt needed for legislative reform.