EU Artificial Intelligence Act – Legislation Adopted by the European Council

The long-awaited European Union Artificial Intelligence Act (the AI Act) is nearing implementation following its adoption by the European Council yesterday (21 May 2024). This completes the final major stage of the European Union (EU) legislative process, and the AI Act is expected to enter into force imminently. We considered the impact of this legislation in detail in our previous article: EU Artificial Intelligence Act — Final Form Legislation Endorsed by European Parliament.

The only remaining formalities are the signature of the President and Secretary-General of the European Parliament and Council and publication in the Official Journal, which is expected to happen in the coming days. The AI Act will enter into force 20 days after this takes place. The AI Act will become fully applicable 24 months after its entry into force (June 2026). However, some provisions will apply before that date.

Continue reading “EU Artificial Intelligence Act – Legislation Adopted by the European Council”

UK and US Announce Partnership on Science of AI Safety

On 1 April 2024, the UK and US signed a memorandum of understanding on the science of AI safety. This partnership is the first of its kind and will see the two countries work together to assess risks and develop safety tests for the most advanced AI models.

Following their announcement of cooperation at the AI Safety Summit at Bletchley Park last November, the UK and US have formally agreed to align their scientific approaches to AI safety testing, with plans to perform at least one joint testing exercise on a publicly accessible model. The partnership takes effect immediately and will see the two countries work together to tackle the safety risks posed by next-generation AI models. The agreement will facilitate collaboration between the UK AI Safety Institute (formed last November) and the US AI Safety Institute (which is still in its initial stages). It will include the sharing of vital information and research on the capabilities and risks associated with AI systems, together with the exchange of expertise through researcher secondments between the institutes.

Continue reading “UK and US Announce Partnership on Science of AI Safety”

UK Supreme Court Rules that AI cannot be an ‘Inventor’ Under UK Patent Law

In Thaler v Comptroller-General of Patents, Designs and Trade Marks [2023] UKSC 49, the UK Supreme Court ruled that AI cannot be an ‘inventor’ for the purposes of UK patent law. The ruling concludes a series of appeals from Dr Stephen Thaler and his collaborators, who argued that an AI system called ‘DABUS’ should be named as the inventor of two new inventions generated autonomously by it relating to food and beverage packaging and light beacons. This was part of a series of test cases, which have had limited success globally, seeking to establish that AI systems can make inventions and that the owners of such systems can apply for and secure the grant of patents for those inventions. The judgment noted that the broader questions of whether an invention generated autonomously by AI ought to be patentable, or whether the meaning of the term ‘inventor’ should be expanded to include machines powered by AI, were matters of policy that would need to be addressed by legislation.

The UK Supreme Court made three main findings.

  1. DABUS is not an ‘inventor’ under the Patents Act 1977 (“Patents Act”). An ‘inventor’ within the meaning of the Patents Act must be a natural person (a human being). Since DABUS is a machine, not a natural person, it cannot be an ‘inventor.’ It was not Dr Thaler’s case that he was the inventor and had simply used DABUS as a highly sophisticated tool. Had Dr Thaler made that case and named himself as the inventor, the Court noted that its decision might have been different, but it was not the Court’s place to determine that question.
  2. Dr Thaler was not entitled to apply for and obtain a patent simply by virtue of his ownership of DABUS. Dr Thaler sought to rely on the doctrine of accession, whereby the owner of existing property owns new property generated by that existing property (in the same way that a farmer owns the cow and also the calf). The Court held that this doctrine applies only to tangible property and not to intangible inventions. For this reason, title to the invention cannot pass as a matter of law from the machine that generated it to the owner of that machine. This argument also assumes that DABUS itself can be an inventor within the meaning of the Patents Act, which, as the Court had already established, it cannot.
  3. By failing to satisfy the requirements of the Patents Act, the two patent applications must be taken to have been withdrawn. Because Dr Thaler had failed to name an inventor and had failed to state a valid right to apply for and obtain the patents, the UK Intellectual Property Office had been correct to find that Dr Thaler’s two patent applications would be taken to be withdrawn at the expiry of the 16-month period prescribed by UK patent law for this purpose.

Commentary

Dr Thaler’s UK patent applications were part of a project involving parallel applications to patent offices around the world. The UK Supreme Court’s ruling is unsurprising and follows similar decisions in the United States and Europe.

The ruling raises significant issues for the AI industry, but it is important to focus on what it confirms: that inventors must be natural persons for the purposes of UK patent law. The judgment does not impact the patentability of AI-generated inventions, as it does not necessarily preclude a person from securing a patent, provided that a human being is named as the inventor.

UK AI Regulation Bill Proposes New AI Regulator

While the focus of attention in the world of AI in recent weeks has been the EU AI Act (see our earlier post, EU AI Act Agreed), there have also been some other noteworthy legislative developments. On 22 November 2023, the Artificial Intelligence (Regulation) Bill (the “Bill”) was introduced to the UK Parliament and passed its first reading in the House of Lords. The Bill seeks to establish a central AI authority (“AI Authority”) to oversee the UK’s regulatory approach to AI. The proposal for an AI Authority comes after the UK Government formally announced a UK AI Safety Institute at the global AI Safety Summit at Bletchley Park (summarised here).

Whilst the Bill largely reflects the approach of the UK Government, this is a Private Members’ Bill (“PMB”). PMBs are legislative proposals introduced into one of the UK Houses of Parliament by ‘backbench’ members (members who are not Government Ministers). Most PMBs fail to pass unless the UK Government steps in to support their progress through the legislative process.

Continue reading “UK AI Regulation Bill Proposes New AI Regulator”

EU AI Act Agreed

Late on Friday (8 December), the European Commission, Parliament and Council concluded their “trilogue” negotiations on the EU Artificial Intelligence Act. The summary below is based on the information available to date. It will be some time before the definitive text is finalized and released, as it must pass through various committee stages and have its legal language settled in each of the EU’s official languages.

Prohibited AI Applications

The following applications of AI will be prohibited:

Continue reading “EU AI Act Agreed”

Bletchley Park AI Safety Summit 2023

On 1 and 2 November 2023, world leaders, politicians, computer scientists and tech executives attended the global AI Safety Summit at Bletchley Park in the UK. Key political attendees included US Vice President Kamala Harris, European Commission President Ursula von der Leyen, UN Secretary-General António Guterres, and UK Prime Minister Rishi Sunak. Industry leaders also attended, including Elon Musk, Google DeepMind CEO Demis Hassabis, OpenAI CEO Sam Altman, Amazon Web Services CEO Adam Selipsky, and Microsoft president Brad Smith.

Day 1: The Bletchley Declaration

On the first day of the summit, 28 countries and the EU signed the Bletchley Declaration (“Declaration”). The Declaration establishes an internationally shared understanding of the risks and opportunities of AI and the need for sustainable technological development to protect human rights and to foster public trust and confidence in AI systems. In addition to the EU, signatories include the UK, the US and, significantly, China. Nevertheless, there are notable absences, most obviously, Russia.

Continue reading “Bletchley Park AI Safety Summit 2023”

©2024 Faegre Drinker Biddle & Reath LLP. All Rights Reserved. Attorney Advertising.