A note on artificial intelligence and intellectual property in Sweden and the EU

23 April 2020

1. Introduction

 

Today most industrial countries, including Sweden, are investing heavily in the development of artificial intelligence (“AI”) and machine learning software. According to a recent white paper from the Swedish Government (Ministry of Enterprise and Innovation, “National approach to artificial intelligence”, Article no: N2018.36) “Sweden aims to be the world leader in harnessing the opportunities offered by digital transformation”. As far as AI is concerned “the Government’s goal is to make Sweden a leader in harnessing the opportunities that the use of AI can offer”. The Swedish view is not unique. Many governments and international organizations have developed formal AI frameworks to help spur economic and technological growth (cf. Future of Life Institute, “National and international AI strategies”, 2019). Internationally, major investments are being made in AI research, especially in the United States and China. In Europe, the “European AI Alliance” has been formed to increase Europe’s competitiveness in the research and deployment of AI. In a recent White Paper (COM (2020) 65 final) the European Commission (the “EC”) has also unveiled an ambitious programme intended to strengthen and consolidate a European approach to AI.

 

Much like the countries in which they operate, an increasing number of corporations are convinced that AI will be essential to maintaining a leading position in the future. In fact, a clear majority of the early adopters are convinced that AI technologies are very important to their business success today. According to a recent report, the number of enterprises implementing AI technologies has grown by 270 per cent over the past four years (cf. Pooja Singh, “Enterprise use of AI has grown 270 per cent globally over the past four years”, Entrepreneur Asia Pacific, January 22, 2019). Hence, although strong and long-term research in AI will be essential to realize the technological opportunities, the current capabilities of AI technologies are already revolutionizing a very large spectrum of areas such as facial and voice recognition, autonomous vehicles, personalized medicine, legal discovery, investment fund management, military defense, energy production, individualized marketing, customer service, culture and entertainment. The rapid development is expected to continue. Analysts predict global spending on AI to reach USD 79.2 billion by 2022 (cf. International Data Corporation (IDC), “Worldwide spending on artificial intelligence systems will grow to nearly $35.8 billion in 2019, according to new IDC spending guide”, March 11, 2019).

 

Inevitably, seeing that AI is already becoming omnipresent in our everyday life, the development raises broad and multi-disciplinary policy questions, including several aspects of intellectual property (“IP”). Today, artificial narrow intelligence (“ANI”) systems can perform specified tasks such as generating artworks and music, writing news and novels, driving innovation processes and executing product suggestion and purchasing services. In the long run, it is not unlikely that we will have systems that can learn from experience with humanlike breadth and even surpass human performance in many cognitive tasks. Assuming that further research into, and development of, deep learning technologies and artificial general intelligence (“AGI”) will generate even more intelligent software, AI systems may not be dependent on any human intervention to achieve an almost unlimited range of outstanding results.

 

The technology transition brings into question several fundamental IP concepts. Seeing that the IP laws were written at a time when only natural intelligence and human cognitive processing were contemplated, AI challenges many traditional IP legal notions such as “originality”, “copying”, “author”, “designer”, “inventor”, “inventive step”, “a person skilled in the art” and the “average consumer”. Arguably, when AI systems are engaged to perform creative or other cognitive tasks, the prevailing humanistic approach to IP is not well suited to protect the generated results. From the system developer’s perspective, it is also important that the IP regulatory framework offers sufficient room for protection of AI technologies as such. In these regards, a closer look at the current legal requirements for IP protection reveals a number of questions that call for further discussion.

Set forth below is an introductory presentation of some IP questions raised by the technological advances in the AI field. The article discusses IP protection of AI technologies (Section 2), IP protection of AI generated works, inventions and designs (Section 3), protection of and access to data (Section 4) and the impact AI may have on trademark law (Section 5). The primary purpose is to provide an overview of some IP challenges in Sweden and the EU and, where possible, to offer some limited conclusions.

 

2. IP protection of AI technologies

 

2.1 Copyright law

 

An AI system is first developed as a computer program. Under EU and Swedish copyright law, copyright protection applies to the expression in any form of a computer program, provided that the program is original in the sense that it is the author’s own intellectual creation. In respect of the criteria to be applied in determining whether a computer program meets the originality requirement, no tests as to the qualitative or aesthetic merits of the program should be applied. Originality manifests itself in the structure and architecture of the program. The originality threshold is quite low. Simply put, as long as the author of a computer program has been able to select which steps will be taken and the way in which those steps are expressed, the computer program will be deemed original and will therefore be subject to copyright protection.

 

However, ideas, methods and principles which underlie any element of a computer program, including those which underlie its interfaces, are not protected by copyright. Only expressions of intellectual efforts (e.g. source code) are protected. In addition, since no registration is necessary for copyright protection to arise (although different options for voluntary deposit or registration exist in some EU member states), collection of evidence may sometimes be difficult. Therefore, from an economic standpoint, the scope of copyright protection for an AI system may be perceived as insufficient. Seeing that copyright will not protect the creativity, skill and inventiveness devoted to the development of the functional concept behind an AI system, it may be recommended not to rely solely on copyright law. It may also be prudent to explore the option of obtaining patent and/or trade secret protection, as such protection may be invoked to prevent others from technically exploiting, e.g. a certain algorithm and/or from creating computer programs that perform certain functions.

 

2.2 Patent law

 

AI systems rely on performing mathematical methods or algorithms by way of computer implementation. Hence, although an increasing number of AI related patents are being granted, the current law on patentable subject matter poses certain challenges. According to Article 52(2) of the EPC and Article 1(2) of the Swedish Patents Act, mathematical methods and computer programs are expressly excluded from patentability when claimed as such. In other words, pure mathematical methods and computer programs are not “inventions”.

 

As explained by the November 2019 edition of the Guidelines for Examination in the European Patent Office (the “GL”), AI and machine learning are based on computational models and algorithms which are per se of an abstract mathematical nature, irrespective of whether they can be “trained” based on training data (G-II, 3.3.1). Hence, the GL also state that the patentability of AI computational models and algorithms ought to be assessed according to the general guidance provided in respect of mathematical methods.

 

It follows that the methods and algorithms employed by an AI system must contribute to producing a technical effect that serves a technical purpose, by their application to a technical field and/or by being adapted to a specific technical implementation (cf. the decision of the EPO’s Board of Appeal (the “BoA”) in case T 2330/13). The “normal” inherent technical interactions between an AI system’s computer program and its hardware, such as the circulation of electrical currents in the computer, are not in themselves sufficient (cf. the BoA in case T 1173/97). As explained by the BoA “it is not the case that the implementation of a non-technical method on a computer necessarily results in a process providing a technical contribution going beyond its computer implementation”. Hence, normally a further technical effect is required. According to the BoA’s current jurisprudence “a technical effect requires, at a minimum, a direct link with physical reality, such as a change in or a measurement of a physical entity” (case T 0489/14).

The distinction between mathematical methods and technical processes lies “in the fact that a mathematical method or a mathematical algorithm is carried out on numbers (whatever these numbers may represent) and provides a result also in numerical form, the mathematical method or algorithm being only an abstract concept prescribing how to operate on the numbers. No direct technical result is produced by the method as such. In contrast thereto, if a mathematical method is used in a technical process, that process is carried out on a physical entity (which may be a material object but equally an image stored as an electric signal) by some technical means implementing the method and provides as its result a certain change in that entity. The technical means might include a computer comprising suitable hardware or an appropriately programmed general purpose computer” (the BoA in case T 208/84).

 

Accordingly, the mere use of a computer to perform calculations is not, as such, a patentable invention. Present case law requires a physical technical effect beyond the performance of a mathematical method or algorithm by way of computer implementation. For example, according to the GL, the use of a neural network in a heart monitoring apparatus for identifying irregular heartbeats makes a technical contribution (G-II, 3.3.1).

 

Arguably, the legal requirement of “a direct link with physical reality” may pose a threat to the patentability of certain AI technologies, seeing that the beauty of AI lies in its ability to mimic the human brain. An AI system is designed, e.g. to analyze and process data, and to decide what the best action is to achieve a specific goal. While these actions are essential, they do not, by themselves, indicate a technical use being made of the resulting decision. The prohibition on patents on “methods for performing mental acts” (Article 52(2) of the EPC) adds an extra layer of complexity in this regard. While the general purpose of an AI system is to assist (or replace) its user in the performance of a cognitive task, established case law prescribes that any method that could exclusively be carried out mentally will be deemed to lack technical character. Complexity of an activity is not normally considered to be sufficient to escape the mental act exclusion. This principle also applies to “any algorithmically specified procedure that can be carried out mentally” (the BoA in case T 0489/14, reasons 15).

 

It would thus seem that the very definition of AI may possibly disqualify certain AI technologies from patentability under Article 52(2) of the EPC. To mitigate this problem, special attention needs to be paid to the formulation of the patent claims. Preferably, the core AI technology should be described as an embedded component of a larger system, rather than applying for patent protection for a stand-alone AI technology having little or no connection to “physical reality”. If possible, terms such as “support vector machine”, “reasoning engine” or “neural network” should be avoided because, as explained in the GL, such terms may, depending on the context, be understood as references to abstract models or algorithms and do not necessarily imply the use of a technical means (G-II, 3.3.1). That said, given how fast AI is evolving, governments and other policy makers really ought to discuss whether the present subject-matter patentability standard sufficiently promotes the main objectives of patent law.

 

If an AI system meets the patent subject-matter eligibility standard, the invention will be examined under the same patentability requirements as any other invention. A patent will thus be granted only if the invention is new in relation to what was known before the filing date of the patent application (novelty) and differs essentially therefrom (inventive step). For the assessment of inventive step, all features which contribute to the invention’s technical character (as defined above) must be considered. Non-technical features are considered in the assessment of an inventive step only to the extent that they interact with the technical subject-matter of the claim to solve a technical problem or, equivalently, to bring about a technical effect. For instance, the GL recognize that “where a classification method serves a technical purpose, the steps of generating the training set and training the classifier may … contribute to the technical character of the invention if they support achieving that technical purpose” (G-II, 3.3.1). Conversely, if the implementation on a computer would be the only technical aspect of a claimed method, the method would lack an inventive step over a known general-purpose computer. In summary, an AI system will be patentable only if it provides a new and non-obvious technical solution to a technical problem, but this does not mean that patent protection will never be afforded, e.g. to neural network training methodologies, processes or techniques used to build, test and validate the system. The decisive question is whether the claimed invention, as a whole, is new, non-obvious and serves a technical purpose.

 

The mandatory disclosure requirements pose an additional challenge for AI inventions. Article 83 of the EPC and Section 8 of the Swedish Patents Act require that a patent application shall disclose the invention in a manner sufficiently clear and complete for it to be carried out by “a person skilled in the art”. In addition, Rule 42(1)(c) of the EPC requires that the description disclose the invention, as claimed, in such terms that the technical problem (even if not expressly stated as such) and its solution can be understood.

 

In the context of AI and machine learning algorithms, it may be difficult to determine how to satisfy these requirements. Sophisticated AI systems will sometimes produce results without explanation. This is commonly referred to as the “black box” dilemma. If an AI computer program is a black box, it will make predictions and decisions without being able to communicate its reasons for doing so. In essence, the black box predicament arises from the complexity of distributed elements, such as in deep neural networks, and from the inability of humans to visualize higher-dimensional patterns (cf. Yavar Bathaee, The artificial intelligence black box and the failure of intent and causation, Harvard Journal of Law & Technology, Volume 31, Number 2, 2018). AI that relies on machine-learning algorithms can sometimes be as difficult to understand as the human brain. Hence, a black box can make it difficult or impossible to disclose the innovation in sufficient levels of detail to satisfy Article 83 of the EPC and Section 8 of the Swedish Patents Act.
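
For readers who want a concrete sense of the black box problem, the following minimal sketch (written in Python with the NumPy library; the toy data, the network size and all parameter values are illustrative assumptions, not drawn from any actual system or patent application) trains a tiny neural network and then prints its answer for a new input together with the learned weight matrices. The only artefacts produced by the training process are matrices of numbers; nothing in them explains, in human terms, why the system answers as it does.

    import numpy as np

    rng = np.random.default_rng(1)

    # Toy training data: the XOR pattern, with a constant bias column appended.
    X = np.array([[0., 0., 1.],
                  [0., 1., 1.],
                  [1., 0., 1.],
                  [1., 1., 1.]])
    y = np.array([[0.], [1.], [1.], [0.]])

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    W1 = rng.normal(size=(3, 4))   # input -> hidden weights
    W2 = rng.normal(size=(5, 1))   # hidden (+ bias) -> output weights

    for _ in range(20000):         # plain gradient descent
        h = sigmoid(X @ W1)
        hb = np.hstack([h, np.ones((len(X), 1))])
        out = sigmoid(hb @ W2)
        grad_out = (out - y) * out * (1.0 - out)
        grad_W2 = hb.T @ grad_out
        grad_h = (grad_out @ W2[:4].T) * h * (1.0 - h)
        grad_W1 = X.T @ grad_h
        W2 -= grad_W2
        W1 -= grad_W1

    # The trained network produces an answer for a new input, but the only
    # available "explanation" is two matrices of numbers.
    test = np.array([[1., 0., 1.]])
    print("output:", sigmoid(np.hstack([sigmoid(test @ W1), [[1.]]]) @ W2))
    print("W1:", W1)
    print("W2:", W2)

A production-scale deep learning system differs from this toy mainly in scale, which is precisely why describing how such a system arrives at its results, in the manner contemplated by Article 83 of the EPC, can be genuinely difficult.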

 

The GL do not address the black box problem, but they emphasize that the invention must be described not only in terms of its structure but also in terms of its function, unless the functions of the various parts are immediately apparent (F-III, 1(4)). Consequently, if an AI invention is claimed without explaining in sufficient detail how the AI technology works, the application may be refused on the ground that it lacks a clear and complete disclosure of the invention. This happened, e.g. in case No. T 0521/95, in which the applicant asserted that the invention (a pattern recognition system) solved certain problems by simulating the operation of the human brain. According to the BoA, the invention was not simply a conventional associative memory, but rather a complex neural network that would be difficult to train successfully. Finding the correct training scheme was thus a critical part of the design of the system. The BoA noted that the description did not mention this matter, let alone provide any guidance on how the training should be done. Therefore, according to the BoA, the skilled person would not be able to train the whole system to solve the specific problems given in the application without undue burden. In conclusion, the BoA considered, e.g. that the lack of adequate instructions, the vague functional nature of the description and the lack of any concrete definition of the invention meant that the disclosure of the invention failed to fulfil the requirements set out in Article 83 EPC.

 

In summary, there are some hurdles to be overcome to satisfy patent examiners and courts that an AI system is eligible for patent protection. From the applicant’s perspective, one important question is which parts of the technology should be claimed. Should a possible patent focus on the processes by which the AI system is created, trained and validated, or should it rather focus on the final technical result achieved through these operations? In addition, although providing details in the claim can help avoid abstraction, doing so can limit the granted scope of protection. This raises several tactical questions, one of which is whether patent protection is desirable at all.

Sometimes it may be more appropriate to rely on contractual arrangements, copyrights and/or trade secret protection.

 

From society’s point of view, considering the important role that AI systems play in the development of new products and services, more political, academic and legal discussions are needed to ensure that patent law is predictable and that it provides for desired technological advances.

 

2.3 Law on trade secrets

 

Somewhat simplified, in Article 2 of the Trade Secrets Directive (EU) 2016/943 (the “TSD”) and Article 2 of the Swedish Act on Trade Secrets (the “TSA”), a “trade secret” is defined as information which: (i) is secret in the sense that it is not, as a body or in the precise configuration and assembly of its components, generally known among or readily accessible to persons within the circles that normally deal with the kind of information in question; (ii) has commercial value because it is secret; and (iii) has been subject to reasonable steps under the circumstances, by the person lawfully in control of the information, to keep it secret.

 

Accordingly, even though practically any information can be kept and protected as a trade secret, such protection is particularly suited to technologies that are incapable of independent discovery or reverse engineering and/or that cannot be described in detail without substantial efforts. Modern AI technologies are thus well suited to trade secret protection. For example, AI applications and functions may be provided as cloud services under such circumstances that external users do not get access to underlying algorithms and program code.
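
The cloud service model mentioned above can be illustrated with a minimal sketch (in Python, using the Flask web framework; the endpoint name and the placeholder model below are hypothetical and chosen purely for illustration). The algorithm, its parameters and its training data stay on the provider’s servers; external users only see the inputs they send and the predictions they receive back.

    from flask import Flask, jsonify, request

    app = Flask(__name__)

    def predict(features):
        # Placeholder for the proprietary model. Its code, parameters and
        # training data never leave the provider's infrastructure.
        return sum(features) / max(len(features), 1)

    @app.route("/predict", methods=["POST"])
    def predict_endpoint():
        payload = request.get_json(silent=True) or {}
        return jsonify({"prediction": predict(payload.get("features", []))})

    if __name__ == "__main__":
        app.run()

Because nothing but inputs and outputs crosses the service boundary, reverse engineering of the underlying algorithm becomes considerably harder, which also supports the “reasonable steps” requirement discussed above.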

 

Trade secret protection of AI technologies may be particularly important prior to patent application filings. The basic purpose of patent law is to reward inventors with a limited exclusive right to their invention in return for the disclosure of technical information that contributes to technological progress in society. When patents and patent applications are published, they provide an insight into current technological developments and help avoid superfluous parallel development.

 

This, however, does not necessarily mean that patents and trade secrets are mutually exclusive. In practice, patent protection and trade secret protection are often complementary. For instance, while a patent may protect a core AI invention, trade secrets may protect valuable know-how associated with the invention. It is not unusual that a patented invention cannot be effectively and commercially exploited without access to such know-how.

 

Trade secret protection undoubtedly has some advantages over patent protection. For instance, patent protection may be deemed ineffective or unattainable due to the current law on patentable subject matter or because of the invention disclosure requirements (cf. Section 2.2 above). Moreover, trade secret protection is not dependent on novelty or inventive step requirements. Trade secrets are immediately protected and generally cover broader subject matter than patents. In addition, as some AI technologies are very complex, a patent holder may not be able to effectively discern whether a third party is using the patented technology. Furthermore, trade secret protection is not subject to statutory time limits, whereas patent protection (as well as copyright protection) will inevitably expire after a given period.

 

Unlike patents and copyrights, however, a trade secret does not give its controller an exclusive right to exploit the protected subject matter. The information is only protected against misappropriation, such as unauthorized acquisition or disclosure. If, for any reason, a trade secret becomes “generally known among or readily accessible to persons within the circles that normally deal with the kind of information”, it will no longer be defined as a trade secret and, hence, the information will no longer be protected. In addition, as trade secret protection is not dependent on registration, it may sometimes be difficult to define and keep track of protected information and as a consequence it may be difficult to keep the information secret.

 

2.4 Concluding remarks

 

As with any technology, AI can be protected with a variety of IP assets. Patents, copyrights and trade secrets are all viable means. A combined-model approach, using the advantages of each type of IP protection, is probably the best option. The right IP strategy depends on a number of factors such as the type, expected lifespan, value and importance of the AI technology and the costs involved to obtain and enforce exclusive rights. An active management of the company’s IP assets will also require due regard to changes in the law.

 

3. IP protection of AI generated works, inventions and designs

 

3.1 Works

 

AI systems are capable of analyzing and reproducing products, processes and available data in order to create new outcomes. Another characteristic of AI systems is the ability to choose between alternatives in order to achieve the best outcome. Hence, the creative abilities of AI, including the capacity to create, e.g. music or paintings, are not dependent on a human writing detailed code with a desired visual or aural outcome in mind. Instead, one or more humans may write algorithms to “teach” the AI system a specific aesthetic by analyzing thousands of data sets including, e.g. images or sound. In the current state of the art, the collection of data to feed the algorithm is chosen by one or more humans. The algorithm then tries to generate new works in adherence to the aesthetics it has learned. Alternatively, the AI system is not “taught” to mimic a certain aesthetic or style but is rather tasked with creating something new, based on more general input such as thousands of representative Western canon portraits from the past 500 years. One example of this is the AICAN (artificial intelligence creative adversarial network). AICAN is a program that can generate innovative images in a way that can be considered relatively autonomous and unpredictable (cf. Elgammal, “AI Is Blurring the Definition of Artist”, American Scientist, Volume 107, Number 1, 2019). Another example is the Swedish theater play “Nattygsbordet”. According to the Gothenburg City Theatre, Nattygsbordet is written entirely by AI. The AI system has created the dialogue, situations, scenography, sound, lighting and costumes (cf. https://kulturpunkten.nu/evenemang/nattygsbordet-en-pjas-helt-skriven-av-en-al/?time=15908).
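
The train-then-generate workflow described above can be caricatured in a few lines of Python (using the NumPy library; the “features”, the numbers and the names are invented for illustration, and the sketch says nothing about how AICAN or any other real system actually works). The program estimates the statistical regularities of a human-selected corpus and then samples new instances that follow those regularities without copying any individual training example; real generative systems replace the simple statistics below with large neural networks, but the division of labour between human curation and machine generation is the same.

    import numpy as np

    rng = np.random.default_rng(42)

    # Stand-in for a corpus of existing works reduced to numeric features
    # (e.g. colour statistics); in reality these would be derived from
    # thousands of digitised images selected by humans.
    training_features = rng.normal(loc=[0.6, 0.3, 0.8], scale=0.05, size=(1000, 3))

    # "Learning the aesthetic": estimate the regularities of the corpus.
    mean = training_features.mean(axis=0)
    cov = np.cov(training_features, rowvar=False)

    # "Generating new works": sample feature vectors that follow the learned
    # regularities but do not reproduce any single training example.
    new_works = rng.multivariate_normal(mean, cov, size=5)
    print(new_works)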

 

Can there be any copyright to such results and, if so, where do the rights lie?

 

Under Swedish and EU copyright law, two cumulative conditions must be satisfied for any subject matter to be classified as a copyright protected work. Firstly, the subject matter must be expressed in a manner which makes it identifiable with sufficient precision and objectivity (cf. the CJEU in Case C-310/17 (Levola Hengelo), paragraphs 35-41). Copyright does not protect information but expressions. Mere ideas, methods, opinions and principles are excluded from copyright. Secondly, the subject matter must be original in the sense that it is the author’s own intellectual creation (cf. the CJEU in Case C-5/08 (Infopaq), paragraph 37). The CJEU has also clarified that an intellectual creation is an author’s own if the creation reflects the author’s personality. That is the case if the author was able to express his creative abilities in the production of the work by making free and creative choices (cf. the CJEU in Case C-145/10 (Painer), paragraphs 88-89). By contrast, as emphasized by the CJEU in Cases C-403/08 (Murphy) and C-604/10 (Dataco), the originality criterion is not satisfied when the creation is dictated by technical considerations, rules or constraints which leave no room for creative freedom (cf. Murphy, paragraph 98, and Dataco, paragraph 39).

 

The originality criterion, as developed by the CJEU with references to the author’s “intellectual” creation, “personality” and “free and creative choices”, strongly implies that originality requires a human creator. Arguably, when an AI system is tasked with generating a painting or any other work, based on its analysis and processing of data, the appearance and characteristics of the final, identifiable, expression (the work) are not a reflection of a human artist’s personality. Hence, works that are created solely by AI systems are most likely not eligible for copyright protection under EU copyright law. This conclusion is also consistent with earlier Swedish case law establishing that works created by animals are not copyright protected. In fact, at the current time, most jurisdictions appear to consider human intellectual authorship a prerequisite for copyright protection.

 

A human author requirement is also consistent with the statutory rules on the duration of copyright as expressed, e.g. in the Berne Convention. The Berne Convention stipulates that copyright protection lasts for the life of the author plus at least 50 years. The EU Directive 2006/116/EC states, with reference to the Berne Convention, that copyrights shall run for the life of the author and for 70 years after his death. According to the Swedish Copyright Act, copyright in a work subsists until the expiry of the seventieth year after the year in which the author died. The references to the “life of the author”, “the year in which the author died” and the author’s “death” strongly suggest that only natural persons can create copyright protected works. In addition, both the Software Directive 2009/24/EC and the Database Directive 96/9/EC expressly define authorship on the basis of the natural person(s) who created the work (although, according to both directives, the author may also be a legal person where national legislation so permits). Moreover, in Sweden and in many other countries around the world, copyright privileges include rights of attribution and association and rights of integrity (commonly referred to as “moral rights”). Moral rights are based on the notion that the work is an extension of the author’s personality and, hence, the mere existence of these rights strongly implies that copyright protection requires human intellectual authorship.

 

In conclusion, as AI systems lack the human attributes required by Swedish and EU copyright law, AI-generated works are not eligible for copyright protection.

 

However, if a natural person is directly implicated in the creative process by giving instructions to the AI system to modify the generated result and/or by manually modifying the generated result, it should most likely be considered an expression of the natural person’s creative abilities and, hence, the work should be eligible for copyright protection. Under such circumstances the AI system may be considered a tool in the hands of a human user. In addition, certain rights neighboring to copyright may possibly arise when an AI system autonomously generates a product. For instance, if an AI system is engaged to create a recording of sound and/or moving images, or to generate a catalogue, a database or similar compilation, such products may sometimes be protected regardless of human authorship or originality. That said, in the absence of explicit rules on the protection of AI generated results, it is likely that such results are often unprotected under the current IP laws of many countries.

 

Assuming that AI generated works are not eligible for copyright protection under current Swedish and EU copyright law, it should be assessed whether there actually is a need to protect such works and, if so, how such protection should be defined and constructed.

 

From an economic point of view, investments in AI are considerable. These investments include development of technologies for the creation of works. One of the purposes of copyright is to encourage the creation of works. Even though there seems to be a lack of empirical evidence supporting the need to create new property rights in the field of AI, recent and evidence-based data indicates the great importance of IP to creativity, innovation and economic growth (cf. EUIPO, “Intellectual property rights and firm performance in Europe: an economic analysis”, Firm-Level Analysis Report, June 2015). Accordingly, if creations generated through AI are desirable, protecting such creations should be equally desirable.

 

In light of the above, considering that the vast majority of IP experts from most industrial countries are seemingly unwilling to afford (genuine) copyright protection to AI generated works (cf. the Resolution “Copyright in artificially generated works” adopted at the AIPPI World Congress London in September 2019), one may consider introducing new sui generis neighboring rights to encourage continued AI research and development. Such a model would respect the humanist approach to copyright law but would nevertheless incentivize future AI investments. The new rights could have the same scope as the rights of reproduction and making available to the public provided for in Swedish and EU copyright law. The new rules could also be subject to the already existing provisions on exceptions and limitations. That said, the author of this article contends that any new sui generis neighboring rights to AI generated works should only be given a limited term of protection and not be disproportionately prioritized at the expense of human authorship, competition and public access to information and culture. Hence, in a world where millions of works can be created at the push of a button, the well-known risks of excessive monopolies should be taken into account.

 

A related question concerns ownership. Who should be the first owner of the IP rights in AI generated works (assuming that such rights are introduced)? Should the rights reside with the AI system developer(s), with the owner of the AI machine or with the end user of the AI system? Some authors (including the author of this article) would prefer a solution inspired by the US “work made for hire” doctrine, according to which the person or entity that orders or initiates the work is entitled to the copyright in the work (cf. Shlomit Yanisky-Ravid, Generating Rembrandt: Artificial Intelligence, Copyright, and Accountability in the 3A Era—The Human-Like Authors are Already Here—A New Model, 2017 Mich. St. L. Rev. 659 (2017)). Such a model would essentially view AI systems as creative employees or subcontractors working for their users. The model would offer an important exception to the general rule that copyright protection rests with the author, who, in the case of AI generated works, would be the AI machine. It would encourage further investments in AI technology, as the IP rights would normally vest in the commercial actor that takes the financial risk of buying or licensing the AI system to produce a specific result.

Applying this model to AI generated works would also facilitate the imposition of accountability on the user to avoid damages and infringements of third party rights. Hence, preferably, the user would be entitled to IP rights as well as accountability regarding the works generated by the AI system.

 

3.2 Inventions

 

It goes without saying that actions and capabilities like learning, logic, reasoning, perception, communication and creativity are extremely useful in inventive processes. AI systems possess such abilities. Even though today’s ANI systems are not capable of replicating the full depth and breadth of human skills and cognition, AI’s abilities are already being widely used to generate “inventive” ideas and solutions that would otherwise be impossible through human inventiveness alone. A few examples are Stephen Thaler’s “Creativity Machine”, which can generate new ideas through artificial neural networks, John Koza’s “Invention Machine”, which is based on genetic programming, i.e. modelled after the process of biological evolution, and IBM’s supercomputer “Watson”, which combines an architecture of logical deduction with access to massive databases containing knowledge and expertise to generate “novel, non-obvious and useful ideas” (cf. Abbott, “I Think, Therefore I Invent: Creative Computers and the Future of Patent Law”, B.C.L. Rev. 57(4), 1079, 28 September 2016). Many experts accept that some results generated by these AI systems, including several technical solutions achieved with practically no human guidance, meet the traditional criteria for patentability, i.e. that they are new and non-obvious to a “person skilled in the art”. Additional AI research and development, particularly in algorithm design, increases the probability that AI systems will invent autonomously within the foreseeable future.
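
As a technical aside, the evolutionary approach behind systems such as Koza’s Invention Machine can be illustrated, in heavily simplified form, by a toy genetic algorithm (a simpler relative of genetic programming, written in Python; the target string and all parameters below are arbitrary illustrative choices with no connection to the actual Invention Machine). Candidate solutions are generated, scored, selected and mutated over successive generations, so that a usable result emerges without any human having conceived it directly.

    import random

    random.seed(0)
    TARGET = "technical effect"
    ALPHABET = "abcdefghijklmnopqrstuvwxyz "

    def fitness(candidate):
        # Score a candidate by the number of characters matching the target.
        return sum(a == b for a, b in zip(candidate, TARGET))

    def mutate(candidate):
        # Change one randomly chosen character.
        i = random.randrange(len(candidate))
        return candidate[:i] + random.choice(ALPHABET) + candidate[i + 1:]

    population = ["".join(random.choice(ALPHABET) for _ in TARGET) for _ in range(100)]
    for generation in range(1000):
        population.sort(key=fitness, reverse=True)
        if population[0] == TARGET:
            break
        survivors = population[:20]                      # selection
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(80)]    # variation
    print(generation, population[0])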

From a contemporary patent law perspective there is a clear difference between AI-assisted invention, on the one hand, and autonomous AI invention, on the other. Under Swedish and EU patent law, invention is considered a human activity. For instance, hitherto it is not permitted to designate AI systems as inventors in patent applications. This principle was recently confirmed by the EPO when it rejected an attempt to register an AI system, “DABUS”, as an official inventor. According to the EPO, the “EPC does not provide for non-persons, i.e. neither natural nor legal persons, as applicant, inventor or in any other role in the patent grant proceedings”. As explained by the EPO, “AI systems or machines have at present no rights because they have no legal personality comparable to natural or legal persons. Legal personality is assigned to a natural person as a consequence of their being human, and to a legal person based on legal fiction. Where non-natural persons are concerned, legal personality is only given on the basis of legal fictions. These legal fictions are either directly created by legislation, or developed through consistent jurisprudence establishing such a legal fiction. It follows that AI systems or machines cannot have rights that come from being an inventor, such as the right to be mentioned as the inventor or to be designated as an inventor in the patent application”. As a consequence, as “AI systems or machines cannot have any legal title over their output which could be transferred by operation of law and agreement … the owner of an AI system or machine cannot be considered to be a successor in title within the meaning of Article 60(1) EPC”. Moreover, according to the EPO, “[t]he legislative history shows that the legislators of the EPC were in agreement that the term “inventor” refers to a natural person only” (cf. the EPO’s decision of 27 January 2020 in the matter of application EP 18 275 163 (appealed)).

 

Accordingly, under current patent law, a patent registration applicant is tasked with identifying and disclosing one or more humans that are responsible, wholly or partially, for the intellectual and creative conception of the invention, i.e. natural persons that are inventors. According to established case law, to qualify as an inventor or at least a joint inventor, one must contribute independently and intellectually to the finalized invention. In general, such contribution must express innovative technical problem solving and constitute a part of the inventive step. The mere desire for a final solution to a problem, or a mere suggestion or instruction to solve a problem, will not in itself contribute to a new invention and will thus not constitute grounds for inventorship. As a consequence, if an invention were an original creation of an AI system, with no or insignificant human involvement in the creative conception of the finalized invention, it would be ineligible for patent protection.

 

It is debatable whether current patent legislation should keep or abolish the requirement for a human inventor. Some authors believe that traditional patent law is irrelevant, inefficient and inapplicable to AI generated inventions and that such inventions should not be patentable at all, while recognizing other tools that can achieve the same ends (cf. Yanisky-Ravid, Shlomit and Liu, Xiaoqiong (Jackie), When Artificial Intelligence Systems Produce Inventions: The 3A Era and an Alternative Model for Patent Law (March 1, 2017). 39 Cardozo Law Review, 2215-2263 (2018)). Others argue that patent rights to AI-generated inventions would accelerate innovation and enable developments that would otherwise be unachievable (cf. Abbott, supra, and Fraser, Erica, Computers as Inventors – Legal and Policy Implications of Artificial Intelligence on Patent Law, (2016) 13:3 SCRIPTed 305). Still others fear that granting patent rights to AI-generated inventions would stifle human invention, as human intelligence and creativity would be supplanted by superior AI systems. Evaluating and balancing these competing views is indeed a difficult task. While it may be impossible to find a “perfect” solution that satisfies all legitimate interests and objectives, the best alternative could perhaps be some moderate changes in the patent system, seeing that outdated patent law would most likely result in negative effects on technology. For instance, instead of maintaining the view that AI-generated inventions should never be eligible for patent protection, one could consider raising the patentability standard for AI-generated inventions and/or granting different terms of protection based on the level of human involvement in the inventive process.

 

As regards the patentability of AI-generated inventions, the “person skilled in the art” is another key issue. Under current Swedish and EU patent law, the central condition governing patentability is that the invention involves an inventive step. An invention shall be considered as involving an inventive step if, having regard to the state of the art, it is not obvious to a person skilled in the art.

The GL (G-VII, 4) state:

 

“Thus the question to consider, in relation to any claim defining the invention, is whether before the filing or priority date valid for that claim, having regard to the art known at the time, it would have been obvious to the person skilled in the art to arrive at something falling within the terms of the claim. If so, the claim is not allowable for lack of inventive step. The term "obvious" means that which does not go beyond the normal progress of technology but merely follows plainly or logically from the prior art, i.e. something which does not involve the exercise of any skill or ability beyond that to be expected of the person skilled in the art.”

 

According to established case law and guidelines, the person skilled in the art is presumed to be a skilled practitioner in the relevant field of technology, who is possessed of average knowledge and ability and is aware of what was common general knowledge in the art at the relevant date. He is also presumed to have had access to everything in the “state of the art” and to have had at his disposal the means and capacity for routine work and experimentation which are normal for the field of technology in question. The “person” skilled in the art can in fact also be a team of people with different skills.

 

Hence, arguably, if the use of AI is common practice in the relevant field of technology, the person skilled in the art should mean a person equipped with AI resources. If the law were to be construed this way, it could significantly raise the bar for non-obviousness. That could become a significant issue, particularly in fields where innovation requires management of large data volumes and/or substantial investments in research and experimentation. When AGI (or even superintelligent AI) technologies become prevalent in various industries, perhaps only the most groundbreaking technologies will be patentable, as many inventions would be deemed obvious to a skilled person equipped with relevant AI technology. On the other hand, as AI technologies are already being used in innovative processes and will be employed even more extensively in such processes in the future (cf. above), setting the patentability standard too low (i.e. without regard to available AI resources in the hands of the skilled person) could result in a flood of low-quality patents being granted and in more infringement litigation. Further discussions on these issues are clearly needed.

 

3.3 Designs

 

In the world of designs, thus far AI has perhaps been mostly about optimization and speed. AI systems can analyze vast amounts of data and suggest design adjustments. Once an AI system recognizes a pattern, it can apply the pattern to generate numerous variations in an instant. For instance, in a project called “Nutella Unica,” an AI system was able to use a database of patterns and colors to create seven million different versions of Nutella’s packaging (cf. https://youtu.be/sHYakhyvJps).

 

As with works and inventions (cf. Sections 3.1 and 3.2 above), designs may be produced with the assistance of AI or may be autonomously generated by AI applications.

 

AI assisted designs may be regarded as a variant of computer-aided designs and, hence, they should not pose any specific problems from an IP perspective. However, under current Swedish and EU design law, designs that have been produced autonomously by AI applications are not eligible for design protection. Only natural persons can qualify as designers. This conclusion is supported, inter alia, by the statutory references to the designer and “his successor in title” (Article 1(a) of the Swedish Design Protection Act, Article 5 of the European Designs Directive 98/71/EC and Articles 7 and 14 of the Community Design Regulation (EC) No 6/2002). As emphasized by the EPO, AI systems cannot have successors in title (cf. Section 3.2 above, regarding patent application EP 18 275 163). In addition, Article 17 of the European Designs Directive states that a design protected by a design right registered in a Member State shall also be eligible for copyright protection in that Member State. Article 96 of the Community Design Regulation (EC) No 6/2002 contains similar rules. Seeing that copyright obviously requires a human author (cf. Section 3.1 above), the principle of cumulation of protection, as formulated in Article 17 of the Directive and Article 96 of the Regulation, respectively, would not be applicable or coherent if AI generated designs were eligible for design protection.

 

Hence, in the case of AI generated designs, issues and considerations arise that are similar to those that arise with respect to AI generated works (Section 3.1 above) and AI generated inventions (Section 3.2 above). For example, how should we distinguish between AI assisted designs that are eligible for protection and AI generated designs that are ineligible for protection? What level of human intervention is required, under contemporary law, for a design to be eligible for design protection? Is it desirable to uphold the distinction between human and non-human creativity in the assessment of protectability? Should we afford design protection to autonomously AI generated designs and, if so, under which circumstances? These and other pertinent questions should be discussed and decided with a view to finding the right balance between the interests of rights holders and the public.

 

4. Protection of and access to data

 

Over the last few years, machine learning has emerged as a dominant branch of AI technology. Machine learning is very much dependent on access to big and varied datasets. As stressed by the EC, “without data, there is no AI”, because “[t]he functioning of many AI systems, and the actions and decisions to which they may lead, very much depend on the data set on which the systems have been trained” (White Paper On Artificial Intelligence - A European approach to excellence and trust (COM(2020) 65 final)).

 

The shift towards online activities, including the “Internet of Things”, has created a huge volume of easily accessible data that is cheap to collect and store. Valuable data sets can be obtained from many different sources, such as internet browsers, social media sites, smartphone apps, cameras, cars and other connected devices. In practice, information is often collected in connection with the use of products and services. For instance, it is no secret that Netflix has become very successful by collecting “big data” from its 151 million subscribers and implementing data analytics models to discover customer behaviour and buying patterns.

 

Seeing that data availability is a key driver of developments in AI, policymakers ought to ensure that the law allows a fair balance to be struck between data access rights, on the one hand, and data protection, on the other. Even though access to data matters greatly for the development of AI, protective rules will also be necessary to incentivize data production and to protect individuals and enterprises from illicit exploitation of sensitive information.

 

Exclusive or proprietary “rights” to information as such are not recognized under current Swedish or European IP law. Even so, the rules on copyright, sui generis database rights and trade secrets may prevent collection of and/or further exploitation of data.

 

4.1 Copyright protection

 

Copyright protection attaches to expressions (e.g. texts or pictures) that meet the originality requirement (cf. Section 3.1 above). Copyright protection cannot be granted to pure information, ideas, procedures, methods of operation or mathematical concepts as such. Conceivably, therefore, the big sets of data that are nowadays being collected and processed within the context of AI analysis will rarely be protected by copyright. Some authors draw a distinction between “data” and the “semantic content” being carried by the data, while arguing that only the semantic content (e.g. books, music, film and news articles), and not the data, may be granted copyright protection (cf. Nestor Duch-Brown, Bertin Martens and Frank Mueller-Langer, The economics of ownership, access and trade in digital data; Digital Economy Working Paper 2017-01; JRC Technical Reports, p. 8). Similarly, to the extent protected works (e.g. drawings) are used to train an AI system, it is also important to distinguish between a work as such, on the one hand, and information about the work, on the other. Feeding an algorithm with data does not necessarily involve reproduction of the work. That said, in some situations it may of course be difficult to distinguish non-proprietary digital information about a work, on the one hand, from an altered or adapted digital version of that work, on the other. From a copyright enforcement perspective, an adequate and sufficient comparison between two clusters of digital data will only be possible on the semantic (human) level, as it will ultimately be up to one or more human judges (assisted by human technical experts, where necessary) to assess whether an infringement has occurred.
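
The distinction between a work as such and information about the work can be made tangible with a trivial sketch (in Python; it is purely conceptual and is, of course, no substitute for a legal assessment of whether a particular processing step involves reproduction). Statistics derived from a text can be retained and fed to an algorithm even though the expressive text itself is not kept.

    from collections import Counter

    work = "the quick brown fox jumps over the lazy dog"   # stand-in for a protected text

    features = Counter(work.split())   # information about the work (word frequencies)
    del work                           # the expression itself is not retained

    print(features.most_common(3))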

 

The data collection software being used in AI analysis contexts is unlikely to select or arrange the collected data in a way that would meet the originality criterion (cf. Gervais, Daniel, Exploring the Interfaces Between Big Data and Intellectual Property Law, 10 (2019) JIPITEC 22). Hence, even though a compilation of data will be defined as a “database” under the Database Directive 96/9/EC, provided that the compilation is “a collection of independent works, data or other materials arranged in a systematic or methodical way and individually accessible by electronic or other means”, the databases created through data collection software will rarely be protected by copyright. Instead, the collector may have to rely on sui generis database rights (cf. Section 4.2 below) and/or trade secret protection (cf. Section 4.3 below) to prevent unauthorized access to and/or reuse of the information thus assembled.

 

4.2 Sui generis protection of databases

 

In Swedish and EU law, there is a sui generis right in databases. In essence, although data as such are not protected by proprietary rights, the maker of a protected database (or his successor in title) has a right to prevent extraction and/or re-utilization of the whole or of a substantial part of the contents of the database.

 

The sui generis right is not dependent on originality. According to the Database Directive, sui generis protection requires that the database is a result of a substantial investment in either the obtaining, verification or presentation of the contents of the database. In Sweden, the requirements are lower. Under the Swedish Copyright Act, a data compilation will be protected: (i) if it contains “a large number of information items”; or (ii) if the compilation is the result of a significant investment. While it is debatable whether Swedish law is compliant with the Database Directive in this regard, the Swedish courts have thus far applied the statutory law according to its wording. For instance, in Case T 15952-11, the Gothenburg District Court ruled that the scope of the contents of two databases was such that the databases were protected “already on this ground”. The Court of Appeal for Western Sweden shared this principal view in Case T 3375-13. Hence, hitherto database makers have enjoyed a relatively strong degree of protection under Swedish law.

The term “substantial investment”, as used in the Database Directive, refers to the creation of the database as such. As emphasized by the CJEU, the purpose of the protection through the sui generis right “is to promote the establishment of storage and processing systems for existing information and not the creation of materials capable of being collected subsequently in a database” (the CJEU in Case C-338/02 (Fixtures Marketing), paragraph 24). Thus, regarding collection of data, only the investments into obtaining the contents of a database will be relevant, whereas investments into the creation of materials are irrelevant. Consequently, the outputs generated through AI analysis of already collected data may not be protected by the sui generis right, as machine-generated data is arguably “created” and not resulting from substantial investments in the obtaining of the data. Nonetheless, “many cases of sensor- or other machine generated data should be covered by the sui generis right on the condition that the investments into measuring or otherwise obtaining verifying and presenting the data were substantial” (Leistner, Matthias, Big Data and the EU Database Directive 96/9/EC: Current Law and Potential for Reform (September 7, 2018), p. 2). Moreover, as mentioned above, current Swedish law seeks to protect any large compilation of data from unauthorized extraction and/or reuse, regardless of the investments made in the creation of the compilation.

 

In principle, when a database is protected by the sui generis right, any temporary or permanent extraction and/or re-utilization of a substantial part of the data would need permission from the rightholder, unless an exception applies. Consequently, the collection of commercial and/or structured information from, e.g. publicly available websites or other databases may be prohibited in the absence of rightholder authorization.

 

To avoid this obstacle, data analysts may wish to explore the possibilities of using applications where the “code comes to the data”, and not the classic model of the data having to find the code. This is because, arguably, “analyses whereby the ‘code comes to the data’ in order to generate new information will not lead to any ‘extraction’ since there will be no ‘permanent or temporary transfer of all or a substantial part of the contents of a database to another medium’” (Drexl, Josef, Designing Competitive Markets for Industrial Data - Between Propertisation and Access (October 31, 2016). Max Planck Institute for Innovation & Competition Research Paper No. 16-13, p. 21-22). In addition, Articles 3 and 4 of the recently adopted Directive (EU) 2019/790 on Copyright in the Digital Single Market (the “DSM Directive”) may bring some good news for analysts involved in text and data mining (“TDM”), defined in the DSM Directive as “any automated analytical technique aimed at analysing text and data in digital form in order to generate information which includes but is not limited to patterns, trends and correlations”.
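
The contrast drawn by Drexl can be sketched as follows (in Python; the data and function names are invented for illustration). In the classic model the analyst obtains a copy of the database contents, which is the kind of transfer that may amount to an “extraction”; in the “code comes to the data” model the analysis function is executed where the data resides, and only the derived result is communicated to the analyst.

    database = [{"region": "A", "sales": 120},
                {"region": "B", "sales": 95},
                {"region": "A", "sales": 150}]

    # Classic model: the contents are transferred to the analyst's environment.
    local_copy = list(database)

    # "Code comes to the data": the data holder runs the analyst's function
    # and returns only the derived information.
    def average_sales(rows):
        return sum(r["sales"] for r in rows) / len(rows)

    result = average_sales(database)   # executed where the data resides
    print(result)                      # only this aggregate leaves the database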

 

Article 3 of the DSM Directive allows TDM by research organizations and cultural heritage institutions having legal access to works or databases, for scientific research. Other entities (e.g. private companies) may, according to Article 4, reproduce and extract lawfully accessible works and other materials for the purposes of TDM, provided that such use has not been expressly reserved by the rightholders in an appropriate manner. The exceptions under Articles 3 and 4 relate to both copyright and database sui generis rights. As just mentioned, however, a rightholder may “in an appropriate manner” oppose TDM conducted by commercial entities under Article 4. Hence, it remains to be seen whether Article 4 will have any significant positive effects on private companies that depend on TDM in AI related contexts.

 

4.3 Trade secret protection and de facto control

 

In comparison to copyrights and sui generis database rights, trade secrets protection has the advantage of protecting the specific data as such. The TSD and the TSA thus protect the data holder from unlawful acquisition, use or disclosure (“misappropriation”) of any data that qualifies as a trade secret. Misappropriation of trade secrets is sanctioned by rules on, inter alia, injunctions and damages.

However, as explained above (Section 2.3), a piece of information will qualify as a trade secret only if it satisfies three cumulative conditions. It is sometimes difficult to assess whether all requirements are met. For instance, trade secrets protection requires a causal link between the secrecy of the data and its commercial value. In the context of big data, an individual piece of information may be rather unimportant, but great value may arise from correlations with other data. In addition, it may sometimes be difficult to fulfil the requirements that the information be kept secret by the holder and not be readily accessible to other persons. This may be particularly difficult in respect of data produced by connected devices, i.e. by sensors attached to smart products such as cars. For instance, when a car transmits information about, e.g. traffic conditions, the same information may be sent by other cars, to other receivers. Moreover, in the context of connected devices, information may be used by many actors in the dynamic value networks that characterize the data economy. When data is generated in a network of different entities connected through a value network, it may be very difficult to allocate protection to a single entity controlling the secret (cf. Drexl, Josef, supra).

 

These difficulties aside, the overall protection offered by a combination of copyrights, sui generis database rights and trade secrets protection may of course be sufficient to prevent unauthorized access to, and exploitation of, data in many situations. In addition, and perhaps most importantly, contractual arrangements and technical access restrictions may be used to create de facto control over valuable information. The key policy question is to what extent such control is desirable from society’s point of view (for more on this issue, see Nestor Duch-Brown, Bertin Martens and Frank Mueller-Langer, The economics of ownership, access and trade in digital data; Digital Economy Working Paper 2017-01; JRC Technical Reports).

 

5. AI and trademark law

 

The basic purpose of a trademark is to guarantee the identity of the origin of the trademarked product or service to the consumer or ultimate user. This essential function is also a prerequisite for trademark protection, as trademarks may only consist of signs that are capable of “distinguishing the goods or services of one undertaking from those of other undertakings” (Article 3(a) of the Trademark Directive (EU) 2015/2436, Article 4(a) of the Trademark Regulation (EU) 2017/1001 and Chapter 1, Articles 4 and 5, of the Swedish Trademarks Act).

Although the basic function of a trademark is to identify commercial origin, a trademark may also serve additional purposes, all of which are protected by EU and Swedish trademark law. A trademark owner may prevent use by a third party that affects or is liable to affect any of the functions of the trademark. According to the CJEU’s jurisprudence “[t]hese functions include not only the essential function of the trade mark, which is to guarantee to consumers the origin of the goods or services, but also its other functions, in particular that of guaranteeing the quality of the goods or services in question and those of communication, investment or advertising.” Hence, the owner “is entitled to prevent the use by a third party … even where such use is not capable of jeopardising the essential function of the mark, which is to indicate the origin of the goods or services, provided that such use affects or is liable to affect one of the other functions of the mark.” (Case C-487/07 (L’Oréal), paragraphs 58 and 65)

 

A negative impact on any of the functions described by the CJEU, i.e. a trademark infringement, obviously requires interference with cognitive processing. A trademark would hardly serve any purpose without the deep-rooted tendency of the human mind to proceed by association. For instance, when the CJEU defines the “investment function” as the use of the mark by its proprietor “to acquire or preserve a reputation capable of attracting consumers and retaining their loyalty” (Case C-323/09 (Interflora), paragraph 60), the CJEU apparently refers to the fact that a trademark activates associations in the consumer’s mind. Similarly, when, e.g. the Trademark Directive protects a trademark from use that “takes unfair advantage of, or is detrimental to, the distinctive character or the repute of the trade mark” (Article 10.2(c)), the law assumes that the trademark triggers notions and emotions in the mind of the consumer. As explained by the CJEU, the advantage arising from the use by a third party of a sign similar to a mark with a reputation is unfair “where that party seeks by that use to ride on the coat-tails of the mark with a reputation in order to benefit from the power of attraction, the reputation and the prestige of that mark and to exploit … the marketing effort expended by the proprietor of the mark in order to create and maintain the mark’s image” (Case C-487/07 (L'Oréal), paragraph 50).

 

Positive associations with a trademark thus drive purchase behaviour and positively affect the user’s experience of the trademarked product. It does not matter whether the associations objectively correspond to the “truth”. For example, several blind tests have demonstrated that people prefer Pepsi to Coke until they know what they are drinking, at which point preferences shift to Coke.

 

Hence, trademark protection is premised on a psychological assumption, namely that a trademark has an inherent and/or acquired ability to communicate and trigger mental associations. Trademarks affect thinking, and cognitive science supports this assumption. Consequently, when a court is tasked with an infringement assessment, it must evaluate the overall perception of the compared marks “in the mind of the average consumer” of the goods or services in question (CJEU in Case C-342/97 (Lloyd), paragraph 25). Similarly, the main pieces of EU and Swedish trademark legislation state that “the likelihood of confusion includes the likelihood of association” (see e.g. Article 9.2(b) of the Trademark Regulation). In fact, “the perception of marks in the mind of the average consumer … plays a decisive role in the global appreciation of the likelihood of confusion” (CJEU in Case C-251/95 (Sabel), paragraph 23).

 

Through a series of judgments, the CJEU has also established certain guidelines for assessing the average consumer’s ability to mentally process the impressions and associations conveyed by the trademark(s) at issue. According to established case law, the average consumer is deemed to be reasonably well informed and reasonably observant and circumspect (see, e.g. the CJEU in Case C-299/99 (Philips)). Trademark law also assumes that the average consumer only rarely has the chance to make a direct comparison between the different marks and must instead place his or her trust in an imperfect recollection of them. Furthermore, the average consumer’s level of attention is assumed to vary depending on the category of goods or services concerned (see, e.g. the CJEU in Case C-342/97 (Lloyd), paragraph 26). In summary, according to EU and Swedish trademark law, the average consumer is (or is represented by) a natural person who, as a general rule, is moderately attentive, somewhat susceptible to manipulation and sometimes not even aware of the actual reasons for his or her decision-making.

 

But what happens when the natural person is replaced by an AI system?

Today, AI systems are already being employed on a wide scale to reduce human involvement in product suggestion and purchasing processes. For instance, Amazon’s website (www.amazon.in) employs AI software to recommend products based on the user’s browsing and purchase history. Sophisticated AI products, such as several Google home devices, are programmed to interact with humans, and the systems are getting better and better at understanding human emotions, desires and cultural context. Some products, such as Amazon’s “Echo”, are run by voice recognition software and make product suggestions to consumers based on, e.g. past purchase behaviour. Various replenishing services, powered by AI, automatically re-order consumable items, e.g. ink cartridges and coffee pods, to ensure that the end user does not run out.
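As a purely hypothetical illustration of such a replenishing service, the short sketch below (in Python) re-orders an item when the estimated remaining supply would not last until a new delivery could arrive; the figures and thresholds are invented for the example and do not describe any particular product.

```python
# Hypothetical replenishment rule: re-order before the consumable runs out.

def should_reorder(daily_usage: float, units_left: float, lead_time_days: int = 3) -> bool:
    """Return True if the stock would run out before a new delivery arrives."""
    days_left = units_left / daily_usage if daily_usage > 0 else float("inf")
    return days_left <= lead_time_days

if should_reorder(daily_usage=0.8, units_left=2.0):  # e.g. coffee pods
    print("placing automatic re-order")
```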

 

Hence, AI systems are already assisting, and sometimes replacing, human purchasing decision-making. The trend is upward.

 

AI systems do not make purchasing decisions as a direct or immediate response to human associations, emotions and vague memories triggered by trademarks. AI systems (arguably) have no emotions, but they have perfect memory. They do not get confused, at least not in the human sense contemplated in trademark law. AI systems objectively analyze vast amounts of data to optimize decision-making and to take adequate action. They can perfectly recollect commercial origin and they are not impressed by fancy commercials. Compared to humans, AI systems are super-rational. Hence, to convince an AI system in the purchasing process, it will rarely be sufficient to use a certain trademark. Information about purchase history, price, quality, availability, delivery, consumer reviews, official recommendations and other data can be collected and analyzed by AI in an instant, and objectively weighed together to make the most rational purchase decision, with little or no human involvement. Simply put, AI systems do not suffer from the human “deficiencies” that current trademark law takes as a reference point. In summary, it may conceivably take another AI system, and not a trademark, to influence an AI system to order or recommend a product or service.
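To make the contrast concrete, the following simplified and entirely hypothetical sketch (in Python) shows how such a purchasing agent might rank competing offers on measurable parameters only. The brands, weights and figures are invented for illustration; the trademark itself is recorded but given no weight in the scoring.

```python
# Hypothetical sketch of a "super-rational" purchasing agent: offers are
# ranked purely on objective parameters; the brand name is not scored.

from dataclasses import dataclass

@dataclass
class Offer:
    brand: str             # the trademark - noted, but ignored by score()
    price: float           # EUR
    delivery_days: int
    review_score: float    # 0-5

def score(offer: Offer) -> float:
    """Weighted sum of objective parameters (weights are illustrative only)."""
    return (
        -0.5 * offer.price
        - 2.0 * offer.delivery_days
        + 10.0 * offer.review_score
    )

offers = [
    Offer("WellKnownMark", price=34.0, delivery_days=3, review_score=4.2),
    Offer("UnknownMark", price=29.0, delivery_days=2, review_score=4.4),
]

best = max(offers, key=score)
print(best.brand)  # the agent may well pick the lesser-known brand
```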

 

Where does this leave trademark law? The existing rules, including the doctrine of trademark functions, will serve their purpose as long as humans regard trademarks as important carriers of information, values and emotions. For a human, a trademark may serve different purposes before, during or after the purchase of a product or service. Humans consume for many reasons, not only to satisfy physical and material needs. Humans attach substantial value to features that individualize them, and trademarks are used as a means of self-expression, self-realization or to satisfy other emotional desires. In parallel, courts and other policymakers will most likely have to consider new rules, concepts and principles to ensure that trademark law does not become irrelevant in certain situations, as the use of AI drastically changes the rules of the game for the interaction between businesses and consumers.

6. Concluding remarks

Technological advances in the AI field raise many IP questions, some of which challenge the very essence of current IP law. Today, when Swedish and European courts and other authorities apply “intellectual” property law, they are typically protecting creations of the human intellect (such as works or inventions) or items that influence human cognition and behaviour (such as trademarks). When IP protection is sought, the traditional legal solution is to look for the human behind the artificial process, even when he or she does not exist. Arguably, this solution is untenable in the long run. Given how fast AI is evolving, and seeing that the main purpose of IP law is to encourage the creation and distribution of a wide variety of goods to the benefit of consumers, more research is needed to ensure that the IP legal framework will serve its purpose in the new AI era.