by Prof Enrico Bonadio, Prof Eduardo Alonso (City St. George’s University of London), and Mr Vansh Tayal (Symbiosis Law School, Pune, India)
Artificial intelligence (AI) has been making significant inroads into the inventive process, from drug discovery to engineered systems and beyond. The widely reported DABUS project is notorious for igniting a global legal and philosophical debate over whether AI can be recognised as an inventor under patent law. But patent applications for the DABUS inventions were rejected in several jurisdictions, including the UK, the US, Germany, Australia, and at the European Patent Office, primarily because patent laws require that an inventor be a natural person, not a machine or an AI system.
A less researched aspect of AI inventions and their intersection with patent regimes is the disclosure requirement. As is known, patent laws around the world require an invention to be disclosed in a manner clear and complete enough to allow a person skilled in the art to reproduce it, as dictated by Article 29 of the WTO/TRIPS Agreement. Yet many AI models, such as neural networks, operate in ways that are not fully understood even by their creators. This is the so-called ‘black box’ issue: while the inputs and outputs of the system can be observed, the internal logic or decision-making process is often inscrutable and cannot be described in human-understandable terms.
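To make the ‘black box’ point concrete, the sketch below (an illustrative example not drawn from the article, using a toy two-layer network trained on the XOR problem with numpy) shows what an observer can and cannot see. The network’s behaviour, its inputs and outputs, is easy to state, but its internal ‘logic’ is nothing more than arrays of learned floating-point weights that do not translate into a step-by-step, human-readable description of the kind a patent specification requires.

```python
# Minimal illustrative sketch (assumed example): a tiny neural network trained
# on XOR. Inputs and outputs are observable, but the learned parameters are
# opaque numbers with no human-readable, step-by-step logic behind them.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # observable inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # observable XOR outputs

# Two-layer network with a small hidden layer, trained by plain gradient descent.
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(20000):
    h = sigmoid(X @ W1 + b1)        # hidden-layer activations
    out = sigmoid(h @ W2 + b2)      # network predictions
    # Backpropagate the squared error to update the weights.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 1.0 * h.T @ d_out
    b2 -= 1.0 * d_out.sum(axis=0)
    W1 -= 1.0 * X.T @ d_h
    b1 -= 1.0 * d_h.sum(axis=0)

print("Observable behaviour:", np.round(out.ravel(), 2))  # typically close to [0, 1, 1, 0]
print("Internal 'logic' (learned weights):")
print(np.round(W1, 2))  # opaque numbers: nothing here reads as a rule or recipe
```

Even in this deliberately tiny example, the only faithful description of how the system works is the full set of numerical parameters, which is precisely the kind of disclosure that tells a skilled reader very little; real models with billions of parameters compound the problem.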
This lack of explainability often makes it impossible to provide the detailed, step-by-step descriptions required for patent disclosure. IBM, for example, observes that “the input and output may be known … but the logic in between is in some respects unknown,” making AI inventions hard to disclose fully. And scholars such as Tabrez Ebrahim note that this lack of transparency, that is, the difficulty of replicating an AI-constructed outcome, “profoundly … challenges disclosure theory in patent law”.