by Prof Enrico Bonadio, Prof Eduardo Alonso (City St. George’s University of London), and Mr Vansh Tayal (Symbiosis Law School, Pune, India)

Artificial intelligence (AI) has been making inroads into the inventive process, from drug discovery to engineered systems and beyond. The widely reported DABUS project is notorious for igniting a global legal and philosophical debate over whether AI can be recognised as an inventor under patent law. Yet patent applications for the DABUS inventions were rejected in several jurisdictions, including the UK, the US, Germany, and Australia, as well as at the European Patent Office, primarily because patent laws require that an inventor be a natural person, not a machine or an AI system.

A less researched aspect of the intersection between AI inventions and patent regimes is the disclosure requirement. As is known, patent laws around the world require an invention to be disclosed in a manner sufficiently clear and complete for a person skilled in the art to reproduce it, as mandated by Article 29 of the WTO/TRIPS Agreement. Yet many AI models, such as neural networks, operate in ways that are not fully understood even by their creators. This is the so-called ‘black box’ issue: while a system’s inputs and outputs can be observed, its internal logic or decision-making process is often inscrutable and cannot be described in human-understandable terms.

This lack of explainability often makes it impossible to provide the detailed, step-by-step descriptions required for patent disclosure. IBM, for example, observes that “the input and output may be known … but the logic in between is in some respects unknown,” making AI inventions hard to fully disclose. And scholars such as Tabrez Ebrahim note that this lack of transparency – i.e. the difficulty of replicating an AI-constructed outcome – “profoundly … challenges disclosure theory in patent law”.

As AI continues to invent – in medicine, energy, software and beyond – we list and briefly explain here a series of recently proposed ways to reconcile AI’s opacity with the disclosure requirement under patent laws. These proposals may be particularly helpful when filing applications for AI-assisted inventions, which are in principle patentable in several jurisdictions.

Firstly, AI innovators who want to patent AI-assisted outputs might leverage a so-called ZK-Patenting regime: they would commit to the AI model’s architecture, weights, and training data via a cryptographic hash anchored on a tamper-evident blockchain. They would then provide a succinct zero-knowledge proof, such as a zk-SNARK or zk-STARK, demonstrating that running the committed model on the committed data yields the outcomes claimed by the invention, without disclosing the model weights or training data themselves. Anchoring the commitments on a public ledger secures an immutable timestamp and audit trail, establishing that the secret method predates the filing date. In this way, zero-knowledge proofs would satisfy statutory enablement and best-mode requirements, enabling a skilled artisan to verify the invention while preserving the confidentiality of trade-secret AI algorithms.
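To make the commitment step concrete, here is a minimal Python sketch, offered purely as an illustration: the function names are our own assumptions rather than an established ZK-Patenting API, the ledger-anchoring step is left as a comment because it would depend on the chosen blockchain, and the zero-knowledge proof itself (zk-SNARK/zk-STARK generation) is a heavier cryptographic step not shown here.

```python
import hashlib
import json
import time

def sha256_file(path: str) -> str:
    """Stream a file through SHA-256 and return its hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def build_commitment(weights_path: str, data_manifest_path: str,
                     architecture: dict) -> dict:
    """Bind model weights, training-data manifest, and architecture
    into a single digest suitable for anchoring on a public ledger."""
    record = {
        "weights_sha256": sha256_file(weights_path),
        "data_manifest_sha256": sha256_file(data_manifest_path),
        # Canonical JSON so the same architecture always hashes identically.
        "architecture_sha256": hashlib.sha256(
            json.dumps(architecture, sort_keys=True).encode()
        ).hexdigest(),
        "timestamp_utc": int(time.time()),
    }
    record["commitment"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    return record

# Only record["commitment"] would be written to the blockchain; the weights
# and training data never leave the applicant's custody.
```

The point of the design is that the public ledger sees a single opaque digest, while anyone later shown the weights and data can recompute the digest and confirm that exactly those artefacts were committed before the filing date.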

An alternative approach might be to maintain an intensive, version-controlled log of an AI system’s architecture, training-data lineage, and hyperparameter settings, alongside independent algorithmic audits conducted under confidentiality rules like those mandated by the EU AI Act. Applicants would create immutable audit trails for each model update and grant independent auditors or regulatory authorities secure access to repositories and documentation under non-disclosure agreements. Through automated toolchains and manual review, those auditors or authorities would verify the development of the training pipeline and confirm that the promised functionalities, including bias-mitigation measures, actually exist. The procedure could include continuous monitoring, reproducibility testing, and risk assessment under frameworks such as the one provided by the EU AI Act. While demanding only the controlled release of sensitive artefacts, such an algorithmic audit could also help ensure that the black box works ethically and adequately. It would thus complement patent disclosures without exposing the underlying intellectual property to public scrutiny. Moreover, coupling credible audit trails with external expert oversight would enhance trust in AI innovation and allow both technical validity and societal risk to be addressed.
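What such an immutable audit trail might look like in practice can be sketched in a few lines of Python. The AuditTrail class below is a hypothetical illustration, not a structure prescribed by the EU AI Act: each log entry is hash-chained to its predecessor, so any retroactive tampering with a recorded model update invalidates every later entry.

```python
import hashlib
import json
import time

class AuditTrail:
    """Append-only, hash-chained log of model updates: each entry commits
    to the previous one, so retroactive edits break the chain."""

    def __init__(self):
        self.entries = []

    def record_update(self, model_version: str, hyperparameters: dict,
                      data_lineage: list, weights_digest: str) -> dict:
        prev_hash = self.entries[-1]["entry_hash"] if self.entries else "0" * 64
        entry = {
            "model_version": model_version,
            "hyperparameters": hyperparameters,
            "data_lineage": data_lineage,      # e.g. dataset snapshot digests
            "weights_digest": weights_digest,
            "timestamp_utc": int(time.time()),
            "prev_hash": prev_hash,
        }
        entry["entry_hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Recompute the chain; returns False if any entry was altered."""
        prev = "0" * 64
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "entry_hash"}
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if e["prev_hash"] != prev or e["entry_hash"] != expected:
                return False
            prev = e["entry_hash"]
        return True
```

An auditor granted confidential access could run verify() over the full log and be satisfied that the recorded training history has not been rewritten, without the log ever being published.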

AI innovators might also adopt model-agnostic Explainable AI (XAI) techniques to explain their systems’ black-box behaviour without revealing the proprietary internal code. This can be accomplished by training a simple surrogate model, such as a decision tree or linear regressor, on the inputs and outputs of the secret AI system: the surrogate approximates the decision boundaries and reveals simple, high-level rules underlying the invention. Complementary methods such as counterfactual explanations could demonstrate how a minute change to specific inputs alters the outputs, while feature-attribution algorithms (e.g., SHAP or LIME) quantify each input feature’s contribution to an output. Patent applicants would then need to provide only limited XAI artefacts, such as surrogate decision paths, prototype input-output pairs, or attribution heatmaps, which would suffice to meet the enablement requirement by demonstrating the causal logic of the invention even while the complete model remains undisclosed.
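As an illustration, the following Python sketch (using scikit-learn) trains a shallow decision-tree surrogate on probe queries to a black box and prints the human-readable rules that could accompany a filing. The proprietary_model function is a deliberately simple stand-in for the secret system, introduced purely for this example; only its input/output behaviour is observed, never its internals.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical stand-in for the proprietary system (normally a black box).
def proprietary_model(X: np.ndarray) -> np.ndarray:
    return ((0.8 * X[:, 0] + 0.2 * X[:, 1]) > 0.5).astype(int)

rng = np.random.default_rng(seed=0)
X_probe = rng.uniform(0.0, 1.0, size=(5000, 2))   # probe inputs
y_probe = proprietary_model(X_probe)              # observed outputs

# Surrogate: a shallow tree approximating the secret decision boundary.
surrogate = DecisionTreeClassifier(max_depth=3).fit(X_probe, y_probe)
fidelity = (surrogate.predict(X_probe) == y_probe).mean()
print(f"Surrogate fidelity on probe set: {fidelity:.3f}")

# High-level rules: the kind of XAI artefact a filing could include.
print(export_text(surrogate, feature_names=["feature_0", "feature_1"]))

# Counterfactual-style illustration: a small input change flips the output.
x = np.array([[0.45, 0.40]])
x_cf = np.array([[0.60, 0.40]])
print(proprietary_model(x), "->", proprietary_model(x_cf))
```

The surrogate’s fidelity score matters here: a filing that reported the surrogate rules together with how faithfully they reproduce the black box’s behaviour would give an examiner a measurable sense of how much of the invention’s logic the disclosure actually captures.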

As a final alternative, under a data-deposit rule, patent applicants might submit core AI materials, such as sanitised training datasets, model weights, or an executable ‘black box’, to the patent office under strict confidentiality. This mechanism would allow examiners, or accredited third-party auditors bound by a non-disclosure agreement, to reproduce and validate the invention without exposing the proprietary code to the public, much like the established deposit of biological material in biotech patent practice. Applicants could also furnish immutable records (e.g., cryptographic hashes or blockchain timestamps) of the datasets, code versions, and parameter settings used to create the invention. Indeed, the EU AI Act already requires providers of high-risk systems to document design choices, data collection, labelling, cleaning, and bias assessments for all training and validation sets. A parallel requirement in patent law might introduce ‘data deposit’ or ‘metadata registration’ obligations: uploading a sanitised schema of the dataset or annotated descriptions of the preprocessing steps. Such provenance tracking would give examiners confidence that the claimed invention can be reproduced from the declared inputs, thus strengthening the patent bargain while allowing a degree of confidentiality.
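A minimal sketch of what such a deposit manifest could contain follows. The schema fields, preprocessing notes, and commit reference are invented placeholders for illustration, not a format any patent office currently prescribes: the idea is simply that the public filing carries a sanitised schema plus digests, while the raw artefacts sit in the confidential deposit.

```python
import hashlib
import json

def file_digest(path: str) -> str:
    """SHA-256 digest of a deposited artefact, for provenance records."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def build_deposit_manifest(dataset_path: str, weights_path: str) -> dict:
    """Assemble metadata an examiner could check against the deposit."""
    return {
        # Sanitised schema: column names and types only, no raw records.
        "dataset_schema": {
            "columns": [
                {"name": "compound_id", "type": "string"},
                {"name": "assay_result", "type": "float"},
            ]
        },
        "preprocessing_steps": [
            "deduplicate records by compound_id",
            "normalise assay_result to [0, 1]",
        ],
        "dataset_sha256": file_digest(dataset_path),
        "weights_sha256": file_digest(weights_path),
        "code_version": "git:3f2a9c1",   # illustrative commit reference
    }

print(json.dumps(build_deposit_manifest("train.csv", "model.bin"), indent=2))
```

Because the digests in the public manifest must match the confidential deposit byte for byte, an examiner can confirm that what was deposited is what the application describes, without the dataset or weights ever becoming public.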

* * *

As artificial intelligence continues to revolutionise industries, the challenge of balancing proprietary innovation with the transparency required by patent law becomes increasingly urgent. This is particularly relevant for AI-assisted inventions, which can be patented in various jurisdictions (as opposed to AI-generated technologies). Indeed, the ‘black box’ issue persists here as a legal and technical barrier, and it may deter patent filings altogether, pushing innovators toward keeping the whole inventive process (and not just a part of it) confidential instead – an outcome which is certainly not beneficial to the public interest.

However, emerging solutions – such as zero-knowledge patenting, independent algorithmic audits, explainable AI techniques, and robust data-deposit frameworks – might offer promising ways forward. By thoughtfully integrating some of these approaches, legislators and patent offices could foster an environment where inventors are incentivised to innovate while still upholding the spirit of openness and knowledge-sharing that underpins the patent system. Ultimately, addressing the black box issue is essential not only for legal clarity but also for building a more trustworthy and collaborative AI-driven future.