The DIKE Research Group is pleased to announce an upcoming publication by Simeng Chen, co-authored with Marco Giacalone.
Their article, “How Accountability Mechanisms Shape the Non-Contractual Liability of Generative AI in the EU and China,” will appear in the forthcoming 2026/2 issue of the European Journal of Privacy Law & Technologies (EJPLT).
Overview of the Article
Generative artificial intelligence systems are increasingly capable of producing complex outputs that may cause harm, raising pressing questions about how legal responsibility for that harm should be assigned. Traditional liability frameworks, particularly negligence-based tort law, struggle to accommodate such cases: the opacity, autonomy, and technical complexity of generative AI systems often make it difficult to establish key legal elements such as causation or fault.
In response to these challenges, policymakers are increasingly focusing on accountability mechanisms as a way to address what scholars describe as the emerging “accountability gap” in AI governance. These mechanisms include requirements such as transparency obligations, documentation duties, and human oversight, which aim to make AI systems more traceable and governable.
The article provides a comparative analysis of the European Union and China, examining how accountability mechanisms influence the attribution of non-contractual liability when generative AI causes harm.
The authors argue that accountability is a necessary precondition for effective AI liability regimes. However, its practical impact depends on whether accountability mechanisms generate legally actionable standards and usable evidence for courts and claimants.
Their analysis highlights important differences between the two jurisdictions:
European Union: Accountability duties are increasingly connected to liability rules. Documentation, compliance records, and governance obligations can help provide evidence that may support liability claims.
China: Accountability mechanisms are framed primarily as administrative regulatory tools, so their relationship to civil liability remains more indirect.
By comparing these two governance models, the article sheds light on how different regulatory approaches shape the legal consequences of generative AI harms, and how accountability mechanisms may either strengthen—or fail to strengthen—liability frameworks.
The DIKE Research Group congratulates Simeng Chen and Marco Giacalone on this contribution to the evolving debate on AI governance, accountability, and liability in comparative perspective.