Chair for Law and Artificial Intelligence

Prof. Dr. Michèle Finck, LL.M.

CZS Chair for Law and Artificial Intelligence

Geschwister-Scholl-Platz, Neue Aula, Room 136

72074 Tübingen

michele.finck@uni-tuebingen.de

The Chair for Law and Artificial Intelligence at the University of Tübingen carries out research at the intersection of law and artificial intelligence. Our team forms part of the broader Tübingen research environment focused on AI and its interdisciplinary implications. As such, we collaborate closely with the CZS Institute for Artificial Intelligence and Law as well as the Cluster of Excellence “Machine Learning in Science”.

Contact:
Friederike Gruber

Geschwister-Scholl-Platz, Neue Aula, Room 062

72074 Tübingen

friederike.gruber@uni-tuebingen.de

+49 7071 29 76581 (Monday, Wednesday-Friday 7:30-12:30)


News

  • Research Workshop on Digital Well-Being

    On 18 April 2024, the Chair for Law and Artificial Intelligence held an interdisciplinary Research Workshop on Digital Well-Being at Schloss Hohentübingen. Experts from various disciplines offered invaluable insights into the relationship between well-being and digital activities and engaged in lively and stimulating discussions with the attending researchers.

  • Symposium in Oxford

    Michèle Finck took part in a symposium in Oxford that honored the career of Stephen Weatherill. The symposium engaged with the future of the EU internal market and its regulation, and also marked the publication of the book “The Internal Market Ideal: Essays in Honour of Stephen Weatherill”, to which Michèle contributed a chapter on “The Maturation of European Data Law: From Fundamental Rights to Economic Rights”.

  • Medical Liability: Use of AI in Medicine

    Last week, Henrik Nolte gave an online lecture on “Medical Liability: Use of AI in Medicine” at the medical faculty of the University of Tübingen. The lecture is part of the BMBF-funded “TüKITZMed” project.

  • Vacancy for a Research Associate

    The Chair for Law and Artificial Intelligence at the University of Tübingen conducts research on legal issues related to artificial intelligence.

    There is currently one vacancy for a Research Associate in Law and Artificial Intelligence (m/f/d).

  • PhD Summer School on Artificial Intelligence and Law

    The Chair for Law and Artificial Intelligence at the University of Tübingen is hosting an International PhD Summer School in Artificial Intelligence and EU Law from 13 to 17 May 2024.

  • Seminar on the EU Artificial Intelligence Act

    In this seminar, the participants will study the European Union's approach to regulating Artificial Intelligence, focusing on the newly enacted Artificial Intelligence Act ("the AIA"). The AIA is a harmonised and horizontal legal framework that creates different sets of rules for different classes of AI.

  • "Who Controls Artificial Intelligence?"

    On October 20, 2023, a panel discussion on the topic "Who Controls Artificial Intelligence?" took place at the Uhlandsaal of the Museumsgesellschaft in Tübingen.

  • Tommaso Fia joins chair as Postdoc

    We are thrilled to announce that Tommaso Fia has joined the Chair for Law and Artificial Intelligence at the University of Tübingen! With his expertise at the intersection of law and data governance, Tommaso brings a fresh and dynamic perspective to our team. Please join us in extending a warm welcome to Tommaso as we embark on this exciting new chapter together!

  • Creation of the CZS Institute for AI and Law

    The newly created CZS Institute for Artificial Intelligence and Law is an interdisciplinary research institute at the University of Tübingen. It is devoted to research on the reciprocal implications of artificial intelligence and law.

Current Projects

  • The EU Artificial Intelligence Act - A Commentary to the Provisions Laying Down Harmonised Rules on Artificial Intelligence

  • Summer School on Artificial Intelligence and Law

  • Fairness in Market Instrumental Data Law

  • Cybersecurity of AI-based Medical Devices

  • Ecological Concerns in Legal Data Regulation

  • Edge Computing and the Principle of Accountability

Michèle Finck is currently writing a book on the EU Artificial Intelligence Act that will be published by Oxford University Press in 2025. It will be entitled “The EU Artificial Intelligence Act. A Commentary to the Provisions Laying Down Harmonised Rules on Artificial Intelligence”. The book will comprise two parts. The first part will provide a contextual analysis of the AI Act that engages with topics such as AI and its most pertinent regulatory implications, a regulatory theory perspective on AI, and a broader analysis of the key themes of the AI Act. The second part is an article-by-article analysis of the final text that examines each of its provisions in detail.

The Chair for Law and Artificial Intelligence at the University of Tübingen is hosting an International PhD Summer School in Artificial Intelligence and EU Law from 13 to 17 May 2024. The Summer School provides a platform for PhD candidates to engage in dynamic discussions and showcase their research. It will take place at Tübingen Castle and will feature expert lectures from leading scholars and practitioners, participant presentations, and social events with a view to fostering in-depth discussions amongst participants. The event will be embedded in the lively law and AI communities in Tübingen, which is home to excellent computer science and law faculties, the newly founded CZS Institute for Artificial Intelligence and Law, the Cluster of Excellence "Machine Learning – New Perspectives for Science," the Tübingen AI Center, the first Chair for Law and Artificial Intelligence in Germany, and the Max Planck Institute for Intelligent Systems, among others.

What is the role of "fairness" in EU data law? Tommaso Fia's research appraises how ‘fairness’ comes into play as a core principle of EU regulation of data markets. More specifically, he interrogates how fairness has evolved as a legal principle in adjacent bodies of law (i.e. data protection and platform law), unveiling its nature, functions and content for market instrumental data governance. Here fairness embodies a principle of substantive justice, particularly evident in concerns about unequal data transactions (commutative justice) and, to a lesser extent, about the uneven distribution of data access and use in society (distributive justice). Normativity furnishes yet another level of complexity: the meaning of fairness depends on how its justice-related features are normatively conceptualised. His enquiry thus moves on to scrutinising the contending readings of fairness that variously emerge from market instrumental data governance. Four perspectives arise: the welfarist approach, the liberal perfectionist one, the political liberal one, and ‘fairness’ as ‘equality of means and outcomes’. Market instrumental data governance and related interpretive and adjudicative practices have the potential to reflect this wealth of understandings, paving the way towards diverse patterns of data access and use in (EU) data markets.

The exposure of AI-supported medical devices to cybersecurity risks has gained significant momentum in recent years. Numerous studies have found that while the healthcare industry is increasingly reaping the benefits of these technologies for patients and society, its products are commensurately susceptible to growing vulnerabilities. Cyberattacks on medical devices have the potential to compromise information security, undermine patient privacy, or put patients’ health or lives at risk. Against this background, Henrik Nolte’s PhD thesis examines the extent to which the European legal framework adequately addresses the complexities associated with the cybersecurity of AI-based medical devices throughout their entire life cycle in order to protect patient rights. He has published an article on this topic together with Dr. Zeynep Streitmüller in the Zeitschrift für das gesamte Medizinprodukterecht (2024).

The consideration of ecological concerns in legal data regulation is warranted from both factual and legal standpoints. In order to fulfil the ecological potential of EU data law, its ecologically relevant legal instruments need to be identified, systematized, and assessed with regard to their effectiveness in comparison to green data governance ideals and legislative objectives. Why should ecological concerns be taken into consideration in EU data law? Which instruments in EU law are ecologically relevant? How can they be systematized? To what extent do they live up to their ecological potential – respectively and in relation to (non-)legal regulatory mechanisms?

Bilge Kaan Güner's doctoral research examines the complex relationship between edge computing technologies in the information and communication technology (ICT) sector and the accountability principle of the General Data Protection Regulation (GDPR). In particular, his research focuses on the unique challenges and dynamics presented by the distributed nature of edge computing networks and their ability to accommodate a diverse range of devices. This investigation includes a critical assessment of both the diversity of stakeholders involved and the multifaceted functionality of these networks, particularly in light of GDPR compliance requirements. With the GDPR now in force for nearly six years, Güner's comparative analysis of these technologies in relation to the Regulation uncovers a critical area of research. His study aims to analyse the nuanced interplay between law, technology and innovation, examine the adequacy of current EU data protection laws in the face of ongoing digital transformations, and provide forward-looking recommendations for policymakers to improve the effectiveness of data protection strategies.

Recent Publications: 

  • The chapter documents the transformation of European Union data law from a legislative ensemble centred primarily around the fundamental right to data protection towards a much broader and more complex area of EU law of which data protection is but one element. Whereas data protection law implements data subjects’ right to data protection by subjecting the processing of personal data to a qualified prohibition, the new legislative proposals and acts encourage the processing of (personal) data. This transforms the latter into an object of economic rights, a tradable commodity that can be sold or donated. This paradigm shift will undoubtedly result in tensions that will keep practitioners, academics, and judges busy for decades to come. Rather than providing a detailed overview of these tensions, this contribution documents the evolution of EU data law through the lens of the internal market and ponders its practical future implications.

  • This symposium analyses European Union (EU) law as a means for both perpetuating commodification processes and potentially mitigating their consequences. This issue framing essay traces the evolutionary trajectory of commodification as a conceptual framework in contemporary intellectual debates, zeroing in on the most prominent theoretical frameworks underpinning its usage. It then relates the evolution of these debates more concretely to the context of the EU as a major institutional forum for the concept’s actualisation. Lastly, it connects these narratives to current conversations on the law’s role in constituting capitalism and consolidating its attendant structural inequalities. In so doing, it also canvasses the contributions that make up this symposium, showing how each enhances the discussion of commodification in the EU context.

  • Cyberattacks on medical devices and facilities can compromise the privacy of patients and, in the worst-case scenario, jeopardize health and lives. Studies on cyberattacks on medical devices indicate a significant increase since the COVID-19 pandemic. Given the critical implications of potential cyber threats, this contribution explores the extent to which current legal frameworks address the challenges in developing and using AI-based medical products. Compared to conventional medical devices, AI-based medical technology opens additional points of attack for cyberattacks due to its special technical characteristics. As the EU legal framework for cybersecurity is currently undergoing substantial changes, this contribution focuses exclusively on the regulatory requirements for cybersecurity that specifically pertain to AI-based medical devices. These requirements arise not only from the Medical Devices Regulation (EU) 2017/745 but also notably from the upcoming EU AI Act.

  • As part of the European Commission's broader data strategy, the Data Governance Act (“DGA”) introduces a new regulatory regime for data intermediaries, which, inter alia, pursues the objective of increasing the competitiveness of the European data economy by bolstering trust in data-sharing mechanisms. Against this backdrop, we introduce data intermediaries and critically examine the DGA's related legal regime by testing its underlying assumptions and highlighting its intrinsic weaknesses and limitations as part of the broader EU data law puzzle. As a result, the paper brings to the fore certain contradictions between the DGA's means and ends. Indeed, due to various questionable assumptions, the DGA imposes requirements that not all data intermediaries can satisfy and entrenches a specific techno-organisational form for data intermediation services that may turn out to be economically non-viable. Consequently, one must wonder whether the DGA's rules on data intermediaries are necessary and proportionate in light of the freedom to conduct a business. We furthermore uncover inconsistencies and loopholes between the DGA, the GDPR, the draft Data Act, and the Digital Markets Act. Overall, while the DGA's underlying efforts are laudable, its precise postulations may hinder the achievement of its underlying objectives due to two main factors: first, its own internal limitations and incoherences, and, second, uncertainties and tensions resulting from its interplay with the broader EU data law framework.

  • Few policy issues will be as defining to the EU’s future as its reaction to environmental decline, on the one hand, and digitalisation, on the other. Whereas the former will shape the (quality of) life and health of humans, animals and plants, the latter will define the future competitiveness of the internal market and relatedly, also societal justice and cohesion. Yet, to date, the interconnections between these issues are rarely made explicit, as evidenced by the European Commission’s current policy agendas on both matters. With this article, we hope to contribute to, ideally, a soon growing conversation about how to effectively bridge environmental protection and digitalisation. Specifically, we examine how EU law shapes the options of using data—the lifeblood of the digital economy—for environmental sustainability purposes, and ponder the impact of on-going legislative reform.

  • Mechanisms to control public power have been developed and shaped around human beings as decision-makers at the centre of the public administration. However, technology is radically changing how public administration is organised, and reliance on Artificial Intelligence is on the rise across all sectors. While carrying the promise of an increasingly efficient administration, automating (parts of) administrative decision-making processes also poses a challenge to our human-centred systems of control of public power. This article focuses on one of these control mechanisms: the duty to give reasons under EU law, a pillar of administrative law designed to enable individuals to challenge decisions and courts to exercise their powers of review. First, it analyses whether the duty to give reasons can be meaningfully applied when EU bodies rely on AI systems to inform their decision-making. Secondly, it examines the added value of secondary law, in particular the data protection rules applicable to EU institutions and the draft EU Artificial Intelligence Act, in complementing and adapting the duty to give reasons to better fulfil its purpose in a (partially) automated administration. This article concludes that the duty to give reasons provides a useful starting point but leaves a number of aspects unclear. While providing important safeguards, neither EU data protection law nor the draft EU Artificial Intelligence Act currently fills these gaps.

  • Existing and planned legislation stipulates various obligations to provide information about machine learning algorithms and their functioning, often interpreted as obligations to “explain”. Many researchers suggest using post-hoc explanation algorithms for this purpose. In this paper, we combine legal, philosophical and technical arguments to show that post-hoc explanation algorithms are unsuitable to achieve the law’s objectives. Indeed, most situations where explanations are requested are adversarial, meaning that the explanation provider and receiver have opposing interests and incentives, so that the provider might manipulate the explanation for her own ends. We show that this fundamental conflict cannot be resolved because of the high degree of ambiguity of post-hoc explanations in realistic application scenarios. As a consequence, post-hoc explanation algorithms are unsuitable to achieve the transparency objectives inherent to the legal norms. Instead, there is a need to more explicitly discuss the objectives underlying “explainability” obligations as these can often be better achieved through other mechanisms. There is an urgent need for a more open and honest discussion regarding the potential and limitations of post-hoc explanations in adversarial contexts, in particular in light of the current negotiations of the European Union’s draft Artificial Intelligence Act.