Federica Fedorczyk

Federica Fedorczyk is a Research Fellow at EURA, European Jean Monnet Center of Excellence on the Regulation of Robotics & AI at Sant’Anna School of Advanced Studies in Pisa, where she works on AI regulation and the ethical-legal assessment of advanced technologies in light of the EU AI Act.

She graduated in Law with honors from the University of Roma Tre and obtained a Ph.D. in Criminal Law at Sant’Anna School of Advanced Studies. In her dissertation, ‘Artificially Intelligent Criminal Justice? A policy oriented critical plea’, she investigated how the use of AI is transforming the criminal justice system and the risks of new forms of digital authoritarianism.

She has been a Visiting Ph.D. Researcher at Fordham Law School in New York City (2023) and a Doctoral Research Visiting Fellow at the Max Planck Institute for the Study of Crime, Security and Law in Freiburg (2024), where she was awarded a scholarship for her research project.

During her studies, Federica authored several academic articles published in both national and international journals. Her main research interests include the intersection of AI and democracy, the emerging concepts of smart prisons and digital rehabilitation, and gender-based crimes and gender discrimination.

In addition to her academic activities, Federica is a lawyer and has also served as a legal trainee at the Avvocatura Generale dello Stato and as a judicial clerk at the Criminal Public Prosecutor’s Office, where she was part of the anti-violence pool specialized in crimes against women, children, and other vulnerable victims.

As an Emile Noël Fellow, Federica will work on a critical examination of the EU AI Act in the context of top-down AI misuse, exploring new avenues for the democratization of AI.

Contact: ff2346@nyu.edu

Research Project

‘Democratizing AI: a critical examination of the EU AI Act in the context of top-down AI misuse.’

Even in democracies, states can inadvertently become primary actors in AI misuse. Justified by the need to ensure short-term safety, they may adopt pervasive mass surveillance systems and deploy AI for law enforcement objectives, thereby threatening fundamental freedoms and providing fertile ground for new forms of digital authoritarianism. The human-centric approach adopted by the EU and the fundamental rights impact assessment provided for by the AI Act may not be sufficient to address this risk. Trustworthy AI cannot truly be relied upon if governments can exploit exceptions on several grounds, such as law enforcement and migration control. This approach, which prioritizes national security over fundamental freedoms, may not fully align with Article 52 CFREU, especially given that the consequences of top-down AI misuse for human rights are potentially more severe than those of any other technology created to date, and perhaps not yet fully understood.

My research therefore investigates how the AI Act is (or is not) addressing top-down AI misuse. I argue that the EU AI strategy has focused mainly on regulating bottom-up AI misuse, while the potential misuse of AI applications by states is not adequately addressed. There is thus a gap in the EU AI safety strategy that should be identified, critically analyzed, and tackled. In essence, the analysis will address the loopholes in the AI Act concerning the risk of top-down AI misuse, and whether the balance the Act strikes between public interests and individual freedoms is adequate to address this exceptional risk effectively. In this context, the research will also explore the need for a more comprehensive legal framework that prioritizes the democratization of AI.