Yiannos Tolias

Yiannos is a Senior Emile Noël Global Fellow at NYU School of Law. His research project at NYU concerns liability for damages caused by artificial intelligence systems.

He is a lawyer at the European Commission (European Union), working in the legal teams that initiate infringement actions and bring cases before the Court of Justice of the European Union (CJEU) against EU Member States for violations of EU law in the area of the free movement of goods.

Yiannos also works in the European Commission team responsible for the EU Product Liability Directive (Directive 85/374/EEC). This team is currently examining possible adaptations of applicable EU-level laws to new technologies, including AI.

Prior to joining the European Commission, Yiannos was an Assistant Professor in EU law at the Universities of Edinburgh and Dundee in Scotland, UK. He received his Ph.D. in EU law from the University of Edinburgh and was then a Post-Doctoral Research Fellow at the Institute for Advanced Studies in the Humanities at the University of Edinburgh until his appointment as an Assistant Professor.

Yiannos’ current research interests also include the limits of Article 34 of the Treaty on the Functioning of the European Union (TFEU), a provision loosely analogous to the US negative Commerce Clause, and the assessment of the proportionality of EU Member States’ legislative measures that erect obstacles to trade in violation of Article 34 TFEU in order to protect public interests. He is particularly interested in the proportionality of national measures that restrict trade in order to protect public health; a recent case he worked on in this area concerned Scottish legislation introducing a minimum price per unit of alcohol, decided by the Court of Justice of the EU (Case C-333/14, Scotch Whisky Association). Finally, his research and work address whether EU legislative measures provide for complete, partial, or minimum harmonization, and the legal implications of each.

Research Project

Liability for Damages Caused by Artificial Intelligence Systems

Recent reports estimate that artificial intelligence (AI) could contribute up to €13.3 trillion to the global economy by 2030 and that its adoption could boost productivity by 30% in all major industries. In AI’s infancy, a number of issues need to be addressed, relating, among others, to research, industrial needs, product safety legislation, biased decision-making, ethics, accountability, transparency, and legal issues including fundamental rights.

My research topic concerns liability for damages caused by artificial intelligence systems. The European Parliament (European Union) conducted a public consultation on artificial intelligence in which 74% of respondents ranked “liability rules” as the second most important concern for regulatory purposes. AI-based applications present specificities that may challenge the suitability of existing legal frameworks at the US federal and state levels as well as at the EU and national levels. AI start-ups, including small and medium-sized enterprises (SMEs), need clarity on the legal framework governing AI. Concepts that were clear-cut some years ago, such as “product” and “producer” or “defect” and “damage”, may be less clear-cut in this new industrial age.

The specific question I would like to answer is how existing legal tools in the US can be used to deal with immediate to near-future (five years) AI legal liability problems. Specifically, which elements of existing US laws and US courts’ jurisprudence could be applicable to AI legal liability disputes? Could some of these existing US laws or legal principles be given a “purposive interpretation” fit for AI?

My study aims, first, to create a unique data bank of US legal tools that could be used immediately to address AI legal liability problems. Secondly, this analysis would indicate what actions developers of AI need to take to avoid legal liability. Thirdly, this clarity on legal liability would give confidence to both users of AI and AI start-ups, thus boosting AI development. Finally, this study would lay a strong foundation for any future regulation.