Machine Learning in Medicine and Personalized Warnings

Prior legal analyses have not comprehensively examined the technical character of machine learning (ML) algorithms that could be applied to the medical domain. Understanding the challenges of building ML algorithms and the specificities of the medical domain is foundational for examining the legal implications. This paper examines these challenges and their implications for legal liability, focusing on warnings. It appears that explanations and/or confidence intervals provided by ML medical devices for their specific predictions, referred to in this paper as personalized warnings, are in certain instances requisite elements of adequate warnings. ML explanations and confidence intervals could reveal hidden, albeit foreseeable, risks. The different forms of ML interpretability (ML systems giving explanations for their predictions) in medicine are the subject of further research that will complement this paper. This paper lays the foundation for the relationship between ML interpretability and confidence intervals, on the one hand, and warnings, on the other. It also introduces a general framework for how such warnings could be formulated.
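
To make the notion of a personalized warning concrete, the following is a minimal, hypothetical sketch of what an ML medical device's per-patient output might contain: a risk prediction, an uncertainty estimate for that prediction, and a per-feature explanation. The model, synthetic data, feature names, and the use of global feature importances as a stand-in for an instance-level explanation method are illustrative assumptions, not part of the paper's legal framework.

```python
# Hypothetical sketch: packaging one patient's prediction with a confidence
# interval and a crude explanation, i.e. a "personalized warning".
from dataclasses import dataclass
import numpy as np
from sklearn.ensemble import RandomForestClassifier

@dataclass
class PersonalizedWarning:
    risk_estimate: float        # predicted probability of the adverse outcome
    confidence_interval: tuple  # spread of per-tree predictions (approximate)
    explanation: dict           # per-feature importance proxy (illustrative)

def personalized_warning(model, x, feature_names):
    """Bundle a single patient's prediction, uncertainty, and explanation."""
    x = np.asarray(x).reshape(1, -1)
    # Disagreement among the ensemble's trees gives a rough uncertainty band.
    per_tree = np.array([t.predict_proba(x)[0, 1] for t in model.estimators_])
    lo, hi = np.percentile(per_tree, [2.5, 97.5])
    # Global feature importances stand in here for a true instance-level
    # explanation method (e.g., SHAP or LIME) purely for illustration.
    explanation = dict(zip(feature_names, model.feature_importances_.round(3)))
    return PersonalizedWarning(float(per_tree.mean()),
                               (float(lo), float(hi)),
                               explanation)

# Illustrative use with synthetic data and hypothetical feature names.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
print(personalized_warning(clf, X[0], ["age", "dose", "biomarker"]))
```

The point of the sketch is only that such outputs are already technically available from many ML systems; whether and when they become requisite elements of an adequate warning is the legal question this paper addresses.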