Explainable AI in Medicine, Confidence Intervals and Warnings

Advances in artificial intelligence (AI) hold the potential to transform a number of domains, including healthcare. These rapid developments have led policy makers and legal scholars to examine the legal implications of AI. Addressing those implications first requires an understanding of the technical character of AI, and specifically of the branch concerned with machine learning (ML). That understanding makes it possible to identify any new risks or safety concerns that distinguish ML systems from conventional products and services; if such new risks and concerns exist, a differentiated legal treatment of ML systems could be justified. This paper focuses on ML liability in medicine, and particularly on warnings. The crux of the matter for ML systems performing medical tasks, such as diagnosis, is the type of information (warning) that the manufacturer/ML medical system should provide to the physician. The physician acts as a learned intermediary, and the manufacturer is held to the standard of an expert in the field. The manufacturer/ML medical system should therefore convey information about the ML medical prediction to the physician in a form suited to her expertise, enabling the physician to give the patient appropriate explanations and obtain the patient's informed consent. This paper sets the foundations for a correlation between explainable ML, ML confidence intervals, and warnings.
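
As a purely illustrative sketch of the kind of information at issue (not a method proposed in this paper), the snippet below shows a classifier's prediction for a single case reported together with a rough bootstrap confidence interval and a crude feature-importance "explanation". The dataset, model, and interval construction are all assumptions made for the example; they stand in for whatever prediction, uncertainty estimate, and explanation a given ML medical system would surface to the physician.

```python
# Illustrative sketch only: report a prediction, an uncertainty estimate,
# and a simple explanation, i.e. the type of information a manufacturer/ML
# medical system might pass to the physician as a warning.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

data = load_breast_cancer()                      # stand-in clinical dataset
X, y, names = data.data, data.target, data.feature_names
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Predicted probability of the positive class for one held-out case.
case = X_test[[0]]
p_hat = model.predict_proba(case)[0, 1]

# Bootstrap the training data to obtain a rough 95% confidence interval
# for that predicted probability (one of many possible uncertainty measures).
rng = np.random.default_rng(0)
boot_probs = []
for _ in range(50):
    idx = rng.integers(0, len(X_train), len(X_train))
    m = RandomForestClassifier(n_estimators=50, random_state=0)
    m.fit(X_train[idx], y_train[idx])
    boot_probs.append(m.predict_proba(case)[0, 1])
lo, hi = np.percentile(boot_probs, [2.5, 97.5])

# A crude global "explanation": the features the model weights most heavily.
top = np.argsort(model.feature_importances_)[::-1][:3]

print(f"Predicted probability: {p_hat:.2f} (95% CI {lo:.2f}-{hi:.2f})")
print("Most influential features:", list(names[top]))
```

In practice the explanation and interval would be produced by whatever explainability and uncertainty-quantification techniques the manufacturer adopts; the point of the sketch is only that prediction, confidence interval, and explanation can be presented together as a single item of information directed at the learned intermediary.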