Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead

Authors
Cynthia Rudin
DOI
https://doi.org/10.1038/s42256-019-0048-x
Publication journal
Nature Machine Intelligence

Black box machine learning models are currently being used for high-stakes decision making throughout society, causing problems in healthcare, criminal justice and other domains. Some people hope that creating methods for explaining these black box models will alleviate some of the problems, but trying to explain black box models, rather than creating models that are interpretable in the first place, is likely to perpetuate bad practice and can potentially cause great harm to society. The way forward is to design models that are inherently interpretable. This Perspective clarifies the chasm between explaining black boxes and using inherently interpretable models, outlines several key reasons why explainable black boxes should be avoided in high-stakes decisions, identifies challenges to interpretable machine learning, and provides several example applications where interpretable models could potentially replace black box models in criminal justice, healthcare and computer vision.
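To make the distinction concrete, here is a minimal sketch (not from the paper) contrasting an inherently interpretable model, whose full decision logic can be read directly, with a black box whose predictions would require post-hoc explanation. The dataset, feature names, and hyperparameters are illustrative assumptions only, not the author's experiments.

```python
# Illustrative sketch: inherently interpretable model vs. black box.
# Dataset and hyperparameters are assumptions chosen only for demonstration.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Black box: often accurate, but its internal decision logic is not directly inspectable.
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Inherently interpretable: a shallow tree whose complete decision rules can be printed and audited.
interpretable = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)

print("black box accuracy:     ", black_box.score(X_test, y_test))
print("interpretable accuracy: ", interpretable.score(X_test, y_test))
print(export_text(interpretable, feature_names=list(X.columns)))
```

In this toy setting the shallow tree's rules are the model, so no separate explanation method is needed; whether such a simple model matches black box accuracy on a given high-stakes problem is exactly the empirical question the Perspective raises.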

Publication Date
2019
Keywords
interpretable model
machine learning
mechanistic model