Advances in artificial intelligence could improve how we make decisions in a broad array of applications, such as healthcare and urban planning. Before these new tools can be widely adopted, however, practitioners need to understand and trust how complex machine learning systems reach their decisions.
Please join the National Academies for a symposium on Interpretable and Explainable AI and Machine Learning on Tuesday, June 21, from 1 to 4 p.m. ET. During the symposium, expert speakers will discuss the possibilities and challenges of interpretable machine learning across a variety of fields, including cognitive science, healthcare, and policy. The symposium will conclude with a moderated panel on the future landscape of interpretable and ethical machine learning.
- Dr. Been Kim (Google Brain) will explore how to bridge the communication gap between humans and machines.
- Dr. Gari Clifford (Emory University) will discuss the use of interpretable machine learning in healthcare and its impact on bias and ethics.
- Dr. Christian Lebiere (Carnegie Mellon University) will present on cognitive models at the interface of humans and machines.
- Patrick Hall (bnh.ai and The George Washington University) will discuss machine learning transparency in the legal context.