The Journey of XAI: Attention Map, Enhancing Trust, and Bidirectional Value Alignment
The current generation of AI systems offers tremendous benefits, but their effectiveness will be limited by the machines' inability to explain their decisions and actions to users. Explainable AI (XAI) is essential if users are to understand, appropriately trust, and effectively manage this emerging generation of artificially intelligent partners. The DARPA XAI program (2017-2021) was designed to address these challenges. In this talk, I will share some background on the DARPA XAI program and our research journey spanning all three phases. In Phase I, we focused on the machine's performance, developing new algorithms that unveil internal decision-making by combining deep learning with And-Or Graphs (AOGs). In Phase II, we evaluated which types of explanations are more effective for humans; our Science Robotics paper on opening medicine bottles sheds light on how explaining robot behavior enhances human trust. In contrast to the one-directional explanations of Phases I and II, we moved to bidirectional explanations in Phase III, culminating in a recent Science Robotics paper on bidirectional human-robot value alignment.
Dr. Yixin Zhu is an Assistant Professor at Peking University. He received his Ph.D. ('18) from UCLA, advised by Prof. Song-Chun Zhu. His research builds interactive AI by integrating high-level common sense (functionality, affordance, physics, causality, intent) with raw sensory inputs (pixels and haptic signals) to enable richer representations and abstract reasoning about objects, scenes, shapes, numbers, and agents. During his Ph.D. and postdoctoral studies, his work was supported by DARPA MSEE, DARPA SIMPLEX, DARPA XAI, ONR MURI, and the ONR Cognitive Systems for Human-Machine Teaming program. His recent publications have been featured in MIT Technology Review, IEEE Spectrum, Smithsonian, Xinhua News, CGTN, China Daily, and news outlets at UCLA and Peking University.