The Journey of XAI: Attention Map, Enhancing Trust, and Bidirectional Value Alignment
The current generation of AI systems offers tremendous benefits, but its effectiveness will be limited by the machine's inability to explain its decisions and actions to users. Explainable AI (XAI) is essential if users are to understand, appropriately trust, and effectively manage this emerging generation of artificially intelligent partners. The DARPA XAI program (2017-2021) was designed to address these challenges. In this talk, I will share some background on the DARPA XAI program and our research journey spanning all three phases. We started in Phase I with a focus on machine performance, developing new algorithms that unveil the machine's internal decision-making by combining deep learning with And-Or-Graphs (AOGs). In Phase II, we evaluated which types of explanation work best for humans; our Science Robotics paper on opening medicine bottles sheds light on how to enhance human trust by explaining robot behavior. In contrast to the one-directional explanations of Phases I and II, we moved to bidirectional explanations in Phase III, culminating in the recent Science Robotics paper on bidirectional human-robot value alignment.