Actions Have Consequences: Leveraging Dialogue to Calibrate Automation
By Yonatan Bisk
Deployed systems take actions in the world, both virtual and physical. While research on robot autonomy has focused on making systems more automatic and reducing the need for human feedback, there are many scenarios where this is undesirable (e.g., the system lacks information about its specific deployment environment) yet the human has begun to over-trust the system's abilities. In this work, we aim to address this concern by creating an embodied dialogue agent that assesses when human judgment is necessary (e.g., high-stakes contexts, high diversity of possible outcomes, …) and communicates this need to the human in natural language. The agent should then be able to integrate the human's responses to update its behavior. We will evaluate the agent with both quantitative and qualitative metrics of human-AI coordination and trust.
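As one concrete illustration of the kind of deferral policy such an agent might use, the sketch below asks a clarification question when either the estimated stakes of acting or the entropy over candidate actions (a proxy for outcome diversity) exceeds a threshold. This is a minimal, hypothetical sketch: the names (`ActionCandidate`, `should_ask_human`), the thresholds, and the template question are illustrative assumptions, not the method described above, which would calibrate these signals from interaction.

```python
import math
from dataclasses import dataclass

@dataclass
class ActionCandidate:
    name: str
    probability: float  # agent's confidence that this action is correct, in [0, 1]
    stakes: float       # estimated cost of taking this action wrongly, in [0, 1]

def outcome_entropy(candidates: list[ActionCandidate]) -> float:
    """Shannon entropy over candidate actions; higher means more diverse outcomes."""
    return -sum(c.probability * math.log(c.probability)
                for c in candidates if c.probability > 0)

def should_ask_human(candidates: list[ActionCandidate],
                     stakes_threshold: float = 0.7,
                     entropy_threshold: float = 1.0) -> bool:
    """Defer to the human when any option is high-stakes or outcomes are diverse."""
    high_stakes = max(c.stakes for c in candidates) > stakes_threshold
    high_diversity = outcome_entropy(candidates) > entropy_threshold
    return high_stakes or high_diversity

def clarification_question(candidates: list[ActionCandidate]) -> str:
    """Render the deferral as a natural-language question (template-based here)."""
    options = ", ".join(f"'{c.name}'" for c in candidates)
    return f"I'm not confident which action is right ({options}). Which should I take?"

# Example: ambiguous, moderately risky situation triggers a question.
candidates = [ActionCandidate("open drawer", 0.4, 0.2),
              ActionCandidate("open cabinet", 0.35, 0.8),
              ActionCandidate("do nothing", 0.25, 0.0)]
if should_ask_human(candidates):
    print(clarification_question(candidates))
```

A learned agent would replace the fixed thresholds with calibrated estimates and fold the human's answer back into its policy; the template question stands in for the natural-language generation component.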