Background
Effective human-robot interaction must account for human expectations and knowledge of the world, including how robots are expected to act and interact in various situations. However, a human's knowledge of the robot may be inconsistent with the ground truth, so the robot can fail to meet expectations and the integrity of the interaction can be compromised. Explicable planning has emerged in recent years as a planning approach that reconciles human expectations with optimal robot behavior, producing more predictable and interpretable robot decision-making. However, existing explicable planning does not address safety, which can result in harm to either party in an interaction.
Invention Description
Researchers at Arizona State University have developed Safe Explicable Planning (SEP), a new method that extends explicable planning to support the specification of a safety bound. The objective of SEP is to find a policy that generates behavior close to human expectations while satisfying the safety constraint introduced by the bound. The method can return a set of safe explicable policies through multi-objective optimization, or a single policy as a more efficient approximate solution. It also includes theoretical proofs of the Pareto optimality of the solutions under the designer-specified bound.
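The description above suggests a constrained, multi-objective selection over candidate policies. The sketch below illustrates one reading of that idea in Python; the policy representation, the robot_value and expectation_score helpers, and the exact form of the safety bound are illustrative assumptions rather than the disclosed algorithm.

```python
# Illustrative sketch only: helper names and the bound form are assumptions.
# It mirrors the two modes in the description: (1) return the set of safe
# explicable policies found by multi-objective reasoning, or (2) return a
# single safe policy as a simpler approximate alternative.
from typing import Callable, List, Optional

Policy = object  # placeholder for a candidate policy representation


def safe_explicable_policies(
    candidates: List[Policy],
    robot_value: Callable[[Policy], float],        # value under the robot's own model (assumed helper)
    expectation_score: Callable[[Policy], float],  # closeness to human-expected behavior (assumed helper)
    safety_bound: float,                           # designer-specified lower bound on robot value
) -> List[Policy]:
    """Return the Pareto-optimal policies among those satisfying the safety bound."""
    # 1. Filter: discard policies that violate the safety bound.
    safe = [p for p in candidates if robot_value(p) >= safety_bound]

    # 2. Multi-objective step: keep policies not dominated in
    #    (robot value, expectation score) by any other safe policy.
    def dominated(p: Policy) -> bool:
        return any(
            robot_value(q) >= robot_value(p)
            and expectation_score(q) >= expectation_score(p)
            and (robot_value(q) > robot_value(p) or expectation_score(q) > expectation_score(p))
            for q in safe
        )

    return [p for p in safe if not dominated(p)]


def single_safe_explicable_policy(
    candidates: List[Policy],
    robot_value: Callable[[Policy], float],
    expectation_score: Callable[[Policy], float],
    safety_bound: float,
) -> Optional[Policy]:
    """Approximate alternative: pick the single most expectation-aligned safe policy."""
    safe = [p for p in candidates if robot_value(p) >= safety_bound]
    return max(safe, key=expectation_score) if safe else None
```

In this sketch, the safety bound acts as a hard filter before any trade-off is considered, which is one way to interpret the claim that the returned solutions are Pareto optimal under the designer-specified bound.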
Potential Applications:
- Robot navigation (such as autonomous driving)
- Collaborative robot-human tasks (such as in automated manufacturing)
- Medical robotics
Benefits and Advantages:
- Effective safe explicable robot behavior
- Pareto-optimal solutions under designer-specified safety bounds
- Increased safety for applications where robots work closely with humans