The International Workshop on Actionable Knowledge Representation and Reasoning for Robots (AKR³) is dedicated to Knowledge Representation and Reasoning (KRR) in the area of cognitive robotics, with a focus on acquiring knowledge from varying sources and making it actionable for robotic applications. By bringing together communities from around the world specializing in KRR and robotics, we aim to increase collaboration and accelerate advancements in the field.
Given the plethora of sources and datasets for common sense knowledge on the Web, recent advances in language modelling, and strides in learning through human-robot interaction, it is a timely research question which methods and approaches can enable robots to take advantage of this existing common sense knowledge to reason about how to perform tasks in the real world. The central issue is how to allow robots to perform tasks flexibly and adaptively, gracefully handling contextually determined variance in task execution. We expect this line of research to contribute to better generalizability and robustness of robots performing tasks in everyday environments.
For instance, household robots are still unable to autonomously prepare meals, set or clean the table, or do chores other than vacuum cleaning. Much of the knowledge needed to refine vague task instructions and transfer them to new task variations is contained in instruction websites such as WikiHow, encyclopedic websites such as Wikipedia, and many other web-based information sources. We argue that such knowledge can be used to teach robots to perform new task variations.
The topics of interest to the workshop include but are not limited to:
- Knowledge Representation for cognitive robotics: the importance of linking object information to action and environment information
- Knowledge-Enhanced Robotics: integrating knowledge graphs and question answering to extend robot capabilities
- Approaches to leveraging common sense knowledge from varying sources (e.g. the Web, LLMs, human-robot interaction)
- Linking common sense knowledge to perception and execution
- Multi-modal AI Reasoning for Robotics: integrating diverse sources of information for environmental understanding and interaction
- Translation of task requests into body movements and knowledge-based parametrisation of such movements
- Novel formalisms and approaches to represent and encode knowledge for robots
- Novel cognitive architectures and paradigms supporting reasoning with Web knowledge
- Use of large language models and prompting to infer action-relevant knowledge
- Natural language processing applied to common sense knowledge extraction from unstructured sources