Robot Research Interests
K.R. Zentner
Questions are welcome.
How do we control robots?
How to achieve our goals?
How to complete complex tasks?
How to move the actuators?
How do we control robots?
Human-Level Goals
High-Level Tasks
Continuous Control
How do we control robots?
Imperative Programs
State Machines
Motion Planning
Path Planning
Hierarchical Reinforcement Learning
(Learned) Continuous Control
Inverse Kinematics
Realtime Control (see the layered sketch below)
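To make the layering concrete, here is a minimal Python sketch of one pass through such a stack. Every name in it (choose_task, task_to_joint_targets, pd_torques, the gains, the 7-DoF arm) is a hypothetical placeholder for illustration, not an existing API; the point is only the flow from human-level goal to high-level tasks to actuator commands.

import numpy as np

def choose_task(goal):
    # Human-level goal -> sequence of high-level tasks (a planner or state machine would live here).
    if goal == "set the table":
        return ["pick(plate)", "place(plate, table)", "pick(cup)", "place(cup, table)"]
    return []

def task_to_joint_targets(task, q):
    # High-level task -> joint-space targets (motion planning / inverse kinematics would live here).
    return np.zeros_like(q)  # placeholder target configuration

def pd_torques(q, q_target, qd, kp=50.0, kd=5.0):
    # Continuous control: a simple PD law producing actuator torques.
    return kp * (q_target - q) - kd * qd

q = np.zeros(7)   # joint positions of a (made-up) 7-DoF arm
qd = np.zeros(7)  # joint velocities
for task in choose_task("set the table"):
    tau = pd_torques(q, task_to_joint_targets(task, q), qd)
    # tau would be streamed to the actuators at a high rate while the task executes.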
What are the requirements?
Human-interpretable and specifiable
Goals not known at engineering time
Very high sample efficiency
Interface with black-box behavior
Debuggable, not (necessarily) specifiable
Engineered with a known family of tasks
Interface with continuous control
Potential Methods
- Imperative Programs
- Finite State Machines / Automata (see the sketch after this list)
- Behavior Trees
- Graphical Models
- Logic Programs
- Inverse Reinforcement Learning
- Natural language?
- More Reinforcement Learning?
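To make one of the options above concrete, here is a minimal finite state machine in Python for a pick-and-place task. The states, events, and retry rule are invented for illustration; this is a sketch of the representation, not a claim about how any of these methods is actually used here.

# Hypothetical states and events for a pick-and-place task.
TRANSITIONS = {
    ("approach", "object_reached"): "grasp",
    ("grasp", "grasp_succeeded"): "lift",
    ("grasp", "grasp_failed"): "approach",  # simple error handling: go back and retry
    ("lift", "object_lifted"): "done",
}

def step(state, event):
    # Advance the machine; unknown (state, event) pairs leave the state unchanged.
    return TRANSITIONS.get((state, event), state)

state = "approach"
for event in ["object_reached", "grasp_failed", "object_reached",
              "grasp_succeeded", "object_lifted"]:
    state = step(state, event)
print(state)  # -> done

A behavior tree or logic program would express similar structure with different composition rules (e.g. sequence and fallback nodes instead of explicit transitions).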
Main Difficulties
- Causal Dependencies
- Concurrency
- Object Identity
- Error Handling
- Interpretability
- Learnability
Current Approach
Logic programs (over continuous vectors); see the sketch below
Other places where logic programs are state of the art:
- Dialogue Systems (Inference-Based Dialogue)
- Games (Goal-Oriented Action Planning)
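As a rough sketch of what a logic program over continuous vectors could look like: predicates are boolean functions grounded in a continuous state vector, and rules chain them into task-level conclusions. The state layout, predicates, thresholds, and rule format below are all hypothetical illustrations, not the actual system.

import numpy as np

# Assumed state layout (illustrative): [gripper_x, gripper_y, gripper_z,
#                                       object_x, object_y, object_z, gripper_open]
state = np.array([0.30, 0.10, 0.20, 0.31, 0.11, 0.21, 0.0])

def near(s, tol=0.05):
    # Predicate grounded in continuous values: gripper close to object.
    return np.linalg.norm(s[0:3] - s[3:6]) < tol

def gripper_closed(s):
    return s[6] < 0.5

# Rules: the head holds if every predicate in the body holds (one forward-chaining step).
RULES = {
    "holding": [near, gripper_closed],
}

def infer(s):
    return {head for head, body in RULES.items() if all(p(s) for p in body)}

print(infer(state))  # -> {'holding'} for this state

Planning with such a representation means searching for action sequences that make desired heads true, which connects to the efficiency difficulties listed below.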
Current Difficulties
- Efficient planning
- Efficient learning
- Lack of prior work combining logic-based planning with learning
- Lack of established datasets