To appear as posters at the Conference on Robot Learning in November (virtually, I would guess):
- A paper describing how a robot can shape how its motion is perceived while it carries out a task. I’m happy that we were able to model this problem cleanly, and especially happy that our method holds up even when domain coverage gets tricky. iRobot, if you’re reading this, get in touch 😉
- Some new work on getting a robot to produce natural-seeming back-channels (nods, in this paper) from a human’s speech signal and a head-pose estimate — there’s a rough sketch of the setup just below. All it took was a fairly small amount of human-human interaction data, and the models are small enough to run on real robots.
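Just to make the inputs and outputs of that second one concrete, here’s a rough sketch of the kind of small model involved. To be clear, this is my own toy illustration: the window length, prosody features, and architecture are placeholder choices for this post, not what’s in the paper.

```python
# Toy sketch (not the paper's model): a tiny classifier that maps a short
# window of prosodic features from the human's speech, plus the current
# head-pose estimate, to a "nod now?" decision.

import torch
import torch.nn as nn


class NodBackchannelModel(nn.Module):
    """Assumed inputs: per-frame prosody (e.g. pitch, energy) over a short
    window, concatenated with head-pose angles (roll, pitch, yaw)."""

    def __init__(self, window: int = 20, prosody_dim: int = 2,
                 pose_dim: int = 3, hidden: int = 32):
        super().__init__()
        in_dim = window * prosody_dim + pose_dim
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),  # logit for "emit a nod at this moment"
        )

    def forward(self, prosody: torch.Tensor, pose: torch.Tensor) -> torch.Tensor:
        # prosody: (batch, window, prosody_dim); pose: (batch, pose_dim)
        x = torch.cat([prosody.flatten(1), pose], dim=1)
        return self.net(x)


# Example: one 20-frame window of (pitch, energy) plus a (roll, pitch, yaw) estimate.
model = NodBackchannelModel()
prosody = torch.randn(1, 20, 2)
pose = torch.randn(1, 3)
nod_prob = torch.sigmoid(model(prosody, pose))
print(f"nod probability: {nod_prob.item():.2f}")
```

A model this size is a few thousand parameters, which is the sort of footprint that runs comfortably on a robot’s onboard compute.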