I’ll be working with Claudia D’Arpino in the Seattle Robotics Lab.
This inaugural symposium aims to make up for the two-plus years of in-person networking lost to the pandemic. Students, postdocs, industry researchers, and faculty are all encouraged to participate, and the event is free.
The symposium will be an all-day affair featuring invited short talks, posters, and social events.
Happening May 13th at UW.
We needed to put a Mayfield Kuri into the halls of our building for a study, but it turned out to be no small feat. We settled on a system that had the robot roam around and occasionally ping us for help when it needed to charge. It's a pattern to consider if you find yourself wanting to run a user study with a robot that can't operate fully autonomously; there's a minimal sketch of the loop below. Check out the paper, and don't miss the demo on the project page.
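Here's that roam-and-ask-for-help loop made concrete. The `robot` interface, the `notify_experimenter` hook, and the battery thresholds are all illustrative assumptions, not the actual system from the paper:

```python
import time

LOW_BATTERY = 0.15   # assumed threshold for paging a human
CHARGED = 0.95       # assumed threshold for resuming the deployment

def deployment_loop(robot, notify_experimenter):
    """Roam until the battery runs low, then stop and page a human."""
    while True:
        if robot.battery_fraction() < LOW_BATTERY:
            robot.stop()
            notify_experimenter("Robot needs a charge -- please dock it.")
            # Wait for a human to dock the robot and for charging to finish.
            while robot.battery_fraction() < CHARGED:
                time.sleep(60)
        else:
            # Take one step of whatever roaming behavior the study uses.
            robot.wander_step()
        time.sleep(1.0)
```

The appeal of the pattern is that human intervention happens only at well-defined moments, which keeps a long-running deployment manageable without full autonomy.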
I organized a relay along Seattle’s light rail line for UW CSE’s Race Condition Running club. Dozens of people ran, many a station was visited, and I can’t wait to do it again next year.
To appear as posters at the Conference on Robot Learning in November (virtually, I would guess):
- A paper describing how a robot can shape perceptions of its motion while doing a task. I’m happy that we were able to model this problem cleanly, and especially happy that our method works even with a tricky domain like coverage. iRobot, if you’re reading this, get in touch 😉
- Some new work describing how you can get a robot to make natural-seeming back-channels (nods, in this paper) based on human speech signals and a head-pose estimate. All it took was a fairly small amount of human-human interaction data, and the models are small enough to run on real robots; there's a toy sketch of the idea just after this list.
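To make that concrete, here's a hedged sketch of the kind of small model the item describes: a window of speech features plus a head-pose estimate in, a nod probability out. The layer sizes, feature dimensions, and architecture here are my own placeholder choices, not the ones from the paper:

```python
import torch
import torch.nn as nn

class BackchannelPredictor(nn.Module):
    """Tiny model: recent speech features + head pose -> P(nod now)."""

    def __init__(self, speech_dim=2, pose_dim=3, hidden=32):
        super().__init__()
        # A small GRU summarizes the last few seconds of speech features
        # (e.g., per-frame energy and pitch; the feature choice is assumed).
        self.speech_encoder = nn.GRU(speech_dim, hidden, batch_first=True)
        # A linear head combines the speech summary with the head pose.
        self.head = nn.Linear(hidden + pose_dim, 1)

    def forward(self, speech_window, head_pose):
        # speech_window: (batch, frames, speech_dim); head_pose: (batch, pose_dim)
        _, h = self.speech_encoder(speech_window)
        logits = self.head(torch.cat([h[-1], head_pose], dim=-1))
        return torch.sigmoid(logits)

# A model this size has only a few thousand parameters, so running it
# onboard a robot in real time is plausible.
model = BackchannelPredictor()
p_nod = model(torch.randn(1, 50, 2), torch.randn(1, 3))
```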
We have a new short paper summarizing our first experiments with modeling and controlling how a robot’s motion impacts the impressions formed by observers.