Robots Learning Dexterity


Deep reinforcement learning provides a path towards solving many outstanding challenges in robotics: it lets machines learn more like humans do, by trial and error. The main obstacle has been getting enough data for training. Recent advances show that sim-to-real techniques, in which a policy is trained entirely in simulation and then transferred to a real robot, may bridge this gap and enable a new wave of applications. To showcase these techniques, we train a deep neural network to solve the Rubik's Cube in simulation, and then deploy it on a real-world, human-like robot hand. This shows that reinforcement learning isn't just a tool for virtual tasks, but can solve physical-world problems requiring unprecedented dexterity.
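The sim-to-real transfer described above is commonly implemented with domain randomization: each training episode runs in a simulator whose physical parameters are resampled, so the learned policy must be robust to a whole distribution of possible worlds, of which the real robot is (hopefully) one instance. The sketch below illustrates the idea only; the parameter names and ranges are hypothetical, not those used in the actual Rubik's Cube system.

```python
import random

def sample_sim_params(rng):
    """Sample a randomized set of simulator parameters (illustrative ranges)."""
    return {
        "object_mass": rng.uniform(0.05, 0.2),     # kg, hypothetical range
        "friction": rng.uniform(0.5, 1.5),
        "actuator_gain": rng.uniform(0.8, 1.2),
        "observation_noise": rng.uniform(0.0, 0.02),
    }

def train(num_episodes, seed=0):
    """Run a (stubbed) training loop with per-episode domain randomization."""
    rng = random.Random(seed)
    sampled = []
    for _ in range(num_episodes):
        params = sample_sim_params(rng)
        # In a real setup, these params would reconfigure the physics engine
        # before rolling out the current policy and applying an RL update.
        sampled.append(params)
    return sampled

episodes = train(3)
```

Because every episode sees different masses, frictions, and sensor noise, the policy cannot overfit to one simulator, which is what allows it to survive the transfer to real hardware.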


Peter Welinder is a Research Scientist at OpenAI, where he leads projects on learning-based robotics. His past projects include teaching robots to learn by imitating humans and to autonomously manipulate objects with robotic hands. Previously, he was Head of Machine Learning at Dropbox, where he founded and managed applied machine learning and infrastructure teams. Out of grad school, he founded a startup, Anchovi Labs, which was acquired by Dropbox in 2012. Peter has a PhD in Computation and Neural Systems from Caltech and a degree in Physics from Imperial College London.
