One small step for man, a galloping dance craze for robot-kind.
In a new paper to be published next month in the Association for Computing Machinery’s Transactions on Graphics journal, researchers from the University of California, Berkeley trained a deep neural network to copy human movements simply by feeding it YouTube videos, paving the way for machines that better mimic people.
Humanoid characters in a computer simulation learned to do backflips, handsprings, and cartwheels from video clips, using state-of-the-art techniques in computer vision and reinforcement learning.
Xue Bin Peng and Angjoo Kanazawa, two of the artificial intelligence researchers who developed the program, said in a blog post that this is a departure from previous techniques, which strongly restricted the behaviors that could be produced.
“Therefore, these methods tend to be limited in the types of skills that can be learned, and the resulting motions can look fairly unnatural. More recently, deep learning techniques have demonstrated promising results for visual imitation on domains such as Atari and fairly simple robotics tasks,” the pair said.
According to the researchers, the system works in three stages. First, the pose estimation stage predicts the pose of the video’s subject in each frame. Next, the motion reconstruction stage consolidates those per-frame pose predictions into a single reference motion and fixes artifacts to smooth the movement. Finally, the reference motion is passed to the motion imitation stage, where a simulated character is trained to mimic it using reinforcement learning.
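The three-stage pipeline can be sketched in code. This is a toy illustration only, not the authors’ implementation: poses are reduced to single numbers, the function names are made up, and a simple averaging-and-nudging loop stands in for the real pose estimator and reinforcement learning.

```python
# Hypothetical sketch of the described pipeline:
# pose estimation -> motion reconstruction -> motion imitation.
# All names and logic here are illustrative stand-ins.

def estimate_poses(video_frames):
    """Stage 1: predict the subject's pose in each frame.
    A real system would run a neural pose estimator per frame;
    here each frame already 'is' its pose value."""
    return [frame for frame in video_frames]

def reconstruct_motion(poses):
    """Stage 2: consolidate per-frame predictions into a reference
    motion, smoothing out artifacts (here: a moving average)."""
    smoothed = []
    for i in range(len(poses)):
        window = poses[max(0, i - 1): i + 2]
        smoothed.append(sum(window) / len(window))
    return smoothed

def imitate_motion(reference_motion, training_steps=100):
    """Stage 3: train a character to track the reference motion.
    The paper uses reinforcement learning; this toy update rule
    just nudges the character's pose toward each reference pose."""
    character_pose = 0.0
    for _ in range(training_steps):
        for target in reference_motion:
            character_pose += 0.1 * (target - character_pose)
    return character_pose

frames = [0.0, 1.0, 2.0, 3.0]  # stand-in "video"
reference = reconstruct_motion(estimate_poses(frames))
final_pose = imitate_motion(reference)
```

The point of the decomposition is that each stage can use the best available method for its sub-problem, which is exactly the design the researchers emphasize below.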
Researchers were able to teach simulated characters more than 20 different skills like vaulting, jumping jacks, high kicks, pushing a box, dancing from side to side, running, and walking.
“The key is in decomposing the problem into more manageable components, picking the right methods for those components, and integrating them together effectively. However, imitating skills from videos is still an extremely challenging problem, and there are plenty of video clips that we are not yet able to reproduce,” Peng and Kanazawa said.
Unfortunately, one of the actions they cannot properly reproduce yet is the 2012 viral dance hit “Gangnam Style.”
“We still have all of our work ahead of us, and we hope that this work will help inspire future techniques that will enable agents to take advantage of the massive volume of publicly available video data to acquire a truly staggering array of skills,” Peng and Kanazawa said.