Robots visualize actions and plan without human instruction
Sergey Levine and colleagues at UC Berkeley have developed robotic learning technology that enables robots to visualize how different behaviors will affect the world around them, without human instruction.
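The idea of "visualizing" outcomes can be sketched as model-based planning: sample candidate action sequences, roll each through a learned predictive model, and execute the sequence whose imagined outcome lands closest to the goal. The snippet below is a minimal, hypothetical illustration of that loop (random-shooting planning); the linear `predict_next_state` is a stand-in for the learned video-prediction model, not the Berkeley system itself.

```python
import numpy as np

rng = np.random.default_rng(0)

def predict_next_state(state, action):
    # Stand-in for a learned predictive model: trivial additive
    # dynamics, just so the planning loop runs end to end.
    return state + action

def plan(state, goal, n_candidates=100, horizon=5):
    """Sample random action sequences, imagine each one's outcome with
    the predictive model, and keep the sequence whose final imagined
    state is closest to the goal."""
    best_cost, best_actions = float("inf"), None
    for _ in range(n_candidates):
        actions = rng.normal(scale=0.5, size=(horizon, state.shape[0]))
        s = state
        for a in actions:
            s = predict_next_state(s, a)
        cost = np.linalg.norm(s - goal)
        if cost < best_cost:
            best_cost, best_actions = cost, actions
    return best_actions, best_cost

start = np.zeros(2)
goal = np.array([1.0, -1.0])
actions, cost = plan(start, goal)
```

In the real setting the predictive model is a deep network trained on the robot's own camera images, and the search over candidates is typically refined iteratively rather than purely random.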
Reversal Learning Task in Children with Autism Spectrum Disorder: A Robot-Based Approach.
Children with autism spectrum disorder (ASD) engage in highly perseverative and inflexible behaviours.
The aim of our study is to investigate the role of the robotic toy Keepon in a cognitive flexibility task performed by children with ASD and typically developing (TD) children.
On the other hand, their cognitive flexibility performance is generally similar in the robot and human conditions, with the exception of the learning phase, where the robot can interfere with performance.
Last month, we showed an earlier version of this robot whose vision system we had trained using domain randomization, that is, by showing it simulated objects with a variety of colors, backgrounds, and textures, without using any real images.
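Domain randomization amounts to generating synthetic training scenes whose nuisance factors (colors, backgrounds, placement) vary so widely that a detector trained on them transfers to real images. A toy sketch, with a hypothetical `random_scene` renderer standing in for a real simulator:

```python
import numpy as np

rng = np.random.default_rng(42)

def random_scene(h=64, w=64):
    """Render a toy 'simulated' image: a random solid background with a
    randomly colored, randomly placed square object. Randomizing color,
    position, and background per sample is the essence of domain
    randomization -- the detector never sees a real photo."""
    img = np.full((h, w, 3), rng.integers(0, 256, size=3), dtype=np.uint8)
    size = int(rng.integers(8, 17))
    y, x = int(rng.integers(0, h - size)), int(rng.integers(0, w - size))
    img[y:y + size, x:x + size] = rng.integers(0, 256, size=3)
    label = (y + size // 2, x + size // 2)  # object center, the training target
    return img, label

dataset = [random_scene() for _ in range(100)]
```

A real pipeline would render textured 3D objects and randomize lighting and camera pose as well, but the structure of the data generator is the same.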
(The vision system is never trained on a real image.) The imitation network observes a demonstration, processes it to infer the intent of the task, and then accomplishes that intent starting from a different initial configuration. Applied to block stacking, the training data consists of pairs of trajectories that stack blocks into the same set of towers in the same order but start from different initial states.
At test time, the imitation network was able to parse demonstrations produced by a human, even though it had never seen messy human data before.
The imitation network uses soft attention over the demonstration trajectory and over the state vector, which represents the locations of the blocks, allowing the system to work with demonstrations of variable length.
It also performs attention over the locations of the different blocks, allowing it to imitate longer trajectories than it's ever seen, and stack blocks into a configuration that has more blocks than any demonstration in its training data.
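The mechanism that makes variable-length demonstrations possible is soft (dot-product) attention: each demonstration timestep gets a similarity weight against the current query, and the network reads out a weighted average. A minimal numerical sketch, assuming simple dot-product scoring (the actual architecture is more elaborate):

```python
import numpy as np

def soft_attention(query, keys, values):
    """Dot-product soft attention: score each demonstration timestep by
    its similarity to the query, normalize with a softmax, and return
    the weighted average of the values. Because weights are computed
    per timestep, demonstrations of any length T fit naturally."""
    scores = keys @ query                       # (T,) similarity scores
    scores = scores - scores.max()              # numerical stability
    weights = np.exp(scores) / np.exp(scores).sum()
    return weights @ values, weights            # context (d,), weights (T,)

rng = np.random.default_rng(1)
T, d = 7, 4                                     # 7 demo steps, 4-dim features
demo = rng.normal(size=(T, d))                  # demonstration trajectory
query = rng.normal(size=d)                      # current-state embedding
context, w = soft_attention(query, demo, demo)
```

The same operation applied over per-block features (instead of timesteps) is what lets the network generalize to scenes with more blocks than any training demonstration contained.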
- On Thursday, March 21, 2019
Vestri the robot imagines how to perform tasks
UC Berkeley researchers have developed a robotic learning technology that enables robots to imagine the future of their actions so they can figure out how to manipulate objects they have never...
Deep Learning & Robotics - Prof. Pieter Abbeel
Recorded October 11, 2017. Pieter Abbeel is a professor at UC Berkeley.
RI Seminar: Sergey Levine : Deep Robotic Learning
Sergey Levine, Assistant Professor, UC Berkeley. April 07, 2017. Abstract: Deep learning methods have provided us with remarkably powerful, flexible, and robust solutions in a wide range of passive...
Robot Learning To Walk With Neural Networks
A robot learns to walk with neural networks evolved through genetic mutations.
This Japanese Robot Evolves Based on Its Surroundings
Oct. 24 -- Japan has a unique fascination with androids and the quest to make robots more like humans. One of the country's most original thinkers in this area is Professor Takashi Ikegami...
BRETT the Robot learns to put things together on his own
Full Story: UC Berkeley researchers have developed algorithms that enable robots to learn motor...
Deep Learning for Robotics - Prof. Pieter Abbeel - NIPS 2017 Keynote
Pieter Abbeel delivers his keynote: Deep Learning for Robotics, at NIPS 2017.
Mobile Robot Navigation with Deep Reinforcement Learning
The algorithm was based on the following paper:
Robot Control with Distributed Deep Reinforcement Learning
Demonstration of distributed deep reinforcement learning in simulated racing-car driving and in the control of physical robots.
What's new, Atlas?
What have you been up to lately, Atlas?