AI News: Meet the Amazing Robots That Will Compete in the DARPA Robotics Challenge

The press release sums it up nicely: 'over the next two years, teams will compete to develop and put to the test hardware and software designed to enable robots to assist humans in emergency response when a disaster strikes.'

The first half of this is the hardware: DARPA is promising that an 'advanced variation' of ATLAS (which is what the above picture is showing) will be ready to go by June of 2013, and will be provided to the advancing Track B and C teams (see our previous post on the DRC for more details on the tracks).

Over time, [the Simulator] will be increasingly populated with models of robots, perception sensors and field environments, and function as a cloud-based, real-time, operator-interactive virtual test bed that uses physics-based models of inertia, actuation, contact and environment dynamics.

The value of a cloud-based simulator is that it gives talent from any location a common space to train, design, test and collaborate on ideas without the need for expensive hardware and prototyping.

Remember, the DRC Simulator will be the focus of the Track B (funded) and Track C (unfunded) teams, and whoever does the best will receive one of six ATLAS robots after the first virtual challenge event to continue on to compete with real hardware.

The Guardian robot will expand its Exoskeleton (XOS) concept, introducing innovative technologies such as large range of motion, high specific torque/power actuators and a rapidly modulated fluid supply for overall power efficiency.

Virginia Tech proposed to develop THOR, a Tactical Hazardous Operations Robot, which will be state-of-the-art, light, agile and resilient with perception, planning and human interface technology that infers a human operator’s intent, allowing seamless, intuitive control across the autonomy spectrum.

RoboSimian will use its four general-purpose limbs and hands, capable of both mobility and manipulation, to achieve passively stable stances, create multi-point anchored connections to supports such as ladders, railings and stair treads, and brace itself during forceful manipulation operations.

We're really hoping that all of these teams will be as open as possible as they go about building and testing their robots, because the simultaneous development of seven platforms of this size and complexity is, as far as we know, totally unprecedented in the robotics world.


Robotics is an interdisciplinary branch of engineering and science that includes mechanical engineering, electrical engineering, computer science, and others.

The concept of creating machines that can operate autonomously dates back to classical times, but research into the functionality and potential uses of robots did not grow substantially until the 20th century.[1] Throughout history, it has been frequently assumed that robots will one day be able to mimic human behavior and manage tasks in a human-like fashion.

Robots are widely used in manufacturing, assembly, packing and packaging, mining, transport, earth and space exploration, surgery, weaponry, laboratory research, safety, and the mass production of consumer and industrial goods.[5] There are many types of robots;

They are used in many different environments and for many different purposes; although very diverse in application and form, they all share basic similarities in their construction. As more and more robots are designed for specific tasks, this method of classification becomes more relevant.

A flexure is designed as part of the motor actuator to improve safety and provide robust force control, energy efficiency, and shock absorption (mechanical filtering), while reducing excessive wear on the transmission and other mechanical components.

It has been used in various robots, particularly advanced manufacturing robots[33] and walking humanoid robots.[34] Pneumatic artificial muscles, also known as air muscles, are special tubes that expand (typically by up to 40%) when air is forced inside them.

They have been used for some small robot applications.[38][39] Electroactive polymers (EAPs or EPAMs) are a plastic material that can contract substantially (up to 380% activation strain) under electricity, and have been used in the facial muscles and arms of humanoid robots,[40] and to enable new robots to float,[41] fly, swim or walk.[42] Recent alternatives to DC motors are piezo motors or ultrasonic motors.

The advantages of these motors are nanometer resolution, speed, and available force for their size.[44] These motors are already available commercially, and being used on some robots.[45][46] Elastic nanotubes are a promising artificial muscle technology in early-stage experimental development.

Recent research has developed a tactile sensor array that mimics the mechanical properties and touch receptors of human fingertips.[48][49] The sensor array is constructed as a rigid core surrounded by conductive fluid contained by an elastomeric skin.

Some have a fixed manipulator which cannot be replaced, while a few have one very general-purpose manipulator, for example, a humanoid hand.[53] Learning how to manipulate a robot often requires close feedback between the human and the robot, although there are several methods for remote manipulation of robots.[54] One of the most common effectors is the gripper.

Fingers can, for example, be made of a chain with a metal wire run through it.[55] Hands that resemble and work more like a human hand include the Shadow Hand and the Robonaut hand.[56] Hands that are of mid-level complexity include the Delft hand.[57][58] Mechanical grippers can come in various types, including friction and encompassing jaws.

Some advanced robots are beginning to use fully humanoid hands, like the Shadow Hand, MANUS,[60] and the Schunk hand.[61] These are highly dexterous manipulators, with as many as 20 degrees of freedom and hundreds of tactile sensors.[62] For simplicity, most mobile robots have four wheels or a number of continuous tracks.

Balancing robots generally use a gyroscope to detect how much a robot is falling and then drive the wheels proportionally in the same direction, to counterbalance the fall at hundreds of times per second, based on the dynamics of an inverted pendulum.[63] Many different balancing robots have been designed.[64] While the Segway is not commonly thought of as a robot, it can be thought of as a component of a robot; when used as such, Segway refers to the platform as the RMP (Robotic Mobility Platform).
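The balancing loop described above, drive the wheels in the direction of the fall, proportionally to how far and how fast the robot is tipping, can be sketched as a simple proportional-derivative (PD) controller. The gains and the single-axis model below are hypothetical, a minimal sketch rather than any particular robot's controller.

```python
# Minimal PD sketch of an inverted-pendulum balancing loop.
# Gains kp/kd are illustrative, not tuned for any real robot.

def wheel_command(tilt_angle, tilt_rate, kp=25.0, kd=2.0):
    """Wheel torque command for a two-wheeled balancing robot.

    tilt_angle: estimated lean from vertical (rad), forward positive
    tilt_rate:  gyroscope reading (rad/s)
    Returns a torque command; a positive value drives the wheels
    forward, i.e. back under the falling robot, countering the lean.
    """
    return kp * tilt_angle + kd * tilt_rate

# Upright and still: no correction needed.
print(wheel_command(0.0, 0.0))  # -> 0.0
# Leaning 0.1 rad forward and still falling: drive forward.
print(wheel_command(0.1, 0.3))  # -> 3.1
```

In a real robot this function would run at the hundreds of hertz the text mentions, with the tilt angle estimated by fusing gyroscope and accelerometer readings.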

Several one-wheeled balancing robots have been designed recently, such as Carnegie Mellon University's 'Ballbot' that is the approximate height and width of a person, and Tohoku Gakuin University's 'BallIP'.[66] Because of the long, thin shape and ability to maneuver in tight spaces, they have the potential to function better than other robots in environments with people.[67] Several attempts have been made in robots that are completely inside a spherical ball, either by spinning a weight inside the ball,[68][69] or by rotating the outer shells of the sphere.[70][71] These have also been referred to as an orb bot[72] or a ball bot.[73][74] Using six wheels instead of four wheels can give better traction or grip in outdoor terrain such as on rocky dirt or grass.

There has been much study of human-inspired walking, such as at the AMBER lab, which was established in 2008 by the Mechanical Engineering Department at Texas A&M University.[76] Many other robots have been built that walk on more than two legs, as these robots are significantly easier to construct.[77][78] Walking robots can be used on uneven terrain, where they can provide better mobility and energy efficiency than other locomotion methods.

The robot's onboard computer tries to keep the total inertial forces (the combination of Earth's gravity and the acceleration and deceleration of walking), exactly opposed by the floor reaction force (the force of the floor pushing back on the robot's foot).

In this way, the two forces cancel out, leaving no moment (force causing the robot to rotate and fall over).[79] However, this is not exactly how a human walks, and the difference is obvious to human observers, some of whom have pointed out that ASIMO walks as if it needs the lavatory.[80][81][82] ASIMO's walking algorithm is not static, and some dynamic balancing is used (see below).
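The force-balance idea behind this Zero Moment Point approach can be made concrete with the standard point-mass approximation: the ZMP is the point on the floor where the ground reaction force produces no tipping moment, and the controller tries to keep it inside the support foot. The numbers below are illustrative, not taken from ASIMO.

```python
# Point-mass sketch of the Zero Moment Point (ZMP).
# All values are illustrative.

G = 9.81  # gravitational acceleration, m/s^2

def zmp_x(x_com, z_com, xdd_com, zdd_com=0.0):
    """Horizontal ZMP position for a single point mass.

    x_com, z_com:     centre-of-mass position (m)
    xdd_com, zdd_com: centre-of-mass acceleration (m/s^2)
    The ZMP shifts away from the point under the centre of mass
    whenever the robot accelerates horizontally.
    """
    return x_com - z_com * xdd_com / (zdd_com + G)

# Standing still: the ZMP sits directly under the centre of mass.
print(zmp_x(0.0, 0.8, 0.0))  # -> 0.0
# Accelerating forward at 1 m/s^2 shifts the ZMP backwards.
print(zmp_x(0.0, 0.8, 1.0))  # -> about -0.082
```

Keeping this computed point within the foot's contact polygon is what "the two forces cancel out, leaving no moment" amounts to in practice.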

A more advanced way for a robot to walk is by using a dynamic balancing algorithm, which is potentially more robust than the Zero Moment Point technique, as it constantly monitors the robot's motion and places the feet in order to maintain stability.[87] This technique was recently demonstrated by Anybots' Dexter robot,[88] which is so stable that it can even jump.[89] Another example is the TU Delft Flame.

Mimicking the way real snakes move, these robots can navigate very confined spaces, meaning they may one day be used to search for people trapped in collapsed buildings.[93] The Japanese ACM-R5 snake robot[94] can even navigate both on land and in water.[95]

One hybrid robot has four legs, with unpowered wheels, which can either step or roll.[96] Another robot, Plen, can use a miniature skateboard or roller-skates, and skate across a desktop.[97] Several different approaches have been used to develop robots that have the ability to climb vertical surfaces.

Therefore, many researchers studying underwater robots would like to copy this type of locomotion.[102] Notable examples are the Essex University Computer Science Robotic Fish G9,[103] and the Robot Tuna built by the Institute of Field Robotics, to analyze and mathematically model thunniform motion.[104] The Aqua Penguin,[105] designed and built by Festo of Germany, copies the streamlined shape and propulsion by front 'flippers' of penguins.

It was the first robotic fish capable of outperforming real carangiform fish in terms of average maximum velocity (measured in body lengths/second) and endurance, the duration that top speed is maintained.[106] This build attained swimming speeds of 11.6 BL/s (i.e. 3.7 m/s).[107] The first build, iSplash-I (2014), was the first robotic platform to apply a full-body length carangiform swimming motion, which was found to increase swimming speed by 27% over the traditional approach of a posterior confined waveform.[108] Sailboat robots have also been developed in order to make measurements at the surface of the ocean.

Interpreting the continuous flow of sounds coming from a human, in real time, is a difficult task for a computer, mostly because of the great variability of speech.[110] The same word, spoken by the same person, may sound different depending on local acoustics, volume, the previous word, whether or not the speaker has a cold, and so on.

It becomes even harder when the speaker has a different accent.[111] Nevertheless, great strides have been made in the field since Davis, Biddulph, and Balashek designed the first 'voice input system' which recognized 'ten digits spoken by a single user with 100% accuracy' in 1952.[112] Currently, the best systems can recognize continuous, natural speech, up to 160 words per minute, with an accuracy of 95%.[113] Other hurdles exist when allowing the robot to use voice for interacting with humans.

For social reasons, synthetic voice proves suboptimal as a communication medium,[114] making it necessary to develop the emotional component of robotic voice through various techniques.[115][116] One can imagine, in the future, explaining to a robot chef how to make a pastry, or asking directions from a robot police officer.

It is likely that gestures will make up a part of the interaction between humans and robots.[117] A great many systems have been developed to recognize human hand gestures.[118] Facial expressions can provide rapid feedback on the progress of a dialog between two humans, and soon may be able to do the same for humans and robots.

Robotic faces have been constructed by Hanson Robotics using their elastic polymer called Frubber, allowing a large number of facial expressions due to the elasticity of the rubber facial coating and embedded subsurface motors (servos).[119] The coating and servos are built on a metal skull.

Likewise, robots like Kismet and the more recent addition, Nexi,[120] can produce a range of facial expressions, allowing them to have meaningful social exchanges with humans.[121] Artificial emotions can also be generated, composed of a sequence of facial expressions and/or gestures.

Researchers use this method both to create better robots,[129] and to explore the nature of evolution.[130] Because the process often requires many generations of robots to be simulated,[131] this technique may be run entirely or mostly in simulation, then tested on real robots once the evolved algorithms are good enough.[132] Currently, there are about 10 million industrial robots toiling around the world, and Japan has the highest density of robots in its manufacturing industry.

The study of motion can be divided into kinematics and dynamics.[133] Direct kinematics refers to the calculation of end effector position, orientation, velocity, and acceleration when the corresponding joint values are known.
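Direct kinematics, computing where the end effector is from known joint values, can be made concrete with a hypothetical two-link planar arm; the link lengths below are illustrative.

```python
import math

# Direct (forward) kinematics of a hypothetical two-link planar arm:
# given the two joint angles, compute the end-effector position.

def forward_kinematics(theta1, theta2, l1=1.0, l2=1.0):
    """End-effector (x, y) of a 2-link planar arm.

    theta1: shoulder angle measured from the x-axis (rad)
    theta2: elbow angle relative to the first link (rad)
    l1, l2: link lengths (m), illustrative defaults
    """
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y

# Arm fully extended along the x-axis:
print(forward_kinematics(0.0, 0.0))          # -> (2.0, 0.0)
# Elbow bent 90 degrees:
print(forward_kinematics(0.0, math.pi / 2))  # -> about (1.0, 1.0)
```

The inverse problem, finding joint angles that reach a desired end-effector pose, is generally harder, since it can have zero, one, or many solutions.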

Robotics engineers design robots, maintain them, develop new applications for them, and conduct research to expand the potential of robotics.[134] Robots have become a popular educational tool in some middle and high schools, particularly in parts of the USA,[135] as well as in numerous youth summer camps, raising interest in programming, artificial intelligence, and robotics among students.

First-year computer science courses at some universities now include programming of a robot in addition to traditional software engineering-based coursework.[54] Universities offer bachelor's, master's, and doctoral degrees in the field of robotics.[136] Vocational schools offer robotics training aimed at careers in robotics.

As factories increase their use of robots, the number of robotics-related jobs has been observed to rise steadily.[139] The employment of robots in industries has increased productivity and efficiency and is typically seen as a long-term investment.

A discussion paper drawn up by EU-OSHA highlights how the spread of robotics presents both opportunities and challenges for occupational safety and health (OSH).[142] The greatest OSH benefit stemming from the wider use of robotics should be the substitution of robots for people working in unhealthy or dangerous environments.

In space, defence, security, or the nuclear industry, but also in logistics, maintenance, and inspection, autonomous robots are particularly useful in replacing human workers performing dirty, dull or unsafe tasks, thus avoiding workers' exposures to hazardous agents and conditions and reducing physical, ergonomic and psychosocial risks.

In the future, many other highly repetitive, risky or unpleasant tasks will be performed by robots in a variety of sectors like agriculture, construction, transport, healthcare, firefighting or cleaning services.[143] Despite these advances, there are certain skills to which humans will be better suited than machines for some time to come and the question is how to achieve the best combination of human and robot skills.

This need to combine optimal skills has resulted in collaborative robots and humans sharing a common workspace more closely and led to the development of new approaches and standards to guarantee the safety of the 'man-robot merger'.

Become A Robotics Software Engineer

The field of robotics is growing at an incredible rate, and demand for software engineers with the right skills far exceeds the current supply.

Expert instructors, personalized project reviews, and exclusive hiring opportunities are hallmarks of this program. In collaboration with the NVIDIA Deep Learning Institute, one of the most exciting and innovative companies in the world, we have built an unrivalled curriculum that offers the most cutting-edge learning experience currently available.