Bots_Alive Brings Sophisticated Brains to Cheap Robots
- On Sunday, February 11, 2018
It’s a hard problem: an interesting robot needs intelligence and creative autonomy, and intelligence and creative autonomy are generally not compatible with the robot also being cheap.
Bots_Alive turns a Hexbug Spider into a fully autonomous robotic critter by replacing the infrared controller with an IR blaster plugged into your phone, and using computer vision to localize a fiducial sticker placed on the robot’s head.
The advantage of doing so is obvious: we’re all carrying around extraordinarily powerful little computers with vision systems and all kinds of other hardware packed into them, so why not borrow that hardware and use it for robot control? With this in mind, Bots_Alive picked as its platform what has to be one of the cheapest and most common robot toys out there, the $25 Hexbug Spider.
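The phone-side control loop described above can be sketched roughly as follows. This is a minimal illustration under stated assumptions, not bots_alive's actual code: the command names are hypothetical, and the simple turn-then-drive policy stands in for whatever steering logic the app really uses on top of the fiducial tracker.

```python
import math

# Hypothetical command names; the Hexbug Spider's actual IR command
# protocol is not documented in this article.
FORWARD, LEFT, RIGHT = "forward", "rotate_left", "rotate_right"

def choose_command(robot_xy, heading_rad, target_xy, turn_tolerance=0.3):
    """Pick a discrete drive command from the robot's tracked pose.

    robot_xy / target_xy: (x, y) positions from the vision system
    (e.g., derived from the fiducial sticker on the robot's head).
    heading_rad: robot heading in radians, from the fiducial's
    orientation. Rotates until roughly facing the target, then drives.
    """
    dx = target_xy[0] - robot_xy[0]
    dy = target_xy[1] - robot_xy[1]
    bearing = math.atan2(dy, dx)
    # Signed angular error, wrapped into (-pi, pi].
    error = (bearing - heading_rad + math.pi) % (2 * math.pi) - math.pi
    if abs(error) <= turn_tolerance:
        return FORWARD
    return LEFT if error > 0 else RIGHT
```

Each chosen command would then be encoded and sent through the IR blaster plugged into the phone's audio jack or port, closing the sense-decide-act loop entirely on the phone.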
Before getting into robotics, bots_alive founder Brad Knox earned an undergraduate degree in psychology, which, he says, gave him a “different perspective” for designing interactive robots: “[At MIT], sometimes the grad students and the postdocs would get together and we’d think about why we don’t have robots that we ourselves want to interact with.
Each of us had our own personal dream of what our robot companion would be like, and so we’d get together and we’d talk about why is that not here.” The philosophy on robot companions that Knox ended up with was informed by a few different things.
These things fit together because dogs, in a lot of ways, typify what could work in a social interactive robot right now: no speech, not really designed to perform “useful tasks,” but friendly, expressive, and somehow consistently entertaining.
Knox says he wants to build “simple creatures that feel very alive, that can maintain an illusion of life and the magic that comes with that.” To do that, he’s relying on his MIT research on learning from demonstration.
“It was somewhat of an open question whether it would result in effective behavior and a good character, and we were pretty surprised by how well it worked.” Knox’s research involved children interacting with MIT’s Dragonbot.
'If you wish to build a ship, do not divide your people into teams and send them to the forest to cut wood. Instead, teach them to yearn for the vast and endless sea.' (Often attributed to Antoine de Saint-Exupéry.) There are many excellent robot products that aim to teach children to code.
We see the curiosity-driven play of bots_alive as the prior step: building in children a passion for the human-centered design, technology, engineering, and math that drive the robots.
Our goal is to make robots whose behavior appears more alive than that of any machine you’ve interacted with.
Suggested pledge levels and rewards:
When we reach $50,000, sumo wrestling between two robots is unlocked.
The robot's organic behavior rewards the kids for their designs, since the robot appears to struggle, falter, and sometimes succeed at the tasks.
Kids cite the robots' unpredictability and expressivity as a key reason why building challenges are so much fun.
See for yourself: Our companion app controls the robots, but we designed bots_alive for kids to focus on the real world.
We'd like to thank the following people or groups who have provided repeated mentorship, significant collaboration, or other support:
Playful immersion in advanced STEM
bots_alive's vision is to develop simple, animal-like robots that seem alive.
We are based in Austin and are led by Dr. Bradley Knox, who researched human-robot interaction and artificial intelligence at the MIT Media Lab and at UT Austin after studying psychology at Texas A&M.
Building character AI through machine learning
Much like motion capture for scripted animation, this new technique may revolutionize how interactive characters are created, through observation of authentic human-generated behavior.
For bots_alive, the teleoperator watches a screen display of what the system sees and pushes buttons to send commands of forward, back, left, right, forward-right, forward-left, back-right, or back-left.
From these teleoperation sessions, we gather training data containing (a) teleoperation commands and (b) information about the context in which each command was given.
A category of machine learning called supervised learning is applied to create a model of the puppeteer, one that effectively answers the question, “What would the puppeteer do in this context?” All of that happens during development.
Consider whether your control in these moments is different, both at a high level and in tiny movements, than what the character would do if you had simply written down a set of rules for it to act by.
Our method is an application of what’s called learning from demonstration. For times the teleoperation needs to include human interaction with the character, we keep the teleoperation secret, to keep the human interaction partner from changing his or her behavior from what would be done with an autonomous character.
Rather, it’s an iterative process:
- demonstrate,
- apply machine learning to the demonstration data set,
- observe behavior from the learned model,
- create more demonstrations in the contexts where the robot isn’t acting satisfactorily,
- apply machine learning again, and so on.
Throughout these iterations, the teleoperator and algorithm designer also reflect on what contextual information still needs to be encoded to improve learning, determine what context cannot be encoded and therefore should be ignored by the teleoperator, and find fun and more delightful behavior than originally puppeteered.
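The iterative process above can be sketched as a simple loop. The function names here are hypothetical placeholders for the human-in-the-loop steps (teleoperating in weak contexts, fitting the model, judging the resulting behavior); this is an outline of the workflow, not bots_alive's implementation.

```python
def iterate_lfd(initial_demos, learn, evaluate, collect_more_demos,
                max_rounds=5):
    """Sketch of the demonstrate / learn / observe loop.

    learn: fits a model to (context, command) demonstration pairs.
    evaluate: runs the model and returns the contexts where its
        behavior is unsatisfactory (empty when behavior is good).
    collect_more_demos: gathers new teleoperated demonstrations
        in exactly those weak contexts.
    """
    demos = list(initial_demos)
    model = learn(demos)
    for _ in range(max_rounds):
        weak_contexts = evaluate(model)
        if not weak_contexts:
            break  # satisfactory everywhere observed; stop iterating
        demos += collect_more_demos(weak_contexts)
        model = learn(demos)
    return model
```

The loop terminates either when the evaluator finds no remaining weak contexts or after a fixed budget of demonstration rounds.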
Compared to a no-robot condition, the teleoperated robot and the autonomous robot programmed by machine learning elicited similar behavior from their human interaction partners.
If you’re familiar with the Turing Test, the popularly known test of the effectiveness of an artificial intelligence, you might recognize that the MIT study constitutes passing a certain narrow and social Turing Test.
It combined algorithmic optimization of its combat effectiveness (which can result in markedly unnatural behavior that still achieves the specified goal) with selective replay of recorded human behavior.
Bots_alive uses your smartphone to drive artificially intelligent spider robots
Artificial intelligence is all the rage in robotics these days, and for good reason: Properly implemented, it has the potential to program ‘bots on the fly.
AI programmers typically give robots personalities with decision trees, Knox explained: hand-written rules dictating how they behave in given situations.
He placed the robot near a handful of blue blocks and red blocks, and defined two simple rules: The robot was to move toward blue blocks and perceive red blocks as barriers.
The next scenario was a little more challenging: An unbreakable barrier of red blocks encircling the robot and a blue block just beyond reach.
Impressively, the robot broke through: it inched backward and forward until it managed to create an opening in the barrier through which it could escape.
In play tests, users have placed blue blocks at the top of stacked red blocks, Knox said, and the robot has knocked them over.
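The two hand-coded rules in the demo above (seek blue blocks, treat red blocks as barriers) can be sketched as a toy policy. This is illustrative only; block coordinates and the clearance radius are assumptions, and the shipped robot's character comes from the machine-learned puppeteer models, not rules like these.

```python
import math

def seek_blue(robot_xy, blocks, clearance=2.0):
    """Toy version of the two demo rules: head for the nearest blue
    block, and report which red blocks sit within the clearance
    radius so a steering layer can treat them as barriers.

    blocks: list of (color, (x, y)) tuples from the vision system.
    Returns (goal, obstacles); goal is None if no blue block exists.
    """
    def dist(p):
        return math.hypot(p[0] - robot_xy[0], p[1] - robot_xy[1])
    blues = [pos for color, pos in blocks if color == "blue"]
    goal = min(blues, key=dist) if blues else None
    obstacles = [pos for color, pos in blocks
                 if color == "red" and dist(pos) < clearance]
    return goal, obstacles
```

Behavior like squeezing through a gap in the red ring then emerges from how the steering layer trades off the goal against the nearby obstacles, rather than from any explicit "escape" rule.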
Over-the-air software updates will enable new features like nonverbal signs of social interaction between robots, Knox said.
If the Kickstarter campaign reaches its first stretch goal, users will be able to pit two robots against each other in a robot battle to the death.
“We don’t have explicit plans, but one of the main things that we’re looking forward to in the Kickstarter campaign is what people would value.
For $60, you get the full kit, including the Hexbug Spider, decals, five vision blocks, an IR blaster, and the mobile app.
- On Saturday, February 29, 2020
bots_alive Intelligent and Autonomous Hexbug: Behold the Future
bots_alive | robots with playful artificial intelligence. Playful immersion in advanced STEM. Robots solve kid-built mazes with lifelike artificial intelligence.
These Autonomous Robot Spiders Can Learn
These robot spiders' unpredictable behavior is based on humans.