Revolutionizing everyday products with artificial intelligence

“But the ability to replicate a human brain’s ability to learn is incredibly difficult.” Kim specializes in machine learning, which relies on algorithms to teach computers how to learn like a human brain.

While the phrase “machine learning” often conjures up science fiction typified in shows like 'Westworld' or 'Battlestar Galactica,' smart systems and devices are already pervasive in the fabric of our daily lives.

Rather than building the sentient robots romanticized in popular culture, these researchers are working on projects that improve everyday life and make humans safer, more efficient, and better informed.

Making portable devices smarter

Jeehwan Kim holds up a sheet of paper.

“So, you build artificial neurons and synapses on a small-scale wafer.” The result is a so-called ‘brain-on-a-chip.’ Rather than computing information through binary signaling, Kim’s neural network processes information like an analog device.

In a Nature Materials study published earlier this year, Kim found that when his team made a chip out of silicon germanium, they were able to control the current flowing out of the synapse and reduce variability to 1 percent.

“The potential is limitless – we can integrate this technology in our phones, computers, and robots to make them substantially smarter.”

Making homes smarter

While Kim is working on making our portable products more intelligent, Professor Sanjay Sarma and Research Scientist Josh Siegel hope to integrate smart devices within the biggest product we own: our homes. One evening, Sarma was in his home when one of his circuit breakers kept going off.

If he could equip the AFCI with smart technologies and connect it to the ‘internet of things,’ he could teach the circuit breaker to learn when a product is safe or when it actually poses a fire risk.

“Virus scanners are connected to a system that updates them with new virus definitions over time.” If Sarma and Siegel could embed similar technology into AFCIs, the circuit breakers could detect exactly what product is being plugged in and learn new object definitions over time.

If, for example, a new vacuum cleaner is plugged into the circuit breaker and the power shuts off without reason, the smart AFCI can learn that it’s safe and add it to a list of known safe objects.
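To make this concrete, here is a minimal sketch of how such a learning AFCI might behave, assuming the breaker can extract a coarse electrical signature from whatever is plugged in. The `SmartAFCI` class, its signature features, and the matching rule are hypothetical illustrations, not the researchers' actual design.

```python
from dataclasses import dataclass, field

@dataclass
class SmartAFCI:
    # Map of known-safe electrical signatures to device names. Each
    # signature is a coarse (rms current in amps, noise peak in kHz)
    # pair; these features are invented for illustration.
    known_safe: dict = field(default_factory=dict)

    @staticmethod
    def signature(rms_current: float, noise_khz: float) -> tuple:
        # Quantize readings so repeated plug-ins of the same device match.
        return (round(rms_current, 1), round(noise_khz))

    def on_trip(self, rms_current: float, noise_khz: float) -> bool:
        """Called when the breaker trips; True means power stays off."""
        sig = self.signature(rms_current, noise_khz)
        # A recognized signature is treated as a nuisance trip, not a fault.
        return sig not in self.known_safe

    def confirm_safe(self, name: str, rms_current: float, noise_khz: float):
        # "Teach it how to learn the rules": store a human-confirmed device.
        self.known_safe[self.signature(rms_current, noise_khz)] = name

afci = SmartAFCI()
afci.confirm_safe("vacuum cleaner", 8.2, 120.0)
print(afci.on_trip(8.21, 120.3))  # False: the vacuum is now a known-safe load
```

In a real system the object definitions would be updated over the network, in the same way the virus scanner analogy above describes.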

“You don’t teach these devices all the rules, you teach them how to learn the rules.”

Making manufacturing and design smarter

Artificial intelligence can improve not only how users interact with products, devices, and environments, but also how those products are designed and made.

“Having 3-D printers that learn how to create parts with fewer defects and inspect parts as they make them will be a really big deal — especially when the products you’re making have critical properties such as medical devices or parts for aircraft engines,” Hart explains.

The very process of designing the structure of these parts can also benefit from intelligent software.

“The goal is to enable effective collaboration between intelligent tools and human designers.” In a recent study, Yang and graduate student Edward Burnell tested a design tool with varying levels of automation.

“You can think of all kinds of applications — medical, health care, factories.” Kim sees an opportunity to eventually connect his research with the physical neural network his colleague Jeehwan Kim is working on.

“Jeehwan’s neural network hardware could possibly enable that someday.” Combining the power of a portable neural network with a robot capable of skillfully navigating its surroundings could open up a new world of possibilities for human and AI interaction.

Whether it’s using face and handwriting recognition to protect our information, tapping into the internet of things to keep our homes safe, or helping engineers build and design more efficiently, the benefits of AI technologies are pervasive.

Engineers design artificial synapse for 'brain-on-a-chip' hardware

There are more than 100 trillion synapses that mediate neuron signaling in the brain, strengthening some connections while pruning others, in a process that enables the brain to recognize patterns, remember facts, and carry out other learning tasks at lightning speeds.

Instead of carrying out computations based on binary, on/off signaling, like digital chips do today, the elements of a 'brain on a chip' would work in an analog fashion, exchanging a gradient of signals, or 'weights,' much like neurons that activate in various ways depending on the type and number of ions that flow across a synapse.
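As a rough illustration of the difference, assuming nothing about the actual chip, the toy Python below contrasts all-or-nothing binary signaling with an analog weighted sum:

```python
import numpy as np

# Four incoming connections; in binary signaling each either fires or not.
inputs = np.array([1, 0, 1, 1])
binary_out = int(inputs.any())  # all-or-nothing output: 0 or 1

# In analog signaling, each "synapse" scales its input by a continuous
# weight (its connection strength), so the output is a graded value.
weights = np.array([0.9, 0.2, 0.55, 0.1])
analog_out = float(inputs @ weights)  # weighted sum: 0.9 + 0.55 + 0.1 = 1.55

print(binary_out, analog_out)
```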

Kim says that's because most switching mediums, made of amorphous materials, have unlimited possible paths through which ions can travel -- a bit like Pachinko, a mechanical arcade game that funnels small steel balls down through a series of pins and levers, which act to either divert or direct the balls out of the machine.

A perfect mismatch

Instead of using amorphous materials as an artificial synapse, Kim and his colleagues looked to single-crystalline silicon, a defect-free conducting material made from atoms arranged in a continuously ordered alignment.

They applied voltage to each synapse and found that all synapses exhibited more or less the same current, or flow of ions, with about a 4 percent variation between synapses -- a much more uniform performance compared with synapses made from amorphous material.
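One way to express that uniformity figure is as a coefficient of variation (standard deviation over mean) across the measured synapse currents. The sketch below uses made-up readings with roughly a 4 percent spread; the numbers are illustrative, not the team's data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up currents (microamps) from 1,000 synapses under the same voltage,
# drawn with ~4 percent spread around a 10 uA mean for illustration.
currents = rng.normal(10.0, 0.4, size=1000)

# Device-to-device variation as a coefficient of variation (std / mean).
variation_pct = 100 * currents.std() / currents.mean()
print(f"synapse-to-synapse variation: {variation_pct:.1f}%")
```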

Writing, recognized

As a final test, Kim's team explored how its device would perform if it were to carry out actual learning tasks -- specifically, recognizing samples of handwriting, which researchers consider to be a first practical test for neuromorphic chips.

Kim and his colleagues ran a computer simulation of an artificial neural network consisting of three sheets of neural layers connected via two layers of artificial synapses, the properties of which they based on measurements from their actual neuromorphic chip.

They fed into their simulation tens of thousands of samples from a handwriting recognition dataset commonly used by neuromorphic designers, and found that their neural network hardware recognized handwritten samples 95 percent of the time, compared to the 97 percent accuracy of existing software algorithms.
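A minimal sketch of that kind of simulation is shown below, assuming a three-layer network (input, hidden, output) joined by two weight matrices, with each stored weight jittered by about 4 percent to mimic the measured device variation. Random stand-in data replaces the handwriting dataset so the snippet stays self-contained; the hidden-layer size and noise model are assumptions, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

# Three neural layers (784-unit input, hidden, 10-unit output) joined by
# two weight matrices, mirroring the simulated architecture. The hidden
# size of 100 is a guess; 784 and 10 are the MNIST image and class sizes.
W1 = rng.normal(0.0, 0.1, (784, 100))
W2 = rng.normal(0.0, 0.1, (100, 10))

def forward(x, w1, w2):
    h = np.tanh(x @ w1)   # first synapse layer feeding the hidden neurons
    return h @ w2         # second synapse layer feeding the output neurons

# Emulate ~4 percent device variation: each stored weight deviates
# multiplicatively from its target, as a fabricated synapse would.
def with_device_noise(w, variation=0.04):
    return w * rng.normal(1.0, variation, w.shape)

x = rng.random((5, 784))  # random stand-ins for flattened 28x28 digits
ideal = forward(x, W1, W2)
noisy = forward(x, with_device_noise(W1), with_device_noise(W2))
print(np.abs(ideal - noisy).mean())  # output shift caused by hardware noise
```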

Looking beyond handwriting, Kim says the team's artificial synapse design will enable much smaller, portable neural network devices that can perform complex computations that currently are only possible with large supercomputers.

MIT researchers say new chip design takes us closer to computers that work like our brains

Advances in machine learning have moved at a gallop in recent years, but the computer processors these programs run on have barely changed.

To remedy this, companies have been re-tuning existing chip architecture to fit the demands of AI, but on the cutting edge of research, an entirely new approach is taking shape: remaking processors so they work more like our brains.

This is called “neuromorphic computing,” and scientists from MIT this week said they’ve made significant progress in getting this new breed of chips up and running.

Their research, published in the journal Nature Materials, could eventually lead to processors that run machine learning tasks with lower energy demands — up to 1,000 times less.

This means that instead of sending information in a series of on/off electrical bursts, they vary the intensity of these signals — just like our brain's synapses do.

Using it, they were able to train a neural network that could recognize handwriting (a standard training task for new forms of AI) with 95 percent accuracy.

Kurzweil Accelerating Intelligence

MIT engineers have designed a new artificial synapse made from silicon germanium that can precisely control the strength of an electric current flowing across it.

The idea is to apply a voltage across layers that would cause ions (electrically charged atoms) to move in a switching medium (synapse-like space) to create conductive filaments in a manner that’s similar to how the “weight” (connection strength) of a synapse changes.
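As a toy model of that behavior, the sketch below treats a synapse as a single conductance that each programming pulse nudges up or down within physical limits; the update rule and constants are illustrative assumptions, not the device physics.

```python
# Toy filamentary synapse: each voltage pulse grows or shrinks the
# conductive filament, nudging the device's conductance ("weight") up or
# down within physical limits. The rule and constants are assumptions.
def apply_pulse(conductance, voltage, rate=0.05, g_min=0.1, g_max=1.0):
    g = conductance + rate * voltage  # positive pulses strengthen the weight
    return max(g_min, min(g_max, g))

g = 0.5
for v in (+1, +1, -1, +1):  # a short train of programming pulses
    g = apply_pulse(g, v)
print(f"final weight: {g:.2f}")  # 0.5 + 0.05*(1+1-1+1) = 0.60
```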

Instead of carrying out computations based on binary, on/off signaling, like current digital chips, the elements of a “brain on a chip” would work in an analog fashion, exchanging a gradient of signals, or “weights,” much like neurons that activate in various ways depending on the type and number of ions that flow across a synapse.

Silicon germanium’s lattice is slightly larger than that of silicon, and Kim found that together, the two perfectly mismatched materials could form a funnel-like dislocation, creating a single path through which ions can predictably flow. “This is the most uniform device we could achieve, which is the key to demonstrating artificial neural networks,” Kim says.

Such chips would consist of artificial “neurons” connected to other “neurons” via filament-based artificial “synapses.”

They ran a computer simulation of an artificial neural network consisting of three sheets of neural layers connected via two layers of artificial synapses, based on measurements from their actual neuromorphic chip.

The handwriting dataset (MNIST) contains 60,000 training images and 10,000 testing images.

Although several types of architecture combining memory cells and transistors have been used to demonstrate artificial synaptic arrays, they usually present limited scalability and high power consumption.

Transistor-free analog switching devices may overcome these limitations, yet the typical switching process they rely on—formation of filaments in an amorphous medium—is not easily controlled and hence hampers the spatial and temporal reproducibility of the performance.

Here, we demonstrate analog resistive switching devices that possess desired characteristics for neuromorphic computing networks with minimal performance variations using a single-crystalline SiGe layer epitaxially grown on Si as a switching medium.

Direct Neural Interface & DARPA - Dr Justin Sanchez

Pruning Makes Faster and Smaller Neural Networks | Two Minute Papers #229

The paper "Learning to Prune Filters in Convolutional Neural Networks" is available here: We would like to thank our ..

Implants & Technology -- The Future of Healthcare? Kevin Warwick at TEDxWarwick

Kevin Warwick is Professor of Cybernetics at the University of Reading, where he carries out research in artificial intelligence, control, robotics and cyborgs.

Introduction to Computer-Aided Diagnosis in Medical Imaging (Radiology)

Computer-Aided Diagnosis (CAD) refers to the use of machine ..

How AI beat the best poker players in the world | Engadget R+D

"That was anticlimactic," Jason Les said with a smirk, getting up from his seat. Unlike nearly everyone else in Pittsburgh's Rivers Casino, Les had just played his ...

SLAC Dataset From MIT and Facebook | Two Minute Papers #227

The paper "SLAC: A Sparsely Labeled Dataset for Action Classification and Localization" is available here: ..

[Paper Day 2018] NestedNet: Learning Nested Sparse Structures in Deep Neural Networks

NestedNet: Learning Nested Sparse Structures in Deep Neural Networks (CVPR 2018 Spotlight) Speaker: Eunwoo Kim (SNU) Recently, there have been ...

Human Upper-Body Pose Estimation using Fully Convolutional Network and Joint Heatmap

Seunghee Lee, Jungmo Koo, Hyungjin Kim, Kwangyik Jung, and Hyun Myung, "A Robust Estimation of 2D Human Upper-body Poses using Fully Convolutional ...

Speech Technologies and Platforms - Present and Future Evolutions

Google Tech Talks March 3, 2008 ABSTRACT At the end of the last century, the landscape of speech applications was abruptly changed due to the convergence ...