AI News, The world's first demonstration of spintronics-based artificial intelligence


Artificial intelligence, which emulates the brain's information processing to quickly execute complex tasks such as image recognition and weather prediction, has attracted growing attention and has already been partly put to practical use.

The spintronic device used here can memorize arbitrary values between 0 and 1 in an analogue manner, unlike conventional magnetic devices, and can therefore perform the learning function that synapses serve in the brain.

Through multiple trials, they confirmed that the spintronic devices have a learning ability: the developed artificial neural network can successfully associate memorized patterns from noisy versions of them, just as the human brain can.
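The associative recall described here, recovering a stored pattern from a noisy input, is the classic behavior of a Hopfield-style network. As a rough illustration only (not the authors' spintronics implementation), a minimal NumPy sketch:

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian learning: store binary (+/-1) patterns in a weight matrix."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)  # no self-connections
    return W / n

def recall(W, state, steps=10):
    """Iteratively update units until the network settles on a stored pattern."""
    for _ in range(steps):
        state = np.sign(W @ state)
        state[state == 0] = 1
    return state

# Store two 16-unit patterns, then recover one from a corrupted version.
patterns = np.array([[1, -1] * 8, [1, 1, -1, -1] * 4])
W = train_hopfield(patterns)

noisy = patterns[0].copy()
noisy[:3] *= -1  # flip 3 of the 16 units ("noise")
print(np.array_equal(recall(W, noisy), patterns[0]))  # → True
```

In the hardware version described above, each weight of `W` would be held as an analogue value in a spintronic device rather than in digital memory.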


How a Toronto professor’s research revolutionized artificial intelligence

Four years ago, in one of those snack-filled micro-kitchens, Jeff Dean, a longtime Google engineer, bumped into Andrew Ng, a Stanford University computer science professor and visiting researcher.

After three days, the researchers came back and ran a series of visualizations on the neural net to see what its strongest impressions were.

(If you don’t think cat videos are important, go ahead and stop watching them.) The team moved out of the Google X labs and quietly began absorbing the world’s tiny cadre of neural network specialists.

He was asked to tinker with Google’s speech recognition algorithms, and he responded by suggesting they gut half their system and replace it with a neural net.

Jaitly’s program outperformed systems that had been fine-tuned for years, and the results of his work found their way into Android, Google’s mobile operating system —

The U of T model took the error rate of the best-performing algorithms to date, and the error rate of the average human, and snapped the difference in half like a dry twig.

In December 2013, researchers from a small British startup called DeepMind Technologies posted a preprint of a research paper that showed how it had taught a neural net to play, and beat, Atari games.

By January, Google had paid a reported $400 million for DeepMind, a company with an impressive roster of deep learning talent and no products to speak of.

Deep learning became one of the hottest trends in tech practically overnight, and industry insiders estimate Google employs half of the world’s experts, if not more.

The man who designed it claimed it would eventually be able to read and write, and the story said it would be the “first device to think as the human brain …

Rosenblatt was convinced that algorithms could learn as human brains do, and his machine made use of this architecture: like the brain’s web of neurons, information travels through interconnected layers of nodes.
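Rosenblatt's learning rule can be sketched in a few lines. This is an illustrative modern reconstruction, not his original Perceptron hardware: weights are nudged toward any example the unit misclassifies.

```python
import numpy as np

def perceptron_train(X, y, epochs=20, lr=1.0):
    """Rosenblatt's rule: adjust weights only on misclassified examples."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            pred = 1 if xi @ w + b > 0 else -1
            if pred != yi:          # learn from mistakes
                w += lr * yi * xi
                b += lr * yi
    return w, b

# Learn logical OR (labels in {-1, +1}), a linearly separable task.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([-1, 1, 1, 1])
w, b = perceptron_train(X, y)
print([1 if x @ w + b > 0 else -1 for x in X])  # → [-1, 1, 1, 1]
```

A single such unit can only draw a straight line through its inputs, which is the limitation that later multi-layer networks with backpropagation overcame.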

But as an undergraduate at Cambridge University, Hinton bopped from discipline to discipline, finding no satisfaction: not in physiology, not in philosophy and certainly not in psychology, though his degree was finally from that department.

Backpropagation made neural nets substantially better at tasks such as recognizing simple shapes and predicting a third word after seeing two.

The Great A.I. Awakening

The ones that got “cat” right get their votes counted double next time — at least when they’re voting for “cat.” They have to prove independently whether they’re also good at picking out dogs and defibrillators, but one thing that makes a neural network so flexible is that each individual unit can contribute differently to different desired outcomes.

The neural network just needs to register enough of a regularly discernible signal somewhere to say, “Odds are, this particular arrangement of pixels represents something these humans keep calling ‘cats.’ ” The more “voters” you have, and the more times you make them vote, the more keenly the network can register even very weak signals.

The neuronal “voters” will recognize a happy cat dozing in the sun and an angry cat glaring out from the shadows of an untidy litter box, as long as they have been exposed to millions of diverse cat scenes.

You just need lots and lots of the voters — in order to make sure that some part of your network picks up on even very weak regularities, on Scottish Folds with droopy ears, for example — and enough labeled data to make sure your network has seen the widest possible variance in phenomena.
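The "votes counted double" idea in the passage above resembles weighted-majority reweighting, as used in boosting. A toy sketch with hypothetical voters, showing how reweighting lets a reliable unit tip later decisions:

```python
# Weighted "voting": units that get the answer right gain influence.
def weighted_vote(votes, weights):
    """Aggregate +/-1 votes by weight; a positive total means "cat"."""
    return 1 if sum(v * w for v, w in zip(votes, weights)) > 0 else -1

def reweight(votes, weights, truth):
    """Double the weight of every voter that agreed with the truth."""
    return [w * 2 if v == truth else w for v, w in zip(votes, weights)]

weights = [1.0, 1.0, 1.0]   # three equally trusted voters
round1  = [1, -1, -1]       # only voter 0 says "cat"; the truth is "cat"
weights = reweight(round1, weights, truth=1)
print(weights)              # → [2.0, 1.0, 1.0]

round2 = [1, -1, 1]
print(weighted_vote(round2, weights))  # → 1 (voter 0's doubled vote tips it)
```

Real neural networks adjust continuous weights by gradient descent rather than doubling them, but the intuition is the same: units that contribute to right answers gain influence over future ones.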

If a machine was asked to identify creditworthy candidates for loans, it might use data like felony convictions, but if felony convictions were unfair in the first place — if they were based on, say, discriminatory drug laws — then the loan recommendations would perforce also be fallible.

What the cat paper demonstrated was that a neural network with more than a billion “synaptic” connections — a hundred times larger than any publicized neural network to that point, yet still many orders of magnitude smaller than our brains — could observe raw, unlabeled data and pick out for itself a high-order human concept.

(The researchers discovered this with the neural-network equivalent of something like an M.R.I., which showed them that a ghostly cat face caused the artificial neurons to “vote” with the greatest collective enthusiasm.) Most machine learning to that point had been limited by the quantities of labeled data.

Stanford researchers create a high-performance, low-energy artificial synapse for neural network computing

For all the improvements in computer technology over the years, we still struggle to recreate the low-energy, elegant processing of the human brain.

Now, researchers at Stanford University and Sandia National Laboratories have made an advance that could help computers mimic one piece of the brain’s efficient design – an artificial version of the space over which neurons communicate, called a synapse.

“It works like a real synapse but it’s an organic electronic device that can be engineered,” said Alberto Salleo, associate professor of materials science and engineering at Stanford and senior author of the paper.

“For many key metrics, it also performs better than anything that’s been done before with inorganics.” The new artificial synapse, reported in the Feb. 20 issue of Nature Materials, mimics the way synapses in the brain learn through the signals that cross them.

Past efforts in this field have produced high-performance neural networks supported by artificially intelligent algorithms but these are still distant imitators of the brain that depend on energy-consuming traditional computer hardware.

“Deep learning algorithms are very powerful but they rely on processors to calculate and simulate the electrical states and store them somewhere else, which is inefficient in terms of energy and time,” said Yoeri van de Burgt, former postdoctoral scholar in the Salleo lab and lead author of the paper.

In other words, unlike a common computer, where you save your work to the hard drive before you turn it off, the artificial synapse can recall its programming without any additional actions or parts.

“More and more, the kinds of tasks that we expect our computing devices to do require computing that mimics the brain because using traditional computing to perform these tasks is becoming really power hungry,” said A.

“We’ve demonstrated a device that’s ideal for running these types of algorithms and that consumes a lot less power.” This device is extremely well suited for the kind of signal identification and classification that traditional computers struggle to perform.

Whereas digital transistors can be in only two states, such as 0 and 1, the researchers successfully programmed 500 states in the artificial synapse, which is useful for neuron-type computation models.
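To see why 500 programmable states matter, one can model the synapse's weight as a value snapped to one of 500 conductance levels instead of two digital ones. This is a numerical illustration of the resolution, not the device physics:

```python
import numpy as np

LEVELS = 500  # conductance states reported for the artificial synapse

def quantize(w, levels=LEVELS):
    """Snap a weight in [0, 1] to the nearest of `levels` conductance states."""
    return np.round(np.clip(w, 0.0, 1.0) * (levels - 1)) / (levels - 1)

# A digital bit allows 2 values; 500 analog states shrink the step size ~250x.
print(len({quantize(w) for w in np.linspace(0, 1, 10_000)}))  # → 500
```

Finer weight steps mean a network's learned values can be stored directly in the device with little rounding error, instead of being simulated on separate digital hardware.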

Research News



The proof-of-concept demonstration in this research is expected to open new horizons in artificial intelligence technology: one that is compact in size while simultaneously achieving fast processing and ultralow power consumption.

Neuromorphic Computing, AI Chips Emulating the Brain with Kelsey Scharnhorst on MIND & MACHINE

Today we explore Artificial Intelligence (AI) through Neuromorphic Computing with computer chips that emulate the biological neurons and synapses in the brain ...

Machine Learning in Neuroscience

How are machine learning and neuroscience related? I'll discuss some of the discoveries in neuroscience that have produced breakthroughs in machine ...

Artificial Intelligence, the History and Future - with Chris Bishop

Chris Bishop discusses the progress and opportunities of artificial intelligence research.

Could We Upload Our Consciousness To A Computer?

Would it ever be possible to one day upload our consciousness to a computer? How would we go about this?

Hybrid Intelligence: Coupling AI and the Human Brain | Edward Boyden bigthink

Edward Boyden is a Hertz Foundation Fellow and recipient of the prestigious Hertz Foundation Grant for graduate study in the applications of the physical, ...

John Searle: "Consciousness in Artificial Intelligence" | Talks at Google

John Searle is the Slusser Professor of Philosophy at the University of California, Berkeley. His Talk at Google is focused on the philosophy of mind and the ...

Andrew Ng: Artificial Intelligence is the New Electricity

On Wednesday, January 25, 2017, Baidu chief scientist, Coursera co-founder, and Stanford adjunct professor Andrew Ng spoke at the Stanford MSx Future ...

MIT Intelligence Quest Launch: The Future of Intelligence Science

James J. DiCarlo, head of the Department of Brain and Cognitive Sciences and the Peter de Florez Professor of Neuroscience, describes the future of ...

Joe Rogan's Mind Is Blown By Biologist Explaining Fungal Intelligence

Joe Rogan talks to Mycologist, Paul Stamets, and has his mind blown when Stamets explains fungal intelligence and the Stoned Ape Hypothesis to him on the ...

MIT Intelligence Quest Launch: How AI Enables the Home to Monitor Our Physical and Mental Health

Dina Katabi, the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science at MIT, describes her research on artificial intelligence and ...