AI News, Benedict Evans

Benedict Evans

Mobile means that, for the first time, pretty much everyone on earth will have a camera, taking vastly more images than were ever taken on film ('How many pictures?').

Then, the image sensor in a phone is more than just a camera that takes pictures - it’s also part of new ways of thinking about mobile UIs and services ('Imaging, Snapchat and mobile'), and part of a general shift in what a computer can do ('From mobile first to mobile native').

Meanwhile, image sensors are part of a flood of cheap commodity components coming out of the smartphone supply chain, that enable all kinds of other connected devices - everything from the Amazon Echo and Google Home to an August door lock or Snapchat Spectacles (and of course a botnet of hacked IoT devices).

When combined with cloud services and, increasingly, machine learning, these are no longer just cameras or microphones but new endpoints or distribution for services - they’re unbundled pieces of apps.

You might train an ‘is there a person in this image?’ neural network in the cloud with a vast image set - but to run it, you can put it on a cheap DSP with a cheap camera, wrap it in plastic and sell it for $10 or $20.
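
As a rough sketch of that cloud-training / cheap-edge-hardware split, the snippet below trains a small 'person / no person' head on top of a pretrained MobileNetV2 and converts it to TensorFlow Lite; MobileNetV2, the dataset name and the file path are illustrative assumptions, not details from the piece.

# Minimal sketch: train a binary "is there a person?" classifier in the cloud,
# then convert it to TensorFlow Lite so it can run on a cheap, camera-equipped device.
# MobileNetV2 and 'person_dataset' are illustrative assumptions.
import tensorflow as tf

base = tf.keras.applications.MobileNetV2(input_shape=(96, 96, 3),
                                         include_top=False, weights="imagenet")
base.trainable = False  # reuse pretrained features, train only a small head

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # person / no person
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(person_dataset, epochs=5)  # 'person_dataset' stands in for the vast image set

# Shrink the trained network for the $10-20 device: default (8-bit) quantisation.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
open("person_detector.tflite", "wb").write(converter.convert())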

The key thing here is that the nice attention-grabbing demos of computer vision that recognize a dog or a tree, or a pedestrian, are just the first, obvious use cases for a fundamental new capability - to read images. And not just to read them the way humans can, but to read a billion and see the patterns.

Japanese scientists just used A.I. to read minds and it's amazing

Machine learning has previously been used to study brain scans (MRIs, or magnetic resonance imaging) and generate visualizations of what a person is thinking of when viewing simple, binary images like black and white letters or basic geometric shapes.

The new technique allows the scientists to decode more sophisticated 'hierarchical' images, which have multiple layers of color and structure, like a picture of a bird or a man wearing a cowboy hat, for example.

'We have been studying methods to reconstruct or recreate an image a person is seeing just by looking at the person's brain activity,' Kamitani, one of the scientists, tells CNBC Make It.

For the research, over the course of 10 months, three subjects were shown natural images (like photographs of a bird or a person), artificial geometric shapes and alphabetical letters for varying lengths of time.

Deep Learning

Kurzweil told Larry Page, who had read an early draft, that he wanted to start a company to develop his ideas about how to build a truly intelligent computer: one that could understand language and then make inferences and decisions on its own.

The basic idea—that software can simulate the neocortex’s large array of neurons in an artificial “neural network”—is decades old, and it has led to as many disappointments as breakthroughs.

Last June, a Google deep-learning system that had been shown 10 million images from YouTube videos proved almost twice as good as any previous image recognition effort at identifying objects such as cats.

In October, Microsoft chief research officer Rick Rashid wowed attendees at a lecture in China with a demonstration of speech software that transcribed his spoken words into English text with an error rate of 7 percent, translated them into Chinese-language text, and then simulated his own voice uttering them in Mandarin.

Hinton, who will split his time between the university and Google, says he plans to “take ideas out of this field and apply them to real problems” such as image recognition, search, and natural-language understanding.

Extending deep learning into applications beyond speech and image recognition will require more conceptual and software breakthroughs, not to mention many more advances in processing power.

Neural networks, developed in the 1950s not long after the dawn of AI research, looked promising because they attempted to simulate the way the brain worked, though in greatly simplified form.

These weights determine how each simulated neuron responds—with a mathematical output between 0 and 1—to a digitized feature such as an edge or a shade of blue in an image, or a particular energy level at one frequency in a phoneme, the individual unit of sound in spoken syllables.
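
A minimal sketch of one such simulated neuron, assuming the usual logistic squashing function so the output lands between 0 and 1; the feature names and numbers below are made up for illustration.

# Sketch of the single simulated neuron described above: a weighted sum of input
# features squashed to an output between 0 and 1. Values are invented for illustration.
import numpy as np

def neuron(features, weights, bias):
    """Return a response between 0 and 1 via the logistic (sigmoid) function."""
    return 1.0 / (1.0 + np.exp(-(np.dot(weights, features) + bias)))

# e.g. three digitised features: edge strength, amount of blue, texture contrast
features = np.array([0.8, 0.1, 0.4])
weights = np.array([1.5, -0.7, 0.3])         # learned during training
print(neuron(features, weights, bias=-0.2))  # a value between 0 and 1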

Programmers would train a neural network to detect an object or phoneme by blitzing the network with digitized versions of images containing those objects or sound waves containing those phonemes.

The eventual goal of this training was to get the network to consistently recognize the patterns in speech or sets of images that we humans know as, say, the phoneme “d” or the image of a dog.

This is much the same way a child learns what a dog is by noticing the details of head shape, behavior, and the like in furry, barking animals that other people call dogs.

Once that layer accurately recognizes those features, they’re fed to the next layer, which trains itself to recognize more complex features, like a corner or a combination of speech sounds.
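
As an illustration of that layering, the sketch below stacks a few convolutional layers in Keras, with early layers picking up simple features and deeper layers combining them into more complex ones; the layer sizes and the ten output classes are arbitrary assumptions, not anything from the article.

# Illustrative stack of layers: early layers respond to simple features (edges,
# colour blobs), deeper layers to combinations of them (corners, object parts).
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(32, 32, 3)),  # edges, blobs
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),   # corners, simple textures
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),   # object parts
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),    # e.g. 10 object classes
])
model.summary()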

Because the multiple layers of neurons allow for more precise training on the many variants of a sound, the system can recognize scraps of sound more reliably, especially in noisy environments such as subway platforms.

Hawkins, author of On Intelligence, a 2004 book on how the brain works and how it might provide a guide to building intelligent machines, says deep learning fails to account for the concept of time.

Brains process streams of sensory data, he says, and human learning depends on our ability to recall sequences of patterns: when you watch a video of a cat doing something funny, it’s the motion that matters, not a series of still images like those Google used in its experiment.

In high school, Kurzweil wrote software that enabled a computer to create original music in various classical styles, which he demonstrated in a 1965 appearance on the TV show I’ve Got a Secret.

Since then, his inventions have included several firsts—a print-to-speech reading machine, software that could scan and digitize printed text in any font, music synthesizers that could re-create the sound of orchestral instruments, and a speech recognition system with a large vocabulary.

This isn’t his immediate goal at Google, but it matches that of Google cofounder Sergey Brin, who said in the company’s early days that he wanted to build the equivalent of the sentient computer HAL in 2001: A Space Odyssey—except one that wouldn’t kill people.

“My mandate is to give computers enough understanding of natural language to do useful things—do a better job of search, do a better job of answering questions,” he says.

Watson could answer queries as quirky as “a long, tiresome speech delivered by a frothy pie topping.” (Watson’s correct answer: “What is a meringue harangue?”) Kurzweil isn’t focused solely on deep learning, though he says his approach to speech recognition is based on similar theories about how the brain works.

“That’s not a project I think I’ll ever finish.” Though Kurzweil’s vision is still years from reality, deep learning is likely to spur other applications beyond speech and image recognition in the nearer term.

Microsoft’s Peter Lee says there’s promising early research on potential uses of deep learning in machine vision—technologies that use imaging for applications such as industrial inspection and robot guidance.

Device that can literally read your mind invented by scientists

A device that can read people’s minds by detecting their brainwaves has been developed in a breakthrough that could eventually enable people with “locked-in syndrome” to communicate.

The system was only partially effective, achieving a 90 per cent success rate when trying to recognise numbers from zero to nine and a 61 per cent rate for single syllables in Japanese, the researchers said.

“At the same time, 61 per cent accuracy in 18 Japanese monosyllable recognition was achieved, outperforming performance in previous research (humans have sufficient intelligibility of sentences with an 80 per cent monosyllable recognition rate).” The researchers said other attempts to use brain waves to understand people’s thoughts had struggled to understand what was being “said”.

“Furthermore, the research group plans to develop a device that can be easily operated with fewer electrodes and connected to smartphones within the next five years.” The statement was released ahead of a conference later this year where the research will be presented in more detail.

The search for a thinking machine

It means we are getting machines that can, for example, teach themselves how to play computer games and get incredibly good at them (work ongoing at Google's DeepMind) and devices that can start to communicate in human-like speech, such as voice assistants on smartphones.

First as a PhD student and latterly as director of the computer vision lab at Stanford University, Fei-Fei Li has pursued the painstakingly difficult goal of machine vision, with the aim of ultimately creating the electronic eyes for robots and machines to see and, more importantly, understand their environment.

Back in 2007, Ms Li and a colleague set about the mammoth task of sorting and labelling a billion diverse and random images from the internet to offer examples of the real world for the computer - the theory being that if the machine saw enough pictures of something, a cat for example, it would be able to recognise it in real life.

To teach the computer to recognise images, Ms Li and her team used neural networks, computer programs assembled from artificial brain cells that learn and behave in a remarkably similar way to human brains.
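
As a hedged illustration of that idea - recognising a cat because the network has seen enough labelled pictures - the sketch below uses an off-the-shelf ImageNet-pretrained model rather than the Stanford team's actual system; 'cat.jpg' is a placeholder path.

# Sketch of image recognition with a network pretrained on labelled internet images.
# This is a stand-in, not Ms Li's system; 'cat.jpg' is a placeholder file.
import tensorflow as tf
from tensorflow.keras.applications.mobilenet_v2 import (
    MobileNetV2, preprocess_input, decode_predictions)

model = MobileNetV2(weights="imagenet")
img = tf.keras.utils.load_img("cat.jpg", target_size=(224, 224))
x = preprocess_input(tf.keras.utils.img_to_array(img)[None, ...])
for _, label, score in decode_predictions(model.predict(x), top=3)[0]:
    print(label, round(float(score), 3))   # e.g. tabby, Egyptian_cat, ...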

At Stanford, the image-reading machine now writes pretty accurate captions for a whole range of images, although it does still get things wrong - for instance a picture of a baby holding a toothbrush was wrongly labelled 'a young boy is holding a baseball bat'.

The ultimate aim is to create 'seeing' robots that can assist in surgical operations, search for and rescue people in disaster zones and generally improve our lives, said Ms Li.

Back in 1950, pioneering computer scientist Alan Turing wrote a paper speculating about a thinking machine and the term 'artificial intelligence' was coined in 1956 by Prof John McCarthy at a gathering of scientists in New Hampshire known as the Dartmouth Conference.

After some heady days and big developments in the 1950s and 60s, during which both the Stanford lab and one at the Massachusetts Institute of Technology were set up, it became clear that the task of creating a thinking machine was going to be a lot harder than originally thought.

But, by the 1990s, the focus in the AI community shifted from a logic-based approach - which basically involved writing a whole lot of rules for computers to follow - to a statistical one, using huge datasets and asking computers to mine them to solve problems for themselves.

Lecture 2 | Image Classification

Lecture 2 formalizes the problem of image classification. We discuss the inherent difficulties of image classification, and introduce data-driven approaches.
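
A minimal example of the data-driven approach the lecture introduces, assuming a plain nearest-neighbour classifier over flattened pixels; the arrays below are random placeholders, not a real dataset.

# A minimal data-driven classifier: instead of hand-writing rules, memorise labelled
# images and label a new image by its closest training example (L1 distance).
import numpy as np

def nearest_neighbor_predict(train_images, train_labels, test_image):
    """train_images: (N, D) flattened pixels; test_image: (D,). Returns a label."""
    distances = np.sum(np.abs(train_images - test_image), axis=1)  # L1 distance to each example
    return train_labels[np.argmin(distances)]

# toy run with random stand-in "images"
rng = np.random.default_rng(0)
train_images = rng.random((100, 32 * 32 * 3))
train_labels = rng.integers(0, 10, size=100)
print(nearest_neighbor_predict(train_images, train_labels, rng.random(32 * 32 * 3)))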

Human Pose Estimation With Deep Learning | Two Minute Papers #106

The paper "Keep it SMPL: Automatic Estimation of 3D Human Pose and Shape from a Single Image" is available here: ...

On-device machine learning: TensorFlow on Android (Google Cloud Next '17)

In this video, Yufeng Guo applies deep learning models to local prediction on mobile devices. Yufeng shows you how to use TensorFlow to implement a ...
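
For flavour, a sketch of the same local-prediction flow using the TensorFlow Lite interpreter from Python; the real Android path uses the Java/Kotlin Interpreter API, and 'person_detector.tflite' is just the illustrative model file from the earlier sketch.

# Sketch of local, on-device prediction with a converted TensorFlow Lite model.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="person_detector.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

frame = np.zeros(inp["shape"], dtype=inp["dtype"])  # stand-in for a camera frame
interpreter.set_tensor(inp["index"], frame)
interpreter.invoke()
print(interpreter.get_tensor(out["index"]))  # e.g. probability a person is present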

Computer Vision: Crash Course Computer Science #35

Today we're going to talk about how computers see. We've long known that our digital cameras and smartphones can take incredibly detailed images, but taking ...

Demystifying Machine and Deep Learning for Developers : Build 2018

To build the next set of personalized and engaging applications, more and more developers are adding ML to their applications. In this session, you'll learn the ...

How computers are learning to be creative | Blaise Agüera y Arcas

We're on the edge of a new frontier in art and creativity — and it's not human. Blaise Agüera y Arcas, principal scientist at Google, works with deep neural ...

Transform Retail with Machine Learning: Find & Recommend products (Google Cloud Next '17)

Businesses today are realizing that they can use machine learning to improve customer experience. With the most recent models, you can simplify product ...

Machine Learning - A New Programming Paradigm

In this video from RedHat Summit 2018, Cassie Kozyrkov demystifies machine learning and AI. She describes how they're simply a different way to program ...