AI News, Hyping Artificial Intelligence, Yet Again

Hyping Artificial Intelligence, Yet Again

This past Sunday’s story, by John Markoff, announced that “computers have entered the age when they are able to learn from their own mistakes, a development that is about to turn the digital world on its head.” The deep-learning story, from a year ago, also by Markoff, told us of “advances in an artificial intelligence technology that can recognize patterns offer the possibility of machines that perform human activities like seeing, listening and thinking.” For fans of “Battlestar Galactica,” it sounds like exciting stuff.

As the author notes in passing, “the new computing approach” is “already in use by some large technology companies.” Mostly, the article seems to be about neuromorphic processors—computer processors that are organized to be somewhat brainlike—though, as the piece points out, they have been around since the nineteen-eighties.

(If you check the archives, the Times billed an earlier wave of this work as a revolution, with the headline “NEW NAVY DEVICE LEARNS BY DOING.” The New Yorker similarly gushed about the advance.) The only new thing mentioned is a computer chip, as yet unproven but scheduled to be released this year, along with the claim that it can “potentially [make] the term ‘computer crash’ obsolete.” Steven Pinker wrote me an e-mail after reading the Times story, saying “We’re back in 1985!”—the last time there was huge hype in the mainstream media about neural networks.

As one prominent researcher who heads a major industrial A.I. lab put it a few months ago in a Google+ post, a kind of open letter to the media, “AI [has] ‘died’ about four times in five decades because of hype: people made wild claims (often to impress potential investors or funding agencies) and could not deliver.”

After typing in “better than ‘Cats!’ ” (which the system correctly interpreted as positive), the first thing I tested was a Rotten Tomatoes excerpt of a review of the last movie I saw, “American Hustle”: “A sloppy, miscast, hammed up, overlong, overloud story that still sends you out of the theater on a cloud of rapture.” The deep-learning system couldn’t tell me that the review was ironic, or that the reviewer thought the whole was more than the sum of the parts.
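
To see why a purely statistical sentiment system can miss irony like this, consider a bare-bones bag-of-words classifier: it scores words more or less independently, so a review stacked with negative words that pivots at the end is easy to misread. The sketch below illustrates that general class of model only, not the demo described above; the training snippets and labels are invented.

```python
# A toy bag-of-words sentiment classifier (illustration only, not the demo
# system discussed above). Training snippets and labels are invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "better than Cats! an absolute triumph",    # positive (invented)
    "a rapturous, wonderful delight",           # positive (invented)
    "sloppy, miscast, overlong and overloud",   # negative (invented)
    "a tiresome, hammed up, noisy mess",        # negative (invented)
]
train_labels = [1, 1, 0, 0]  # 1 = positive, 0 = negative

model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(train_texts, train_labels)

review = ("A sloppy, miscast, hammed up, overlong, overloud story that "
          "still sends you out of the theater on a cloud of rapture.")
# The model only counts words; with no notion of irony or overall structure,
# the pile of negative words may well outvote the positive ending.
print(model.predict([review])[0])
```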

As a more balanced article on the same topic in Technology Review recently reported, some neuroscientists, including Henry Markram, the director of a European project to simulate the human brain, are quite skeptical of the currently implemented neuromorphic systems on the grounds that their representations of the brain are too simplistic and abstract.

The Business of Artificial Intelligence

For more than 250 years the fundamental drivers of economic growth have been technological innovations.

The internal combustion engine, for example, gave rise to cars, trucks, airplanes, chain saws, and lawnmowers, along with big-box retailers, shopping centers, cross-docking warehouses, new supply chains, and, when you think about it, suburbs.

The most important of these technologies today is machine learning (ML): that is, the machine’s ability to keep improving its performance without humans having to explain exactly how to accomplish all the tasks it’s given.

The effects of AI will be magnified in the coming decade, as manufacturing, retailing, transportation, finance, health care, law, advertising, insurance, entertainment, education, and virtually every other industry transform their core processes and business models to take advantage of machine learning.

We see business plans liberally sprinkled with references to machine learning, neural nets, and other forms of the technology, with little connection to its real capabilities.

The term artificial intelligence was coined in 1955 by John McCarthy, a math professor at Dartmouth who organized the seminal conference on the topic the following year.

A study by the Stanford computer scientist James Landay and colleagues found that speech recognition is now about three times as fast, on average, as typing on a cell phone.

Vision systems, such as those used in self-driving cars, formerly made a mistake when identifying a pedestrian as often as once in 30 frames (the cameras in these systems record about 30 frames a second); now they err less often than once in 30 million frames.

The error rate for recognizing images from a large database called ImageNet, with several million photographs of common, obscure, or downright weird images, fell from higher than 30% in 2010 to about 4% in 2016 for the best systems.

Google’s DeepMind team has used ML systems to improve the cooling efficiency at data centers by more than 15%, even after they were optimized by human experts.

A system using IBM technology automates the claims process at an insurance company in Singapore, and a system from Lumidatum, a data science platform firm, offers timely advice to improve customer support.

Infinite Analytics developed one ML system to predict whether a user would click on a particular ad, improving online ad placement for a global consumer packaged goods company, and another to improve customers’ search and discovery experience.

For instance, Aptonomy and Sanbot, makers respectively of drones and robots, are using improved vision systems to automate much of the work of security guards.

More fundamentally, we can marvel at a system that understands Chinese speech and translates it into English, but we don’t expect such a system to know what a particular Chinese character means.

The fallacy that a computer’s narrow understanding implies broader understanding is perhaps the biggest source of confusion, and exaggerated claims, about AI’s progress.

The most important thing to understand about ML is that it represents a fundamentally different approach to creating software: The machine learns from examples, rather than being explicitly programmed for a particular outcome.

For most of the past 50 years, advances in information technology and its applications have focused on codifying existing knowledge and procedures and embedding them in machines.

In this second wave of the second machine age, machines built by humans are learning from examples and using structured feedback to solve, on their own, problems such as Polanyi’s classic one of recognizing a face.

Artificial intelligence and machine learning come in many flavors, but most of the successes in recent years have been in one category: supervised learning systems, in which the machine is given lots of examples of the correct answer to a particular problem.
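
As a concrete, if toy, illustration of that supervised-learning recipe, the sketch below fits a classifier purely from labeled examples rather than hand-written rules. It assumes scikit-learn and uses synthetic data; it is a minimal sketch of the paradigm, not any of the production systems mentioned above.

```python
# Supervised learning in miniature: the model is given many examples of the
# "correct answer" (labels) and generalizes from them, rather than being
# explicitly programmed with rules. Synthetic data, for illustration only.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```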

Deep Learning

Kurzweil told Page, who had read an early draft of his book, that he wanted to start a company to develop his ideas about how to build a truly intelligent computer: one that could understand language and then make inferences and decisions on its own.

The basic idea—that software can simulate the neocortex’s large array of neurons in an artificial “neural network”—is decades old, and it has led to as many disappointments as breakthroughs.

Last June, a Google deep-learning system that had been shown 10 million images from YouTube videos proved almost twice as good as any previous image recognition effort at identifying objects such as cats.

In October, Microsoft chief research officer Rick Rashid wowed attendees at a lecture in China with a demonstration of speech software that transcribed his spoken words into English text with an error rate of 7 percent, translated them into Chinese-language text, and then simulated his own voice uttering them in Mandarin.

Hinton, who will split his time between the university and Google, says he plans to “take ideas out of this field and apply them to real problems” such as image recognition, search, and natural-language understanding.

Extending deep learning into applications beyond speech and image recognition will require more conceptual and software breakthroughs, not to mention many more advances in processing power.

Neural networks, developed in the 1950s not long after the dawn of AI research, looked promising because they attempted to simulate the way the brain worked, though in greatly simplified form.

The weights on a network’s connections determine how each simulated neuron responds—with a mathematical output between 0 and 1—to a digitized feature such as an edge or a shade of blue in an image, or a particular energy level at one frequency in a phoneme, the individual unit of sound in spoken syllables.
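
A minimal sketch of that description of a single simulated neuron: a weighted sum of input features, squashed into an output between 0 and 1. The weights and inputs below are arbitrary illustrative numbers, not values from any real system.

```python
# One simulated "neuron": a weighted sum of input features squashed by a
# sigmoid into an output between 0 and 1. Weights and inputs are arbitrary
# placeholder values, purely for illustration.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

weights = np.array([0.8, -0.3, 0.5])   # strengths of each connection
bias = -0.2
features = np.array([0.9, 0.1, 0.4])   # e.g. digitized image features

output = sigmoid(np.dot(weights, features) + bias)
print(output)   # a value between 0 and 1
```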

Programmers would train a neural network to detect an object or phoneme by blitzing the network with digitized versions of images containing those objects or sound waves containing those phonemes.

The eventual goal of this training was to get the network to consistently recognize the patterns in speech or sets of images that we humans know as, say, the phoneme “d” or the image of a dog.
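
What that training amounts to, in the simplest possible case, is repeatedly nudging the weights so the unit’s output moves toward the correct label for each example. The toy loop below does this for a single sigmoid unit with gradient descent on synthetic data; real networks stack many such units and see vastly more data.

```python
# Toy training loop: adjust weights by gradient descent so the unit's output
# moves toward the correct label for each example. Data is synthetic.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                 # 200 examples, 3 features
y = (X[:, 0] + X[:, 1] > 0).astype(float)     # invented "correct answers"

w, b, lr = np.zeros(3), 0.0, 0.1
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))    # outputs between 0 and 1
    w -= lr * (X.T @ (p - y)) / len(y)        # nudge weights toward the labels
    b -= lr * np.mean(p - y)

p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
print("training accuracy:", np.mean((p > 0.5) == y))
```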

This is much the same way a child learns what a dog is by noticing the details of head shape, behavior, and the like in furry, barking animals that other people call dogs.

Once the first layer accurately recognizes simple features such as edges, its outputs are fed to the next layer, which trains itself to recognize more complex features, like a corner or a combination of speech sounds.

Because the multiple layers of neurons allow for more precise training on the many variants of a sound, the system can recognize scraps of sound more reliably, especially in noisy environments such as subway platforms.
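
The layering works by letting one layer’s outputs become the next layer’s inputs, so later layers can respond to combinations of simpler features. Below is a minimal two-layer forward pass; the weights are random placeholders purely for illustration (in a real system they would be learned, and the layers would be far larger).

```python
# Two stacked layers: the first layer's outputs (simple "features") become
# the inputs to the second layer, which can respond to combinations of them.
# Weights here are random placeholders, not learned values.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
x = rng.normal(size=16)            # e.g. a patch of digitized input

W1 = rng.normal(size=(8, 16))      # layer 1: 8 simple feature detectors
h = sigmoid(W1 @ x)                # low-level features (edges, tones, ...)

W2 = rng.normal(size=(4, 8))       # layer 2: combinations of those features
out = sigmoid(W2 @ h)              # higher-level features (corners, ...)
print(out)
```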

Jeff Hawkins, author of On Intelligence, a 2004 book on how the brain works and how it might provide a guide to building intelligent machines, says deep learning fails to account for the concept of time.

Brains process streams of sensory data, he says, and human learning depends on our ability to recall sequences of patterns: when you watch a video of a cat doing something funny, it’s the motion that matters, not a series of still images like those Google used in its experiment.

In high school, Kurzweil wrote software that enabled a computer to create original music in various classical styles, which he demonstrated in a 1965 appearance on the TV show I’ve Got a Secret.

Since then, his inventions have included several firsts—a print-to-speech reading machine, software that could scan and digitize printed text in any font, music synthesizers that could re-create the sound of orchestral instruments, and a speech recognition system with a large vocabulary.

Building a truly intelligent machine isn’t his immediate goal at Google, but it matches that of Google cofounder Sergey Brin, who said in the company’s early days that he wanted to build the equivalent of the sentient computer HAL in 2001: A Space Odyssey—except one that wouldn’t kill people.

“My mandate is to give computers enough understanding of natural language to do useful things—do a better job of search, do a better job of answering questions,” he says.

Watson, for example, could handle Jeopardy! queries as quirky as “a long, tiresome speech delivered by a frothy pie topping.” (Watson’s correct answer: “What is a meringue harangue?”) Kurzweil isn’t focused solely on deep learning, though he says his approach to speech recognition is based on similar theories about how the brain works.

“That’s not a project I think I’ll ever finish.” Though Kurzweil’s vision is still years from reality, deep learning is likely to spur other applications beyond speech and image recognition in the nearer term.

Microsoft’s Peter Lee says there’s promising early research on potential uses of deep learning in machine vision—technologies that use imaging for applications such as industrial inspection and robot guidance.

The Great A.I. Awakening

The “voters” that got “cat” right get their votes counted double next time — at least when they’re voting for “cat.” They have to prove independently whether they’re also good at picking out dogs and defibrillators, but one thing that makes a neural network so flexible is that each individual unit can contribute differently to different desired outcomes.

The neural network just needs to register enough of a regularly discernible signal somewhere to say, “Odds are, this particular arrangement of pixels represents something these humans keep calling ‘cats.’ ” The more “voters” you have, and the more times you make them vote, the more keenly the network can register even very weak signals.

The neuronal “voters” will recognize a happy cat dozing in the sun and an angry cat glaring out from the shadows of an untidy litter box, as long as they have been exposed to millions of diverse cat scenes.

You just need lots and lots of the voters — in order to make sure that some part of your network picks up on even very weak regularities, on Scottish Folds with droopy ears, for example — and enough labeled data to make sure your network has seen the widest possible variance in phenomena.
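
The “votes counted double” picture above can be sketched as a weighted-vote update: voters that were right on an example get more say on similar cases later. The toy below illustrates that metaphor only; real neural networks are trained with gradient-based methods such as backpropagation, not this rule.

```python
# Toy "weighted voting": each voter is a trivially simple rule; voters that
# get an example right have their weight increased, so reliable voters come
# to dominate. A simplified illustration of the article's metaphor, not how
# actual neural networks are trained.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5))
y = (X[:, 2] > 0).astype(int)           # invented "is it a cat?" labels

weights = np.ones(5)                     # one weight per very simple voter
for example, label in zip(X, y):
    votes = (example > 0).astype(int)    # each voter thresholds one feature
    # Boost voters that got this example right (the "counted double" idea),
    # shrink the ones that got it wrong.
    weights *= np.where(votes == label, 1.1, 0.9)

print("relative say of each voter:", np.round(weights / weights.sum(), 3))
```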

If a machine was asked to identify creditworthy candidates for loans, it might use data like felony convictions, but if felony convictions were unfair in the first place — if they were based on, say, discriminatory drug laws — then the loan recommendations would perforce also be fallible.

What the cat paper demonstrated was that a neural network with more than a billion “synaptic” connections — a hundred times larger than any publicized neural network to that point, yet still many orders of magnitude smaller than our brains — could observe raw, unlabeled data and pick out for itself a high-order human concept.

(The researchers discovered this with the neural-network equivalent of something like an M.R.I., which showed them that a ghostly cat face caused the artificial neurons to “vote” with the greatest collective enthusiasm.) Most machine learning to that point had been limited by the quantities of labeled data.
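
The point of the cat paper is finding structure in raw, unlabeled data. A billion-connection network is far beyond a short sketch, but the underlying idea of unsupervised learning can be illustrated with something much simpler, such as clustering unlabeled points into groups nobody named in advance. This is a deliberately modest stand-in, assuming scikit-learn, and is in no way the Google system itself.

```python
# Unsupervised learning in miniature: no labels are provided, yet the
# algorithm discovers groupings in the raw data on its own. A clustering
# toy, standing in (very loosely) for the far larger neural-network result.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Two unlabeled "concepts": blobs of points around different centers.
data = np.vstack([
    rng.normal(loc=0.0, scale=0.5, size=(100, 2)),
    rng.normal(loc=3.0, scale=0.5, size=(100, 2)),
])

clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(data)
print("points assigned to each discovered group:", np.bincount(clusters))
```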

AI could get 100 times more energy-efficient with IBM’s new artificial synapses

The catch is that neural nets, which are modeled loosely on the structure of the human brain, are typically constructed in software rather than hardware, and the software runs on conventional computer chips.

This method “addresses a few key issues,” most notably low accuracy, that have bedeviled previous efforts to build artificial neural networks in silicon, says Michael Schneider, a researcher at the National Institute of Standards and Technology who studies neurologically inspired computer hardware.

Although the company doesn’t sell computer chips these days, it has been investing in efforts to reinvent computer hardware, hoping that fundamentally new types of microelectronic components might help provide impetus for the next big advances.

The design of IBM’s chips is also still relatively clunky, consisting of five transistors and three other components where there would be a single transistor on a normal chip. Some aspects of the system, moreover, have so far been tested only in simulation, a common technique for validating microchip designs.

Google's Deep Mind Explained! - Self Learning A.I.

Glossika's AI-Driven Language Learning

Michael Campbell explains how Glossika uses AI and machine learning technology to optimize the language learning experience to help learners achieve ...

The Impact of AI on Autonomous Vehicles

In this webinar with IHS Markit, learn how automotive OEMs and chip designers can leverage AI, deep learning, and convolutional neural networks (CNNs) to ...

Probabilistic Machine Learning and AI

How can a machine learn from experience? Probabilistic modelling provides a mathematical framework for understanding what learning is, and has therefore ...

From Deep Learning of Disentangled Representations to Higher-level Cognition

One of the main challenges for AI remains unsupervised learning, at which humans are much better than machines, and which we link to another challenge: ...

Nvidia gave away its newest AI chips for free - and that's part of the reason why it's dominating th

One wouldn't think that giving away your best product is a winning business strategy, but for Nvidia, it's one that's working. The graphics processing unit (GPU) ...

Google DeepMind's Deep Q-learning playing Atari Breakout

Google DeepMind created an artificial intelligence program using deep reinforcement learning that plays Atari games and improves itself to a superhuman level.

ICO Review: Effect.AI (EFX) the Decentralized Network for Artificial Intelligence

Effect.ai is an open, decentralized network that provides services in the Artificial ...