AI News, Quiz-playing computer system could revolutionize research
- On 3 June 2018
Three years ago, researchers at the secretive Google X lab in Mountain View, California, extracted some 10 million still images from YouTube videos and fed them into Google Brain — a network of 1,000 computers programmed to soak up the world much as a human toddler does.
After three days looking for recurring patterns, Google Brain decided, all on its own, that there were certain repeating categories it could identify: human faces, human bodies and … cats [1].
But it was also a landmark in the resurgence of deep learning: a three-decade-old technique in which massive amounts of data and processing power help computers to crack messy problems that humans solve almost intuitively, from recognizing faces to understanding language.
These systems, loosely inspired by the densely interconnected neurons of the brain, mimic human learning by changing the strength of simulated neural connections on the basis of experience.
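The weight-adjustment idea described here can be sketched in a few lines. This is an illustrative toy only (a single simulated neuron trained with the classic perceptron rule, on invented data), not the code of any system mentioned in the article:

```python
# A toy "simulated neuron": its connection weights strengthen or weaken
# on the basis of experience, here via the classic perceptron update rule.

def train_neuron(samples, epochs=20, lr=0.1):
    """samples: list of (inputs, target) pairs with target 0 or 1."""
    n = len(samples[0][0])
    weights = [0.0] * n
    bias = 0.0
    for _ in range(epochs):
        for inputs, target in samples:
            activation = sum(w * x for w, x in zip(weights, inputs)) + bias
            output = 1 if activation > 0 else 0
            error = target - output
            # "Experience" changes connection strength in proportion to error.
            weights = [w + lr * error * x for w, x in zip(weights, inputs)]
            bias += lr * error
    return weights, bias

def predict(weights, bias, inputs):
    return 1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0

# Learn the logical AND function from four labelled examples.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_neuron(data)
```

Real deep-learning systems stack many layers of such units and use far more sophisticated update rules, but the principle, strengthening connections in response to errors, is the same.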
For everyday consumers, the results include software better able to sort through photos, understand spoken commands and translate text from foreign languages.
For scientists and industry, deep-learning computers can search for potential drug candidates, map real neural networks in the brain or predict the functions of proteins.
“Over time people will decide what works best in different domains.”

Back in the 1950s, when computers were new, the first generation of AI researchers eagerly predicted that fully fledged AI was right around the corner.
But that optimism faded as researchers began to grasp the vast complexity of real-world knowledge — particularly when it came to perceptual problems such as what makes a face a human face, rather than a mask or a monkey face.
By the 2000s, however, advocates such as LeCun and his former supervisor, computer scientist Geoffrey Hinton of the University of Toronto in Canada, were convinced that increases in computing power and an explosion of digital data meant that it was time for a renewed push.
In 2009, the researchers reported [2] that after training on a classic data set — three hours of taped and transcribed speech — their deep-learning neural network had broken the record for accuracy in turning the spoken word into typed text, a record that had not shifted much in a decade with the standard, rules-based approach.
The project's ability to spot cats was a compelling (but not, on its own, commercially viable) demonstration of unsupervised learning — the most difficult learning task, because the input comes without any explanatory information such as names, titles or categories.
“After many of my talks,” he says, “depressed graduate students would come up to me and say: 'I don't have 1,000 computers lying around, can I even research this?'” So back at Stanford, Ng started developing bigger, cheaper deep-learning networks using graphics processing units (GPUs) — the super-fast chips developed for home-computer gaming [3].
With triumphs in hand for image and speech recognition, there is now increasing interest in applying deep learning to natural-language understanding — comprehending human discourse well enough to rephrase or answer questions, for example — and to translation from one language to another.
“Deep learning will have a chance to do something much better than the current practice here,” says crowd-sourcing expert Luis von Ahn, whose company Duolingo, based in Pittsburgh, Pennsylvania, relies on humans, not computers, to translate text.
The task was to trawl through database entries on more than 30,000 small molecules, each of which had thousands of numerical chemical-property descriptors, and to try to predict how each one acted on 15 different target molecules.
Seung is currently using a deep-learning program to map neurons in a large chunk of the retina, then forwarding the results to be proofread by volunteers in a crowd-sourced online game called EyeWire.
William Stafford Noble, a computer scientist at the University of Washington in Seattle, has used deep learning to teach a program to look at a string of amino acids and predict the structure of the resulting protein — whether various portions will form a helix or a loop, for example, or how easy it will be for a solvent to sneak into gaps in the structure.
Etzioni's specific goal is to invent a computer that, when given a stack of scanned textbooks, can pass standardized elementary-school science tests (ramping up eventually to pre-university exams).
In December 2012, it hired futurist Ray Kurzweil to pursue various ways for computers to learn from experience — using techniques including but not limited to deep learning.
Why Deep Learning Is Suddenly Changing Your Life
Over the past four years, readers have doubtless noticed quantum leaps in the quality of a wide range of everyday technologies.
To gather up dog pictures, the app must identify anything from a Chihuahua to a German shepherd and not be tripped up if the pup is upside down or partially obscured, at the right of the frame or the left, in fog or snow, sun or shade.
Medical startups claim they’ll soon be able to use computers to read X-rays, MRIs, and CT scans more rapidly and accurately than radiologists, to diagnose cancer earlier and less invasively, and to accelerate the search for life-saving pharmaceuticals.
They’ve all been made possible by a family of artificial intelligence (AI) techniques popularly known as deep learning, though most scientists still prefer to call them by their original academic designation: deep neural networks.
Programmers have, rather, fed the computer a learning algorithm, exposed it to terabytes of data—hundreds of thousands of images or years’ worth of speech samples—to train it, and have then allowed the computer to figure out for itself how to recognize the desired objects, words, or sentences.
“You essentially have software writing software,” says Jen-Hsun Huang, CEO of graphics processing leader Nvidia, which began placing a massive bet on deep learning about five years ago.
What’s changed is that today computer scientists have finally harnessed both the vast computational power and the enormous storehouses of data—images, video, audio, and text files strewn across the Internet—that, it turns out, are essential to making neural nets work well.
“We’re now living in an age,” Chen observes, “where it’s going to be mandatory for people building sophisticated software applications.” People will soon demand, he says, “‘Where’s your natural-language processing version?’ ‘How do I talk to your app?’”
The increased computational power that is making all this possible derives not only from Moore’s law but also from the realization in the late 2000s that graphics processing units (GPUs) made by Nvidia—the powerful chips that were first designed to give gamers rich, 3D visual experiences—were 20 to 50 times more efficient than traditional central processing units (CPUs) for deep-learning computations.
Its chief financial officer told investors that “the vast majority of the growth comes from deep learning by far.” The term “deep learning” came up 81 times during the 83-minute earnings call.
“I think five years from now there will be a number of S&P 500 CEOs that will wish they’d started thinking earlier about their AI strategy.” Even the Internet metaphor doesn’t do justice to what AI with deep learning will mean, in Ng’s view.
Google is Using Deep Learning to Make Computers Better with Age!
The hardware in our machines struggles to handle the latest software updates, and this inevitably leads to wear and tear.
In their latest research paper, they have proposed a solution using deep learning that might make our machines better with age!
Basically, our computers process information far faster than data can be fetched from memory.
Its deep-learning model, made up of a gigantic simulated neural network, is working on revamping the prefetching process.
The researchers are confident that their deep-learning model, once improved further, could potentially be applied to all components of a machine, from chip design to the operating system.
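The prediction task behind learned prefetching can be sketched without a neural network. The sketch below swaps the researchers' neural model for a simple frequency table over address deltas, purely to show the shape of the problem; the class name and access trace are hypothetical:

```python
from collections import Counter

# Prefetching in miniature: learn which memory-address jumps ("deltas")
# tend to follow recent history, then predict the next access ahead of
# demand. A real learned prefetcher uses a neural model over far richer
# context; this frequency table only illustrates the prediction task.

class DeltaPrefetcher:
    def __init__(self):
        self.table = Counter()   # (previous delta, next delta) -> count
        self.prev_addr = None
        self.prev_delta = None

    def access(self, addr):
        """Record a memory access; return a predicted next address (or None)."""
        prediction = None
        if self.prev_addr is not None:
            delta = addr - self.prev_addr
            if self.prev_delta is not None:
                self.table[(self.prev_delta, delta)] += 1
            # Predict the delta most often seen after the current one.
            candidates = [(count, d2) for (d1, d2), count in self.table.items()
                          if d1 == delta]
            if candidates:
                prediction = addr + max(candidates)[1]
            self.prev_delta = delta
        self.prev_addr = addr
        return prediction

p = DeltaPrefetcher()
trace = [100, 108, 116, 124, 132, 140]   # a steady stride-8 access pattern
predictions = [p.access(a) for a in trace]
```

On this stride-8 trace the predictor has no guess for the first two accesses, then correctly anticipates every subsequent address; a neural model aims to capture far messier, non-strided patterns.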
He told Page, who had read an early draft, that he wanted to start a company to develop his ideas about how to build a truly intelligent computer: one that could understand language and then make inferences and decisions on its own.
The basic idea—that software can simulate the neocortex’s large array of neurons in an artificial “neural network”—is decades old, and it has led to as many disappointments as breakthroughs.
Last June, a Google deep-learning system that had been shown 10 million images from YouTube videos proved almost twice as good as any previous image recognition effort at identifying objects such as cats.
In October, Microsoft chief research officer Rick Rashid wowed attendees at a lecture in China with a demonstration of speech software that transcribed his spoken words into English text with an error rate of 7 percent, translated them into Chinese-language text, and then simulated his own voice uttering them in Mandarin.
Hinton, who will split his time between the university and Google, says he plans to “take ideas out of this field and apply them to real problems” such as image recognition, search, and natural-language understanding.
Extending deep learning into applications beyond speech and image recognition will require more conceptual and software breakthroughs, not to mention many more advances in processing power.
Neural networks, developed in the 1950s not long after the dawn of AI research, looked promising because they attempted to simulate the way the brain worked, though in greatly simplified form.
These weights determine how each simulated neuron responds—with a mathematical output between 0 and 1—to a digitized feature such as an edge or a shade of blue in an image, or a particular energy level at one frequency in a phoneme, the individual unit of sound in spoken syllables.
Programmers would train a neural network to detect an object or phoneme by blitzing the network with digitized versions of images containing those objects or sound waves containing those phonemes.
The eventual goal of this training was to get the network to consistently recognize the patterns in speech or sets of images that we humans know as, say, the phoneme “d” or the image of a dog.
This is much the same way a child learns what a dog is by noticing the details of head shape, behavior, and the like in furry, barking animals that other people call dogs.
Once that layer accurately recognizes those features, they’re fed to the next layer, which trains itself to recognize more complex features, like a corner or a combination of speech sounds.
Because the multiple layers of neurons allow for more precise training on the many variants of a sound, the system can recognize scraps of sound more reliably, especially in noisy environments such as subway platforms.
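The layer-by-layer picture above can be sketched as code. This is a minimal illustration with made-up random weights rather than trained ones; each simulated neuron emits a value between 0 and 1 via a sigmoid, and each layer feeds the next, so later layers respond to combinations of the features detected by earlier ones:

```python
import math
import random

def sigmoid(z):
    # Squash any weighted sum into the (0, 1) range described above.
    return 1.0 / (1.0 + math.exp(-z))

def layer(inputs, weights, biases):
    """One layer: each neuron takes a weighted sum of all inputs."""
    return [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

def random_layer(n_in, n_out, rng):
    # Untrained stand-in weights; training would tune these values.
    weights = [[rng.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)]
    biases = [rng.uniform(-1, 1) for _ in range(n_out)]
    return weights, biases

rng = random.Random(0)
pixels = [0.2, 0.9, 0.4, 0.7]        # stand-in for raw image features
w1, b1 = random_layer(4, 3, rng)     # first layer: simple features (edges)
w2, b2 = random_layer(3, 2, rng)     # second layer: combinations of them
hidden = layer(pixels, w1, b1)
output = layer(hidden, w2, b2)
```

Training (omitted here) would adjust `w1`, `b1`, `w2`, and `b2` until the final layer's outputs reliably signal the target pattern — the phoneme “d”, say, or a dog.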
Hawkins, author of On Intelligence, a 2004 book on how the brain works and how it might provide a guide to building intelligent machines, says deep learning fails to account for the concept of time.
Brains process streams of sensory data, he says, and human learning depends on our ability to recall sequences of patterns: when you watch a video of a cat doing something funny, it’s the motion that matters, not a series of still images like those Google used in its experiment.
In high school, he wrote software that enabled a computer to create original music in various classical styles, which he demonstrated in a 1965 appearance on the TV show I’ve Got a Secret.
Since then, his inventions have included several firsts—a print-to-speech reading machine, software that could scan and digitize printed text in any font, music synthesizers that could re-create the sound of orchestral instruments, and a speech recognition system with a large vocabulary.
This isn’t his immediate goal at Google, but it matches that of Google cofounder Sergey Brin, who said in the company’s early days that he wanted to build the equivalent of the sentient computer HAL in 2001: A Space Odyssey—except one that wouldn’t kill people.
“My mandate is to give computers enough understanding of natural language to do useful things—do a better job of search, do a better job of answering questions,” he says.
Watson famously handled Jeopardy! queries as quirky as “a long, tiresome speech delivered by a frothy pie topping.” (Watson’s correct answer: “What is a meringue harangue?”) Kurzweil isn’t focused solely on deep learning, though he says his approach to speech recognition is based on similar theories about how the brain works.
“That’s not a project I think I’ll ever finish.” Though Kurzweil’s vision is still years from reality, deep learning is likely to spur other applications beyond speech and image recognition in the nearer term.
Microsoft’s Peter Lee says there’s promising early research on potential uses of deep learning in machine vision—technologies that use imaging for applications such as industrial inspection and robot guidance.
Google says machine learning is the future. So I tried it myself
But it’s a really important tool.” The most powerful form of machine learning being used today, called “deep learning”, builds a complex mathematical structure called a neural network based on vast quantities of data.
And now everyone, regardless of whether they’re an engineer or a software developer or a product designer or a CEO, understands how internet connectivity shapes their product, shapes the market, what they could possibly build.” He says that same kind of transformation is going to happen with machine learning.
They don’t have to do the detailed things, but they need to understand ‘well, wait a minute, maybe we could do this if we had data to learn from.’” Google’s own implementation of the idea, an open-source software suite called TensorFlow, was built from the ground up to be usable both by the researchers at the company attempting to understand the powerful models they create and by the engineers who are already taking them, bottling them up, and using them to categorise photos or let people search with their voice.
When Google made TensorFlow open to anyone to use, it wrote: “By sharing what we believe to be one of the best machine learning toolboxes in the world, we hope to create an open standard for exchanging research ideas and putting machine learning in products”.
And it’s not alone in that: every major machine learning implementation is available for free to use and modify, meaning it’s possible to set up a simple machine intelligence with nothing more than a laptop and a web connection.
On 16 June, it announced that it was opening a dedicated Machine Learning group in its Zurich engineering office, the largest collection of Google developers outside of the US, to lead research into three areas: machine intelligence, natural language processing, and machine perception.
Of course it’s terrible: if I could train a machine to write a convincing Guardian editorial, or even a convincing sentence extract from a Guardian editorial, in two days by copying a readme and fiddling around with complex software which I don’t really understand even after having successfully used it, then my job would be much less secure than it is.
(I could have run the system on every story in the archive, but it learns better if there’s a consistent tone and style for it to emulate – something leader columns, which are all written in the voice of the paper, have).
My inbox has consistently seen one or two a day for the past year, from an “online personal styling service” which uses deep learning to match people to clothes, to a “knowledge discovery engine” which aims to beat Google at its own game.
“The fact that they’re getting faster and cheaper is part of what’s making this possible.” Right now, he says, doing machine learning yourself is like trying to go online by manually coding a TCP/IP stack.
Using algorithms partially modeled on the human brain, researchers from the Massachusetts Institute of Technology have enabled computers to predict the immediate future by examining a photograph.
A program created at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) essentially watched 2 million online videos and observed how different types of scenes typically progress: people walk across golf courses, waves crash on the shore, and so on.
Experts say deep learning, which uses mathematical structures called neural networks to pull patterns from massive sets of data, could soon let computers make diagnoses from medical images, detect bank fraud, predict customer order patterns, and operate vehicles at least as well as people.
The 'deep' in deep learning refers to using tall stacks of these layers to collectively uncover more complex patterns in the data, expanding its understanding from pixels to basic shapes to features like stop signs and brake lights.
The team's program beat rivals by a wide margin at classifying objects in photographs into categories, performing with a 15.3 percent error rate compared to a 26.2 percent rate for the second-place entry.
'A lot of people don’t have access to a specialist who can access these [diagnostic] films, especially in underserved populations where the incidence of diabetes is going up and the number of eyecare professionals is flat,' says Dr. Lily Peng, a product manager at Google and lead author on the paper.
Recent editions of the ImageNet challenge, which has added more sophisticated object recognition and scene analysis challenges as algorithms have grown more sophisticated, included hundreds of gigabytes of training data — orders of magnitude larger than a CD or DVD.
Developers at Google train new algorithms from the company's sweeping archive of search results and clicks, and companies racing to build self-driving vehicles collect vast amounts of sensor readings from heavily instrumented, human-driven cars.
Experts say, with a bit of awe, that the math operations involved aren't beyond an advanced high school student — some clever matrix multiplications to weight the data points and a bit of calculus to refine the weights in the most efficient way — but all those computations still add up.
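Those “clever matrix multiplications and a bit of calculus” can be seen in a toy example: a few steps of gradient descent using the derivative of squared error to refine a single weight. The data here is invented purely for illustration:

```python
# Fit one weight w so that w * x approximates y, by repeatedly
# nudging w in the direction that reduces the mean squared error.

data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # roughly y = 2x

w = 0.0      # initial weight
lr = 0.05    # learning rate: how far each nudge goes

for _ in range(200):
    # Weighting the data points (one multiply per point) ...
    # ... and a bit of calculus: d/dw of (w*x - y)^2 is 2*(w*x - y)*x.
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad   # step downhill on the error surface
```

After a couple of hundred steps `w` settles near 2, the slope that best fits the data. Deep networks do exactly this, but for millions of weights at once — which is where the computational bill comes from.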
'If you have this massive dataset, but only a very weak computer, you’re going to be waiting a long time to train that model,' says Evan Shelhamer, a graduate student at the University of California at Berkeley and lead developer on Caffe, a widely-used open source toolkit for deep learning.
One limitation is that it can be difficult to understand how neural networks are actually interpreting the data, something that could give regulators pause if the algorithms are used for sensitive tasks like driving cars, evaluating medical images, or computing credit scores.
- On 24 September 2021
Google's Deep Mind Explained! - Self Learning A.I.
Google's DeepMind AI Just Taught Itself To Walk
Google's artificial intelligence company, DeepMind, has developed an AI that has managed to learn how to walk, run, jump, and climb without any prior guidance ...
The 7 Steps of Machine Learning
How can we tell if a drink is beer or wine? Machine learning, of course! In this episode of Cloud AI Adventures, Yufeng walks through the 7 steps involved in ...
How Machines Learn
How do all the algorithms around us learn to do their jobs?
New Google AI Can Have Real Life Conversations With Strangers
At its 2018 I/O developer conference, Google showed off some updates coming to Google Home and Assistant. One feature — Google Duplex — can make ...
Inside a Google data center
Joe Kava, VP of Google's Data Center Operations, gives a tour inside a Google data center, and shares details about the security, sustainability and the core ...
Google and NASA's Quantum Artificial Intelligence Lab
A peek at the early days of the Quantum AI Lab: a partnership between NASA, Google, USRA, and a 512-qubit D-Wave Two quantum computer.
How to Build a PC! Step-by-step
The Race to Quantum AI
AI is changing the world around us, making its way into businesses, health care, science and many other fields. In fact, most of us are happy to work daily with ...
UNBOXING A QUANTUM COMPUTER! – Holy $H!T Ep 19
The coldest place in the known universe is on Earth! It's quantum computing company D-Wave's HQ, and they ...