AI News

Researchers want to teach computers to learn like humans

This means the AI agent can dynamically learn network traffic patterns and normal behavior, and thus become more effective at discovering and thwarting new attacks before significant damage is done.

'Or it would be nice if an intelligent computer assistant could aggregate thousands of news items or memos for someone, so that the process of reading that material was quicker and that person could decide almost instantly how to use it,' Rad said.

Additionally, intelligent machines could be used in medical diagnosis, which Rad says could lead to more affordable health care, as well as in other fields that require precise, deductive reasoning.

AI Is Becoming More Human: New Algorithm Lets Computers Learn From Their Mistakes

The company has released the technology as an open-source software package which includes a number of virtual environments in which two simulated robots—a robotic arm, similar to those used in manufacturing, and a robotic hand—can be put through their paces in a number of mini tasks.

These tasks include sliding a disc across a table until it hits a given target and manipulating a pen until it achieves a desired position and rotation. At first, the simulated robots are unsuccessful at completing their given tasks, but as the new algorithm kicks in, they begin to train themselves to become more effective by reframing each failure as a success.

“If we repeat this process, we will eventually learn how to achieve arbitrary goals, including the goals that we really want to achieve.” This process mimics the way we learn when trying to master new skills.
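The failure-reframing step lends itself to a short sketch. The Python below is purely illustrative, with hypothetical names rather than the released package's API: it relabels a failed episode so that the state the robot actually reached is treated as the goal, turning the attempt into a useful training example.

```python
def relabel_with_hindsight(episode):
    """Pretend the state the robot actually reached was the goal all along,
    so a missed attempt still provides a positive training signal."""
    achieved_goal = episode["states"][-1]          # where the arm ended up
    relabeled = []
    for state, action in zip(episode["states"], episode["actions"]):
        # Recompute the reward against the achieved goal: the final step of
        # the episode now counts as a success for that goal.
        reward = 1.0 if state == achieved_goal else 0.0
        relabeled.append((state, action, achieved_goal, reward))
    return relabeled

# A toy episode that missed its original target but ended in state "s2":
episode = {"states": ["s0", "s1", "s2"], "actions": ["a0", "a1", "a2"]}
for transition in relabel_with_hindsight(episode):
    print(transition)
```

The relabeled transitions can then be added to the training data alongside the original ones, which is how a stream of failures still teaches the policy how to reach goals it happened to stumble into.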

The researchers have already been using their new software to help train real physical robots, but these kinds of self-learning algorithms could prove useful in a huge range of applications.

A.I. Versus M.D.

One evening last November, a fifty-four-year-old woman from the Bronx arrived at the emergency room at Columbia University’s medical center with a grinding headache.

“The trick is to diagnose the stroke before too many nerve cells begin to die.” Strokes are usually caused by blockages or bleeds, and a neuroradiologist has about a forty-five-minute window to make a diagnosis, so that doctors might be able to intervene—to dissolve a growing clot, say.

The residents raced through the layers of images, as if thumbing through a flipbook, calling out the names of the anatomical structures: cerebellum, hippocampus, insular cortex, striatum, corpus callosum, ventricles.

Then one of the residents, a man in his late twenties, stopped at a picture and motioned with the tip of a pencil at an area on the right edge of the brain.

“The borders look hazy.” To me, the whole image looked patchy and hazy—a blur of pixels—but he had obviously seen something unusual.

Then questions and preliminary tests help eliminate one hypothesis and strengthen another—so-called “differential diagnosis.” Weight is given to how common a disease might be, and to a patient’s prior history, risks, exposures.

Variations of this stepwise process were faithfully reproduced in medical textbooks for decades, and the image of the diagnostician who plods methodically from symptom to cause had been imprinted on generations of medical students.

He would ask a patient to demonstrate the symptom—a cough, say—and then lean back in his chair, letting adjectives roll over his tongue.

Some contained a single pathological lesion that might be commonly encountered—perhaps a palm-shaped shadow of a pneumonia, or the dull, opaque wall of fluid that had accumulated behind the lining of the lung.

The radiologists were shown the three types of images in random order, and then asked to call out the name of the lesion, the animal, or the letter as quickly as possible while the MRI machine traced the activity of their brains.

In all three cases, the same areas of the brain lit up: a wide delta of neurons near the left ear, and a moth-shaped band above the posterior base of the skull.

“Our results support the hypothesis that a process similar to naming things in everyday life occurs when a physician promptly recognizes a characteristic and previously known lesion,” the researchers concluded.

A child knows that a bicycle has two wheels, that its tires are filled with air, and that you ride the contraption by pushing its pedals forward in circles.

Ryle termed this kind of knowledge—implicit, experiential, skill-based—“knowing how.” The two kinds of knowledge would seem to be interdependent: you might use factual knowledge to deepen your experiential knowledge, and vice versa.

But Ryle warned against the temptation to think that “knowing how” could be reduced to “knowing that”—a playbook of rules couldn’t teach a child to ride a bike.

Our rules, he asserted, make sense only because we know how to use them: “Rules, like birds, must live before they can be stuffed.” One afternoon, I watched my seven-year-old daughter negotiate a small hill on her bike.

This CT scan, like most, had other gray squiggles on the left that weren’t on the right—artifacts of movement, or chance, or underlying changes in the woman’s brain that preceded the stroke.

And I kept thinking, Could a machine-learning algorithm help? Early efforts to automate diagnosis tended to hew closely to the textbook realm of explicit knowledge.

These limitations became starkly evident in a 2007 study that compared the accuracy of mammography before and after the implementation of computer-aided diagnostic devices.

(Even later studies have shown problems with false positives.) Thrun was convinced that he could outdo these first-generation diagnostic devices by moving away from rule-based algorithms to learning-based ones—from rendering a diagnosis by “knowing that” to doing so by “knowing how.” Increasingly, learning algorithms of the kind that Thrun works with involve a computing strategy known as a “neural network,” because it’s inspired by a model of how the brain functions.

These digital systems aim to achieve something similar through mathematical means, adjusting the “weights” of the connections to move toward the desired output.
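The phrase “adjusting the weights of the connections to move toward the desired output” can be made concrete with a toy example. The sketch below assumes nothing from the article beyond that idea: it trains a single artificial neuron by repeatedly nudging its weights in the direction that reduces its error on made-up data.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))            # 100 examples, 3 input features
true_w = np.array([1.5, -2.0, 0.5])
y = (X @ true_w > 0).astype(float)       # desired outputs for the toy task

w = np.zeros(3)                          # connection weights, initially zero
lr = 0.1
for _ in range(500):
    pred = 1 / (1 + np.exp(-(X @ w)))    # neuron's current output
    grad = X.T @ (pred - y) / len(y)     # how far off, and in what direction
    w -= lr * grad                       # nudge weights toward desired output

print("learned weights:", np.round(w, 2))
```

A deep network does the same thing across millions of weights and many layers, but the basic move is identical: compare the current output to the desired one and adjust the weights a little at a time.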

“Perhaps a machine could do it even better.” Traditionally, dermatological teaching about melanoma begins with a rule-based system that, as medical students learn, comes with a convenient mnemonic: ABCD.
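For contrast, a rule-based “knowing that” approach like the ABCD mnemonic (Asymmetry, Border irregularity, Color variation, Diameter) can be written down directly as a checklist. The sketch below is a loose illustration with made-up thresholds and feature names, not a clinical tool; it simply shows how explicit rules differ from the learned classifier discussed next.

```python
def abcd_flag(lesion):
    """Count how many of the explicit ABCD criteria a lesion meets."""
    points = 0
    points += lesion["asymmetry_score"] > 0.5      # A: the two halves don't match
    points += lesion["border_irregularity"] > 0.5  # B: ragged, notched border
    points += lesion["num_colors"] >= 3            # C: several shades present
    points += lesion["diameter_mm"] > 6            # D: larger than about 6 mm
    return "refer for evaluation" if points >= 2 else "monitor"

print(abcd_flag({"asymmetry_score": 0.7, "border_irregularity": 0.6,
                 "num_colors": 2, "diameter_mm": 7}))   # -> "refer for evaluation"
```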

This rogues’ gallery contained nearly a hundred and thirty thousand images—of acne, rashes, insect bites, allergic reactions, and cancers—that dermatologists had categorized into nearly two thousand diseases.

And so she shifts her understanding bit by bit: this is ‘dog,’ that is ‘wolf.’ The machine-learning algorithm, like the child, pulls information from a training set that has been classified.

And, by testing itself against hundreds and thousands of classified images, it begins to create its own way to recognize a dog—again, the way a child does.” It just knows how to do it.

In June, 2015, Thrun’s team began to test what the machine had learned from the master set of images by presenting it with a “validation set”: some fourteen thousand images that had been diagnosed by dermatologists (although not necessarily by biopsy).

(The actual output of the algorithm is not “yes” or “no” but a probability that a given lesion belongs to a category of interest.) Two board-certified dermatologists who were tested alongside did worse: they got the answer correct sixty-six per cent of the time.
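Because the network emits a probability rather than a verdict, the comparison with dermatologists depends on where the decision threshold is set. The sketch below, using entirely made-up numbers, shows how a per-lesion probability becomes a yes/no call and is scored, and why shifting the threshold trades sensitivity against specificity.

```python
import numpy as np

probs  = np.array([0.91, 0.15, 0.62, 0.08, 0.77])   # P(malignant) per image
labels = np.array([1,    0,    1,    0,    0])       # ground-truth labels

threshold = 0.5
calls = probs >= threshold                            # probability -> decision
accuracy = (calls == labels).mean()
print(f"accuracy at threshold {threshold}: {accuracy:.0%}")

# Sweeping the threshold trades sensitivity against specificity, which is why
# results like these are often reported as a curve rather than a single number.
for t in (0.3, 0.5, 0.7):
    calls = probs >= t
    sens = calls[labels == 1].mean()                  # true-positive rate
    spec = (~calls[labels == 0]).mean()               # true-negative rate
    print(f"threshold {t}: sensitivity {sens:.0%}, specificity {spec:.0%}")
```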

Thrun, Esteva, and Kuprel then widened the study to include twenty-five dermatologists, and this time they used a gold-standard “test set” of roughly two thousand biopsy-proven images.

But they found that if they began with a neural network that had already been trained to recognize some unrelated feature (dogs versus cats, say), it learned faster and better.
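That warm-start strategy is now commonly called transfer learning. Here is a minimal sketch, assuming a standard Keras pretrained model as a stand-in rather than the network used in the study: freeze the already-trained feature layers and train only a small new classification head on the task at hand.

```python
import tensorflow as tf

# A generic pretrained image network stands in for "already trained to
# recognize some unrelated feature"; this is not the study's actual model.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False                      # keep the pretrained features fixed

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # e.g. benign vs malignant
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# model.fit(train_images, train_labels, validation_data=(val_images, val_labels))
```

Because the early layers already know how to detect edges, textures, and shapes, the new head needs far fewer labeled examples than training the whole network from scratch.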

Thrun hoped that people could one day simply submit smartphone pictures of their worrisome lesions, and that meant that the system had to be undaunted by a wide range of angles and lighting conditions.

“We had to crop them out—otherwise, we might teach the computer to pick out a yellow disk as a sign of cancer.” It was an old conundrum: a century ago, the German public became entranced by Clever Hans, a horse that could supposedly add and subtract, and would relay the answer by tapping its hoof.

It has effectively taught itself to differentiate moles from melanomas by making vast numbers of internal adjustments—something analogous to strengthening and weakening synaptic connections in the brain.

A bathtub would perform sequential scans as you bathe, via harmless ultrasound or magnetic resonance, to determine whether there’s a new mass in an ovary that requires investigation.

Might a medical panopticon that constantly scans us in granular—perhaps even cellular—detail, comparing images day by day, enable us to catch cancer at its earliest stages?

You cannot shout from New York to California”—Thrun and I were, indeed, speaking across that distance—“and yet this rectangular device in your hand allows the human voice to be transmitted across three thousand miles.

Just as machines made human muscles a thousand times stronger, machines will make the human brain a thousand times more powerful.” Thrun insists that these deep-learning devices will not replace dermatologists and radiologists.

It did not go down too well.” Hinton’s actual words, in that hospital talk, were blunt: “They should stop training radiologists now.” When I brought up the challenge to Angela Lignelli-Dipple, she pointed out that diagnostic radiologists aren’t merely engaged in yes-no classification.

“The role of radiologists will evolve from doing perceptual things that could probably be done by a highly trained pigeon to doing far more cognitive things,” he told me.

His prognosis for the future of automated medicine is based on a simple principle: “Take any old classification problem where you have a lot of data, and it’s going to be solved by deep learning.

There’s going to be thousands of applications of deep learning.” He wants to use learning algorithms to read X-rays, CT scans, and MRIs of every variety—and that’s just what he considers the near-term prospects.

In the future, he said, “learning algorithms will make pathological diagnoses.” They might read Pap smears, listen to heart sounds, or predict relapses in psychiatric patients.

Although computer scientists are working on it, Hinton acknowledged that the challenge of opening the black box, of trying to find out exactly what these powerful learning systems know and how they know it, was “far from trivial—don’t believe anyone who says that it is.” Still, it was a problem he thought we could live with.

“The baseball player, who’s thrown a ball over and over again a million times, might not know any equations but knows exactly how high the ball will rise, the velocity it will reach, and where it will come down to the ground.

But you could build in a system to teach the computer to achieve exactly that.” Some of the most ambitious versions of diagnostic machine-learning algorithms seek to integrate natural-language processing (permitting them to read a patient’s medical records) and an encyclopedic knowledge of medical conditions gleaned from textbooks, journals, and medical databases.

(Identifying details have been changed.) A bearded man, about sixty years old, sat in the corner concealing a rash on his neck with a woollen scarf.

In a fluorescent-lit room in the back, a nurse sat facing a computer and gave a one-sentence summary—“fifty years old with no prior history and new suspicious spot on the skin”—and then Bordone rushed into the examining room, her blond hair flying behind her.

It took her twenty minutes, but she was thorough and comprehensive, running her fingers over the landscape of moles and skin tags and calling out diagnoses as she moved.

As she wrote her notes in the back room, I asked her about Thrun’s vision for diagnosis: an iPhone pic e-mailed to a powerful off-site network marshalling undoubted but inscrutable expertise.

“Some of my patients could take pictures of their skin problems before seeing me, and it would increase the reach of my clinic.” That sounded like a reasonable response, and I remembered Thrun’s reassuring remarks about augmentation.

The phenomenon has been called “automation bias.” When cars gain automated driver assistance, drivers may become less alert, and something similar may happen in medicine.

The most powerful element in these clinical encounters, I realized, was not knowing that or knowing how—not mastering the facts of the case, or perceiving the patterns they formed.

Medical researchers would be the physicists, as removed from the clinical field as theorists are from the baseball field, but with a desire to know “why.” It’s a convenient division of responsibilities—yet might it represent a loss?

Indeed, for the past few decades, ambitious doctors have strived to be at once baseball players and physicists: they’ve tried to use diagnostic acumen to understand the pathophysiology of disease.

If more and more clinical practice were relegated to increasingly opaque learning machines, if the daily, spontaneous intimacy between implicit and explicit forms of knowledge—knowing how, knowing that, knowing why—began to fade, is it possible that we’d get better at doing what we do but less able to reconceive what we ought to be doing, to think outside the algorithmic black box?

I also know that medical knowledge emerges from diagnosis.” The word “diagnosis,” he reminded me, comes from the Greek for “knowing apart.” Machine-learning algorithms will only become better at such knowing apart—at partitioning, at distinguishing moles from melanomas.

Scanning The Future, Radiologists See Their Jobs At Risk

He's sitting inside a dimly lit reading room, looking at digital images from the CT scan of a patient's chest, trying to figure out why the patient is short of breath.

Because MRI and CT scans are now routine procedures and all the data can be stored digitally, the number of images radiologists have to assess has risen dramatically.

'Radiology, at its core, is now a human being, based on learning and his or her own experience, looking at a collection of digital dots and a digital pattern and saying 'That pattern looks like cancer or looks like tuberculosis or looks like pneumonia,' ' he says.

Big tech companies are betting the same machine learning process — training a computer by feeding it thousands of images — could make it possible for an algorithm to diagnose heart disease or strokes faster and cheaper than a human can.

As part of a UCSF collaboration with GE, Mongan is helping teach machines to distinguish between normal and abnormal chest X-rays so doctors can prioritize patients with life-threatening conditions.
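One hedged illustration of how such a normal/abnormal score might be used in practice (this is not a description of the UCSF-GE system): sort the reading worklist so the studies the model considers most likely abnormal are reviewed first.

```python
# Scores and study IDs below are invented for illustration.
worklist = [
    {"study": "CXR-1041", "p_abnormal": 0.08},
    {"study": "CXR-1042", "p_abnormal": 0.93},   # flagged as likely abnormal
    {"study": "CXR-1043", "p_abnormal": 0.41},
]

# Highest abnormality score first, so life-threatening findings are seen sooner.
for item in sorted(worklist, key=lambda s: s["p_abnormal"], reverse=True):
    print(f"{item['study']}: abnormality score {item['p_abnormal']:.2f}")
```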

'You need them working together'

The reality is this: Dozens of companies, including IBM, Google and GE, are racing to develop formulas that could one day make diagnoses from medical images.

Health care companies like vRad, which has radiologists analyzing 7 million scans a year, provide anonymized data to partners that develop medical algorithms.

The data has been used to 'create algorithms to detect the risk of acute strokes and hemorrhages' and help off-site radiologists prioritize their work, says Dr. Benjamin Strong, chief medical officer at vRad.

Chief Medical Officer Eldad Elnekave says computers can detect diseases from images better than humans because they can multitask — say, look for appendicitis while also checking for low bone density.

'When we're talking about the machines doing things radiologists can't do, we're not talking about a machine where you can just drop an MRI in it and walk away and the answer gets spit out better than a radiologist,' he says.

Why Deep Learning Is Suddenly Changing Your Life

Over the past four years, readers have doubtlessly noticed quantum leaps in the quality of a wide range of everyday technologies.

To gather up dog pictures, the app must identify anything from a Chihuahua to a German shepherd and not be tripped up if the pup is upside down or partially obscured, at the right of the frame or the left, in fog or snow, sun or shade.

Medical startups claim they’ll soon be able to use computers to read X-rays, MRIs, and CT scans more rapidly and accurately than radiologists, to diagnose cancer earlier and less invasively, and to accelerate the search for life-saving pharmaceuticals.

They’ve all been made possible by a family of artificial intelligence (AI) techniques popularly known as deep learning, though most scientists still prefer to call them by their original academic designation: deep neural networks.

Programmers have, rather, fed the computer a learning algorithm, exposed it to terabytes of data—hundreds of thousands of images or years’ worth of speech samples—to train it, and have then allowed the computer to figure out for itself how to recognize the desired objects, words, or sentences.

“You essentially have software writing software,” says Jen-Hsun Huang, CEO of graphics processing leader Nvidia, which began placing a massive bet on deep learning about five years ago.

What’s changed is that today computer scientists have finally harnessed both the vast computational power and the enormous storehouses of data—images, video, audio, and text files strewn across the Internet—that, it turns out, are essential to making neural nets work well.

“We’re now living in an age,” Chen observes, “where it’s going to be mandatory for people building sophisticated software applications.” People will soon demand, he says, “ ‘Where’s your natural-language processing version?’ ‘How do I talk to your app?

The increased computational power that is making all this possible derives not only from Moore’s law but also from the realization in the late 2000s that graphics processing units (GPUs) made by Nvidia—the powerful chips that were first designed to give gamers rich, 3D visual experiences—were 20 to 50 times more efficient than traditional central processing units (CPUs) for deep-learning computations.

Its chief financial officer told investors that “the vast majority of the growth comes from deep learning by far.” The term “deep learning” came up 81 times during the 83-minute earnings call.

I think five years from now there will be a number of S&P 500 CEOs that will wish they’d started thinking earlier about their AI strategy.” Even the Internet metaphor doesn’t do justice to what AI with deep learning will mean, in Ng’s view.

The wonderful and terrifying implications of computers that can learn | Jeremy Howard

What happens when we teach a computer how to learn? Technologist Jeremy Howard shares some surprising new developments in the fast-moving field of ...

The Rise of the Machines – Why Automation is Different this Time

Automation in the Information Age is different. Books we used for this video: The Rise of the Robots and The Second Machine Age ...

AI Beats Radiologists at Pneumonia Detection | Two Minute Papers #214

The paper "CheXNet: Radiologist-Level Pneumonia Detection on Chest X-Rays with Deep Learning" is available here: ...

Past, Present and Future of AI / Machine Learning (Google I/O '17)

We are in the middle of a major shift in computing that's transitioning us from a mobile-first world into one that's AI-first. AI will touch every industry and transform ...

A simple guide to electronic components.

By request: a basic guide to identifying components and their functions for those who are new to electronics. This is a work in progress, and I welcome feedback ...

AutoCAD - Complete Tutorial for Beginners - Part 1

CHECK OUT THE LIST OF CONTENTS HERE! In this tutorial we aim to teach the most basic tools and techniques, so that the beginner can start drawing ...

The next manufacturing revolution is here | Olivier Scalabre

Economic growth has been slowing for the past 50 years, but relief might come from an unexpected place — a new form of manufacturing that is neither what ...

Bridging the Gap Between Theory and Practice in Machine Learning

Machine learning has become one of the most exciting research areas in the world, with various applications. However, there exists a noticeable gap between ...

Build a TensorFlow Image Classifier in 5 Min

In this episode we're going to train our own image classifier to detect Darth Vader images. The code for this repository is here: ...

Everything You Need To Know About Fanuc In 20 Minutes - Global Electronic Services

Get the lowdown on all things Fanuc with this informative guide! Fanuc provides a complete range of industry-leading products and services for robotics. Learn ...