AI News

How can humans keep the upper hand on artificial intelligence?

News IC | Computer and Communication Sciences

01.12.17 - EPFL researchers have shown how human operators can maintain control over a system comprising several agents that are guided by artificial intelligence.

In artificial intelligence (AI), machines carry out specific actions, observe the outcome, adapt their behavior accordingly, observe the new outcome, adapt their behavior once again, and so on, learning from this iterative process.

One machine-learning method used in AI is reinforcement learning, where agents are rewarded for performing certain actions – a technique borrowed from behavioral psychology.
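
To make the idea concrete, here is a minimal, hypothetical Python sketch of reinforcement learning in the spirit of the robot example discussed in the next paragraphs: an agent repeatedly chooses between stacking boxes indoors or fetching a box outside, and nudges its value estimates toward the points it receives. The action names and reward values are invented for illustration; this is not the EPFL or DeepMind code.

```python
import random

# Toy reinforcement-learning agent (illustrative names and rewards, not the EPFL setup).
ACTIONS = ["stack_indoors", "fetch_outside"]
REWARD = {"stack_indoors": 1.0, "fetch_outside": 2.0}   # fetching a box is worth more points

ALPHA, EPSILON = 0.1, 0.2      # learning rate and exploration rate
q = {a: 0.0 for a in ACTIONS}  # the agent's estimate of each action's value

def choose_action():
    # Epsilon-greedy: usually exploit the best-known action, occasionally explore.
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(q, key=q.get)

for _ in range(1000):
    action = choose_action()
    reward = REWARD[action]                 # the "points" signal that drives learning
    q[action] += ALPHA * (reward - q[action])

print(q)   # the agent learns that going outside to fetch boxes pays more
```

With no interruptions, the agent settles on fetching boxes outside; the next paragraph describes what happens once a human starts stopping it.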

But if, on a rainy day for example, a human operator interrupts the robot as it heads outside to collect a box, the robot will learn that it is better off staying indoors, stacking boxes and earning as many points as possible.

“The challenge isn’t to stop the robot, but rather to program it so that the interruption doesn’t change its learning process – and doesn’t induce it to optimize its behavior in such a way as to avoid being stopped,” says Guerraoui.

From a single machine to an entire AI network

In 2016, researchers from Google DeepMind and the Future of Humanity Institute at Oxford University developed a learning protocol that prevents machines from learning from interruptions and thereby becoming uncontrollable.

For instance, in the example above, the robot's reward – the number of points it earns – would be weighted by the chance of rain, giving the robot a greater incentive to retrieve boxes outside.
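
One simple way to realize that weighting, sketched below with invented numbers, is to treat an interruption as a zero reward and then rescale the observed reward by the probability of not being interrupted, so the value the robot learns for going outside reflects the task rather than the operator's habit of stopping it on rainy days. This is only a toy illustration of the idea, not the DeepMind/FHI protocol.

```python
import random

P_RAIN = 0.6   # illustrative chance of rain, i.e. of the operator interrupting the robot
ALPHA = 0.1
q_naive = {"stack_indoors": 0.0, "fetch_outside": 0.0}
q_weighted = {"stack_indoors": 0.0, "fetch_outside": 0.0}

for _ in range(20000):
    for action in ("stack_indoors", "fetch_outside"):
        interrupted = action == "fetch_outside" and random.random() < P_RAIN
        reward = 0.0 if interrupted else (1.0 if action == "stack_indoors" else 2.0)

        # Naive update: interruptions silently make going outside look less valuable.
        q_naive[action] += ALPHA * (reward - q_naive[action])

        # Weighted update: divide by the chance of not being interrupted, so the
        # estimate recovers the action's true value despite the interruptions.
        keep_prob = 1.0 - P_RAIN if action == "fetch_outside" else 1.0
        q_weighted[action] += ALPHA * (reward / keep_prob - q_weighted[action])

print("naive   :", q_naive)     # fetch_outside drifts toward 0.8, below stack_indoors
print("weighted:", q_weighted)  # fetch_outside stays near its true value of about 2.0
```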

In a second example, two self-driving cars follow one another on a road: they must reach their destination as quickly as possible – without breaking any traffic laws – and humans in the cars can take over control at any time.

If the human in the first car brakes often, the second car will adapt its behavior each time and eventually get confused as to when to brake, possibly staying too close to the first car or driving too slowly.

Giving humans the last word

This complexity is what the EPFL researchers aim to resolve through “safe interruptibility.” Their breakthrough method lets humans interrupt AI learning processes when necessary – while making sure that the interruptions don’t change the way the machines learn.

“We worked on existing algorithms and showed that safe interruptibility can work no matter how complicated the AI system is, the number of robots involved, or the type of interruption.


Once the system has undergone enough of this learning, we could install the pre-trained algorithm in a self-driving car with a low exploration rate, as this would allow for more widespread use.” And, of course, while making sure humans still have the last word.
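
The "low exploration rate" mentioned in the quote is the fraction of the time an epsilon-greedy policy tries something random instead of its best-known action. Below is a hedged sketch, with invented action names and values, of how that rate might be dialed down for a deployed, pre-trained system.

```python
import random

def epsilon_greedy(q_values, epsilon):
    """Pick a random action with probability epsilon, otherwise the best-known action."""
    if random.random() < epsilon:
        return random.choice(list(q_values))
    return max(q_values, key=q_values.get)

# Illustrative pre-trained value estimates for a driving decision.
q_values = {"brake": 0.9, "coast": 0.4, "accelerate": 0.2}

action_while_training = epsilon_greedy(q_values, epsilon=0.3)    # explores a lot while learning
action_when_deployed  = epsilon_greedy(q_values, epsilon=0.01)   # mostly exploits once installed

print(action_while_training, action_when_deployed)
```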

How robots, artificial intelligence, and machine learning will affect employment and public policy

While emerging technologies can improve the speed, quality, and cost of available goods and services, they may also displace large numbers of workers.  This possibility challenges the traditional benefits model of tying health care and retirement savings to jobs.  In an economy that employs dramatically fewer workers, we need to think about how to deliver benefits to displaced workers.  The impacts of automation technologies are already being felt throughout the economy.

If automation technologies like robots and artificial intelligence make jobs less secure in the future, there needs to be a way to deliver benefits outside of employment.  “Flexicurity,” or flexible security, is one idea for providing healthcare, education, and housing assistance whether or not someone is formally employed.  Expanding the Earned Income Tax Credit, providing a guaranteed basic income, and encouraging corporate profit-sharing are some ideas that need to be considered in the case of persistent unemployment.  Perhaps the most provocative question raised by the paper is how people will choose to spend their time outside of traditional jobs.  “Activity accounts” could finance lifelong education or volunteering for worthy causes.

The Dark Secret at the Heart of AI

Last year, a strange self-driving car was released onto the quiet roads of Monmouth County, New Jersey.

The experimental vehicle, developed by researchers at the chip maker Nvidia, didn’t look different from other autonomous cars, but it was unlike anything demonstrated by Google, Tesla, or General Motors, and it showed the rising power of artificial intelligence.

Information from the vehicle’s sensors goes straight into a huge network of artificial neurons that process the data and then deliver the commands required to operate the steering wheel, the brakes, and other systems.
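
The paragraph above describes an end-to-end network: raw pixels in, control commands out. Below is a minimal PyTorch sketch of that idea, a small convolutional network regressing a steering angle from camera frames and trained to imitate recorded human steering. The layer sizes and data are placeholders for illustration, not NVIDIA's actual architecture.

```python
import torch
import torch.nn as nn

# Minimal "pixels to steering angle" sketch (illustrative, not the real self-driving stack).
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),   # learn low-level visual features
    nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),  # combine them into road/lane cues
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 1),                                        # regress a single steering command
)

frames = torch.randn(8, 3, 66, 200)   # a batch of camera frames (random stand-ins)
angles = torch.randn(8, 1)            # "recorded" human steering angles

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss = nn.functional.mse_loss(model(frames), angles)   # imitate the human driver
loss.backward()
optimizer.step()
print(float(loss))
```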

The car’s underlying AI technology, known as deep learning, has proved very powerful at solving problems in recent years, and it has been widely deployed for tasks like image captioning, voice recognition, and language translation.

There is now hope that the same techniques will be able to diagnose deadly diseases, make million-dollar trading decisions, and do countless other things to transform whole industries.

But this won’t happen—or shouldn’t happen—unless we find ways of making techniques like deep learning more understandable to their creators and accountable to their users.

But banks, the military, employers, and others are now turning their attention to more complex machine-learning approaches that could make automated decision-making altogether inscrutable.

“Whether it’s an investment decision, a medical decision, or maybe a military decision, you don’t want to just rely on a ‘black box’ method.” There’s already an argument that being able to interrogate an AI system about how it reached its conclusions is a fundamental legal right.

The resulting program, which the researchers named Deep Patient, was trained using data from about 700,000 individuals, and when tested on new records, it proved incredibly good at predicting disease.

Without any expert instruction, Deep Patient had discovered patterns hidden in the hospital data that seemed to indicate when people were on the way to a wide range of ailments, including cancer of the liver.

If something like Deep Patient is actually going to help doctors, it will ideally give them the rationale for its prediction, to reassure them that it is accurate and to justify, say, a change in the drugs someone is being prescribed.
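
Deep Patient, as described, offers no such rationale. One generic, model-agnostic way to approximate one is permutation importance: shuffle each input feature and measure how much the black box's accuracy drops. The sketch below uses synthetic data and a made-up stand-in model; it illustrates the technique only and is not taken from the Deep Patient work.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a black-box risk model: we can only call predict(), not inspect it.
weights = rng.normal(size=5)
def predict(X):
    return 1 / (1 + np.exp(-X @ weights))   # opaque "risk score"

X = rng.normal(size=(1000, 5))
y = (predict(X) > 0.5).astype(float)

def permutation_importance(predict_fn, X, y, n_repeats=10):
    """Accuracy drop when each feature is shuffled: a rough, model-agnostic explanation."""
    base = np.mean((predict_fn(X) > 0.5) == y)
    importances = []
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])            # destroy the information in feature j
            drops.append(base - np.mean((predict_fn(Xp) > 0.5) == y))
        importances.append(np.mean(drops))
    return importances

print(permutation_importance(predict, X, y))   # larger drop = feature mattered more
```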

Many thought it made the most sense to build machines that reasoned according to rules and logic, making their inner workings transparent to anyone who cared to examine some code.

But it was not until the start of this decade, after several clever tweaks and refinements, that very large—or “deep”—neural networks demonstrated dramatic improvements in automated perception.

It has given computers extraordinary powers, like the ability to recognize spoken words almost as well as a person could, a skill too complex to code into the machine by hand.

The same approach can be applied, roughly speaking, to other inputs that lead a machine to teach itself: the sounds that make up words in speech, the letters and words that create sentences in text, or the steering-wheel movements required for driving.

The resulting images, produced by a project known as Deep Dream, showed grotesque, alien-like animals emerging from clouds and plants, and hallucinatory pagodas blooming across forests and mountain ranges.

In 2015, Clune’s group showed how certain images could fool such a network into perceiving things that aren’t there, because the images exploit the low-level patterns the system searches for.
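
Clune's fooling images were generated with a separate evolutionary search, but the underlying fragility can be illustrated with a simpler, related trick, the fast gradient sign method: nudge every pixel slightly in whichever direction most increases the model's error. The PyTorch sketch below uses a tiny untrained stand-in classifier and a random image purely for illustration, not the networks from the study.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# A tiny stand-in classifier; real fooling experiments target large pretrained networks.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))

image = torch.rand(1, 3, 32, 32, requires_grad=True)   # stand-in "photo"
label = torch.tensor([3])                               # its (pretend) correct class

loss = nn.functional.cross_entropy(model(image), label)
loss.backward()

# Fast gradient sign method: a small step in the direction that most increases the loss.
epsilon = 0.03
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

with torch.no_grad():
    print("original prediction :", model(image).argmax(dim=1).item())
    print("perturbed prediction:", model(adversarial).argmax(dim=1).item())
```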

The images that turn up are abstract (imagine an impressionistic take on a flamingo or a school bus), highlighting the mysterious nature of the machine’s perceptual abilities.

It is the interplay of calculations inside a deep neural network that is crucial to higher-level pattern recognition and complex decision-making, but those calculations are a quagmire of mathematical functions and variables.
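
A tiny numpy example of that interplay: even a toy three-layer network's output is a nested composition in which every weight can influence the result through intermediate units that correspond to no human-readable concept. The sizes and values below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=4)   # a toy input with 4 features

# Three random weight matrices standing in for trained layers.
W1, W2, W3 = rng.normal(size=(8, 4)), rng.normal(size=(8, 8)), rng.normal(size=(1, 8))
relu = lambda v: np.maximum(v, 0)

# The output is a nested composition: W3 . relu(W2 . relu(W1 . x)).
# Every entry of W1, W2 and W3 can affect it, so no single number "explains" the decision.
output = W3 @ relu(W2 @ relu(W1 @ x))
print(output)
```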

“But once it becomes very large, and it has thousands of units per layer and maybe hundreds of layers, then it becomes quite un-understandable.” In the office next to Jaakkola is Regina Barzilay, an MIT professor who is determined to apply machine learning to medicine.

The diagnosis was shocking in itself, but Barzilay was also dismayed that cutting-edge statistical and machine-learning methods were not being used to help with oncological research or to guide patient treatment.

She envisions using more of the raw data that she says is currently underutilized: “imaging data, pathology data, all this information.”

After she finished cancer treatment last year, Barzilay and her students began working with doctors at Massachusetts General Hospital to develop a system capable of mining pathology reports to identify patients with specific clinical characteristics that researchers might want to study.

Barzilay and her students are also developing a deep-learning algorithm capable of finding early signs of breast cancer in mammogram images, and they aim to give this system some ability to explain its reasoning, too.

The U.S. military is pouring billions into projects that will use machine learning to pilot vehicles and aircraft, identify targets, and help analysts sift through huge piles of intelligence data.

A silver-haired veteran of the agency who previously oversaw the DARPA project that eventually led to the creation of Siri, Gunning says automation is creeping into countless areas of the military.

But soldiers probably won’t feel comfortable in a robotic tank that doesn’t explain itself to them, and analysts will be reluctant to act on information without some reasoning.

A chapter of Dennett’s latest book, From Bacteria to Bach and Back, an encyclopedic treatise on consciousness, suggests that a natural part of the evolution of intelligence itself is the creation of systems capable of performing tasks their creators do not know how to do.

But since there may be no perfect answer, we should be as cautious of AI explanations as we are of each other’s—no matter how clever a machine seems. “If it can’t do better than us at explaining what it’s doing,” he says, “then don’t trust it.”

Becoming a Self-Driving Car Machine Learning Engineer

Prior to my career change, my professional background was in computer chip design.

The first few years were invigorating: I was learning industry best-practices, contributing to products that millions of people use, and I felt like I was making progress professionally.

However, around late-2014 I started feeling the semiconductor industry was stagnating, and I kept hearing news of consolidation within the industry.

Fortunately, during that time MOOCs were growing in popularity, so I took advantage and learned about web development, Android development, machine learning, and artificial intelligence — all through MOOCs.

After work and on weekends, I would study Andrew Ng’s machine learning class on Coursera, read posts on the /r/machinelearning sub-Reddit (mostly being totally confused), and read ML tutorials online (such as Andrej Karpathy’s blog posts).

Financially, I had 2–3 years worth of living expenses in liquid savings, so it was feasible to quit my job and study full-time.

In September 2016, something interesting happened: Udacity announced a new 9-month Nanodegree in self-driving cars.

Since the first three months of the Nanodegree would be focused on deep learning and computer vision, I felt I could start applying for jobs in that area soon enough.

By mid-December 2016, I had completed the first 3 projects of the SDCND: basic lane detection, traffic sign classification, and behavioral cloning.

I always found object detection demos to be visually cool, so I decided to do a deep-learning-based object detection project (SDCND’s 5th project is actually vehicle detection, but it doesn’t use deep learning techniques).

Ultimately, I spent 4 weeks creating a traffic sign detection project, implementing the popular “SSD” algorithm from scratch in TensorFlow.

For my resume and LinkedIn profile, my main struggle was to craft a concise yet compelling summary statement, as I was a non-standard applicant for sure.

After much introspection plus great feedback from Udacity career services, I settled on a summary statement. Another great opportunity came when fellow SDCND student Patrick Poon created the Boston Self Driving Cars Meetup group.

The application channels I used were LinkedIn Jobs, AngelList, applying directly on companies’ websites, a local 3rd party recruiter in Boston, and Udacity’s career services.

The technical interviews covered machine learning (and deep learning) concepts, how ML/DL applies to computer vision, and “traditional” computer vision concepts (perspective transform, edge detection, line detection, etc.).
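
For those “traditional” computer vision topics, the classic lane-finding recipe from the Nanodegree's first project combines edge detection and line detection. Below is a hedged OpenCV sketch with a placeholder file name and typical starting parameter values, not the author's actual code.

```python
import cv2
import numpy as np

# Placeholder path; replace with a real dashcam frame before running.
image = cv2.imread("road.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
blurred = cv2.GaussianBlur(gray, (5, 5), 0)     # smooth out sensor noise

edges = cv2.Canny(blurred, 50, 150)             # keep strong intensity gradients

# Probabilistic Hough transform: fit line segments to the edge pixels.
lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=40,
                        minLineLength=60, maxLineGap=25)

if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        cv2.line(image, (x1, y1), (x2, y2), (0, 0, 255), 3)   # draw candidate lane lines

cv2.imwrite("road_lanes.jpg", image)
```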

Much of the interview was also spent on discussing my past projects in deep learning and computer vision — my motivations, the process I went through, how I could improve upon the projects.

One particular question that came up repeatedly was “how did you go beyond the coursework requirements?”, or similarly “which of your projects were not part of your coursework?”.

Out of those 9 interviews, 4 of them led to final-round interviews: 2 final-round interviews for full-time jobs, 2 final-round interviews for internships.

At the end of my two-month job search, I had 2 full-time offers and 1 internship offer for self-driving car perception roles, plus 1 internship offer for a natural language understanding role (applying NLU/AI to understand medical records).

Time to get ready to move across the country and start a new career :)

It’s been a little over three weeks since I started my job at BMW, and things are looking good.

A year ago I took the plunge into the unknown, leaving my full-time job in computer chip design to study machine learning via Udacity.

The Rise of the Machines – Why Automation is Different this Time

Automation in the Information Age is different. Books we used for this video: The Rise of the Robots and The Second Machine Age.

Computational Creativity: AI and the Art of Ingenuity

Will a computer ever be more creative than a human? In this compelling program, artists, musicians, neuroscientists and computer scientists explore the future of ...

Can we build AI without losing control over it? | Sam Harris

Scared of superintelligent AI? You should be, says neuroscientist and philosopher Sam Harris -- and not just in some theoretical way. We're going to build ...

The future of artificial intelligence and self-driving cars

Stanford professors discuss their innovative research and the new technologies that will transform lives in the 21st century. At a live taping of The Future of ...

BI in the age of artificial intelligence

Equip your organization today for the future of data analytics. See how users of Microsoft Power BI, for example, can experience their data in a natural way by ...

Should Our Potential Robot Overlords Come With A Killswitch?

If, somehow, all these machines we're creating decided to rise up against humans, wouldn't it be nice to have a big, red button to shut it all down? Yes — the ...

Probabilistic Machine Learning and AI

How can a machine learn from experience? Probabilistic modelling provides a mathematical framework for understanding what learning is, and has therefore ...

72 HOUR BUILD: SELF DRIVING CAR

For more information and to sign up for the next event, visit ...

The Cognitive Era: Artificial Intelligence and Convolutional Neural Networks

Jim Hogan, Raik Brinkmann, James Gambale Jr., Chris Rowen and Drew Wingard discuss artificial intelligence and convolutional neural networks at the Diaz ...
