AI News

Podcast: AI Ethics, the Trolley Problem, and a Twitter Ghost Story with Joshua Greene and Iyad Rahwan

As technically challenging as it may be to develop safe and beneficial AI, the effort also raises some thorny questions about ethics and morality, which are just as important to address before AI becomes too advanced.

Rahwan created the Moral Machine, “a platform for gathering human perspective on moral decisions made by machine intelligence.” In this episode, we discuss the trolley problem as it applies to autonomous cars, how automation will affect rural areas more than cities, how we can address the potential inequality issues AI may bring about, and a new way to write ghost stories.

We could use this data to make better decisions, whether it’s micro-decisions in an autonomous car that takes us from A to B safer and faster, or whether it’s medical decision-making that enables us to diagnose diseases better, or whether it’s even scientific discovery, allowing us to do science more effectively, efficiently and more intelligently.

Joshua: One of the original versions of the trolley problem goes like this (we’ll call it “the switch case”): A trolley is headed towards five people, and if you don’t do anything, they’re going to be killed. But you can hit a switch that will turn the trolley away from the five and onto a side track, where it will run over one person instead.

In “the footbridge case,” the situation is different: the trolley is now headed towards five people on a single track; over that track is a footbridge, and on that footbridge is a large person wearing a very large backpack. The only way to save the five is to push that person off the footbridge into the trolley’s path, stopping the trolley but killing him.

If you’re a vision researcher and you’re using flashing black and white checkerboards to study the visual system, you’re not using that because that’s a typical thing that you look at, you’re using it because it’s something that drives the visual system in a way that reveals its structure and dispositions.

This has a structure similar to the trolley problem because you’re making similar tradeoffs between one and five people, but the decision is not being made on the spot; it’s effectively made ahead of time, when the car is programmed.

If customers don’t feel safe in those cars because of some hypothetical situation that may take place in which they’re sacrificed, that pits the financial incentives against the potentially socially desirable outcome, which can create problems.

Imagine you’re driving on the road and there is a large truck in the lane to your left, so you choose to stay a little further to the right, just to minimize risk in case the truck drifts out of its lane.

Now suppose there could be a cyclist further ahead on the right-hand side. What you’re effectively doing with this small maneuver is slightly reducing risk to yourself while slightly increasing risk to the cyclist.

Different reasonable and moral people can disagree on what the more important factors and considerations should be, and I think this is precisely why we have to think about this problem explicitly, rather than leave it purely to any single group – whether programmers, car companies, or anyone else – to decide.

Imagine there’s a cyclist in front of you going at cyclist speed and you either have to wait behind this person for another five minutes creeping along much slower than you would ordinarily go, or you have to swerve into the other lane where there’s oncoming traffic at various distances.

It’s a very hard question to answer, because the answers don’t come in the form of something you can write out in a sentence like “give priority to the cyclist.” You have to say exactly how much priority, relative to the other factors that will be in play for this decision.
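
To make that concrete, below is a minimal sketch (with entirely invented maneuvers, risk numbers, and weights; not any manufacturer’s actual policy) of how “priority” ends up as an explicit quantity: a planner can only compare candidate maneuvers by scoring them with concrete weights.

```python
# Toy illustration: "how much priority" becomes a numeric weight in a cost function.
# All names, numbers, and weights here are invented for illustration.
CYCLIST_RISK_WEIGHT = 100.0   # penalty per unit of risk imposed on the cyclist
OCCUPANT_RISK_WEIGHT = 100.0  # penalty per unit of risk to the car's occupants
DELAY_WEIGHT = 1.0            # penalty per minute of lost time

def cost(maneuver):
    """Score a candidate maneuver; lower is better."""
    return (CYCLIST_RISK_WEIGHT * maneuver["cyclist_risk"]
            + OCCUPANT_RISK_WEIGHT * maneuver["occupant_risk"]
            + DELAY_WEIGHT * maneuver["delay_minutes"])

maneuvers = [
    {"name": "wait behind the cyclist", "cyclist_risk": 0.00,
     "occupant_risk": 0.00, "delay_minutes": 5.0},
    {"name": "overtake into the oncoming lane", "cyclist_risk": 0.01,
     "occupant_risk": 0.02, "delay_minutes": 0.5},
]

best = min(maneuvers, key=cost)
print(best["name"])
```

With these made-up weights the overtake wins (cost 3.5 versus 5.0), but raise CYCLIST_RISK_WEIGHT and the choice flips; that explicit numeric trade-off is exactly what the speakers argue should not be set by any single group alone.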

But now, with autonomous vehicles that can reevaluate a situation hundreds or thousands of times per second and adjust their plan accordingly, we potentially have the luxury of making those decisions a bit better, and I think this is why things are different now.

Joshua: With the human we can say, “Look, you’re driving, you’re responsible, and if you make a mistake and hurt somebody, you’re going to be in trouble and you’re going to pay the cost.” You can’t say that to a car, even a car that’s very smart by 2017 standards.

Iyad: Economists say you can incentivize the people who make the cars to program them appropriately by fining them and engineering the product liability law in such a way that would hold them accountable and responsible for damages, and this may be the way in which we implement this feedback loop.

People are better able to complement the machines because they have technical knowledge, so they’re able to use new intelligent tools that are becoming available, but they also work in larger teams on more complex products and services.

Ariel: Josh, you’ve done a lot of work with the idea of “us versus them.” And especially as we’re looking in this country and others at the political situation where it’s increasingly polarized along this line of city versus smaller town, do you anticipate some of what Iyad is talking about making the situation worse?

Whereas for the problem we were talking about earlier – humans being divided into a technological, educated, and highly paid elite as one group, and the larger group of people who are not doing as well financially – you don’t need to look into the future to see that “us-them” divide; you can see it right now.

Iyad: I don’t think that the robot will be the “them” on their own, but I think the machines and the people who are very good at using the machines to their advantage, whether it’s economic or otherwise, will collectively be a “them.” It’s the people who are extremely tech savvy, who are using those machines to be more productive or to win wars and things like that.

If a similar trade-off were being caused by a software feature, we wouldn’t know unless we allowed for experimentation as well as monitoring – that is, unless we looked at the data to identify whether a particular algorithm makes cars very safe for their customers, but at the expense of a particular group.

Ultimately, it’s going to come down to the quality of the government that we have in place, and quality means having a government that distributes benefits to people in what we would consider a fair way and takes care to make sure that things don’t go terribly wrong in unexpected ways and generally represents the interests of the people.

With self-driving vehicles, especially in the trucking industry, I think that’s going to be the first and most obvious place where millions of people are going to be out of work, and it’s not clear what’s going to replace those jobs for them.

And as AI teaching and learning systems get more sophisticated, I think it’s possible that people could actually get very high quality educations with minimal human involvement and that means that people all over the world could unlock their potential.

Last year we launched the Nightmare Machine, which used deep neural networks and style-transfer algorithms to take ordinary photos and convert them into haunted houses and zombie-infested scenes.

And so it has basically absorbed a lot of human knowledge about what makes things spooky and scary, and the nice thing is that it generates part of a story, people can tweet back a continuation, and then humans and the AI take turns crafting the story.

The President in Conversation With MIT’s Joi Ito and WIRED’s Scott Dadich

The Dark Secret at the Heart of AI

Last year, a strange self-driving car was released onto the quiet roads of Monmouth County, New Jersey.

The experimental vehicle, developed by researchers at the chip maker Nvidia, didn’t look different from other autonomous cars, but it was unlike anything demonstrated by Google, Tesla, or General Motors, and it showed the rising power of artificial intelligence.

Information from the vehicle’s sensors goes straight into a huge network of artificial neurons that process the data and then deliver the commands required to operate the steering wheel, the brakes, and other systems.
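
As a rough illustration of what such an end-to-end pipeline can look like, here is a minimal sketch, loosely inspired by the convolutional architecture Nvidia has described for end-to-end driving; the layer sizes, input resolution, and training data below are illustrative stand-ins, not the company’s actual code.

```python
# Hedged sketch: a convolutional network that maps a camera frame directly to a
# steering command, trained by regression against steering angles recorded from
# human drivers. All sizes and data here are illustrative, not Nvidia's system.
import torch
import torch.nn as nn

class EndToEndDriver(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(                     # convolutional feature extractor
            nn.Conv2d(3, 24, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(48, 64, kernel_size=3), nn.ReLU(),
            nn.Flatten(),
        )
        self.head = nn.Sequential(                         # regression head -> steering angle
            nn.Linear(64 * 3 * 20, 100), nn.ReLU(),        # 64x3x20 feature map for 66x200 frames
            nn.Linear(100, 50), nn.ReLU(),
            nn.Linear(50, 1),                              # single output: steering command
        )

    def forward(self, frames):                             # frames: (batch, 3, 66, 200)
        return self.head(self.features(frames))

model = EndToEndDriver()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

# One illustrative training step on random tensors standing in for logged footage.
frames = torch.randn(8, 3, 66, 200)                        # batch of camera frames
human_steering = torch.randn(8, 1)                         # recorded steering angles
loss = loss_fn(model(frames), human_steering)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

The article’s point is precisely that nothing in such a trained network spells out why a given frame produces a given steering command; the behavior lives in millions of learned weights.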

The car’s underlying AI technology, known as deep learning, has proved very powerful at solving problems in recent years, and it has been widely deployed for tasks like image captioning, voice recognition, and language translation.

There is now hope that the same techniques will be able to diagnose deadly diseases, make million-dollar trading decisions, and do countless other things to transform whole industries.

But this won’t happen—or shouldn’t happen—unless we find ways of making techniques like deep learning more understandable to their creators and accountable to their users.

But banks, the military, employers, and others are now turning their attention to more complex machine-learning approaches that could make automated decision-making altogether inscrutable.

“Whether it’s an investment decision, a medical decision, or maybe a military decision, you don’t want to just rely on a ‘black box’ method.” There’s already an argument that being able to interrogate an AI system about how it reached its conclusions is a fundamental legal right.

The artist Adam Ferriss created the article’s images using Google Deep Dream, a program that adjusts an image to stimulate the pattern recognition capabilities of a deep neural network.

The resulting program, which the researchers named Deep Patient, was trained using data from about 700,000 individuals, and when tested on new records, it proved incredibly good at predicting disease.

Without any expert instruction, Deep Patient had discovered patterns hidden in the hospital data that seemed to indicate when people were on the way to a wide range of ailments, including cancer of the liver.

If something like Deep Patient is actually going to help doctors, it will ideally give them the rationale for its prediction, to reassure them that it is accurate and to justify, say, a change in the drugs someone is being prescribed.

Many thought it made the most sense to build machines that reasoned according to rules and logic, making their inner workings transparent to anyone who cared to examine some code.

But it was not until the start of this decade, after several clever tweaks and refinements, that very large—or “deep”—neural networks demonstrated dramatic improvements in automated perception.

It has given computers extraordinary powers, like the ability to recognize spoken words almost as well as a person could, a skill too complex to code into the machine by hand.

The same approach can be applied, roughly speaking, to other inputs that lead a machine to teach itself: the sounds that make up words in speech, the letters and words that create sentences in text, or the steering-wheel movements required for driving.

The resulting images, produced by a project known as Deep Dream, showed grotesque, alien-like animals emerging from clouds and plants, and hallucinatory pagodas blooming across forests and mountain ranges.

In 2015, Clune’s group showed how certain images could fool such a network into perceiving things that aren’t there, because the images exploit the low-level patterns the system searches for.

The images that turn up are abstract (imagine an impressionistic take on a flamingo or a school bus), highlighting the mysterious nature of the machine’s perceptual abilities.
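
The common trick behind Deep Dream and these neuron-visualization images is gradient ascent on the input: instead of adjusting the network’s weights, you adjust the pixels so that a chosen layer or unit responds more strongly. Below is a minimal sketch assuming a pretrained torchvision VGG16 and an arbitrarily chosen intermediate layer; both are illustrative choices, not the exact setups used for the images described above.

```python
# Hedged sketch of activation maximization (the idea underlying Deep Dream-style
# images): gradient ascent on the pixels to make a chosen layer respond strongly.
# Model, layer index, and step counts are illustrative choices.
import torch
from torchvision import models

model = models.vgg16(weights=models.VGG16_Weights.DEFAULT).eval()
for p in model.parameters():
    p.requires_grad_(False)                     # we optimize the image, not the weights

layer = model.features[:20]                     # an intermediate convolutional block

image = torch.rand(1, 3, 224, 224, requires_grad=True)   # start from random noise
optimizer = torch.optim.Adam([image], lr=0.05)

for step in range(200):
    optimizer.zero_grad()
    activation = layer(image)
    loss = -activation.mean()                   # minimizing the negative = gradient ascent
    loss.backward()
    optimizer.step()
    image.data.clamp_(0, 1)                     # keep pixel values in a valid range

# `image` now holds the kind of abstract, hallucinatory pattern the text describes:
# a picture of what the chosen layer responds to, not of anything in the world.
```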

It is the interplay of calculations inside a deep neural network that is crucial to higher-level pattern recognition and complex decision-making, but those calculations are a quagmire of mathematical functions and variables.

“But once it becomes very large, and it has thousands of units per layer and maybe hundreds of layers, then it becomes quite un-understandable.” In the office next to Jaakkola is Regina Barzilay, an MIT professor who is determined to apply machine learning to medicine.

The diagnosis was shocking in itself, but Barzilay was also dismayed that cutting-edge statistical and machine-learning methods were not being used to help with oncological research or to guide patient treatment.

She envisions using more of the raw data that she says is currently underutilized: “imaging data, pathology data, all this information.”

After she finished cancer treatment last year, Barzilay and her students began working with doctors at Massachusetts General Hospital to develop a system capable of mining pathology reports to identify patients with specific clinical characteristics that researchers might want to study.

Barzilay and her students are also developing a deep-learning algorithm capable of finding early signs of breast cancer in mammogram images, and they aim to give this system some ability to explain its reasoning, too.

The U.S. military is pouring billions into projects that will use machine learning to pilot vehicles and aircraft, identify targets, and help analysts sift through huge piles of intelligence data.

A silver-haired veteran of the agency who previously oversaw the DARPA project that eventually led to the creation of Siri, Gunning says automation is creeping into countless areas of the military.

But soldiers probably won’t feel comfortable in a robotic tank that doesn’t explain itself to them, and analysts will be reluctant to act on information without some reasoning.

“It’s often the nature of these machine-learning systems that they produce a lot of false alarms, so an intel analyst really needs extra help to understand why a recommendation was made,” Gunning says.

A chapter of Dennett’s latest book, From Bacteria to Bach and Back, an encyclopedic treatise on consciousness, suggests that a natural part of the evolution of intelligence itself is the creation of systems capable of performing tasks their creators do not know how to do.

“If it can’t do better than us at explaining what it’s doing,” he says, “then don’t trust it.”

Facebook’s head of AI wants us to stop using the Terminator to talk about AI

Yann LeCun is one of AI’s most accomplished minds, so when he says that even recent advances in the field aren’t taking us closer to super-intelligent machines, you need to pay attention.

LeCun has been working in AI for decades, and is one of the co-creators of convolutional neural networks — a type of program that’s proved particularly adept at analyzing visual data, and powers everything from self-driving cars to facial recognition.

His team’s software automatically captions photos for blind users and performs 4.5 billion AI-powered translations a day.

One of the biggest recent stories about Facebook’s AI work was about your so-called “AI robots” getting “shut down after they invent their own language.” There was a lot of coverage that badly misrepresented the research, but how do you and your colleagues react to those sorts of stories?

All you’re seeing now — all these feats of AI like self-driving cars, interpreting medical images, beating the world champion at Go and so on — these are very narrow intelligences, and they’re really trained for a particular purpose.

So for example, and I don’t want to minimize at all the engineering and research work done on AlphaGo by our friends at DeepMind, but when [people interpret the development of AlphaGo] as significant progress towards general intelligence, it’s wrong.

One thing DeepMind does say about its work with AlphaGo is that the algorithms it’s creating will be useful for scientific research, for things like protein folding and drug research.

AlphaGo Zero [the latest version of AlphaGo] has played millions of games over the course of a few days or few weeks, which is possibly more than humanity has played at a master level since Go was invented thousands of years ago.

The only way to get out of this is to have machines that can build, through learning, their own internal models of the world, so they can simulate the world faster than real time.

The example I use is when a person learns to drive, they have a model of the world that lets them realize that if they get off the road or run into a tree, something bad is going to happen, and it’s not a good idea.

We have a good enough model of the whole system that even when we start driving, we know we need to keep the car on the street, and not run off a cliff or into a tree.

But if you use a pure reinforcement learning technique, and train a system to drive a car with a simulator, it’s going to have to crash into a tree 40,000 times before it figures out it’s a bad idea.
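
That point can be reproduced even with a toy, model-free learner. The sketch below uses tabular Q-learning in an invented miniature “road” environment (the environment, rewards, and counts are made up for illustration, not LeCun’s example): because the agent has no model of the world, the only way it discovers that leaving the road is bad is by driving off it, again and again.

```python
# Hedged toy sketch: a model-free Q-learning agent on a tiny made-up "road."
# It has no internal model, so it only learns that the edges (trees) are bad
# by crashing into them repeatedly during training.
import random
from collections import defaultdict

LANES = 5                  # positions 0..4; positions 0 and 4 are off-road "trees"
ACTIONS = [-1, 0, 1]       # steer left, go straight, steer right

def simulate(pos, action):
    new_pos = pos + action
    if new_pos <= 0 or new_pos >= LANES - 1:
        return new_pos, -100.0, True    # crashed into a tree, episode over
    return new_pos, 1.0, False          # still on the road, small reward

q = defaultdict(float)                  # Q[(position, action)] -> estimated value
epsilon, alpha, gamma = 0.1, 0.5, 0.9
crashes = 0

for episode in range(5000):
    pos = LANES // 2                    # start in the middle of the road
    for t in range(20):
        if random.random() < epsilon:
            action = random.choice(ACTIONS)                      # explore
        else:
            action = max(ACTIONS, key=lambda a: q[(pos, a)])     # exploit
        new_pos, reward, done = simulate(pos, action)
        best_next = 0.0 if done else max(q[(new_pos, a)] for a in ACTIONS)
        q[(pos, action)] += alpha * (reward + gamma * best_next - q[(pos, action)])
        if done:
            crashes += 1
            break
        pos = new_pos

print(f"crashes during learning: {crashes}")
```

A learner with even a crude forward model of the road could rule out the off-road actions without ever trying them, which is the contrast LeCun is drawing.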

For example, one of the models that [Hinton] likes is one he came up with in 1985 called Boltzmann machines [...] And to him that’s a beautiful algorithm, but in practice it doesn’t work very well.

What we’d like to find is something that has the essential beauty and simplicity of Boltzmann machines, but also the efficiency of backpropagation [a calculation that’s used to optimize AI systems].

Facebook CEO Mark Zuckerberg: Elon Musk's doomsday AI predictions are 'pretty irresponsible'

While waiting for his brisket to slow cook, Zuckerberg delivered an admonition to Elon Musk, his fellow Silicon Valley billionaire, and others who sound alarm bells over artificial intelligence posing a threat to our safety and well-being.

Musk also expressed dire concern over a future shared with robots: 'I have exposure to the most cutting edge AI, and I think people should be really concerned by it,' he said.

Does Uber's Fired Self-Driving Car Guru Really Believe This Shit? [UPDATE: Yes]

Backchannel first reported on the existence of Levandowski’s church, Way of the Future, back in September, but the piece was largely a profile of his rise to prominence in Silicon Valley.

His early obsession with robots led to an education at UC Berkeley, early participation in DARPA’s self-driving vehicle Grand Challenge, the development of Lidar navigation systems, and a top position in Google’s self-driving car unit, which is now called Waymo.

Documents filed with the IRS for official recognition as a religious organization state the church’s mission as “the realization, acceptance, and worship of a Godhead based on Artificial Intelligence (AI) developed through computer hardware and software.” Levandowski is listed as the “Dean” of the church and CEO of its non-profit organization.

It’s guaranteed to happen.” But rather than sound an alarm about the enormous responsibility that this decades-long process will require, he wants people to go ahead and begin accepting their place as inferior to the AI Godhead.

Levandowski is already building up his own vernacular for his church, starting with calling “the singularity” the “Transition.” The singularity is the hypothesized point at which an artificial intelligence will hit a milestone of self-improvement and rapidly surpass human intelligence.

While prominent figures like Stephen Hawking, Elon Musk, and Bill Gates are urging the world to anticipate this moment and start creating solutions for controlling the monster we create, Levandowski thinks it’s just inevitable that we’ll serve the machine.

It may not work out, but if you’re aggressive toward it, I don’t think it’s going to be friendly when the tables are turned.” Sure, that sounds kind of reasonable, but it leaves out the fact that you don’t normally worship a problem child.

According to Backchannel, “Levandowski made it absolutely clear that his choice to make WOTF a church rather than a company or a think tank was no prank.” And one of his former engineering colleagues told Backchannel in September: “He had this very weird motivation about robots taking over the world—like actually taking over, in a military sense.”

He insists that he won’t take a salary from the church, but he’s still essentially talking about bringing laymen and experts together into some sort of organization, and he’s open to that organization working toward building AI.

Google's DeepMind AI just taught itself to walk

Google's artificial intelligence company, DeepMind, has developed an AI that has managed to learn how to walk, run, jump, and climb without any prior guidance. The result is as impressive as...

Hanson Robotics - Sophia AI Human-Like Robot Demonstration [1080p]

AI vs. AI. Two chatbots talking to each other

What happens when you let two bots have a conversation? We certainly never expected this...

2017 Audi AI - Test Drive with Humanoid Robot Sophia

2017 Audi AI test drive with Jack and Sophia. Prof. Rupert Stadler, Chairman of the Board of Management of AUDI AG, delivered a keynote speech at the United Nations “AI for Good Global Summit.”
