AI News

BOOK REVIEW: Big-name scientists worry that runaway artificial intelligence could pose a threat to humanity

Just down Vassar Street from Tegmark’s office is MIT’s Computer Science and Artificial Intelligence Laboratory, where robots abound. Daniela Rus, the director, is an inventor who recently secured $25 million in funding from Toyota to develop a car that will never be involved in a collision.

Rus points out that robots are better than humans at crunching numbers and lifting heavy loads, but humans are still better at fine, agile motions, not to mention creative, abstract thinking.

“There are tasks that are very easy for humans — clearing your dinner table, loading the dishwasher, cleaning up your house — that are surprisingly difficult for machines,” Rus says. She also makes a point about self-driving cars: they can’t drive just anywhere.

“We don’t have the right sensors and algorithms to characterize very quickly what happens in a congested area, and to compute how to react.” The future is implacably murky when it comes to technology.

MIT’s new robot reads your thoughts and knows when it made a mistake

Computers are exceedingly powerful machines, capable of instantly completing tasks that would take humans vastly longer to carry out.

The EEG monitors the human’s brain activity, and, using a set of machine-learning algorithms developed by the group, the system analyzes the person’s brain waves.

The EEG monitor picks up on signals called “error-related potentials” (ErrPs) that the brain outputs when we notice a mistake, which Rus said were one of the easiest types of brain signals to detect from “an output that is mostly a big noise.” The system detects a change in ErrPs, then reacts and changes its course of action.

They also plan to further develop the system to more clearly detect mistake signals from humans, and potentially create a system that will allow the robot to get deeper feedback from the human.

For example, if the robot isn’t sure it’s registered an ErrP from a human, it could continue to carry out the potentially mistaken task, and if it receives a stronger error signal, it would then make a change.
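To make that loop concrete, here is a minimal Python sketch of the confidence-based flow described above. It assumes a classifier trained on labeled EEG epochs; the sampling rate, window length, threshold, and function names are illustrative assumptions, not CSAIL’s actual pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy stand-in for the pipeline described above: classify a short EEG
# epoch recorded just after the robot acts, and flag an error-related
# potential (ErrP). All constants here are illustrative assumptions.

FS = 256                # assumed EEG sampling rate (Hz)
EPOCH = int(0.5 * FS)   # 500 ms window following the robot's action

def train_errp_classifier(epochs, labels):
    """epochs: (n_trials, EPOCH) array of filtered EEG; labels: 1 = ErrP."""
    clf = LogisticRegression(max_iter=1000)
    clf.fit(epochs, labels)
    return clf

def robot_step(clf, epoch, action, alternative, strong=0.8):
    """One control step: switch course only on a confident ErrP. On a
    weak or ambiguous signal, keep going and rely on the stronger
    "secondary" error signal described above to trigger the change."""
    p_err = clf.predict_proba(epoch.reshape(1, -1))[0, 1]
    return alternative if p_err >= strong else action
```

In a yes-or-no situation of the kind the article mentions, `action` and `alternative` would simply be the two choices available to the robot.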

The question they want to answer, according to CSAIL researcher Stephanie Gil, is essentially: “Can you have a two-way conversation with the robot?” While the system right now can pretty much only be used in yes-or-no situations, the group plans to develop it to handle more complex tasks.

More Evidence That Humans and Machines Are Better When They Team Up

Instead of just fretting about how robots and AI will eliminate jobs, we should explore new ways for humans and machines to collaborate, says Daniela Rus, director of MIT’s Computer Science and Artificial Intelligence Lab (CSAIL).

How technology will affect employment in coming years has become a huge question for economists, policy makers, and technologists.

There is some disagreement among experts about how significantly jobs will be affected by automation and AI, and how job losses will be offset by the creation of new business opportunities.

Rus pointed to the potential for AI to augment human capabilities in law and in manufacturing, where smarter automated systems might play a role in customizing and distributing goods.

For instance, Rus pointed to a project at MIT that involves using the technology to help people with impaired vision navigate in self-driving cars.

Robots and the Future of Jobs: The Economic Impact of Artificial Intelligence

But we’re going to have a great time discussing “Robots and the Future of Jobs: The Economic Impact of Artificial Intelligence.” So I’ll start with simple introductions, and then we’ll lay out definitions for some of the terms that will come up in this conversation.

And the definition that many accept is that it’s the development of computer systems able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and even translation between languages.

And the reason we are here is that we have extraordinary computation power and good sensors and controllers, but we also have good algorithms for making maps, for localizing within a map, for planning paths, and, in general, for decision-making.
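To ground the “planning paths” item in that list, here is a minimal, textbook-style A* planner over an occupancy grid; it is a generic sketch, not any specific MIT system.

```python
import heapq

def astar(grid, start, goal):
    """A* on a 2D occupancy grid (0 = free, 1 = obstacle).
    Returns a list of (row, col) cells from start to goal, or None."""
    rows, cols = len(grid), len(grid[0])
    h = lambda a, b: abs(a[0] - b[0]) + abs(a[1] - b[1])  # Manhattan distance
    frontier = [(h(start, goal), 0, start, [start])]
    seen = set()
    while frontier:
        _, cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                heapq.heappush(frontier, (cost + 1 + h((nr, nc), goal),
                                          cost + 1, (nr, nc), path + [(nr, nc)]))
    return None

# Example: plan around a single obstacle in a 3x3 map.
grid = [[0, 1, 0],
        [0, 1, 0],
        [0, 0, 0]]
print(astar(grid, (0, 0), (0, 2)))
```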

MANYIKA: Well, I would just add that one of the things that has brought AI, artificial intelligence, to the public consciousness in the last five or six years, in addition to what Dr. Rus just described, is the fact that we’ve had some fairly spectacular demonstrations, things that have made this quite real for most people.

For example, we can now do visual perception and machine-based reading of visual data at error rates that are actually much better than human capability.

I think when people talk about narrow AI—back to definitions—they typically are talking about specialized machines or algorithms that can do particular, specific tasks very, very well—reading an MRI image or driving a car autonomously, but very specific tasks.

When people talk about AGI, a term still being defined, they’re talking mostly about trying to build machines and algorithms that can work on a variety of generalized problems.

In our case, what we’ve been trying to understand in our research is the impact of all of this on jobs, on work, and also on wages as these technologies play out in the real world.

So, still in a conversation like the one you would have with Siri, they will now help you, for instance, to open an insurance policy, to file your insurance claim, or to resolve a bill with your mobile-phone provider.

Would you like to lay out what you’ve found over the last couple of years working on this, what this means in terms of full automation but also partial automation, and how this can impact not just jobs but also wages?

And so, when you go through that and ask how many of these tasks could actually be automated in the next decade with currently available technology, the conclusion we came to was that something like 45 percent of the activities and tasks conducted in the economy could be automated.

And there are other considerations to take into account before you conclude that that’s how many activities will go away: you have to take into account the cost of that automation, the labor-supply dynamics that are alternatives to automation, and what some of the regulatory and social acceptance—(audio break).

However, what we did find is that even though it’s only 5 percent, if you look at the different skill categories, it’s the middle-skill category of skills and jobs that will be automated the most.

Now, to put that in an historical context, if you look back over the last five decades, we’ve already been automating most middle-skill jobs at pretty high rates, and that rate has been hovering around 8 or 10 percent.

But what we did find is that with current technology, that historical rate of automation, about 8 to 10 percent every decade, is probably going to go up to about 15 to 20 percent.

The second finding, which I think in some ways may be even more important, is that even when you don’t have full automation of a job in its entirety, we found that something like 30 percent of activities in about 60 percent of jobs could be automated.

So I think this change in jobs is a much bigger effect, and it has implications also for the kinds of skills, capabilities that people are going to have to have if they’re going to work alongside machines.
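A toy calculation makes the gap between those two findings concrete: because jobs are bundles of tasks, a large share of activities can be automatable even when almost no job is automatable end to end. The jobs, tasks, and hours below are invented for illustration and are not McKinsey data.

```python
# Toy illustration of the task-level arithmetic above: neither job is
# fully automatable, yet a large share of activities inside each is.
# Hours per task and the automatable flags are invented examples.

jobs = {
    "paralegal": {"document search": (12, True),
                  "client advising": (18, False),
                  "form filling":    (8,  True)},
    "radiologist": {"image reading":    (15, True),
                    "patient consults": (20, False),
                    "reporting":        (5,  True)},
}

for name, tasks in jobs.items():
    total = sum(hours for hours, _ in tasks.values())
    auto = sum(hours for hours, automatable in tasks.values() if automatable)
    fully = all(automatable for _, automatable in tasks.values())
    print(f"{name}: {auto / total:.0%} of activities automatable; "
          f"fully automatable job: {fully}")
```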

The first thing is that when Amelia starts to take over conversations, there are a lot of tasks left for humans that normally don’t get done: spending more time proactively with clients, for instance, or spending more time on advising, if you’re talking about the financial sector.

You need people in what I would call a cognitive center of excellence, who will manage all these systems and machines, make sure they keep on learning, and also make sure they stay within regulations and remain compliant with what the company really wants.

If you think even bigger, I can also see, in the near future, people starting to spend more time, if you’re a doctor, for instance, taking care of patients with more complex diseases, or even doing more research.

If you take historical examples: we all know that if you went back to 1900, the percentage of the American workforce working in agriculture was 48 percent.

The other point is that as people work with machines, and here we’re talking about people in all kinds of activities working with machines, it might actually have some interesting effects on wages.

So, for instance, if you take your self-driving golf carts and put them in a retirement community or an active-adult community, then people who do not have mobility are able to go places.

MANYIKA: Well, I was going to add one other thing about autonomous cars, building on what Dr. Rus just said, which is that they also raise a whole range of questions about how humans work with machines in some profound ways.

So if you now start to have partially automated driverless cars, where the car is doing the driving 90 or 99 percent of the time and the human occasionally intervenes, do we get to a point where the human’s ability to take over is so diminished that they actually can’t intervene effectively?

We had an example of this with the Air France plane that crashed into the Atlantic a few years ago, when the autopilot handed control back to the pilots; but the pilots were not used to operating in those conditions, their skills had atrophied, and they probably did a worse job handling the situation.

In other words, the human drives, but this guardian-angel system, which can see the road and estimate what is happening on it much, much better than we can with our naked eyes, looks at what you want to do and at what is happening on the road; and if your maneuver is unsafe (say, you’re about to move into a different lane without noticing someone in your blind spot), the car takes over and keeps you safe.

So this is really an exciting alternative way to think about autonomy as a tool, as a way of protecting the human driver, and ultimately as a way of combining with autonomy at rest that we’ve talked about, to the point where the car could become your friend.
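Here is a minimal sketch of the override logic behind that guardian-angel idea, with hypothetical maneuver types and a placeholder perception check standing in for a real autonomy stack:

```python
from dataclasses import dataclass

@dataclass
class Maneuver:
    kind: str          # e.g. "lane_change_left" (hypothetical labels)
    target_lane: int

def lane_occupied(lane, sensor_tracks):
    """Placeholder perception check: is any tracked object in this lane?"""
    return any(track["lane"] == lane for track in sensor_tracks)

def guardian_angel(intended, current_lane, sensor_tracks):
    """Let the driver's maneuver through unless it is unsafe, in which
    case hold the current lane (the "takes over and keeps you safe" step)."""
    if intended.kind.startswith("lane_change") and lane_occupied(
            intended.target_lane, sensor_tracks):
        return Maneuver("hold_lane", current_lane)   # override the human
    return intended                                  # defer to the human

# Example: a car tracked in lane 2 (your blind spot) blocks the change.
print(guardian_angel(Maneuver("lane_change_left", 2), 1, [{"lane": 2}]))
```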

So I would actually say that, despite all the human biases, it’s still much better, in the end, before something goes live, before something is really offered as advice or as support, for humans to have approved all the learning mechanisms in there, because you would rather have something you can explain than something you cannot explain when things happen.

So now you actually train a machine that might serve not 10 people, as a human being would, but thousands or hundreds of thousands of people; it could potentially go global, if you think about the scale some of the new technology providers really have.

MANYIKA: But I think one of the things they’re really going to need to think through is that while humans have biases, the machines will too, partly because of the way machine learning actually works: there’s typically training data.

We saw the example of what happened with some of the bots that were picking up traffic and information on the internet and ended up with biases, in some cases discriminatory ones.
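A small synthetic demonstration of the mechanism Manyika is pointing at: train a model on historically skewed labels and it reproduces the skew, even for identical qualifications. Every number here is invented.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic "historical" data: group 1 was approved less often than
# group 0 at equal skill. A model fit on these labels learns the bias.
rng = np.random.default_rng(0)
n = 2000
group = rng.integers(0, 2, n)                 # sensitive attribute (0/1)
skill = rng.normal(0, 1, n)                   # the feature that should matter
label = (skill + np.where(group == 1, -0.8, 0.0)
         + rng.normal(0, 0.5, n)) > 0         # biased historical decisions

clf = LogisticRegression().fit(np.column_stack([skill, group]), label)
for g in (0, 1):
    # Same average skill, different group membership.
    p = clf.predict_proba([[0.0, g]])[0, 1]
    print(f"P(approve | average skill, group {g}) = {p:.2f}")
```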

There are other questions that some have posed in the form of the classic trolley problems. The trolley problems, as you know, come mostly from philosophy: if you couldn’t stop a train or a car, and you had to run over either five people or one kid, which way would you go?

And those questions now become quite real in the case of autonomous systems, because the algorithms can slow down time, so to speak, and somebody is going to have to program how to think about that choice and how to deal with it.

Then you have another question, which is one of the interesting things about machine learning and AI: what you might describe as the detection problem. Now that many of these systems have passed what you might call the Turing test (the point at which you can no longer tell whether you are dealing with a bot or a human being), how do we know when these systems are being applied and used?

I’m not worried about those, because I think they’re so far away, but—

RUS: I would like to jump in to say something about the trolley problem, if I may. The trolley problem is about whether to kill one person or five, maybe one young child or five elderly people.

But I would like to be optimistic and say that our technology will advance so that the trolley problem simply goes away, because our sensors will know that the kid is running around the corner, and we’ll see the other group of people as well.

It will take some time, because the perception systems of robot cars are not yet ready to deal with the kind of complexity and the kind of fast decision-making that road situations require.

MANYIKA: The only comment I would add is to agree with Dr. Rus; I like this guardian-angel idea a lot, actually, because where we are likely to see automation is, in fact, in these highly routine activities, and also where you don’t want errors to happen.

And so it turns out that truck driving, particularly long-haul truck driving, is actually fairly structured, more structured than driving on a city street.

But one of the things machine learning and artificial intelligence are also going to do is allow us to make some quite spectacular breakthroughs in several areas where our ability has been limited by human cognition.

Think about some of the problems of discovery in the life sciences, where you’re trying to understand patterns in how genomics and synthetic biology work, patterns that are very difficult to analyze and understand with conventional methods.

If you think about putting AI on mobile devices, distributed through a global cloud, you can give people who normally don’t have easy access to medical advice a doctor, a basic practitioner; and you can give the many people who have been cut off from financial advice by banks a personal financial adviser to help them through their day-to-day finances as well.

RUS: To say it in different words: any job that requires understanding and knowing about the massive amount of information that exists in books, any field where you have experts, could be enhanced by current technology. Natural language processing systems can read those books much faster than a human could, understand what’s in them, and extract the salient information, so that doctors can benefit from case studies they don’t personally know about, and lawyers can benefit from decisions they don’t know about.
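As a minimal sketch of the extraction step Rus describes, surfacing the most relevant case studies from a corpus too large to read, plain TF-IDF similarity already illustrates the idea; the tiny corpus and query are invented, and a real system would use much richer natural language processing.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Rank a (toy) corpus of case documents by relevance to a query, so an
# expert sees the most salient cases first instead of reading everything.
corpus = [
    "patient presented with atypical cardiac symptoms and recovered",
    "precedent on liability for automated vehicle collisions",
    "rare pediatric case of drug interaction with anticoagulants",
]
query = ["drug interaction in a young patient"]

vec = TfidfVectorizer(stop_words="english")
doc_matrix = vec.fit_transform(corpus)
scores = cosine_similarity(vec.transform(query), doc_matrix)[0]

for score, doc in sorted(zip(scores, corpus), reverse=True):
    print(f"{score:.2f}  {doc}")
```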

But coming back to the results of our election, which were fueled in large part by people who are not the highly educated and highly skilled but the lower-skilled and less-educated, who feel left behind: while the guardian-angel idea is wonderful, lovely (hopefully it works), clearly a lot of jobs have been lost to globalization and automation, and more will be lost.

I’d like to hear more about what we are going to do, as a policy matter, to address that population, which feels left behind and is probably going to be left further behind unless they all have guardian angels.

And as you open up the possible solution space, I think we have to grapple with the question of how, in a world of abundance (assuming, of course, that these technologies are going to create all kinds of abundance), we take care of the least of us, who don’t have that.

I think it’s a fair question to ask how we create new ways for people to earn incomes, ways that may be different from how we have thought about earning incomes before.

And it’s not lost on me that if you look at the last hundred years, even for an economy like the United States, the share of national income that goes to wages has actually been declining, because we now combine labor and capital to get the outputs we need.

RUS: If I could just add one thing about education: I would say it is very important for our country to consider training young kids in computational thinking and in data science, starting in kindergarten, starting in first grade, and going all the way through graduation.

So, for instance, for manufacturing, we have recently introduced a number of low-cost products that are meant to be programmed by demonstration and to make an impact on small to medium-sized organizations, where a little bit of automation, automation that can change frequently enough to keep up with the products, will make a big difference.

MANYIKA: The only thing I would add is that while that certainly should be the goal, every time you lower the skills required to operate and work with these systems, you also expand the pool of people who can do that work, and you’re probably going to put pressure on wages for those activities.

VAN BOMMEL: Maybe from a different perspective—(inaudible)—I also see that the AI platforms themselves are becoming easier to train, so that you don’t need a super computer scientist to do all the training; you can actually give people who are normally doing their job on a day-to-day basis the ability to train these systems as well.

And it’s extraordinary to me to see that the technology, and human-machine interaction with it, has advanced to the point where people can use it in substantive ways, whether they know how to program or not, whether they understand the machine or not, just like with driving.

But if a car shows up with nobody in it, how do you identify yourself to the car? And how does the car know that you’re in, but that it should also wait for your daughter, who is lingering, or maybe halfway inside the car?

There are so many issues about human-machine interaction that are not currently studied; at some point we will need solutions, and we will need to study these problems, in order to have comprehensive systems that can cope with both people and robots.

RUS: In the guardian-angel space, a different way to pose the same question would be: in parallel with the software that controls the car, can we also provide a system that explains what the car is doing, or why the car has made a certain decision?

Could the car evaluate its own dynamics and then say: given my current dynamics and the map I have just consulted, I thought that taking this turn was safe, but it turns out I am using a dated map, and for that reason the curb is no longer where I thought it was?
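One way to sketch such a parallel explanation system is to log the inputs behind each decision in a structured record that can be replayed as a sentence; the fields and wording below are illustrative assumptions, not a real vehicle interface.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class DecisionRecord:
    """Structured trace of one driving decision, kept alongside the
    controller so the car can later explain why it acted."""
    action: str
    map_version: str
    map_says_clear: bool
    speed_mps: float
    caveats: List[str] = field(default_factory=list)

    def explain(self) -> str:
        basis = "clear" if self.map_says_clear else "blocked"
        text = (f"Chose '{self.action}' at {self.speed_mps} m/s because "
                f"map {self.map_version} showed the path as {basis}.")
        return text + "".join(f" Caveat: {c}." for c in self.caveats)

rec = DecisionRecord("take right turn", "2016-09", True, 8.0,
                     caveats=["map may be dated; curb position unverified"])
print(rec.explain())
```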

But if you’re applying the system to, say, the criminal justice system, where you want to be able to explain why it is that the judgment was made this way as opposed to that way for accountability and transparency reasons, then we have a problem, I think.

The second part of what you’re asking, which in my mind at least is still a faraway problem, is really the question of superintelligence, where I think it is in principle possible that you could have emergent behaviors that were not actually programmed in: the machine decides to set its own goals, or rewrites its own program, and so forth.

So if you look at the AlphaGo program and ask how it could beat a grand master, I would say: the machine beat the grand master because it had access to all the games in the world, it then played with itself for a long time, and the computer hardware, the computation, was fast enough to deal with that particular size of board.

And the machine that played Go is probably not able to play chess or poker or any other game, because it is narrowly trained on that particular task.
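To show in miniature what “the machine played with itself” means, here is a tabular self-play learner for a much simpler game (take 1 to 3 stones from a pile; whoever takes the last stone wins). It illustrates self-play value learning only; it is nothing like AlphaGo’s actual architecture.

```python
import random

# Self-play on the take-away game: one value table is updated from the
# perspective of whichever player is to move, so the agent effectively
# plays against itself. Losing positions are the multiples of 4.
N, EPISODES, EPS, LR = 21, 20000, 0.1, 0.2
value = {s: 0.0 for s in range(N + 1)}   # value of facing s stones

for _ in range(EPISODES):
    s = N
    while s > 0:
        moves = [m for m in (1, 2, 3) if m <= s]
        if random.random() < EPS:
            m = random.choice(moves)     # explore
        else:                            # move to the worst state for the opponent
            m = min(moves, key=lambda mv: value[s - mv])
        target = 1.0 if s - m == 0 else -value[s - m]
        value[s] += LR * (target - value[s])
        s -= m

# Values near -1 at s = 4, 8, 12 show the learned losing positions.
print([round(value[s], 2) for s in range(1, 13)])
```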

VAN BOMMEL: Maybe building on that: I think it depends a lot on the type of use case where you apply certain types of learning, just to make sure that where you want to have control, you can keep control, until the machine can explain its own reasoning and you can identify why it has made certain decisions.

FARMER: I think we’ve heard that, net-net, the panelists believe artificial intelligence is a force for good, but there are some real questions, particularly around the future of work and the impact on jobs and wages.

I found this episode very interesting as I am a project manager in the robotics field.

There are some quite subtle applications I’m involved in where the automation of data acquisition is now almost complete, saving huge amounts of user time and allowing users to do more imaginative tasks for customers.

When we imagine and talk about what autonomy might bring in the future, it’s worth setting it in the context of how far we’ve come in recent years.

This was, for the rest of history, about the highest level of business activity and thinking: “it’s not what you know, it’s who you know.”

I feel old at 42 saying I’m amazed at what I’ve seen change in my lifetime, but I can see robotics becoming cheap and ubiquitous, eventually removing all our human toil, right down to the “butler.”

The question for the future will be how much our human DNA changes, and whether we choose to alter our inherited characteristics and become the first species to redesign itself into something new.


