AI News, Big-name scientists worry that runaway artificial intelligence could pose a threat to humanity
Just down Vassar Street from Tegmark’s office is MIT’s Computer Science and Artificial Intelligence Laboratory, where robots are aplenty. Daniela Rus, the director, is an inventor who just nabbed $25 million in funding from Toyota to develop a car that will never be involved in a collision.
Rus points out that robots are better than humans at crunching numbers and lifting heavy loads, but humans are still better at fine, agile motions, not to mention creative, abstract thinking.
“There are tasks that are very easy for humans — clearing your dinner table, loading the dishwasher, cleaning up your house — that are surprisingly difficult for machines.” Rus makes a point about self-driving cars: They can’t drive just anywhere.
“We don’t have the right sensors and algorithms to characterize very quickly what happens in a congested area, and to compute how to react.” When it comes to technology, the future remains stubbornly murky;
MIT’s new robot reads your thoughts and knows when it made a mistake
Computers are exceedingly powerful machines, capable of instantly completing a litany of tasks that would take humans vastly longer to carry out.
The EEG monitors a human’s brain activity, and using a set of proprietary machine-learning algorithms developed by the group, the system can analyze a person’s brain waves.
The group set up the Baxter to complete a simple task: pick up spray-paint cans and spools of wire and drop them into correspondingly labeled buckets.
When a human, wired up to the EEG monitor, notices the robot is about to put an object in the wrong bucket, the robot changes direction and drops the object where it’s supposed to go.
The group will present the work at a conference on robotics and automation taking place in Singapore in May. They also plan to develop the system further so it can detect mistake signals from humans more clearly, and potentially to create a system that allows the robot to get deeper feedback from the human.
For example, if the robot isn’t sure it’s registered an ErrP from a human, it could continue to carry out the potentially mistaken task, and if it receives a stronger error signal, it would then make a change.
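The correct-on-stronger-signal behavior described above can be sketched in a few lines. This is a hypothetical illustration, not the MIT group's actual system: the classifier, the thresholds, and the bucket names are all invented for the example.

```python
# Hypothetical sketch of the ErrP feedback loop described above.
# classify_errp, the thresholds, and the bucket names are invented
# stand-ins, not the group's proprietary algorithms.

def classify_errp(eeg_window):
    """Toy stand-in for an ErrP classifier: returns an error-signal
    strength in [0, 1] from a window of (already feature-extracted)
    EEG values. Here we simply average them."""
    return sum(eeg_window) / len(eeg_window)

def choose_bucket(initial_bucket, other_bucket, eeg_windows, strong=0.7):
    """Carry out the pick-and-place task, switching buckets only when
    the human's error signal is strong enough.

    Below `strong`, the signal is absent or ambiguous: the robot
    continues its current action and keeps listening, exactly the
    wait-for-a-stronger-signal behavior described in the text.
    """
    target = initial_bucket
    for window in eeg_windows:
        if classify_errp(window) >= strong:
            target = other_bucket  # confident mistake signal: correct course
            break
    return target

# The human watches the robot head for the wrong bucket; the error
# signal grows clearer over successive windows, so the robot switches.
windows = [[0.1, 0.2], [0.4, 0.5], [0.8, 0.9]]
print(choose_bucket("wire", "spray-paint", windows))  # spray-paint
```

The key design point, as in the article, is that an ambiguous signal does not trigger a change: the robot commits only when the classifier's confidence crosses a threshold.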
This was patently clear in a recent case where a Tesla car in autopilot mode crashed into a barrier after failing to notice a lane shift.
Robots and the Future of Jobs: The Economic Impact of Artificial Intelligence
But we’re going to have a great time discussing “Robots and the Future of Jobs: The Economic Impact of Artificial Intelligence.” So I’ll start with simple introductions, and then we’ll lay out some definitions about the kinds of terms that will be involved in this conversation.
And the definition that many accept is it’s the development of computer systems able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and even translation between languages.
And the reason we are here is because we have extraordinary computation power, we have good sensors and controllers, but also have good algorithms for making maps, for localizing in a map, for planning paths, in general for decision-making.
MANYIKA: Well, I would just add that one of the things that has brought artificial intelligence to the public consciousness in the last five or six years, in addition to what Dr. Rus just described, is the fact that we’ve had some fairly spectacular demonstrations, things that have made this quite real for most people.
the fact that we can now do visual perception and machine-based reading of visual data at error rates that are actually better than human capability.
I think when people talk about narrow AI—back to definitions—they typically are talking about specialized machines or algorithms that can do particular, specific tasks very, very well—reading an MRI image or driving a car autonomously, but very specific tasks.
When people talk about AGI, which is still a term to be developed and defined, they’re talking mostly about trying to build machines and algorithms that can work on a variety of generalized problems.
In our case, what we’ve been trying to understand in our research is the impact of all of this on jobs, on work, and also on wages, as these technologies play out in the real world.
So, still in a conversation like the one you would have with Siri, they will now help you, for instance, to open an insurance policy, to file an insurance claim, or maybe to resolve a bill with your mobile phone provider.
Would you like to lay out what you’ve found over the last couple of years you’ve been working on this, what this means for—in terms of full automation but also partial automation, and how this can impact not just jobs but also wages?
And so, when you go through that and you ask the question, so, how many of these tasks could actually be automated in the next decade with currently available technology, the conclusion we got to was something like 45 percent of activities and tasks that are conducted in the economy could be automated.
And before you conclude that that is how many activities will go away, there are other considerations to take into account: the cost of that automation, the labor-supply dynamics that are alternatives to automating, and some of the regulatory and social-acceptance questions—(audio break).
However, what we did find is that even though it’s only 5 percent, if you look at the different skill categories, the middle-skill category is the category of skills and jobs that will be automated the most.
Now, to put that in historical context, if you look back over the last five decades, we’ve already been automating middle-skill jobs at pretty high rates, and that rate has been hovering around 8 to 10 percent. But what we did find is that with current technology, that historical rate of automation, about 8 to 10 percent every decade, is probably going to go up to about 15 to 20 percent.
The second finding, which I think in some ways may be even more important, is that even when you don’t have full automation of a job in its entirety, something like 30 percent of activities in about 60 percent of jobs could be automated.
So I think this change in jobs is a much bigger effect, and it has implications also for the kinds of skills, capabilities that people are going to have to have if they’re going to work alongside machines.
The first thing is that when Amelia starts to take over conversations, there are a lot of tasks still left for humans that normally don’t get done, like spending more time proactively with clients, or spending more time on advising if you talk, for instance, about the financial sector.
You need people in what I would call a cognitive center of excellence, who will manage all these systems and machines, make sure that they keep on learning, and make sure that they stay within regulations and are compliant with what the company really wants.
If you think even bigger, I would also see in the near future that people can start spending more time on other things. If you’re, for instance, a doctor, that means more time taking care of patients who have more complex diseases, or even doing more research.
If you take historical examples: we all know that if you go back to 1900, the percentage of the American workforce working in agriculture was 48 percent.
The other question is that as people work with machines, and here we’re talking about people in all kinds of activities working with machines, it might actually have some interesting effects on wages.
So for instance, if you take self-driving golf carts and put them in a retirement community or an active adult community, people who lack mobility are suddenly able to go places.
MANYIKA: Well, I was going to add one other thing when it comes to autonomous cars, building on what Dr. Rus just said, which is that this also raises a whole range of questions about how humans work with machines in some profound way.
So if you now start to have partially automated driverless cars, where the car is doing the driving 90 or 99 percent of the time and the human occasionally intervenes, do we get to a point where the human’s ability to take over is so diminished that they actually can’t do it effectively?
We had an example of this with the Air France plane that crashed a few years ago after leaving South America: the autopilot threw control back to the pilots, but the pilots were not used to operating in those conditions, their skills had atrophied, and they probably did a worse job of handling the situation.
In other words, the human drives, but this guardian angel system, which can see the road and estimate what is happening on it much better than we can with our naked eyes, looks at what you want to do and at what is happening around you. If your maneuver is unsafe, say you’re about to move into a different lane without noticing someone in your blind spot, the car takes over and keeps you safe.
So this is really an exciting alternative way to think about autonomy as a tool, as a way of protecting the human driver, and ultimately as a way of combining with autonomy at rest that we’ve talked about, to the point where the car could become your friend.
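The takeover behavior described above amounts to a pass-through filter on the driver's commands. The sketch below is an invented illustration of the parallel-autonomy idea; the command names, the perception fields, and the two safety rules are assumptions for the example, not MIT's or Toyota's actual control logic.

```python
# Minimal sketch of "parallel autonomy": the human's command is passed
# through unchanged unless the car judges the maneuver unsafe.
# All names and rules here are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Perception:
    vehicle_in_blind_spot: bool  # e.g., from lidar/camera tracking
    lane_clear_ahead: bool

def guardian_angel(human_command, perception):
    """Return the command actually sent to the actuators.

    The human drives; the system only overrides when the requested
    maneuver conflicts with what the sensors see.
    """
    if human_command == "change_lane" and perception.vehicle_in_blind_spot:
        return "hold_lane"   # take over: unsafe lane change
    if human_command == "accelerate" and not perception.lane_clear_ahead:
        return "brake"       # take over: obstacle ahead
    return human_command     # otherwise stay hands-off

# Driver tries to merge without noticing a car in the blind spot:
print(guardian_angel(
    "change_lane",
    Perception(vehicle_in_blind_spot=True, lane_clear_ahead=True),
))  # hold_lane
```

The design point matches the article's framing: by default the system does nothing, so the human remains the driver, and autonomy is exercised only in the narrow window where the human is about to make a mistake.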
So despite all the human biases, I would say it’s still much better that, before something goes live, before it is really offered as advice or support, humans have approved all the learning mechanisms in there, because you would rather have something you can explain than something you cannot explain when things happen.
So now you can train a machine which might serve not 10 people, as a human being would, but thousands or hundreds of thousands of people. It could potentially go global, if you think about the scale some of the new technology providers really have.
MANYIKA: But I think one of the things they’re really going to need to think through is that while humans have biases, the machines will too, partly because of the way machine learning actually works: there’s typically training data.
We saw the example of what happened with some of the bots that were picking up traffic and information on the internet and ended up with biases, in some cases ones that were discriminatory.
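A toy example makes the mechanism concrete: the simplest possible learner, one that always predicts the most frequent label in its training set, faithfully reproduces whatever skew that data contains. The data and labels below are invented purely for illustration.

```python
# Illustrative sketch (not any specific production system) of how bias
# enters through training data rather than through explicit programming.

from collections import Counter

def train_majority_model(training_labels):
    """Return a model that always predicts the most common label seen
    in training -- the simplest possible learner."""
    most_common_label, _ = Counter(training_labels).most_common(1)[0]
    return lambda _example: most_common_label

# Training data scraped from a skewed source: nine hostile comments
# for every friendly one.
scraped = ["hostile"] * 9 + ["friendly"]
model = train_majority_model(scraped)

# The model now labels everything hostile. The bias was learned from
# the data, not written into the code.
print(model("a perfectly pleasant message"))  # hostile
```

Real systems learn far richer patterns, but the failure mode is the same one the panel describes: the model can only reflect the data it was trained on.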
There are other questions that some have posed in the form of the classic trolley problems. These come mostly from philosophy: if you couldn’t stop a train or a car and had to run over either five people or one child, which way would you go?
And those questions now become quite real in the case of autonomous systems, because the algorithms can, so to speak, slow down time, and somebody is going to have to program how to think about that choice and how to deal with it.
Then you have another question, which is what you might describe as the detection problem. One of the interesting things about machine learning and AI is that many of these systems have arguably passed what you might call the Turing test, the point where you can no longer tell whether you’re dealing with a bot or a human being. So how do we know when these systems are being applied and used?
I’m not worried about those, because I think they’re so far away, but—

RUS: I would like to jump in to say something about the trolley problem, if I may. So the trolley problem is about whether to kill one person or five, maybe one young child or five elderly people.
But I would like to be optimistic and say that our technology will advance so that the trolley problem simply goes away, because our sensors would know that the kid is running around the corner, and would see the other group of people as well.
It will take some time, because the perception systems of robot cars are not yet ready to deal with the kind of complexity and fast decision-making that road situations require.
MANYIKA: The only comment I would add is to agree with Dr. Rus. I actually like this idea of a guardian angel a lot, because where we are likely to see automation is, in fact, in these highly routine activities, and also where you don’t want errors to happen.
And it turns out that much of truck driving, particularly long-haul truck driving, is actually fairly structured, more structured than driving on a city street.
But one of the other things machine learning and artificial intelligence are going to do is allow us to make some quite spectacular breakthroughs in several areas where our ability to do so has been limited by human cognition.
Think about some of the problems in—when it comes to discovery in the life sciences, where you’re trying to understand patterns of how genomics and synthetic biology work, which are very difficult to analyze and actually understand with conventional methods.
If you think about putting AI on mobile devices distributed via a global cloud, you can give people who normally don’t have easy access to medical advice something like a basic practitioner, and you can give the many people who have been cut off from financial advice by banks a personal financial adviser to help them through their day-to-day finances as well.
RUS: So if I could say it in different words: any job that requires understanding massive amounts of information that exists in books, anywhere you have experts, could be enhanced by current natural language processing technology, which can read those books much faster than a human could, understand what’s in them, and extract the salient information. Doctors could then benefit from case studies they don’t personally know about, and lawyers could benefit from decisions they don’t know about.
But coming back to the results of our election, which were fueled in large part by people who are not the highly educated and highly skilled but the lower-skilled and less educated, who feel left behind: while the guardian angel idea is wonderful, lovely, and hopefully it works, clearly a lot of jobs have been lost to globalization and automation, and more will be lost.
I’d like to hear more about what we are going to do, as a policy matter, to address that population, which feels left behind and is probably going to fall further behind unless they all have guardian angels.
And as you open up the possible solution space, I think we have to grapple with the question of, in a world of abundance (assuming, of course, that these technologies are going to create all kinds of abundance), how do we take care of the least of us who don’t have that?
I think it’s a fair question to ask how we create new ways for people to earn incomes, ways that may be different from how we have thought about earning incomes in the past.
I think it’s not lost on me that if you look at the last hundred years, even for an economy like the United States, the share of national income that goes to wages has actually been declining, because we now combine labor and capital to get the outputs that we need.
RUS: So if I could just add one thing about education, I would say that it is very important for our country to consider training young kids in computational thinking and in data science, starting in kindergarten and first grade and going through graduation.
So, for instance, for manufacturing, we have recently introduced a number of low-cost products that are meant to be programmed by demonstration and to make an impact on small to medium-sized organizations, where a little bit of automation, reconfigured frequently enough to keep up with changing products, will make a big difference.
MANYIKA: The only thing I would add is that while that certainly should be the goal, every time you lower the skills required to operate these systems, you also expand the pool of people who can do that work, and you’re probably going to put pressure on wages for those activities.
VAN BOMMEL: Maybe from a different perspective—(inaudible)—I also see that the AI platforms themselves are becoming easier to train, so that you don’t need a super computer scientist to do all the training; you can give the people who normally do the job on a day-to-day basis the ability to train these systems as well.
And it’s extraordinary to me to see that the technology, and human-machine interaction with it, have advanced to the point where people can use it in substantive ways, whether or not they know how to program or understand the machine, just like with driving.
But if a car shows up with nobody in it, how do you identify yourself to the car? And how does the car know that you’re in, but that it should also wait for your daughter, who is lingering, maybe halfway inside the car?
There are so many issues about human-machine interaction that are not currently studied, and at some point we will need to study these problems and find solutions in order to have comprehensive systems that can cope with both people and robots.
RUS: So, in the guardian angel space, a different way to pose the same question would be to ask: in parallel with the software that controls the car, can we also provide a system that explains what the car is doing, or why the car has made a certain decision?
Could the car evaluate its own dynamics and then say: given my current dynamics and the map I just consulted, I thought taking this turn was safe, but it turns out I was using a dated map, and for that reason the curb is no longer where I thought it was.
But if you’re applying the system to, say, the criminal justice system, where you want to be able to explain why it is that the judgment was made this way as opposed to that way for accountability and transparency reasons, then we have a problem, I think.
The second part of what you’re asking, which in my mind at least is still a faraway problem, is really the one of superintelligence. There it is in principle possible that you could have emergent behaviors that were not actually programmed in, where the machine decides to set its own goals or rewrite its own program.
So if you look at the AlphaGo program and ask how it could beat a grand master, I would say the machine won because it had access to all the games in the world, it played against itself for a long time, and the computer hardware was fast enough to deal with that particular size of board.
And the machine that played Go is probably not able to play chess or poker or any other game, because it is narrowly trained on that particular task.
VAN BOMMEL: Maybe also building on that, I think it depends a lot on the type of use case. Where you want to have control, you apply certain types of learning so that you can keep control until the machine can explain its own reasoning and you can identify why it has taken certain decisions.
FARMER: I think we’ve heard that, net-net, the panelists believe artificial intelligence is a force for good, but there are some real questions, particularly around the future of work and the impact on jobs and wages.
Self-Driving Car 'Guardian Angels' Protect You From Yourself
On a fine day in a parking lot outside Boston, a man holding a smartphone steps in front of a 2015 Prius.
But a bunch of electronics hacked into the car brings it to a safe stop anyway—the system had already been tracking the pedestrian for some time using lasers and a camera.
They are trying to prove that there’s a different way to use robotic vehicles to improve people's lives than the driverless taxi vision espoused by some automakers and tech giants, such as Alphabet and Uber.
But it’s designed to test a concept dubbed “parallel autonomy,” where a human still drives, and the car’s computer only takes over when the meatbag behind the wheel is about to mess up.
(It was partly funded by Toyota, which has said it is exploring the guardian angel concept itself.) Rus and her team think parallel autonomy could start saving lives sooner than cars in which we humans are just dumb cargo, because the technical and regulatory hurdles are lower.
“We need to drive in rain, in snow, in heavily congested areas—this is an intermediate step we can take to make driving safer in the meantime.” The MIT team tricked out their Prius with a camera, 4 lidars, high-end GPS, and electronics that sense what a driver is doing with the controls.
(They have ready access to a more challenging proving ground: Boston ranks rock bottom in Allstate’s Best Drivers Report of 200 US cities.) The guardian angel concept is literally millions of miles of testing behind programs fixated on the idea of making human drivers obsolete.
Anyone who endured a congested commute today may find that hard to believe, and it’s hardly surprising that a car company at risk of disruption might claim the status quo will endure.
Jobs Are Going Extinct. But That Doesn’t Mean We Have To.
If you work in an ad agency, a robot is probably going to take your job.
If you drive a taxi, or work for a ride-hailing service like Uber, a robot is definitely going to take your job — and will probably do so in the next couple of years.
New reports continually add to the list of the soon-jobless, saying that automation is going to steal work from lawyers, from writers, and even from the information technology experts developing and installing computer systems today.
Just as the agricultural, industrial, and cyber revolutions led to fundamental shifts in how we live and work, so is the age of autonomy reshaping the very fabric of our society.
When asked how she thinks governments can ensure that AI advances do, in fact, benefit humans, Rus stated that such guarantees will require sweeping shifts in how we think about education.
Instead of jumping to killer robots and job loss, we need to provide alternative ways of thinking that allow people to understand the benefits of AI.
Though level 5 autonomy is not quite here yet, and autonomous cars are not yet ready for public roads, Rus pointed out that driverless cars are already capable of driving people around in places with less congestion and fewer hazards, such as parks and retirement communities.
To wit, in her presentation Rus showed a short video of a golf cart-like car picking up an elderly passenger and driving her from her retirement home to meet her friend for lunch on the boardwalk nearby.
(This incited a quiet laugh from philosopher Nick Bostrom, beside me, at her vision of a world where autonomous cars don’t replace human drivers.)
Rus also noted in her talk that AI systems in health and medicine make a number of errors when analyzing medical images for signs of cancer: in 7.5 percent of cases, to be exact.
it is up to us to decide how we interact with them, how we govern them, and how we set the rules for machines — as well as the necessary legislation to govern humanity — to ensure that all of these advances are for the greater good.