AI News: Your AI Interviewer Will See You Now

Conversation with Robin Hanson

I guess to start with, the proposition we’ve been asking people to weigh in on is whether it’s valuable for people to be expending significant effort doing work that purports to reduce the risk from advanced AI.

AI’s going to be a big fraction of the world when it shows up, so it certainly at some point is worth a fair bit of effort to think about and deal with.

You should put a fair bit of effort into any large area of life or large area of the world, anything that’s big and has big impacts.

That was a scenario where it would happen really fast, would happen at a very concentrated place and time, and basically once it starts, it happens so fast that you can’t really do much about it after that point.

That’s a perfectly plausible argument given that scenario, if you believe that it shows up in one time and place all of a sudden, fully formed and no longer influenceable.

And then I critiqued that argument in my post saying he was basically saying the agency problem, which is a standard problem in all human relationships and all organizations, is exacerbated when the agent is smart.

Of course, any large social change has the potential to produce wealth redistribution, and so I’m still less clear why this change would have bigger wealth redistribution consequences than others, or why it would happen more suddenly, or require earlier effort.

But if you guys have other particular arguments to talk about here, I’d love to hear what you think, or what you’ve heard are the best arguments aside from Foom.

they use the term ‘field building,’ where basically the idea is: AI’s likely to be this pretty difficult problem, and if we do think it’s far away, there’s still meaningful work we can do in terms of setting up an AI safety field, with an increasing number of people who have an increasing amount of what is assumed to be useful knowledge about the field.

Then there’s another assumption that goes along with that: if we investigate problems now, even if we don’t know the exact specifics of what AGI might look like, they’re going to share some common subproblems with problems that we may encounter in the future.

The example I would give to make it concrete is to imagine, in the year 1000, tasking people with dealing with various of our major problems in our society today.

Social media addiction, nuclear war, concentration of capital and manufacturing, privacy invasions by police – any major problem that you could think of in our world today, imagine tasking people in the year 1000 with trying to deal with that problem.

You’ve written, based on AI practitioners’ estimates of how much progress they’ve been making, that an outside view calculation suggests we probably have at least a century to go, if not a great many centuries, at the current rates of progress in AI.

Obviously there’s a median estimate and a mean estimate, and then there’s a probability per-unit-time estimate, say, and obviously most everyone agrees that the median or mean could be pretty long, and that’s reasonable.

For example, if it was maximally lumpy, if it just shows up at one point, like the Foom scenario, then in that scenario, you kind of have to work ahead of time because you’re not sure when.

It’s just going to take a long time; maybe it’ll take 10% less or 10% more, but it’s basically going to take that long.

It seems to be a general feature of academia. You might have thought lumpiness would vary by field, and maybe it does in some more fundamental sense, but as it’s translated into citations, it’s field-independent.

Of course, we also understand that big ideas, big fundamental insights, usually require lots of complementary, matching, small insights to make it work.

It seems to me that the most reasonable default assumption is to assume future AI progress looks like past computer science progress and even past technical progress in other areas.

That suggests to me that – I mean, AlphaGo is, say, a lump; I’m happy to admit it looks out of line with a smooth attribution of equal research progress to all teams at all times.

then the odds that we’re going to get all the rest of the way in the next five years are much less than you’d attribute to just randomly assigning say, “It’s going to happen in 200 years, therefore it’ll be one in two hundred per year.”
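[Note: for concreteness, here is the arithmetic being referenced, with the 200-year figure taken purely as the illustrative number used above.]

```latex
\[
P(\text{arrival in any given year}) = \tfrac{1}{200}
\quad\Longrightarrow\quad
P(\text{arrival within the next 5 years}) \approx \tfrac{5}{200} = 2.5\%.
\]
```

[The point being made is that, once you condition on the slow rate of progress observed so far, the five-year odds should sit well below this uniform baseline.]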

Once you understand the range of possible tasks, task environments, obstacles, issues, et cetera, once you’ve been in AI for a long time and have just seen a wide range of those things, then you have a more of a sense for “I see, AlphaGo, that’s a good job, but let’s list all these simplifying assumptions you made here that made this problem easier”, and you know how to make that list.

I might be wrong about this–my impression is that your estimate of at least a century or maybe centuries might still be longer than a lot of researchers’–and this might be because there’s this trend where people will just say 50 years about almost any technology or something like that.

I’m happy to admit that when you ask a lot of people how long it will take, they give you 40- or 50-year sort of timescales.

But I would think that, I mean, I’ve done some writing on this psychology concept called construal-level theory, which just really emphasizes how people have different ways they think about things conceived abstractly and broadly versus narrowly.

There’s a consistent pattern there, which is consistent with the pattern we are seeing here: in far mode, where you’re thinking abstractly and broadly, you tend to be more confident in simple, abstract theories that have simple predictions, and you tend to neglect messy details.

But if you take an AI researcher who has been staring at difficult problems in their area for 20 years, and you ask them, “In the problems you’re looking at, how far have we gotten since 20 years ago?,”

If we’re in a similar regime of the kind of problems we’re dealing with and the kind of tools and the kind of people and the kind of incentives, all that sort of thing, then that seems to be much more relevant.

And I guess a related question is: now, even given that it’s super unlikely, what’s the ideal number of people working on or thinking about this?

That is, whenever there’s a problem that it isn’t the right time to work on, it’s still the right time to have some people asking if it’s the right time to work on it.

Given how random academia and the intellectual world are, of course, the intellectual world is not at all optimized in terms of the number of people per topic.

That might well be true for ways in which AI problems bring up interesting new conceptual angles that you could explore, or push on concepts that you need to push on because they haven’t been generalized in that direction, or just doing formal theorems that are in a new space of theorems.

Certainly there’s a point of view from which decision theory was kind of stuck, and people weren’t pushing on it, and then AI risk people pushed on some dimensions of decision theory that people hadn’t… people just did different decision theory, not because it’s good for AI.

If you could say, “100 people will work on this as researchers, but then the rest of the people talk and think about the future” – if they can talk and think about something else, that would be a big win for me, because there are tens and hundreds of thousands of people out there on the side just thinking about the future, and so, so many of them are focused on this AI risk thing when they really can’t do much about it. They’ve just told themselves that it’s the thing that they can talk about, and to really shame everybody into saying it’s the priority.

Now of course, I completely have this whole other book, Age of Em, which is about a different kind of scenario that I think doesn’t get much attention, and I think it should get more attention relative to a range of options that people talk about.

If you’re talking about the percentage of people who think about AI risk, or talk about it, or treat it very seriously, relative to people who are willing to think and talk seriously about the future, it’s this huge thing.

I was already going to ask a follow-up just about what share of, I don’t know, effective altruists who are focused on affecting the long-term future do you think it should be?

But that has to be a pretty tentative judgment, so you can’t go too far there, because until you explore a scenario a lot, you really don’t know how extreme… basically it’s about extreme outcomes times the extreme leverage of influence at each point along the path, multiplied together, in the hope that by thinking about it earlier you could be doing things that produce that outcome.

Relatedly, I think one thing that people say about why AI should take up a large share is that there’s the sense that maybe we have some reason to think that AI is the only thing we’ve identified so far that could plausibly destroy all value, all life on earth, as opposed to other existential risks that we’ve identified.

Of course, there could be other alien sources out there, but even AI would only destroy things from our source relative to other alien sources that would potentially beat out our AI if it produces a bad outcome.

I do think there’s just a wide range of future scenarios, and there’s this very basic question: how different will our descendants be, and how far from our values will they deviate?

I mean, human values have changed enormously over a long time, we are now quite different in terms of our habits, attitudes, and values, than our distant ancestors.

And that’s just been a generic problem we’ve all had to deal with, all through history; AI doesn’t fundamentally change that fact – people are just focusing on that thing that could happen with AI, too.

When people can change people, even culturally – and especially later on, when we can change minds more directly, start tinkering, start sharing minds more directly – or just even today, we have better propaganda, better mechanisms of persuasion.

So you’ve written that there may well be no powerful general theories to be discovered revolutionizing AI, and this is related to your view that most everything we’ve learned about intelligence suggests that the key to smarts is having many not-fully-general tools.

If we just look in industry, if we look in academia, if we look in education, just look in a lot of different areas, you will find robustly that most tools are more specific tools.

Again, that’s true in things you learn in school, it’s true about things you learn on the job, it’s true about things that companies learn that can help them do things.

If tools have that sort of lumpy innovation, then if each innovation is improving a tool by some percentage, even a percentage drawn from some distribution, most of the improvements will be to small things, and therefore most of the improvements will be small.
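[Note: a minimal simulation sketch of that argument, with made-up numbers rather than anything from the conversation. It assumes tool values are heavy-tailed – many small, specific tools and a few large, general ones – and that each innovation improves one randomly chosen tool by a random percentage; most absolute gains then come out small, with a few large lumps.]

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical economy of tools: values drawn from a heavy-tailed (Pareto)
# distribution, so most tools are small and a few are very large.
tool_values = rng.pareto(a=1.5, size=10_000) + 1.0

# Each innovation improves one randomly chosen tool by a random percentage.
n_innovations = 100_000
which_tool = rng.integers(0, tool_values.size, size=n_innovations)
pct_gain = rng.uniform(0.01, 0.20, size=n_innovations)  # 1% to 20% improvements
abs_gain = tool_values[which_tool] * pct_gain

# Most absolute gains are small; a small share of innovations carries much of the total.
top_1pct_share = np.sort(abs_gain)[-n_innovations // 100:].sum() / abs_gain.sum()
print(f"median absolute gain: {np.median(abs_gain):.3f}")
print(f"share of total gain from the top 1% of innovations: {top_1pct_share:.1%}")
```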

If, of course, you thought intelligence was fundamentally different in there being fewer and bigger lumps to find, then that would predict that in the future we would find fewer, bigger lumps, because that’s what there is to find.

You might believe that there are also a few really big things, and the reason that in the past, computer science or education innovation hasn’t found many of them is that we haven’t come to the mother lode yet.

The belief that you’ll find that in intelligence innovation is related to a belief that it exists, that it’s a thing to find, which in turn relates to a belief that, fundamentally, intelligence is simple.

So there’d be the simple theory of utilitarianism, or the simple theory of even physical particles, or simple theory of quantum mechanics, or …

Because your idea of intelligence is sort of intrinsically academic – you think of intelligence as the sort of thing that is best exemplified in the best academics.

That is, human mind capacities aren’t that different from a chimpanzee’s overall, and an individual [human] who hasn’t had the advantage of cultural evolution isn’t really much better.

Now obviously there’s some difference, in the sense that it does seem hard – even though we’ve tried today to teach culture to chimps, we’ve also had some remarkable success.

Then it seems like well, if you actually look at the mechanisms of cultural evolution, the key thing is sitting next to somebody else watching what they’re doing, trying to do what they’re doing.

So that takes certain observation abilities, and it takes certain mirroring abilities, that is, the ability to just map what they’re doing onto what you’re doing.

Even our language ability seems like, well, we have modestly different structured mouths that can more precisely control sounds and chimps don’t quite do that, so it’s understandable why they can’t make as many sounds as distinctly.

The bottom line is that our best answer is that it looks like there was a threshold passed in the sort of abilities supporting cultural evolution, which included the ability to watch people, the ability to mirror them, the ability to do it yourself, the ability to tell people through language, or things like that.

At what timescale do you think people–how far out do you think people should be starting maybe the field building stuff, or starting actually doing work on AI?

But thinking about problems that could occur in the future, where you haven’t really seen the systems that would produce them or even the scenarios that would play out – that’s much more the other category of effort: just thinking abstractly about the kinds of things that might go wrong, and maybe the kinds of architectures and kinds of approaches, et cetera.

Today, cars can have car crashes, but each crash is a pretty small crash, and happens relatively locally, and doesn’t kill that many people.

With most things that go wrong in systems, things go wrong on a small scale pretty frequently, and therefore you can look at actual pieces of things that have gone wrong to inform your efforts.

Today we might say, “Okay, there’s some kind of military weapons we can build that yes, we can build them, but it might be better once we realize they can be built and then have a treaty with the other guys to have neither of us build them.” Sometimes that’s good for weapons.

That’s a newer thing today, but 1000 years ago, could people have anticipated that, and then what usefully could they have done other than say, “Yeah, sometimes it might be worth having a treaty about not building a weapon if you figure out it’d be worse for you if you both have it.”

I’m mostly skeptical that there are sort of these big things that you have to coordinate ahead of time, that you have to anticipate, that if you wait it’s too late, that you won’t see actual concrete signatures of the problems before you have to invent them.

For that you need a particular plan in front of you, and now you can walk through concrete failure modes – all the combinations of this strut breaking or this pipe bursting – and you walk through all of those.

It’s definitely true that we often analyze problems that never appear, but it’s almost never in the context of really abstract sparse descriptions of systems.

But the question is: what’s your credence that, in a world where we didn’t have these additional EA-inspired safety efforts, AI poses a significant risk of harm?

The field of AI risk kind of has that same problem, where – again, not just today but for the last 70 years or even longer – there has been a subset of people who say, “The robots are coming, and it’s all going to be a mess, and it’s now.”

That can make things worse for when it really is the right time – when we really do have the possibility of space colonization – we might well wait too long after that, because people just can’t believe it, because they’ve been hearing this for so long.

Just as a follow-up, I suppose the official line for most people working on AI safety is, as it ought to be, there’s some small chance that this could matter a lot, and so we better work on it.

AI risk, I mean, it’s got the advantage of all these people pushing and talking which has helped produce money and attention and effort, but it also means you can’t control the message.

Are you worried that this reputation effect or this impression of hyperbole could bleed over and harm other EA causes or EA’s reputation in general, and if so are there ways of mitigating that effect?

For example, I think there are really quite reasonable conservatives in the world who are at the moment quite tainted with the alt-right label, and there is an eager population of people who are eager to taint them with that, and they’re kind of stuck.

I mean, EA has just a little, low presence in people’s minds in general, so that unless it got a lot bigger, it just would not be a very attractive element to put in the story to blame those people.

This is zooming out again, but I’m curious – around AI optimism in particular, but also just in general around any of the things you’ve talked about in this interview – what sort of evidence do you think we could get now, or might plausibly see in the future, that would change your views one way or the other?

For example, forgetting his name, somebody did a blog post a few years ago right after AlphaGo, saying this Go achievement seemed off trend if you think about it by time, but not if you thought about it by computing resources devoted to the problem.

Obviously, if you could make some metric for AI progress such that you could talk about how important each advance was – with some relative weighting of different fields, different kinds of advances, and different kinds of metrics for advances – then you could have statistics tracking the size of improvements over time and whether that was changing.

I mean, I’ll also make a pitch for the data thing that I’ve been doing for the last few years, which is data on automation per job in the US – the determinants of that, how it’s changed over time, and its impact over time.

Basically there’s a dataset called O*NET, which breaks jobs in the US into 800 job categories, and for each job in the last 20 years, at some random times, some actual people went and rated the job on a one-to-five scale of how automated it was.

Then the answer is: we can predict pretty well – just something like 25 variables lets us predict half the variance in which jobs are automated, and they’re pretty mundane things, not high-tech, sexy things.

A data series like that, if you kept tracking it over time – if there were a deviation from trend, you might be able to see it; you might see that the determinants of automation were changing, that the impacts were changing.

That would be in support of, “Is it time now to actually prepare people for major labor market impacts, or major investment market impacts, or major governance issues that are actually coming up because this is happening now?” But you’ve been asking about, “Well, what about doing stuff early?”
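[Note: as a concrete illustration of the O*NET exercise described above, here is a minimal sketch. The file name and column names are hypothetical placeholders, not the actual O*NET schema; the point is just regressing the one-to-five automation rating on a couple of dozen job characteristics, checking how much variance they explain, and rerunning the exercise on later waves of ratings to watch for a break in trend.]

```python
# Sketch of the "predict automation ratings from ~25 variables" exercise.
# File name and column names are hypothetical placeholders, not the real O*NET schema.
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

df = pd.read_csv("onet_automation_ratings.csv")  # hypothetical export: one row per job

target = "automation_rating"  # the one-to-five automation score
predictors = [c for c in df.columns if c not in (target, "job_title")][:25]

model = LinearRegression()
r2 = cross_val_score(model, df[predictors], df[target], cv=5, scoring="r2")
print(f"cross-validated R^2: {r2.mean():.2f}")
# An R^2 near 0.5 would match the "predict half the variance" claim above;
# repeating this on later waves of ratings is the trend-tracking idea.
```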

But the more that you could have these expert judgments of, “for any one problem, how close are we?,” and it could just be a list of problematic aspects of problems and which of them we can handle so far and which we can’t.

Or if architecture is a factor of 10 or 100, now you can have a scenario where somebody finds a better architecture and suddenly they’re a factor of 100 better than other people.

And that’s a thing you can actually study directly by having people make systems with different architectures, put different spots of reference into it, et cetera, and see what difference it makes.

Do you think there’s empirical evidence waiting to change your mind, or do you think people are just sort of misconstruing it, or are ignorant, or just not thinking correctly about what we should make of the fact of our species dominating the planet?

Well, there’s certainly a lot of things we don’t know as well about primate abilities, so again, I’m reflecting what I’ve read about cultural evolution and the difference between humans and primates.

For example, abstraction is something we humans do, and we don’t see animals doing much of it, but this construal-level theory thing I described, and standard brain architecture, say that actually all brains have been organized by abstraction for a long time.

When you think about implementing code on the brain, you realize that, because the brain is parallel, whatever 90% of the code is, that’s going to be 90% of the volume of the brain.

A key issue with the brain is that you might find out that you understand 90% of the volume as a simple structure following a simple algorithm and still hardly understand anything about the total algorithm, because it’s all the other parts that you don’t understand – where stuff isn’t executing very often, but it still needs to be there to make the whole thing work.

You’re tempted to go by volume – to try to understand whatever volume you can opportunistically understand, because volume is visible first – but you could still be a long way off from understanding.

So periodically over the years, some high-status person will make a quip, not very thought out, at some conference panel or whatever, and they’ll be all over responding to that, sending this guy messages and recruiting people to talk to him, saying, “Hey, you don’t understand.”

I mean, you sure are relying on me to know what the main arguments are that I’m responding to, hence you’re sort of shy about saying, “And here are the main arguments, what’s your response?”

If you thought it was actually feasible to summarize people, then what you would do is produce tentative summaries, and then ask for feedback and go back and forth in rounds of honing and improving the summaries.

So yeah, there’s sort of this field building argument, and then there are arguments that if we think something is 20 years away, maybe we can make more robust claims about what the geopolitical situation is going to look like.

I don’t know if you’re familiar with iterated distillation and amplification, but it’s sort of treating this AI system as a black box, which is a lot of what it looks like if we’re in a world that’s close to the one now, because neural nets are sort of black box-y.

Treating it as a black box, there’s some chance that this approach works, where we basically take a combination of smart AIs and use that to sort of verify the safety of a slightly smarter AI, and sort of do that process, bootstrapping.
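[Note: for readers unfamiliar with the term, here is a heavily simplified toy sketch of the bootstrapping loop being described. Every function below is a hypothetical placeholder chosen for illustration; this is not Christiano’s actual proposal or a real implementation of iterated distillation and amplification.]

```python
# Toy schematic of the "amplify, distill, then verify" bootstrapping loop described above.
# All behavior here is a placeholder to show the shape of the idea, nothing more.
from typing import Callable

Agent = Callable[[float], float]  # toy "agent": maps a task number to an answer

def amplify(agent: Agent, copies: int = 4) -> Agent:
    """Combine several calls to the current agent into a slower, slightly
    more capable composite (toy version: average of perturbed calls)."""
    def composite(task: float) -> float:
        return sum(agent(task + i) for i in range(copies)) / copies
    return composite

def distill(composite: Agent) -> Agent:
    """Stand-in for training a faster model to imitate the composite."""
    return lambda task: composite(task)

def verify_safety(trusted: Agent, candidate: Agent) -> bool:
    """Use the trusted composite to vet the slightly smarter candidate on
    sample tasks, as in the 'verify the safety' step described above."""
    return all(abs(trusted(t) - candidate(t)) < 1.0 for t in range(10))

def ida_loop(agent: Agent, rounds: int) -> Agent:
    for _ in range(rounds):
        composite = amplify(agent)
        candidate = distill(composite)
        if not verify_safety(composite, candidate):
            break  # don't adopt a candidate the trusted system can't vet
        agent = candidate
    return agent

base_agent: Agent = lambda task: 2.0 * task
final_agent = ida_loop(base_agent, rounds=3)
print(final_agent(5.0))
```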

I would just say the whole issue is how plausible is it that within 20 years we’ll have human level, broad human-level AI on the basis of these techniques that we see now?

Many of the people that we’ve talked to have actually agreed that it’s taking up too much mind space, or they’ve made arguments of the form, “Well, I am a very technical person who has a lot of compelling thoughts about AI safety, and for me personally I think it makes sense to work on this.”

I mean, AI risk is focused on one relatively narrow set of scenarios, and there are a lot of other scenarios to explore, so in terms of mind space and career work that would just be to say, “There are 10 or 100 people working in this other area, I’m not going to be that …”

The more you think that a machine learning system like we have now could basically do everything, if only it were big enough and had enough data and computing power, it’s a different perspective than if you think we’re not even close to having the right machine learning techniques.

Paul Christiano has said more or less, in an 80,000 Hours interview, that he’s very unsure, but he suspects that we might be at insect-level capabilities – that if people took it upon themselves to take the compute and the resources that we have, we could do what insects do. [1] He’s interested in maybe concretely testing this hypothesis that you just mentioned, about humans and cockroaches.

So that’s the idea that your brain is 100,000 lines of code, and 90% of the brain volume is 100 of those lines, and then there’s all these little small, swirly structures in your brain that manage the small little swirly tasks that don’t happen very often, but when they do, that part needs to be there.

If you thought there were just 100 key algorithms and once you got 100 of them then you were done, that’s different than thinking, “Sure, there’s 100 main central algorithms, plus there’s another 100,000 lines of code that just is there to deal with very, very specific things that happen sometimes.”

And evolution has spent a long time searching in the space of writing that code and found these things, and there’s no easy learning algorithm that will find it without being in the environment that you were in.

I’m happy to give you a simulated house and some simulated dog food, and simulated predators, who are going to eat the insects, and I’m happy to let you do it all in simulation.

But you’ve got to show me a complicated world, with all the main actual obstacles that insects have to surviving and existing, including parasites and all sorts of things, right?

Though I do think it’s sort of an interesting project because it seems like lots of people just have vastly different sorts of timelines models, which they use to produce some kind of number.

If we’ve agreed that the outside view doesn’t support short time scales of things happening, and we say, “But yes, some experts think they see something different in their expert views of things with an inside view,”

A lot of people, when they give us numbers, are like, “this is really a total guess.” So I think a lot of the argument is either from people who have very specific compute-based models for things that are short [timelines], and then there’s also people who I think haven’t spent that much time creating precise models, but sort of have models that are compelling enough.

Robin Hanson: I think more people, even most, would say, “Yeah, from the outside, this doesn’t look so compelling.” That’s my judgement, but again, they might say, “Well, the usual way of looking at it from the outside doesn’t, but then, here’s this other way of looking at it from the outside that other people don’t use.” That would be a compromise sort of view.

That is, if there are people out there who specialize in chemistry or business ethics or something else, and they hear these people in AI risk saying there’s these big issues, you know, can the evidence that’s being offered by these insiders–

So it’s very plausible to me that there’s no particular reason to weigh the opinions of people working on this, other than that they’ve thought about it a little bit more than other people have.

[Note: I say ‘soon or far’ here, but I mean to say ‘more or less likely to be harmful’.]

Robin Hanson: Well, as a professional economist, I would say, if you have good economic arguments, shouldn’t you bring them to the attention of economists and have us critique them?

And it would be interesting if you just asked people, “Whatever your reason is, what percentage of people interested in AI risk agree with your claim about it for the reason that you do?” Or, “Do you think your reason is unusual?” Because if most everybody thinks their reason is unusual, then basically there isn’t something they can all share with the world to convince the world of it.



Your Interview With AI

If the proprietary technology that HireVue uses to evaluate the recordings concludes that a candidate does well in matching the demeanor, enthusiasm, facial expressions or word choice of current employees of the company, it recommends the candidate for the next round.

“If I’m a woman of color and trying to get in,” she said, “I would be pretty anxious about, is this system really going to assess me fairly?” The HireVue spokesperson said the company's AI assessments are actually increasing diversity at companies, because the algorithms don't notice appearance.

“Each algorithm or assessment model is trained not to ‘notice’ age, gender, ethnicity, and other personal characteristics that are irrelevant to job success, so it helps to level the playing field,” the spokesperson wrote in the email.

“The way that disabilities can affect people is very broad, and as a result some of the characteristics that people with disabilities exhibit are unlikely to exist in the AI’s training data.” “HireVue offers various accommodations for people with disabilities and is actively working with international disability groups, as well as with Integrate Autism Employment Advisors, to ensure that the tools and processes are fair and accessible,” the company spokesperson wrote.

Michael Kalish, associate director of on-campus recruiting at Baruch’s career center, says career counselors commonly suggest that students dress in a full suit and use industry-specific lingo, since it has been suggested that HireVue scans a candidate's answers for keywords.

“Not even just to be prepared for [HireVue] interviews, but just, in general, to show the interviewer that they are prepared, that they’ve done their homework, that they’re generally interested in that field.” Students can also practice for interviews using a mock interview platform called Symplicity that asks industry-specific interview questions and records their answers via webcam.

We can watch it with them and give them constructive criticism and feedback on areas that they should improve upon.” At Duke University, a document from the economics department lists typical HireVue questions (“Tell me about a time you worked on a team?” and “What does integrity mean to you?”) as well as a few tips for students.

The suggestions range from general interview advice (“Try to give structured, concise responses”) to the more technical (ridding the screen of your own image makes it easier to look into the camera), but they don’t really touch on how to nail the mannerisms of past employees other than to “act natural.” Brigham Young University Idaho is one of the few colleges that advertise mock interviews specifically for HireVue on the university’s website.

“We do a lot of work on our campus around mitigating bias in the interview and hiring process as it is already, and so we want to make sure any tool we introduce or functionality we introduce is in line with that.” Yajin Wang, a professor of marketing at the University of Maryland, says that most of the things students can do to prepare for HireVue job interviews will make them better public speakers.

The HireVue Effect

Frederick Hess, director of education policy studies at the American Enterprise Institute, said that while he is “not impressed” by HireVue’s platform, which he called “dystopian” and “pseudoscience,” the overall trend toward assessment-based hiring may undermine the economic value of a college degree.

“I think these efforts to build new tools, hiring platforms, hiring systems, which will stand up to legal scrutiny because they are specific and clearly attached to the job you’re going to do, and can be defended that they are non-prejudicial, that stuff should worry the heck out of colleges,” he said.

“If you can apply without having to go through all the stuff for the degree, then employers can pay less, and you still feel like you’re getting enough.” “In terms of value as a proxy for skill and talent, I think college degrees themselves are limited, and they’re one instrument that is probably overused in the labor market,” said Goger, the Brookings fellow.

Unilever saves on recruiters by using AI to assess job interviews

Unilever has claimed it is saving hundreds of thousands of pounds a year by replacing human recruiters with an artificial intelligence system, amid warnings of a populist backlash against the spread of machine learning.

A citizens’ jury convened by the charity to explore AI concluded that the growing practice needed independent regulation and warned of public anger at “tech creep” unless citizens were given a greater role in designing systems.

Last week the United Nations special rapporteur Philip Alston said the world risked “stumbling zombie-like into a digital welfare dystopia” in which artificial intelligence and other technologies were used to target, surveil and punish the poorest people.

The Guardian reported on how the UK’s Department for Work and Pensions was accelerating the development of welfare robots for use in its flagship universal credit system, and how more than 100 councils were using predictive analytics and other artificial intelligence systems to aid interactions with their citizens.

“The measures we are proposing – such as a new watchdog to scrutinise decisions made by AI on behalf of the public – are crucial first steps in increasing clarity and accountability.” Last month, in a report commissioned by the government’s Centre for Data Ethics and Innovation, the Royal United Services Institute, a security thinktank, warned of “unfair discrimination” by data analytics and algorithms in policing.

HireVue has previously said the software scans the language that candidates use – for example, active or passive phrases, tone of voice and speed of delivery – as well as facial expressions such as furrowed brows, smiles and eye-widening.

Elon Musk: Tesla Autopilot | Artificial Intelligence (AI) Podcast

Elon Musk is the CEO of Tesla, SpaceX, Neuralink, and a co-founder of several other companies. This is our first conversation on the podcast. The second one ...

We Talked To Sophia — The AI Robot That Once Said It Would 'Destroy Humans'

This AI robot once said it wanted to destroy humans. Senior correspondent Steve Kovach interviews Sophia, the world's first robot citizen. While the robot can ...

No More BSM Falses! Radenso Introduces Artificial Intelligence with Rai

Radenso is developing Artificial Intelligence for their next generation of radar detectors. What they'll be able to do is incredible. This is going to blow away ...

Eric Weinstein: Revolutionary Ideas in Science, Math, and Society | Artificial Intelligence Podcast

Eric Weinstein is a mathematician, economist, physicist, and managing director of Thiel Capital. He formed the "intellectual dark web" which is a loosely ...

CES 2019: AI robot Sophia goes deep at Q&A

CES 2019 AI GOES DEEP: Things get strange – sometimes existential – when Hanson Robotics’ AI Sophia fields questions from the audience, a religious ...

Artificial Intelligence & the Antichrist | Mark Biltz

Get your copy of Decoding the Antichrist by Mark Biltz. Sid Roth with Mark Biltz on It's ...

Artificial Intelligence and Machine Learning Will NEVER Eliminate Recruiters

So, there's a lot of speculation and many opinions I hear in my space (recruiting), where people say that AI and machine learning are going to take over the ...

We Interviewed The AI Robot That's Now A Citizen Of Saudi Arabia

Sophia was made by Hanson Robotics, based in Hong Kong. It is currently a demonstration product doing a tour of the world's media. Business Insider caught ...

Garry Kasparov: "Deep Thinking" | Talks at Google

Garry Kasparov and DeepMind's CEO Demis Hassabis discuss Garry's new book “Deep Thinking”, his match with Deep Blue and his thoughts on the future of AI ...

How AI can save our humanity | Kai-Fu Lee

AI is massively transforming our world, but there's one thing it cannot do: love. In a visionary talk, computer scientist Kai-Fu Lee details how the US and China ...