AI News: Social and Ethical Implications of Human-Centered Artificial Intelligence

Will Artificial Intelligence Enhance or Hack Humanity?

THIS WEEK, I interviewed Yuval Noah Harari, the author of three best-selling books about the history and future of our species, and Fei-Fei Li, one of the pioneers in the field of artificial intelligence.

Yuval Noah Harari: Yeah, so I think what’s happening now is that the philosophical framework of the modern world, established in the 17th and 18th centuries around ideas like human agency and individual free will, is being challenged like never before.

And that’s scary, partly because, unlike philosophers, who are extremely patient people and can discuss something for thousands of years without reaching any agreement and be fine with that, the engineers won’t wait.

And the equation is: B times C times D equals HH, which means biological knowledge multiplied by computing power, multiplied by data equals the ability to hack humans.

And maybe I’ll explain what it means, the ability to hack humans: to create an algorithm that understands me better than I understand myself, and can therefore manipulate me, enhance me, or replace me.

And our philosophical baggage, all our beliefs in, you know, human agency and free will, the customer is always right, and the voter knows best, just fall apart once you have this kind of ability.

So our immediate fallback position is to fall back on the traditional humanist ideas: the customer is always right, the customers will choose the enhancement.

Fei-Fei Li: One of the things—I’ve been reading Yuval’s books for the past couple of years and talking to you—is that I’m very envious of philosophers now, because they can propose questions but they don’t have to answer them.

When you said “the AI crisis,” I was sitting there thinking: this is a field I have loved and felt passionate about and researched for 20 years, and back then it was just the scientific curiosity of a young scientist entering a PhD in AI.

It’s still a budding science compared to physics, chemistry, and biology, but with the power of data and computing, and the diverse impact AI is making, it is, like you said, touching human lives and business in broad and deep ways.

And in responding to those kinds of questions and crises facing humanity, I think one of the proposed solutions, one that Stanford is making an effort on, is: can we reframe the education, the research, and the dialog of AI and technology in general in a human-centered way?

We’re not necessarily going to find a solution today, but can we involve the humanists, the philosophers, the historians, the political scientists, the economists, the ethicists, the legal scholars, the neuroscientists, the psychologists, and many other disciplines in the study and development of AI in the next chapter, in the next phase?

People talk about biases, and Yuval has very clearly laid out what he thinks is the most important one, which is the combination of biology plus computing plus data leading to hacking.

So absolutely we need to be concerned, and because of that, we need to expand the research, the development of policies, and the dialog of AI beyond just the code and the products into these human realms, into the societal issues.

YNH: That’s the moment when you can really hack human beings, not by collecting data about our search words or our purchasing habits, or where we go about town, but by actually starting to peer inside, collecting data directly from our hearts and from our brains.

Something like the totalitarian regimes that we have seen in the 20th century, but augmented with biometric sensors and the ability to basically track each and every individual 24 hours a day.

When the evil extraterrestrial robots are about to conquer planet Earth, and nothing can resist them, resistance is futile, then at the very last moment humans win, because the robots don’t understand love.

This is precisely why this is the moment that we believe the new chapter of AI needs to be written by cross-pollinating efforts from humanists, social scientists, to business leaders, to civil society, to governments, to come at the same table to have that multilateral and cooperative conversation.

But if somebody comes along and tells me, “Well, you need to maximize human flourishing, or you need to maximize universal love,” I don’t know what it means. So the engineers go back to the philosophers and ask them, “What do you actually mean?” And, you know, a lot of philosophical theories collapse around that, because they can’t really explain it—and we need this kind of collaboration.

If we can’t explain and we can’t code love, can artificial intelligence ever recreate it, or is it something intrinsic to humans that the machines will never emulate?

So machines don’t like to play Candy Crush, but they can still—

NT: So you think this device, in some future where it’s infinitely more powerful than it is right now, could make me fall in love with somebody in the audience?

YNH: That goes to the question of consciousness and mind, and I don’t think that we have the understanding of what consciousness is to answer the question of whether a non-organic consciousness is possible or not. I think we just don’t know.

If you accept that something like love is in the end a biological process in the body, and if you think that AI can provide us with wonderful healthcare by being able to monitor and predict something like the flu or something like cancer, what’s the essential difference between flu and love?

In the sense of: is this biological, and is that something else, so separated from the biological reality of the body that even if we have a machine capable of monitoring or predicting flu, it still lacks something essential in order to do the same thing with love?

One is that AI is so omnipotent that it has achieved a state beyond predicting anything physical; it’s getting to the consciousness level, even to the ultimate level of love.

A second, related assumption, which I feel our conversation is based on, is that we’re talking about a world, or a state of the world, in which only that powerful AI exists, or in which only the small group of people who have produced the powerful AI and intend to hack humans exists.

I mean, humanity in its history has faced so much technology that, if left in the hands of a bad player alone, without any regulation, multinational collaboration, rules, laws, or moral codes, could have, maybe not hacked humans, but destroyed or hurt humans in massive ways.

And that brings me to your topic that in addition to hacking humans at that level that you’re talking about, there are some very immediate concerns already: diversity, privacy, labor, legal changes, you know, international geopolitics.

So of the three components, biological knowledge, computing power, and data, I think data is the easiest, and it’s also very difficult, but still the easiest kind, to regulate, to protect.

We just need the AI to know us better than we know ourselves, which is not so difficult because most people don’t know themselves very well and often make huge mistakes in critical decisions.

YNH: So imagine, this is not a science-fiction scenario of a century from now; this can happen today. You can write all kinds of algorithms that, you know, are not perfect, but are still better, say, than the average teenager.

Let’s talk about what we can do today as we think about the risks of AI and the benefits of AI, and tell us, you know, sort of your punch list of the most important things we should be thinking about with AI.

I was just thinking about your comment about AI’s dependence on data, and how the policy and governance of data should emerge in order to regulate and govern AI’s impact.

We should be investing in the development of less data-dependent AI technology that will take into consideration intuition, knowledge, creativity, and other forms of human intelligence.

And I just feel very proud, within the short few months since the birth of this institute, there are more than 200 faculty involved on this campus in this kind of research, dialog, study, education, and that number is still growing.

There are so many scenarios where this technology can be potentially, positively useful, but only with that kind of explainable capability. So we’ve got to try, and I’m pretty confident that with a lot of smart minds out there, this is a crackable thing.

Most humans, when they are asked to explain a decision, they tell a story in a narrative form, which may or may not reflect what is actually happening within them.

And the AI gives this extremely long statistical analysis, based not on one or two salient features of my life, but on 2,517 different data points, which it took into account and gave different weights.

You applied for a loan on Monday, and not on Wednesday, and the AI discovered that for whatever reason, it’s after the weekend, whatever, people who apply for loans on a Monday are 0.075 percent less likely to repay the loan.
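The kind of model Harari describes here can be sketched in a few lines. This is a purely hypothetical illustration, not any real lender’s system: the feature names and weights are invented, and only the idea of a tiny day-of-week effect comes from the conversation. A linear scorer passes a weighted sum of many small signals through a logistic function to produce a repayment-probability estimate.

```python
import math

# Invented feature weights for illustration; a real underwriting model
# might combine thousands of such signals.
WEIGHTS = {
    "income_to_debt_ratio": 1.2,
    "years_at_current_job": 0.3,
    "prior_defaults": -2.0,
    "applied_on_monday": -0.00075,  # a tiny, opaque day-of-week effect
}
BIAS = 0.5

def repayment_probability(features):
    """Combine weighted features into a probability via the logistic function."""
    score = BIAS + sum(WEIGHTS[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-score))

applicant = {
    "income_to_debt_ratio": 1.5,
    "years_at_current_job": 4,
    "prior_defaults": 0,
    "applied_on_monday": 1,
}
p_monday = repayment_probability(applicant)
p_other_day = repayment_probability({**applicant, "applied_on_monday": 0})
print(f"Monday: {p_monday:.5f}  other day: {p_other_day:.5f}")
```

No single weight "explains" the decision; the output is the joint effect of every feature at once, which is exactly the explainability problem discussed here.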

The first point, I agree with you, if AI gives you 2,000 dimensions of potential features with probability, it’s not understandable, but the entire history of science in human civilization is to be able to communicate the results of science in better and better ways.

I think science is getting worse and worse at explaining its theories and findings to the general public, which is one reason for things like doubting climate change, and so forth.

And the human mind wasn’t adapted to understanding the dynamics of climate change, or the real reasons for refusing to give somebody a loan.

And it’s true for the individual customer who goes to the bank and the bank refuses to give them a loan.

YNH: So what does it mean to live in a society where the people who are supposed to be running the business… And again, it’s not the fault of a particular politician, it’s just the financial system has become so complicated.

You have some of the wisest people in the world, going to the finance industry, and creating these enormously complex models and tools, which objectively you just can’t explain to most people, unless first of all, they study economics and mathematics for 10 years or whatever.

That’s part of what’s happening, that we have these extremely intelligent tools that are able to make perhaps better decisions about our healthcare, about our financial system, but we can’t understand what they are doing and why they’re doing it.

But before we leave this topic, I want to move to a very closely related question, which I think is one of the most interesting, which is the question of bias in algorithms, which is something you’ve spoken eloquently about.

I mean, I’m not going to have the answers personally, but I think you touch on the really important question, which is, first of all, machine learning system bias is a real thing.

You know, like you said, it starts with data, it probably starts with the very moment we’re collecting data and the type of data we’re collecting all the way through the whole pipeline, and then all the way to the application.

At Stanford, we have machine learning scientists studying the technical solutions of bias, like, you know, de-biasing data or normalizing certain decision making.
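One common, simple de-biasing technique of the kind alluded to here (an illustrative choice, not necessarily what the Stanford groups use) is reweighing the training data so that a protected group attribute becomes statistically independent of the outcome label, in the style of Kamiran and Calders:

```python
from collections import Counter

def reweighing_weights(samples):
    """samples: list of (group, label) pairs.

    Returns a weight for each observed (group, label) combination so that,
    under the weighted distribution, group membership and label are
    statistically independent: weight = P(group) * P(label) / P(group, label).
    """
    n = len(samples)
    group_counts = Counter(g for g, _ in samples)
    label_counts = Counter(y for _, y in samples)
    pair_counts = Counter(samples)
    return {
        (g, y): (group_counts[g] * label_counts[y]) / (n * pair_counts[(g, y)])
        for (g, y) in pair_counts
    }

# Synthetic example: group "a" gets the positive label far more often than "b".
data = [("a", 1)] * 6 + [("a", 0)] * 2 + [("b", 1)] * 2 + [("b", 0)] * 6
weights = reweighing_weights(data)
```

Overrepresented (group, label) combinations receive weights below 1 and underrepresented ones weights above 1, so a model trained on the weighted data no longer sees the spurious correlation.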

And I also want to point out, and you’ve already used a very closely related example, that a machine learning algorithm has the potential to actually expose bias.

You know, one of my favorite studies was a paper from a couple of years ago analyzing Hollywood movies, using a machine learning face-recognition algorithm, a very controversial technology these days, to show that Hollywood systematically gives more screen time to male actors than to female actors.

No human being can sit there and count all the frames of faces to see whether there is gender bias, and this is a perfect example of using machine learning to expose it.
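The audit described above reduces to a simple tally once a per-frame detector exists. In this sketch the detector is a stand-in stub over synthetic data (the real study used an actual face-recognition model over movie frames); everything here is hypothetical except the counting idea itself.

```python
from collections import Counter

def audit_screen_time(frames, detect_faces):
    """Tally detected faces per predicted gender across all frames.

    detect_faces(frame) is assumed to return the list of gender labels
    the detector reports for that frame.
    """
    totals = Counter()
    for frame in frames:
        for gender in detect_faces(frame):
            totals[gender] += 1
    return totals

# Stand-in for a real detector: each synthetic "frame" is just the list
# of genders a detector would have reported for it.
fake_frames = [["male"], ["male", "female"], ["male"], ["female"], ["male"]]
counts = audit_screen_time(fake_frames, lambda frame: frame)
print(counts)  # Counter({'male': 4, 'female': 2})
```

At the scale of a full film catalog, this per-frame counting is exactly the kind of exhaustive measurement no human annotator could do by hand.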

So in general there’s a rich set of issues we should study, and again, bring in the humanists, bring in the ethicists, bring in the legal scholars, bring in the gender-studies experts.

So we should teach ethics to coders as part of the curriculum; the people today in the world who most need a background in ethics are the people in the computer science departments.

And also in the big corporations that are designing these tools, people with backgrounds in things like ethics and politics should be embedded within the teams, so that they always think in terms of: what biases might we inadvertently be building into our system?

It shouldn’t be a kind of afterthought, where you create this neat technical gadget, it goes into the world, something bad happens, and then you start thinking, “Oh, we didn’t see this one coming.”

You know, when we put on the seatbelt before driving, that’s a little bit of an efficiency loss, because I have to make the seatbelt movement instead of just hopping in and driving.

So you’re not talking about just any economic competition between different textile industries, or even between different oil industries, like one country deciding it doesn’t care about the environment at all and will just go full gas ahead while the other countries are much more environmentally aware.

But this is, I think, part of our job, like with the nuclear arms race: to make people in different countries realize that this is an arms race, and that whoever wins, humanity loses.

So if that is the parallel, then what might happen here is that we’ll try for global cooperation in 2019, 2020, and 2021, and then we’ll be off in an arms race. A: is that likely? And B: if it is, would you say, well, then the US needs to really move full throttle on AI, because it will be better for the liberal democracies to have artificial intelligence than the totalitarian states?

You could say that the nuclear arms race actually saved democracy and the free market and, you know, rock and roll and Woodstock and then the hippies and they all owe a huge debt to nuclear weapons.

News Releases - Joint Statement on Public-Private Roundtable on Digital Priorities at G20 Ministerial

On June 7, 2019, ahead of the G20 Ministerial Meeting on Trade and Digital Economy, JEITA, ITI, DIGITALEUROPE, and techUK convened senior government and private sector representatives in Tokyo to discuss global collaboration on digital policy issues, including artificial intelligence (AI), data free flow with trust (DFFT), and the role of the World Trade Organization (WTO).

Roundtable participants discussed the importance of public-private collaboration on digital policies, which helps industry, government, and other stakeholders make strides in achieving the goal of "Society 5.0" by addressing issues of importance to the global community, including: reducing CO2 emissions, improving inclusivity and reducing gender and social biases, and fostering personalized healthcare.

The participating associations and governments highlighted the importance of promoting and supporting guidelines and frameworks to foster collaborative policy discussions on issues foundational to the development of AI, including safety, privacy and data governance, ethics, and diversity and inclusion.

MIT 6.S093: Introduction to Human-Centered Artificial Intelligence (AI)

Introductory lecture on Human-Centered Artificial Intelligence (MIT 6.S093) I gave on February 1, 2019. For more lecture videos on deep learning, reinforcement ...

Fei-Fei Li & Yuval Noah Harari in Conversation - The Coming AI Upheaval

Watch Yuval Noah Harari speak with Fei-Fei Li, renowned computer scientist and Co-Director of Stanford University's Human-Centered AI Institute -- in a ...

World in EMotion: Artificial Intelligence and Ethics

The usage of artificial intelligence to solve the world's problems is growing, and with that come ethical concerns. What human elements/values should be ...

Ethics of AI @ NYU: Ethics of Specific Technologies

Day 1, Session 2: Ethics of Specific Technologies. 0:00 - Ned Block, Opening Remarks; 1:30 - Peter Asaro, "Killer Robots and the Ethics of Autonomous Weapons" ...

Stanford HAI 2019: Session 4 - AI's Human and Societal Impacts

To develop equitable and trustworthy technology, we must understand how AI performs in practice, and guide and shape the way AI interacts with humans.

Q&A | Artificial intelligence and ethics

Economics is pushing the rapid development of artificial intelligence (AI). At this event we explore how future AI will impact individuals, businesses, our society ...

Microsoft #TechTalk: AI & Humanity

This #TechTalk features Father Paolo Benanti, a Professor at the Pontifical Gregorian University in Rome, specialized in robotics, digital ethics and their ...

AI is not good or bad, nor is it neutral | Lokke Moerel | TEDxAmsterdamWomen

In her talk, Lokke Moerel will touch on the positive and negative aspects of artificial intelligence. She'll illustrate AI's profound impact on increasing awareness to human ...

Fair is not the default: The myth of neutral AI | Josh Lovejoy | TEDxSanJuanIsland

Fair is not the default. In the rush to Artificial General Intelligence and automating all the things, how will we stay in touch with our humanity when making the ...

Artificial Intelligence for Military Use and National Security

Courtney Bowman (Privacy and Civil Liberties Team Lead, Palantir), Avril Haines (Former White House Deputy National Security Advisor; Former Deputy ...