AI News, BOOK REVIEW: The ethics of artificial intelligence

Ethics should be integral to artificial intelligence development, says Carnegie Mellon expert

CMU-Q hosts AI ethics discussion for alumni, community partners, leadership

DOHA, QATAR – While advances in artificial intelligence (AI) are shaping society in both the short and long term, there can be negative consequences of this developing technology, said Carnegie Mellon University’s David Danks at an alumni event hosted by Carnegie Mellon University in Qatar (CMU-Q).

“At CMU we are leading the way in AI research and education, but also in ethics,” said CMU President Farnam Jahanian, who emphasized the role of CMU-Q graduates in shaping the future: “For the past 15 years, Carnegie Mellon University in Qatar has offered a transformative education for the next generation of global leaders. You, our alumni, are an integral part of CMU’s borderless vision for education and innovation.”

Carnegie Mellon’s Qatar campus offers undergraduate programs in biological sciences, business administration, computational biology, computer science and information systems.


In this episode of the McKinsey Podcast, Simon London speaks with MGI partner Michael Chui and McKinsey partner Chris Wigley about how companies can ethically deploy artificial intelligence.

Across their daily uses in search, maps, health technology, and assistants like Siri and Alexa, we all benefit from the convenience and the enhanced decision-making power that AI brings us.

2:25 But on the flip side, there are justifiable concerns: jobs lost to the automation of roles that AI enables, topics like autonomous weapons, the impact that some AI-enabled spaces and forums can have on the democratic process, and even emerging things like deepfakes, which are AI-generated videos that look and sound like a president, a presidential candidate, a prime minister, or some other public figure saying things they have never said.

Once you’ve decided perhaps I’m going to use it for a good purpose, I’m going to try to improve people’s health, the other ethical question is, “In the execution of trying to use it for good, are you also doing the right ethical things?”

We looked at 160 individual potential use cases of AI for improving social good, everything from improving healthcare and public health around the world to improving disaster recovery.

One thing you could imagine doing is taking a mobile phone and uploading an image and training an AI system to say, “Is this likely to be skin cancer or not?”

In a disaster situation, during the search for survivors, it can be very difficult to identify which buildings are still standing, which healthcare facilities are still intact, and where the passable roads are.

6:11 We’ve seen artificial-intelligence technology, particularly deep learning, identify these features in satellite imagery far more quickly than a small team of human beings could, and then help divert or allocate emergency resources, whether healthcare workers or infrastructure construction workers, more effectively in a disaster situation.

That’s really helped to accelerate the recovery of that infrastructure for the city, helped the families who are affected by that, helped the infrastructure like schools and so on, using a mix of the kinds of imagery techniques that Michael’s spoken about.

7:30 There are also commuting patterns: communications data that you can aggregate to see how people travel around the city, which helps optimize the work of the teams doing the disaster recovery.

Those are all things that we’ve been a part of in the last 12 months, often on a pro bono basis, bringing these technologies to life to really solve concrete societal problems.

Because when you’re identifying vulnerable populations, then sometimes bad things can happen to them, whether it’s discrimination or acts of malicious intent.

9:03 At that second level we talked about before, how you actually implement AI within a specific use case also raises a set of ethical questions about how it should be done.

We see this where the data set we’re drawing on to build a model doesn’t reflect the population that the model will be applied to or used for.

9:55 There have been various controversies around facial-recognition software not working as well for women, for people of color, because it’s been trained on a biased data set which has too many white guys in it.
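The kind of data-set audit implied here can be sketched in a few lines: compute a model's accuracy separately for each demographic subgroup and look at the gap between the best- and worst-served groups. The groups, records, and numbers below are purely illustrative, not data from any real facial-recognition system.

```python
# Sketch: auditing a classifier's accuracy per subgroup.
# All records and group labels below are invented for illustration.

from collections import defaultdict

def subgroup_accuracy(records):
    """records: list of (group, predicted_label, true_label) tuples."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, predicted, actual in records:
        total[group] += 1
        if predicted == actual:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

records = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 0),
    ("group_b", 1, 0), ("group_b", 0, 0), ("group_b", 0, 1), ("group_b", 1, 1),
]

rates = subgroup_accuracy(records)
# A large gap between subgroups is a red flag that the training data
# under-represents some population.
gap = max(rates.values()) - min(rates.values())
print(rates, "accuracy gap:", gap)
```

A check like this only surfaces the symptom; fixing it usually means rebalancing or augmenting the training data, as discussed later in the conversation.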

12:38 Yes, there’s that famous example from Boston, I think it was, of using the accelerometers in smartphones to detect when people drive over potholes. The reported potholes skew toward neighborhoods where more people carry smartphones, so potholes elsewhere go under-counted.

Once we’re building these technologies into the workflows of people who are making decisions in clinical trials about patient safety, we have to be really, really thoughtful about several things: the resilience of those models in operation; how those models inform, but don’t replace, the decision making of human beings, so that we keep a human in the loop; how we ensure that the data sources feeding the model continue to reflect the reality on the ground; and how those models get retrained over time.

The guy comes around from the local council and says, “Well, if you want to put a glass pane in here, because it’s next to a kitchen, it has to be fire resistant for 45 minutes.”

That’s evolved through 150, 200 years of various governments trying to do the right thing and ensure that people are building buildings which are safe for human beings to inhabit and minimize things like fire risk.

But it’s really important that we start to sketch out the building-code equivalents for bias, for fairness, for explainability, and for some of the other topics that we’ll touch on.

When you get into highly regulated environments like the pharmaceutical industry and also the banking industry and others, understanding how those models are making those decisions, which features are most important, becomes very important.

16:31 To take an example from the banking industry: in the UK, the banks have recently been fined over 30 billion pounds, and that’s billion with a B, for mis-selling of [payment] protection insurance.

The model can surface that it’s likely because of the size of that company, or the length of the relationship we’ve had with that customer, whatever it is; that both A) explains what’s going on in the model and B) allows them to have a much richer conversation with their customer.

If a self-driving car makes a left turn instead of hitting the brakes and it causes property damage or hurts somebody, a regulator might say, “Well, why did it do that?”

So we might say, for a bank that’s thinking about giving someone a consumer loan, we could have a black-box model, which gets us a certain level of accuracy, let’s say 96 or 97 percent, in predicting whether this person will repay.

To some extent, as a human being, if we’re reassured that this model is right and has been proven to be right in thousands of cases, we actually don’t care why it knows as long as it’s making a good prediction that a surgeon can act on that will improve our health.

We’ve looked at some companies where they’ve made the trade-off that Chris suggested, where they’ve gone to a slightly less performant system because they knew the explainability was important in order for people to accept the system and therefore actually start to use it.
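One way to make that explainability trade-off concrete is an additive "scorecard" model, whose per-feature contributions can be read off directly. This is a minimal sketch, not any bank's actual model; the feature names, weights, and approval threshold are invented for illustration, and a real lender would calibrate them on historical repayment data.

```python
# Sketch: a transparent additive scorecard for a consumer-loan decision.
# Weights and threshold are illustrative assumptions, not real values.

WEIGHTS = {
    "years_as_customer": 0.5,   # longer relationship raises the score
    "income_thousands": 0.02,
    "missed_payments": -1.5,    # each missed payment lowers the score
}
THRESHOLD = 1.0

def score_with_explanation(applicant):
    """Return (decision, total score, per-feature contributions)."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    total = sum(contributions.values())
    decision = "approve" if total >= THRESHOLD else "decline"
    return decision, total, contributions

applicant = {"years_as_customer": 8, "income_thousands": 60, "missed_payments": 2}
decision, total, contributions = score_with_explanation(applicant)
print(decision, round(total, 2))
# Each contribution answers "why": the banker can point to exact terms.
for feature, value in sorted(contributions.items(), key=lambda kv: kv[1]):
    print(f"{feature}: {value:+.2f}")
```

A model like this typically gives up a point or two of accuracy versus a black box, which is exactly the trade-off the speakers describe companies making when acceptance matters.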

22:54 Let me start with one piece of advice: just as we expect executives to learn about every part of their business, if you’re going to be a general manager, you’re going to need to know something about supply chain, HR strategy, operations, sales, and marketing.

In many legal traditions around the world, there is a set of protected classes, a set of characteristics that we don’t want technology or other systems to use in order to discriminate; leaders need to understand that.

24:40 As a leader thinking about how to manage the risks in this area, dedicating a bit of head space to thinking about it is a really important first step.

Again, I’m not saying that if you’re GDPR compliant, you’re ethical, but think about all the processes you had to cascade, not only for the leaders to understand but for all of your people and processes to incorporate an understanding of GDPR.

Everyone needs to understand a little bit about AI, and they have to understand, “How can we deploy this technology in a way that’s ethical and compliant with regulations?”

If we get that relationship right, it should become a flywheel of positive impact where we have an ethical framework which enables us to innovate, which enables us to keep informing our ethical framework, which enables us to keep innovating.

27:13 Let’s talk a little bit more about this issue of algorithmic bias, whether it’s in the data set or actually in the system design.

We can start to understand and address those issues of data bias through diversity of data sets, triangulating one data set against another, augmenting one data set with another, continuing to add more and more different data perspectives onto the question that we’re addressing.

We’re almost always developing what we call ensemble models that might be a combination of different modeling techniques that complement each other and get us to an aggregate answer that is better than any of the individual models.
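The averaging idea behind such ensembles can be sketched in a few lines. The three "models" below are stand-in functions; in practice these would be trained models of different types (trees, linear models, neural networks) whose errors partially cancel when combined.

```python
# Sketch: an ensemble that averages the outputs of several simple models.
# The stand-in "models" below are illustrative, not trained predictors.

def model_a(x):
    return 0.9 * x

def model_b(x):
    return 0.8 * x + 1.0

def model_c(x):
    return x - 0.5

def ensemble(x, models=(model_a, model_b, model_c)):
    """Average the predictions of all member models."""
    predictions = [m(x) for m in models]
    return sum(predictions) / len(predictions)

print(ensemble(10.0))
```

Weighted averages, or a second-stage model trained on the members' outputs (stacking), are common refinements of this same pattern.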

We sometimes nominate who’s going to play the Eeyore role and who’s going to play the Tigger role when we’re discussing a decision.

Have someone else who has a different set of incentives check to make sure that in fact you’ve understood whether there’s bias there and understood whether there’s unintended bias there.

One of the important things to understand there is that not only is race or sex or one of these protected characteristics—

30:27 And a protected characteristic is a very specific legal category, right?

But, yes, depending on which jurisdiction you’re in, in some cases, the law states, “You may not discriminate or have disparate impact against certain people with a certain characteristic.”

31:15 One of the big issues, once the model is up and running, is, “We’ve tested it as it was being developed, but how can we ensure that it stays both accurate and unbiased in operation?”

But a lot of this still relies on having switched-on human beings who maybe get alerted or helped by technology, but who engage their brain on the topic of, “Are these models, once they’re up and running, actually still performant?”

And so that question of how do we create resilient AI, which is stable and robust in production, is absolutely critical, particularly as we introduce AI into more and more critical safety and security and infrastructure systems.

33:24 And so again, you really need to understand when you need to update the model whether it’s to make sure that you’re not introducing bias or just in general to make sure that it’s performing.
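A minimal sketch of that kind of in-production monitoring, assuming a single numeric feature: alert when the recent mean drifts more than a chosen number of baseline standard deviations from the training-time mean. The data and threshold are illustrative; real systems typically apply richer tests (population stability index, Kolmogorov–Smirnov) across many features and the model's outputs.

```python
# Sketch: a simple drift check on one feature, comparing production
# data against the training-time baseline. Numbers are illustrative.

import statistics

def drift_alert(baseline, recent, max_shift=2.0):
    """Return (alert, shift): alert is True if the recent mean moved
    more than `max_shift` baseline standard deviations away."""
    base_mean = statistics.mean(baseline)
    base_std = statistics.stdev(baseline)
    shift = abs(statistics.mean(recent) - base_mean) / base_std
    return shift > max_shift, shift

baseline = [10, 11, 9, 10, 12, 10, 11, 9]   # feature values at training time
stable = [10, 11, 10, 9]                     # recent data, same distribution
shifted = [18, 19, 20, 18]                   # recent data after drift

print(drift_alert(baseline, stable))
print(drift_alert(baseline, shifted))
```

An alert like this would be the trigger for the retraining, or the human review, that the speakers describe.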

But I also think that it’s interesting within individual regulatory jurisdictions, whether it’s in healthcare or in aviation, whether it’s what happens on roads, the degree to which our existing practices can be brought to bear.

Is the right threshold for allowing autonomous vehicles when they’re better than that level, or when they’re better than it by a factor of ten?

But I think that as we start to flesh out these kinds of ethics frameworks around machine learning and AI and so on, we need to deploy them to answer questions like that in a way which various stakeholders in society really buy into.

A lot of the answers to fleshing out these ethical questions have to come from engaging with stakeholder groups and with society more broadly, which is in and of itself an entire process and an entire skill set that we need more of as we do more AI policy making.

The Ethics of Artificial Intelligence | Leah Avakian | TEDxYouth@EnglishCollege

In today's ever-changing and growing world, artificial intelligence is quickly becoming more integrated within our everyday lives. What happens when we give ...

Nick Bostrom - The Ethics of The Artificial Intelligence Revolution

Nick Bostrom is a Swedish philosopher at the University of Oxford known for his ..

The ethical dilemma we face on AI and autonomous tech | Christine Fox | TEDxMidAtlantic

The inspiration for Kelly McGillis' character in Top Gun, Christine Fox is the Assistant Director for Policy and Analysis of the Johns Hopkins University Applied ...

Is Developing Artificial Intelligence (AI) Ethical? | Idea Channel | PBS Digital Studios



The implications and promises of artificial intelligence (AI) are unimaginable. Already, the now ubiquitous functions of AI have changed our lives ...

The Future of Artificial Intelligence and Ethics on the Road to Superintelligence

The progress of technology over time, the human brain Vs the future, and the future of artificial intelligence. Article: ...

Artificial Intelligence, ethics and the law: What challenges? What opportunities?

Artificial Intelligence (AI) is no longer sci-fi. From driverless cars to the use of machine learning algorithms to improve healthcare services and the financial ...

The Ethics of Artificial Intelligence

Is Artificial Intelligence inherently good or evil?

A.I. Ethics: Should We Grant Them Moral and Legal Personhood? | Glenn Cohen

Artificial intelligence already exhibits many human characteristics. Given our history of denying rights to certain humans, we should recognize that robots are ...

Mustafa Suleyman | The Ethics of A.I.
