
How Can Doctors Be Sure A Self-Taught Computer Is Making The Right Diagnosis?

Some computer scientists are enthralled by programs that can teach themselves how to perform tasks, such as reading X-rays.

Rajpurkar texted a lab mate and suggested they build a quick-and-dirty algorithm that could use the data to teach itself to diagnose the conditions linked to the X-rays.
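In machine-learning terms, "teach itself" means supervised learning: the algorithm sees many labeled X-rays and adjusts its parameters until its predictions match the labels. The sketch below is a hypothetical, drastically simplified stand-in — a logistic regression on toy 16-"pixel" images, not the deep convolutional network a real chest X-ray model would use — just to show the train-on-labels loop.

```python
import math
import random

random.seed(0)

def sigmoid(z):
    # Numerically stable logistic function.
    if z >= 0:
        return 1.0 / (1.0 + math.exp(-z))
    e = math.exp(z)
    return e / (1.0 + e)

def make_xray(has_finding):
    # Toy 16-"pixel" image: a finding brightens pixels 4-7.
    img = [random.gauss(0.3, 0.1) for _ in range(16)]
    if has_finding:
        for i in range(4, 8):
            img[i] += 0.5
    return img

data = [(make_xray(label), label) for label in [0, 1] * 200]

# Logistic regression trained by stochastic gradient descent on log-loss.
w, b, lr = [0.0] * 16, 0.0, 0.5
for _ in range(50):
    for x, y in data:
        p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
        g = p - y  # gradient of the log-loss with respect to the logit
        w = [wi - lr * g * xi for wi, xi in zip(w, x)]
        b -= lr * g

def predict(x):
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

accuracy = sum((predict(x) > 0.5) == bool(y) for x, y in data) / len(data)
```

A real system replaces the toy images with hundreds of thousands of labeled scans and the linear model with a deep network, but the self-teaching loop — predict, compare to the label, adjust — is the same.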

The scientists lean into the screen, which displays a chest X-ray and the patient's basic lab results and highlights the part of the X-ray that the algorithm is focusing on.

'The ultimate thought from our group is that if we can combine the best of what humans offer in their diagnostic work and the best of what these models can offer, I think you're going to have a better level of health care for everybody,' Lungren says.

But he is well aware that it's easy to be fooled by a computer program, so he sees it as part of his job as a clinician to curb some of the engineering enthusiasm.

'And so our job as clinicians is to guard against the possibility of getting ahead of ourselves and allowing these things to be in a place where they could cause harm.'

For example, a program that has taught itself using data from one group of patients may give erroneous results if used on patients from another region — or even from another hospital.

Instead of just scoring the image for medically important details, it considered other elements of the scan, including information from around the edge of the image that showed the type of machine that took the X-ray.

Zech realized that images from portable X-ray machines used in hospital rooms were much more likely to show pneumonia than images from machines used in doctors' offices.

That's hardly surprising, considering that pneumonia is more common among hospitalized people than among people who are able to visit their doctor's office.

Zech was able to unearth the problems related to the Stanford algorithm because the computer model provides its human handlers with additional hints by highlighting which parts of the X-ray it is emphasizing in its analysis.
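One simple way to produce the kind of highlighting described above — a hypothetical sketch, since the article doesn't say which technique the Stanford model uses — is occlusion sensitivity: blank out each region of the image, re-score it, and flag the regions whose removal changes the prediction most.

```python
def score(img):
    # Stand-in "model": responds to brightness in pixels 4-7.
    return sum(img[4:8]) / 4

img = [0.2] * 16
for i in range(4, 8):
    img[i] = 0.9  # the patch the stand-in model actually keys on

baseline = score(img)
importance = []
for start in range(0, 16, 4):  # occlude one 4-pixel region at a time
    masked = list(img)
    for i in range(start, start + 4):
        masked[i] = 0.0
    importance.append(baseline - score(masked))  # drop in score = importance

# The region whose removal hurts the score most is where the model "focuses".
hot_region = max(range(4), key=lambda r: importance[r])
```

If the hottest region turned out to sit at the edge of the frame rather than over the lungs, a human reviewer would have exactly the warning sign Zech found.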

Black-box algorithms are the favored approach to this new combination of medicine and computers, but 'it's not clear you really need a black box for any of it,' says Cynthia Rudin, a computer scientist at Duke University.

'I've worked on many predictive modeling problems,' she says, 'and I've never seen a high-stakes decision where you couldn't come up with an equally accurate model with something that's transparent, something that's interpretable.'

Black-box models do have some advantages: A program made with a secret sauce is harder to copy and therefore better for companies developing proprietary products.

But Rudin says that especially for medical decisions that could have life or death consequences, it is worth putting in the extra time and effort to have a program built from the ground up based on real clinical knowledge, so humans can see how it is reaching its conclusions.

She is pushing back against a trend in the field, which is to add an 'explanation model' algorithm that runs alongside the black-box algorithm to provide clues about what the black box is doing.

One designed to identify criminals likely to offend again turned out to be using racial cues rather than data about human psychology and behavior, she notes.

While the algorithm worked technically, Stanford palliative care physician Stephanie Harman says it ended up being more confusing than helpful in selecting patients for her service, because people in most need of this service aren't necessarily those closest to death.

In his view, what really matters is whether an algorithm gets enough testing along the way to assure doctors and federal regulators that it is dependable and suitable for its intended use.

And it is equally important to avoid misuse of an algorithm, for example if a health insurer tried to use Shah's death-forecasting algorithm to make decisions about whether to pay for medical care.

'We need to worry more about the cost of the action that will be taken, who will take that action' and a host of related questions that determine its value in medical care.



AUDIE CORNISH, HOST: We're taking a look at artificial intelligence - its benefits, its limits and the ethical questions it raises in this month's All Tech Considered.

(SOUNDBITE OF MUSIC) CORNISH: As artificial intelligence becomes more sophisticated, it allows computer programs to perform tasks that at one time only people could do, like reading X-rays.

RICHARD HARRIS, BYLINE: If you want to glimpse the brave new world of artificial intelligence programs that are taking on life and death medical judgments, there's no better place than the Stanford University campus.

And then you just feed it hundreds of thousands of these, and then it starts to be able to automatically learn the pattern from the image itself to the different pathologies.

HARRIS: The team is looking at a prototype of a new program which can diagnose tuberculosis among HIV-positive patients from South Africa, a country that has a shortage of doctors for that task.

It was - you know, it was being a good machine-learning model, and it was aggressively using all available information baked into the image to make its recommendations.

The algorithm also doesn't have access to a lot of information real-life doctors use when making tricky diagnoses, such as a patient's medical history, which he plunges into to sort out difficult cases.

HARRIS: Despite hype that this technology is just around the corner, Zech expects it will be a long time before a black-box algorithm can replace these human judgments.

RUDIN: I've worked on so many different predictive modeling problems, and I've never seen a high-stakes decision where you couldn't come up with an equally accurate model with something that's transparent, something that's interpretable.

It's also the case that it's easier to make a proprietary, commercially valuable product if it uses some sort of secret sauce nobody knows how to replicate.

HARRIS: But Rudin says, especially for medical decisions that could have life or death consequences, it's worth putting in the extra time and effort to have a program built from the ground up, based on real clinical knowledge, so humans can decide whether to trust it or not.

RAJPURKAR: The first thing I think about is not about convincing others but about convincing myself that this is, in fact, going to be useful for patients, and that's a question we think about every day and try to tackle every day.

HARRIS: One approach they've taken is they've added features so the algorithm not only comes up with an answer but also says how confident its human overlords should be in that result.
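The simplest version of such a confidence feature — a hypothetical sketch, not necessarily how the Stanford group implemented it — reports the prediction together with how far the model's probability is from a coin flip; the entropy of that probability gives an equivalent view, with high entropy meaning low confidence.

```python
import math

def predict_with_confidence(p_disease):
    # Report the class plus a confidence score: 0 at p=0.5 (no information),
    # 1 at p=0 or p=1 (certain). Real systems often use calibrated
    # probabilities or ensembles instead of this raw rescaling.
    label = "disease" if p_disease >= 0.5 else "no disease"
    confidence = abs(p_disease - 0.5) * 2
    return label, round(confidence, 2)

def entropy(p):
    # Binary entropy in bits: 1.0 at p=0.5, 0.0 when the model is certain.
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))
```

For example, a prediction of 0.95 comes back as ("disease", 0.9), while a prediction of 0.55 comes back as ("disease", 0.1) — the same label, but a signal that a human should look much harder at the second case.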

Instead, they're freely sharing their software and results so others can pick it apart and help the whole field move forward with more confidence.

NPR transcripts are created on a rush deadline by Verb8tm, Inc., an NPR contractor, and produced using a proprietary transcription process developed with NPR.

