
A machine-versus-doctors fixation masks important questions about artificial intelligence

Wallet-sized cards containing a person’s genetic code don’t exist. Yet they were envisioned in a 1996 Los Angeles Times article, which predicted that by 2020 the makeup of a person’s genome would drive their medical care. The idea that today we’d be basking in the fruits of ultra-personalized medicine was put forth by scientists who were promoting the Human Genome Project —

He pointed to “incentives for both biologists and journalists to tell simple stories, including the idea of relatively simple genetic causation of common, debilitating disease.” Lately, the allure of a simple story has been thwarting public understanding of another technology that’s grabbed the spotlight in the wake of the genetic data boom: artificial intelligence (AI). With AI, headlines often focus on the ability of machines to “beat” doctors at finding disease. Take coverage of a study published this month on a Google algorithm for reading mammograms: CNBC: Google’s DeepMind A.I.

At least anecdotally, Harvey said, some young doctors are eschewing the field of radiology in the UK, where there is a shortage. Harvey drew chuckles during a speech at the Radiological Society of North America in December when he presented a slide showing that while about 400 AI companies have sprung up in the last five years, the number of radiologists who have lost their jobs stands at zero.

(Medium ran Harvey’s defiant explanation of why radiologists won’t easily be nudged aside by computers.) The human-versus-machine fixation distracts from questions of whether AI will benefit patients or save money.  We’ve often written about the pitfalls of reporting on drugs that have only been studied in mice.

Almost always, a computer’s “deep learning” ability is trained and tested on cleaned-up datasets that don’t necessarily predict how it will perform in actual patients. Harvey said there’s a downside to headlines “overstating the capabilities of the technology before it’s been proven.” “I think patients who read this stuff can get confused.

In Undark, Jeremy Hsu reported on the lack of evidence for a triaging app, Babylon Health. Harvey said journalists also need to point out “the reality of what it takes to get it into the market and into the hands of end users.” He cited lung cancer screening, for which some stories cover “how good the algorithm is at finding lung cancers and not much else.” For example, a story that appeared in the New York Post (headline: “Google’s new AI is better at detecting lung cancer than doctors”) declared that “AI is proving itself to be an incredible tool for improving lives” without presenting any evidence.