AI News: Google's AI system can beat doctors at detecting breast cancer

AI: Google calls for reasonable rules from the EU and the US

Google CEO Sundar Pichai lands in Brussels and, amid an increasingly heated international debate on AI, one that raises questions never asked before (from security to social impact), strikes a conciliatory tone, asking for everyone's support, in particular from the EU and the US, in finding 'an agreement on fundamental values' from which to draw up 'a reasonable regulatory framework' of 'proportionate' rules, capable of guiding the technologies of the future and 'balancing their potential social harms and opportunities'.

'It involves many risks, and we are waiting to see how it will be used,' said Pichai, urging governments 'to work as soon as possible on regulations' for the development of the controversial technology, which has been widely embraced in China but constrained in the United States by guidelines presented by the Trump administration.

It is a move the Mountain View CEO does not seem to fully support, owing to a time horizon that may be a little too long, one that would stall the development of a technology also used for delicate tasks such as finding missing persons.

Three Papers in the Eye of the ‘AI Breast Cancer Detection’ Storm

The deep learning and medical research communities are abuzz with discussions triggered by the publication of a trio of promising breast cancer diagnosis papers from Google, NYU and DeepHealth.

However, even as Google DeepMind Founder and CEO Demis Hassabis and others were celebrating the paper’s release, Turing Award winner and Facebook Chief AI Scientist Yann LeCun spoiled the party, tweeting that the Google paper’s authors owed something to the NYU researchers and should “cite this prior study on the same topic.” He added that unlike the Google system, the NYU method had been open sourced.

Hassabis shot back that Google did cite the NYU paper, taking a jab at LeCun in the process: “perhaps people should read the paper *first* before posting angry messages with incorrect information on twitter.” LeCun sort of backed down at that point: “I was not angry ;-)” and “I did read the paper but missed the citation the first time around.”

Globally, breast cancer is the most common cancer in women, according to the World Health Organization.

Returning to the twittersphere to take another swipe at the paper’s novelty, LeCun retweeted comments from the UK Royal College of Radiologists’ Hugh Harvey: “Congrats to Google, but let’s not forget the team from NYU who last year published better results, validated on more cases, tested on more readers, and made their code and data available.”

NYU’s Krzysztof Geras defended the earlier work: “My paper was probably the first that had the combination of experiments on a large scale, a careful evaluation of different possible models, very good results, a large reader study and the trained model publicly available online.

However, there is still room for improvement and I’m sure that there will be many papers that will go further in different aspects in the next few years.” The NYU researchers introduced a deep convolutional neural network for breast cancer screening classification that was trained and evaluated on over 1,000,000 images from 200,000 breast exams.

Geras acknowledged the strength of the Google paper’s careful result analysis, but warned in a tweet that “novelty is difficult to quantify” and “there are already multiple papers that show similar results.” In fact, an even earlier NYU study from last August achieved an AUC of 0.919.
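The AUC figures these papers report have a simple probabilistic reading: the chance that a randomly chosen cancer case is scored higher than a randomly chosen cancer-free case. A minimal pure-Python sketch of that computation, using toy labels and scores (not data from any of the papers):

```python
# Illustrative only: the labels and scores below are invented, not drawn
# from the Google, NYU, or DeepHealth studies.

def auc(y_true, y_score):
    """AUC via pair counting: the fraction of (positive, negative) pairs
    in which the positive case receives the higher score (ties count 0.5)."""
    pos = [s for y, s in zip(y_true, y_score) if y == 1]
    neg = [s for y, s in zip(y_true, y_score) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# 1 = biopsy-confirmed cancer, 0 = cancer-free exam (hypothetical)
y_true = [0, 0, 1, 0, 1, 1, 0, 1]
y_score = [0.10, 0.35, 0.80, 0.60, 0.65, 0.90, 0.25, 0.55]

print(auc(y_true, y_score))  # 0.9375
```

An AUC of 0.919, as in the earlier NYU study, means the model ranks a true cancer above a non-cancer about 92 percent of the time; it says nothing by itself about behavior at any single operating threshold.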

He believes that multiple groups achieving similar results with similar methods would be a good thing, “co-validating our approaches and showing that the toolbox that we use — in this case, deep neural networks — is robust and works in different scenarios.” There are certainly similarities between the Google and DeepHealth studies in terms of scale, methodology and outcomes — but the biggest difference may be that Google’s paper got published in the prestigious journal Nature while DeepHealth’s is still sitting on arXiv awaiting review.

“One of the core novelties in our paper is that we present a model that works for digital breast tomosynthesis (DBT, or 3D mammography), in addition to 2D mammography,” DeepHealth’s William Lotter wrote in an email, explaining that the approach achieved good performance without requiring strongly labeled DBT data.

Trained and tuned on mammograms from more than 76,000 women in the UK and more than 15,000 women in the US, and evaluated on a separate data set of over 25,000 women in the UK and over 3,000 women in the US, Google’s system reduced false positives by 5.7 percent in the US and by 1.2 percent in the UK.
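Reductions like these are absolute percentage-point differences in error rates between human readers and the model. A minimal sketch of that arithmetic, with entirely invented confusion counts chosen only to illustrate the metric (the 5.7-point figure is the reported one; none of the counts come from the paper):

```python
# All counts below are hypothetical; only the metric definition reflects
# how screening results of this kind are computed.

def false_positive_rate(fp, tn):
    """Fraction of cancer-free exams incorrectly flagged for recall."""
    return fp / (fp + tn)

# Hypothetical reader vs. model counts on the same 1,000 cancer-free exams
reader_fpr = false_positive_rate(fp=120, tn=880)  # 0.120
model_fpr = false_positive_rate(fp=63, tn=937)    # 0.063

# "Reduced false positives by 5.7 percent" = absolute difference in rates
reduction_points = 100 * (reader_fpr - model_fpr)
print(f"{reduction_points:.1f} percentage points")  # 5.7 percentage points
```

Note the design choice in the reporting: an absolute reduction (12.0% down to 6.3% in this toy example) reads very differently from the relative reduction it implies (nearly half), which is one reason such headline numbers are hard to compare across studies.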

Although it’s difficult to directly compare the three models’ generalization capability — or their overall performance in medical diagnosis — the ultimate test will be in real clinical settings.

“If someone were to sidestep these components and directly use our model code for clinical decision making, especially without assurance of proper pre-processing, input validation, and monitoring, there are significant risks of harm,” the researchers warned.

Screening is only the first step in breast cancer diagnosis, which often requires more than just mammograms.

A machine-versus-doctors fixation masks important questions about artificial intelligence

Wallet-sized cards containing a person’s genetic code don’t exist. Yet they were envisioned in a 1996 Los Angeles Times article, which predicted that by 2020 the makeup of a person’s genome would drive their medical care. The idea that today we’d be basking in the fruits of ultra-personalized medicine was put forth by scientists who were promoting the Human Genome Project.

He pointed to “incentives for both biologists and journalists to tell simple stories, including the idea of relatively simple genetic causation of common, debilitating disease.”

Lately the allure of a simple story thwarts public understanding of another technology that’s grabbed the spotlight in the wake of the genetic data boom: artificial intelligence (AI). With AI, headlines often focus on the ability of machines to “beat” doctors at finding disease. Take coverage of a study published this month on a Google algorithm for reading mammograms: CNBC: Google’s DeepMind A.I.

At least anecdotally, Harvey said, some young doctors are eschewing the field of radiology in the UK, where there is a shortage. Harvey drew chuckles during a speech at the Radiological Society of North America in December when he presented a slide showing that while about 400 AI companies have sprung up in the last five years, the number of radiologists who have lost their jobs stands at zero.

(Medium ran Harvey’s defiant explanation of why radiologists won’t easily be nudged aside by computers.) The human-versus-machine fixation distracts from questions of whether AI will benefit patients or save money.  We’ve often written about the pitfalls of reporting on drugs that have only been studied in mice.

Almost always, a computer’s “deep learning” ability is trained and tested on cleaned-up datasets that don’t necessarily predict how an algorithm will perform in actual patients. Harvey said there’s a downside to headlines “overstating the capabilities of the technology before it’s been proven.” “I think patients who read this stuff can get confused.”

In Undark, Jeremy Hsu reported on the lack of evidence for a triaging app, Babylon Health.  Harvey said journalists also need to point out “the reality of what it takes to get it into the market and into the hands of end users.” He cites lung cancer screening, for which some stories cover “how good the algorithm is at finding lung cancers and not much else.” For example, a story that appeared in the New York Post (headline: “Google’s new AI is better at detecting lung cancer than doctors”)  declared that “AI is proving itself to be an incredible tool for improving lives” without presenting any evidence.

Google’s AI system can beat doctors at detecting breast cancer

A Google artificial intelligence system did a better job than human doctors at detecting breast cancer when assessing images from mammograms, according to a new ...

Philips and PathAI to improve breast cancer diagnosis with artificial intelligence

Philips and PathAI team up to improve breast cancer diagnosis using artificial intelligence technology in 'big data' pathology research. Royal Philips, a global ...

Deep Learning Algorithms for Detection of Lymph Node Metastases From Breast Cancer

A new study in JAMA reports on how accurate computer algorithms were at detecting the spread of cancer to lymph nodes in women with breast cancer ...

Google AI system beats doctors in detection tests for breast cancer

Via Godhuli News BD.

Google says AI can spot breast cancer better than humans

Google says its LYNA (Lymph Node Assistant) tool is 99 percent effective in detecting advanced breast cancer.

Google says its AI beats doctors at detecting breast cancer

Google says it built an AI program that detected breast cancer more accurately than doctors. Google Health Technical lead Shravya Shetty and Product Manager ...

Google AI could increase detection of breast cancer

New technology developed by a Google software engineer could pave the way for artificial intelligence to help doctors better detect cancer.

Artificial Intelligence: Google AI system outperforms experts in spotting breast cancer- NASA News


Google's AI can detect breast cancer from scans: Study

A new study has found that Google's artificial intelligence system can detect breast cancer from routine scans. It has proved to be more accurate than human ...