
Rise of Robot Radiologists

WHEN REGINA BARZILAY had a routine mammogram in her early 40s, the image showed a complex array of white splotches in her breast tissue.

Over the next four years the team taught a computer program to analyze mammograms from about 32,000 women of different ages and races and told it which women had been diagnosed with cancer within five years of the scan.

When Barzilay’s team ran the program on her own mammograms from 2012—ones her doctor had cleared—the algorithm correctly predicted she was at a higher risk of developing breast cancer within five years than 98 percent of patients.

The numerous researchers, start-up companies and scanner manufacturers designing AI programs hope they can improve the accuracy and timeliness of diagnoses, provide better treatment in developing countries and remote regions that lack radiologists, reveal new links between biology and disease, and even help to predict how soon a person will die.

AI applications are entering clinics at a rapid rate, and physicians have met the technology with equal parts excitement about its potential to reduce their workload and fear about losing their jobs to machines.

The advance has been largely driven by the development of deep-learning methods, in which a computer is given a set of images and then left to draw its own connections between them, ultimately developing a network of associations.

In medical imaging, this might, for example, involve telling the computer which images contain cancer and setting it free to find features common to those images but absent in cancer-free images.
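The supervised setup described above can be illustrated with a toy sketch. This is not how a real deep-learning system works internally (those learn hierarchies of features with convolutional networks trained on thousands of scans); the tiny four-pixel "images" and the pixel-ranking heuristic below are hypothetical stand-ins that show the core idea of finding features common to labeled cancer images but absent in cancer-free ones.

```python
# Toy illustration of supervised feature discovery: given images
# labeled cancer (1) / no cancer (0), rank pixel positions by how
# strongly their mean intensity differs between the two groups.
def discriminative_pixels(images, labels, top_k=2):
    """Return the top_k pixel indices that best separate the classes."""
    pos = [img for img, y in zip(images, labels) if y == 1]
    neg = [img for img, y in zip(images, labels) if y == 0]
    scores = []
    for i in range(len(images[0])):
        mean_pos = sum(img[i] for img in pos) / len(pos)
        mean_neg = sum(img[i] for img in neg) / len(neg)
        scores.append((abs(mean_pos - mean_neg), i))
    return [i for _, i in sorted(scores, reverse=True)[:top_k]]

# Four tiny 4-pixel "images"; only pixel 2 carries the class signal.
images = [[0.1, 0.2, 0.9, 0.1],
          [0.2, 0.1, 0.8, 0.2],
          [0.1, 0.2, 0.1, 0.1],
          [0.2, 0.1, 0.2, 0.2]]
labels = [1, 1, 0, 0]
print(discriminative_pixels(images, labels, top_k=1))  # [2]
```

A real network would learn far subtler, spatially distributed patterns, but the training signal is the same: the labels alone tell the program what to look for.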

One recent study of chest x-rays for collapsed lungs found that ordering physicians flag more than 60 percent of the scans as high priority, which suggests that radiologists might spend hours wading through nonserious cases before getting to those that are actually urgent.

Google, for instance, is using its computing power to develop AI algorithms that assemble two-dimensional CT slices of the lungs into a three-dimensional model and examine the entire structure to determine whether cancer is present.

Another Google algorithm can estimate a person's risk of cardiovascular disease by looking at a scan of their retinas, picking up on subtle changes related to blood pressure, cholesterol, smoking history and aging.

A 2019 paper in JAMA Network Open described a deep-learning algorithm trained on more than 85,000 chest x-rays from people enrolled in two large clinical trials that had tracked them for more than 12 years.

The researchers found that 53 percent of the people the AI put into a high-risk category died within 12 years, as opposed to 4 percent in the low-risk category.

The lead investigator, radiologist Michael Lu of Massachusetts General Hospital, says that the algorithm could be a helpful tool for assessing patient health if combined with a physician’s assessment and other data such as genetics.

The disconnect between the way computers and humans think is known as the black box problem: the idea that a computer brain operates in an obscured space that is inaccessible to humans.

They eventually figured out that instead of just analyzing the images, the algorithm was also factoring in the odds of a positive finding based on how common pneumonia was at each institution—not something they expected or wanted the program to do.

“If you naively train [an algorithm] at a hospital from one location, one time, and one population group, you’re unaware of all the thousands of little factors that models are taking into account.”

The solution, Finlayson says, is to train an algorithm with data from many locations and in diverse patient populations, then test it prospectively—without any modifications—in a new patient population.
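Finlayson's prescription amounts to splitting data by site rather than by record, so the model is evaluated on a population it has never seen. A minimal sketch (the hospital names and record fields here are hypothetical):

```python
# Hold out entire sites for testing instead of splitting records at
# random, so hospital-specific quirks can't leak into evaluation.
def split_by_site(records, test_sites):
    """Partition records so test sites never appear in training."""
    train = [r for r in records if r["site"] not in test_sites]
    test = [r for r in records if r["site"] in test_sites]
    return train, test

records = [
    {"site": "hospital_a", "label": 1},
    {"site": "hospital_a", "label": 0},
    {"site": "hospital_b", "label": 1},
    {"site": "hospital_c", "label": 0},
]
train, test = split_by_site(records, test_sites={"hospital_c"})

# No site appears on both sides of the split.
assert not {r["site"] for r in train} & {r["site"] for r in test}
```

A random record-level split would let the model exploit site-specific signals (like the pneumonia base rates described above) and look better than it really is.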

The team found that current standards for assessing breast cancer risk are much less accurate in African-American women, Barzilay says, because those standards were developed mostly using scans from white women: “I think we really are in a position to revamp this sad state of affairs.”

If an AI system leads a physician to make an incorrect diagnosis, the physician may not be able to explain why and the company’s data on the test’s methodology are likely to be a closely guarded trade secret.

The images that are publicly available tend to be poorly labeled or taken with old machines that are no longer in use, Rudin says, and without enormous, diverse data sets, algorithms tend to pick up confounding factors.

Among them is an expectation that producers keep an eye on how their algorithms change to ensure they continue to work as designed, along with a request that producers notify the agency if they see unexpected changes that might prompt reevaluation.

In 2012 technology venture capitalist and Sun Microsystems co-founder Vinod Khosla horrified a medical audience by predicting that algorithms would replace 80 percent of doctors, and more recently he claimed that radiologists still practicing in 10 years will be “killing patients.”

In 2015 only 86 percent of radiology resident positions in the U.S. were filled, compared with 94 percent the previous year, although those numbers have improved over the past several years.

In other words, even if an algorithm is better at diagnosing a particular problem, combining it with a physician’s experience and knowledge of the patient’s individual story will lead to a better outcome.

Still, Rao and others believe that the tools and training that radiologists receive, including their day-to-day work, will change drastically over the coming years as a result of artificial-intelligence algorithms.

For instance, physicians working in developing countries might not have access to the same kinds of scanners as a major medical institution in the U.S. or Europe, or to trained radiologists who can interpret scans.

Lungren’s group is developing a tool that allows doctors to take cell-phone pictures of an x-ray film—not the digital scans that are standard in wealthy nations—and run an algorithm on the photographs that detects problems such as tuberculosis.

Similarly, Langlotz adds that algorithms could one day analyze images while a patient is still in the scanner and predict the final outcome, thus reducing the amount of time and radiation exposure required to get a good image.

Artificial intelligence in healthcare

Artificial intelligence (AI) in healthcare is the use of complex algorithms and software to emulate human cognition in the analysis of complicated medical data.

What distinguishes AI technology from traditional technologies in health care is the ability to gain information, process it and give a well-defined output to the end-user.

AI algorithms behave differently from humans in two ways: (1) algorithms are literal: once a goal is set, an algorithm cannot adjust itself and understands only what it has been told explicitly; and (2) algorithms are black boxes: they can predict with great precision but offer little insight into how they reached their conclusions.

AI programs have been developed and applied to practices such as diagnosis processes, treatment protocol development, drug development, personalized medicine, and patient monitoring and care.

Health systems are also applying AI to support operational initiatives that increase cost saving, improve patient satisfaction, and satisfy their staffing and workforce needs.[8]

Related applications help healthcare managers improve business operations by increasing utilization, decreasing patient boarding, reducing length of stay and optimizing staffing levels.[9]

During this time, there was a recognition by researchers and developers that AI systems in healthcare must be designed to accommodate the absence of perfect data and build on the expertise of physicians.[14]

The ability of AI to interpret imaging results may aid clinicians in detecting minute changes in an image that a clinician might otherwise miss.

A study at Stanford created an algorithm that could detect pneumonia at that specific site, in those patients involved, with a better average F1 score (a statistical metric combining precision and recall) than the radiologists involved in that trial.[25]
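The F1 metric referenced above is the harmonic mean of precision and recall, computed from counts of true positives, false positives, and false negatives:

```python
# F1 = harmonic mean of precision and recall.
def f1_score(tp, fp, fn):
    precision = tp / (tp + fp)   # fraction of flagged cases that are real
    recall = tp / (tp + fn)      # fraction of real cases that were flagged
    return 2 * precision * recall / (precision + recall)

# Example: 8 pneumonia cases found, 2 false alarms, 2 missed cases.
print(round(f1_score(tp=8, fp=2, fn=2), 3))  # 0.8
```

Because it punishes both false alarms and missed cases, F1 is a common summary for diagnostic tasks where plain accuracy is misleading (most scans are negative).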

The emergence of AI technology in radiology is perceived as a threat by some specialists, because in isolated cases the technology has outperformed specialists on certain statistical metrics.[26][27]

Recent advances have suggested the use of AI to describe and evaluate the outcome of maxillo-facial surgery or the assessment of cleft palate therapy in regard to facial attractiveness or age appearance.[28][29]

In 2018, a paper published in the journal Annals of Oncology mentioned that skin cancer could be detected more accurately by an artificial intelligence system (which used a deep learning convolutional neural network) than by dermatologists.

One study conducted by the Centerstone research institute found that predictive modeling of EHR data has achieved 70–72% accuracy in predicting individualized treatment response at baseline.[citation needed]
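As an illustration of this kind of predictive modeling (not the Centerstone study's actual method, which is not described here), a minimal logistic-regression fit on toy tabular features, with accuracy measured the way such studies report it. The single feature and the data are hypothetical:

```python
# Minimal logistic regression via stochastic gradient descent,
# standing in for predictive modeling on tabular EHR-style features.
import math

def train_logreg(X, y, lr=0.5, epochs=500):
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1 / (1 + math.exp(-z))        # predicted probability
            g = p - yi                         # gradient of log-loss
            w = [wj - lr * g * xj for wj, xj in zip(w, xi)]
            b -= lr * g
    return w, b

def predict(w, b, xi):
    return 1 if sum(wj * xj for wj, xj in zip(w, xi)) + b > 0 else 0

# Toy data: one feature (e.g., a prior-episode count) drives response.
X = [[0.0], [0.2], [0.8], [1.0]]
y = [0, 0, 1, 1]
w, b = train_logreg(X, y)
acc = sum(predict(w, b, xi) == yi for xi, yi in zip(X, y)) / len(y)
print(acc)
```

Real studies fit far richer models on thousands of records and validate on held-out patients; the point here is only the shape of the pipeline: features in, fitted weights, accuracy out.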

To address the difficulty of tracking all known or suspected drug-drug interactions, machine learning algorithms have been created to extract information on interacting drugs and their possible effects from medical literature.

Efforts were consolidated in 2013 in the DDIExtraction Challenge, in which a team of researchers at Carlos III University assembled a corpus of literature on drug-drug interactions to form a standardized test for such algorithms.[40]

Other algorithms identify drug-drug interactions from patterns in user-generated content, especially electronic health records and/or adverse event reports.[36][37]
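Production DDI-extraction systems, including DDIExtraction Challenge entrants, use trained NLP models over annotated corpora. As a toy illustration of the underlying idea, the sketch below flags sentences in which two drugs from a known list co-occur with an interaction cue word; the drug lexicon and cue words are hypothetical:

```python
# Naive co-occurrence heuristic for drug-drug interaction candidates:
# a sentence mentioning two known drugs plus an interaction cue word
# is flagged as a candidate interaction pair.
import re

DRUGS = {"warfarin", "aspirin", "ibuprofen"}          # hypothetical lexicon
CUES = {"increases", "inhibits", "potentiates"}       # hypothetical cues

def candidate_interactions(text):
    pairs = set()
    for sentence in re.split(r"(?<=[.!?])\s+", text.lower()):
        words = set(re.findall(r"[a-z]+", sentence))
        found = sorted(DRUGS & words)
        if len(found) >= 2 and CUES & words:
            pairs.add(tuple(found[:2]))
    return pairs

text = "Aspirin potentiates the effect of warfarin. Ibuprofen is common."
print(candidate_interactions(text))  # {('aspirin', 'warfarin')}
```

Trained models improve on this by resolving drug-name variants and classifying the *type* of interaction rather than merely spotting co-mentions.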

DSP-1181, a drug molecule for treating obsessive-compulsive disorder (OCD), was designed by artificial intelligence through the joint efforts of Exscientia (a British start-up) and Sumitomo Dainippon Pharma (a Japanese pharmaceutical firm).

The drug's development took a single year, whereas pharmaceutical companies usually spend about five years on similar projects.

One motive for large health companies merging with other health companies is greater accessibility of health data.[44]

A second project with the NHS involves analysis of medical images collected from NHS patients to develop computer vision algorithms to detect cancerous tissues.[54]

Intel's venture capital arm Intel Capital recently invested in startup Lumiata which uses AI to identify at-risk patients and develop care options.[55]

A team associated with the University of Arizona and backed by BPU Holdings began collaborating on a practical tool to monitor anxiety and delirium in hospital patients, particularly those with dementia.[65]

The AI utilized in the new technology – Senior's Virtual Assistant – goes a step beyond and is programmed to simulate and understand human emotions (artificial emotional intelligence).[66]

Doctors working on the project have suggested that in addition to judging emotional states, the application can be used to provide companionship to patients in the form of small talk, soothing music, and even lighting adjustments to control anxiety.

Virtual nursing assistants are predicted to become more common; these will use AI to answer patients' questions and help reduce unnecessary hospital visits.

Overall, as Quan-Haase (2018) says, technology “extends to the accomplishment of societal goals, including higher levels of security, better means of communication over time and space, improved health care, and increased autonomy” (p. 43).

While research on the use of AI in healthcare aims to validate its efficacy in improving patient outcomes before its broader adoption, its use may nonetheless introduce several new types of risk to patients and healthcare providers, such as algorithmic bias, do-not-resuscitate implications, and other machine-morality issues.

“We already have some scientists who know artificial intelligence and machine learning, but we want complementary people who can look forward and see how this technology will evolve.”[76]

As of November 2018, eight use cases are being benchmarked, including assessing breast cancer risk from histopathological imagery, guiding anti-venom selection from snake images, and diagnosing skin lesions.[78][79]

Event Title: Public Workshop - Evolving Role of Artificial Intelligence in Radiological Imaging, February 25-26, 2020

The intent of this public workshop is to discuss emerging applications of Artificial Intelligence (AI) in radiological imaging including AI devices intended to automate the diagnostic radiology workflow as well as guided image acquisition.

Artificial intelligence (AI), including machine learning technologies, has the potential to transform healthcare by deriving new and important insights from the vast amount of data generated during the delivery of health care every day.

The potential for independent action by these devices to bypass human clinical review is an important factor in their benefit-risk profile, and it heightens expectations for the safety and effectiveness of these devices.

Clinical AI applications may assist the acquisition of standardized images independent of the operator, guiding both sonographers and non-experts in sonography, potentially including lay users, to acquire images with equivalent diagnostic quality.

The addition of such clinical AI applications and the potential for new users of these devices similarly affect the benefit-risk profiles of these devices and the expectations for their safety and effectiveness.

In this workshop, FDA is also seeking innovative and consistent ways to leverage existing methods and to develop new methods for validation of these AI-based algorithms, and to explore opportunities for stakeholder collaboration in these efforts. During the workshop, we will be discussing specific topics outlined in the Agenda below.

The general sessions of the workshop will be webcast, and the link will be posted on the website after the workshop. The following public workshop Agenda is current as of 1/30/2020 and subject to change.

All requests to make oral presentations must be received by January 10, 2020, 4 p.m. If selected for presentation, FDA will notify presenters by January 24, 2020, and presentation materials must be emailed to RadAIWorkshop2020@fda.hhs.gov by February 5, 2020 to present at the event.

Submit comments to Docket No. FDA-2019-N-5592 by March 26, 2020. Please refer to the instructions for submitting comments to the docket to ensure that your feedback is received. Please be advised that as soon as a transcript is available, it will be posted in the Dockets and accessible at http://www.regulations.gov. For questions regarding workshop content please contact: Jennifer A.
