AI News, Artificial intelligence predicts patient lifespans
- On Tuesday, June 5, 2018
The research, now published in the Nature journal Scientific Reports, has implications for the early diagnosis of serious illness and for medical intervention.
'Although for this study only a small sample of patients was used, our research suggests that the computer has learnt to recognise the complex imaging appearances of diseases, something that requires extensive training for human experts,' Dr Oakden-Rayner says.
'Instead of focusing on diagnosing diseases, the automated systems can predict medical outcomes in a way that doctors are not trained to do, by incorporating large volumes of data and detecting subtle patterns,' Dr Oakden-Rayner says.
'Our research opens new avenues for the application of artificial intelligence technology in medical image analysis, and could offer new hope for the early detection of serious illness, requiring specific medical interventions.'
Man vs. machine?
“There’s initially always going to be some wincing and anxiety among pathologists and radiologists over this idea—that our computational imaging technology can outperform us or even take our jobs,” said Madabhushi, whose center has made significant diagnostic advances in cardiovascular disease and also brain, lung, breast, prostate and head and neck cancers since opening in 2012.
Since 2016, Madabhushi and his team have received over $9.5 million from the National Cancer Institute to develop computational tools for analysis of digital pathology images of breast, lung, and head and neck cancers to identify which patients with these diseases could be spared aggressive radiotherapy or chemotherapy.
For instance, the tools could help reduce the amount of time spent on cases with no obvious disease or obviously benign conditions, and instead help clinicians focus on the more confounding cases. Those tools have been producing exceptionally accurate results at Madabhushi’s Center for Computational Imaging and Personalized Diagnostics (CCIPD) at Case Western Reserve.
“But we really believe this is more evidence of what computational imaging of pathology and radiology images can do for cardiovascular and cancer research, and for practical use among pathologists and radiologists.” So what exactly are these supercomputers doing that humans can’t, to create such a wide margin in diagnostic success?
Precision Radiology: Predicting longevity using feature engineering and deep learning methods in a radiomics framework
To evaluate the base predictive value of medical imaging using traditional human feature engineering image analysis and deep learning techniques, we performed a retrospective case-control study, with matching used to control for non-imaging clinical and demographic variables that were expected to be highly predictive of five-year mortality.
Participants were excluded based on the following criteria: acute visible disease identified on CT chest by an expert radiologist, metallic artefact on CT chest, and active cancer diagnosis (which would strongly bias survival time).
24 controls were matched on age, gender, and source of imaging referral (emergency, inpatient or outpatient departments), for a total of 48 image studies included (24 who died within five years of imaging, and 24 who survived beyond five years).
Thick section (5 mm slice-thickness) DICOM images were annotated by a radiologist using semi-automated segmentation tools contained in the Vitrea software suite (Vital Images, Toshiba group), with separate binary mask annotations of the following tissues: muscle, body fat, aorta, vertebral column, epicardial fat, heart, and lungs.
The texture based features include the first and second order matrix statistics of the grey level co-occurrence (GLCM), grey level run length (GLRLM), grey level size zone (GLSZM), and multiple grey level size zone (MGLSZM) matrices.
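The co-occurrence statistics mentioned above can be illustrated in a few lines of numpy. This is a minimal sketch of a single-offset GLCM with two derived statistics (contrast and energy); the study computed a far larger set of texture features across multiple matrix types, and the toy image, offset, and grey-level count here are illustrative only.

```python
import numpy as np

def glcm(img, levels, dx=1, dy=0):
    """Grey level co-occurrence matrix for one pixel offset (dx, dy),
    normalised to joint probabilities."""
    h, w = img.shape
    m = np.zeros((levels, levels), dtype=np.float64)
    for y in range(h - dy):
        for x in range(w - dx):
            m[img[y, x], img[y + dy, x + dx]] += 1
    return m / m.sum()

def glcm_contrast(p):
    # Weighted squared grey-level difference: high for abrupt transitions.
    i, j = np.indices(p.shape)
    return float(((i - j) ** 2 * p).sum())

def glcm_energy(p):
    # Sum of squared probabilities: high for uniform textures.
    return float((p ** 2).sum())

# Tiny 4-level toy "image" with four homogeneous quadrants
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [2, 2, 3, 3],
                [2, 2, 3, 3]])
p = glcm(img, levels=4)
print(glcm_contrast(p), glcm_energy(p))  # contrast ~0.333, energy ~0.167
```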
Due to the presence of IV contrast in the images, results were excluded when the average density of the heart or aorta segment was higher than the lower bound of the calcification density bin (for example, when the aortic average density was 250 Hounsfield units (HU), we excluded “calcification” pixel values below 300 HU).
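The contrast-aware exclusion rule above can be sketched directly: pick the lowest density bin whose lower bound still sits above the segment's mean density, so contrast-enhanced blood is not counted as calcium. The bin cut-offs below are illustrative values, not the study's exact thresholds.

```python
import numpy as np

def calcification_pixels(segment_hu, bin_lower_bounds=(130, 200, 300, 400)):
    """Return candidate calcification pixels from a vessel/heart segment.

    Bins whose lower bound does not exceed the segment's mean density
    (inflated by IV contrast) are discarded, mirroring the rule described
    in the text. `bin_lower_bounds` are illustrative Hounsfield-unit values.
    """
    mean_hu = segment_hu.mean()
    valid = [b for b in bin_lower_bounds if b > mean_hu]
    if not valid:  # every bin is contaminated by contrast density
        return np.array([], dtype=segment_hu.dtype)
    return segment_hu[segment_hu >= min(valid)]

# Aortic segment averaging 310 HU: bins at or below 300 HU are skipped,
# so only pixels >= 400 HU are treated as calcification candidates.
seg = np.array([200.0, 250.0, 250.0, 350.0, 500.0])
print(calcification_pixels(seg))
```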
Bone mineral density33 was quantified by first order statistical analysis of the intensity (density) of medullary bone, after the exclusion of cortical bone from the vertebral column segment using a density threshold.
This task is quite different from the analysis of large organs and tissues, and we expected that the variation in these features over space would be useful in the prediction of disease (for example, the craniocaudal distribution of low attenuation areas in emphysema).
The division of features between tissues was as follows: 2506 (aorta), 2506 (heart), 2236 (lungs), 2182 (epicardial fat), 2182 (body fat), 2182 (muscle), 2416 (bone), where 1310 of the total 16,210 features represent evidence-based features.
The exploration of the feature distributions and the mortality risk score survival models were both performed on the entire dataset (training and testing data), as neither process involved model selection or hyperparameter tuning.
Mortality classification was performed to assess the predictive capacity of the engineered features using a variety of standard classifiers including linear and non-linear support vector machines, random forests, and various boosted tree methods.
To reduce the complexity of the problem, the large CT volumes (512 × 512 pixels, with 50–70 slices per case depending on the length of the patients’ lungs) were downsampled to 64 × 64 × 11 volumes using bicubic interpolation.
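The downsampling step can be sketched in numpy. A nearest-neighbour index grid is used here as a lightweight stand-in for the paper's bicubic interpolation (which, for example, `scipy.ndimage.zoom` with `order=3` would provide); the output shape matches the 64 × 64 × 11 volumes described above.

```python
import numpy as np

def downsample_volume(vol, out_shape=(64, 64, 11)):
    """Resize a CT volume by sampling a regular index grid along each axis.

    Nearest-neighbour sampling is a simple stand-in for the bicubic
    interpolation used in the study, which smooths rather than picks."""
    idx = [np.linspace(0, s - 1, o).round().astype(int)
           for s, o in zip(vol.shape, out_shape)]
    return vol[np.ix_(*idx)]

# A 512 x 512 volume with 60 slices, as in the described range of 50-70
vol = np.random.rand(512, 512, 60)
small = downsample_volume(vol)
print(small.shape)  # (64, 64, 11)
```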
The addition of the seven binary segmentation masks as channel inputs in the model was intended to promote the learning of anatomy based models for the prediction task: the distribution of tissues which contain predictive but very different features.
We assess the predictive performance of the feature engineering and deep learning methodologies based on a 6-fold cross-validation experiment, where we form six training sets, each containing 40 cases, and six testing sets, each with eight cases, with no overlap between training and testing sets in any fold.
The classification performance is measured using the receiver operating characteristic (ROC) curve and area under the ROC curve (AUC)64 using the classifier confidence on the 5-year mortality classification, as well as the mean accuracy across the 6 experiments.
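The AUC used above can be computed directly from classifier confidences via its rank-statistic (Mann-Whitney) interpretation: the probability that a randomly chosen positive case is scored above a randomly chosen negative one. A minimal numpy sketch, with toy scores:

```python
import numpy as np

def roc_auc(scores, labels):
    """AUC as the probability that a random positive outranks a random
    negative (Mann-Whitney U), counting ties as half a win."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    pos, neg = scores[labels], scores[~labels]
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

# Perfect ranking of two positives above two negatives
print(roc_auc([0.9, 0.8, 0.3, 0.1], [1, 1, 0, 0]))  # 1.0
```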
Prediction of recurrence in early stage non-small cell lung cancer using computer extracted nuclear features from digital HE images
All experimental protocols in the study were approved by the University Hospitals Cleveland Medical Center (UHCMC) IRB (IRB# NHR-15–55), were not classified as “human subject research” according to Federal Regulations, and were considered HIPAA Exempt.
The deidentified tissue samples were obtained under an existing IRB-approved protocol at the Cleveland Clinic (IRB# 14–562) with Dr. Vamsidhar Velcheti as the PI which allows the use of radiographic images, histologic slides, and archival tissue available at the Clinic since 01/01/1990.
TMAs were produced by standard procedures from surgical specimens and digitally scanned at 20x; each patient's sample was represented by a 1500 pixel × 1500 pixel image.
A deep learning approach previously presented in refs. 24,25 was applied to accurately segment individual nuclei in each of the TMA spots, both in the training and validation sets.
We adapted popular convolutional neural networks (CNNs) into an adaptive-resolution architecture, which identifies pixels at low magnification that are likely to be nuclei and investigates only those pixels at high magnification, obviating the computational burden of examining all pixels at the high magnification24.
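The two-stage idea (screen cheaply at low magnification, escalate only candidate regions to the expensive high-magnification analysis) can be sketched in numpy. The block size and intensity threshold below are illustrative, and a simple darkness test stands in for the low-magnification CNN; in the paper both stages are learned models.

```python
import numpy as np

def candidate_blocks(img, block=4, thresh=0.5):
    """Stage 1: average-pool the image to a coarse, low-magnification view
    and flag blocks dark enough to plausibly contain nuclei (nuclei stain
    darker than background). Stage 2 (the expensive high-magnification
    model) would then run only on the flagged blocks."""
    h, w = img.shape
    coarse = img.reshape(h // block, block, w // block, block).mean(axis=(1, 3))
    mask = coarse < thresh
    flagged = np.argwhere(mask)  # (row, col) of blocks to examine at full res
    return flagged, mask.mean()  # fraction of the image escalated to stage 2

# Toy image: bright background with one dark 4x4 corner "nucleus"
img = np.ones((8, 8))
img[:4, :4] = 0.1
flagged, frac = candidate_blocks(img)
print(flagged, frac)  # only block (0, 0) escalated; 1/4 of the image
```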
To alleviate the issue of batch effects, a term that refers to variances shared by a set of specimens undergoing similar preparation steps (e.g., staining and digitization30), color normalization was applied to all the images using the non-linear spline mapping approach described in ref.31.
Feature selection was employed to identify, from within the larger set of 242 total features, the subset most discriminating between patients with early disease recurrence and those with none.
A variant of the minimum redundancy maximum relevance (mRMR)32 approach, a feature selection method that uses mutual information as a similarity measure, was employed to find a subset of the most discriminative features.
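The greedy mRMR scheme can be sketched as follows: at each step, add the feature with the highest relevance to the outcome minus its average redundancy with already-selected features. Absolute Pearson correlation is used below as a cheap stand-in for the mutual-information similarity of the paper's variant, and the data are toy values.

```python
import numpy as np

def mrmr(X, y, k):
    """Greedy minimum-redundancy maximum-relevance feature selection.

    |Pearson correlation| stands in for mutual information: `relevance`
    scores each feature against the outcome, `redundancy` is its mean
    similarity to features already selected."""
    n_feat = X.shape[1]
    relevance = np.array([abs(np.corrcoef(X[:, j], y)[0, 1])
                          for j in range(n_feat)])
    selected = [int(np.argmax(relevance))]  # start with the most relevant
    while len(selected) < k:
        best, best_score = None, -np.inf
        for j in range(n_feat):
            if j in selected:
                continue
            redundancy = np.mean([abs(np.corrcoef(X[:, j], X[:, s])[0, 1])
                                  for s in selected])
            score = relevance[j] - redundancy
            if score > best_score:
                best, best_score = j, score
        selected.append(best)
    return selected

# Feature 1 duplicates feature 0, so mRMR prefers the weaker but
# non-redundant feature 2 as the second pick.
X = np.array([[0, 0, 0], [0, 0, 0], [0, 0, 1], [0, 0, 0],
              [1, 1, 1], [1, 1, 1], [1, 1, 0], [0, 0, 1]], dtype=float)
y = np.array([0, 0, 0, 0, 1, 1, 1, 1], dtype=float)
print(mrmr(X, y, 2))  # [0, 2]
```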
Additionally, the Kaplan-Meier method33 was used to correlate recurrence-free survival (RFS) with the best classification results. RFS was measured from the date of diagnosis to the date of death or disease recurrence, whichever occurred first, and was censored at the date last seen for those still alive without recurrence.
A multivariable Cox proportional hazards regression model35 was employed to test the independent predictive capability of the classifier on recurrence-free survival after taking major clinical parameters into account.
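The Kaplan-Meier estimator described above can be written in a few lines of numpy. Events here are recurrence or death; censored patients (alive without recurrence at last follow-up) remain in the risk set until their censoring time. The times below are toy values for illustration.

```python
import numpy as np

def kaplan_meier(times, events):
    """Kaplan-Meier survival curve.

    `events` is 1 for recurrence/death, 0 for censoring at last follow-up.
    Returns the distinct event times and the survival probability S(t)
    immediately after each one."""
    times = np.asarray(times, dtype=float)
    events = np.asarray(events, dtype=int)
    uniq = np.unique(times[events == 1])
    surv, s = [], 1.0
    for t in uniq:
        at_risk = (times >= t).sum()              # still under observation
        d = ((times == t) & (events == 1)).sum()  # events exactly at t
        s *= 1.0 - d / at_risk
        surv.append(s)
    return uniq, np.array(surv)

# Six patients: four events, two censored (at t=3 and t=8)
t, s = kaplan_meier([2, 3, 3, 5, 8, 8], [1, 1, 0, 1, 1, 0])
print(t, s)  # survival drops at each event time
```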
- On Wednesday, February 26, 2020
The Future of Machine Learning in Clinical Imaging
0:15 - Intro to Machine Learning - Marc Kohli; 15:51 - Training Computers to "Look" at X-rays Using Deep Learning - Andrew Taylor ...
Deep Neural Networks in Medical Imaging and Radiology
A Google TechTalk, 5/11/17, presented by Le Lu ABSTRACT: Deep Neural Networks in Medical Imaging and Radiology: Preventative and Precision Medicine ...
Artificial intelligence predicts patient lifespans | QPT
A computer's ability to predict a patient's lifespan simply by looking at images of their organs is a step closer to becoming a reality. Researchers from the ...
Deep Learning in Medical Imaging - Ben Glocker #reworkDL
Machines capable of analysing and interpreting medical scans with super-human performance are within reach. Deep learning, in particular, has emerged as a ...
Artificial Intelligence Can Change the future of Medical Diagnosis | Shinjini Kundu | TEDxPittsburgh
Medical diagnosis often is based on information visible from medical images. But what if machine learning and artificial intelligence could help our doctors gain ...
Medical Visualization: Medical Imaging on the Microsoft Platform
Rick Benge chairs this session at Faculty Summit 2011, which includes the following presentations. - Inner Eye: Toward a Computational Platform for Imaging ...
InnerEye: Medical Image Research in the Hospital
Analysis of medical images is essential in modern medicine. With the increasing amount of patient data, new challenges and opportunities arise for different ...
Crohn's disease: the inside story - futuris
Around 700,000 Europeans are diagnosed with Crohn's disease each year. Specialists at University College hospital in central ...
Quantum Medical Research and Treatment - Jorg Wrachtrup
In an interview at the Institute for Quantum Computing, experimental physicist Jorg Wrachtrup from Universität Stuttgart discusses current research and potential ...
Deep Learning for Predicting Glioblastoma Subtypes from MRI. Peter Chang, MD
This talk was delivered at the 2016 i2i Workshop hosted by the Center for Advanced Imaging Innovation & Research (CAI2R) at NYU School of Medicine.