AI News: Artificial intelligence speeds brain tumor diagnosis

Computer-aided diagnosis

Computer-aided detection (CADe), also called computer-aided diagnosis (CADx), refers to systems that assist doctors in the interpretation of medical images.

Imaging techniques in X-ray, MRI, and ultrasound diagnostics yield a great deal of information that the radiologist or other medical professional has to analyze and evaluate comprehensively in a short time.

CAD systems process digital images, searching for typical appearances and highlighting conspicuous regions such as possible disease, in order to offer input that supports the professional's decision.

CAD is an interdisciplinary technology combining elements of artificial intelligence and computer vision with radiological and pathology image processing.

For instance, some hospitals use CAD to support preventive medical check-ups in mammography (diagnosis of breast cancer), the detection of polyps in the colon, and lung cancer.

Computer-aided simple triage (CAST) is another type of CAD, which performs a fully automatic initial interpretation and triage of studies into meaningful categories (e.g. negative versus positive).

Although CAD has been used in clinical environments for over 40 years, it usually does not substitute for the doctor or other professional, but rather plays a supporting role.

However, the goal of some CAD systems is to detect the earliest signs of abnormality in patients that human professionals cannot, as in diabetic retinopathy and architectural distortion in mammograms.[2][3]

In the late 1950s, with the dawn of modern computers, researchers in various fields started exploring the possibility of building computer-aided medical diagnostic (CAD) systems.

These first CAD systems used flow-charts, statistical pattern-matching, probability theory or knowledge bases to drive their decision making process.[7]

Beginning in the early 1970s, some of the earliest CAD systems in medicine, often referred to as "expert systems", were developed and used mainly for educational purposes.

Around the same time, it became clear that there were limitations, but also potential opportunities, in developing algorithms to solve groups of important computational problems.[7]

As a result of the new understanding of the algorithmic limitations that Karp identified in the early 1970s, researchers began to realize the serious limitations of CAD and expert systems in medicine.[7]

Thus, by the late 1980s and early 1990s the focus shifted to data mining approaches, with the aim of building more advanced and flexible CAD systems.

In the following years, several commercial CAD systems for analyzing mammography, breast MRI, and medical imaging of the lung, colon, and heart also received FDA approval.

Some challenges are related to various algorithmic limitations in the procedures of a CAD system including input data collection, preprocessing, processing and system assessments.

Algorithms are generally designed to select a single likely diagnosis, thus providing suboptimal results for patients with multiple, concurrent disorders.[21]

The massive availability of data, and the need to analyze it, also makes big data one of the biggest challenges that CAD systems face today.

CAD is used in the diagnosis of breast cancer, lung cancer, colon cancer, prostate cancer, bone metastases, coronary artery disease, congenital heart defect, pathological brain detection, Alzheimer's disease, and diabetic retinopathy.

A 2008 systematic review of computer-aided detection in screening mammography concluded that CAD does not have a significant effect on the cancer detection rate, but does undesirably increase the recall rate (i.e. the proportion of screened women called back for further assessment).

In the diagnosis of lung cancer, computed tomography with dedicated three-dimensional CAD systems is established and considered an appropriate second opinion.[24]

A number of researchers have developed CAD systems for the detection of lung nodules (round lesions less than 30 mm) in chest radiographs.[26][27][28]

CAD is available for the automatic detection of significant (causing more than 50% stenosis) coronary artery disease in coronary CT angiography (CCTA) studies.[citation needed]

Murmurs (irregular heart sounds caused by blood flowing through a defective heart) can be detected with high sensitivity and specificity.
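Sensitivity and specificity, the metrics usually reported for such detectors, follow directly from a confusion matrix. A minimal illustration with hypothetical counts (the numbers below are invented, not from any cited study):

```python
# Sensitivity and specificity from hypothetical murmur-detector counts.
tp, fn = 95, 5    # murmurs correctly flagged / missed
fp, tn = 8, 192   # healthy recordings wrongly flagged / correctly cleared

sensitivity = tp / (tp + fn)  # fraction of true murmurs detected
specificity = tn / (tn + fp)  # fraction of healthy cases correctly cleared

print(f"sensitivity = {sensitivity:.2f}")  # 0.95
print(f"specificity = {specificity:.2f}")  # 0.96
```

A detector is "high sensitivity and specificity" when both values are close to 1.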

The feature vector of each image is created from the magnitudes of Slantlet-transform outputs at six spatial positions chosen according to a specific logic.[39]

In 2010, Wang and Wu presented a feed-forward neural network (FNN) based method to classify a given MR brain image as normal or abnormal.

In 2011, Wu and Wang proposed using DWT for feature extraction, PCA for feature reduction, and FNN with scaled chaotic artificial bee colony (SCABC) as classifier.[41]
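The front half of that pipeline (wavelet features, then PCA reduction) can be sketched as below. This is a hedged stand-in, not the authors' implementation: it uses a single-level Haar approximation sub-band computed directly in NumPy and PCA via SVD, and omits the SCABC-trained FNN classifier entirely; the random images are placeholders for real MR slices.

```python
import numpy as np

def haar_dwt2(img):
    """One level of a 2-D Haar wavelet transform; returns the
    low-frequency approximation (LL) sub-band used as features."""
    h = (img[0::2, :] + img[1::2, :]) / 2.0   # average row pairs
    ll = (h[:, 0::2] + h[:, 1::2]) / 2.0      # average column pairs
    return ll

def pca_reduce(features, k):
    """Project feature vectors onto the top-k principal components."""
    centered = features - features.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:k].T

# toy "MR slices": 10 random 64x64 images (placeholders for real scans)
rng = np.random.default_rng(0)
images = rng.random((10, 64, 64))

feats = np.array([haar_dwt2(im).ravel() for im in images])  # 10 x 1024
reduced = pca_reduce(feats, k=5)                            # 10 x 5
print(reduced.shape)  # (10, 5)
```

The reduced vectors would then be fed to a classifier such as the FNN described in the text.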

One study found that a kernel support vector machine decision tree achieved 80% classification accuracy, with an average computation time of 0.022 s per image classification.[50]

The trained FCN achieved high precision and recall in semantic segmentation of naive digital whole slide images (WSIs), correctly identifying NFT objects using a SegNet model trained for 200 epochs.

The FCN reached near-practical efficiency, with an average processing time of 45 min per WSI per graphics processing unit (GPU), enabling reliable and reproducible large-scale detection of NFTs.

Measured on test data of eight naive WSIs across various tauopathies, the model achieved a recall, precision, and F1 score of 0.92, 0.72, and 0.81, respectively.[51]
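The reported F1 score is the harmonic mean of precision and recall, which can be verified directly from the two reported values:

```python
# F1 is the harmonic mean of precision and recall
recall, precision = 0.92, 0.72  # values reported in the study

f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 2))  # 0.81, matching the reported F1 score
```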

Commercial CADx systems for the diagnosis of bone metastases in whole-body bone scans and coronary artery disease in myocardial perfusion images exist.[58]

With high sensitivity and an acceptable false-detection rate, computer-aided automatic lesion detection has been shown to be useful and will probably, in the future, help nuclear medicine physicians identify possible bone lesions.[59]

According to the 2014 review, this technique was the most frequently used, appearing in 11 of the 40 recently published (since 2011) primary research articles.[62]

At the end of the processing, areas that were dark in the input image would be brightened, greatly enhancing the contrast among the features present in the area.

In contrast, exudates, which appear yellow in a normal image, are transformed into bright white spots after green filtering.
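The green-filtering step described above amounts to keeping only the green channel of the RGB fundus photograph before looking for bright exudate candidates. A minimal NumPy sketch, where the image is a synthetic placeholder rather than real fundus data and the threshold of 200 is an arbitrary illustrative choice:

```python
import numpy as np

# synthetic 4x4 RGB "fundus" image; a real input would be a loaded photo
rng = np.random.default_rng(1)
rgb = rng.integers(0, 256, size=(4, 4, 3), dtype=np.uint8)

green = rgb[:, :, 1]       # keep only the green channel
candidates = green > 200   # exudates appear as bright spots after
                           # green filtering, so threshold for brightness
print(green.shape, candidates.dtype)  # (4, 4) bool
```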

This technique is widely used according to the 2014 review, appearing in 27 of the 40 articles published in the preceding three years.[62]

Non-uniform illumination can be corrected by modifying each pixel's intensity using the known original intensity (f), the local average intensity (λ), and the desired average intensity (μ), giving the corrected intensity f′ = f + μ − λ.[67]
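A minimal sketch of this correction, assuming λ is estimated with a local mean filter; the window size and desired mean here are arbitrary illustrative choices, not values from the cited work:

```python
import numpy as np

def correct_illumination(f, mu=128.0, win=3):
    """Shift each pixel by (desired mean - local mean): f' = f + mu - lambda."""
    pad = win // 2
    padded = np.pad(f.astype(float), pad, mode="edge")
    lam = np.empty_like(f, dtype=float)
    for i in range(f.shape[0]):
        for j in range(f.shape[1]):
            # local average intensity (lambda) over a win x win window
            lam[i, j] = padded[i:i + win, j:j + win].mean()
    return f + mu - lam

# uniform test patch: local mean equals 100 everywhere,
# so every pixel shifts to the desired mean of 128
img = np.full((3, 3), 100.0)
out = correct_illumination(img, mu=128.0)
print(out)  # all entries 128.0
```

Dark regions (where λ is low) are brightened more than already-bright regions, which is what produces the contrast enhancement described above.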

In order to segregate blood-vessel information from the rest of the eye image, an SVM algorithm learns, in a supervised setting, support vectors defining a boundary that separates blood-vessel pixels from the rest of the image.
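The idea can be sketched with a self-contained linear SVM trained by sub-gradient descent on the hinge loss. Everything here is an assumption for illustration: the two per-pixel features (green intensity, local contrast) and the synthetic clusters stand in for the richer hand-crafted features and library SVM implementations a real vessel-segmentation system would use.

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, lr=0.1, epochs=200):
    """Linear SVM via sub-gradient descent on the regularized hinge loss.
    X: (n, d) pixel feature vectors; y: labels in {-1, +1}
    (+1 = vessel pixel, -1 = background)."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)
        mask = margins < 1  # points inside or violating the margin
        grad_w = lam * w - (y[mask, None] * X[mask]).sum(axis=0) / n
        grad_b = -y[mask].sum() / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# synthetic per-pixel features: vessels are dark and high-contrast,
# background is bright and low-contrast (illustrative clusters only)
rng = np.random.default_rng(2)
vessel = rng.normal([0.2, 0.8], 0.05, size=(50, 2))
backgr = rng.normal([0.7, 0.2], 0.05, size=(50, 2))
X = np.vstack([vessel, backgr])
y = np.r_[np.ones(50), -np.ones(50)]

w, b = train_linear_svm(X, y)
pred = np.sign(X @ w + b)
print((pred == y).mean())  # well-separated toy data -> accuracy near 1.0
```

Applied pixel-by-pixel to a fundus image, the learned boundary yields a binary vessel mask.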

The deep learning revolution of the 2010s has already produced AIs that are more accurate in many areas of visual diagnosis than radiologists and dermatologists, and this gap is expected to grow.

In contrast, many economists and artificial intelligence experts believe that fields such as radiology will be massively disrupted, with unemployment or downward pressure on the wages of radiologists.

Geoffrey Hinton, the 'Godfather of deep learning', argues that (in view of the likely advances expected in the next five or ten years) hospitals should immediately stop training radiologists, as their time-consuming and expensive training on visual diagnosis will soon be mostly obsolete, leading to a glut of traditional radiologists.[72][73]

An op-ed in JAMA argues that pathologists and radiologists should merge into a single "information specialist" role, and states that "To avoid being replaced by computers, radiologists must allow themselves to be displaced by computers."