Automating Breast Cancer Detection with Deep Learning
iSono Health is a startup company committed to developing an affordable, automated ultrasound imaging platform to facilitate monthly self-monitoring for women to help with early breast cancer detection.
In 2017, roughly 255,180 new cases of invasive breast cancer were expected to be diagnosed in the U.S., along with 40,610 breast cancer-related deaths.
Data Overview

The raw dataset (courtesy of iSono Health) contains 2,684 labeled 2-D breast ultrasound images in JPEG format:
- Benign cases: 1,007 (12 subtypes)
- Malignant cases: 1,499 (13 subtypes)
- Unusual cases: 178 (3 subtypes)

Most images are 300 x 225 pixels, with each pixel taking a value from 0 to 255.
Specifically, I rotated each image by a small random angle between -10° and 10°, repeating the process 12 times per image, so the 1,920 original training images yielded 1,920 x 12 = 23,040 augmented images.
Because the region of interest (the lesion and its surroundings) lies near the center of almost every image, it is safe to crop the images to 200 x 200 pixels, which also removes the padding introduced by rotation.
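The augmentation described above (small random rotations followed by a 200 x 200 center crop) can be sketched as follows. This is an illustrative sketch using Pillow, not iSono Health's actual pipeline, and the uniform gray placeholder merely stands in for a real 300 x 225 ultrasound frame:

```python
import random
from PIL import Image

def augment(img, n_rotations=12, max_deg=10.0, crop_size=200):
    """Rotate by small random angles, then center-crop to remove
    the padding introduced at the borders by rotation."""
    out = []
    for _ in range(n_rotations):
        angle = random.uniform(-max_deg, max_deg)
        rotated = img.rotate(angle, resample=Image.BILINEAR)
        w, h = rotated.size
        left, top = (w - crop_size) // 2, (h - crop_size) // 2
        out.append(rotated.crop((left, top, left + crop_size, top + crop_size)))
    return out

# placeholder standing in for one 300 x 225 grayscale ultrasound frame
frame = Image.new("L", (300, 225), color=128)
crops = augment(frame)
print(len(crops), crops[0].size)  # 12 (200, 200)
```

Applied to all 1,920 training images, this produces the 23,040-image augmented set described above.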
The holdout test and validation sets were separated from the training set before augmentation, so no original image appeared in more than one group.
Nevertheless, engineering effective features is problem-specific and depends heavily on the quality of each intermediate result in the image-processing pipeline, which often requires many rounds of trial-and-error design and case-by-case user intervention.
Nature recently reported work on classifying skin cancer with deep convolutional neural networks, which demonstrated a level of competence comparable to dermatologists.
While there is no precise definition of what "deep" means, it is the length of the chain of causal connections from input to output, rather than any fixed layer count, that shapes the "depth" of deep learning architectures.
The fully connected neural network has one input layer, three hidden layers with 512, 256, and 128 nodes respectively, and one output layer with two outputs.
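A forward pass through this architecture can be sketched in NumPy. The layer widths (512/256/128, two outputs) come from the description above; the ReLU and softmax activations, the weight initialization, and the downscaled 100 x 100 input (for brevity, in place of the full 200 x 200 crop) are assumptions of this sketch, not details from the post:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# flattened input, three hidden layers of 512/256/128 units,
# and a two-way (benign vs. malignant) output
sizes = [100 * 100, 512, 256, 128, 2]
weights = [rng.normal(0, 0.01, (m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

def forward(x):
    for W, b in zip(weights[:-1], biases[:-1]):
        x = relu(x @ W + b)
    return softmax(x @ weights[-1] + biases[-1])

probs = forward(rng.random((1, 100 * 100)))
print(probs.shape)  # (1, 2); each row sums to 1
```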
Lowering the threshold increases sensitivity and reduces false negatives, but there is a delicate trade-off: false positives also carry major consequences, especially with regard to preventive mastectomy.
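The trade-off is easy to see on a toy confusion-matrix computation. The probabilities and labels below are purely illustrative, not results from the post:

```python
import numpy as np

# hypothetical predicted malignancy probabilities and true labels (1 = malignant)
probs  = np.array([0.95, 0.80, 0.60, 0.40, 0.30, 0.20, 0.55, 0.10])
labels = np.array([1,    1,    1,    0,    1,    0,    0,    0])

def confusion_at(threshold):
    pred = probs >= threshold
    tp = np.sum(pred & (labels == 1))   # true positives
    fn = np.sum(~pred & (labels == 1))  # false negatives
    fp = np.sum(pred & (labels == 0))   # false positives
    tn = np.sum(~pred & (labels == 0))  # true negatives
    return tp, fn, fp, tn

for t in (0.5, 0.25):
    tp, fn, fp, tn = confusion_at(t)
    print(f"threshold={t}: sensitivity={tp / (tp + fn):.2f}, false positives={fp}")
```

On this toy data, dropping the threshold from 0.5 to 0.25 raises sensitivity from 0.75 to 1.00 but doubles the false positives, which is exactly the tension described above.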
As the number of training iterations increased, the validation accuracy of the convolutional neural network quickly and smoothly ramped up to 0.9 after 3,000 iterations, while the fully connected neural network did not reach 0.9 until around 10,000 iterations.
In addition, the loss of the convolutional neural network was lower than that of the fully connected network, indicating that its gradient descent converged better toward a local minimum.
The convolutional neural network has many hyperparameters that can be tuned further, including but not limited to: the number of convolutional layers, number of fully connected layers, number and size of filters, number of hidden nodes, batch size, learning rate, max-pooling size, and dropout ratio.
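One common way to explore such a space is random search: sample configurations, train each, and keep the best. The sampler below uses the hyperparameters listed above; the candidate value ranges are illustrative choices, not values from the post:

```python
import random

random.seed(0)

# hyperparameter space; the candidate values are illustrative
space = {
    "n_conv_layers": [2, 3, 4],
    "n_fc_layers":   [1, 2, 3],
    "n_filters":     [16, 32, 64],
    "filter_size":   [3, 5],
    "n_hidden":      [128, 256, 512],
    "batch_size":    [32, 64, 128],
    "learning_rate": [1e-2, 1e-3, 1e-4],
    "pool_size":     [2],
    "dropout":       [0.3, 0.5],
}

def sample_config():
    """Draw one random configuration from the search space."""
    return {name: random.choice(values) for name, values in space.items()}

trials = [sample_config() for _ in range(5)]
for cfg in trials:
    print(cfg)
```

Each sampled configuration would then be trained and scored on the validation set, with the best-scoring one kept.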
In practice, transfer learning is another viable solution: it leverages the features learned by a pre-trained deep learning model (for example, GoogLeNet Inception v3) and applies them to a different dataset.
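The mechanics can be sketched with a toy stand-in: a frozen random projection plays the role of the pre-trained network, and only a new classification head is trained. Everything here (the projection, the synthetic data, the logistic head, the downscaled 20 x 20 input) is illustrative, not the post's setup:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for a frozen pre-trained feature extractor: a fixed random
# projection from (downscaled) pixels to 64-dim features. In a real
# transfer-learning setup this would be, e.g., the penultimate layer
# of a pre-trained Inception v3, left untouched during training.
W_frozen = rng.normal(0.0, 0.01, (400, 64))

def extract_features(images):
    return np.tanh(images @ W_frozen)

# Tiny synthetic "dataset": class-1 images are systematically brighter.
X = rng.random((40, 400))
y = np.repeat([0, 1], 20)
X[y == 1] += 2.0

feats = extract_features(X)

# Only the new logistic-regression head is trained; the extractor stays fixed.
w, b = np.zeros(64), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(feats @ w + b)))  # predicted P(class 1)
    g = p - y                                   # gradient of the log-loss
    w -= 0.1 * feats.T @ g / len(y)
    b -= 0.1 * g.mean()

acc = float(np.mean((p > 0.5) == y))
print(f"training accuracy of the new head: {acc:.2f}")
```

Because only the small head is trained, far less labeled data is needed than for training the whole network from scratch, which is why transfer learning suits modest medical-imaging datasets like this one.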
Deep 3D Convolutional Encoder Networks With Shortcuts for Multiscale Feature Integration Applied to Multiple Sclerosis Lesion Segmentation
Abstract: We propose a novel segmentation approach based on deep 3D convolutional encoder networks with shortcut connections and apply it to the segmentation of multiple sclerosis (MS) lesions in magnetic resonance images.
Our model is a neural network that consists of two interconnected pathways, a convolutional pathway, which learns increasingly more abstract and higher-level image features, and a deconvolutional pathway, which predicts the final segmentation at the voxel level.
We have evaluated our method on two publicly available data sets (MICCAI 2008 and ISBI 2015 challenges) with the results showing that our method performs comparably to the top-ranked state-of-the-art methods, even when only relatively small data sets are available for training.
Deep Neural Networks in Medical Imaging and Radiology
A Google TechTalk, 5/11/17, presented by Le Lu ABSTRACT: Deep Neural Networks in Medical Imaging and Radiology: Preventative and Precision Medicine Perspectives (the final version from GTC, ...)
Prof. Yoshua Bengio - Deep learning & Backprop in the Brain
Yoshua Bengio is a Canadian computer scientist, most noted for his work on artificial neural networks and deep learning. Bengio received his Bachelor of Science, Master of Engineering and PhD...
[MISS 2016] Ben Glocker - Deep Learning for Brain Lesion Segmentation
Lecture 3: Deep Learning for Brain Lesion Segmentation The presentation will cover recent work on using deep learning for brain lesion segmentation. We discuss a very efficient multi-scale,...
Imaging large-scale ensemble neural codes underlying learning and memory, Prof. Mark Schnitzer
27 May 2016, SwissTech Convention Center, Lausanne, Switzerland Website: thebrainforum.org A longstanding challenge in neuroscience is to understand how the dynamics of large populations...
Lecture 6: Machine learning on fMRI data. For science. Guest lecture from Nick Allgaier
Deep Learning From A to Z (Raphael Gontijo Lopes)
Deep Learning From A to Z: From the Basics of Machine Learning to Understanding TensorFlow Internals Slides: Presenter: Raphael Gontijo Lopes External Relations Officer..
DEF CON 24 - Clarence Chio - Machine Duping 101: Pwning Deep Learning Systems
Deep learning and neural networks have gained incredible popularity in recent years. The technology has grown to be the most talked-about and least well-understood branch of machine learning....
Andrej Karpathy, Research Scientist, OpenAI - RE•WORK Deep Learning Summit 2016 #reworkDL
This presentation took place at the RE•WORK Deep Learning Summit in San Francisco on 28-29 January 2016: #reworkDL Andrej Karpathy, Research..
Emin Orhan - When do neural networks learn ... (CCN 2017)
Presented at Cognitive Computational Neuroscience (CCN) 2017, held September 6-8, 2017