Artificial Intelligence in Dermatology

An aging patient population and a shortage of medical professionals have led to a worldwide focus on improving the efficiency of clinical services via information technology.

Clinical diagnoses of acute and chronic diseases, such as acute appendicitis [3] and Alzheimer disease [4], have been assisted via AI technologies (eg, support vector machines, classification trees, and artificial neural networks).

Treatment optimization is also achievable by AI [8]: patients with common but complex diseases attributable to multiple factors (eg, genetic, environmental, or behavioral), such as cardiovascular diseases, are more likely to benefit from more precise treatments guided by AI algorithms built on big data [8].

We included articles if they (1) focused on advanced AI (defined as AI encompassing a training or learning process to automate expert-comparable sophisticated tasks), (2) included at least one application to the diagnosis of a particular disease, (3) compared the performance of AI and human experts on specific clinical tasks, and (4) were written in English.

The characteristics comprised (1) first author and publication year, (2) AI technology, (3) classification and labeling, (4) data sources (including the sample sizes of the total sets, training sets, validation and/or tuning sets, and test sets), (5) training process, (6) internal validation methods, (7) human clinician reference, and (8) performance assessment.

This tool provides a domain-based approach that helps reviewers judge various types of risk of bias by scrutinizing the information reported in the reviewed articles; the judgment for each specific type of risk is then made on the basis of these pieces of supporting information.

The types of risk assessed in this review include (1) blinding of participants and personnel (performance bias), (2) blinding of outcome assessment (detection bias), (3) incomplete outcome data (attrition bias), and (4) selective reporting (reporting bias).

Following the systematic search process, 41,769 citations were retrieved from the database and 22,900 articles were excluded based on their titles and abstracts, resulting in 850 articles to be reviewed in detail.

Regarding their studied medical conditions, 3 studies could be categorized under ophthalmology, including diabetic retinopathy [11], macular degeneration [11], and congenital cataracts [12], whereas another 3 studies focused on onychomycosis [13] and skin lesions/cancers [14,15].

A convolutional neural network (CNN) was the advanced AI technology applied in all reviewed studies, with the exception of 1 study: González-Castro et al adopted support vector machine classifiers in their study [18].
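
To make the CNN approach concrete, here is a minimal, illustrative image-classifier sketch in PyTorch; the layer sizes, class count, and input resolution are arbitrary assumptions and do not reproduce the architecture of any reviewed study.

```python
# Minimal, illustrative CNN image classifier (PyTorch).
# Layer widths, class count, and image size are assumptions for the sketch,
# not the architecture of any study covered in this review.
import torch
import torch.nn as nn

class TinyLesionCNN(nn.Module):
    def __init__(self, num_classes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),   # RGB input
            nn.ReLU(),
            nn.MaxPool2d(2),                              # 224 -> 112
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                              # 112 -> 56
        )
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                      # global average pooling
            nn.Flatten(),
            nn.Linear(32, num_classes),                   # class logits
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# Forward pass on a dummy batch of 4 RGB images (224 x 224 pixels).
model = TinyLesionCNN(num_classes=3)
logits = model(torch.randn(4, 3, 224, 224))
print(logits.shape)  # torch.Size([4, 3])
```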

For instance, studies related to ophthalmological images [11,12] had differences in image sources (eg, ocular images [12] or optical coherence tomography [OCT]–derived images [11]) and, thus, the classification differed correspondingly (Table 1).

The other ophthalmological study [12] focusing on congenital cataracts employed a 3-stage training procedure (ie, identification, evaluation, and strategist networks) to establish a collaborative disease management system beyond only disease identification.

Furthermore, 2 studies [11,16] employed both internal and external validation methods via training and/or validating the effectiveness of their AI algorithms using images from their own datasets and external datasets.
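
To illustrate the distinction between internal and external validation, the sketch below trains a generic scikit-learn classifier on synthetic data; the dataset sizes, feature dimensions, and model choice are assumptions and are not taken from the reviewed studies.

```python
# Illustrative internal vs external validation on synthetic data (scikit-learn).
# Dataset sizes and the SVM model are assumptions for the sketch only.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# "Internal" dataset: features derived from the developers' own images.
X_internal = rng.normal(size=(500, 20))
y_internal = rng.integers(0, 2, size=500)

# "External" dataset: an independently collected cohort.
X_external = rng.normal(size=(200, 20))
y_external = rng.integers(0, 2, size=200)

clf = SVC(kernel="rbf")

# Internal validation: 5-fold cross-validation on the developers' own data.
internal_scores = cross_val_score(clf, X_internal, y_internal, cv=5)
print("internal CV accuracy:", internal_scores.mean())

# External validation: train on all internal data, then test on the external cohort.
clf.fit(X_internal, y_internal)
print("external test accuracy:", clf.score(X_external, y_external))
```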

Specifically, the quantity of training sets, validation (and tuning) sets, and test sets ranged from 211 to approximately 113,300, from 53 to approximately 14,163, and from 50 to 1942, respectively.

Performance indices used for comparison included diagnostic accuracy, weighted errors, sensitivity, specificity (and/or the area under the receiver operating characteristic curve [AUC]), and false-positive rate.

A total of 4 articles [11,12,15,17] adopted the accuracy (ie, the proportion of true results [both positives and negatives] among the total number of cases examined) to compare diagnostic performance between AI and humans.
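
The sketch below shows how these indices (accuracy, sensitivity, specificity, false-positive rate, and AUC) can be computed with scikit-learn; the labels and predicted probabilities are hypothetical and are not data from any reviewed study.

```python
# Computing the comparison metrics named above on hypothetical predictions.
import numpy as np
from sklearn.metrics import accuracy_score, confusion_matrix, roc_auc_score

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0])  # hypothetical ground-truth labels
y_score = np.array([0.9, 0.2, 0.8, 0.4, 0.3, 0.6, 0.7, 0.1, 0.95, 0.5])  # AI probabilities
y_pred = (y_score >= 0.5).astype(int)              # thresholded decisions

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()

accuracy = accuracy_score(y_true, y_pred)   # (TP + TN) / total cases
sensitivity = tp / (tp + fn)                # true-positive rate
specificity = tn / (tn + fp)                # true-negative rate
false_positive_rate = fp / (fp + tn)        # 1 - specificity
auc = roc_auc_score(y_true, y_score)        # area under the ROC curve

print(accuracy, sensitivity, specificity, false_positive_rate, auc)
```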

Esteva et al also found that AI achieved accuracy comparable with, or better than, that of their human counterparts (AI vs dermatologists: 72.1% [SD 0.9%] vs 65.8% using a 3-class disease partition and 55.4% [SD 1.7%] vs 54.2% using a 9-class disease partition) [15].

De Fauw et al [19] reported unweighted errors using 2 devices, and the results showed that their AI's performance was commensurate with that of retina specialists and generalizable to another OCT device type.

The number of false discoveries made by the AI was close to that made by expert and competent ophthalmologists with respect to image evaluation (AI vs expert or competent: 9 vs 5 or 11) and treatment suggestion (AI vs expert or competent: 5 vs 1 or 3) but was lower than that of novice ophthalmologists (5 vs 12 and 8 for image evaluation and treatment suggestion, respectively) [12].

The other study also found that the false-positive rate of their deep learning algorithm in nodule detection was close to the average level of thoracic radiologists (0.3 vs 0.25) [16].

Apart from false positives, Long et al also compared the number of missed detections between their AI and ophthalmologists, and their AI outperformed (ie, had fewer missed detections than) all ophthalmologists of varying expertise (expert, competent, and novice).

The time to interpret the tested images by AI and by human radiologists was reported by Rajpurkar et al [16]. The authors also compared AI and radiologists with respect to positive and negative predictive values, Cohen kappa, and F1 metrics (Table 2).
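
As a companion to the previous sketch, the following computes the additional metrics mentioned here (positive and negative predictive values, Cohen kappa, and F1) on hypothetical labels rather than the data reported by Rajpurkar et al.

```python
# Additional comparison metrics on hypothetical labels (scikit-learn).
import numpy as np
from sklearn.metrics import cohen_kappa_score, confusion_matrix, f1_score

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])  # hypothetical reference labels
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0])  # hypothetical AI predictions

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()

ppv = tp / (tp + fp)                       # positive predictive value (precision)
npv = tn / (tn + fn)                       # negative predictive value
kappa = cohen_kappa_score(y_true, y_pred)  # chance-corrected agreement
f1 = f1_score(y_true, y_pred)              # harmonic mean of precision and recall

print(ppv, npv, kappa, f1)
```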

Although neural network approaches generally require substantial data for training, recent research suggested that it may be feasible to apply AI to rare diseases [11,12] and, in particular circumstances, to settings where a large number of examples is not available.

Computer-assisted technologies facilitate the rapid detection of clinical findings of interest (eg, benign vs malignant lesions) based on image features (eg, tone and rim), resulting in consistent outputs.

AI-based classification of physical characteristics is reinforced during training on vast numbers of examples; this ability is consolidated and gradually approaches the discriminative performance of trained clinicians in appearance-based diagnoses such as those of skin diseases [15,21].

Although the recently promising self-learning abilities of AI may lead to additional prospects [22], the viability of such diagnostic processes is inevitably determined by human experts through cumulative clinical experience [23,24].

Its outstanding, expert-comparable performance saves a huge amount of time in clinical practice, which, in turn, alleviates the pressure of the long-established transition from novice clinician to expert.

1 Department of Dermatology, China-Japan Friendship Hospital, Beijing 100029, China
2 Graduate School, Peking Union Medical College and Chinese Academy of Medical Sciences, Beijing 100730, China
3 Department of Dermatology, The First Affiliated Hospital, Anhui Medical University, Hefei, Anhui 230032, China
4 Shanghai Wheat Color Intelligent Technology Company, LTD, Shanghai 200051, China
5 Department of Dermatology, Air Force General Hospital of People's Liberation Army, Beijing 100142, China
6 Department of Dermatology, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, Henan 450052, China

doi: 10.1097/CM9.0000000000000372. Received May 15, 2019. This is an open access article distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives License 4.0 (CC BY-NC-ND), which permits downloading and sharing the work provided it is properly cited.
