AI News, Deep Learning & Artificial Intelligence Solutions from NVIDIA

K-12 Teaching and Learning

Imagine an eighth-grade classroom in a public school where students are building wind turbines to generate electricity.

Unfortunately, too many schools continue to fail to provide what this diverse majority needs — access to the supportive, flexible learning opportunities that will develop the knowledge, skills, and mindsets necessary to achieve their goals.

Since our country works best when everyone has the chance to contribute, this waste of human potential weakens our society and democracy, leaving us less prepared to tackle the most pressing issues our nation faces today.

Our K-12 Teaching and Learning strategy works with educators, schools and their communities to learn what it takes to turn schools into places that empower and equip students for a lifetime of learning, and to reach their full potential. We support organizations, school systems and educators that are demonstrating the enormous potential of deeper learning.

Deep Learning Basics: Introduction and Overview - MIT

For more lecture videos visit our website or follow code tutorials on our GitHub repo.

INFO:
- Website: https://deeplearning.mit.edu
- GitHub: https://github.com/lexfridman/mit-dee...
- Slides: http://bit.ly/deep-learning-basics-sl...
- Playlist: http://bit.ly/deep-learning-playlist

OUTLINE:
- 0:00 - Introduction
- 0:53 - Deep learning in one slide
- 4:55 - History of ideas and tools
- 9:43 - Simple example in TensorFlow
- 11:36 - TensorFlow in one slide
- 13:32 - Deep learning is representation learning
- 16:02 - Why deep learning (and why not)
- 22:00 - Challenges for supervised learning
- 38:27 - Key low-level concepts
- 46:15 - Higher-level methods
- 1:06:00 - Toward artificial general intelligence

CONNECT:
- If you enjoyed this video, please subscribe to this channel.
- Twitter: https://twitter.com/lexfridman
- LinkedIn: https://www.linkedin.com/in/lexfridman
- Facebook: https://www.facebook.com/lexfridman
- Instagram: https://www.instagram.com/lexfridman

An Observational Study of Deep Learning and Automated Evaluation of Cervical Images for Cancer Screening

However, mainstays of cervical cancer screening programs in high-resource settings, including cervical cytology (Pap tests) and colposcopy, require infrastructure and extensively trained personnel that are lacking in most lower resource settings.

In search of programmatic simplicity and sustainable costs, authorities including the World Health Organization, the US President’s Emergency Plan for AIDS Relief, and the Indian government have endorsed screening of the cervix by visual inspection with acetic acid (VIA), in which acetic acid is applied to highlight precancerous or cancerous abnormalities (4–6), when more advanced methods are not feasible.

Machine learning-based approaches to cervical cancer screening have yielded promising early results but have lacked a good measurement of precancer, sufficient sample size, or prospective follow-up (Supplementary Table 1, available online) (14,15).

We applied a deep learning-based object detection algorithm [Faster R-CNN, or faster region-based convolutional neural network (16)] to cervical images taken during a National Cancer Institute (NCI) prospective epidemiologic study, with long follow-up and rigorously defined precancer endpoints, to develop a model that can identify cervical precancer.

Participants aged 18–94 years were screened at baseline and periodically for up to 7 years using multiple methods at intervals determined by their screening results and resultant estimated risk of precancer (mean number of visits = 3.4, SD = 2.5, approximately 90% of women with some follow-up) (17,18).

Each cervigram image was originally projected on a wall screen for magnification and classified by one of two highly experienced National Testing Laboratory Worldwide physician colposcopist evaluators as normal, atypical, positive for minor low-grade HPV-induced changes, or positive for precancer or cancer (the last two combined here).

To compensate, we performed transfer learning (28) by first initializing the CNN architecture with pretrained weights from a model trained with the ImageNet dataset (29), a database of millions of (noncervigram) images of all kinds of natural objects.

We also augmented the image data (30), artificially increasing the number of cervical images, by performing minor distortions to the original images including rotation, mirroring, shearing, and gamma transformation.
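As a rough illustration of this kind of augmentation (not the authors' actual pipeline, which also used shearing and would typically rely on an image-processing library), the sketch below generates mirrored, rotated, and gamma-transformed variants of an image array with NumPy:

```python
import numpy as np

def augment(image, gamma=0.8):
    """Generate simple augmented variants of an image array of shape (H, W, C).

    Illustrative sketch only: mirroring, 90-degree rotation, and a gamma
    transform. The gamma value here is an arbitrary example.
    """
    variants = {
        "original": image,
        "mirrored": image[:, ::-1, :],    # horizontal flip
        "rotated": np.rot90(image, k=1),  # 90-degree rotation
        # gamma transform: nonlinear intensity rescaling on a [0, 1] scale
        "gamma": (image.astype(float) / 255.0) ** gamma * 255.0,
    }
    return variants

# A toy 4x4 RGB "image" standing in for a cervigram
img = np.arange(48, dtype=np.uint8).reshape(4, 4, 3)
out = augment(img)
print(sorted(out))  # names of the four variants produced per input image
```

Each distortion preserves the diagnostic content of the image while presenting the network with a slightly different pixel pattern, which is the point of augmentation on a small dataset.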

The analysis first followed the architecture of the proposed method to evaluate the performance of the cervix region locator and of the visual evaluation algorithm independently, using subsets of the Guanacaste cohort images for model training that differed from the original.

We conducted the analyses presented here using cohort enrollment images from the 30% of women in the initial validation set plus enrollment images from the “leftover” women in the Guanacaste cohort with <CIN2 not chosen for either the training or initial validation set (Figure 1).

The main validation analysis, focused on images taken at cohort enrollment visits, evaluated accuracy of the automated visual evaluation algorithm compared with the originally performed baseline screening tests (cervicography, cytology) and HPV testing introduced for validation but not used as a screening test at that time.

For analyses requiring a categorical positive/negative result for automated visual evaluation, we chose the cutpoint in the continuous score distribution, specific to that age group in age-stratified analyses, that maximized Youden’s index (sensitivity + specificity – 1) (32).
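A minimal sketch of this cutpoint selection, applied to hypothetical scores and labels (the paper's actual scores and age strata are not reproduced here):

```python
import numpy as np

def youden_cutpoint(scores, labels):
    """Choose the score threshold maximizing Youden's J = sensitivity + specificity - 1.

    scores: continuous severity scores from the algorithm
    labels: 1 = precancer (CIN2+), 0 = <CIN2
    """
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels)
    best_j, best_t = -1.0, None
    for t in np.unique(scores):          # candidate thresholds = observed scores
        pred = scores >= t               # classify as positive at or above t
        sens = np.mean(pred[labels == 1])   # true-positive rate
        spec = np.mean(~pred[labels == 0])  # true-negative rate
        j = sens + spec - 1.0
        if j > best_j:
            best_j, best_t = j, t
    return best_t, best_j

# Hypothetical severity scores: perfectly separable at 0.4
scores = [0.1, 0.2, 0.35, 0.4, 0.7, 0.8, 0.9]
labels = [0,   0,   0,    1,   1,   1,   1]
t, j = youden_cutpoint(scores, labels)
print(t, j)  # threshold 0.4 achieves J = 1.0 on this toy data
```

In the age-stratified analyses the same maximization would be run separately on each age group's score distribution.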

Specifically, because the initial validation set was a random sample of the training-validation group, we could multiply the results for women in the validation set by the inverse of the sampling fraction (30%) to represent all women used for training and validation.
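The scaling itself is simple arithmetic; with a hypothetical count of 39 cases observed in the 30% validation sample:

```python
# Because the initial validation set was a 30% random sample of the
# training-validation group, a count observed in it is scaled by the
# inverse of the sampling fraction to represent the whole group.
sampling_fraction = 0.30
validation_cases = 39  # hypothetical count observed in the 30% sample
estimated_total = validation_cases / sampling_fraction
print(round(estimated_total))  # -> 130
```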

Cervigram images for which the automated visual evaluation algorithm result and the original evaluator interpretation disagreed were rereviewed without masking by an expert gynecologic oncologist and colposcopist (ME) to note any subjective patterns that might explain the discrepancies.

The additional positives by automated visual evaluation tended to occur among younger women (median age = 26.5 years, P = .06 by Wilcoxon test compared with the rest of the cases in the cohort, whose median age was 35 years).

In anticipation of designing screening programs, we considered three distinct age ranges: 18–24 years, 25–49 years, and 50+ years, with the intermediate age group of primary interest because it coincides with the majority (130/228) of CIN2-CIN3 cases and higher sensitivity (127/130, P < .001 compared with younger women) (Table 2).

We estimated that a single automated visual evaluation screening round targeting women at the prime screening ages of 25–49 years could identify 55.7% (127/228) of precancers (CIN2/CIN3/AIS) diagnosed cumulatively in the entire adult population.

To achieve this level of sensitivity in the entire population by a single round of screening women aged 25–49 years would require referring 11.0% (982/8906) of the entire population (and 18.0% [982/5460] of those aged 25–49 years) for treatment.
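These percentages follow directly from the counts reported above (Table 2 positives among women aged 25–49 years and the denominators quoted in the text):

```python
# Reproduce the population-level estimates quoted above from the reported counts.
true_positives_25_49 = 127    # CIN2+ cases flagged positive among women aged 25-49
all_precancers = 228          # CIN2+/AIS diagnosed cumulatively in the full cohort
screen_positives_25_49 = 982  # women aged 25-49 flagged positive (referred)
population_total = 8906       # entire screened population, as quoted in the text
age_group_total = 5460        # women aged 25-49

print(f"{100 * true_positives_25_49 / all_precancers:.1f}%")     # share of all precancers detected
print(f"{100 * screen_positives_25_49 / population_total:.1f}%") # referral rate, whole population
print(f"{100 * screen_positives_25_49 / age_group_total:.1f}%")  # referral rate, ages 25-49
```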

Table 2. Estimated comparison of automated visual evaluation performance by age group

Age group | Automated visual evaluation | CIN2+, No. | <CIN2, No. | Total No. | Age-specific sensitivity, % | Age-specific specificity, %
<25 y    | +     | 46  | 226  | 272  | 82.1 | 77.2
<25 y    | −     | 10  | 765  | 775  |      |
<25 y    | Total | 56  | 991  | 1047 |      |
25–49 y  | +     | 127 | 855  | 982  | 97.7 | 84.0
25–49 y  | −     | 3   | 4475 | 4478 |      |
25–49 y  | Total | 130 | 5330 | 5460 |      |
50+ y    | +     | 39  | 399  | 438  | 92.9 | 83.2
50+ y    | −     | 3   | 1969 | 1972 |      |
50+ y    | Total | 42  | 2368 | 2410 |      |
All ages | Total | 228 | 8689 | 8917 | —    | —

Review of enrollment images from cases with discrepant human vs automated visual evaluations (and a random tenth of discrepant noncases) revealed that many of the automated visual evaluation-positive/cervigram-negative cases of CIN2+ (additional true positives) had suboptimal images (eg, poor focus or washed-out color during scanning of film) or obstructing vaginal folds or blood.

Table 3. Severity scores from the automated visual evaluation algorithm and screening results, from the enrollment visit of the Guanacaste cohort study, for cases of invasive cancer*

Years to diagnosis | Enrollment age, y | HPV type result | Pap smear result | Cervigram result | Algorithm severity score
Enrollment | 21 | 16, 52     | Cancer        | Cancer     | Training
Enrollment | 26 | 16, 18, 51 | Normal        | High-grade | Training
Enrollment | 29 | 18, 31     | HSIL (CIN2)   | High-grade | Training
Enrollment | 34 | 16         | Microinvasive | High-grade | Training
Enrollment | 35 | 31, 45     | Microinvasive | Cancer     | 0.98
Enrollment | 38 | 16         | HSIL (CIN2)   | Cancer     | Training
Enrollment | 41 | 16         | ASC-US        | Cancer     | 0.78
Enrollment | 47 | 18         | Microinvasive | Cancer?    | Training
Enrollment | 54 | 16         | Microinvasive | Cancer?    | Training
Enrollment | 71 | 53, 82v    | Microinvasive | Negative   | 0.13
Enrollment | 73 | 33         | Microinvasive | Cancer     | 0.96
Enrollment | 74 | 35         | HSIL (CIN3)   | Negative   | 0.35
Enrollment | 42 | Equivocal  | Microinvasive | Cancer?    | 0.84
1 y  | 37 | 18         | Normal        | Negative | Training
1 y  | 61 | Negative   | Normal        | Negative | Training
2 y  | 23 | 16         | ASC-US        | Low-grade | 0.87
2 y  | 64 | 45, 51, 58 | Normal        | Negative | 0.30
3 y  | 49 | 16         | ASC-US        | Negative | Training
5 y  | 29 | 18         | Normal        | Negative | Training
5 y  | 36 | 16         | Normal        | Negative | 0.71
6 y  | 38 | Negative   | HSIL (CIN2)   | Negative | Training
6 y  | 45 | 31         | Normal        | Missing  | Missing
6 y  | 48 | Negative   | Normal        | Negative | Training
6 y  | 66 | 56         | Microinvasive | Negative | 0.38
7 y  | 50 | Negative   | Normal        | Negative | Training
7 y  | 63 | Negative   | Normal        | Negative | Training
8 y  | 35 | Negative   | Normal        | Negative | 0.02
8 y  | 52 | 16         | Normal        | Negative | 0.70
8 y  | 63 | 16         | Normal        | Negative | Training
8 y  | 81 | Negative   | Normal        | Negative | 0.34
9 y  | 28 | 18         | Normal        | Negative | Training
10 y | 65 | 16         | HSIL (CIN3)   | Negative | Training
10 y | 75 | 85         | Microinvasive | Negative | Training
11 y | 75 | Negative   | Normal        | Negative | 0.16
12 y | 68 | Negative   | Normal        | Negative | 0.02
13 y | 56 | Negative   | Normal        | Negative | 0.75
15 y | 78 | Missing    | Missing       | Atypical | Training
16 y | 52 | Negative   | Normal        | Negative | Training
17 y | 21 | Negative   | Inadequate    | Atypical | 0.04

*ASC-US = atypical squamous cells of undetermined significance; HSIL = high-grade squamous intraepithelial lesion.

Therefore, if primary HPV testing using self-sampling matched the performance of the early HPV assays we used in 1993–1994, following it with the automated visual evaluation algorithm restricted to HPV-positive women could achieve the same aggregate sensitivity while more than halving the number of women requiring cervical examinations.

The performance surpassed colposcopist evaluator interpretations of the same images (cervicography) and compared favorably to conventional Pap smears (and alternative kinds of cytology) while matching the screening accuracy of an early version of PCR-based HPV testing.

Even when restricted to the age group at which risk of precancer peaks, achieving nearly perfect sensitivity for cases occurring up to 7 years after examination generated a large number of false positives among screened noncases.

Another possibility for improving specificity while retaining high sensitivity might be combination screening (commonly called “cotesting”) with automated visual evaluation and HPV testing, when such tests become more affordable and more widely available than they currently are.

The minimal required equipment (in addition to the algorithm software) would be acetic acid (vinegar), disposable specula (or sterilization equipment), and the imaging system, such as a dedicated smart phone or digital camera.

So, rather than focusing on training for the subtleties of visual appearance, which is a difficult skill, the training for automated visual evaluation could highlight more easily acquired skills of improving image quality and lighting and removing obstructions.

Rather than resurrecting obsolete film camera technology to achieve the observed results, we are currently working to transfer automated visual evaluation to images from contemporary phone cameras and other digital image capture devices to create an accurate and affordable point-of-care screening method that would support the recently announced World Health Organization initiative to accelerate cervical cancer control.

Saving Energy Consumption With Deep Learning

Discover how big data, GPUs, and deep learning can enable smarter decisions about making your building more energy-efficient with AI startup Verdigris. Explore ...

The Deep Learning Revolution

Deep learning is the fastest-growing field in artificial intelligence, helping computers make sense of infinite ..

Why Is Deep Learning Hot Right Now?

Deep learning is the fastest-growing field in artificial intelligence (AI), helping computers make ..

Research at NVIDIA: AI Reconstructs Photos with Realistic Results

Researchers from NVIDIA, led by Guilin Liu, introduced a state-of-the-art deep learning method that can edit images or reconstruct a corrupted image, one that ...

NVIDIA - Deep Learning Demystified - www.DeepLearning.love

Artificial Intelligence (AI) is solving problems that seemed well beyond our reach just a few years back. Using deep learning, the fastest growing segment of AI, ...

Research at NVIDIA: Transforming Standard Video Into Slow Motion with AI

Researchers from NVIDIA developed a deep learning-based system that can produce high-quality slow-motion videos from a 30-frame-per-second video, ...

Best Laptop for Machine Learning

What kind of laptop should you get if you want to do machine learning? There are a lot of options out there, and in this video I'll describe the components of an ...

NVIDIA's AI Makes Amazing Slow-Mo Videos

The paper "Super SloMo: High Quality Estimation of Multiple Intermediate Frames for Video Interpolation" is available here: ...

Why Deep Learning Now? | AI Revolution Documentary


What role does Deep Learning play in Self Driving Cars?

Deep learning and self-driving cars: Autonomous Drive is here and ..