AI News, Deep Learning & Artificial Intelligence Solutions from NVIDIA

Deep learning in Space

In the past few months I have been working on a machine learning application that assists satellite docking using a simple camera video feed.

(The ‘tip’ of the satellite is actually part of its docking mechanism.) Given those 3 (or more) points and the 3D model of the satellite, I could then reconstruct the satellite’s pose and its position relative to the camera.
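That reconstruction step is the classic Perspective-n-Point (PnP) problem. As a minimal sketch (not the blog’s actual code), here is how it might look with OpenCV’s solvePnP; the point coordinates and camera intrinsics below are hypothetical placeholders, and OpenCV’s solvers generally want at least four correspondences:

    import numpy as np
    import cv2

    # Hypothetical 3D points on the satellite model (model frame, metres).
    object_points = np.array([
        [0.0, 0.0, 0.0],   # e.g. the docking-mechanism 'tip'
        [1.0, 0.0, 0.0],
        [1.0, 1.0, 0.0],
        [0.0, 1.0, 0.0],
    ], dtype=np.float64)

    # Their detected 2D locations in the video frame (pixels).
    image_points = np.array([
        [300.0, 220.0],
        [420.0, 225.0],
        [415.0, 340.0],
        [305.0, 335.0],
    ], dtype=np.float64)

    # Placeholder pinhole intrinsics (focal length, principal point).
    camera_matrix = np.array([
        [800.0,   0.0, 320.0],
        [  0.0, 800.0, 240.0],
        [  0.0,   0.0,   1.0],
    ])
    dist_coeffs = np.zeros(5)  # assume an undistorted feed

    # rvec/tvec express the satellite's pose relative to the camera.
    ok, rvec, tvec = cv2.solvePnP(object_points, image_points,
                                  camera_matrix, dist_coeffs)
    if ok:
        rotation, _ = cv2.Rodrigues(rvec)  # 3x3 rotation matrix
        print("Rotation:\n", rotation)
        print("Translation:", tvec.ravel())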

(Actually, I can’t take credit for this work.) For linear motions of the satellite, we simply had to annotate the start and end positions, and CVAT would interpolate and add all the labels in between.

(I wanted to use multi-threading but… in Python-land threads are not cool, and the GIL said so.) After writing the TFRecords file, I put together a script to benchmark the time it takes to read the 13,198 training images from the TFRecords file versus simply reading each image from disk and decoding it on the fly.
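The script itself isn’t reproduced in this excerpt, so here is a minimal sketch of such a benchmark using the tf.data API; the file paths are hypothetical, and it assumes each record stores raw JPEG bytes under an 'image' key:

    import time
    import tensorflow as tf

    TFRECORDS_PATH = "train.tfrecords"   # hypothetical paths
    IMAGE_GLOB = "train_images/*.jpg"

    def parse_example(serialized):
        # Assumes each record stores the raw JPEG bytes under 'image'.
        features = tf.io.parse_single_example(
            serialized, {"image": tf.io.FixedLenFeature([], tf.string)})
        return tf.io.decode_jpeg(features["image"])

    def time_dataset(dataset, label):
        start = time.perf_counter()
        count = sum(1 for _ in dataset)
        print(f"{label}: {count} images in {time.perf_counter() - start:.1f}s")

    # Sequential read from the TFRecords file.
    time_dataset(tf.data.TFRecordDataset(TFRECORDS_PATH).map(parse_example),
                 "TFRecords (sequential)")

    # Reading each image file from disk and decoding it on the fly.
    time_dataset(tf.data.Dataset.list_files(IMAGE_GLOB)
                 .map(lambda p: tf.io.decode_jpeg(tf.io.read_file(p))),
                 "Raw files from disk")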

The timing outputs below show that sequentially reading from a TFRecords file is slower than reading each image from disk and decoding them on the fly.

By simply setting the num_parallel_calls argument of tf.data.Dataset.map when parsing the dataset, reading those very same images from the TFRecords file in parallel is twice as fast as the sequential version.
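Concretely, the change is a single argument to map. Reusing the hypothetical record layout from the sketch above (tf.data.AUTOTUNE assumes TF 2.4 or newer; older versions spell it tf.data.experimental.AUTOTUNE):

    import tensorflow as tf

    def parse_example(serialized):
        feats = tf.io.parse_single_example(
            serialized, {"image": tf.io.FixedLenFeature([], tf.string)})
        return tf.io.decode_jpeg(feats["image"])

    dataset = tf.data.TFRecordDataset("train.tfrecords")

    # Sequential: records are parsed one at a time.
    sequential = dataset.map(parse_example)

    # Parallel: tf.data fans the parse calls out across threads; the work
    # happens in TensorFlow's C++ runtime, so the GIL is not a bottleneck.
    parallel = dataset.map(parse_example, num_parallel_calls=tf.data.AUTOTUNE)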

Here are the timing outputs from the script when run on my old 2011 iMac (2.7 GHz Intel Core i5):

[Timing figures not preserved in this excerpt: sequential parsing of 13,198 images vs. parallel parsing of 13,198 images.]

Recently, I completed Andrew Ng’s Deep Learning Specialization on Coursera.

(Ng is pronounced a bit like the n-sound at the end of ‘song’.) These five courses cover the core concepts of Deep Learning and neural networks, including Convolutional Networks, RNNs, LSTMs, Adam, Dropout, BatchNorm, Xavier/He initialization, and more.

The specialization also details practical case studies from healthcare, autonomous driving, sign language reading, music generation, and natural language processing.

I won’t be explaining the full working principles and details of the original YOLO paper in this post as there are so many excellent blog posts out there doing just that.

PaintBot: A deep learning student that trains then mimics old masters

'Although there are existing filters which can transform digital photographs to make them similar to a painting,' said the report, 'the way that PaintBot's compositions are built up from thousands of individual brushstrokes makes the AI's works more realistic.'

The paper stated: 'We demonstrate that our painting agent can learn an effective policy with a high dimensional continuous action space comprising pen pressure, width, tilt, and color, for a variety of painting styles.'
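To make ‘high dimensional continuous action space’ concrete: one brushstroke action could be encoded roughly as below. The paper’s exact parameterisation isn’t given in this excerpt, so the fields and [0, 1] ranges here are illustrative only:

    import numpy as np

    def random_stroke_action(rng=None):
        # One illustrative continuous stroke action (not PaintBot's
        # actual encoding); all values normalised to [0, 1].
        rng = rng or np.random.default_rng()
        return {
            "pressure": rng.uniform(),        # pen pressure
            "width":    rng.uniform(),        # stroke width
            "tilt":     rng.uniform(),        # pen tilt
            "color":    rng.uniform(size=3),  # RGB colour
        }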

The AI would practice reproducing reference paintings, said Randall, 'which it would then compare with the original work to see how similar the two were and if it was improving its imitation of the artist's style.'

'To accelerate training convergence, we adopt a curriculum learning strategy, whereby reference patches are sampled according to how challenging they are using the current policy.'
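The excerpt doesn’t spell out the sampling rule, but a common way to implement difficulty-based curriculum sampling is to weight each reference patch by the policy’s current loss on it, for example:

    import numpy as np

    def sample_reference_patch(patches, losses, temperature=1.0, rng=None):
        # Illustrative difficulty-weighted sampling, not the exact rule
        # from the PaintBot paper: higher current loss => sampled more often.
        rng = rng or np.random.default_rng()
        scores = np.asarray(losses, dtype=np.float64) / temperature
        probs = np.exp(scores - scores.max())  # numerically stable softmax
        probs /= probs.sum()
        return patches[rng.choice(len(patches), p=probs)]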

They said their approach learns without human supervision 'and does not degrade after thousands of strokes', which allows it to handle a large, dense reference image.

While this is all about AI as art-maker, not humans as art-makers, the Daily Mail pointed out that the path to making art actually has one similarity between the two, and that is apprenticeship: 'Much like the pupils of the old masters, the new AI meticulously studies the work of virtuoso painters like Vermeer and Van Gogh and learns to reproduce their works.'

Industrial AI Enabled by Deep Learning with Baker Hughes and NVIDIA

Baker Hughes is redefining how the oil and gas industry is approaching industrial A.I. to improve machine efficiency, worker safety and much more. With A.I. ...

Research at NVIDIA: AI Reconstructs Photos with Realistic Results

Researchers from NVIDIA, led by Guilin Liu, introduced a state-of-the-art deep learning method that can edit images or reconstruct a corrupted image, one that ...

GauGAN: Changing Sketches into Photorealistic Masterpieces

A deep learning model developed by NVIDIA Research turns rough doodles into highly realistic scenes using generative adversarial networks (GANs). Dubbed ...

Saving Energy Consumption With Deep Learning

Discover how big data, GPUs, and deep learning can enable smarter decisions about making your building more energy-efficient with AI startup Verdigris. Explore ...

Research at NVIDIA: Transforming Standard Video Into Slow Motion with AI

Researchers from NVIDIA developed a deep learning-based system that can produce high-quality slow-motion videos from a 30-frame-per-second video, ...

Research at NVIDIA: The First Interactive AI Rendered Virtual World

This AI breakthrough will allow developers and artists to create new interactive 3D virtual worlds for automotive, gaming or virtual reality by training models on ...

The Deep Learning Revolution

Deep learning is the fastest-growing field in artificial intelligence, helping computers make sense of infinite ..

NVIDIA's Largest Ever GPU for Artificial Intelligence - DGX-2 512GB, 2 PetaFLOPS

Recorded March 27th, 2018. The largest GPU ever built: NVIDIA DGX-2, 2 PetaFLOPS, 10 kW, 350 lbs. Presented by CEO Jensen Huang at GTC 2018.

CUDA Explained - Why Deep Learning uses GPUs

Artificial intelligence with PyTorch and CUDA. Let's discuss how CUDA fits in with PyTorch, and more importantly, why we use GPUs in neural network ...
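As a minimal illustration of that fit (a sketch, not the video’s own code): PyTorch selects CUDA via a device argument, and a large matrix multiply is the canonical case where the GPU pays off:

    import torch

    # Run on the GPU when CUDA is available, otherwise fall back to CPU.
    device = "cuda" if torch.cuda.is_available() else "cpu"

    a = torch.randn(4096, 4096, device=device)
    b = torch.randn(4096, 4096, device=device)
    c = a @ b  # dispatched as a CUDA kernel when on the GPU

    print(device, c.shape)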

Why Is Deep Learning Hot Right Now?

Deep learning is the fastest-growing field in artificial intelligence (AI), helping computers make ..