# AI News, Deep Learning & Artificial Intelligence Solutions from NVIDIA

- On 11 July 2019

## 100+ Basic Deep Learning Interview Questions and Answers

I want to see that a candidate has a good understanding of the projects they’ve actually done, so I’ll look for evidence that there was a process in building up the neural-network solution, that they know how to run sanity checks, and that they understand the nuances of how they arrived at their final solution.


## Part 2: Deep Learning from the Foundations

Welcome to Part 2: Deep Learning from the Foundations, which shows how to build a state-of-the-art deep learning model from scratch.

It takes you all the way from the foundations, implementing matrix multiplication and back-propagation, through high-performance mixed-precision training, to the latest neural network architectures and learning techniques, and everything in between.

It covers many of the most important academic papers that form the foundations of modern deep learning, using “code-first” teaching, where each method is implemented from scratch in Python and explained in detail (in the process, we’ll discuss many important software engineering techniques too).

Along the way, we will practice implementing papers, an important skill to master when building state-of-the-art models.

In the remainder of this post I’ll provide a quick summary of some of the topics you can expect to cover in this course—if this sounds interesting, then click on lesson 8 in the “Part 2” section of the sidebar over on the left.

We’ll gradually refactor and accelerate our first, pure-Python matrix multiplication, learning about broadcasting and Einstein summation in the process.
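The progression can be sketched like this (plain NumPy, not the course’s notebook code): the same multiplication written as an explicit triple loop, as a broadcast-and-sum, and as a one-line einsum:

```python
import numpy as np

def matmul_loops(a, b):
    # Naive pure-Python triple loop: one scalar multiply-add at a time.
    rows, inner, cols = len(a), len(b), len(b[0])
    out = [[0.0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            for k in range(inner):
                out[i][j] += a[i][k] * b[k][j]
    return out

a = [[1.0, 2.0], [3.0, 4.0]]
b = [[5.0, 6.0], [7.0, 8.0]]
A, B = np.array(a), np.array(b)

# Broadcasting: pair every row of A with every column of B,
# then sum over the shared inner axis.
bcast = (A[:, :, None] * B[None, :, :]).sum(axis=1)

# Einstein summation expresses the same contraction in one line.
eins = np.einsum('ik,kj->ij', A, B)
```

Each version computes the identical result; the vectorized forms simply push the loops down into optimized C code.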

We’ll then use this to create a basic neural net forward pass, including a first look at how neural networks are initialized (a topic we’ll be going into in great depth in the coming lessons).
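A minimal sketch of such a forward pass, using NumPy with a Kaiming-style initialization (the layer sizes here are arbitrary, chosen just for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def kaiming(n_in, n_out):
    # Kaiming-style init: scale by sqrt(2 / n_in) so activation
    # variance stays roughly stable through ReLU layers.
    return rng.normal(0.0, (2.0 / n_in) ** 0.5, size=(n_in, n_out))

def linear(x, w, b):
    return x @ w + b

def relu(x):
    return np.maximum(x, 0.0)

x = rng.normal(size=(64, 100))           # a batch of 64 inputs
w1, b1 = kaiming(100, 50), np.zeros(50)
w2, b2 = kaiming(50, 10), np.zeros(10)

h = relu(linear(x, w1, b1))              # hidden activations
out = linear(h, w2, b2)                  # raw scores, one per class
```

With a poor initialization (say, unit-variance weights), the activations grow or shrink layer by layer, which is exactly the topic the later lessons dig into.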

Next, we look briefly at loss functions and optimizers, including implementing softmax and cross-entropy loss (and the logsumexp trick).
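The logsumexp trick fits in a few lines of plain Python (the function names here are illustrative): subtracting the maximum before exponentiating keeps `exp()` from overflowing on large logits.

```python
import math

def logsumexp(xs):
    # Shift by the max so every exponent is <= 0 and exp() never overflows.
    m = max(xs)
    return m + math.log(sum(math.exp(x - m) for x in xs))

def cross_entropy(logits, target):
    # Negative log-softmax of the target logit:
    # -(logits[target] - logsumexp(logits))
    return logsumexp(logits) - logits[target]

# Naively computing exp(1000) would overflow a float64;
# the shifted version stays finite.
loss = cross_entropy([1000.0, 1001.0, 999.0], target=1)
```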

Finally, we develop a new kind of normalization layer to overcome the problems with existing normalization approaches, compare it to previously published methods, and see some very encouraging results.

We’ll look closely at each step. Next up, we build a new StatefulOptimizer class, and show that nearly all optimizers used in modern deep learning training are just special cases of this one class.
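The idea can be sketched as follows. The names and structure are illustrative and much simpler than the course’s actual implementation, but they show the key point: the optimizer just holds per-parameter state and applies a list of stepper functions, and SGD with momentum falls out as one particular stepper.

```python
class StatefulOptimizer:
    def __init__(self, params, steppers):
        self.params = params            # each param: {'value': ..., 'grad': ...}
        self.steppers = steppers        # update rules applied in order
        self.state = [{} for _ in params]  # per-parameter state (e.g. momentum)

    def step(self):
        for p, state in zip(self.params, self.state):
            for stepper in self.steppers:
                stepper(p, state)

def momentum_step(p, state, lr=0.1, mom=0.9):
    # SGD with momentum is just one choice of stepper; plain SGD,
    # Adam, etc. differ only in what state they keep and how they use it.
    state['v'] = mom * state.get('v', 0.0) + p['grad']
    p['value'] -= lr * state['v']

param = {'value': 1.0, 'grad': 0.5}
opt = StatefulOptimizer([param], [momentum_step])
opt.step()   # value moves from 1.0 to 1.0 - 0.1 * 0.5 = 0.95
```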

We develop a new GPU-based data augmentation approach which we find speeds things up quite dramatically, and allows us to then add more sophisticated warp-based transformations.

In lesson 12 we implement some really important training techniques, all using callbacks. We also implement xresnet, a tweaked version of the classic ResNet architecture that provides substantial improvements.

Finally, we show how to implement ULMFiT from scratch, including building an LSTM RNN, and looking at the various steps necessary to process natural language data to allow it to be passed to a neural network.

Chris shares insights on Swift’s development history, and why he thinks it’s a great fit for deep learning and numeric programming more generally.

Thanks to the compilation and language design, basic code runs very fast indeed: about 8,000 times faster than Python in the simple example Chris showed in class.

He shows how to use this to quickly and easily get high-performance code by interfacing with existing C libraries, using SoX audio processing, and VIPS and OpenCV image processing, as complete working examples.

So be sure to study the notebooks to see lots more Swift tricks… We’ll be releasing even more lessons in the coming months and adding them to an attached course we’ll be calling Applications of Deep Learning.


## Deep Learning Must Move Beyond Cheap Parlor Tricks

Most importantly, they are why AI companies shroud their algorithms in so much marketing hype and hyperbole, and so carefully constrain the conditions under which they are used. The goal is to maintain the mystique, mystery and magic that sustains the public’s interest in deep learning, all of which would be at risk if the public truly understood just how limited today’s algorithms really are.

This refusal to accept the enormous limitations of today’s correlative deep learning is extremely harmful to the future of the field: it encourages companies to embrace dangerously brittle and unstable algorithms that are frequently far worse than the classical, hand-coded algorithms they replace, creating a crisis of confidence in the AI revolution.

Putting this all together, as I noted last year, “as we ascribe our own aspirations to mundane piles of code, anthropomorphizing them into living breathing silicon intelligences, rather than merely statistical representations of patterns in data, we lose track of their very real limitations and think in terms of utopian hyperbole rather than the very real risk calculus needed to ensure their safe and robust integration into our lives.” In the end, for deep learning to move beyond cheap parlor tricks toward solutions that can truly advance society, we must move beyond today’s correlative approaches and simplistic one-trick ponies toward algorithms that can actually reason semantically about the world.


## Learning from sequential data — Recurrent Neural Networks

Don’t get too excited: in this next series of posts we are not going to create an omnipotent artificial intelligence. Rather, we will create a simple chatbot that, given some input information and a question about that information, responds to yes/no questions about what it has been told.

ANNs are machine learning models that try to mimic the functioning of the human brain, whose structure is built from a large number of neurons connected to one another, hence the name “Artificial Neural Networks”. The simplest ANN model is composed of a single neuron, and goes by the Star Trek-sounding name Perceptron.

It was invented in 1957 by Frank Rosenblatt, and it consists of a single neuron, which takes the weighted sum of its inputs (in a biological neuron, these would be the dendrites), applies a mathematical function to it, and outputs the result (the equivalent of a biological neuron’s axon).
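A perceptron of this kind fits in a few lines; the AND-gate weights below are hand-picked for illustration (Rosenblatt’s original used a learning rule to find them):

```python
def perceptron(inputs, weights, bias):
    # Weighted sum of the inputs (the "dendrites"),
    # then a step activation (the output along the "axon").
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if total > 0 else 0

# With these weights the unit only fires when both inputs are 1,
# so it behaves like a logical AND gate.
and_weights, and_bias = [1.0, 1.0], -1.5
```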

When networks are built in this way, the neurons that don’t belong to the input or output layers are considered part of the hidden layers, reflecting one of the main characteristics of an ANN: they are almost black-box models. We understand the mathematics behind what happens, and have some intuition of what goes on inside the black box, but if we take the output of a hidden layer and try to make sense of it, we will probably scratch our heads and get nowhere.

These computing capabilities and the massive increases in the amount of available data to train our models with have allowed us to create larger, deeper neural networks, which just perform better than smaller ones.

For traditional machine learning algorithms (linear or logistic regression, SVMs, Random Forests and so on), performance increases as we train the models with more data, but only up to a certain point, after which feeding the model further data stops improving performance.

Another important personality in the field, Jeff Dean (one of the instigators of the adoption of deep learning within Google), says the following about deep learning: “When you hear the term deep learning, just think of a large deep neural net.”

This means that, in the diagram of a larger neural network, the weights are present in every one of the black edges, taking the output of one neuron, multiplying it, and then passing it as input to the neuron that the edge connects to.

When we train a neural network (training is the ML expression for making it learn), we feed it a set of known data (in ML this is called labelled data), have it predict a characteristic that we know about that data (like whether an image shows a dog or a cat), and then compare the predicted result to the actual result.

Now that we know what artificial neural networks and deep learning are, and have a slight idea of how neural networks learn, let’s start looking at the type of network we will use to build our chatbot: Recurrent Neural Networks, or RNNs for short.

This kind of data includes time series (a list of values of some parameter over a certain period of time), text documents (which can be seen as sequences of words), and audio (which can be seen as a sequence of sound frequencies).

The problem with RNNs is that, as time passes and they are fed more and more new data, they start to “forget” the earlier data they have seen, as its influence gets diluted among the new data, the activation function’s transformations, and the weight multiplications.
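This dilution can be illustrated with a toy one-unit RNN (the weights here are arbitrary, chosen only for the demonstration): an early input’s contribution to the hidden state shrinks with every step of recurrent multiplication and squashing.

```python
import math

def rnn_final_state(inputs, w_h=0.5, w_x=1.0):
    # One-unit recurrent network: h_t = tanh(w_h * h_{t-1} + w_x * x_t)
    h = 0.0
    for x in inputs:
        h = math.tanh(w_h * h + w_x * x)
    return h

# A large first input followed by silence: the longer the silence,
# the less of the first input survives in the hidden state.
short = rnn_final_state([5.0] + [0.0] * 2)
long = rnn_final_state([5.0] + [0.0] * 20)
```

After 20 empty steps the hidden state is nearly zero: the network has effectively “forgotten” the first input, which is exactly the limitation that gated architectures like LSTMs were designed to address.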


## Building Better Deep Learning Requires New Approaches Not Just Bigger Data

In its rush to solve all the world’s problems through deep learning, Silicon Valley is increasingly embracing the idea of AI as a universal solver that can be rapidly adapted to any problem in any domain simply by taking a stock algorithm and feeding it relevant training data.

In turn, the hand-coded era’s focus on domain expertise, ethnographic codification and deeply understanding a problem domain has given way to parachute programming in which deep learning specialists take an off-the-shelf algorithm, shove in a pile of training data, dump out the resulting model and move on to the next problem.

Programmers would work hand-in-hand with subject matter experts, deeply immersing themselves in the field, studying human practitioners with the precision and detail of an ethnographic study, and even performing the task themselves to learn its complexities and nuances.

In contrast, today’s deep learning practitioners adhere to the utopian dream of galleries of canned models that can be simply plunked from a shelf, shoved full of raw training data from watching humans perform the task, and then simply dropped in to take over, without their programmers needing to know a single thing about the problem the model is designed to solve.

At the same time, the domain specialists creating those training and testing datasets typically have little understanding of how deep learning works and so in many cases may simply reuse the same examples they train human analysts on, meaning test datasets may cover only narrow slices of the expected problem domain under the assumption that human intuition will fill in the rest.

By virtue of their much higher stakes, driverless car designers learned long ago the need to revert to this kind of immersive and collaborative design cycle, mixing hand-guided deep learning models with hand-coded algorithms and deep domain immersion.

- On 2 March 2021

**Research at NVIDIA: AI Reconstructs Photos with Realistic Results**

Researchers from NVIDIA, led by Guilin Liu, introduced a state-of-the-art deep learning method that can edit images or reconstruct a corrupted image, one that ...

**Industrial AI Enabled by Deep Learning with Baker Hughes and NVIDIA**

Baker Hughes is redefining how the oil and gas industry is approaching industrial AI to improve machine efficiency, worker safety and much more. With AI ...

**GauGAN: Changing Sketches into Photorealistic Masterpieces**

A deep learning model developed by NVIDIA Research turns rough doodles into highly realistic scenes using generative adversarial networks (GANs). Dubbed ...

**Research at NVIDIA: New Core AI and Machine Learning Lab**

NVIDIA's new research lab, led by computer scientist Anima Anandkumar, will push the boundaries of machine learning techniques and take AI to the next level ...

**Embedded Deep Learning with NVIDIA Jetson**

Watch this free webinar to get started developing applications with advanced AI and computer vision using NVIDIA's deep learning tools, including TensorRT ...

**Research at NVIDIA: Transforming Standard Video Into Slow Motion with AI**

Researchers from NVIDIA developed a deep learning-based system that can produce high-quality slow-motion videos from a 30-frame-per-second video, ...

**CUDA Explained - Why Deep Learning uses GPUs**

Artificial intelligence with PyTorch and CUDA. Let's discuss how CUDA fits in with PyTorch, and more importantly, why we use GPUs in neural network ...

**Saving Energy Consumption With Deep Learning**

Discover how big data, GPUs, and deep learning, can enable smarter decisions on making your building more energy-efficient with AI startup, Verdigris. Explore ...

**Jetson Nano: Vision Recognition Neural Network Demo**

Jetson Nano performing vision recognition on a live video stream using a deep neural network (DNN). This video is based on the "Hello AI World" demo ...

**NVIDIA and SAP Bring AI to the Enterprise**

Jim McHugh, NVIDIA VP & GM, and Markus Noga, VP Machine Learning at SAP, discuss how NVIDIA and SAP are bringing AI to the enterprise. Learn more: ...