# A primer on universal function approximation with deep learning (in Torch and R)

- On Wednesday, October 17, 2018


Clarke famously stated that “any sufficiently advanced technology is indistinguishable from magic.” No current technology embodies this statement more than neural networks and deep learning.

This primer sheds some light on how neural networks work, hopefully adding to the wonder while reducing the fear.

Recall that, given enough terms, a Taylor series can approximate a smooth function to a desired level of precision about a given point, while a Fourier series can approximate any periodic function.
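To make this concrete, here is a small Python sketch (not from the original post) showing a truncated Taylor series for sin(x) about 0 converging to the true value near the expansion point:

```python
import math

def taylor_sin(x, n_terms):
    """Approximate sin(x) with the first n_terms of its Taylor series about 0:
    sin(x) = x - x^3/3! + x^5/5! - ..."""
    return sum((-1) ** k * x ** (2 * k + 1) / math.factorial(2 * k + 1)
               for k in range(n_terms))

# Near the expansion point, a handful of terms is already very accurate.
approx = taylor_sin(1.0, 7)
exact = math.sin(1.0)
```

Far from the expansion point the truncated series degrades quickly, which is exactly why the "about a given point" caveat matters.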

For the sake of argument, let’s attempt to approximate this function with a network anyway.

In its most basic form, a one-layer neural network with $n$ input nodes and one output node is described by $y = Wx + b$, where $x$ is the input, $b$ is the bias, and $W$ is the weight matrix.
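With a single output node this is just a weighted sum plus a bias. A minimal Python sketch (illustrative names, not the post's code):

```python
def one_layer(x, w, b):
    """One-layer network, n inputs, one output: y = w·x + b."""
    return sum(wi * xi for wi, xi in zip(w, x)) + b

# 0.5*1.0 + (-0.25)*2.0 + 0.1
y = one_layer([1.0, 2.0], w=[0.5, -0.25], b=0.1)
```

Note that without a nonlinearity this model can only represent affine functions, which is why it cannot approximate arbitrary targets on its own.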

In Torch, this network can be described in a few lines (the code is omitted from this excerpt). After training, we can apply the training set to the model to see what the neural network thinks the function looks like.
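Since the Torch code itself is missing from this excerpt, here is a hedged pure-Python stand-in — all names, the tanh activation, and the hyperparameters are my own choices, not the post's — showing the whole loop: define a one-hidden-layer network, train it by gradient descent with backpropagation, then apply the training set to the fitted model:

```python
import math, random

random.seed(0)

def train(xs, ys, hidden=10, lr=0.1, epochs=5000):
    """Fit y = w2·tanh(w1*x + b1) + b2 to scalar pairs (xs, ys)
    with full-batch gradient descent on the mean squared error."""
    w1 = [random.uniform(-1, 1) for _ in range(hidden)]
    b1 = [0.0] * hidden
    w2 = [random.uniform(-1, 1) for _ in range(hidden)]
    b2 = 0.0
    n = len(xs)
    for _ in range(epochs):
        gw1 = [0.0] * hidden; gb1 = [0.0] * hidden
        gw2 = [0.0] * hidden; gb2 = 0.0
        for x, y in zip(xs, ys):
            h = [math.tanh(w1[j] * x + b1[j]) for j in range(hidden)]
            pred = sum(w2[j] * h[j] for j in range(hidden)) + b2
            d = 2.0 * (pred - y) / n                 # dMSE/dpred
            gb2 += d
            for j in range(hidden):
                gw2[j] += d * h[j]
                dh = d * w2[j] * (1.0 - h[j] ** 2)   # backprop through tanh
                gw1[j] += dh * x
                gb1[j] += dh
        for j in range(hidden):                      # one update per epoch
            w1[j] -= lr * gw1[j]
            b1[j] -= lr * gb1[j]
            w2[j] -= lr * gw2[j]
        b2 -= lr * gb2
    return lambda x: sum(w2[j] * math.tanh(w1[j] * x + b1[j])
                         for j in range(hidden)) + b2

xs = [i / 10.0 - 1.0 for i in range(21)]   # grid on [-1, 1]
model = train(xs, [x * x for x in xs])     # toy target: f(x) = x^2
```

Evaluating `model` over `xs` reproduces the "apply the training set to the model" step; plotting those predictions against the targets shows what the network has learned.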

Before you lose faith in artificial neural networks, let’s understand what’s happening.

In the output layer we have a new weight matrix and bias term applied to the output of the hidden layer.
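Composed, the full two-layer map is $y = W_2\, f(W_1 x + b_1) + b_2$. A minimal sketch with tanh standing in for $f$ (names are illustrative):

```python
import math

def two_layer(x, W1, b1, w2, b2):
    """y = w2 · tanh(W1 x + b1) + b2: a hidden layer, then a linear output layer
    with its own weights w2 and bias b2."""
    h = [math.tanh(sum(wij * xj for wij, xj in zip(row, x)) + bi)
         for row, bi in zip(W1, b1)]
    return sum(w * hi for w, hi in zip(w2, h)) + b2

# Two mirrored hidden units cancel at the output: tanh(0.5) + tanh(-0.5) = 0.
y = two_layer([0.5], W1=[[1.0], [-1.0]], b1=[0.0, 0.0], w2=[1.0, 1.0], b2=0.0)
```

The nonlinearity between the two affine maps is what keeps the composition from collapsing into a single affine map.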

The choice of activation function ranges from the sigmoid to tanh to the default rectified linear unit (ReLU).
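The three activations differ mainly in their output ranges and saturation behavior; a quick Python comparison (not from the post):

```python
import math

def sigmoid(z):
    """Squashes any real input into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def tanh(z):
    """Squashes any real input into (-1, 1), centered at 0."""
    return math.tanh(z)

def relu(z):
    """Passes positive inputs through unchanged; clips negatives to 0."""
    return max(0.0, z)
```

ReLU avoids the saturation of sigmoid and tanh for large positive inputs, which is one reason it became the common default.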

(I don’t include these parameters as this is related to an exercise for my students.) To render the plots, I evaluate the trained model against the complete training set and write out a CSV.
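In the post this evaluate-and-export step is presumably done from R or Torch; a Python equivalent of writing model outputs over the training set to a CSV (the filename and the stand-in predictions are mine) might look like:

```python
import csv

# Illustrative: dump (x, prediction) pairs so they can be plotted elsewhere.
xs = [i / 10.0 - 1.0 for i in range(21)]
preds = [x * x for x in xs]  # stand-in for the trained model's output

with open("predictions.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["x", "yhat"])       # header row
    writer.writerows(zip(xs, preds))     # one row per training point
```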

One of the key lessons with neural networks is that you cannot blindly create networks and expect them to yield something useful.

- On Tuesday, January 15, 2019

**Neural Networks 7: universal approximation**

**Neural Networks 2 - Multi-Layer Perceptrons**

This video demonstrates how several perceptrons can be combined into a Multi-Layer Perceptron, a standard Neural Network model that can calculate ...

**Neural Network - function approximation**

Training algorithm: gradient descent with backpropagation. Momentum: used. Activation function: Leaky ReLU for the hidden layers and linear for the output layer.

**A small neural network learning a function**

During my final few days at UC Riverside, I started working on a deep neural network code in C++. Here's the first result: a simple function approximator.

**Neural Network - function approximation with regularization**

Training algorithm: gradient descent with backpropagation. Momentum: used. Activation function: Leaky ReLU for the hidden layers and linear for the output layer.

**Neural Networks 9: derivatives we need for backprop**

**Lecture 3 | Loss Functions and Optimization**

Lecture 3 continues our discussion of linear classifiers. We introduce the idea of a loss function to quantify our unhappiness with a model's predictions, and ...

**Function Approximation**

**Linear Value Function Approximation**

This video is part of the Udacity course "Reinforcement Learning". Watch the full course at

**Why Do Neural Networks Work? - The Universal Approximation Theorem**

My submission for the 2017 Breakthrough Challenge. This video explains why neural networks are so powerful and why ...