AI News: Artificial Neural Networks/Competitive Learning

Competitive learning is a learning rule based on the idea that only one neuron in a given layer will fire on any given iteration.

The “winner” of each iteration, element i*, is the element whose total weighted input is the largest.

Neurons become trained to be individual feature detectors, and a combination of feature detectors can be used to identify large classes of features from the input space.
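As a minimal sketch of this winner-take-all rule (not code from the original article; the layer size, input dimension, and weight layout are assumptions), the winner can be found by taking the largest total weighted input:

    import numpy as np

    # Hypothetical competitive layer: 4 neurons, 3-dimensional input.
    rng = np.random.default_rng(0)
    W = rng.normal(size=(4, 3))      # one weight row per neuron (assumed layout)
    x = rng.normal(size=3)           # a single input vector

    net = W @ x                      # total weighted input of each neuron
    winner = int(np.argmax(net))     # i* = the element with the largest input

    output = np.zeros(len(net))
    output[winner] = 1.0             # only the winner fires on this iteration
    print(winner, output)

Each neuron's weight row acts as the feature it detects, and the argmax simply picks the detector that matches the current input best.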

Competitive Learning

Neurons in a competitive layer learn to represent different regions of the input space where input vectors occur.

We can configure the network inputs (normally done automatically by TRAIN) and plot the initial weight vectors to see their attempt at classification.

The weight vectors (o's) will be trained so that they end up centered in the clusters of input vectors (+'s).

Set the number of epochs to train before stopping and train this competitive layer (may take several seconds).
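The text above appears to describe a MATLAB competitive-layer demo; as a rough stand-in (a sketch under assumed settings: made-up 2-D cluster data, three competitive neurons, and a plain Kohonen-style update loop instead of TRAIN), the same behaviour can be shown in a few lines of Python:

    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical input data: three clusters of 2-D points (the '+' markers).
    centres = np.array([[0.0, 0.0], [3.0, 3.0], [0.0, 3.0]])
    X = np.vstack([c + 0.3 * rng.normal(size=(50, 2)) for c in centres])

    # Competitive layer: one weight vector (an 'o') per cluster we hope to find.
    W = rng.uniform(X.min(), X.max(), size=(3, 2))

    epochs, eta = 20, 0.1            # assumed training settings
    for _ in range(epochs):
        for x in rng.permutation(X):
            # Winner = neuron whose weight vector is closest to the input.
            winner = np.argmin(np.linalg.norm(W - x, axis=1))
            # Move only the winner toward the input vector.
            W[winner] += eta * (x - W[winner])

    print(np.round(W, 2))            # each row should lie near one cluster centre

After the loop, each row of W should sit near the centre of one input cluster, which is what the plot of o's settling into the +'s is meant to illustrate.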

Adaptive competitive learning neural networks

In this paper, the adaptive competitive learning (ACL) neural network algorithm is proposed.

The performance of the ACL algorithm is evaluated and compared with that of a recently proposed algorithm from the literature on the task of clustering an input data set and determining its number of clusters.

Results show that the ACL algorithm is more accurate and robust than the other algorithm in both determining the number of clusters and allocating input feature vectors to these clusters, especially with data sets that are sparsely distributed.

Competitive learning

Competitive learning is a form of unsupervised learning in artificial neural networks, in which nodes compete for the right to respond to a subset of the input data.[1]

Accordingly, the individual neurons of the network learn to specialize on ensembles of similar patterns and in so doing become 'feature detectors' for different classes of input patterns.

Competitive networks recode sets of correlated inputs to one of a few output neurons, essentially removing the redundancy in representation; such redundancy reduction is an essential part of processing in biological sensory systems.[4][5]

In the standard formulation, each output neuron i has a weight vector $\mathbf{w}_i = (w_{i1}, \dots, w_{id})$ and each input vector is $\mathbf{x}_n = (x_{n1}, \dots, x_{nd})$. For a given input $\mathbf{x}_n$, the neurons compete and the winner is the neuron whose weight vector $\mathbf{w}_i$ lies closest to $\mathbf{x}_n$ (for example, the one minimising $\|\mathbf{x}_n - \mathbf{w}_i\|$); only the winner's weights are then moved toward the input.

Thus, as more data are received, each node converges on the centre of the cluster that it has come to represent and activates more strongly for inputs in this cluster and more weakly for inputs in other clusters.
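Written out as the usual update rule (stated here for clarity; the learning-rate symbol $\eta$ is an assumption, not notation from the article), the winning neuron $i^*$ for input $\mathbf{x}_n$ is nudged toward that input:

    \mathbf{w}_{i^*} \leftarrow \mathbf{w}_{i^*} + \eta \,(\mathbf{x}_n - \mathbf{w}_{i^*})

Because every update pulls $\mathbf{w}_{i^*}$ a fraction $\eta$ of the way toward the inputs it wins, its weight vector settles near the mean of those inputs, i.e. the centre of its cluster, which is exactly the convergence described in the previous paragraph.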

How to Predict Stock Prices Easily - Intro to Deep Learning #7

We're going to predict the closing price of the S&P 500 using a special type of recurrent neural network called an LSTM network. I'll explain why we use ...

What is SELF-ORGANIZING MAP? What does SELF-ORGANIZING MAP mean? SELF-ORGANIZING MAP meaning

What is SELF-ORGANIZING MAP? What does SELF-ORGANIZING MAP mean? SELF-ORGANIZING MAP meaning - SELF-ORGANIZING MAP definition ...

Lecture 05 - Training Versus Testing

Training versus Testing - The difference between training and testing in mathematical terms. What makes a learning model able to generalize? Lecture 5 of 18 of ...

Lecture 3 | Loss Functions and Optimization

Lecture 3 continues our discussion of linear classifiers. We introduce the idea of a loss function to quantify our unhappiness with a model's predictions, and ...

How to Make an Image Classifier - Intro to Deep Learning #6

We're going to make our own Image Classifier for cats & dogs in 40 lines of Python! First we'll go over the history of image classification, then we'll dive into the ...

Lecture 14 | Deep Reinforcement Learning

In Lecture 14 we move from supervised learning to reinforcement learning (RL), in which an agent must learn to interact with an environment in order to ...

Lecture 12 - Regularization

Regularization - Putting the brakes on fitting the noise. Hard and soft constraints. Augmented error and weight decay. Lecture 12 of 18 of Caltech's Machine ...

How to Generate Art - Intro to Deep Learning #8

We're going to learn how to use deep learning to convert an image into the style of an artist that we choose. We'll go over the history of computer generated art, ...

How to Use Tensorflow for Classification (LIVE)

In this live session I'll introduce & give an overview of Google's Deep Learning library, Tensorflow. Then we'll use it to build a neural network capable of ...