
# KOHONEN SELF ORGANISING NEURAL NETWORK

Author - Jeneefa Thomas

An Artificial Neural Network (ANN) is based on a collection of connected nodes called artificial neurons, which loosely model the neurons of a human brain. The self-organizing map (SOM), or Kohonen network, is a type of artificial neural network trained by unsupervised learning. It is also called a feature map, as it maps high-dimensional inputs to a low-dimensional (typically two-dimensional) discretised representation; this makes it a method of dimensionality reduction. The network is related to the feedforward neural network in that its nodes can be visualised, but it is fundamentally different in arrangement. A U-Matrix is commonly used to visualise the distances between neighbouring nodes. The goal of learning in a self-organizing map is to make different parts of the network respond similarly to similar inputs.

## Introduction to Neural Networks:

Neural networks are fast and efficient function approximators. The first neural network model, the Perceptron, was created by Frank Rosenblatt in 1958. A neural network is trained by determining the difference between the processed output of the network (often a prediction) and a target output; this difference is the error. The network then adjusts its weighted connections according to a learning rule, using this error value. The components of an ANN are neurons, connections with their weights, and a propagation function. The learning rate defines the size of the corrective step the model takes to adjust for each observed error. A fundamental property of neural networks is fault tolerance: a small amount of noisy data will not cripple the network, because it learns to adjust through training on the data.

## Unsupervised Learning:

This is one of the three learning paradigms of neural networks. Here the input data is given along with a cost function. No target answers are provided, so the network must discover patterns in the given data on its own. Applications include clustering, compression, filtering, and the estimation of statistical distributions. (FIG. 1)

## Kohonen Self-Organizing Neural Network:

The Kohonen map or self-organizing map (SOM) is a type of neural network developed by Teuvo Kohonen in 1982. It is named self-organizing because it does not require supervision: it follows an unsupervised learning approach, and the network is trained through a competitive learning algorithm.

The major characteristic of this algorithm is that input data points that are close in the high-dimensional input space are mapped to nearby nodes in the two-dimensional (2D) space. The technique is therefore a form of dimensionality reduction, as it maps the high-dimensional input to a low-dimensional discretised representation. The advantage is that the nodes are self-organizing, so supervision is not needed.

## Feature Maps:

Self-organizing maps (SOMs) are also called feature maps, as they retain the features of the input data while training to capture the similarities between nodes. This makes SOMs useful for visualisation, by creating low-dimensional views of high-dimensional data that preserve the relationships within it.

## Vector quantization:

Vector quantization is one of the properties of self-organizing maps; it is a compression technique that provides a way to represent multi-dimensional data in a lower-dimensional space, typically one or two dimensions. The SOM uses competitive learning instead of error correction to modify the weights, so each update applies only to the winning node and its neighbours. Let us now discuss the architecture of the self-organizing neural network.

## Architecture of Self-Organizing Maps:

The self-organizing neural network differs from other ANNs in both its architectural and algorithmic properties. (FIG. 2)

The self-organizing neural network consists of a single-layer linear 2D grid of neurons, rather than a series of layers. Every node on this lattice is connected directly to the input vector. The SOM network thus consists of two layers: the input layer and the output layer.

The weights are updated as a function of the input data: at each iteration the grid reorganises its coordinates to reflect the inputs it has seen. Although all nodes respond when the features of an input vector are presented to the network, only a single node, the one closest to the input, is activated at each iteration.
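The architecture described above can be sketched in a few lines of NumPy: a single 2D lattice of nodes, each holding a weight vector of the same dimension as the input. The grid size and input dimension below are illustrative choices, not values from the text.

```python
import numpy as np

grid_rows, grid_cols = 10, 10   # size of the 2D output lattice
input_dim = 3                   # e.g. an RGB colour vector

rng = np.random.default_rng(0)
# weights[i, j] is the weight vector of the node at row i, column j;
# every node on the lattice is connected directly to the full input vector.
weights = rng.random((grid_rows, grid_cols, input_dim))

print(weights.shape)  # (10, 10, 3)
```

There are no hidden layers: the "network" is entirely this grid of weight vectors, and training consists of moving those vectors around in the input space.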

## Stages of operations:

The operation of the self-organizing neural network is divided into three stages:

## Construction:

The self-organizing network consists of a few basic elements. A matrix of neurons is stimulated by the input signals, which are grouped and transferred to every neuron.

## Learning:

This mechanism determines the similarity between every neuron and the input signal, and assigns the neuron with the shortest distance as the winner. At the start of the process the weights are small random numbers; after learning, the weights have been modified and reflect the internal structure of the input data.

## Identification:

In the final stage, the weights of the winning neuron and its neighbours are adapted, and the network topology is defined by determining the neighbours of every neuron.

## Properties:

Some of the properties to be known are:

## Best Matching Unit (BMU):

The winning node is chosen by determining the distance between the current input values and the weights of all the nodes in the network, and selecting the node with the smallest distance.

The distance from the input is:

`d = sqrt( Σ_{i=0..n} (I_i − W_i)² )`

Where:

• I – current input vector

• W – node's weight vector

• n – number of weights
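The BMU rule above can be sketched directly in NumPy: compute the squared distance between the current input and every node's weight vector, then pick the node with the smallest value. The grid values here are illustrative.

```python
import numpy as np

def bmu_index(weights, x):
    """Return the (row, col) of the Best Matching Unit.

    weights: (rows, cols, dim) grid of weight vectors
    x:       (dim,) current input vector
    """
    d2 = np.sum((weights - x) ** 2, axis=-1)   # Σ (I_i − W_i)² per node
    return np.unravel_index(np.argmin(d2), d2.shape)

# Tiny example: one node is clearly closer to the input than the rest.
grid = np.zeros((2, 2, 3))
grid[1, 0] = [0.9, 0.9, 0.9]
winner = bmu_index(grid, np.array([1.0, 1.0, 1.0]))  # node at row 1, col 0
```

Since the square root is monotonic, comparing squared distances is enough to find the winner, so the `sqrt` can be skipped.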

## Euclidean distance:

This refers to the difference between the input vector and each chosen node, along with its neighbouring nodes within a particular radius, whose positions are slightly adjusted to better match the input vector.

The formula that determines the Euclidean distance is:

`d = sqrt( Σ_{i=1..m} Σ_{j=1..n} (x_i − w_ij)² )` (FIG. 3)

Consider an input layer with x1, x2, x3 and an output layer with y1 and y2. The SOM algorithm initialises random weights wij, so the connections carry weights such as [w11, w12 … w31, w32 …].

## Winning weights:

The winning weight is found by:

`w_ij(new) = w_ij(old) + α(t) · (x_i^(k) − w_ij(old))`

After the weights are initialised, for each training input the shortest Euclidean distance determines the winning vector, which is then updated at each training iteration.

Where;

• α(t) is the learning rate, which decreases with time within the interval [0, 1] and ensures the network converges.

• x(t) is the input vector used at the current iteration to update the weights.

• d is the distance between a node and the Best Matching Unit (BMU).

• t refers to the current iteration.

• i, j refer to the row and column coordinates of the node grid.

• w refers to the weight vector.

• wij(old) refers to the previous weight, used to find the present one.
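The standard SOM update, w_new = w_old + α(t)·(x − w_old), can be applied in one vectorised step to the winning node and its neighbours. The Gaussian neighbourhood function used below is a common choice, assumed here for illustration.

```python
import numpy as np

def update_weights(weights, x, bmu, alpha, sigma):
    """Move every node's weights toward input x, scaled by distance to the BMU.

    weights: (rows, cols, dim) grid   x: (dim,) input
    bmu:     (row, col) of the winner
    alpha:   learning rate α(t)       sigma: neighbourhood radius
    """
    rows, cols, _ = weights.shape
    ii, jj = np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij")
    # squared grid distance of every node to the BMU
    d2 = (ii - bmu[0]) ** 2 + (jj - bmu[1]) ** 2
    # neighbourhood factor: 1 at the BMU, decaying with grid distance
    h = np.exp(-d2 / (2 * sigma ** 2))
    # w_new = w_old + α(t) · h · (x − w_old)
    return weights + alpha * h[..., None] * (x - weights)
```

At the BMU itself h = 1, so the update reduces exactly to the formula above; far from the BMU h ≈ 0, so those weights barely move.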

## Algorithm:

Step:1

Initialize the weights wij to small random values.

Step:2

Choose a random input vector x.

Step:3

Repeat steps 4 and 5 for all nodes on the map.

Step:4

Calculate the Euclidean distance between weight vector wij and the input vector x(t), and calculate the square of the distance

Step:5

Track the node that generates the smallest distance and compute the winning weight using the update formula.

Step:6

Calculate the overall Best Matching Unit (BMU). It means the node with the smallest distance from all calculated ones.

Step:7

Discover topological neighborhood of BMU in Kohonen Map.

Step:8

Repeat for all nodes in the BMU neighborhood:

Update the winning weight of the first node in the neighborhood of the BMU by including a fraction of the difference between the input vector x(t) and the weight w(t) of the neuron.

Step:9

Repeat the complete process until the iteration limit is reached.

Here, step 1 represents the initialization phase, while steps 2 to 9 represent the training phase.
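The nine steps above can be condensed into a short training loop. The grid size, learning-rate schedule, and random data below are illustrative choices, not values from the text.

```python
import numpy as np

rng = np.random.default_rng(1)
rows, cols, dim = 8, 8, 3
weights = rng.random((rows, cols, dim))              # step 1: random weights
data = rng.random((200, dim))                        # training inputs

n_iter = 500
for t in range(n_iter):                              # step 9: iterate
    x = data[rng.integers(len(data))]                # step 2: random input
    d2 = np.sum((weights - x) ** 2, axis=-1)         # steps 3-4: distances
    bmu = np.unravel_index(np.argmin(d2), d2.shape)  # steps 5-6: find the BMU
    alpha = 0.5 * (1 - t / n_iter)                   # decaying learning rate
    sigma = max(1.0, rows / 2 * (1 - t / n_iter))    # shrinking neighbourhood
    ii, jj = np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij")
    h = np.exp(-((ii - bmu[0]) ** 2 + (jj - bmu[1]) ** 2) / (2 * sigma ** 2))
    weights += alpha * h[..., None] * (x - weights)  # steps 7-8: update BMU
                                                     # and its neighbourhood
```

After training, nearby nodes on the grid hold similar weight vectors, which is the topological ordering that makes the map useful for visualisation.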

## Applications:

• Colour classification

• Image classification

• Dimensionality reduction

## Advantages:

• It is easily interpreted and understood.

• Grid clustering makes it easy to observe similarities in the data.

## Disadvantages:

• It does not build a generative model for the data.

• It relies on a predefined distance in feature space (a problem shared by most clustering algorithms, to be fair).

• The magnification factors are not well understood.

• The 1D (proven) topological ordering property does not extend to 2D.

• Training is slow, and it is hard to train against slowly evolving data.

• It is not so intuitive: neurons that are close on the map (topological proximity) may be far apart in feature space.

• It does not behave gently with categorical data, and even worse with mixed data.

## Conclusion:

Thus the SOM provides an elegant solution to many complex problems and can be used to interpret data sets. Through its properties it finds use in colour classification and many other applications.