The concept of a self-organizing map (SOM) was first put forth by Kohonen. A SOM is an unsupervised neural network, trained with unsupervised learning techniques, that reduces data dimensionality by building a low-dimensional, discretized representation of the input space of the training samples. This representation is known as a map.

In this article, we will go through a beginner's guide to a popular self-organizing map, the Kohonen Map. We will start by understanding what Self-Organizing Maps are.

What Are Self-Organizing Maps?

A self-organizing map, often known as a Kohonen map or SOM, is a type of artificial neural network influenced by the biological models of neural systems from the 1970s. It employs an unsupervised learning methodology and trains its network with a competitive learning algorithm. To reduce complex problems to straightforward interpretations, SOM is used for mapping and clustering (or dimensionality reduction), projecting multidimensional data onto lower-dimensional spaces. The SOM is made up of two layers: the input layer and the output layer. This is also known as the Kohonen Map.

Now that we have discussed what SOMs are, let us look at how Kohonen Maps work.

How Do SOMs Work?

Consider an input set with dimensions (m, n), where m is the number of training examples and n is the number of features in each example. Weights of size (n, C), where C is the number of clusters, are first initialized. The algorithm then iterates over the input data; for each training example, it finds the winning vector (the weight vector at the shortest distance from the training example, measured, for example, by the Euclidean distance) and updates it. The weight update rule is given by:

w_ij = w_ij(old) + alpha(t) * (x_ik - w_ij(old))

Here, i stands for the ith feature of the training example, j is the index of the winning vector, alpha(t) is the learning rate at time t, and k is the index of the kth training example in the input data. After the SOM network is trained, the trained weights are used to cluster new examples: each new example is assigned to the cluster of its winning vector.
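To make the rule concrete, here is a minimal NumPy sketch of a single update step. The function name and array shapes are our own illustrative choices, following the (n, C) weight layout described above:

import numpy as np

def update_winner(weights, x, alpha):
    # weights: array of shape (n, C), one weight vector per cluster
    # x: a single training example of shape (n,)
    # alpha: learning rate alpha(t) at the current iteration
    distances = np.linalg.norm(weights - x[:, None], axis=0)  # distance from x to each weight vector
    j = np.argmin(distances)  # index of the winning vector
    # w_ij(new) = w_ij(old) + alpha(t) * (x_ik - w_ij(old))
    weights[:, j] += alpha * (x - weights[:, j])
    return j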

Algorithm

The steps involved are:

  • Step 1: Initialize each node's weights w_ij to a random value.
  • Step 2: Select an input vector x_k at random.
  • Step 3: Repeat steps 4 and 5 for every node on the map.
  • Step 4: Compute the Euclidean distance between the input vector x(t) and the node's weight vector w_ij.
  • Step 5: Track the node that produces the smallest distance.
  • Step 6: Identify the overall Best Matching Unit (BMU), i.e., the node whose weight vector is closest to the input vector.
  • Step 7: Determine the BMU's topological neighborhood and its radius in the Kohonen Map.
  • Step 8: Update the weights of the BMU and of the nodes in its neighborhood, moving them closer to the input vector with the update rule above.
  • Step 9: Repeat from step 2 for N iterations.

Note: Step 1 is the initialization phase, while steps 2 through 9 make up the training phase; a runnable sketch of the full loop follows the notation below.

Here,

  • t → the current iteration
  • X → the input vector
  • X(t) → the input vector instance at iteration t
  • w → the weight vector
  • w_ij → the association weight between grid node (i, j) and the input
  • i → the row coordinate of a node in the grid
  • j → the column coordinate of a node in the grid
  • σ(t) → the radius of the neighborhood function, which determines how far neighboring nodes in the 2D grid are examined when updating vectors; it gradually shrinks over time
  • β_ij → the neighborhood function, which represents the distance between node (i, j) and the BMU
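Putting the steps and notation together, the following is a minimal, self-contained NumPy sketch of the algorithm. The grid size, iteration count, and the linear decay schedules for alpha(t) and σ(t) are our own illustrative assumptions, not part of any standard library:

import numpy as np

def train_som(data, grid=10, n_iter=1000, alpha0=0.5, sigma0=3.0):
    rng = np.random.default_rng(0)
    # Step 1: initialize each node's weights w_ij to a random value
    w = rng.random((grid, grid, data.shape[1]))
    # (row, col) coordinates of every node on the 2D grid
    coords = np.stack(np.meshgrid(np.arange(grid), np.arange(grid), indexing='ij'), axis=-1)
    for t in range(n_iter):
        alpha = alpha0 * (1 - t / n_iter)          # decaying learning rate alpha(t)
        sigma = sigma0 * (1 - t / n_iter) + 1e-3   # decaying neighborhood radius sigma(t)
        # Step 2: select an input vector x(t) at random
        x = data[rng.integers(len(data))]
        # Steps 3-6: Euclidean distance to every node; the BMU minimizes it
        dist = np.linalg.norm(w - x, axis=-1)
        bmu = np.unravel_index(np.argmin(dist), dist.shape)
        # Step 7: Gaussian topological neighborhood beta_ij around the BMU
        grid_dist2 = ((coords - np.array(bmu)) ** 2).sum(axis=-1)
        beta = np.exp(-grid_dist2 / (2 * sigma ** 2))
        # Steps 8-9: pull every node toward x, weighted by beta_ij, then repeat
        w += alpha * beta[..., None] * (x - w)
    return w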

Let us now discuss the various uses of Self-Organizing or Kohonen Maps.

Uses of Self-Organizing Maps

Self-Organizing Maps, which are not necessarily linear, have the advantage of keeping the structural information of the training data intact. Principal Component Analysis (PCA) can result in data loss when applied to high-dimensional data that is reduced to two dimensions. Self-Organizing Maps can be a great alternative to PCA for dimensionality reduction if the data has several dimensions and every dimension is relevant. In seismic facies analysis, for example, groups are created based on the detection of individual features; using this technique, organized relational clusters are formed by identifying feature organizations in the dataset.

We will now discuss the architecture of Self-Organizing or Kohonen Maps.

Self-Organizing Maps Architecture

Self-organizing maps are made up of two crucial layers: the input layer and the output layer, commonly referred to as the feature map. The input layer is the first layer in a self-organizing map, and every data point in the dataset competes for a representation on the map. The mapping process of a Self-Organizing Map starts with initializing the weight vectors.

A random sample vector is then drawn, and the mapped vectors are examined to determine which weight vector most accurately represents the chosen sample. Each weight vector has neighboring weights that are close to it. The chosen weight is allowed to move toward the random sample vector, and so are its neighbors. This encourages the map to grow and take on new forms. In a 2D feature space, the nodes typically form hexagonal or square grids. This entire process is repeated many times, typically more than 1,000.

To put it simply, learning takes place in the following ways:

  • Each node is examined to determine which one's weights are most similar to the input vector. This node is called the best matching unit (BMU).
  • The BMU's neighborhood is then determined. The number of neighbors tends to decline over time (see the sketch after this list).
  • The winning weight is moved closer to the sample vector, and its neighbors change similarly. The closer a node is to the BMU, the more its weights change; the farther away it is, the less it changes.
  • Repeat the process for N iterations.
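The shrinking neighborhood in the second and third bullets is easy to see numerically. The snippet below is a small illustration of our own, using an exponentially decaying radius; it prints how much influence nodes at different grid distances from the BMU receive as training progresses:

import numpy as np

sigma0, tau = 3.0, 500.0            # assumed initial radius and decay constant
for t in [0, 250, 1000]:            # early, middle, and late iterations
    sigma = sigma0 * np.exp(-t / tau)            # sigma(t) shrinks over time
    for d in [0, 1, 3]:                          # grid distance from the BMU
        influence = np.exp(-d ** 2 / (2 * sigma ** 2))
        print(f"t={t:4d}  distance={d}  influence={influence:.3f}")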

Now, we will be exploring the pros and cons of the Self-Organizing Maps or Kohonen Maps.

Pros And Cons Of Self-Organizing Maps

Self-organizing maps have both advantages and disadvantages, some of which are shown below:

Pros

  • Techniques like dimensionality reduction and grid clustering make the data simple to understand and interpret.
  • Self-organizing maps can handle a variety of classification problems while simultaneously producing an insightful and practical summary of the data.

Cons

  • Since it does not build a generative model of the data, the model cannot explain how the data is generated.
  • Self-Organizing Maps perform poorly on categorical data, and even worse on mixed types of data.
  • In comparison, the model preparation process is extremely slow, making it challenging to train against slowly evolving data.

Let us now deal with the implementation of the Self-Organizing Maps in Python. 

Implementing Self-Organizing Maps Using Python

Self-Organizing Maps can be implemented quickly in Python using NumPy together with the MiniSom package. In the example below, we will explore how to cluster the iris dataset using MiniSom.

!pip install minisom

from minisom import MiniSom
from sklearn import datasets
from sklearn.preprocessing import scale

# load and standardize a numeric dataset (the iris dataset here)
data = scale(datasets.load_iris().data)

# defining the grid of neurons
neurons_a = 9
neurons_b = 9

# the input length (number of features) must come before the keyword arguments
som = MiniSom(neurons_a, neurons_b, data.shape[1], sigma=1.5,
              learning_rate=.5, neighborhood_function='gaussian',
              random_seed=0)
som.pca_weights_init(data)
som.train(data, 1000, verbose=True)

We can visualize the results of training by plotting the distance map (the U-Matrix) as a pseudocolor image, where the neurons of the map are displayed as an array of cells and the color denotes the weighted distance from the neighboring neurons. On top of the pseudocolor we can also add markers that represent the samples mapped to the particular cells.
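A minimal sketch of that plot, continuing from the som object trained above (distance_map is part of MiniSom and returns each neuron's normalized distance from its neighbors):

import matplotlib.pyplot as plt

plt.figure(figsize=(7, 7))
# U-Matrix: one cell per neuron, color = distance from the neighboring neurons
plt.pcolor(som.distance_map().T, cmap='bone_r')
plt.colorbar()
plt.show()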

A scatter chart, where each dot indicates the location of the winning neuron for one sample, can be used to visualize how the samples are distributed across the map. To prevent points within the same cell from overlapping, a random offset can be added.
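A minimal version of that scatter plot, again assuming the som and data objects from the earlier snippet:

import numpy as np
import matplotlib.pyplot as plt

# coordinates of the winning neuron for every sample
w_x, w_y = zip(*[som.winner(d) for d in data])
# random offsets so that samples sharing a cell do not overlap
w_x = np.array(w_x) + np.random.uniform(-0.4, 0.4, len(w_x))
w_y = np.array(w_y) + np.random.uniform(-0.4, 0.4, len(w_y))

plt.figure(figsize=(7, 7))
plt.scatter(w_x, w_y, s=15, alpha=0.6)
plt.show()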

Another pseudocolor graphic, reflecting the neuron activation frequencies, can be made to reveal which map neurons are activated more often:

plt.figure(figsize=(7, 7))
frequencies = som.activation_response(data)
plt.pcolor(frequencies.T, cmap='Blues')
plt.colorbar()
plt.show()

Our Learners Also Ask:

Here are some general FAQs on Self-Organizing or Kohonen Maps:

1. What are self-organizing maps used for?

In most cases, a high-dimensional dataset is represented as a two-dimensional discretized pattern using self-organizing maps or Kohonen maps.

2. What is an example of self-organizing maps?

A classic example is a self-organizing map displaying the voting trends of the US Congress. Each member of Congress was a row in the input dataset, and the columns for specific votes contained each member's yes/no/abstain vote. The SOM algorithm arranged these members in a 2D grid, with similar members clustered closer together.

3. What is the advantage of self-organizing maps when compared to neural networks?

The primary benefit of employing a SOM is that the data is simple to read and comprehend. Grid clustering and dimensionality reduction make it simple to spot patterns in the data.

4. What are the five stages in a Self Organizing map?

The five stages in a Self Organizing Map or Kohonen Map are:

  • Initialization
  • Sampling
  • Matching
  • Updating
  • Continuation

Master Self-Organizing Maps With Simplilearn

Self-Organizing Maps are distinctive in and of themselves and offer a broad range of applications in the fields of Deep Learning and Artificial Neural Networks. The SOM is an unsupervised clustering technique that projects data onto a lower-dimensional grid, making it very beneficial for dimensionality reduction. Clustering techniques can be implemented easily, thanks to its distinctive training architecture.

In this article, we discussed what Self-Organizing Maps or Kohonen Maps are, how they work, their uses, and their architecture. Further, we discussed their pros and cons and their implementation in Python. To dive deeper into Kohonen Maps and learn various other concepts related to Machine Learning in depth, check out Simplilearn's AI and ML Course.

