A self-organizing map (SOM) or self-organizing feature map (SOFM) is a type of artificial neural network that is trained using unsupervised learning to produce a low-dimensional (typically two-dimensional), discretized representation of the input space of the training samples, called a map. Self-organizing maps are different from other artificial neural networks in the sense that they use a neighborhood function to preserve the topological properties of the input space.
This makes SOMs useful for visualizing low-dimensional views of high-dimensional data, akin to multidimensional scaling. The model was first described as an artificial neural network by the Finnish professor Teuvo Kohonen, and is sometimes called a Kohonen map.^{[1]}
Like most artificial neural networks, SOMs operate in two modes: training and mapping. Training builds the map using input examples. It is a competitive process, also called vector quantization. Mapping automatically classifies a new input vector.
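The competitive training mode can be sketched as a simple loop: present an input, find the winning node, and pull that node and its grid neighbors toward the input. The following toy implementation is illustrative only; the grid layout, Gaussian neighborhood function, and linear decay schedules are common choices but are assumptions, not part of the description above.

```python
import math
import random

def train_som(data, width, height, dim, iters=200, lr0=0.5, sigma0=None):
    """Toy SOM trainer: a width*height grid of nodes, each holding a
    dim-dimensional weight vector, trained by competitive learning."""
    if sigma0 is None:
        sigma0 = max(width, height) / 2.0
    rng = random.Random(0)
    weights = [[rng.random() for _ in range(dim)] for _ in range(width * height)]
    for t in range(iters):
        frac = 1.0 - t / iters
        lr = lr0 * frac                    # learning rate decays linearly
        sigma = max(sigma0 * frac, 0.5)    # neighborhood radius shrinks over time
        x = rng.choice(data)
        # competitive step: the best-matching unit (BMU) is the node whose
        # weight vector is closest to the input
        bmu = min(range(len(weights)),
                  key=lambda i: sum((w - v) ** 2 for w, v in zip(weights[i], x)))
        bx, by = bmu % width, bmu // width
        # cooperative step: pull the BMU and its grid neighbors toward x,
        # weighted by a Gaussian neighborhood function on the map grid
        for i, w in enumerate(weights):
            gx, gy = i % width, i // width
            d2 = (gx - bx) ** 2 + (gy - by) ** 2
            h = math.exp(-d2 / (2 * sigma ** 2))
            for j in range(dim):
                w[j] += lr * h * (x[j] - w[j])
    return weights
```

Because the neighborhood function also moves nodes near the winner, nearby nodes end up with similar weight vectors, which is what preserves the topology of the input space.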
A self-organizing map consists of components called nodes or neurons. Associated with each node is a weight vector of the same dimension as the input data vectors and a position in the map space. The usual arrangement of nodes is a regular spacing in a hexagonal or rectangular grid. The self-organizing map describes a mapping from a higher-dimensional input space to a lower-dimensional map space. The procedure for placing a vector from data space onto the map is to find the node with the closest weight vector to the vector taken from data space and to assign the map coordinates of this node to our vector.
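The mapping procedure just described amounts to a nearest-neighbor search in weight space. A minimal sketch (function and variable names are illustrative):

```python
def map_to_grid(weights, positions, x):
    """Return the map position of the node whose weight vector is closest
    (by squared Euclidean distance) to the data vector x."""
    def dist2(w):
        return sum((wi - xi) ** 2 for wi, xi in zip(w, x))
    best = min(range(len(weights)), key=lambda i: dist2(weights[i]))
    return positions[best]

# Example: a 2x2 map whose weight vectors happen to sit at the unit-square corners.
weights = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
positions = [(0, 0), (1, 0), (0, 1), (1, 1)]
print(map_to_grid(weights, positions, [0.9, 0.1]))  # → (1, 0)
```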
While it is typical to consider this type of network structure as related to feedforward networks where the nodes are visualized as being attached, this type of architecture is fundamentally different in arrangement and motivation.
Useful extensions include using toroidal grids where opposite edges are connected and using large numbers of nodes. It has been shown that while self-organizing maps with a small number of nodes behave in a way that is similar to K-means, larger self-organizing maps rearrange data in a way that is fundamentally topological in character.
It is also common to use the U-Matrix. The U-Matrix value of a particular node is the average distance between that node's weight vector and those of its closest neighbors.^{[9]} In a square grid, for instance, we might consider the closest four or eight nodes, or six nodes in a hexagonal grid.
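For a rectangular grid with a 4-neighborhood, the U-Matrix can be computed roughly as follows; this is a sketch, and the Euclidean distance metric and the 4-neighbor choice are assumptions.

```python
import math

def u_matrix(weights, width, height):
    """U-Matrix for a width*height rectangular grid (4-neighborhood):
    each entry is the mean weight-space distance from a node to its
    immediate grid neighbors. Large values mark cluster boundaries."""
    def dist(a, b):
        return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))
    u = []
    for gy in range(height):
        row = []
        for gx in range(width):
            i = gy * width + gx
            neighbors = [i + dx for dx in (-1, 1) if 0 <= gx + dx < width]
            neighbors += [i + dy * width for dy in (-1, 1) if 0 <= gy + dy < height]
            row.append(sum(dist(weights[i], weights[j]) for j in neighbors)
                       / len(neighbors))
        u.append(row)
    return u
```

Plotted as a heat map, low U-Matrix values indicate nodes whose neighbors are similar (the interior of a cluster), while high values indicate boundaries between clusters.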
Large SOMs display emergent properties. In maps consisting of thousands of nodes, it is possible to perform cluster operations on the map itself.^{[2]}