aweSOM offers a set of tools to explore and analyze datasets with Self-Organizing Maps (SOM, also known as Kohonen maps), a form of artificial neural network originally created by Teuvo Kohonen in the 1980s. The package introduces interactive plots, making analysis of the SOM easier.
aweSOM can be used either through the web-based interface (launched by aweSOM()) or through the command-line functions detailed here. The interface also produces reproducible R code, which closely resembles the scripts below.
This vignette presents the most important functions of the aweSOM package, along with the workflow of training a SOM (using the kohonen package), assessing its quality, and visualizing the map along with its superclasses.
For the purpose of this example, we will train a 4x4 hexagonal SOM on the famous iris dataset.
After selecting and pre-processing the training data (here, scaling each variable to unit variance), the somInit function is used to initialize the map's prototypes. By default, this is done using a PCA-based method, but other initialization schemes can be chosen through the function's method argument.
The training data, initial prototypes, and other parameters are then passed to the kohonen::som function for training. The training arguments used here are the defaults produced by the aweSOM() interface for these data and grid dimensions.
library(aweSOM)

full.data <- iris
## Select variables
train.data <- full.data[, c("Sepal.Length", "Sepal.Width",
                            "Petal.Length", "Petal.Width")]
### Scale training data
train.data <- scale(train.data)

### RNG Seed (for reproducibility)
set.seed(1465)
### Initialization (PCA grid)
init <- somInit(train.data, 4, 4)
## Train SOM
iris.som <- kohonen::som(train.data,
                         grid = kohonen::somgrid(4, 4, "hexagonal"),
                         rlen = 100, alpha = c(0.05, 0.01),
                         radius = c(2.65, -2.65),
                         dist.fcts = "sumofsquares",
                         init = init)
somQuality computes several measures to help assess the quality of a SOM.
somQuality(iris.som, train.data)
#> 
#> ## Quality measures:
#> * Quantization error     :  0.215272 
#> * (% explained variance) :  94.58 
#> * Topographic error      :  0.07333333 
#> * Kaski-Lagus error      :  1.525657 
#> 
#> ## Number of obs. per map cell:
#>  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 
#>  1  8  4  5 16  9 15 12 20  8 17  9 13  0 10  3
Quantization error: Average squared distance between the data points and the map’s prototypes to which they are mapped. Lower is better.
Percentage of explained variance: Similar to other clustering methods, the share of total variance that is explained by the clustering (equal to 1 minus the ratio of quantization error to total variance). Higher is better.
Topographic error: Measures how well the topographic structure of the data is preserved on the map. It is computed as the share of observations for which the best-matching node is not a neighbor of the second-best matching node on the map. Lower is better: 0 indicates excellent topographic representation (all best and second-best matching nodes are neighbors), 1 is the maximum error (best and second-best nodes are never neighbors).
Kaski-Lagus error: Combines aspects of the quantization and topographic error. It is the sum of the mean distance between points and their best-matching prototypes, and of the mean geodesic distance (pairwise prototype distances following the SOM grid) between the points and their second-best matching prototype.
It is common to further cluster the SOM map into superclasses, groups of cells with similar profiles. This is done using classic clustering algorithms on the map’s prototypes.
Two methods are implemented in the web-based interface: PAM (k-medoids) and hierarchical clustering. This is how to obtain them in R. In this example, we choose 3 superclasses. We will return to the choice of the number of superclasses below.
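A minimal sketch of both clusterings, computed on the map's prototypes (this assumes the iris.som object trained above, and uses the cluster package for PAM; the 3-superclass cut matches the example):

```r
## Superclasses of the SOM, clustered on the map's prototypes
## (assumes iris.som from the training step above)

## PAM (k-medoids), from the cluster package
superclust_pam <- cluster::pam(iris.som$codes[[1]], 3)
superclasses_pam <- superclust_pam$clustering

## Hierarchical clustering (complete linkage), cut into 3 superclasses
superclust_hclust <- hclust(dist(iris.som$codes[[1]]), "complete")
superclasses_hclust <- cutree(superclust_hclust, 3)
```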
aweSOMplot creates a variety of interactive SOM visualizations. Using the function's type argument, one of the following types of plots can be created:
Several placement methods are available for the cloud; the one used here ("cellPCA") is a PCA computed on the training data of each cell, independently.
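As an illustration, a sketch of the cloud plot call (assuming iris.som and train.data from the training step above, and hierarchically-clustered superclasses as in the previous section; the superclass argument overlays the superclasses on the map):

```r
## Cloud of observations within each cell, colored by superclass
## (assumes iris.som and train.data from the training step above)
superclasses <- cutree(hclust(dist(iris.som$codes[[1]]), "complete"), 3)
aweSOMplot(iris.som, type = "Cloud", data = train.data,
           superclass = superclasses)
```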
The aweSOMsmoothdist function can further be used to produce a smoothed representation of the U-Matrix. Note, however, that the resulting representation is biased when using hexagonal maps (the smoothing function coerces the grid to a rectangular shape).
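A minimal sketch of the call, assuming the trained iris.som object from above:

```r
## Smoothed U-Matrix: distances between neighboring prototypes
## (assumes iris.som from the training step above)
aweSOMsmoothdist(iris.som)
```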
aweSOMplot offers several types of plots for numeric variables:
In all of these plots, the means of the chosen variables are displayed within each cell by default. Other values (medians or prototypes) can be requested through the values parameter. The scales of the plots can also be adapted, using the scales argument. The colors of the variables are controlled by the palvar argument.
On the following barplot, we plot the prototype values instead of the observation means.
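A sketch of this call, assuming the iris.som and train.data objects from the training step, with superclasses derived by hierarchical clustering as above:

```r
## Barplot of prototype values (instead of observation means) per cell
superclasses <- cutree(hclust(dist(iris.som$codes[[1]]), "complete"), 3)
aweSOMplot(iris.som, type = "Barplot", data = train.data,
           superclass = superclasses, values = "prototypes")
```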
On the following box-and-whisker plot, the scales are set to be the same across variables.
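A sketch of the corresponding call, under the same assumptions as the previous examples (trained iris.som, scaled train.data, hierarchical superclasses):

```r
## Box-and-whisker plot with identical scales across variables
superclasses <- cutree(hclust(dist(iris.som$codes[[1]]), "complete"), 3)
aweSOMplot(iris.som, type = "Boxplot", data = train.data,
           superclass = superclasses, scales = "same")
```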
On the following lines plot, we use the observation medians instead of the means.
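A sketch of the call, again assuming the trained iris.som, train.data, and hierarchical superclasses from above:

```r
## Lines plot of observation medians (instead of means) per cell
superclasses <- cutree(hclust(dist(iris.som$codes[[1]]), "complete"), 3)
aweSOMplot(iris.som, type = "Line", data = train.data,
           superclass = superclasses, values = "median")
```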
The following radar chart uses the default parameters.
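With default parameters, the sketch reduces to (same assumptions on iris.som, train.data, and superclasses as above):

```r
## Radar chart with default parameters (cell means, default scales)
superclasses <- cutree(hclust(dist(iris.som$codes[[1]]), "complete"), 3)
aweSOMplot(iris.som, type = "Radar", data = train.data,
           superclass = superclasses)
```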
The color plot, or heat map, applies to a single numeric variable. The superclass overlay can be removed by setting the showSC parameter to FALSE.
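A sketch of a color plot without the superclass overlay (the choice of Sepal.Length as the plotted variable is an assumption for illustration; iris.som and train.data come from the training step above):

```r
## Heat map of a single numeric variable, superclass overlay removed
aweSOMplot(iris.som, type = "Color", data = train.data,
           variables = "Sepal.Length", showSC = FALSE)
```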
aweSOMplot can also plot categorical variables, using pie charts or barplots.
In this case, we plot the Species of the iris, a factor with three levels that was not used during training. The following plots show that, based on the flowers' measurements, the SOM discriminates almost perfectly between the three species.
By default, the area of each pie is proportional to the number of observations in its cell. This can be changed through a dedicated argument of aweSOMplot.
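A sketch of the pie chart of Species per cell (assuming iris.som and full.data from the training step above, and hierarchical superclasses as in the earlier section; note that the categorical variable is taken from the unscaled full.data):

```r
## Pie chart of Species (not used during training) within each cell
superclasses <- cutree(hclust(dist(iris.som$codes[[1]]), "complete"), 3)
aweSOMplot(iris.som, type = "Pie", data = full.data,
           variables = "Species", superclass = superclasses)
```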
aweSOM offers three diagnostic plots to help choose the number of superclasses.
aweSOMscreeplot produces a scree plot, which shows the quality of the clustering (percentage of unexplained variance of the prototypes, lower is better) for varying numbers of superclasses. It supports hierarchical and PAM clustering. The rule of thumb is to choose the number of superclasses at the inflection point of this curve.
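A sketch of the scree plot call (the method and nclass argument names follow my reading of the aweSOM documentation and should be checked against it; iris.som comes from the training step above):

```r
## Scree plot for hierarchical clustering, highlighting 3 superclasses
aweSOMscreeplot(iris.som, method = "hierarchical", nclass = 3)
```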
aweSOMsilhouette returns a silhouette plot of the chosen superclasses (hierarchical, pam, or other). The higher the silhouettes, the better.
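A sketch of the silhouette plot for the hierarchical superclasses used in this example (assumes iris.som from the training step above):

```r
## Silhouette plot of the chosen superclasses
superclasses <- cutree(hclust(dist(iris.som$codes[[1]]), "complete"), 3)
aweSOMsilhouette(iris.som, superclasses)
```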
For hierarchical clustering, aweSOMdendrogram produces a dendrogram, along with the chosen cuts.
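A sketch of the dendrogram call (the nclass argument name is an assumption based on my reading of the aweSOM documentation; the hclust object is built on the prototypes of the iris.som trained above):

```r
## Dendrogram of the hierarchical clustering of prototypes,
## with the cut into 3 superclasses displayed
superclust <- hclust(dist(iris.som$codes[[1]]), "complete")
aweSOMdendrogram(superclust, nclass = 3)
```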