Evaluating single-cell clustering with ClustAssess

In this vignette we will illustrate several of the tools available in ClustAssess on a small single-cell RNA-seq dataset.


# we will use the pbmc_small single-cell RNA-seq dataset available via Seurat
DimPlot(pbmc_small, group.by='letter.idents')

Proportion of ambiguously clustered pairs

The proportion of ambiguously clustered pairs (PAC) uses consensus clustering to infer the optimal number of clusters for the data, by observing how variably pairs of elements cluster together. The lower the PAC, the more stable the clustering. PAC was originally presented in https://doi.org/10.1038/srep06207.
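To make the statistic concrete, here is a minimal base-R sketch of PAC computed directly from a consensus matrix (this is an illustration of the idea, not ClustAssess's implementation; the thresholds 0.1 and 0.9 are conventional choices for the "ambiguous" interval):

```r
# toy sketch of the PAC statistic on a consensus matrix, whose (i,j) entry
# is the fraction of consensus-clustering runs in which i and j co-cluster
pac <- function(consensus, lower = 0.1, upper = 0.9) {
  # take the upper triangle: one consensus value per pair of elements
  vals <- consensus[upper.tri(consensus)]
  # PAC = fraction of pairs whose co-clustering frequency is ambiguous,
  # i.e. neither "almost never together" nor "almost always together"
  mean(vals > lower & vals < upper)
}

# a stable consensus matrix: each pair co-clusters either always or never
stable <- rbind(c(1, 1, 0), c(1, 1, 0), c(0, 0, 1))
pac(stable)    # 0: no ambiguous pairs

# an unstable one: every pair co-clusters about half the time
unstable <- matrix(0.5, 3, 3); diag(unstable) <- 1
pac(unstable)  # 1: all pairs ambiguous
```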

# retrieve scaled data for PAC calculation
pbmc.data = GetAssayData(pbmc_small, assay='RNA', slot='scale.data')

# perform consensus clustering; the arguments controlling the range of k and
# the subsampling are illustrative settings
cc.res = consensus_cluster(t(pbmc.data), k_max=30, n_reps=100,
                           p_sample=0.8, p_feature=0.8)

# assess the PAC convergence for a few values of k - each curve should 
# have converged to some value
k.plot = c(4,6,8,10)
pac_convergence(cc.res, k.plot)

# visualize the final PAC across k - there seems to be a local maximum at k=7,
# indicating that 7 clusters leads to a less stable clustering of the data than
# nearby values of k
pac_landscape(cc.res)

Element-centric clustering comparison

We compare the similarity of clustering results between methods (in this case, between Louvain community detection and k-means) using element-centric similarity (ECS), which quantifies clustering similarity per cell: the higher a cell's ECS, the more similarly it is clustered by the two methods. ECS was introduced in https://doi.org/10.1038/s41598-019-44892-y.
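To build intuition for a per-element score, the sketch below computes a simplified per-cell agreement between two clusterings: for each element, the Jaccard similarity of its sets of cluster co-members. This is a hypothetical proxy for the intuition behind ECS, not the ECS definition itself (which is based on cluster-induced element affinities):

```r
# simplified per-element agreement between two cluster-label vectors
# (illustrative proxy only - not the actual ECS computation)
per_element_agreement <- function(a, b) {
  sapply(seq_along(a), function(i) {
    # the set of elements sharing i's cluster, under each clustering
    mates.a <- which(a == a[i])
    mates.b <- which(b == b[i])
    length(intersect(mates.a, mates.b)) / length(union(mates.a, mates.b))
  })
}

a <- c(1, 1, 2, 2)
b <- c(1, 1, 2, 3)
per_element_agreement(a, b)  # 1.0 1.0 0.5 0.5
```

Elements 1 and 2 have identical co-members under both clusterings, so they score 1; elements 3 and 4 are split apart by the second clustering, so they score lower.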

# first, cluster with Louvain algorithm
pbmc_small = FindClusters(pbmc_small, resolution=0.8, verbose=FALSE)
#> Warning: Adding a command log without an assay associated with it
louvain.clustering = create_clustering(pbmc_small@meta.data$seurat_clusters)
DimPlot(pbmc_small, group.by='seurat_clusters')

# also cluster with PCA+k-means
pbmc_pca = Embeddings(pbmc_small, 'pca')
pbmc_small@meta.data$kmeans_clusters = kmeans(pbmc_pca, centers=3)$cluster
kmeans.clustering = create_clustering(pbmc_small@meta.data$kmeans_clusters)
DimPlot(pbmc_small, group.by='kmeans_clusters')

# where are the clustering results more similar?
pbmc_small@meta.data$ecs = element_sim_elscore(louvain.clustering,
                                               kmeans.clustering)
FeaturePlot(pbmc_small, 'ecs')

# the average ECS across all cells
mean(pbmc_small@meta.data$ecs)
#> [1] 0.6632927

Jaccard similarity of cluster markers

As a common step in computational single-cell RNA-seq analyses, discriminative marker genes are identified for each cluster and then used to infer the cell type of that cluster. Here, we compare the markers obtained under each clustering method to ask: how similarly would each cell be interpreted across clustering methods? We compare the markers per cell using the Jaccard similarity index (JSI) of the marker gene lists, defined as the size of the intersection divided by the size of the union. The higher the JSI, the more similar the marker genes for that cell.
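The Jaccard similarity itself is a one-liner in base R; the gene lists below are illustrative, not taken from the analysis:

```r
# Jaccard similarity of two marker-gene sets: |A intersect B| / |A union B|
jaccard <- function(a, b) length(intersect(a, b)) / length(union(a, b))

markers.method1 <- c("CD79A", "CD79B", "MS4A1", "LYZ")
markers.method2 <- c("CD79A", "CD79B", "MS4A1", "CD3E")
jaccard(markers.method1, markers.method2)  # 3 shared of 5 total -> 0.6
```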

# first, we calculate the markers on the Louvain clustering
Idents(pbmc_small) = pbmc_small@meta.data$seurat_clusters
louvain.markers = FindAllMarkers(pbmc_small, verbose=FALSE)

# then we get the markers on the k-means clustering
Idents(pbmc_small) = pbmc_small@meta.data$kmeans_clusters
kmeans.markers = FindAllMarkers(pbmc_small, verbose=FALSE)

# next, compare the top 10 markers per cell
pbmc_small@meta.data$marker.gene.jsi = marker_overlap(louvain.markers,
                                                      kmeans.markers)

# which cells have the same markers, regardless of clustering?
FeaturePlot(pbmc_small, 'marker.gene.jsi')

# the average marker-gene JSI across all cells
mean(pbmc_small@meta.data$marker.gene.jsi)
#> [1] 0.6352273

Element-wise frustration

How consistently are cells clustered by k-means? We will rerun k-means clustering 20 times to investigate.

clustering.list = list()
n.reps = 20
for (i in 1:n.reps){
  # we set nstart=1 and a fairly high iter.max - this should mean that
  # the algorithm converges, and that the variability in final clusterings
  # depends mainly on the random initial cluster assignments
  km.res = kmeans(pbmc_pca, centers=3, nstart=1, iter.max=1e3)$cluster
  clustering.list[[i]] = create_clustering(km.res)
}

# now, we calculate the element-wise frustration (i.e. consistency), which
# performs pair-wise comparisons between all 20 clusterings; the
# frustration is the average per-cell ECS across all comparisons. The higher
# the frustration, the more consistently that cell is clustered across
# random seeds.
pbmc_small@meta.data$frustration = element_frustration(clustering.list)

# which cells are clustered more consistently?
FeaturePlot(pbmc_small, 'frustration')

# the average frustration across all cells
mean(pbmc_small@meta.data$frustration)
#> [1] 0.6712922
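The averaging step behind element-wise frustration can be sketched as follows. ClustAssess uses ECS for the per-element scores; here a simple co-membership Jaccard score (a hypothetical stand-in, for illustration only) is averaged over all pairwise comparisons of a list of clusterings:

```r
# illustrative stand-in for a per-element similarity score
element_agree <- function(a, b) {
  sapply(seq_along(a), function(i) {
    mates.a <- which(a == a[i]); mates.b <- which(b == b[i])
    length(intersect(mates.a, mates.b)) / length(union(mates.a, mates.b))
  })
}

# three clusterings of the same four elements (labels are arbitrary:
# the first and third agree completely despite different label names)
clusterings <- list(c(1, 1, 2, 2), c(1, 1, 2, 3), c(2, 2, 1, 1))

# compare every pair of clusterings, then average per element
pairs <- combn(length(clusterings), 2)
scores <- rowMeans(apply(pairs, 2, function(p) {
  element_agree(clusterings[[p[1]]], clusterings[[p[2]]])
}))
scores  # elements 1-2 score 1; elements 3-4 score 2/3
```

Elements 3 and 4 are split differently by the second clustering, so their average agreement drops, mirroring how frustration highlights cells whose assignment varies across random seeds.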

Session info

#> R version 4.0.2 (2020-06-22)
#> Platform: x86_64-apple-darwin17.0 (64-bit)
#> Running under: macOS Catalina 10.15.7
#> Matrix products: default
#> BLAS:   /Library/Frameworks/R.framework/Versions/4.0/Resources/lib/libRblas.dylib
#> LAPACK: /Library/Frameworks/R.framework/Versions/4.0/Resources/lib/libRlapack.dylib
#> locale:
#> [1] C/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8
#> attached base packages:
#> [1] stats     graphics  grDevices utils     datasets  methods   base     
#> other attached packages:
#> [1] ClustAssess_0.1.1 Seurat_3.1.4     
#> loaded via a namespace (and not attached):
#>   [1] tsne_0.1-3          nlme_3.1-148        matrixStats_0.58.0 
#>   [4] RcppAnnoy_0.0.18    RColorBrewer_1.1-2  httr_1.4.2         
#>   [7] numDeriv_2016.8-1.1 sctransform_0.3.2   tools_4.0.2        
#>  [10] R6_2.4.1            irlba_2.3.3         KernSmooth_2.23-17 
#>  [13] uwot_0.1.10         lazyeval_0.2.2      BiocGenerics_0.34.0
#>  [16] colorspace_1.4-1    sn_1.6-2            gridExtra_2.3      
#>  [19] tidyselect_1.1.0    mnormt_2.0.2        compiler_4.0.2     
#>  [22] Biobase_2.50.0      TFisher_0.2.0       plotly_4.9.2.1     
#>  [25] sandwich_3.0-0      labeling_0.3        scales_1.1.1       
#>  [28] lmtest_0.9-38       mvtnorm_1.1-1       ggridges_0.5.3     
#>  [31] pbapply_1.4-3       stringr_1.4.0       digest_0.6.25      
#>  [34] rmarkdown_2.3       pkgconfig_2.0.3     htmltools_0.5.1.1  
#>  [37] parallelly_1.24.0   plotrix_3.8-1       htmlwidgets_1.5.1  
#>  [40] rlang_0.4.10        farver_2.0.3        generics_0.0.2     
#>  [43] zoo_1.8-9           jsonlite_1.7.1      ica_1.0-2          
#>  [46] dplyr_1.0.0         magrittr_1.5        patchwork_1.0.1    
#>  [49] Matrix_1.2-18       Rcpp_1.0.5          munsell_0.5.0      
#>  [52] ape_5.4-1           reticulate_1.18     lifecycle_0.2.0    
#>  [55] stringi_1.4.6       multcomp_1.4-16     yaml_2.2.1         
#>  [58] mathjaxr_1.4-0      MASS_7.3-51.6       Rtsne_0.15         
#>  [61] plyr_1.8.6          grid_4.0.2          parallel_4.0.2     
#>  [64] listenv_0.8.0       ggrepel_0.9.1       crayon_1.3.4       
#>  [67] lattice_0.20-41     cowplot_1.1.1       splines_4.0.2      
#>  [70] multtest_2.46.0     tmvnsim_1.0-2       knitr_1.29         
#>  [73] pillar_1.4.6        fastcluster_1.1.25  igraph_1.2.5       
#>  [76] reshape2_1.4.4      future.apply_1.7.0  codetools_0.2-16   
#>  [79] stats4_4.0.2        leiden_0.3.7        mutoss_0.1-12      
#>  [82] glue_1.4.1          evaluate_0.14       metap_1.4          
#>  [85] data.table_1.13.0   png_0.1-7           vctrs_0.3.1        
#>  [88] Rdpack_2.1.1        tidyr_1.1.0         gtable_0.3.0       
#>  [91] RANN_2.6.1          purrr_0.3.4         future_1.21.0      
#>  [94] ggplot2_3.3.2       xfun_0.15           rsvd_1.0.3         
#>  [97] rbibutils_2.0       viridisLite_0.3.0   survival_3.1-12    
#> [100] tibble_3.0.3        cluster_2.1.0       globals_0.14.0     
#> [103] fitdistrplus_1.1-3  TH.data_1.0-10      ellipsis_0.3.1     
#> [106] ROCR_1.0-11