Introduction

VarSelLCM performs full model selection (detection of the features relevant for clustering and selection of the number of clusters) in model-based clustering, according to classical information criteria (BIC, MICL or AIC).

The data to be analyzed can be composed of continuous, integer and/or categorical features. Moreover, missing values are handled by the clustering model itself, without any pre-processing, under the assumption that values are missing completely at random. Thus, VarSelLCM can also be used for data imputation via mixture models.

An R-Shiny application is implemented to easily interpret the clustering results.

Mixed-type data analysis

Clustering

This section performs the whole analysis of the Heart data set. Warning: the univariate marginal distributions are defined by the class of the features: numeric columns imply Gaussian distributions, integer columns imply Poisson distributions, while factor (or ordered) columns imply multinomial distributions (a quick check of the column classes is sketched after the data loading below).

library(VarSelLCM)

Attaching package: 'VarSelLCM'
The following object is masked from 'package:stats':

    predict
# Data loading:
# x contains the observed variables
# ztrue contains the known status (i.e., 1: absence and 2: presence of heart disease)
data(heart)
ztrue <- heart[,"Class"]
x <- heart[,-13]
# Add a missing value artificially (just to show that it works!)
x[1,1] <- NA
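
Since the modelling of each variable is driven by its R class, it can be worth checking the classes before clustering (a minimal sketch; a column can be converted beforehand with as.numeric(), as.integer() or factor() if needed):

# Check how each column will be modelled:
# numeric -> Gaussian, integer -> Poisson, factor/ordered -> multinomial
sapply(x, class)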

Clustering is performed with variable selection. Model selection is done with BIC because the number of observations is large compared to the number of features. The number of components ranges from 1 to 3. Do not hesitate to use parallelization (here only two cores are used).

# Cluster analysis without variable selection
res_without <- VarSelCluster(x, gvals = 1:3, vbleSelec = FALSE, crit.varsel = "BIC")

# Cluster analysis with variable selection (with parallelisation)
res_with <- VarSelCluster(x, gvals = 1:3, nbcores = 2, crit.varsel = "BIC")
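
As mentioned in the introduction, MICL is an alternative criterion for variable selection. A minimal sketch, assuming the same call signature (see ?VarSelCluster), if MICL were preferred, for instance when the number of features is large compared to the number of observations:

# Cluster analysis with variable selection under the MICL criterion
res_micl <- VarSelCluster(x, gvals = 1:3, nbcores = 2, crit.varsel = "MICL")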

Comparison of the BIC of both models: variable selection improves the BIC.

BIC(res_without)
[1] -6516.216
BIC(res_with)
[1] -6509.506

Comparison of the partition accuracy. The ARI is computed between the true partition (ztrue) and its estimators. The ARI is an index between 0 (the partitions are independent) and 1 (the partitions are equal). Variable selection improves the ARI. Note that the ARI cannot be used for model selection in clustering, because the true partition is unknown in practice.

ARI(ztrue, fitted(res_without))
[1] 0.2218655
ARI(ztrue, fitted(res_with))
[1] 0.2661321
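
As a quick sanity check on the bounds of this index (an illustrative sketch, not part of the original analysis):

# ARI of a partition with itself is 1; against a random relabelling it is close to 0
ARI(ztrue, ztrue)
ARI(ztrue, sample(ztrue))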

To obtain the partition and the probabilities of classification:

# Estimated partition
fitted(res_with)
  [1] 1 1 2 1 1 1 1 1 1 1 1 1 2 1 2 1 1 1 1 2 1 2 2 2 2 2 1 2 1 1 1 1 2 1 1
 [36] 1 1 1 2 2 2 2 2 2 1 2 1 2 1 1 1 2 2 2 2 2 1 1 1 1 2 1 2 2 1 1 2 2 2 2
 [71] 1 2 2 1 1 1 1 2 2 2 1 1 1 2 1 2 2 1 2 1 2 2 1 1 2 1 1 1 1 2 2 1 2 1 1
[106] 1 1 1 1 2 1 2 2 2 2 2 2 1 1 1 1 1 1 2 2 2 1 2 2 1 1 1 2 1 2 2 2 1 2 1
[141] 1 2 1 1 2 1 2 1 2 2 2 2 2 1 2 2 1 2 1 1 1 1 2 1 2 1 2 2 1 1 1 1 1 1 2
[176] 1 1 2 1 2 2 1 2 1 2 2 1 1 2 1 2 1 2 2 2 2 1 2 1 1 1 1 1 1 1 2 2 1 1 2
[211] 1 2 2 1 2 2 2 1 1 2 1 1 2 1 2 1 1 1 2 1 1 1 2 1 1 1 2 1 2 2 1 2 1 1 2
[246] 1 1 2 2 1 1 2 2 1 2 2 1 1 2 2 2 1 2 2 2 2 2 2 1 1
# Estimated probabilities of classification
head(fitted(res_with, type="probability"))
       class-1      class-2
[1,] 0.9999917 8.261142e-06
[2,] 0.6334778 3.665222e-01
[3,] 0.1755389 8.244611e-01
[4,] 1.0000000 4.442790e-08
[5,] 0.9961154 3.884586e-03
[6,] 0.9547853 4.521470e-02
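
The hard partition returned by fitted() corresponds to assigning each observation to its most probable class (the MAP rule). An illustrative check of this link:

# Should return TRUE: the partition is the row-wise argmax of the classification probabilities
all(fitted(res_with) == apply(fitted(res_with, type = "probability"), 1, which.max))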

To get a summary of the selected model.

# Summary of the best model
summary(res_with)
Model:
   Number of components: 2 
   Model selection has been performed according to the BIC  criterion 
   Variable selection has been performed, 8  ( 66.67 % ) of the variables are relevant for clustering 
   

Discriminative power of the variables (here, the most discriminative variable is MaxHeartRate). The greater this index, the more the variable distinguishes the clusters.

plot(res_with)

Distribution of the most discriminative variable per cluster

# Boxplot for the continuous variable MaxHeartRate
plot(x=res_with, y="MaxHeartRate")

Empirical and theoretical distributions of the most discriminative variable (to check that the distribution is well-fitted)

# Empirical and theoretical distributions (to check that the distribution is well-fitted)
plot(res_with, y="MaxHeartRate", type="cdf")

Distribution of a categorical variable per cluster

# Summary of categorical variable
plot(res_with, y="Sex")

For more details about the selected model

# More detailed output
print(res_with)
Data set:
   Number of individuals: 270 
   Number of continuous variables: 3 
   Number of count variables: 1 
   Percentile of missing values for the integer variables: 0.37 
   Number of categorical variables: 8 

Model:
   Number of components: 2 
   Model selection has been performed according to the BIC  criterion 
   Variable selection has been performed, 8  ( 66.67 % ) of the variables are relevant for clustering 
   
Information Criteria:
   loglike: -6403.136 
   AIC:     -6441.136 
   BIC:     -6509.506 
   ICL:     -6638.116 

To print the parameters

# Print model parameter
coef(res_with)
An object of class "VSLCMparam"
Slot "pi":
  class-1   class-2 
0.5221157 0.4778843 

Slot "paramContinuous":
An object of class "VSLCMparamContinuous"
Slot "pi":
numeric(0)

Slot "mu":
                   class-1  class-2
RestBloodPressure 131.3444 131.3444
SerumCholestoral  249.6593 249.6593
MaxHeartRate      135.4168 165.2587

Slot "sd":
                   class-1  class-2
RestBloodPressure 17.82850 17.82850
SerumCholestoral  51.59043 51.59043
MaxHeartRate      20.98142 13.14844


Slot "paramInteger":
An object of class "VSLCMparamInteger"
Slot "pi":
numeric(0)

Slot "lambda":
     class-1  class-2
Age 58.11335 50.32059


Slot "paramCategorical":
An object of class "VSLCMparamCategorical"
Slot "pi":
numeric(0)

Slot "alpha":
$Sex
                0         1
class-1 0.2358080 0.7641920
class-2 0.4166346 0.5833654

$ChestPainType
                 1          2         3         4
class-1 0.08922423 0.03291644 0.1738651 0.7039943
class-2 0.05752167 0.28954575 0.4223092 0.2306234

$FastingBloodSugar
                0         1
class-1 0.8518519 0.1481481
class-2 0.8518519 0.1481481

$ResElectrocardiographic
                0           1         2
class-1 0.4851852 0.007407407 0.5074074
class-2 0.4851852 0.007407407 0.5074074

$ExerciseInduced
                0          1
class-1 0.4484683 0.55153168
class-2 0.9128110 0.08718905

$Slope
                1         2          3
class-1 0.2266455 0.6884255 0.08492898
class-2 0.7599042 0.1933817 0.04671405

$MajorVessels
                0         1          2           3
class-1 0.4104443 0.2830463 0.17928515 0.127224313
class-2 0.7916000 0.1402681 0.05987773 0.008254211

$Thal
                3            6         7
class-1 0.3183115 9.931104e-02 0.5823775
class-2 0.8302586 1.624399e-11 0.1697414

Probabilities of classification for new observations

# Probabilities of classification for new observations 
predict(res_with, newdata = x[1:3,])
       class-1      class-2
[1,] 0.9999914 8.635241e-06
[2,] 0.6231352 3.768648e-01
[3,] 0.1692210 8.307790e-01

The model can be used for imputation (of the clustered data or of a new observation)

# Imputation of the missing value for the first observation (values drawn from the fitted model)
not.imputed <- x[1,]
imputed <- VarSelImputation(res_with, x[1,], method = "sampling")
rbind(not.imputed, imputed)
  Age Sex ChestPainType RestBloodPressure SerumCholestoral
1  NA   1             4               130              322
2  47   1             4               130              322
  FastingBloodSugar ResElectrocardiographic MaxHeartRate ExerciseInduced
1                 0                       2          109               0
2                 0                       2          109               0
  Slope MajorVessels Thal
1     2            3    3
2     2            3    3
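
The same call also works for an observation that was not used to fit the model. A minimal sketch, reusing the signature shown above on a copy of an existing row with an artificially removed value:

# Impute a "new" observation containing a missing value
newobs <- x[2,]
newobs[1, "MaxHeartRate"] <- NA
VarSelImputation(res_with, newobs, method = "sampling")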

Shiny application

All the results can be analyzed by the Shiny application…

# Start the shiny application
VarSelShiny(res_with)

… but this analysis can also be done directly in R.