Visual Methods for Examining SVM Classifiers


Doina Caragea, Dianne Cook, Hadley Wickham and Vasant Honavar

This chapter focuses on two topics studied empirically using graphics:


1) Ways to find the role different variables play in a classifier, to learn more about the data and go beyond predictive accuracy alone.

2) The variation in the support vector machine fit when the inputs change.



This web page contains color copies of the figures in the chapter and the R code that was used to generate these figures.


Reference: Caragea, D., Cook, D., Wickham, H., and Honavar, V. (2007). Visual Methods for Examining SVM Classifiers. Invited chapter in: Visual Data Mining: Theory, Techniques, and Tools for Visual Analytics, LNCS Volume 4404, Springer. To appear.



Fig. 1. Two projections from a tour of a 5-dimensional data set (variables V5, V6, V7, V8, V9), where the two groups are separable. The left plot shows a projection where the groups are well separated; the right plot shows a projection where they are not. The magnitude of a projection coefficient indicates the importance of the corresponding variable: the larger the coefficient in the direction of separation, the more important the variable.
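Tours like this are normally run interactively; as a rough stand-in in R, the tourr package (not part of the chapter's code) can animate random 2-D projections. The sketch below uses the flea data bundled with tourr in place of the chapter's data set:

# Minimal grand-tour sketch using the tourr package; the bundled flea
# data stands in for the chapter's 5-variable data set.
library(tourr)

d <- flea[, c("tars1", "tars2", "head", "aede1", "aede2")]

# Animate random 2-D projections, colored by class; pause at views
# where the groups separate.
animate_xy(d, grand_tour(), col = flea$species)

# The projection matrix of any frame gives the coefficients whose
# magnitudes indicate variable importance in that view.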

Fig. 2. SVM results for a 4-dimensional data set. (Left) A projection in which the linear separation found by the SVM can be seen. (Right) A grid over the data, colored according to the class colors. The grid points with predicted values close to 0 define a clean linear boundary between the two groups.
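A minimal sketch of this computation, assuming the e1071 R package (an interface to libsvm; the chapter's actual code may differ) and placeholder objects d (a data frame of the four predictors) and cls (the class factor):

library(e1071)

# Fit a linear SVM on unscaled data.
fit <- svm(d, cls, kernel = "linear", scale = FALSE)

# Lay a grid over the range of each variable.
grid <- expand.grid(lapply(d, function(v) seq(min(v), max(v), length.out = 10)))

# Decision values near 0 trace the separating hyperplane; the 0.05
# threshold is an arbitrary choice for illustration.
pred <- predict(fit, grid, decision.values = TRUE)
dv   <- as.vector(attr(pred, "decision.values"))
boundary <- grid[abs(dv) < 0.05, ]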

Fig. 3. A subset of the data (2/3 from each class) is used for training and the rest of the data is used for test. (Top left) A projection showing the separation for the training set. Support vectors are shown as larger solid circles. (Top Right) Bounded support vectors are also shown as open circles. (Bottom) The test set is shown with respect to the separating hyperplane found by the algorithm. We can see the errors. They belong entirely to one group (colored purple).
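A sketch of the split and the bookkeeping for support vectors and errors, under the same assumptions (e1071; d and cls as placeholders):

library(e1071)

set.seed(1)  # arbitrary seed, for reproducibility of the sketch
# Take 2/3 of each class for training.
train <- unlist(lapply(split(seq_along(cls), cls),
                       function(i) sample(i, ceiling(2/3 * length(i)))))
fit <- svm(d[train, ], cls[train], kernel = "linear", cost = 1, scale = FALSE)

sv <- train[fit$index]  # rows of the original data that are support vectors
# Bounded support vectors have coefficients at the cost bound C.
bounded <- sv[abs(fit$coefs) >= fit$cost - 1e-8]

# Test-set errors.
pred   <- predict(fit, d[-train, ])
errors <- which(pred != cls[-train])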

Fig. 5. Visualization of SVM results using three different subsets of the data, corresponding to three different sets of four variables. (Top left) Gene subset S1 = {V800, V403, V535, V596}. (Top right) Gene subset S2 = {V390, V389, V4, V596}. (Bottom left) Gene subset S3 = {V800, V721, V357, V596}. Subset S2 gives the largest separation margin, suggesting that it may be a better choice than S1 or S3.
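The margins can also be compared numerically: for a linear SVM the margin is 2 / ||w||, where w is the weighted sum of the support vectors. A sketch under the same assumptions (e1071; d holds the gene expression variables, cls the classes):

library(e1071)

# Margin of a linear SVM fit on a given variable subset: 2 / ||w||,
# where w = t(coefs) %*% SV in e1071's parameterization.
margin <- function(vars) {
  fit <- svm(d[, vars], cls, kernel = "linear", scale = FALSE)
  w <- t(fit$coefs) %*% fit$SV
  2 / sqrt(sum(w^2))
}

subsets <- list(S1 = c("V800", "V403", "V535", "V596"),
                S2 = c("V390", "V389", "V4",   "V596"),
                S3 = c("V800", "V721", "V357", "V596"))
sapply(subsets, margin)  # the largest margin points to the preferred subset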

Fig. 6. Variation of the margin with the cost C. Plots for C = 1, C = 0.7, C = 0.5, and C = 0.1 are shown, from top left to bottom right. As C decreases the margin widens, as seen from the growing distance between the support vectors (larger solid circles).
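A sketch of the refits behind these plots, again assuming e1071 and the placeholders d and cls, with the margin computed as 2 / ||w||:

library(e1071)

for (C in c(1, 0.7, 0.5, 0.1)) {
  fit <- svm(d, cls, kernel = "linear", cost = C, scale = FALSE)
  w <- t(fit$coefs) %*% fit$SV
  cat(sprintf("C = %.1f  margin = %.3f  support vectors = %d\n",
              C, 2 / sqrt(sum(w^2)), fit$tot.nSV))
}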

Fig. 7. Variation of the separating hyperplane under subsampling of the data. We find the projection that shows the linear separation for the first sample and keep it fixed for the subsequent views. There is some variation in the separating hyperplane from sample to sample; in some samples it rotates substantially away from that of the first sample, as seen from the thick band of grid points.
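The fixed-projection comparison needs an interactive tour tool, but the variation itself can be sketched numerically by comparing the unit normals of the hyperplanes fitted to random subsamples (same assumptions as above; the 9/10 fraction mirrors Fig. 8):

library(e1071)

set.seed(2)  # arbitrary seed for the sketch
normals <- t(replicate(5, {
  idx <- sample(nrow(d), round(0.9 * nrow(d)))
  fit <- svm(d[idx, ], cls[idx], kernel = "linear", scale = FALSE)
  w <- as.vector(t(fit$coefs) %*% fit$SV)
  w / sqrt(sum(w^2))  # unit normal of the separating hyperplane
}))
round(normals, 2)  # similar rows indicate a stable hyperplane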

Fig. 8. Two runs of the SVM on slightly different training sets (9/10 of the points are sampled for each run). The projection is kept fixed in the (Left) and (Middle) plots; a small rotation of the data then reveals the clear separation found by the second run.