>However, to control the sensitivity and specificity of the SVM, I
>apply an upper bound only on the (+1) class (C+) without any upper
>bound on the (-1) class,...
>But, I do not know why the SVM always gives me 100% Sensitivity and
>0% Specificity.

Alizera,

What you are doing by setting C+ finite and C- infinite is solving
the following optimization:

(*)  min 0.5 ||w||^2 + C+ sum(xi+)

  where xi+ are the slack variables for the +ve training examples.

This yields the standard SVM dual with the box constraints:
0 <= alpha_i <= C+    for +ve's   (B1)
0 <= alpha_i          for -ve's   (B2)

i.e., the -ve's have infinite weight, thus enforcing FP = 0.
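
For concreteness, here is a minimal sketch of the asymmetric-cost
setup, assuming scikit-learn rather than any of the toolboxes
discussed here (in that library the effective per-class bound is
C * class_weight[k], so the weights below stand in for C+ and C-):

  # Minimal sketch of class-dependent costs, assuming scikit-learn.
  # The effective bound on alpha_i is C * class_weight[y_i]; a very
  # large weight on the -1 class approximates C- -> infinity.
  import numpy as np
  from sklearn.svm import SVC
  from sklearn.metrics import confusion_matrix

  rng = np.random.default_rng(0)
  X = np.vstack([rng.normal(-1.0, 1.0, (50, 2)),   # -1 class
                 rng.normal(+1.0, 1.0, (50, 2))])  # +1 class
  y = np.array([-1] * 50 + [+1] * 50)

  clf = SVC(kernel="linear", C=1.0, class_weight={+1: 1.0, -1: 1e4})
  clf.fit(X, y)

  tn, fp, fn, tp = confusion_matrix(y, clf.predict(X),
                                    labels=[-1, +1]).ravel()
  print(f"sens={tp / (tp + fn):.2f}  spec={tn / (tn + fp):.2f}")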

However, the SVM originally came in two flavours, separable and
non-separable, so this mixed combination may throw some SVM
packages. Using my own toolbox I obtained the same results as you,
i.e. FP=N, FN=0, which contradicts what is expected.

The reason is that the objective function is actually:

(**)  min 0.5 ||w||^2 + C+ sum(xi+) + C- sum(xi-)

and this is reduced to (*) by implicitly setting C- = 0 somewhere
deep in the code.

This results in enforcing:
0 <= alpha_i <= 0     for -ve's   (B2')

With every -ve alpha pinned at zero, the -ve examples can never act
as support vectors, so all points are predicted +ve, which is what
you observed and what I reproduced.
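
For what it's worth, the pathology is easy to reproduce elsewhere:
again assuming scikit-learn, driving the -ve class weight towards
zero mimics the implicit C- = 0 and should give the all-+ve
classifier:

  # Toy reproduction of the implicit C- = 0 pathology, assuming
  # scikit-learn.  A near-zero weight on the -1 class pins the -ve
  # alphas into 0 <= alpha_i <= ~0, i.e. (B2').
  import numpy as np
  from sklearn.svm import SVC

  rng = np.random.default_rng(0)
  X = np.vstack([rng.normal(-1.0, 1.0, (50, 2)),   # -1 class
                 rng.normal(+1.0, 1.0, (50, 2))])  # +1 class
  y = np.array([-1] * 50 + [+1] * 50)

  broken = SVC(kernel="linear", C=1.0,
               class_weight={+1: 1.0, -1: 1e-6})
  broken.fit(X, y)

  # Expect a fraction of 1.0, i.e. FP=N, FN=0.
  print((broken.predict(X) == +1).mean())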

The solution, therefore, is to ensure that the bound (B2) is
enforced explicitly in the code, not indirectly via a cost ratio or
some other quantity.
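
If you control the QP layer yourself, a hypothetical sketch of what
"explicit" means (the helper name and interface below are my own,
not from any existing package): hand the per-class box constraints
straight to the solver.

  # Hypothetical helper: build the dual box constraints per class so
  # that (B1) and (B2) reach the QP solver directly, rather than
  # being derived from a cost ratio deep in the code.
  import numpy as np

  def dual_bounds(y, c_plus, c_minus=np.inf):
      """Lower/upper bounds on the dual variables alpha_i.

      y       : labels in {+1, -1}
      c_plus  : finite C+ for the +1 class             -> (B1)
      c_minus : C- for the -1 class; the default
                np.inf leaves the -ve alphas unbounded -> (B2)
      """
      upper = np.where(y == +1, c_plus, c_minus).astype(float)
      lower = np.zeros_like(upper)
      return lower, upper

  # e.g.  lower, upper = dual_bounds(y, c_plus=1.0)  # C- infinite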

Rgds

Robert

PS. See also:

K. Veropoulos, C. Campbell, and N. Cristianini, 'Controlling the
    Sensitivity of Support Vector Machines'

although they somehow report that FP increases as C+ increases,
which contradicts the above.