I agree with everything Ingo states below (sorry for my mistake in explaining the number of SVs during the vigorous handwaving explanation I advanced).

All I meant was that there is no universal best algorithm, as Ingo also states below.

Balaji


-----Original Message-----
From: Ingo Steinwart <[log in to unmask]>
To: [log in to unmask]
Date: Thu, 17 Feb 2005 14:51:31 -0700
Subject: Re: svm and curse of dimensionality

Hi all,

here are just a few comments:

to answers 1,2:
the fraction of the training samples that are support vectors
tends to TWICE the minimum achievable error rate, i.e. the error
of an optimal (Bayes) classifier. And even this only holds if you
use, e.g., a Gaussian kernel and the hinge loss, where you simply
sum up the slacks. Otherwise you should, in general, expect even
more support vectors.
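A minimal sketch of how one might check this empirically, assuming
scikit-learn's SVC (a hinge-loss SVM with a Gaussian/RBF kernel) and
synthetic data whose Bayes risk is set by a known label-flip rate.
The hyperparameters C and gamma are fixed here for simplicity, so the
SV fraction should only roughly track 2x the Bayes risk:

# Illustrative sketch, not from the original thread. The dataset,
# noise level, and hyperparameters below are all assumptions.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
bayes_risk = 0.10  # labels flipped with this probability -> minimum achievable error

for n in [500, 2000, 8000]:
    X = rng.uniform(-1.0, 1.0, size=(n, 2))
    y = (X[:, 0] > 0).astype(int)      # noise-free labels: sign of first coordinate
    flip = rng.random(n) < bayes_risk  # inject label noise equal to the Bayes risk
    y[flip] = 1 - y[flip]
    clf = SVC(kernel="rbf", C=1.0, gamma=1.0).fit(X, y)
    sv_fraction = len(clf.support_) / n
    print(f"n={n:5d}  SV fraction={sv_fraction:.3f}  (2 * Bayes risk = {2 * bayes_risk:.3f})")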

to answer 3.c:
It is well known that there exists no universally best classifier
(a no-free-lunch result), and consequently, neither SVMs nor any
other method can be a best classifier. See the book of Devroye,
Györfi, and Lugosi, "A Probabilistic Theory of Pattern Recognition".

ingo