Hey,

Although intuitively correct, that may not always be the case: when
introducing extra features you are also introducing 'noise'. Applying
feature selection prior to the SVM has, in a number of cases, been shown
to give better performance than training on the original full feature set.
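
For what it's worth, here is a small sketch of the kind of comparison I
have in mind (this assumes scikit-learn is available; the synthetic data,
the RBF kernel and the choice of 20 selected features are purely
illustrative, not a recipe):

# Compare an SVM trained on all features with one trained after
# univariate feature selection; most of the 500 features here are noise.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=500,
                           n_informative=20, n_redundant=0,
                           random_state=0)

svm_all = make_pipeline(StandardScaler(), SVC(kernel='rbf'))
svm_selected = make_pipeline(StandardScaler(),
                             SelectKBest(f_classif, k=20),
                             SVC(kernel='rbf'))

print("all features:     ", cross_val_score(svm_all, X, y, cv=5).mean())
print("selected features:", cross_val_score(svm_selected, X, y, cv=5).mean())

Whether the selection step actually helps will of course depend on the
data at hand.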

Best,
David
----------------------------------------------------------------------
"Who dares... wins"

David R. Hardoon                        [log in to unmask]

Image, Speech, and Intelligent Systems Research Group
School of Electronics & Computer Science
University of Southampton

Office: +44 23 8059 7697                Fax: +44 23 8059 4498
Mobile: +44 79 6763 4954        http://www.ecs.soton.ac.uk/~drh/

On 17 Feb 2005, at 08:53, Ajay Mathur wrote:

> Hi,
> The more features you have, the sparser the data becomes, which can make
> classification easier. SVMs work in high-dimensional spaces, so this
> should not, in itself, be a problem.
> The only problem I perceive is the time needed for the analysis.
> As regards the discussion on kernel type, I found that the RBF kernel
> takes much less time than the polynomial kernel.
> Best wishes
> Ajay Mathur
> University of Southampton
> U.K
>
>
>
> Quoting Monika Ray <[log in to unmask]>:
>
>> Hello,
>>
>> SVM is known to defeat the curse of dimensionality... so why is it that
>> having a large number of attributes/features in the data is undesirable
>> when using an SVM?
>>
>> I thought it should not matter... where am I getting confused?
>>
>> Thank You.
>>
>> Sincerely,
>> Monika Ray
>>
>> ***********************************************************************
>> The sweetest songs are those that tell of our saddest thought...
>>
>> Computational Intelligence Centre, Washington University, St. Louis, MO
>> **********************************************************************
>>
>
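
P.S. Regarding the kernel timing comparison Ajay mentions above, a rough
sketch of how one might reproduce it (again assuming scikit-learn; the
data size and the default kernel parameters are illustrative, and actual
timings will depend heavily on the problem):

# Time fitting an SVM with an RBF kernel versus a polynomial kernel
# on the same synthetic data.
import time

from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=2000, n_features=100, random_state=0)

for kernel in ('rbf', 'poly'):
    clf = SVC(kernel=kernel)
    start = time.perf_counter()
    clf.fit(X, y)
    print('%-4s kernel fit time: %.2f s'
          % (kernel, time.perf_counter() - start))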