I am not answering your question about MMX, but have you tried
SVMTorch II at http://www.idiap.ch/learning/SVMTorch.html
(which is part of the Torch library, http://www.torch.ch/ )?
I do not know how it compares to libsvm,
but I have found it to be an efficient implementation.
- Kari
Martin Schulze wrote:
> Hello all,
>
> I wanted to know whether anyone knows if small assembler routines
> using the multimedia extensions MMX or SSE can be
> applied to compute dot products in the SVM kernel.
>
> I mean in an efficient way?
> (Only a few words can be computed at once,
> so there might be overhead from loading the registers over and over,
> especially for large feature vectors...)
>
> For instance, in the linear kernel, only dot products are computed.
>
> Hence, I do not know exactly how much time is spent in the dot product
> computation compared to the whole problem solving,
> but it should be quite a lot...
>
> I have libsvm, and it takes me about a day to train on a training set
> of 7 megabytes in size.
> This is the linear and polynomial case.
> The RBF kernel works fast, but produces a large number of support vectors.
>
> But I want fast prediction, since my classification should run in real time.
>
> So I wanted to speed up training in order to use the linear and polynomial kernels.
>
> Is there an SVM implementation out there that does this fast?
> Or does someone know functions usable from C++ (assembler or a library)
> that compute dot products of sparse feature vectors,
> so I could put them into libsvm?
>
> thanks
>
> Martin