Hello everyone,
I wanted to know whether small assembler routines using the multimedia
extensions MMX or SSE can be applied to compute the dot products in the
SVM kernel, and whether that can be done efficiently.
(Only a few words can be processed at once, so there might be significant
overhead from loading the registers over and over, especially for large
feature vectors.)
For instance, in the linear kernel only dot products are computed.
I do not know exactly how much time is spent on the dot product
computation compared to solving the whole problem,
but it should be quite a lot...
I have libsvm, and it takes me about a day to compute a training set
of about 7 megabytes,
in the linear and polynomial case.
The RBF kernel works fast, but produces a large number of support vectors.
But I want fast prediction, since my classification should run in real time.
So I wanted to speed up the training in order to use the linear and polynomial kernels.
Is there an SVM implementation out there that does this fast?
Or does someone know functions to use from C++ (assembler or a library)
that compute dot products of sparse feature vectors,
so I could plug them into libsvm?
Thanks,
Martin