Aleksandar Donev wrote:
> john g lewis wrote:
>
> > Could you describe your problem more specifically?
>
> Thank you for the help. Here is what I know:
>
> > In particular, you want "a few (chosen
> > by the user)" eigenvalues. Where in the spectrum of your matrix do
> > the ones you want lie? How many is "a few"?
>
> Our matrix is a positive semidefinite matrix, with exactly one zero
> eigenvalue with a known eigenvector (the all-ones vector [1,1,...,1]).
> The rest of the eigenvalues are positive and most likely simple, but I
> cannot tell you much else, since to find this out I need to run specific
> experiments for which I already need the Lanczos method.
> For starters, say I need only the smallest non-zero eigenvalue. Later
> on, at most, I would need the four smallest eigenvalues and their
> associated eigenvectors. High precision is not required; only
> approximate eigenvectors are needed.
>
> > What is the relative
> > separation of these eigenvalues? What are the relative separation of the
> > eigenvalues at each end of the spectrum? This information is needed to
> > choose among the various
> > algorithms.
>
> Nobody really knows this. The specific matrix in this case is the
> Laplacian matrix of a graph, and the problem is to partition the graph
> for distributed-memory applications. I presume that the spectrum has
> nice properties, i.e. the first few eigenvalues I am after are simple
> and well separated, but I am not certain.
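(For concreteness, a graph Laplacian and its all-ones null vector can be checked
in a few lines of Python. The 4-vertex path graph below is a hypothetical
stand-in for illustration only, not the poster's actual matrix.)

```python
# Hypothetical 4-vertex path graph 0-1-2-3 (illustration, not the real problem).
edges = [(0, 1), (1, 2), (2, 3)]
n = 4

# Laplacian L = D - A: vertex degrees on the diagonal, -1 for each edge.
L = [[0] * n for _ in range(n)]
for i, j in edges:
    L[i][i] += 1
    L[j][j] += 1
    L[i][j] -= 1
    L[j][i] -= 1

# The all-ones vector is an eigenvector of L with eigenvalue 0,
# because each row of L sums to zero.
ones = [1] * n
Lv = [sum(L[i][j] * ones[j] for j in range(n)) for i in range(n)]
print(Lv)  # [0, 0, 0, 0]
```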
Actually, we do know about the separation of the eigenvalues in the
laplacian matrix. I'm very glad that you mentioned your application.
I'm going to give you a strong negative answer, which I hope saves
you a lot of wasted time.
The low eigenvalues of the Laplacian matrix are typically not well
separated. This means it takes many iterations of the Lanczos algorithm
to get any sort of accuracy. There is a way to transform the problem so
that the desired eigenvalues become well separated, but it is almost
certainly too expensive. That is, one can, in theory, shift the spectrum
by working with L + alpha*I, where alpha is positive, thereby giving a
positive definite matrix -- which one then "inverts" (actually factors).
The desired eigenvalues of this shifted-and-inverted problem are then
very well separated and converge very quickly.
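The shift-and-invert idea can be sketched as follows. This is an illustrative
toy, not a production eigensolver: a dense Gaussian elimination stands in for
the sparse factorization of L + alpha*I, the 4-vertex path graph is
hypothetical, and the known all-ones eigenvector is projected out so the
iteration converges to the smallest NON-zero eigenvalue of L.

```python
import math

# Hypothetical 4-vertex path-graph Laplacian (illustration only).
n, edges = 4, [(0, 1), (1, 2), (2, 3)]
L = [[0.0] * n for _ in range(n)]
for i, j in edges:
    L[i][i] += 1; L[j][j] += 1; L[i][j] -= 1; L[j][i] -= 1

# Shift: M = L + alpha*I is positive definite for alpha > 0.
alpha = 1.0
M = [[L[i][j] + (alpha if i == j else 0.0) for j in range(n)] for i in range(n)]

def solve(A, b):
    # Dense Gaussian elimination with partial pivoting -- a stand-in for
    # factoring the sparse matrix L + alpha*I once and reusing the factors.
    A = [row[:] for row in A]; b = b[:]
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(A[r][k]))
        A[k], A[p] = A[p], A[k]; b[k], b[p] = b[p], b[k]
        for r in range(k + 1, n):
            f = A[r][k] / A[k][k]
            for c in range(k, n):
                A[r][c] -= f * A[k][c]
            b[r] -= f * b[k]
    x = [0.0] * n
    for k in range(n - 1, -1, -1):
        x[k] = (b[k] - sum(A[k][c] * x[c] for c in range(k + 1, n))) / A[k][k]
    return x

# Inverse iteration on M: eigenvalues of M^{-1} are 1/(lambda + alpha), so the
# low end of L's spectrum becomes the well-separated high end of M^{-1}'s.
v = [1.0, 2.0, -1.0, 0.5]
for _ in range(50):
    m = sum(v) / n
    v = [x - m for x in v]          # deflate the known all-ones eigenvector
    v = solve(M, v)
    nrm = math.sqrt(sum(x * x for x in v))
    v = [x / nrm for x in v]

Lv = [sum(L[i][j] * v[j] for j in range(n)) for i in range(n)]
fiedler = sum(Lv[i] * v[i] for i in range(n))   # Rayleigh quotient with L
print(round(fiedler, 6))  # 0.585786, i.e. 2 - sqrt(2) for the path graph P4
```

The expense John describes lives entirely in `solve`: for a real sparse
Laplacian that step is a sparse factorization, which is exactly what the graph
partitioning was supposed to enable in the first place.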
This probably begs the question -- the usual reason for worrying about
distributed-memory data distribution is, eventually, to solve sparse
linear systems. We wouldn't worry about data distribution if we could
already factor the sparse matrix, so it is completely unreasonable to
think of doing the transformation described above.
Your application MIGHT be different. If it's reasonable to dream of
factoring this sparse matrix, I can arrange for you to get an academic
(object code) license for the Boeing Sparse Matrix Package BCSLIB-EXT,
which would do this all rather automatically for you. But I suspect that
factoring L + alpha*I is not a reasonable thing to do.
In that case, I can still save you lots of time and work. The fact that
the Lanczos algorithm, in any of its instantiations, doesn't work well
for the low eigenvectors of the unshifted Laplacian matrix
led several groups to develop multilevel graph partitioning algorithms.
Bruce Hendrickson at Sandia Labs has a long series of papers that
will take you through the long and painful history. That history
eventually led to two good pieces of software for graph partitioning
and sparse matrix reordering: the proprietary Extreme ordering package
from SGI (good if you're running on SGI) and the public-domain METIS
package available from the University of Minnesota (George Karypis in
the Computer Science Dept.). I have no interest in either package, and
I recommend them both. METIS has more graph partitioning capabilities
than Extreme does. The one difficulty with both is that they are written
in C, not F90, but that can be overcome in the usual multilingual ways.
Best regards,
John Lewis