After coding our Metropolis-Hastings algorithm and testing it on some
simple linear regression problems, we have validated that it works, at
least for problems of that sort.
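For reference, the kind of random-walk Metropolis-Hastings sampler we
validated can be sketched in a few lines. This is a generic illustration
only (the function names and the toy standard-normal target are mine, not
our production code):

```python
import numpy as np

def metropolis_hastings(log_post, init, n_iter=5000, step=0.5, seed=0):
    """Random-walk Metropolis sampler with a symmetric Gaussian proposal."""
    rng = np.random.default_rng(seed)
    x = np.asarray(init, dtype=float)
    lp = log_post(x)
    samples = np.empty((n_iter, x.size))
    accepted = 0
    for t in range(n_iter):
        prop = x + step * rng.standard_normal(x.size)
        lp_prop = log_post(prop)
        # Symmetric proposal, so accept with probability
        # min(1, posterior ratio); work on the log scale for stability.
        if np.log(rng.uniform()) < lp_prop - lp:
            x, lp = prop, lp_prop
            accepted += 1
        samples[t] = x
    return samples, accepted / n_iter

# Toy check: sample a 1-d standard normal target.
samples, acc = metropolis_hastings(lambda x: -0.5 * np.sum(x**2),
                                   init=[0.0])
```

The same skeleton applies to regression targets; only `log_post` changes.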

My earlier post had to do with a complex spatial model in which
autoregressive terms with dnorm( ) priors had to be iteratively updated.
The iterative nature of the problem comes from the fact that neighboring
lattice grid squares define the mean of the autoregressive term in the
current grid square.  I have pasted another copy of our code at the end of
this message.

In this formulation, we essentially had 2184 + 8 total parameters, subject
to the autoregressive constraints, which effectively reduce the dimension
of the problem somewhat.  That is to say, there were 2184 autoregressive
terms, unique to each grid cell but stochastically dependent on
neighboring autoregressive terms, and 8 model parameters.  We have 2434
total data points.

Therefore, the space that we intend to span using MCMC methods is of VERY
high dimension, even if we do have an estimable model.  Asking MCMC to
adequately cover these dimensions is incredible!  With this many
dimensions, there is no possible way to generate enough points from *any*
candidate function to properly span the parameter space, unless there is a
VERY smart and efficient way of choosing them within the interesting
portions of parameter space.  This leads me to my questions:

(1) Does anyone know of some *really* good references on how to implement
an adaptive phase within the Metropolis-Hastings algorithm?  WinBUGS needs
4500 iterations to properly calibrate its Metropolis algorithm whenever we
fit this model (I am *assuming* that it is using a Metropolis-Hastings
algorithm... but, having been wrong before, I may be making an erroneous
assumption).  Since our primary goal is to validate the answers that
WinBUGS is giving us, having some idea of what WinBUGS is doing when it
goes into its "adaptive" mode of sampling would be helpful.
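For what it's worth, one common way to implement an adaptive phase (I do
not know whether it is what WinBUGS actually does) is to tune the proposal
standard deviation toward a target acceptance rate during burn-in and then
freeze it.  A hedged sketch, where the function name, the 0.44 target, and
the Robbins-Monro decay schedule are my own choices:

```python
import numpy as np

def adaptive_metropolis(log_post, init, n_adapt=4000, n_iter=4000,
                        target=0.44, seed=0):
    """1-d random-walk Metropolis with a Robbins-Monro adaptation of the
    proposal standard deviation during a burn-in ("adaptive") phase."""
    rng = np.random.default_rng(seed)
    x = float(init)
    lp = log_post(x)
    log_step = 0.0  # proposal SD = exp(log_step)
    samples = []
    for t in range(n_adapt + n_iter):
        prop = x + np.exp(log_step) * rng.standard_normal()
        lp_prop = log_post(prop)
        acc_prob = np.exp(min(0.0, lp_prop - lp))
        if rng.uniform() < acc_prob:
            x, lp = prop, lp_prop
        if t < n_adapt:
            # Nudge the step size so acceptance drifts toward `target`;
            # the decaying gain freezes adaptation, keeping the chain valid.
            log_step += (acc_prob - target) / (t + 1) ** 0.6
        else:
            samples.append(x)
    return np.array(samples), np.exp(log_step)
```

After burn-in the tuned step stays fixed, so the post-adaptation draws
come from an ordinary Metropolis chain.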

(2) Our understanding of the "tuning" that is possible during an MCMC run
has to do with (a) the size of the steps that the sampler makes when it is
drawing values from the candidate function, (b) the shape of the candidate
function itself, and (c) the decision rule that tells the MCMC algorithm
whether or not to keep a drawn value.  Is our understanding correct?

(3) Are there alternative MCMC methods to Metropolis-Hastings that sample
more efficiently for very high dimensional problems?  Gibbs won't work
because it's impossible to write down the full conditionals....
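One middle ground that does not require closed-form full conditionals is
single-site ("Metropolis-within-Gibbs") updating: cycle through the
parameters and give each coordinate its own random-walk Metropolis step,
evaluating only the joint log posterior.  I believe this is close in
spirit to how WinBUGS handles non-conjugate nodes, but the sketch below is
a generic illustration, not WinBUGS's actual scheme:

```python
import numpy as np

def metropolis_within_gibbs(log_post, init, n_iter=3000, step=1.0, seed=0):
    """Update one coordinate at a time with a random-walk Metropolis step.
    Only the joint log posterior is needed -- no closed-form full
    conditionals, because the conditional density ratio for coordinate j
    equals the joint density ratio."""
    rng = np.random.default_rng(seed)
    x = np.asarray(init, dtype=float).copy()
    lp = log_post(x)
    samples = np.empty((n_iter, x.size))
    for t in range(n_iter):
        for j in range(x.size):
            old = x[j]
            x[j] = old + step * rng.standard_normal()
            lp_prop = log_post(x)
            if np.log(rng.uniform()) < lp_prop - lp:
                lp = lp_prop      # accept the new coordinate value
            else:
                x[j] = old        # reject: restore the old value
        samples[t] = x
    return samples
```

In a lattice model the per-coordinate evaluation can also be made cheap,
since changing one e[i] only affects its own neighborhood.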

(4) Because our model conforms generally to the SG model presented in Noel
Cressie's textbook ("Statistics for Spatial Data", Chapter 6, page 406,
equation 6.3.9), we believe it to be estimable.  Furthermore, there are
more data points than parameter dimensions.  My question: at what point
does the dimensionality of a Bayesian problem become so great that, even
if the model is theoretically estimable, the computational aspects prevent
a solution from being found?  I ask this because it would be helpful to
have some sort of guideline for knowing whether or not a Bayesian model
should even be attempted.

Any input would be greatly appreciated!!

 David Paul, Ph.D.
  Battelle Memorial Institute
  505 King Avenue
  Columbus, OH  43201
  614.424.3176

model
{

#First, we define the errors in the SAR model structure for the
#data.  The term 'SAR' is used by Ripley in his Spatial Statistics textbook
#and denotes a 'simultaneous autoregressive' model.  The implementation
#here is slightly different from that of Ripley and Cressie.  The
#autoregressive terms are coded very similarly to that found in Peter
#Congdon's new book on Bayesian Statistical Modelling.  The application
#in question deals with a regular lattice, which makes a SAR model
#attractive.

for(i in 1:Num.Total.LatticeGrids)
{

e[i] ~ dnorm(e.bar[i],tau.Error)

#The following line specifies that the mean of the autoregressive "errors"
#of all neighboring sites should be multiplied by rho.  This quantity
#forms the mean of the distribution of the autoregressive "error" at the
#current Lattice Grid Cell.

e.bar[i] <- rho*mean( e.neigh[
(cumulative.neighbor.number[i]+1):cumulative.neighbor.number[i+1] ] )

}

#This loop causes WinBUGS to associate the numbering of the spatial
#autoregressive "errors" with the numbering of the Grid Cells in
#the Regular Lattice.

for(m in 1:Num.Total.Neighbors)
{
        e.neigh[m] <- e[neighbor[m]]
}

#Now we define the likelihood of the data.  We are going to impose a SAR
#structure on the Predictive errors.

for(i in 1:Num.Total.Grids)
{

#X[i,1] represents the latitude of an observation and X[i,2] represents the
#longitude of an observation, so mu.Trend is a simple trend surface with a
#spatial autoregressive term added to the end.

mu.Trend[i]<-Beta[1] + Beta[2]*X[i,1] + Beta[3]*X[i,2] + e[i]

#There are two sets of data that we are using to obtain estimates of the
#posterior spatial surface.  The variable "cumulative.observations"
#is an index for the total number of observations within a Grid Cell on
#the Regular Lattice.  It is assumed that one of the data sets is biased
#upwards from the truth, so that mu.Trend needs to be appended with a
#constant
#term, a.  The function equals( ) is designed to decide if a particular
#observation belongs to the data set needing the adjustment.  It is also
#assumed
#that the precision associated with each data set is different, so that tau
#changes as well.

for(j in (cumulative.observations[i]+1):cumulative.observations[i+1])
{

logic[j] <- a*equals(indicator[j],2)
mu.Observation[j] <- logic[j] + mu.Trend[i]

Y.Combined[j] ~ dnorm(mu.Observation[j],tau[indicator[j]])

}

}

#PRIORS, not including the autoregressive terms - they are defined above

tau.Error ~ dgamma(0.0001,0.0001)

for(i in 1:2)
{
tau[i] ~ dgamma(0.0001,0.0001)
}

Beta[1:3] ~ dmnorm(Beta.Mean[] , Beta.R[,])
Beta.R[1:3,1:3] ~ dwish(Beta.Omega[,] , 3)

a ~ dnorm(0.0 , 0.001)

rho ~ dunif(0,1)
}
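In case it helps anyone reading the model, the cumulative-index
bookkeeping behind e.bar[i] can be mirrored outside BUGS like this (a toy
2x2 lattice with rook neighbours; the array names echo the model's
neighbor[] and cumulative.neighbor.number[] data vectors, 0-based here
rather than 1-based):

```python
import numpy as np

# Toy 2x2 lattice, cells 0..3, rook neighbours.  `neighbor` is the flat
# list of neighbour indices for each cell in turn, and `cum` gives the
# cumulative neighbour counts, so cell i's neighbours occupy
# neighbor[cum[i]:cum[i+1]].
neighbor = np.array([1, 2,   0, 3,   0, 3,   1, 2])
cum = np.array([0, 2, 4, 6, 8])
rho = 0.5
e = np.array([1.0, 2.0, 3.0, 4.0])  # current autoregressive terms

# e.bar[i] <- rho * mean(e[neighbours of i]), as in the model
e_bar = np.array([rho * e[neighbor[cum[i]:cum[i + 1]]].mean()
                  for i in range(len(e))])
```

This makes it easy to sanity-check the neighbour data vectors before
handing them to WinBUGS.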

-------------------------------------------------------------------
This list is for discussion of modelling issues and the BUGS software.