Hi Duncan,
Information about tau comes from the over-dispersion of r[] relative to a
"pooled" model which assumed that p[i] is the same for all i. If there
is a lot of over-dispersion in the data and/or N is large, then there
is sufficient information to estimate tau, and there is little prior
sensitivity. Conversely, if there is very little over-dispersion and/or
N is small then there is little information about tau, and you need to
worry about your choice of prior.
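To make the over-dispersion point concrete, here is a quick simulation sketch (all numbers are made up for illustration, not taken from the SURGICAL data): under a pooled model the observed rates r/n vary only by binomial sampling error, while under a hierarchical model the extra spread beyond that is exactly what informs tau.

```python
import numpy as np

rng = np.random.default_rng(0)
N, n = 200, 200          # hypothetical: 200 units, 200 trials each
p0 = 0.08                # hypothetical common failure rate

# Pooled model: every unit shares the same p, so r/n varies only by
# binomial sampling error.
r_pooled = rng.binomial(n, p0, size=N)

# Hierarchical model: each unit gets its own p via a normal on the
# logit scale (sd 0.5 here, i.e. a moderate amount of heterogeneity).
b = rng.normal(np.log(p0 / (1 - p0)), 0.5, size=N)
p = 1.0 / (1.0 + np.exp(-b))
r_hier = rng.binomial(n, p)

# The spread of the observed rates beyond the binomial variance
# p0*(1-p0)/n is the over-dispersion that carries information about tau.
binom_var = p0 * (1 - p0) / n
print("binomial variance :", binom_var)
print("pooled   var(r/n) :", np.var(r_pooled / n, ddof=1))
print("hierarch var(r/n) :", np.var(r_hier / n, ddof=1))
```

With these (invented) settings the hierarchical rates are several times more dispersed than the binomial variance alone would allow, and that excess is what the likelihood can trade off against tau.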
My guess is that the SURGICAL data falls into the first case, and in fact
there is little prior sensitivity. The fact that you see differences
in the posterior distribution of tau is probably due to a poor choice
of scale. My experience is that sigma (the standard deviation) gives
the best scale for "eyeballing" the posterior distribution. If you want
to do something more formal, I have written a paper on this topic which
you may find useful. Basically it concerns a diagnostic proposed by
McCulloch (1989) which is very useful for diagnosing prior sensitivity,
but which seems to have been overlooked. I also look briefly at the
"uniform shrinkage" prior proposed by Natajaran and Kass (2000) for
parameters such as tau, and which seems to give good overall performance
in the linear model.
You can pick up a copy here:
http://calvin.iarc.fr/~martyn/papers/sensitivity.ps
The paper is submitted to JRSS(B). Comments are welcome.
Martyn
On 05-Mar-2002 Duncan Murdoch wrote:
> In the WinBUGS example "Surgical Institutional Ranking", the random
> effects model looks like this:
>
> model
> {
>    for( i in 1 : N ) {
>       b[i] ~ dnorm(mu,tau)
>       r[i] ~ dbin(p[i],n[i])
>       logit(p[i]) <- b[i]
>    }
>    pop.mean <- exp(mu) / (1 + exp(mu))
>    mu ~ dnorm(0.0,1.0E-6)
>    sigma <- 1 / sqrt(tau)
>    tau ~ dgamma(0.001,0.001)
> }
>
> This is a hierarchical model with binomial responses, a normal prior on
> the logits of the response rates for each institution, and a diffuse
> (but proper) hyperprior on the parameters of the normal.
>
> If we actually used an improper prior on the normal precision, would
> this lead to an improper posterior? I've read that it does with a
> normal mixture model, so I'd guess so, but I'm not sure.
>
> Assuming it does, shouldn't that make the results here quite sensitive
> to the actual choice of proper prior? E.g. I'd expect that if I'd
> used dgamma(0.0001,0.0001) I'd see quite different results.
>
> All of the above reasoning makes sense to me, and indeed, when I make
> that change to the prior for tau, I see substantially different
> results for the mean and s.d. of the posterior for tau (though not
> as much difference in the quantiles).
>
> HOWEVER, all of the other parameters (including sigma) come up with
> results that are very close to the ones with the original prior. This
> leads me to wonder whether I should worry about the fact that the
> posterior is nearly improper.
>
> Could someone with more experience in this comment, please?
>
> Duncan Murdoch
-------------------------------------------------------------------
This list is for discussion of modelling issues and the BUGS software.
For help with crashes and error messages, first mail [log in to unmask]
To mail the BUGS list, mail to [log in to unmask]
Before mailing, please check the archive at www.jiscmail.ac.uk/lists/bugs.html
Please do not mail attachments to the list.
To leave the BUGS list, send LEAVE BUGS to [log in to unmask]
If this fails, mail [log in to unmask], NOT the whole list