JiscMail
Email discussion lists for the UK Education and Research communities

BUGS Archives

BUGS@JISCMAIL.AC.UK



Subject: informative priors: the replies
From: Stefan Van Dongen <[log in to unmask]>
Reply-To: Stefan Van Dongen <[log in to unmask]>
Date: Tue, 20 Mar 2001 09:04:15 +0100
Content-Type: text/plain
Parts/Attachments: text/plain (188 lines)

Thank you all for sending me your thoughts on the use of informative
priors to improve convergence behaviour. Below is a compilation of the
ideas.


***
If you are asking about latent variables alone, I haven't had that
experience. But I often run into analyses that just will not converge
without an informative prior. However, I'm using the term loosely:
usually I'll use a vague but informative prior, i.e. one with most of
its mass in a logical range of the parameter, but not attempting to
actually identify the true location. Sensitivity? I don't know, since
these runs don't converge otherwise. I suppose I could choose several
different vague priors to see if the results are the same. I hadn't
actually thought of that until reading your post; I'll give it a try
next time. And maybe that is the solution to your problem as well.
***
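[Editorial note: the sensitivity check suggested in the reply above can be sketched outside BUGS. The sketch below uses a hypothetical conjugate normal-mean model with made-up data, so the posterior is available in closed form; the point is only that several vague-but-proper priors should give essentially the same answer when the data dominate.]

```python
# Sensitivity sketch (hypothetical example): fit the same normal-mean model
# under several vague-but-proper priors and compare the posterior means.
# Conjugate model: y_i ~ N(mu, sigma2) with sigma2 known, prior mu ~ N(m0, s0sq).
def posterior_mean(data, sigma2, m0, s0sq):
    n = len(data)
    ybar = sum(data) / n
    precision = n / sigma2 + 1.0 / s0sq            # posterior precision of mu
    return (n * ybar / sigma2 + m0 / s0sq) / precision

data = [9.8, 10.4, 10.1, 9.6, 10.2]                # toy data, sample mean 10.02
sigma2 = 0.25                                      # assumed known variance

# Three different vague priors, each with most of its mass in a plausible range:
for m0, s0sq in [(0.0, 100.0), (10.0, 25.0), (5.0, 1000.0)]:
    print(m0, s0sq, round(posterior_mean(data, sigma2, m0, s0sq), 3))
# All three posterior means land very close together, so in this toy case the
# vague priors agree; disagreement would flag genuine prior sensitivity.
```

In a real WinBUGS analysis the analogue is to re-run the model with two or three different vague priors and compare the posterior summaries.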

Regarding Stefan Van Dongen's issue about non-convergence, I must say
that I have the same problem with the linear coefficients of fully
Bayesian generalized linear models. My application is somewhat different
from Stefan's, as I use WinBUGS for Poisson regression to smooth disease
rates in small-area maps and to assess the effect of covariates. For
smoothing there is never a problem, as I get rapid convergence of the
predicted disease counts; however, for the linear terms I cannot obtain
convergence and therefore cannot draw any defensible inference about
these terms. I usually run 3 independent chains, where initial values
for the linear terms are chosen based on a preliminary maximum
likelihood analysis (or pseudo-ML). Initial values are chosen to
represent the ML estimate and +/- 4 standard errors, in order to
represent over-dispersed, but not wildly over-dispersed, initial values.
As with Stefan, I have also experimented with more informative priors.
This sometimes leads to "apparent" convergence, but it does not hold as
one continues to run the chain.

I have discovered through conversation that others experience this same
problem. Perhaps, however, this should not be a surprise, since Eberly
and Carlin (Statistics in Medicine 2000; 19:2279-2294) point out that
this may be related to problems with model identifiability. I use the
convolution prior discussed by Eberly and Carlin, where two variance
components are included, a spatially structured one and an unstructured
one; however, I continue to have problems with convergence of the linear
coefficients even if I proceed with only one variance component (either
structured or not).

I have always felt that some of the best guidance that we "users" need
from those more knowledgeable in full Bayes analysis is in the selection
of initial values and priors. However, this particular problem of
non-convergence of linear coefficients in GLMs may simply be unsolvable.
If interest lies with smoothing, they work beautifully; however, if
interest lies with quantifying the strength of covariate effects, then a
non-Bayesian approach such as pseudo-MLE may be necessary (or perhaps
empirical Bayes). Perhaps these considerations also apply to Stefan's
problem.
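[Editorial note: the initial-value scheme described above, three chains started at the (pseudo-)ML estimate and at +/- 4 standard errors, can be sketched as follows. The coefficient names and numbers are made up for illustration; in practice the estimates and standard errors would come from a preliminary GLM fit.]

```python
# Sketch of over-dispersed initial values for multiple chains: one chain at
# the ML estimate, the others at estimate -/+ spread * standard error.
def overdispersed_inits(mle, se, spread=4.0):
    """Return one dict of initial values per chain: MLE, MLE - spread*SE, MLE + spread*SE."""
    chains = []
    for offset in (0.0, -spread, spread):
        chains.append({name: mle[name] + offset * se[name] for name in mle})
    return chains

mle = {"beta0": -0.3, "beta1": 0.12}   # hypothetical pseudo-ML estimates
se  = {"beta0": 0.10, "beta1": 0.04}   # hypothetical standard errors
inits = overdispersed_inits(mle, se)
# inits[1]["beta1"] == 0.12 - 4 * 0.04 == -0.04
```

Each dict in `inits` would then be supplied to one chain, giving starting points that are over-dispersed relative to the likely posterior but not wildly so.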

***

Yes, I think the use of informative priors is often to be recommended in
such cases. I have some experience with mixture models, hidden Markov
models and the like, as well as some other kinds of latent variable
models. Often there is not a unique maximum in the likelihood, and the
prior has the function of pushing the posterior towards one of the
maxima of the likelihood. An obvious example is a mixture model with two
components. Suppose, for example, each component is a normal
distribution with known and equal variances, so that the components only
differ in the unknown means M1 and M2. We also have a mixing parameter
P, the probability of belonging to component 1. If we have a
"non-informative" prior, in that we have a uniform prior for P and make
M1 and M2 exchangeable, then there is no way to distinguish between,
say, P=0.3, M1=10, M2=20 and P=0.7, M1=20, M2=10. This is, of course, a
very simple example, but more or less the same thing happens in more
complicated cases. We can impose constraints, but this can sometimes be
rather unnatural, and it might be better to impose a "soft" constraint
by using the prior.
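[Editorial note: the label-switching symmetry described above can be checked numerically. This is a minimal sketch with made-up data, not WinBUGS code: the two labellings P=0.3, M1=10, M2=20 and P=0.7, M1=20, M2=10 give exactly the same likelihood, so nothing in the data can separate them.]

```python
import math

def mixture_loglik(data, p, m1, m2, sd=1.0):
    """Log-likelihood of a two-component normal mixture with equal, known variance."""
    def norm_pdf(x, m):
        return math.exp(-0.5 * ((x - m) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))
    # Component 1 has weight p and mean m1; component 2 has weight 1-p and mean m2.
    return sum(math.log(p * norm_pdf(x, m1) + (1 - p) * norm_pdf(x, m2))
               for x in data)

data = [9.5, 10.2, 10.8, 19.1, 20.3]                 # toy data near 10 and 20
a = mixture_loglik(data, 0.3, 10.0, 20.0)
b = mixture_loglik(data, 0.7, 20.0, 10.0)
# a == b: swapping the component labels (and replacing P by 1-P) leaves the
# likelihood unchanged, which is why an exchangeable prior cannot identify
# the components.
```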

You use the phrase "biologically meaningful." This suggests that you do,
in fact, have genuine prior information which can be used in this way.
The trick, I think, is to devise a prior which uses the information you
are happy to use, without being "informative" about aspects where you do
not want to be "informative." This is not always easy, but I think there
is often real prior information which can be used.

In the example above, we might think that we can use a constraint such
as M1<M2. Well, what happens then if you have a set of data where really
all of the observations belong to one of the components? We have no way
to tell whether it is the first or the second. So instead we might try
P>0.5, so that, if there is only one component, it will be the first.
This might work better (although we might have problems if P is actually
close to 0.5). However, in practical, e.g. biological, terms we might
want to know which component it really is. I guess that this may often
be the case with latent variable models. It might be better to give M1
and M2 different priors, so that the model "knows" which is the more
likely component for the data. Just what you are prepared to do in this
way depends on the (biological) system you are modelling and what you
know about it.

***

As usual, I am sure that there are several of us who would like to take
a peek at your code, so as to better understand the nature of your
problem. I have frequently been able to avoid this problem by making
sure that the initial values are near their posterior mode.

***

--
Dr. Stefan Van Dongen
Group of Animal Ecology
Department of Biology
University of Antwerp
Universiteitsplein 1
B-2610 Wilrijk, Belgium

Tel: + 32 (0)3 820 22 61
Fax: + 32 (0)3 820 22 71
Email: [log in to unmask]
URL: http://bio-www.uia.ac.be/u/svdongen/index.html

-------------------------------------------------------------------
To mail the BUGS list, mail to [log in to unmask]
You can search old messages at www.jiscmail.ac.uk/lists/bugs.html
To leave the BUGS list, send LEAVE BUGS to [log in to unmask]
If this fails, mail [log in to unmask], NOT the whole list

This list is for discussion of modelling issues and the BUGS
software.  For help with crashes and error messages, first mail
[log in to unmask]




JiscMail is a Jisc service.

View our service policies at https://www.jiscmail.ac.uk/policyandsecurity/ and Jisc's privacy policy at https://www.jisc.ac.uk/website/privacy-notice

For help and support help@jisc.ac.uk
