I have searched the archives for this topic and, whilst I can find queries
raised about it, I can't seem to find any replies.
I have an analysis that I would like to conduct for a data set of some
1500 points that is a mixture of observed data points, (left-)censored
data points and some missing data points.
I have constructed a trial data set and can, with some difficulty and up to
a point, undertake the analysis, but as soon as I try to use my "real" data
set the program will only run for a very limited number of samples.
It appears to me that, while the initial values are critical to even
allowing the sampling to commence, other aspects of the model that influence
the ability to sample are:
1. likelihood - I'm working with a logNormal distribution; a Normal is more
accommodating
2. structure of the prior beliefs - less uncertainty allows more sampling
3. censored data - everything works better without any censoring
4. the level of censoring compared to observed data points - lowering the
level increases the number of samples that are possible
5. missing data - everything works better without any missing data
6. size of the data set - increasing the size increases the difficulty and
forces me (as a compromise, to obtain any results at all) to reduce the
number of samples.
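For reference, the kind of model I am working with looks roughly like the
sketch below (WinBUGS-style notation; the variable names, the use of a
single detection limit vector lim[], and the vague priors are just my
assumptions for illustration, not necessarily the right choices). Left-
censored and missing values are both coded as NA in y[], with lim[i] set to
the detection limit for censored points and to a very large value for
genuinely missing points so the interval is effectively unbounded:

```
model {
   for (i in 1:N) {
      # y[i] = NA for censored and missing points;
      # I(, lim[i]) imposes the upper (left-censoring) bound
      y[i] ~ dlnorm(mu, tau) I(, lim[i])
   }
   # deliberately vague priors - tightening these (point 2 above)
   # seems to help the sampler
   mu ~ dnorm(0, 0.001)
   tau ~ dgamma(0.001, 0.001)
   sigma <- 1 / sqrt(tau)
}
```

Note that every NA in y[] needs an initial value that lies strictly inside
its interval (i.e. below lim[i]), which I suspect is why the initial values
are so critical in my case.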
Now, while this may all be self-evident, I'm still trying to understand the
trade-off between the different aspects and would welcome any guidance,
whether comments or references.
Best wishes. Peter.
-------------------------------------------------------------------
This list is for discussion of modelling issues and the BUGS software.
For help with crashes and error messages, first mail [log in to unmask]
To mail the BUGS list, mail to [log in to unmask]
Before mailing, please check the archive at www.jiscmail.ac.uk/lists/bugs.html
Please do not mail attachments to the list.
To leave the BUGS list, send LEAVE BUGS to [log in to unmask]
If this fails, mail [log in to unmask], NOT the whole list