I have a question about the normal mixture model.
The basic idea for specifying a mixture of priors seems to be to generate
a random index with a given probability structure, e.g.
T[i] ~ Categorical(P).
However, I am not sure how this index relates to the probabilities
shown in the node statistics after the iterations.
My expectation was that if the posterior mean of P[1] is 0.6 or
thereabouts, then T[i] would equal 1 in roughly 60% of the results.
But that is not what I observed. Am I missing something about the
mechanism behind the posteriors of P and T?
If I want a weighted average of two posteriors obtained by giving a
mixture of two priors, will the original mixture model from the BUGS
examples not work in my case?
I would appreciate some insight into P and T in the example below.
The BUGS code for the "Eyes: normal mixture model" example is:
model
{
    for (i in 1 : N) {
        y[i] ~ dnorm(mu[i], tau)
        mu[i] <- lambda[T[i]]
        T[i] ~ dcat(P[])
    }
    P[1:2] ~ ddirch(alpha[])
    theta ~ dnorm(0.0, 1.0E-6)I(0.0, )
    lambda[2] <- lambda[1] + theta
    lambda[1] ~ dnorm(0.0, 1.0E-6)
    tau ~ dgamma(0.001, 0.001)
    sigma <- 1 / sqrt(tau)
}
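For reference, the generative structure that the model block above encodes can be sketched in ordinary Python. This is a minimal simulation with made-up values for P, lambda, and sigma (not the fitted Eyes posterior), just to show what T[i] ~ dcat(P[]) does marginally:

```python
import random

random.seed(1)

P = [0.6, 0.4]       # mixing proportions (illustrative values)
lam = [0.0, 5.0]     # component means lambda[1], lambda[2] (illustrative)
sigma = 1.0          # common standard deviation (illustrative)

def draw():
    # T ~ dcat(P[]): pick a component index (1-based, as in BUGS)
    T = 1 if random.random() < P[0] else 2
    # y ~ dnorm(lambda[T], tau): draw the observation from that component
    y = random.gauss(lam[T - 1], sigma)
    return T, y

samples = [draw() for _ in range(10000)]
share_comp1 = sum(1 for T, _ in samples if T == 1) / len(samples)
print(share_comp1)   # close to P[1] = 0.6
```

Marginally, the fraction of draws with T = 1 does match P[1], which is what led to my expectation about the node statistics for T.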
The results for P are:

node    mean      sd
P[1]    0.6017    0.08546
P[2]    0.3983    0.08546
Some of the results for T are:

node    mean      sd
T[2]    1.003     0.0546
T[3]    1.006     0.0750
...
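If I understand the output correctly: since T[i] only takes the values 1 and 2, its posterior mean is 1 + Pr(T[i] = 2 | data), so T[2] = 1.003 would say that observation 2 belongs to component 2 with posterior probability about 0.003. Each T[i] is conditioned on its own y[i], not drawn with the marginal weight P[1]. A minimal sketch of that per-observation calculation, plugging in illustrative parameter values rather than the fitted Eyes posterior:

```python
import math

def resp2(y, P, lam, sigma):
    """Pr(T = 2 | y) for a two-component normal mixture
    (plug-in sketch with illustrative parameter values)."""
    def phi(x, m, s):
        # normal density at x with mean m and sd s
        return math.exp(-0.5 * ((x - m) / s) ** 2) / (s * math.sqrt(2 * math.pi))
    num = P[1] * phi(y, lam[1], sigma)
    den = P[0] * phi(y, lam[0], sigma) + num
    return num / den

P = [0.6, 0.4]       # mixing proportions (illustrative)
lam = [0.0, 5.0]     # component means (illustrative)
sigma = 1.0

y = 0.1              # an observation sitting near component 1
p2 = resp2(y, P, lam, sigma)
# E[T | y] = 1 * Pr(T=1|y) + 2 * Pr(T=2|y) = 1 + Pr(T=2|y)
print(1 + p2)        # very close to 1, like T[2] and T[3] above
```

So a posterior mean of T[i] near 1 is consistent with P[1] near 0.6: P describes the overall mixture weight, while the individual T[i] are pulled strongly toward whichever component their own y[i] supports.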
-------------------------------------------------------------------
This list is for discussion of modelling issues and the BUGS software.
For help with crashes and error messages, first mail [log in to unmask]
To mail the BUGS list, mail to [log in to unmask]
Before mailing, please check the archive at www.jiscmail.ac.uk/lists/bugs.html
Please do not mail attachments to the list.
To leave the BUGS list, send LEAVE BUGS to [log in to unmask]
If this fails, mail [log in to unmask], NOT the whole list