Dear list members,

I would appreciate any comments on the following, which I will try to explain in two steps.

1) There are k age groups and p years. For each age group i and year j, the number of applicants is assumed to follow a Poisson distribution with mean parameter lambda_ij, and each lambda_ij depends on some previous data through a log link.

2) The program runs with no problem when the observed count in each (age, year) cell is up to a medium scale (e.g. hundreds). In that case I let WinBUGS generate the initial values for the lambda_ij's. But when the observed counts are large, e.g. around 500,000 (five hundred thousand), the process becomes very sensitive to the initial values chosen for the lambda_ij's. I tried three different approaches for these initials: i) letting WinBUGS generate them, ii) using the MLEs of the lambda_ij's, iii) using the observed data themselves. The observed data seem to be the only initials for which the program runs without problems; I could not come up with any other set of initials that would make the program run.

PS: I tried approximating this Poisson with a Normal(lambda, lambda^2) and taking logarithms (just to rescale the data so that I would have smaller frequencies), but that still seems to run into the same problem. I wonder what you would do if your cell frequencies were as big as mine.

Thanks,
Zeynep Kalaylioglu

model {
    # Likelihood
    for (i in 2:k) {
        for (j in 2:p) {
            mu[i,j] <- w0 + w1*log(y[i,j-1])
            lambda[i,j] ~ dlnorm(mu[i,j], invsigma2)
            y[i,j] ~ dpois(lambda[i,j])
        }
    }
    for (j in 2:p) {
        mu[1,j] <- w0 + w1*log(y[1,j-1])
        lambda[1,j] ~ dlnorm(mu[1,j], invsigma2)
        y[1,j] ~ dpois(lambda[1,j])
    }
    for (i in 1:k) {
        lambda[i,1] ~ dlnorm(1000, 0.01)
        y[i,1] ~ dpois(lambda[i,1])
    }

    # Priors
    w0 ~ dnorm(1, 0.01)
    w1 ~ dnorm(1, 0.01)
    sigma2 <- 1/invsigma2
    invsigma2 ~ dgamma(1, 0.01)
}
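On computing initials outside WinBUGS: for a single Poisson observation per cell, the observed count y[i,j] is the natural starting value for lambda[i,j], and starting values for w0 and w1 can be read off a least-squares fit of log y[i,j] on log y[i,j-1], mirroring the log link in the model. The sketch below is only illustrative — the dimensions k, p, the synthetic counts, and the names lambda_init, w0_init, w1_init are all hypothetical, not from the original model or data:

```python
import math
import random

# Hypothetical dimensions and synthetic counts on the scale described
# in the post; real applicant data would replace this.
k, p = 5, 10
random.seed(1)
y = [[random.randint(400_000, 600_000) for _ in range(p)] for _ in range(k)]

# Empirical initials: the observed count itself for each lambda[i,j]
# (clamped away from zero so log-scale samplers stay finite).
lambda_init = [[max(y[i][j], 0.5) for j in range(p)] for i in range(k)]

# Initials for w0, w1 by least squares of log y[i,j] on log y[i,j-1],
# mirroring mu[i,j] <- w0 + w1*log(y[i,j-1]) in the model.
xs, ys = [], []
for i in range(k):
    for j in range(1, p):
        xs.append(math.log(y[i][j - 1]))
        ys.append(math.log(y[i][j]))
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
w1_init = sum((a - xbar) * (b - ybar) for a, b in zip(xs, ys)) / \
          sum((a - xbar) ** 2 for a in xs)
w0_init = ybar - w1_init * xbar
print(w0_init, w1_init)
```

These values would then go into the initial-values file for each chain rather than relying on WinBUGS's gen inits, which samples from the priors and can land absurdly far from counts of this magnitude.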
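On the PS: one thing worth checking is the variance used in the approximation — a Poisson(lambda) has variance lambda itself, so the moment-matched normal is Normal(lambda, lambda), i.e. standard deviation sqrt(lambda), and at counts near 500,000 that approximation is extremely accurate. A small stdlib-only check (poisson_cdf is a helper written just for this sketch):

```python
import math
from statistics import NormalDist

def poisson_cdf(x, lam):
    """P(Y <= x) for Y ~ Poisson(lam), summing exp(log-pmf) terms
    over the numerically relevant window around the mean."""
    sd = math.sqrt(lam)
    lo = max(0, int(lam - 12 * sd))  # mass below this point is negligible
    total = 0.0
    for kk in range(lo, int(x) + 1):
        total += math.exp(kk * math.log(lam) - lam - math.lgamma(kk + 1))
    return total

lam = 500_000                      # a cell count on the scale in the post
x = lam + math.sqrt(lam)           # one standard deviation above the mean

exact = poisson_cdf(x, lam)
# Moment matching: Poisson variance equals lam, so sigma = sqrt(lam).
approx = NormalDist(mu=lam, sigma=math.sqrt(lam)).cdf(x)
print(exact, approx)
```

The two CDF values agree to a few decimal places, so any instability at this scale is more likely down to the initials and the prior scales than to the Poisson likelihood per se.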