On Sat, 29 Apr 2000, Zhen Chen wrote:
ZC> Hi, everyone:
ZC> Some BUGS code for the multinomial probit model (MNP) was posted on this
ZC> list quite a while ago. I saved it somewhere, but just couldn't locate it.
ZC> Could anyone send a copy to me? I would really appreciate it.
ZC> Zhen Chen
a couple of responses:
1. you can search the mailbase archives for posts.
2. back in 1998 there was a post of some code that purported to do MNP in
WinBUGS (I attach it below; I've never tried to run that code). You will
note that the author constrains Sigma to be diagonal, which I think
largely defeats the purpose of running MNP vis-a-vis something much more
tractable like multinomial logit (with its IIA assumption). That is, I'm
not sure what you gain from going to MNP with Sigma diagonal over MNL.
3. as is well known, the MNP problem is tricky because for a choice set
with J outcomes, MLE requires integrating out a J-1 dimensional Normal
density. MCMC amounts to doing that integration by Monte Carlo methods,
sampling from a J-1 dimensional Normal, truncated to a region implied by
the observed choice, y_i.
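(In one common parameterization -- my notation, not from the original posts,
but it matches the truncation scheme in the attached code -- utilities are
differenced against a base alternative $J$, so $w_i = X_i\beta + \epsilon_i$
with $\epsilon_i \sim N(0,\Sigma)$ a $(J-1)$-vector, and
  $\Pr(y_i = J) = \Pr(w_{ij} \le 0 \ \forall j)$,
  $\Pr(y_i = j) = \Pr(w_{ij} > 0,\ w_{ij} \ge w_{ik}\ \forall k \ne j)$ for $j < J$.
The region you must sample $w_i$ from, given $y_i$, is exactly one of these
events.)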
This is extremely tricky to implement in BUGS, because while there is
support for truncated univariate Normal sampling, e.g.,
ystar[i] ~ dnorm(mu[i],1)I(lower[i],upper[i]);
sampling from a truncated MVN density is not supported. One way forward
might be to try to break down the sampling from a truncated MVN into a
series of truncated (marginal and conditional) univariate Normals, but this
is extremely ugly. I'm not even sure that the statistical theory is right:
i.e., if a vector y has a truncated multivariate Normal density, then
are the marginal densities for each scalar component of y truncated
univariate Normals?
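For what it's worth: the marginals of a rectangularly truncated MVN are
generally *not* truncated univariate Normals, but the full conditionals
are, and that is what the one-at-a-time (Geweke-style) Gibbs scheme
exploits. A rough sketch in Python/scipy -- the function name is mine and
this is untested against any BUGS implementation:

```python
import numpy as np
from scipy.stats import truncnorm

def gibbs_tmvn(mu, Sigma, lower, upper, n_iter=2000, seed=0):
    """Gibbs sampler for a MVN truncated to the rectangle [lower, upper].

    Each full conditional x_j | x_{-j} is a univariate truncated Normal,
    even though the marginals of the truncated MVN are not.
    """
    rng = np.random.default_rng(seed)
    d = len(mu)
    Sinv = np.linalg.inv(Sigma)
    x = np.clip(mu, lower, upper)          # start inside the region
    draws = np.empty((n_iter, d))
    for t in range(n_iter):
        for j in range(d):
            others = [k for k in range(d) if k != j]
            # conditional moments of x_j given the other components
            cond_var = 1.0 / Sinv[j, j]
            cond_mu = mu[j] - cond_var * Sinv[j, others] @ (x[others] - mu[others])
            sd = np.sqrt(cond_var)
            a = (lower[j] - cond_mu) / sd  # truncnorm wants standardized bounds
            b = (upper[j] - cond_mu) / sd
            x[j] = truncnorm.rvs(a, b, loc=cond_mu, scale=sd, random_state=rng)
        draws[t] = x
    return draws
```

(Successive draws are of course autocorrelated, as with any Gibbs output.)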
for these reasons I've attacked MNP with problem-specific code in C or
GAUSS, using "brute-force" rejection sampling (sampling from untruncated
MVNs until a draw is obtained that satisfies the constraint implied by the
observed choice), along the lines discussed by Albert & Chib in various
places.
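To make that concrete, the brute-force scheme is only a few lines in, say,
Python; this is my own sketch (the function name and the accept-region
callback are made up), not the actual C/GAUSS code referred to above:

```python
import numpy as np

def rejection_tmvn(mu, Sigma, accept, max_tries=100_000, seed=0):
    """Draw once from a MVN truncated to the region where accept(w) is
    True, by sampling the untruncated MVN and discarding violations."""
    rng = np.random.default_rng(seed)
    for _ in range(max_tries):
        w = rng.multivariate_normal(mu, Sigma)
        if accept(w):
            return w
    raise RuntimeError("acceptance region has very low probability")

# Example: a 3-alternative MNP with utilities differenced against the
# third alternative; observing choice 1 implies w[0] > 0 and w[0] > w[1].
w = rejection_tmvn(np.zeros(2), np.eye(2),
                   lambda w: w[0] > 0 and w[0] > w[1])
```

The obvious drawback is that the expected number of tries explodes when
the acceptance region has low probability under the untruncated density.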
Regards -- Simon Jackman
--
Simon Jackman, Asst Professor, ph: +1 (650) 723-4760
Dept of Political Science, fax: +1 (650) 723-1808
455 Serra Mall, Bldg 160, [log in to unmask]
Stanford University, http://jackman.stanford.edu
Stanford CA, 94305-2044, USA.
Date: Wed, 25 Nov 1998 09:49:46 -0500
From: Sudhir Karunakaran <[log in to unmask]>
To: [log in to unmask]
Subject: Re: Estimation of Choice Models
Hi,
I received the following code from Andre Bonfrer
[[log in to unmask]] to implement the MNP model in response to my
query on this list.
Sudhir.
Disclaimer: This is untested - there may be better ways to code up the
probit, and I don't know for sure that I got the truncated normal
distributions right - that is what causes the warning about undirected
cycles.
The code implements McCulloch and Rossi (1994), "An Exact Likelihood
Analysis of the Multinomial Probit Model". Note the assumption that
$\Sigma$ is diagonal, which simplifies the update for $\beta$ conditional
on $w$ and $G$: $\beta | w, G \sim N(\hat{\beta}, \Sigma_\beta)$.
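(For reference - if I have the algebra right - with prior
$\beta \sim N(b, A^{-1})$ the standard conjugate update is
  $\hat{\beta} = (X'GX + A)^{-1}(X'Gw + Ab)$, $\Sigma_\beta = (X'GX + A)^{-1}$,
where $G = \Sigma^{-1}$; a diagonal $\Sigma$ lets the code below do this
with scalar operations instead of matrix inversion.)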
The Code is below:
model mnp2300;
const
N = 300, # number of observations
K = 3 , # number of brands
Q = 1 ; # number of X covariates
var
q[N,K], # chosen brand dummies
d[N], # chosen brand index
P[N,K], # prices
X[N,K-1], # transformed X matrix
Xs[N,K-1], # Xs = C'Xi
XXs[Q,Q], # XXs = Xs'Xs
Xws[Q,Q], # Xws = Xs'ws
Xwsa[K-1], # Xwsa = Xsa'ws
w[N,K-1], # latent brand utilities
ws[N,K-1], # ws = C'wi
m[N,K-1], # mean of w | beta,X, w(-j)
alpha[K-1], # brand intercepts
alpha.a[K-1], # prior mean for alpha
alpha.A[K-1], # prior varcov for alpha
alphahat[K-1], # conditional mean of alpha
Salpha[K-1], # conditional variance of alpha
beta[Q], # beta matrix, excl alpha
beta.b[Q], # prior 1st parm of beta
beta.A[Q,Q], # prior covariance of beta
betahat[Q], # conditional mean of beta
Sbeta[Q,Q], # conditional varcov of beta
tau[N,K-1], # var of w | beta,X, w(-j)
G[K-1], # inverse of sigma - diagonal only (G[1] = 1)
Gi[K-1,K-1], # inverse of sigma - full wishart draw
Gi.V[K-1,K-1], # prior 1st parm for Sigma
Gi.v, # prior 2nd parm for Sigma
v, # pain in the butt parm to store parms
tws[N,K-1], # pain in the butt parm to store parms
V[K-1,K-1], # same as above
sh[K-1], # sum of brand bought
ee[K-1,K-1], # sum of sq error
e[N,K-1]; # error vector = w - X'B
data q, d in "ymnp300.dat", X in "xmnp300.dat", Gi.V in "mnpcov.dat";
inits in "mnp2300.in";
{
for(i in 1:N){
# draw latent utilities; each for(j in 1:q[i,k]) loop runs exactly once
# when brand k was chosen (q[i,k] = 1) and not at all otherwise
for(j in 1:q[i,3]){
w[i,1] ~ dnorm(m[i,1],tau[i,1])I(,0);
w[i,2] ~ dnorm(m[i,2],tau[i,2])I(,0);
}
for(j in 1:q[i,1]){
w[i,1] ~ dnorm(m[i,1],tau[i,1])I(0,);
w[i,2] ~ dnorm(m[i,2],tau[i,2])I(,w[i,1]);
}
for(j in 1:q[i,2]){
w[i,2] ~ dnorm(m[i,2],tau[i,2])I(0,);
w[i,1] ~ dnorm(m[i,1],tau[i,1])I(,w[i,2]);
}
# m[i,j] and tau[i,j]
for(j in 1:K-1){
m[i,j] <- alpha[j] + X[i,j]*beta[1];
tau[i,j] <- G[j];
}
# draw betas
for(j in 1:K-1){
Xs[i,j] <- (1/sqrt(G[j]))*X[i,j];
}
for(j in 1:K-1){
tws[i,j] <- alpha[j] + Xs[i,j]*beta[1];
ws[i,j] ~dnorm(tws[i,j],1);
}
# error component
for(j in 1:K-1){
e[i,j] <- w[i,j] - (alpha[j]+X[i,j]*beta[1]);
}
} # end of i loop
# calculate beta | w, G
XXs[1,1] <- inprod(Xs[,1],Xs[,1]) + inprod(Xs[,2],Xs[,2]);
Xws[1,1] <- inprod(Xs[,1],ws[,1]) + inprod(Xs[,2],ws[,2]);
for(j in 1:K-1){
Xwsa[j] <- inprod(q[,j],ws[,j]);
}
Sbeta[1,1] <- 1/(XXs[1,1] + beta.A[1,1]);
betahat[1] <- Sbeta[1,1]*(Xws[1,1]+ beta.A[1,1] *beta.b[1]);
for(j in 1:K-1){
sh[j] <- sum(q[,j]);
Salpha[j] <- 1/(sh[j]+alpha.A[j]);
alphahat[j] <- Salpha[j]*(Xwsa[j] + alpha.A[j] *alpha.a[j]);
}
# error component (SSE)
for(j in 1:K-1){
ee[j,j] <- inprod(e[,j],e[,j]);
for(jj in j+1:K-1){
ee[j,jj] <- inprod(e[,j],e[,jj]);
ee[jj,j] <- inprod(e[,jj],e[,j]);
}
}
# priors for G
for(j in 1:K-1){
for(jj in 1:K-1){
V[j,jj] <- Gi.V[j,jj] + ee[j,jj];
}
}
v <- (Gi.v+N);
Gi[,] ~ dwish(V[,], v);
G[1] <- 1;
for(j in 2:K-1){
G[j] <- Gi[j,j];
}
# priors for alpha and beta
for(j in 1:K-1){
alpha.a[j] <- 0.0;
alpha.A[j] <- 0.03;
}
for(j in 1:Q){
beta.b[j] <- 0.0;
beta.A[j,j] <- 0.03;
}
Gi.v <- 4;
# intercepts (NB: dnorm's second argument is a precision; Salpha and
# Sbeta above are posterior variances, so strictly they should be
# inverted before being passed to dnorm)
for(j in 1:K-1){
alpha[j] ~ dnorm(alphahat[j],Salpha[j]);
}
for(j in 1:Q){ # parameters
beta[j] ~ dnorm(betahat[1],Sbeta[1,1]);
}
}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%