Greetings!
Please excuse me if I am posing a naive question. I am an
econometrician working on some research involving the out-of-sample
forecasting of asset returns. I am exploring the use of Bayesian
estimators of the standard linear regression model, except that I am
using non-normal distributions for the regression errors. One of the
objectives of my research is to generate both a conditional point
forecast of future returns and an estimate of the conditional
probability distribution of future returns. My question centers on the
"mechanics" of how to do this in a Bayesian context.
In a non-Bayesian analysis, one could estimate the parameters of the
regression function using a suitable estimator. A conditional point
forecast of future values would then be found by simply evaluating the
regression function at the "future" values of the regressors (or
solving the regression function forward if there are lagged dependent
variables). To estimate the conditional distribution of the future
values, one could use Monte Carlo or bootstrapping methods, which
essentially create a set of alternative forecast paths. This set of
paths can then be used to estimate the distribution of the forecast at
each point in time in the future.
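
To make this concrete, here is a rough sketch of the non-Bayesian
procedure I have in mind (Python; all names are my own illustrative
choices, and I assume a simple AR(1) regression just for brevity):

import numpy as np

def bootstrap_forecast_paths(y, horizon, n_paths=1000, seed=None):
    # Illustrative residual bootstrap for y_t = b0 + b1*y_{t-1} + e_t.
    rng = np.random.default_rng(seed)
    X = np.column_stack([np.ones(len(y) - 1), y[:-1]])
    beta, *_ = np.linalg.lstsq(X, y[1:], rcond=None)  # point estimate
    resid = y[1:] - X @ beta
    paths = np.empty((n_paths, horizon))
    for i in range(n_paths):
        y_prev = y[-1]
        for h in range(horizon):
            # Solve the regression forward, resampling residuals.
            y_prev = beta[0] + beta[1] * y_prev + rng.choice(resid)
            paths[i, h] = y_prev
    return paths  # column h approximates the forecast distribution at h

Each row is one alternative forecast path, and the spread across rows
at a given horizon estimates the conditional distribution there.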
I am struggling a bit to understand the process from a Bayesian
perspective. For my research, I use a Gibbs sampler to estimate the
posterior density of the regression parameters (the particular model I
use is based on a noncentral Student-t). My question is how to move
from this to an estimate of the posterior predictive (forecast)
density. Note: assume that I have future values for the regressors.
Can I follow a procedure similar to the non-Bayesian case, i.e., use
the posterior means to evaluate the regression function at the future
values of the regressors, and then draw from the posterior density of
the regression parameters to create a new set of forecast paths?
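
In other words, something along these lines (again Python, and again
the names are illustrative; `draws` stands in for my retained Gibbs
output, stored here as (beta, scale, df, nc) tuples, which is an
assumption about my own code rather than any package's API):

import numpy as np
from scipy import stats

def predictive_paths(draws, X_future, seed=None):
    # One simulated forecast path per retained posterior draw, so
    # that parameter uncertainty and error uncertainty both enter.
    rng = np.random.default_rng(seed)
    paths = []
    for beta, scale, df, nc in draws:
        mean_path = X_future @ beta  # regression function at future X
        # Noncentral Student-t disturbances, one per forecast horizon.
        errors = stats.nct.rvs(df, nc, scale=scale,
                               size=len(X_future), random_state=rng)
        paths.append(mean_path + errors)
    return np.asarray(paths)

The column-wise means of the result would then give the point
forecast, and column-wise quantiles the predictive intervals. Is that
the right way to think about the posterior forecast density?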
Thanks.
Best regards,
Mark