Greetings, and apologies for cross-posting...
We are announcing the following courses, which are scheduled to take place at the Statistical Services Centre in November 2013. Summary information is given below. For registration forms, please see http://www.reading.ac.uk/ssc/ or email [log in to unmask], providing your address and/or fax number.
****************************************************************************
Day 1: Bayesian Modeling, Inference and Prediction; 27 November 2013
****************************************************************************
Day 2: Bayesian Hierarchical Modeling; 28 November 2013
****************************************************************************
Day 3: Bayesian Model Specification; 29 November 2013
****************************************************************************
Prices: £370 for any one day; £695 for any two days; £995 for all three days.
A 30% academic discount is available for these courses, and a special discount of 50% will be given to students. Please indicate that you wish to apply for a discount when you register, together with information supporting your eligibility. [Terms and conditions apply.]
Please view http://www.reading.ac.uk/ssc/n/Short%20Courses/bayesianmodelling.htm for more details. Further information is also given below.
**********************************************************************************************************************
Course outline
**************
This short course offers you flexibility: you can take
* the first day, an introductory course on Bayesian modeling, inference and prediction;
* the second day, an intermediate-level course on Bayesian model comparison and hierarchical modeling;
* the third day, an intermediate-level course on Bayesian model specification;
* or any two days, or all three days.
Day 1 of this course will:
*********************
(1) Compare and contrast the frequentist and Bayesian conceptions of probability, highlighting the strengths and weaknesses of both;
(2) Review maximum-likelihood fitting of statistical models;
(3) Show you how to obtain Bayesian solutions to inferential and predictive problems analytically and in closed form (when such solutions are available); and
(4) Introduce you to simulation-based Bayesian model-fitting using Markov chain Monte Carlo (MCMC) methods, in the freeware packages WinBUGS and R, when closed-form solutions are not possible.
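To give a flavour of item (3), here is a minimal illustrative sketch (not course material, and in Python rather than the WinBUGS/R used on the course) of a closed-form conjugate Bayesian analysis for a binary outcome: a Beta prior updated by Binomial data yields a Beta posterior with no simulation needed. The prior choice and data values below are made up for illustration.

```python
# Conjugate updating for a binary outcome:
# prior theta ~ Beta(a, b); data: y successes in n Bernoulli trials;
# posterior theta | y ~ Beta(a + y, b + n - y) -- closed form, no MCMC.

def beta_binomial_posterior(a, b, y, n):
    """Return the parameters of the Beta posterior."""
    return a + y, b + n - y

def beta_mean(a, b):
    """Mean of a Beta(a, b) distribution."""
    return a / (a + b)

# Uniform prior Beta(1, 1); observe 7 successes in 10 trials.
a_post, b_post = beta_binomial_posterior(1, 1, 7, 10)
print(a_post, b_post)                        # 8 4
print(round(beta_mean(a_post, b_post), 3))   # posterior mean 0.667
```

The same posterior is recovered (up to simulation noise) by the MCMC methods of item (4) when no closed form exists.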
Day 2 of this course will:
*********************
(1) Introduce Bayesian hierarchical modeling via meta-analysis, the study of how information can be combined across experiments to provide a better summary than is obtained by examining one experiment at a time;
(2) Discuss the critical role played by the choice of prior distributions in Bayesian hierarchical models;
(3) Illustrate the use of latent variables (random effects) as an approach to describing unexplained heterogeneity; and
(4) Explore two in-depth case studies involving random-effects Poisson regression and mixed-effects logistic regression.
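The pooling idea in item (1) can be sketched in a few lines. This is an illustrative Python fragment, not course material (the course fits the full hierarchical model in WinBUGS and R); it shows only the simplest building block, a precision-weighted fixed-effect pooled estimate, with made-up study values.

```python
# Fixed-effect pooling: weight each study's estimate by its precision
# (1 / variance); the pooled variance is 1 / (sum of precisions).
# With known variances and a flat prior, this is also the Bayesian
# posterior mean in the simplest normal-normal model.

def pooled_estimate(estimates, variances):
    """Return (pooled mean, pooled variance) across studies."""
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    mean = sum(w * e for w, e in zip(weights, estimates)) / total
    return mean, 1.0 / total

# Three hypothetical studies: effect estimates and their variances.
est, var = pooled_estimate([0.30, 0.10, 0.25], [0.04, 0.01, 0.02])
# The most precise study (variance 0.01) dominates the pooled mean.
```

The hierarchical models of the course go further, adding a between-study variance component so that heterogeneity across experiments is modelled rather than ignored.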
Day 3 of this course will:
*********************
(1) Provide an overview of the process of Bayesian model specification;
(2) Introduce five basic principles -- the Calibration Principle, the Modeling-As-Decision Principle, the Prediction Principle, the Inference-Versus-Decision Principle and Cromwell's Rule (Parts 1 and 2) -- and show you how they inform good Bayesian model building and model criticism; and
(3) Introduce you to Calibration Cross-Validation, Bayes factors, BIC, DIC and log scoring (in WinBUGS and R) as methods for finding good Bayesian models.
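The log-scoring idea in item (3) can be illustrated with a small hedged sketch (Python here for brevity; the course works in WinBUGS and R): a model's out-of-sample quality is measured by the mean log predictive density it assigns to held-out data, with higher values better. The two Gaussian predictive distributions and the data below are hypothetical.

```python
import math

def normal_logpdf(x, mu, sigma):
    """Log density of a Normal(mu, sigma^2) distribution at x."""
    return (-0.5 * math.log(2 * math.pi * sigma ** 2)
            - (x - mu) ** 2 / (2 * sigma ** 2))

def log_score(data, mu, sigma):
    """Mean log predictive density over held-out observations."""
    return sum(normal_logpdf(x, mu, sigma) for x in data) / len(data)

held_out = [1.9, 2.1, 2.0, 2.3]            # made-up validation data
score_a = log_score(held_out, 2.0, 0.2)    # well-located predictive
score_b = log_score(held_out, 0.0, 0.2)    # badly located predictive
# score_a > score_b: the better-calibrated model wins on log score.
```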
All three days of the course will be based on a series of practical real-world case studies.
***********************************************************************************************************************
Who should attend?
******************
Statisticians, biostatisticians, epidemiologists, data analysts, data-miners, and machine-learning specialists who wish to broaden and deepen:
(a) their understanding of Bayesian methods and
(b) their toolkits for using Bayesian models to find meaningful patterns, arrive at statistically sound inferences and make better decisions.
Some graduate coursework in statistics (or an allied field such as biostatistics, epidemiology or machine learning) will provide sufficient mathematical background for participants. To get the most out of the course, participants should be comfortable with hearing the course presenter discuss:
(a) differentiation and integration of functions of several variables and
(b) discrete and continuous probability distributions (joint, marginal, and conditional) for several variables at a time, but all necessary concepts will be approached in a sufficiently intuitive manner that rustiness on these topics will not prevent understanding of the key ideas.
The first day of the course will assume no previous exposure to Bayesian ideas or methods. Participants interested in entering the course on the second or third day should ideally have had exposure to the ideas on the days preceding their entry day.
***********************************************************************************************************************
How you will benefit
*******************
You will:
(1) Gain a deeper understanding of maximum-likelihood-based methods and when they can be expected to behave in a sub-optimal manner;
(2) Broaden and deepen your facility in the fitting and interpretation of Bayesian models to solve important problems in science, public policy and business; and
(3) Learn how to write your own programs in WinBUGS and R to fit Bayesian models in your own work.
***********************************************************************************************************************
Course content
***************
Day 1: Bayesian Modeling, Inference and Prediction
******************************************
* Background and basics: strengths and weaknesses of the classical, frequentist and Bayesian probability paradigms
* Sequential learning via Bayes' Theorem
* Coherence as a form of internal calibration
* Bayesian decision theory via maximization of expected utility
* Review of frequentist modeling and maximum-likelihood inference
* Exchangeability as a Bayesian concept parallel to frequentist independence
* Prior, posterior, and predictive distributions
* Bayesian conjugate analysis of binary outcomes, and comparison with frequentist modeling
* Conjugate analysis of integer-valued outcomes (Poisson modeling)
* Conjugate analysis of continuous outcomes (Gaussian modeling)
* Multivariate unknowns and marginal posterior distributions
* Introduction to simulation-based computation, including rejection sampling and Markov chain Monte Carlo (MCMC) methods
* MCMC implementation strategies.
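As a taste of the simulation-based computation topics above, here is a minimal random-walk Metropolis sketch, illustrative only and in Python (the course itself uses WinBUGS and R). It samples from a Beta(8, 4)-shaped posterior whose density is known only up to a normalizing constant, exactly the situation MCMC is designed for.

```python
import math
import random

def log_target(theta):
    """Unnormalized log density of a Beta(8, 4) posterior."""
    if not 0.0 < theta < 1.0:
        return float("-inf")
    return 7.0 * math.log(theta) + 3.0 * math.log(1.0 - theta)

def metropolis(n_iter, start=0.5, step=0.1, seed=42):
    """Random-walk Metropolis: propose, then accept with prob.
    min(1, target(proposal) / target(current))."""
    random.seed(seed)
    theta, draws = start, []
    for _ in range(n_iter):
        proposal = theta + random.uniform(-step, step)
        log_ratio = log_target(proposal) - log_target(theta)
        if random.random() < math.exp(min(0.0, log_ratio)):
            theta = proposal
        draws.append(theta)
    return draws

draws = metropolis(20000)
# Discard the first half as burn-in; the remaining draws approximate
# the posterior, whose exact mean is 8 / 12.
posterior_mean = sum(draws[10000:]) / 10000
```

WinBUGS automates this kind of sampling from a declarative model description, and R is typically used to prepare data and summarize the resulting draws.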
Day 2: Bayesian Hierarchical Modeling
********************************
* Bayesian hierarchical modeling
* Hierarchical modeling with latent variables as an approach to mixture modeling
* Bayesian fixed- and random-effects meta-analysis
* Prior distributions in Bayesian hierarchical modeling
* Bayesian fitting of random-effects and mixed models
* Comparison of likelihood-based and Bayesian methods for fitting hierarchical models: circumstances in which likelihood-based fitting can be poorly calibrated.
Day 3: Bayesian Model Specification
******************************
* The big picture in Bayesian model specification
* The Calibration Principle, the Modeling-As-Decision Principle, the Prediction Principle, the Inference-Versus-Decision Principle, and Cromwell's Rule (Parts 1 and 2)
* Model expansion as a tool for improving Bayesian modeling: embedding a deficient model in a larger class of models of which it's a special case
* Methods for finding good Bayesian models: Calibration Cross-Validation, Bayes factors, BIC, DIC and log scores
* A generic Bayesian model-search algorithm
* False positive/false negative trade-offs in comparing {Bayes factors, BIC} and {DIC, log scores} on their ability to correctly discriminate between models.
**********************************************************************************************************************
Guest Presenter
****************
David Draper is a Professor of Statistics in the Department of Applied Mathematics and Statistics at the University of California, Santa Cruz (USA); from 1 July to 31 December 2013 he is also a Distinguished Statistical Scientist and Visiting Professor at eBay Research Labs in San Jose, CA.
He is a Fellow of the American Association for the Advancement of Science, the American Statistical Association (ASA), the Institute of Mathematical Statistics, and the Royal Statistical Society; from 2001 to 2003 he served as the President-Elect, President, and Past President of the International Society for Bayesian Analysis (ISBA).
He is the author or co-author of about 140 contributions to the methodological and applied statistical literature, including articles in the Journal of the Royal Statistical Society (Series A, B and C), the Journal of the American Statistical Association, the Annals of Applied Statistics, Bayesian Analysis, Statistical Science, the New England Journal of Medicine, and the Journal of the American Medical Association; his 1995 JRSS-B article on assessment and propagation of model uncertainty has been cited about 1,150 times, and taken together his publications have been cited more than 9,200 times.
His research is in the areas of Bayesian inference and prediction, model uncertainty and empirical model-building, hierarchical modeling, Markov chain Monte Carlo methods, and Bayesian nonparametric methods, with applications mainly in medicine, health policy, education, environmental risk assessment and eCommerce.
His short courses have received Excellence in Continuing Education Awards from the American Statistical Association on two occasions, corresponding to days 1 and 2 of this course.
He has won or been nominated for major teaching awards everywhere he has taught (the University of Chicago; the RAND Graduate School of Public Policy Studies; the University of California, Los Angeles; the University of Bath (UK); and the University of California, Santa Cruz).
He has a particular interest in the exposition of complex statistical methods and ideas in the context of real-world applications.
Course Materials
****************
Please note that there will not be any printed notes for this course; a web link to the materials (PDF files with course notes and .txt files containing data sets and R and WinBUGS code) will be provided to registered participants several weeks before the course occurs.
Location
*********
The Statistical Services Centre, on the Whiteknights campus of the University of Reading, is in a prime location in the South-East of England and has excellent transport links. The University is close to the M4 motorway, allowing easy access by car. Reading's railway station has high-speed links to and from London Paddington, as well as regular services to and from other cities around the UK. There are direct services to and from both London Heathrow and London Gatwick Airports. For further details view: http://www.reading.ac.uk/ssc/n/visiting.htm.
Emma Hollands
Training Co-ordinator
Statistical Services Centre
University of Reading
Harry Pitt Building
Whiteknights Road
Reading RG6 6FN
UK
e-mail: [log in to unmask]
Tel: +44(0)118 378 8689
Fax: +44(0)118 378 8458
www.reading.ac.uk/ssc
Disclaimer: The messages sent to this list are the views of the sender and cannot be assumed to be representative of the range of views held by subscribers to the Radical Statistics Group. To find out more about Radical Statistics and its aims and activities and read current and past issues of our newsletter you are invited to visit our web site www.radstats.org.uk.
*******************************************************