We invite researchers in machine learning and statistics to participate in the:
NIPS 2014 Workshop on Advances in Variational Inference
12 December 2014, Montreal, Canada
www.variationalinference.org
Submission deadline: 9 October 2014
1. Call for participation
We invite researchers to submit their recent work on the development, analysis, and application of variational inference. Submissions should take the form of an extended abstract of 2–4 pages in PDF format using the NIPS style (author names do not need to be anonymised). Submissions will be accepted either as contributed talks or as poster presentations. Final versions of the extended abstract are due by 28 November and will be posted on the workshop website.
Abstracts should be submitted by 9 October to [log in to unmask]
2. Workshop overview
The ever-increasing size of data sets has resulted in an immense effort in machine learning and statistics to develop more powerful and scalable probabilistic models. Efficient inference remains a challenge and limits the use of these models in large-scale scientific and industrial applications. Traditional unbiased inference schemes such as Markov chain Monte Carlo (MCMC) are often slow to run and difficult to evaluate in finite time. In contrast, variational inference allows for competitive run times and more reliable convergence diagnostics on large-scale and streaming data—while continuing to allow for complex, hierarchical modelling. This workshop aims to bring together researchers and practitioners addressing problems of scalable approximate inference to discuss recent advances in variational inference, and to debate the roadmap towards further improvements and wider adoption of variational methods.
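For readers less familiar with the framework, variational inference casts posterior approximation as an optimisation problem: one maximises the evidence lower bound (ELBO) over a tractable family of distributions. In generic notation (the symbols below are illustrative, not taken from any particular submission):

```latex
\log p(x) \;\ge\; \mathbb{E}_{q(\theta)}\!\left[\log p(x,\theta) - \log q(\theta)\right] \;=\; \mathcal{L}(q),
```

with equality when q(θ) equals the exact posterior p(θ | x); maximising L(q) within the chosen family yields the variational approximation.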
The recent resurgence of interest in variational methods includes new methods for scalability using stochastic gradient methods, extensions to the streaming variational setting, improved local variational methods, inference in non-linear dynamical systems, principled regularisation in deep neural networks, and inference-based decision making in reinforcement learning, amongst others. Variational methods have clearly emerged as a preferred way to allow for tractable Bayesian inference. Despite this interest, there remain significant trade-offs in speed, accuracy, simplicity, applicability, and learned model complexity between variational inference and other approximate schemes such as MCMC and point estimation. In this workshop, we will discuss how to rigorously characterise these trade-offs, as well as how they might be made more favourable. Moreover, we will address other issues of adoption in scientific communities that could benefit from the use of variational inference including, but not limited to, the development of relevant software packages.
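As a concrete illustration of the stochastic-gradient flavour of these methods, the sketch below fits a Gaussian q(θ) to a known Gaussian target by reparameterised Monte Carlo gradients of the ELBO. It is a minimal toy under invented assumptions (the target N(2, 0.5²), the batch size, and the step size are all chosen purely for illustration), not code from any workshop submission:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative target "posterior": N(m, s^2) with known parameters.
m, s = 2.0, 0.5

# Variational family q(theta) = N(mu, sigma^2); optimise (mu, log_sigma)
# by stochastic gradient ascent on the ELBO, using the reparameterisation
# theta = mu + sigma * eps with eps ~ N(0, 1).
mu, log_sigma = 0.0, 0.0
lr, batch, steps = 0.05, 64, 2000

for _ in range(steps):
    sigma = np.exp(log_sigma)
    eps = rng.standard_normal(batch)
    theta = mu + sigma * eps
    dlogp = -(theta - m) / s**2                          # d/dtheta log p(theta)
    grad_mu = dlogp.mean()                               # pathwise gradient w.r.t. mu
    grad_log_sigma = (dlogp * eps).mean() * sigma + 1.0  # "+1" from the entropy of q
    mu += lr * grad_mu
    log_sigma += lr * grad_log_sigma

print(round(mu, 2), round(np.exp(log_sigma), 2))  # approaches (2.0, 0.5)
```

In this conjugate toy the optimum recovers the exact posterior, mu ≈ m and sigma ≈ s; in realistic models q can only approximate the posterior, but the same update rule applies with log p evaluated up to an additive constant.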
The workshop will consist of invited and contributed talks, a spotlight and poster session, and a panel discussion. For more details see: www.variationalinference.org. This workshop is supported by the International Society for Bayesian Analysis (ISBA), Adobe Creative Technologies Laboratory, and Google DeepMind.
3. Confirmed speakers
Matt Hoffman
Michalis Titsias
Erik Sudderth
Sylvain Le Corff
Durk Kingma
David Knowles (TBC)
4. Key Dates
Paper submission: 9 October 2014
Acceptance notification: 23 October 2014
Final paper submission: 28 November 2014
Workshop: Friday, 12 December 2014
5. Workshop organisers
Shakir Mohamed (Google DeepMind)
Tamara Broderick (U. California, Berkeley / MIT)
Charles Blundell (Google DeepMind)
Matt Hoffman (Adobe Creative Technologies Lab)
David Blei (Princeton University)
Michael I. Jordan (U. California, Berkeley)