This is probably an elementary question, but I am struggling with it:
I am comparing two (or more) posterior distributions obtained from WinBUGS
(i.e. they are empirical distributions and hence have finite supports).
'Visual' comparison works pretty well, but the journal (applied rather
than statistical) wants something more definite to quantify the
similarity/dissimilarity of the distributions.
Any ideas/leads?
I have tried the Kullback-Leibler divergence (relative entropy, KLe),
but I have run into several problems with it. I would appreciate views
on the following questions:
1. KLe is asymmetric, but my problem is symmetric: I am interested in
how distributions 1 and 2 compare with each other, not in how well
distribution 1 is approximated by distribution 2. In my case, reversing
the roles of the two distributions yields quite different numerical
values. How can I 'symmetrise' KLe?
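(A sketch, not part of the original question: two standard symmetrisations are the Jeffreys divergence, which simply averages the two directions of KLe, and the Jensen-Shannon divergence, which compares each distribution to their 50/50 mixture and has the extra advantage of staying finite when the supports differ. A minimal Python illustration, assuming both posteriors have already been binned on a common grid as probability vectors p and q:)

```python
import math

def kl(p, q):
    """Kullback-Leibler divergence KL(p || q) for two discrete
    distributions given as equal-length probability lists.
    Terms with p_i == 0 contribute nothing (0 * log 0 := 0)."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def jeffreys(p, q):
    """Symmetrised KL (Jeffreys divergence): the average of the
    two directions, so jeffreys(p, q) == jeffreys(q, p)."""
    return 0.5 * (kl(p, q) + kl(q, p))

def jensen_shannon(p, q):
    """Jensen-Shannon divergence: KL of each distribution to the
    mixture m = (p + q)/2. Symmetric, bounded above by log 2, and
    finite even where one distribution has zero-probability cells."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# toy example with two small discrete distributions
p = [0.5, 0.3, 0.2]
q = [0.2, 0.3, 0.5]
d_j = jeffreys(p, q)
d_js = jensen_shannon(p, q)
```

(Both quantities are symmetric in p and q, which addresses question 1; the JS divergence also sidesteps question 2 below, since the mixture m is nonzero wherever either distribution has mass.)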
2. KLe contains a term log(p2), and p2 is an empirical distribution
whose finite support differs from that of distribution 1. Where p2 = 0
but p1 > 0, log(p2) is undefined. What value should I use for the
p1 log(p2) contribution to the integral/sum in this case?
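(Again a sketch, not from the original post: a common workaround for mismatched empirical supports is to bin both MCMC samples on one shared grid spanning their combined range, and apply additive pseudo-count smoothing so no cell has probability exactly zero; log(p2) is then finite everywhere. The function name and the pseudo-count alpha below are illustrative choices, not an established API:)

```python
import random

def common_histogram(x, y, bins=30, alpha=0.5):
    """Bin two samples on ONE shared grid covering both samples'
    ranges, then apply additive smoothing: add pseudo-count alpha
    to every cell so every probability is strictly positive.
    (alpha = 0.5 is the Jeffreys-prior choice; any small alpha > 0
    removes the log(0) problem.)"""
    lo = min(min(x), min(y))
    hi = max(max(x), max(y))
    width = (hi - lo) / bins

    def hist(sample):
        counts = [0] * bins
        for v in sample:
            # clamp the top edge into the last bin
            i = min(int((v - lo) / width), bins - 1)
            counts[i] += 1
        total = len(sample) + alpha * bins
        return [(c + alpha) / total for c in counts]

    return hist(x), hist(y)

# two overlapping but different samples, standing in for MCMC output
random.seed(1)
x = [random.gauss(0.0, 1.0) for _ in range(5000)]
y = [random.gauss(0.5, 1.0) for _ in range(5000)]
p1, p2 = common_histogram(x, y)
# every cell of p2 is now strictly positive, so the p1*log(p2)
# terms are finite wherever p1 puts mass
```

(The price is a small bias toward uniformity that shrinks as the sample size grows; reporting the bin count and alpha alongside the divergence keeps the result reproducible.)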
Thanks a lot in advance,
Adam
--
Adam Kleczkowski, Dept. of Plant Sciences and Selwyn College
Univ. of Cambridge, Downing Street, Cambridge CB2 3EA, England
tel +44-1223-330229, fax +44-1223-333953, e-mail [log in to unmask]
http://kleczkowski.net (private), http://mathbio.com (work)
-------------------------------------------------------------------
This list is for discussion of modelling issues and the BUGS software.
For help with crashes and error messages, first mail [log in to unmask]
To mail the BUGS list, mail to [log in to unmask]
Before mailing, please check the archive at www.jiscmail.ac.uk/lists/bugs.html
Please do not mail attachments to the list.
To leave the BUGS list, send LEAVE BUGS to [log in to unmask]
If this fails, mail [log in to unmask], NOT the whole list