
On 29 June 2013 01:13, Douglas Theobald <[log in to unmask]> wrote:
Just because the detectors spit out positive numbers (unsigned ints) does not mean that those values are Poisson distributed.  As I understand it, the readout can introduce non-Poisson noise, which is usually modeled as Gaussian.

OK but positive numbers would seem to rule out a Gaussian model.  I wonder whether anyone has actually done the experiment of obtaining the distribution of photon counts from a source at various intensities and using different types of detectors?  My suspicion is that the distributions would all be pretty close to Poisson.
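
For what it's worth, here is the sort of simulation I have in mind, a minimal sketch in Python (numpy assumed; the readout s.d. of 3 is an arbitrary illustrative choice, not a real detector spec):

import numpy as np

rng = np.random.default_rng(0)
n = 100_000
for rate in (2, 20, 200):
    pure = rng.poisson(rate, n)                # ideal counting detector: mean = variance
    noisy = pure + rng.normal(0.0, 3.0, n)     # add hypothetical Gaussian readout noise
    print(rate, pure.mean(), pure.var(), noisy.mean(), noisy.var())

The pure counts come out with mean = variance, while the readout noise inflates the variance by a constant, which is exactly the sort of departure the real experiment could detect.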
 
I think you mean that the Poisson has the property that mean(x) = var(x) (and since the ML estimate of the mean = count, you get your equation).  Many other distributions can approximate that (most of the binomial variants with small p).  Also, the standard gamma distribution with scale parameter=1 has that exact property.
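
A two-line numerical check of that property, in case it helps (scipy assumed; the shape values are arbitrary):

from scipy import stats

for k in (0.5, 3.0, 50.0):
    print(k, stats.poisson(k).stats(moments='mv'),
          stats.gamma(a=k, scale=1.0).stats(moments='mv'))   # mean = variance for both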

Yes.
 
Maybe it is, but that has its own problems.  I imagine that most people who collect an X-ray dataset think that the intensities in their mtz are indeed estimates of the true intensities from their crystal.  Seems like a reasonable thing to expect, especially since the Fourier transform of our model is supposed to predict Itrue.  If Iobs is not an estimate of Itrue, what exactly is its relevance to the structure inference problem?  Maybe it only serves as a way-station on the road to the French-Wilson correction?  As I understand it, not everyone uses ctruncate.

I assumed from the subject line that we were talking about the case where (c)truncate is used.  Those who don't are on their own AFAIC!
 
I admittedly don't understand TDS well.  But I thought it was generally assumed that TDS contributes rather little to the conventional background measurement outside of the spot (so Stout and Jensen tells me :).  So I was not even really considering TDS, which I see as a different problem from measuring background (am I mistaken here?).  I thought the background we measure (in the area surrounding the spot) mostly came from diffuse solvent scatter, air scatter, loop scatter, etc.  If so, then we can just consider Itrue = Ibragg + Itds, and worry about modeling the different components of Itrue at a different stage.  And then it would make sense to think about blocking a reflection (say, with a minuscule, precisely positioned beam stop very near the crystal) and measuring the background in the spot where the reflection would hit.  That background should be approximated pretty well by Iback, the background around the spot (especially if we move far enough away from the spot so that TDS is negligible there).

Stout & Jensen would not be my first choice to learn about TDS!  It's a textbook of small-molecule crystallography (I know, it was my main textbook during my doctorate on small-molecule structures), and small molecules are generally more highly ordered than macromolecules and therefore exhibit TDS on a much smaller scale (there are exceptions of course).

I think what you are talking about is "acoustic mode" TDS (so-called because of its relationship with sound transmission through a crystal), which peaks under the Bragg spots and is therefore very hard to distinguish from them.  The other two contributors to TDS that are often observed in MX are "optic mode" and "Einstein model".  TDS arises from correlated motions within the crystal: for acoustic mode it's correlated motions of whole unit cells within the lattice, for optic mode it's correlations of different parts of a unit cell (e.g. correlated domain motions in a protein), and for the Einstein model it's correlations of the movement of electrons as they are carried along by vibrating atoms (an "Einstein solid" is a simple model of a crystal proposed by A. Einstein consisting of a collection of independent quantised harmonic isotropic oscillators; I doubt he was aware of its relevance to TDS, that came later).

Here's an example of TDS: http://people.cryst.bbk.ac.uk/~tickle/iucr99/tds2f.gif .  The acoustic mode gives the haloes around the Bragg spots (but as I said mainly coincides with the spots), the optic mode gives the nebulous blobs, wisps and streaks that are uncorrelated with the Bragg spots (you can make out an inner ring of 14 blobs due to the 7-fold NCS), and the Einstein model gives the isotropic uniform greying increasing towards the outer edge (it makes it look like the diffraction pattern has been projected onto a sphere).  So I leave you to decide whether TDS contributes to the background!

As for the blocking beam stop, every part of the crystal (or at least every part that's in the beam) contributes to every part of the diffraction pattern (i.e. the Fourier transform).  This means that your beam stop would have to mask the whole crystal - any small bit of the crystal left unmasked and exposed to the beam would give a complete diffraction pattern!  That means you wouldn't see anything, not even the background!  You could leave a small hole in the centre for the direct beam and that would give you the air scatter contribution, but usually the air path is minimal anyway so that's only a very small contribution to the total background.  But let's say by some magic you were able to measure only the background, say Iback".  In a separate experiment, before you rigged up the mask, you will have measured Ispot.  How does that help?  You haven't measured Iback' directly and Iback" will differ from Iback' again due to count fluctuations.  So I think that's a non-starter.
 
Ahh, this all seems a bit too philosophical, what's really real and what's not really real.  There are of course many different observationally equivalent QM interpretations, not just the one you espouse above (e.g., "the only real quantities are the observables" and "wave function collapse" talk).  I won't go down the QM woo road -- next thing we'll confirm Godwin's law and start talking about Nazis ... (Blargh! there I did it :)  Anyway, I don't think any of this matters for the practice of probabilistic inference.  I can model the background from solvent/air scatter as Poisson (as we're dealing with photons that are well known to have Poisson distributions), and this background adds to the intensity from the coherent scatter of the crystal (which is also from Poissonian photons) --- giving the sum of two Poisson variates, which is itself a Poisson variate.  If, OTOH, we can't validly use that model, then I don't see any justification for F&W's method (see below).
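
To make the last point concrete, a small simulation sketch (numpy/scipy assumed; the rates 40 and 15 are arbitrary):

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 200_000
total = rng.poisson(40.0, n) + rng.poisson(15.0, n)     # coherent scatter + background photons
print(total.mean(), total.var())                        # both ~55, i.e. Poisson-like
print((total > 70).mean(), stats.poisson(55.0).sf(70))  # empirical vs exact Poisson(55) tail

The summed counts match a single Poisson with the summed rate, as the theory says they must.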

Sorry, your allusion to "Godwin's law" is lost on me.  Photons only have a Poisson distribution when you can count them: QM says it is meaningless to talk about something you can't observe.  It's like asking which path a particular photon took in the double-slit experiment: the best answer we can give (though QM says you shouldn't even attempt to answer, since it's a meaningless question) is that the photon went through both slits simultaneously while still remaining a single particle!  As you say, all the QM theories differ only in their interpretation (i.e. what's going on "behind the scenes"); they all agree about the observables.  It's meaningless to talk about the measurement error (or its distribution) if it's physically impossible to make the measurement!  It seems to me that the only way we can get at the PDF of Itrue is from the PDFs of the observables Ispot & Iback and the prior distribution of Itrue.  This is precisely what F & W does: I don't see what's wrong with that.
 
I actually don't want to rag on F&W too much --- the method is clever and I'm inherently fond of Bayesian applications.  I agree that their Gaussian approximation will work at high intensities, and I also suspect that it's probably "good enough" (perhaps even very good, except at very low photon counts).
 
But in the spirit of trying to do things as well as possible, I guess my problem with F&W is threefold:

(1) It works with Ispot-Iback, which seems arbitrary (perhaps just an anachronism at this point?), and intuitively I suspect that using Ispot-Iback discards some important information (e.g., Ispot-Iback is one value instead of two).

(2) It seems the method is really a sort of kludge --- we have some measurement, one that is evidently not an estimate of what we're trying to measure, and then we use F&W to fix that measurement so that we do get an estimate of what we're actually trying to measure.  Can't we just get our estimate of Itrue from the get-go?

(3) It makes a strong Gaussian assumption for strictly non-Gaussian data, which at some point will no longer hold.  We know photon counts are non-negative, in general, and specifically Poisson (in the absence of other added noise).  Why not use that?  My unease here is heightened by the fact that I don't exactly understand the F&W derivation, or where exactly the Gaussian approximation comes in --- it seems there's been a bit of sleight of hand in producing their equations (there is no formal derivation in their Acta Cryst. A 1978 paper), and I don't have a feel for where the Gaussian approximation works and where it breaks down.
 
So, the equation you have above is not quite correct, assuming that P(.) is a probability distribution.  To get an equation for P(J | Is,Ib), we have to use Bayes theorem, so your RHS should be divided by a normalization constant:

P(J | Is,Ib) = P(J | E(J)) P(Is,Ib | J,sdJ) / P(Is,Ib)

(or just replace your '=' with a proportionality)
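
In code the normalisation is just a division at the end; a grid sketch (numpy/scipy assumed; the exponential prior and the plug-in Poisson likelihood are illustrative stand-ins, not F&W's exact forms):

import numpy as np
from scipy import stats

J = np.arange(0, 301)                    # grid of candidate true intensities
prior = stats.expon(scale=30.0).pdf(J)   # hypothetical prior with E(J) = 30
Is, Ib = 25, 18                          # example spot and background counts
lik = stats.poisson(J + Ib).pmf(Is)      # illustrative likelihood, Ib plugged in for B
post = prior * lik
post /= post.sum()                       # divide by the normalisation constant P(Is,Ib)
print(post.sum(), (J * post).sum())      # 1.0, and the posterior mean E(J | Is,Ib)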

Yes OK: technically I was being sloppy, but you would be surprised how many papers use unnormalised PDFs! (obviously only when it makes no difference if the PDF is normalised or not). 
 
> The only function of Is and Ib that's relevant to the joint distribution of Is and Ib given J and sdJ, P(Is,Ib | J), is the difference Is-Ib (at least for large Is and Ib: I don't know what happens if they are small).

So why are we using Is-Ib and not {Is,Ib}?  First, note that:

P(Is,Ib | J,sdJ,B,sdB) = P(Is | Ib,J,sdJ) P(Ib | B,sdB)

since Is and Ib are dependent. I've augmented the notation some, to show the explicit dependence on parameters that will be important later.

Note that if you want to work as if Is and Ib are independent, then

P(Is,Ib | J,sdJ,B,sdB) = P(Is | J,sdJ) P(Ib | B,sdB)

But then you've got the likelihood function P(Is | J,sdJ), which is all you need to find an ML (or Bayesian) estimate of J given data Is.  So if Is and Ib are independent, there's no need for F&W at all.
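
Whether that factorisation is defensible seems to depend on what we condition on.  A small sketch (numpy assumed; all rates hypothetical): conditional on fixed B and J the two counts simulate as uncorrelated, but once B varies from reflection to reflection they become marginally correlated.

import numpy as np

rng = np.random.default_rng(2)
n = 50_000
Is = rng.poisson(50.0 + 10.0, n)         # spot counts at fixed B + J
Ib = rng.poisson(50.0, n)                # background counts at fixed B
print(np.corrcoef(Is, Ib)[0, 1])         # ~0: conditionally independent

B = rng.uniform(10.0, 200.0, n)          # B varying between reflections
print(np.corrcoef(rng.poisson(B + 10.0), rng.poisson(B))[0, 1])   # strongly positive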
 
Where did the assumption of independence come from?  I don't see how they can be independent (if you have a big Ib chances are Is will be big too).  The implication of having the likelihood function P(Is | J,sdJ) is that you don't even need to measure Iback to get an estimate of J, all you need is Ispot!  That makes no sense.

But where does P(Is-Ib | J,sdJ) actually come from?  Can you derive it?  It's not immediately obvious to me how I could predict Is-Ib if all you gave me were the values for J and sdJ.  In fact, I don't think it's possible.  What you need is P(Is-Ib | J,sdJ,B,sdB), as I showed above.  Now you and F&W say that P(Is-Ib | J,sdJ,B,sdB) is Gaussian, but where exactly does the Gaussian approximation come in?

Sorry I don't see the problem.  P(Is-Ib | J,sdJ) just means this is conditional on J & sdJ (i.e. _if_ we know J & sdJ _then_ we know Is-Ib with the specified probability).  I hope we've established that Is-Ib is (approximately) Gaussian for reasonably large Is & Ib (from the difference of 2 Poissons).  So Is-Ib is a sample of the population generated by a Gaussian distribution with mean = J and s.d. = sdJ.  This equation being conditional on J tells us nothing about J (or sdJ), but the prior of J is going to tell us that J cannot be negative (and therefore neither can its expectation which is our ultimate goal); however Is-Ib, being sampled from the Gaussian, can be positive or negative.
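
For the record, the exact distribution of the difference of two independent Poissons is the Skellam distribution, and you can check directly how fast it approaches the Gaussian; a sketch assuming scipy (the rates are arbitrary):

import numpy as np
from scipy import stats

for mu_s, mu_b in ((200.0, 150.0), (5.0, 3.0)):
    mean, sd = mu_s - mu_b, np.sqrt(mu_s + mu_b)
    d = np.arange(int(mean - 5 * sd), int(mean + 5 * sd) + 1)
    gap = np.abs(stats.skellam(mu_s, mu_b).pmf(d) - stats.norm(mean, sd).pdf(d)).max()
    print(mu_s, mu_b, gap)   # tiny at large counts, visible at small counts

At large counts the two agree closely; at small counts the discrepancy becomes visible, which is presumably where your worry about the approximation lives.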
 
For example, I've been pushing the model:

Is = Ib + Ij

where Ib comes from P(Ib|B,sdB) and Ij comes from P(Ij|J,sdJ).  Given this model, I can derive F&W (at least one way, there may be others).  From theory, both of those distributions are Poisson, and so for large values of B and J, both distributions will approximate a Gaussian.  So Is will also be Gaussian, from N(Is | B+J,sds) (sds^2=sdB^2+sdJ^2).  It follows then that Id=Is-Ib will also be Gaussian, from N(Id | J,sdd) (where sdd will be a bit complicated, but it will be larger than sds).

So it already seems to me that by using Is-Ib, the std dev of J will be larger than it needs to be --- we should be able to do better if we don't subtract the variates.  And the way I derived this, the Gaussian approximation is applied to both Ib and Ij, which is exactly where we don't need it --- supposedly F&W applies to weak observations, not strong ones.
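
Here is a rough numerical version of that claim, a sketch assuming numpy/scipy (the rates, the flat prior on J >= 0, and using Iback as a plug-in for the background rate are all illustrative simplifications):

import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
B, Jtrue, n = 50.0, 10.0, 5_000
Is = rng.poisson(B + Jtrue, n)            # spot counts
Ib = rng.poisson(B, n)                    # background counts
print((Is - Ib).mean(), (Is - Ib).std())  # the subtraction estimator of J

J = np.arange(0, 201)                     # grid posterior keeping Is and Ib separate
lik = stats.poisson(J[None, :] + Ib[:, None]).pmf(Is[:, None])
post_mean = (J * lik).sum(axis=1) / lik.sum(axis=1)
print(post_mean.mean(), post_mean.std())  # spread of the grid posterior means

The point being that the two-value treatment need not inherit the full spread of the difference.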

This all depends on your assumption that P(Ij|J,sdJ) is Poisson & I'm saying that you can't possibly know that (in fact you can use F & W to get an estimate & show that it's not).  How could you prove it since J cannot be observed?

Cheers

-- Ian