Just because the detectors spit out positive numbers (unsigned ints) does not mean that those values are Poisson distributed. As I understand it, the readout can introduce non-Poisson noise, which is usually modeled as Gaussian.
I think you mean that the Poisson has the property that mean(x) = var(x) (and since the ML estimate of the mean = count, you get your equation). Many other distributions can approximate that (most of the binomial variants with small p). Also, the gamma distribution with scale parameter = 1 has that exact property (mean = var = shape).
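A quick numerical sanity check of the mean = var point (just a sketch; the rate and shape values below are arbitrary):

```python
# Empirically check mean = var for the Poisson, and for the gamma
# distribution with scale parameter 1 (where mean = var = shape).
# All parameter values here are made-up illustrations.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

for lam in (3.0, 50.0):
    x = rng.poisson(lam, n)
    print(f"Poisson(lam={lam}):        mean={x.mean():.2f}  var={x.var():.2f}")

for k in (3.0, 50.0):
    x = rng.gamma(shape=k, scale=1.0, size=n)
    print(f"Gamma(shape={k}, scale=1): mean={x.mean():.2f}  var={x.var():.2f}")
```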
Maybe it is, but that has its own problems. I imagine that most people who collect an X-ray dataset think that the intensities in their mtz are indeed estimates of the true intensities from their crystal. That seems like a reasonable thing to expect, especially since the Fourier transform of our model is supposed to predict Itrue. If Iobs is not an estimate of Itrue, what exactly is its relevance to the structure inference problem? Maybe it only serves as a way-station on the road to the French-Wilson correction? As I understand it, not everyone uses ctruncate.
I admittedly don't understand TDS well. But I thought it was generally assumed that TDS contributes rather little to the conventional background measurement outside of the spot (so Stout and Jensen tell me :). So I was not even really considering TDS, which I see as a different problem from measuring background (am I mistaken here?). I thought the background we measure (in the area surrounding the spot) mostly came from diffuse solvent scatter, air scatter, loop scatter, etc. If so, then we can just consider Itrue = Ibragg + Itds, and worry about modeling the different components of Itrue at a different stage. And then it would make sense to think about blocking a reflection (say, with a minuscule, precisely positioned beam stop very near the crystal) and measuring the background in the spot where the reflection would hit. That background should be approximated pretty well by Iback, the background around the spot (especially if we move far enough away from the spot so that TDS is negligible there).
Ahh, this all seems a bit too philosophical, what's really real and what's not really real. There are of course many different observationally equivalent QM interpretations, not just the one you espouse above (e.g., "the only real quantities are the observables" and "wave function collapse" talk). I won't go down the QM woo road -- next thing we'll confirm Godwin's law and start talking about Nazis ... (Blargh! there I did it :) Anyway, I don't think any of this matters for the practice of probabilistic inference. I can model the background from solvent/air scatter as Poisson (as we're dealing with photons that are well known to have Poisson distributions), and this background adds to the intensity from the coherent scatter of the crystal (which is also from Poissonian photons) --- giving the sum of two Poisson variates, which is itself a Poisson variate. If, OTOH, we can't validly use that model, then I don't see any justification for F&W's method (see below).
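The Poisson-plus-Poisson point is easy to check numerically; here's a quick sketch (B and J are made-up rates, nothing crystallographic about them):

```python
# Sanity check that the sum of two independent Poisson variates is itself
# Poisson: compare the empirical pmf of Ib + Ij against the Poisson pmf
# with rate B + J.
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(1)
B, J, n = 4.0, 7.0, 500_000

total = rng.poisson(B, n) + rng.poisson(J, n)
counts = np.bincount(total) / n                       # empirical pmf of the sum
theory = poisson.pmf(np.arange(counts.size), B + J)   # Poisson(B + J) pmf

max_err = np.abs(counts - theory).max()
print(f"max |empirical - Poisson(B+J)| pmf difference: {max_err:.4f}")
```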
I actually don't want to rag on F&W too much --- the method is clever and I'm inherently fond of Bayesian applications. I agree that their Gaussian approximation will work at high intensities, and I also suspect that it's probably "good enough" (perhaps even very good, except at very low photon counts).
But in the spirit of trying to do things as well as possible, I guess my problem with F&W is threefold:
(1) It works with Ispot-Iback, which seems arbitrary (perhaps just an anachronism at this point?), and intuitively I suspect that using Ispot-Iback discards some important information (e.g., Ispot-Iback is one value instead of two).
(2) It seems the method is really a sort of kludge --- we have some measurement, one that is evidently not an estimate of what we're trying to measure, and then we use F&W to fix that measurement so that we do get an estimate of what we're actually trying to measure. Can't we just get our estimate of Itrue from the get-go?
(3) It makes a strong Gaussian assumption for strictly non-Gaussian data, which at some point will no longer hold. We know photon counts are non-negative in general, and specifically Poisson (in the absence of other added noise). Why not use that? My unease here is heightened by the fact that I don't exactly understand the F&W derivation, or where exactly the Gaussian approximation comes in --- it seems there's been a bit of sleight of hand in producing their equations (there is no formal derivation in their 1978 Acta Cryst. paper), and I don't have a feel for where the Gaussian approximation works and where it breaks down.
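For what it's worth, the breakdown of the Gaussian approximation at low counts is easy to quantify crudely. This sketch compares the Poisson(lam) pmf to the matching normal density at the integers (nothing here is specific to the F&W equations; the lam values are arbitrary):

```python
# Crude measure of how far Poisson(lam) is from N(lam, sqrt(lam)):
# half the summed absolute difference between the pmf and the normal
# density evaluated at the integers (an approximate total-variation distance).
import math

def poisson_pmf(k, lam):
    # computed in log space to avoid overflow at large k
    return math.exp(k * math.log(lam) - lam - math.lgamma(k + 1))

def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

for lam in (2.0, 10.0, 100.0):
    kmax = int(lam + 10.0 * math.sqrt(lam))   # far enough into the tail
    tvd = 0.5 * sum(abs(poisson_pmf(k, lam) - normal_pdf(k, lam, math.sqrt(lam)))
                    for k in range(kmax))
    print(f"lam = {lam:>5}: approx TV distance from the Gaussian = {tvd:.3f}")
```

The mismatch shrinks roughly like 1/sqrt(lam), so it is precisely the weak reflections where the Gaussian form is at its worst.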
So, the equation you have above is not quite correct, assuming that P(.) is a probability distribution. To get an equation for P(J | Is,Ib), we have to use Bayes' theorem, so your RHS should be divided by a normalization constant:
P(J | Is,Ib) = P(J | E(J)) P(Is,Ib | J,sdJ) / P(Is,Ib)
(or just replace your '=' with a proportionality)
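To make the normalization concrete, here's a minimal numerical sketch: a posterior over J on a grid, with Poisson likelihoods and a flat prior. All the numbers (Is, Ib, the grid, the plug-in B = Ib) are made up for illustration; this is not the F&W treatment:

```python
# Grid posterior P(J | Is, Ib) with a flat prior: the unnormalized product
# prior * likelihood must be divided by its integral (the P(Is, Ib) term).
import numpy as np
from scipy.stats import poisson

Is, Ib = 12, 5                       # made-up observed spot and background counts
B = Ib                               # crude plug-in estimate of the background rate
J_grid = np.linspace(0.01, 40.0, 2000)
dJ = J_grid[1] - J_grid[0]

# flat prior * P(Is | J + B) * P(Ib | B); the second pmf factor is constant
# here, but is kept to mirror the joint-likelihood notation in the text
unnorm = poisson.pmf(Is, J_grid + B) * poisson.pmf(Ib, B)
posterior = unnorm / (unnorm.sum() * dJ)   # divide by the normalization constant

print(f"posterior integrates to {posterior.sum() * dJ:.3f}")
print(f"posterior mode near J = {J_grid[np.argmax(posterior)]:.2f}")
```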
> The only function of Is and Ib that's relevant to the joint distribution of Is and Ib given J and sdJ, P(Is,Ib | J), is the difference Is-Ib (at least for large Is and Ib: I don't know what happens if they are small).

So why are we using Is-Ib and not {Is,Ib}? First, note that:
P(Is,Ib | J,sdJ,B,sdB) = P(Is | Ib,J,sdJ) P(Ib | B,sdB)
since Is and Ib are dependent. I've augmented the notation some, to show the explicit dependence on parameters that will be important later.
Note that if you want to work as if Is and Ib are independent, then
P(Is,Ib | J,sdJ,B,sdB) = P(Is | J,sdJ) P(Ib | B,sdB)
But then you've got the likelihood function P(Is | J,sdJ), which is all you need to find an ML (or Bayesian) estimate of J given data Is. So if Is and Ib are independent, there's no need for F&W at all.
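As a concrete version of that last point: if Is really were independent of Ib with, say, Is ~ Poisson(J), the ML estimate of J would just be the observed count itself. A toy grid search (Is is a made-up count; the constant log(Is!) term is dropped since it doesn't affect the maximizer):

```python
# The Poisson log-likelihood l(J) = Is*log(J) - J - log(Is!) is maximized
# at J = Is; verify this with a brute-force scan over a grid of J values.
import math

Is = 9                                   # made-up observed count
grid = [j / 100 for j in range(1, 3000)]
loglik = [Is * math.log(J) - J for J in grid]   # log(Is!) dropped (constant in J)
J_ml = grid[loglik.index(max(loglik))]
print(f"grid ML estimate: {J_ml:.2f} (expected {Is})")
```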
But where does P(Is-Ib | J,sdJ) actually come from? Can you derive it? It's not immediately obvious to me how I could predict Is-Ib if all you gave me were the values for J and sdJ. In fact, I don't think it's possible. What you need is P(Is-Ib | J,sdJ,B,sdB), as I showed above. Now you and F&W say that P(Is-Ib | J,sdJ,B,sdB) is Gaussian, but where exactly does the Gaussian approximation come in?
For example, I've been pushing the model:
Is = Ib + Ij
where Ib comes from P(Ib|B,sdB) and Ij comes from P(Ij|J,sdJ). Given this model, I can derive F&W (at least one way; there may be others). From theory, both of those distributions are Poisson, and so for large values of B and J, both distributions will approximate a Gaussian. So Is will also be Gaussian, from N(Is | B+J,sds) (sds^2 = sdB^2 + sdJ^2). It follows then that Id = Is-Ib (where Ib is now the independently measured background around the spot, not the background under it) will also be Gaussian, from N(Id | J,sdd) (where sdd^2 = sds^2 + sdB^2, so sdd is larger than sds).
So it already seems to me that by using Is-Ib, the std dev of J will be larger than it needs to be --- we should be able to do better if we don't subtract the variates. And the way I derived this, the Gaussian approximation is applied to both Ib and Ij, which is exactly where we don't need it --- supposedly F&W applies to weak observations, not strong ones.
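The variance-inflation point is easy to verify by simulation. A sketch with made-up J and B, treating the background under the spot and the measured background as independent Poisson draws with the same rate B:

```python
# With Ij ~ Poisson(J), and two independent Poisson(B) draws for the
# background under the spot and the measured background nearby:
#   Var(Is)      = J + B      (spot counts)
#   Var(Is - Ib) = J + 2B     (the subtracted, F&W-style quantity)
import numpy as np

rng = np.random.default_rng(2)
J, B, n = 20.0, 15.0, 1_000_000

Ij = rng.poisson(J, n)            # true spot contribution
Ib_under = rng.poisson(B, n)      # background actually under the spot
Ib_meas = rng.poisson(B, n)       # independently measured background nearby

Is = Ij + Ib_under
Id = Is - Ib_meas                 # the subtracted quantity

print(f"Var(Is) = {Is.var():.1f}  (theory J + B  = {J + B:.1f})")
print(f"Var(Id) = {Id.var():.1f}  (theory J + 2B = {J + 2*B:.1f})")
```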