JiscMail — Email discussion lists for the UK Education and Research communities

CCP4BB Archives — CCP4BB@JISCMAIL.AC.UK — June 2013

Subject: Re: ctruncate bug?
From: Douglas Theobald <[log in to unmask]>
Reply-To: Douglas Theobald <[log in to unmask]>
Date: Fri, 28 Jun 2013 20:13:58 -0400
Content-Type: text/plain
Parts/Attachments: text/plain (108 lines)

On Jun 27, 2013, at 12:30 PM, Ian Tickle <[log in to unmask]> wrote:

> On 22 June 2013 19:39, Douglas Theobald <[log in to unmask]> wrote:
>
>> So I'm no detector expert by any means, but I have been assured by those who are that there are non-Poissonian sources of noise --- I believe mostly in the readout, when photon counts get amplified. Of course this will depend on the exact type of detector, maybe the newest have only Poisson noise.
>
> Sorry for delay in responding, I've been thinking about it. It's indeed possible that the older detectors had non-Poissonian noise as you say, but AFAIK all detectors return _unsigned_ integers (unless possibly the number is to be interpreted as a flag to indicate some error condition, but then obviously you wouldn't interpret it as a count). So whatever the detector AFAIK it's physically impossible for it to return a negative number that is to be interpreted as a photon count (of course the integration program may interpret the count as a _signed_ integer but that's purely a technical software issue).

Just because the detectors spit out positive numbers (unsigned ints) does not mean that those values are Poisson distributed. As I understand it, the readout can introduce non-Poisson noise, which is usually modeled as Gaussian.
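As a numerical sketch of that point (the photon rate and read-noise level below are hypothetical), simulating Poisson photon arrivals plus Gaussian readout noise shows that the recorded counts are no longer Poisson: their variance exceeds their mean by roughly the read-noise variance, even though the detector would still report them as non-negative integers after digitization.

```python
import numpy as np

rng = np.random.default_rng(0)
true_rate = 50.0    # mean photon count per pixel (hypothetical)
read_sigma = 5.0    # rms of Gaussian readout noise (hypothetical)
n = 100_000

photons = rng.poisson(true_rate, n)                # Poisson arrival statistics
counts = photons + rng.normal(0.0, read_sigma, n)  # readout adds Gaussian noise

# For a pure Poisson process, var == mean; read noise inflates the variance
# by roughly read_sigma**2, so the counts are no longer Poisson distributed.
print(f"mean = {counts.mean():.1f}, var = {counts.var():.1f}")
```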

> I think we're all at least agreed that, whatever the true distribution of Ispot (and Iback) is, it's not in general Gaussian, except as an approximation in the limit of large Ispot and Iback (with the proviso that under this approximation Ispot & Iback can never be negative). Certainly the assumption (again AFAIK) has always been that var(count) = count and I think I'm right in saying that only a Poisson distribution has that property?

I think you mean that the Poisson has the property that mean(x) = var(x) (and since the ML estimate of the mean = count, you get your equation). Many other distributions can approximate that (most of the binomial variants with small p). Also, the standard gamma distribution with scale parameter=1 has that exact property.
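The mean-equals-variance claim for both distributions is easy to sanity-check with scipy (the parameter value 7.3 is arbitrary):

```python
from scipy import stats

# Poisson: mean and variance both equal the rate parameter mu.
mu = 7.3
p_mean, p_var = stats.poisson.mean(mu), stats.poisson.var(mu)

# Gamma with scale = 1: mean = shape * scale = k, var = shape * scale**2 = k.
k = 7.3
g_mean, g_var = stats.gamma.mean(k, scale=1), stats.gamma.var(k, scale=1)

print(p_mean, p_var, g_mean, g_var)  # all four equal 7.3
```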

>> No, it's just terminology. For you, Iobs is defined as Ispot-Iback, and that's fine. (As an aside, assuming the Poisson model, this Iobs will have a Skellam distribution, which can take negative values and asymptotically approaches a Gaussian.) The photons contributed to Ispot from Itrue will still be Poisson. Let's call them something besides Iobs, how about Ireal? Then, the Poisson model is
>>
>> Ispot = Ireal + Iback'
>>
>> where Ireal comes from a Poisson with mean Itrue, and Iback' comes from a Poisson with mean Iback_true. The same likelihood function follows, as well as the same points. You're correct that we can't directly estimate Iback', but I assume that Iback (the counts around the spot) come from the same Poisson with mean Iback_true (as usual).
>>
>> So I would say, sure, you have defined Iobs, and it has a Skellam distribution, but what, if anything, does that Iobs have to do with Itrue? My point still holds, that your Iobs is not a valid estimate of Itrue when Ispot<Iback. Iobs as an estimate of Itrue requires unphysical assumptions, namely that photon counts can be negative. It is impossible to derive Ispot-Iback as an estimate for Itrue (when Ispot<Iback) *unless* you make that unphysical assumption (like the Gaussian model).
>
> Please note that I have never claimed that Iobs = Ispot - Iback is to be interpreted as an estimate of Itrue, indeed quite the opposite: I agree completely that Iobs has little to do with Itrue when Iobs is negative. In fact I don't believe anyone else is claiming that Iobs is to be interpreted as an estimate of Itrue either, so maybe this is the source of the misunderstanding?

Maybe it is, but that has its own problems. I imagine that most people who collect an X-ray dataset think that the intensities in their mtz are indeed estimates of the true intensities from their crystal. That seems like a reasonable thing to expect, especially since the Fourier transform of our model is supposed to predict Itrue. If Iobs is not an estimate of Itrue, what exactly is its relevance to the structure inference problem? Maybe it only serves as a way-station on the road to the French-Wilson correction? As I understand it, not everyone uses ctruncate.

> Certainly for me Ispot - Iback is merely the difference between the two measurements, nothing more. Maybe if we called it something other than Iobs (say Idiff), or even avoided giving it a name altogether that would avoid any further confusion? Perhaps this whole discussion has been merely about terminology?
>
>> I'm also puzzled as to your claim that Iback' is not Poisson. I don't think your QM argument is relevant, since we can imagine what we would have detected at the spot if we'd blocked the reflection, and that # of photon counts would be Poisson. That is precisely the conventional logic behind estimating Iback' with Iback (from around the spot), it's supposedly a reasonable control. It doesn't matter that in reality the photons are indistinguishable --- that's exactly what the probability model is for.
>
> I'm not clear how you would "block the reflection"? How could you do that without also blocking the background under it? A large part of the background comes from the TDS which is coming from the same place that the Bragg diffraction is coming from, i.e. the crystal. I know of no way of stopping the Bragg diffraction without also stopping the TDS (or vice versa). Indeed the theory shows that there is in reality no distinction between Bragg diffraction and TDS; they are just components of the total scattering that we find convenient to imagine as separate in the dynamical model of scattering (see http://people.cryst.bbk.ac.uk/~tickle/iucr99/s61.html for the relevant equations).

I admittedly don't understand TDS well. But I thought it was generally assumed that TDS contributes rather little to the conventional background measurement outside of the spot (so Stout and Jensen tells me :). So I was not even really considering TDS, which I see as a different problem from measuring background (am I mistaken here?). I thought the background we measure (in the area surrounding the spot) mostly came from diffuse solvent scatter, air scatter, loop scatter, etc. If so, then we can just consider Itrue = Ibragg + Itds, and worry about modeling the different components of Itrue at a different stage. And then it would make sense to think about blocking a reflection (say, with a minuscule, precisely positioned beam stop very near the crystal) and measuring the background in the spot where the reflection would hit. That background should be approximated pretty well by Iback, the background around the spot (especially if we move far enough away from the spot so that TDS is negligible there).

> Any given photon "experiences" the whole crystal on its way from the source to the detector (in fact it experiences more than that: it traverses all possible trajectories simultaneously, it's just that the vast majority cancel by destructive interference). The resulting wave function of the photon only collapses to a single point on hitting the detector, with a frequency proportional to the square of the wave function at that point, so it's meaningless to talk about the trajectory of an individual photon or whether it "belongs" to Ireal or Iback'. You can't talk about the error distribution of the experimental measurements of some quantity if it's a physical impossibility to design an experiment to measure it! It can of course have a probability distribution derived from prior knowledge of the properties of crystals, but that's not a Poisson, it's a Wilson (exponential) distribution. Is that what you're thinking of? According to QM the only real quantities are the observables (or functions of observables); in this case only Ispot, Iback and Ispot-Iback (and any other functions of Ispot & Iback that might be relevant) are physically meaningful quantities, all else is mere speculation, i.e. part of the model.

Ahh, this all seems a bit too philosophical, what's really real and what's not really real. There are of course many different observationally equivalent QM interpretations, not just the one you espouse above (e.g., "the only real quantities are the observables" and "wave function collapse" talk). I won't go down the QM woo road -- next thing we'll confirm Godwin's law and start talking about Nazis ... (Blargh! there I did it :) Anyway, I don't think any of this matters for the practice of probabilistic inference. I can model the background from solvent/air scatter as Poisson (as we're dealing with photons that are well known to have Poisson distributions), and this background adds to the intensity from the coherent scatter of the crystal (which is also from Poissonian photons) --- giving the sum of two Poisson variates, which is itself a Poisson variate. If, OTOH, we can't validly use that model, then I don't see any justification for F&W's method (see below).
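The additivity claim in that last step (independent Poisson background plus Poisson signal gives a Poisson total) can be checked numerically; the rates below are hypothetical:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
B, J = 12.0, 30.0   # hypothetical background and signal rates
n = 200_000

# Sum of two independent Poisson variates.
total = rng.poisson(B, n) + rng.poisson(J, n)

# The sum should itself be Poisson with rate B + J = 42: mean == var,
# and the empirical pmf should match Poisson(B + J) at every count.
print(total.mean(), total.var())
for count in (35, 42, 50):
    print(count, np.mean(total == count), stats.poisson.pmf(count, B + J))
```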

> As I understand it the reason you are suggesting an alternate way of estimating Itrue is that you have a fundamental objection to the F & W algorithm? However I'm not clear precisely what you find objectionable?

I actually don't want to rag on F&W too much --- the method is clever and I'm inherently fond of Bayesian applications. I agree that their Gaussian approximation will work at high intensities, and I also suspect that it's probably "good enough" (perhaps even very good, except at very low photon counts).

But in the spirit of trying to do things as well as possible, I guess my problem with F&W is threefold:

(1) It works with Ispot-Iback, which seems arbitrary (perhaps just an anachronism at this point?), and intuitively I suspect that using Ispot-Iback discards some important information (e.g., Ispot-Iback is one value instead of two).

(2) It seems the method is really a sort of kludge --- we have some measurement, one that is evidently not an estimate of what we're trying to measure, and then we use F&W to fix that measurement so that we do get an estimate of what we're actually trying to measure. Can't we just get our estimate of Itrue from the get-go?

(3) It makes a strong Gaussian assumption for strictly non-Gaussian data, which at some point will no longer hold. We know photon counts are non-negative, in general, and specifically Poisson (in the absence of other added noise). Why not use that? My unease here is heightened by the fact that I don't exactly understand the F&W derivation, or where exactly the Gaussian approximation comes in --- it seems there's been a bit of sleight of hand in producing their equations (there is no formal derivation in their ACA 1978 paper), and I don't have a feel for where the Gaussian approximation works and where it breaks down.

> Perhaps it would be useful to go through F & W in detail and identify where the problem (if any) lies?
>
> We can say that the total likelihood of J (= Itrue) given Is (= Ispot) and Ib (= Iback) is equal to the prior probability density of J given only knowledge of the crystal (i.e. the estimated no of atoms from which we can calculate E(J)), multiplied by the joint probability density of Is and Ib given J and its SD (assumed equal to the SD of Is-Ib):
>
> P(J | Is,Ib) = P(J | E(J)) P(Is,Ib | J,sdJ)

So, the equation you have above is not quite correct, assuming that P(.) is a probability distribution. To get an equation for P(J | Is,Ib), we have to use Bayes theorem, so your RHS should be divided by a normalization constant:

P(J | Is,Ib) = P(J | E(J)) P(Is,Ib | J,sdJ) / P(Is,Ib)

(or just replace your '=' with a proportionality)
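To make the normalization concrete, here is a minimal grid sketch of the corrected equation, under assumed stand-in distributions (an exponential, Wilson-like prior with mean E(J) = 40, a Gaussian likelihood for the difference, and the values Id = -5, sdd = 10 are all hypothetical). The point is that after dividing by the evidence, the posterior integrates to 1 and yields a small positive estimate of J even from a negative net measurement:

```python
import numpy as np
from scipy import stats

# Uniform grid over the true intensity J (spacing cancels in the ratio below).
J = np.linspace(0.0, 200.0, 4001)
prior = stats.expon.pdf(J, scale=40.0)   # assumed prior with mean E(J) = 40
Id, sdd = -5.0, 10.0                     # a weak, slightly negative measurement
like = stats.norm.pdf(Id, loc=J, scale=sdd)

# Bayes: posterior proportional to prior * likelihood; dividing by the sum
# plays the role of the normalization constant P(Id) on the grid.
unnorm = prior * like
post = unnorm / unnorm.sum()

# Posterior mean: a small positive estimate of J even though Id < 0.
post_mean = (post * J).sum()
print(post_mean)
```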

> The only function of Is and Ib that's relevant to the joint distribution of Is and Ib given J and sdJ, P(Is,Ib | J), is the difference Is-Ib (at least for large Is and Ib: I don't know what happens if they are small).

So why are we using Is-Ib and not {Is,Ib}? First, note that:

P(Is,Ib | J,sdJ,B,sdB) = P(Is | Ib,J,sdJ) P(Ib | B,sdB)

since Is and Ib are dependent. I've augmented the notation some, to show the explicit dependence on parameters that will be important later.

Note that if you want to work as if Is and Ib are independent, then

P(Is,Ib | J,sdJ,B,sdB) = P(Is | J,sdJ) P(Ib | B,sdB)

But then you've got the likelihood function P(Is | J,sdJ), which is all you need to find an ML (or Bayesian) estimate of J given data Is. So if Is and Ib are independent, there's no need for F&W at all.
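To sketch that last point under the Poisson model discussed in this thread (so Is ~ Poisson(J + B), treating the background rate B as known; the numbers below are hypothetical), maximizing the likelihood in J directly gives max(Is - B, 0): the estimate coincides with the background subtraction when Ispot > Iback, but is clamped at zero, never negative, when Ispot < Iback:

```python
import numpy as np
from scipy import stats

def mle_J(Is, B, grid_max=500.0):
    """Maximum-likelihood J for Is ~ Poisson(J + B), with B known and J >= 0."""
    J = np.linspace(0.0, grid_max, 100_001)
    ll = stats.poisson.logpmf(Is, J + B)
    return J[np.argmax(ll)]

# The Poisson log-likelihood Is*log(J+B) - (J+B) peaks at J = Is - B,
# clipped at the physical boundary J = 0.
print(mle_J(Is=40, B=25))   # interior maximum, approximately 15
print(mle_J(Is=10, B=25))   # boundary maximum: 0, a non-negative estimate
```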

> Note that it's perfectly proper to talk about P(Is-Ib | J) in this context: it's the distribution of the difference you expect to observe given any J.

I agree with that, you're right, assuming you can make practical sense of P(Is-Ib | J) ...

> So the above can be rewritten as:
>
> P(J | Is-Ib) = P(J | E(J)) P(Is-Ib | J,sdJ)
>
> P(Is-Ib | J,sdJ) is just the Gaussian error distribution of Is-Ib making use of the Gaussian approximation of the Poisson.

But where does P(Is-Ib | J,sdJ) actually come from? Can you derive it? It's not immediately obvious to me how I could predict Is-Ib if all you gave me were the values for J and sdJ. In fact, I don't think it's possible. What you need is P(Is-Ib | J,sdJ,B,sdB), as I showed above. Now you and F&W say that P(Is-Ib | J,sdJ,B,sdB) is Gaussian, but where exactly does the Gaussian approximation come in?
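One way to probe where the Gaussian approximation to Is-Ib holds is to compare the exact distribution of a difference of two independent Poissons (the Skellam, available in scipy) against the moment-matched Gaussian; the rates below are hypothetical:

```python
import numpy as np
from scipy import stats

def max_pmf_gap(mu_s, mu_b):
    """Largest gap between the Skellam pmf of Is - Ib and its Gaussian match."""
    mean, var = mu_s - mu_b, mu_s + mu_b
    k = np.arange(int(mean - 6 * var**0.5), int(mean + 6 * var**0.5) + 1)
    skellam = stats.skellam.pmf(k, mu_s, mu_b)
    gauss = stats.norm.pdf(k, mean, var**0.5)
    return np.max(np.abs(skellam - gauss))

print(max_pmf_gap(5.0, 4.0))       # weak spot: visible disagreement
print(max_pmf_gap(500.0, 400.0))   # strong spot: near-perfect agreement
```

This suggests the approximation is benign for strong spots and deteriorates exactly where the French-Wilson machinery matters most, at low counts.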

For example, I've been pushing the model:

Is = Ib + Ij

where Ib comes from P(Ib|B,sdB) and Ij comes from P(Ij|J,sdJ). Given this model, I can derive F&W (at least one way, there may be others). From theory, both of those distributions are Poisson, and so for large values of B and J, both distributions will approximate a Gaussian. So Is will also be Gaussian, from N(Is | J+B, sds) (sds^2 = sdJ^2 + sdB^2). It follows then that Id = Is-Ib will also be Gaussian, from N(Id | J, sdd) (where sdd will be a bit complicated, but it will be larger than sds).

So it already seems to me that by using Is-Ib, the std dev of J will be larger than it needs to be --- we should be able to do better if we don't subtract the variates. And the way I derived this, the Gaussian approximation is applied to both Ib and Ij, which is exactly where we don't need it --- supposedly F&W applies to weak observations, not strong ones.
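The variance inflation from subtracting independent measurements is easy to demonstrate: var(Is - Ib) = var(Is) + var(Ib). A short simulation under the Poisson model above (hypothetical rates, and assuming the background estimate Ib is a single independent Poisson measurement rather than an average over many pixels):

```python
import numpy as np

rng = np.random.default_rng(2)
B, J, n = 40.0, 25.0, 200_000   # hypothetical background and signal rates

Is = rng.poisson(J, n) + rng.poisson(B, n)  # spot counts: signal + background
Ib = rng.poisson(B, n)                      # independent background estimate
Id = Is - Ib                                # the subtracted "Iobs"

# Variances add under subtraction of independent variates:
# var(Is) is about J + B = 65, but var(Id) is about J + 2B = 105.
print(Is.var(), Id.var())
```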

But I'm probably missing something.


> Finally, integrating out J to get the expectation of J (or of F=sqrt(J)) completes the F-W procedure. As indicated earlier there are good reasons to postpone this until after merging equivalents (which is exactly what we do now).
>
> So what's wrong with that?
>
> Cheers
>
> -- Ian
>
