
CCP4BB Archives (CCP4BB@JISCMAIL.AC.UK)

Subject:      Re: CCDs + Re: PILATUS data collection
From:         James Holton <[log in to unmask]>
Reply-To:     James Holton <[log in to unmask]>
Date:         Thu, 16 May 2013 09:16:09 -0700
Content-Type: text/plain (257 lines)

Just a few things to add:

  That "disappearing sharp spots" problem has been fixed in _current_ 
versions of ADXV.  Try downloading a new copy.  It's free.

To answer the OP's question: there was a paper written about practical 
Pilatus data collection recently:
http://dx.doi.org/10.1107/S0907444911007608

But I think it worth pointing out that, theoretically, the best 
"strategy" for a detector with no read-out noise is no strategy at all.  
This is because the whole point of a "strategy" is to get a complete 
dataset on as few images as possible because each image carries a 
certain amount of noise with it.  Thus minimizing the number of images 
minimizes this source of noise.  However, if there is no read-out noise 
then it doesn't matter how many images you have, and the next thing to 
worry about is radiation damage. The best way to deal with radiation 
damage is to divide your data over as many images as possible.  You then 
move the problem of "strategy" to after you go home, where you can 
figure out where to "cut" the data or perhaps even do some zero-dose 
extrapolation.

An instructive way to think about this is to consider the most extreme 
case of "high multiplicity" where you record only one photon per image.  
For a 100um round crystal, a 30 MGy dataset will involve only a trillion 
or so scattered photons, and that number is pretty much fixed by the 
radiation damage physics (Holton & Frankel 2010).  So when it comes to 
"strategy" the only question is how to divide these photons up.  Images 
with only one photon hit and every other pixel "zero" will compress very 
well, so a "single photon image" dataset doesn't take up nearly as 
much space as you might initially think.  If
you have such a "single-photon image dataset" you can then sum all the 
images with "phi" values that fall between 0 and 1 as one new image, 
then sum phi= 1 to 2 as a second image, etc. and process with your 
favorite software.  Or, you can change your mind and sum images for 0 to 
0.1, 0.1 to 0.2, etc.  Essentially, single-photon image data collection 
would allow you to devise any conceivable "strategy" AFTER you have 
collected the data!
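
As a back-of-the-envelope sketch (Python; the 'events' array and the 
Pilatus-6M-sized detector shape are made up for illustration), re-binning 
a single-photon event list into rotation images of whatever delta-phi you 
fancy, after the fact, could look something like this:

    import numpy as np

    def rebin_events(events, delta_phi, shape=(2527, 2463)):
        # events: (N, 3) array of pixel-x, pixel-y, phi (degrees),
        # one row per photon
        phi = events[:, 2]
        n_images = int(np.ceil(phi.max() / delta_phi))
        images = np.zeros((n_images,) + shape, dtype=np.uint32)
        frame = np.minimum((phi // delta_phi).astype(int), n_images - 1)
        x = events[:, 0].astype(int)
        y = events[:, 1].astype(int)
        np.add.at(images, (frame, y, x), 1)   # one count per photon
        return images

    # same photons, two different "strategies", chosen at home:
    # coarse = rebin_events(events, delta_phi=1.0)   # 1.0 deg images
    # fine   = rebin_events(events, delta_phi=0.1)   # 0.1 deg images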

Why doesn't everybody do this?  Mostly because they are impatient.  In 
fact, it boggles my mind sometimes how someone who has slaved away on a 
project for years, even decades, will balk at doing a high-multiplicity 
data collection because it will "take too long".  Admittedly, a 
trillion-image dataset collected at 200 Hz would take 158 years to 
collect, but what about a 2-photon-per-image dataset? 10?  100,000?  
What about one photon per pixel (on average)?  That would only take 14 
minutes (1e12 photons / 6e6 pixels per image / 200 images per second / 
60 seconds per minute).  Yes, you'd have
almost 200,000 images to deal with, but if getting the strategy "right" 
is going to make-or-break solving your structure, do you really care?
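
Just to spell out that arithmetic (same round numbers as above):

    total_photons = 1e12     # ~30 MGy on a 100 um crystal (Holton & Frankel 2010)
    pixels_per_image = 6e6   # a Pilatus-6M-sized detector
    frame_rate = 200.0       # images per second

    # one photon per IMAGE means 1e12 images:
    print(total_photons / frame_rate / (3600 * 24 * 365.25))   # ~158 years

    # one photon per PIXEL (on average) means ~167,000 images:
    print(total_photons / pixels_per_image / frame_rate / 60)  # ~14 minutes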

Currently, the major barrier to single-photon image or 
single-photon-pixel datasets is the processing software.  Things like 
estimating the variance of a pixel field with no photons in it are, 
well, problematic.  But in the future I imagine a more "holistic" 
approach could be taken, where not only the position of the photon hit is 
considered (x,y and phi), but the time as well. Sort of an "intrinsic" 
zero-dose extrapolation.

At this point, one might wonder why we care so much about "dynamic 
range" when all you need is one or two bits per pixel (0-3 photons), and 
that actually IS a very good question.  A question that carries us into 
other sources of error that we don't see in the detector specs, such as 
the accuracy of the pixel calibration.  You might think that if you are 
"counting photons" then the calibration would be perfect, but this is 
only true if you can be sure you counted ALL of the photons, and there 
are no detectors that do that.  In most cases (Pilatus and 
phosphor-coupled CCD alike) about 20% of the photons pass right through 
the x-ray sensitive layer.  If this "capture fraction" varies from pixel 
to pixel (and it always does), then that is a source of systematic 
error.  Same thing for photons that get absorbed in the front window, or 
flecks of dust on the front window.

   Yes, you can "correct" for all these things, and that is what 
"calibration" is all about, but it is important to remember that any 
"calibration" is the result of some sort of experimental measurement, 
and all experimental measurements have an error bar. Calibrating 
something to 5% accuracy is pretty easy.  1% is difficult and 0.1% is 
very, very hard.  The net result of all these "calibration" errors ends up 
in your low-resolution Rmerge (at high multiplicity).  Remember, if your 
brightest spots have an average of 1 million photons in them, then 
Rmerge should be 0.1% (1e6 vs sqrt(1e6)).  The fact that it is bigger 
means that something other than photon-counting error is playing a role.
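
That expectation is nothing more than Poisson statistics:

    import math
    N = 1e6                    # photons in your brightest spots
    print(math.sqrt(N) / N)    # 0.001, i.e. a 0.1% floor on Rmerge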

The error due to pile-up is something that is probably news to our 
current generation of CCD-trained crystallographers who are too young to 
remember the "multiwire era", so I think it important to describe it 
here.  Mind you, Dectris has done a very good job of minimizing the 
influence of pile-up on your data, and on Pilatus3 they are taking even 
further steps to deal with it, but Pilatus is still a counting device, 
and there are certain things the user of any counting device should bear 
in mind:

1) they are not linear
2) they are sensitive to photons/s, not just photons
3) they can "roll over" at high intensity
4) it can be hard to know if any of the above is happening

On any counting device (such as Pilatus) some absorbed photons go 
"missing" because they hit a pixel while that pixel was still 
"recovering" from the last photon hit.  This is called "pile-up" and it 
is the main reason why I don't like counting devices. Perhaps I am 
emotionally scarred from the early days of commissioning my beamline 
when I was trying to figure out why I was getting upside-down absorption 
spectra for Se scans.  A rookie mistake (in retrospect), but I wasted a 
lot of time on that one. Turns out my fluorescence detector (a counting 
device) was not only having pile-up issues, but had rolled over into a 
regime where the detector spends more time "recovering" than it does 
counting, and increasing the true intensity actually gives you less 
observed "counts".  Sometimes a great deal less than the "maximum count 
rate".

   What haunts me to this day about "pile up" is that the detector does 
not tell you when it happens!  That is, Pilatus introduces a new kind of 
'silent overload'.  The first kind of overload (more than 20 bits of 
photons) does turn your pixels yellow to tell you that something is 
"wrong".  But the new kind of overload (too many photons/s at some 
point during the exposure) does not give you a yellow pixel.  Sometimes 
you will see a spot on a Pilatus that has a "hole" in the middle, and 
that is an excellent indicator that the central pixel "rolled over" due 
to pile-up, but with single-pixel spots, this trick doesn't work.  Yes, 
there are ways around this problem, one of which is Sol Gruner's new 
"mmPAD", which can integrate (no pile up) as well as count photons.  
This has the advantage of being a potentially self-calibrating detector, 
and it is now being marketed by ADSC. Dectris, however, seems to be 
specifically avoiding anything that isn't a counter.

Of course, roll-over is the most extreme kind of pile-up; at lower 
intensities what you get is non-linearity.  This is probably the most 
valuable lesson I learned from my fluorescence detector: counting 
devices are fundamentally non-linear.  That is, the graph of "true 
photons" vs "observed counts" is always curved.  And not only that, it 
has two solutions for every "observed counts". Which one is right?  
There is actually no way to tell, not without changing the incident 
intensity and seeing if "observed counts" goes up or down.  The "pile up 
correction" algorithm always picks the lower of the two possible "true 
photons".

Some beamline scientists turn the pile-up correction off.  Why? Because 
sometimes it makes things worse.  This is because the equation Poisson 
derived for "correcting" the count rate assumes on a very fundamental 
level that the "true intensity" is constant over the counting period, 
but if you have spots rotating through the Ewald sphere or bunches of 
electrons flashing by in the storage ring, then this is not exactly the 
case.  This has even been studied recently:
http://dx.doi.org/10.1107/S0909049513000411

   The pile-up problems arising from sharp spots can be mitigated with 
"fine slicing".  If you slice fine enough to "outrun" the variations in 
instantaneous intensity as the relp moves in and out of the Ewald 
sphere, then the intensity actually is "constant" over any given 
"exposure time", and the pile-up correction will work properly.  This is 
why you MUST fine-slice when using a Pilatus detector so that the 
delta-phi is smaller than the rocking width of your crystal.  With a CCD 
(which has no pile-up) the optimum delta-phi is usually larger.

How can you tell if you are having pile-up problems?  The most 
appropriate test is to repeat the dataset with an attenuated beam and 
longer exposure time so that you get the same photons/pixel, but 
different photons/s.  If the second dataset is better than the first, 
then you had pile-up problems.  Or, perhaps you had beam-flicker or 
shutter-jitter problems, but do you care?  The second dataset was 
better!  The true test for pile-up issues is to merge the two datasets 
and look at "outliers".  If most of the outliers show bright spots 
getting brighter in the slow dataset (relative to weak spots), then you 
had pile-up problems in the fast dataset.
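
In script form, that test might look like this (a hypothetical sketch; 
I_fast and I_slow are assumed to map hkl to scaled, merged intensities 
from the two passes):

    import numpy as np

    def pileup_signature(I_fast, I_slow):
        common = sorted(set(I_fast) & set(I_slow))
        f = np.array([I_fast[hkl] for hkl in common])
        s = np.array([I_slow[hkl] for hkl in common])
        ratio = s / f
        # pile-up clips the bright spots in the fast pass, so the
        # slow/fast ratio trends UP with intensity
        strong = f > np.percentile(f, 90)
        weak = f < np.percentile(f, 50)
        return ratio[strong].mean() / ratio[weak].mean()

    # a value noticeably above 1.0 suggests the fast dataset lost
    # photons to pile-up in its brightest spots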

How can you be sure you don't have pile-up problems?  Slow down. As long 
as none of your pixels experience photon arrival rates approaching the 
maximum count rate (~1 million photons/s for Pilatus) at ANY TIME during 
the exposure, then you're good.  This is, of course, easier said than 
done because you never know what your brightest spot will be.  However, 
once you have a first-pass dataset, you can look at your processing 
output to find the brightest spot and see at what "phi" value it fell.  
If you rotate to that "phi" and take a series of stills (no rotation 
during exposure) with different attenuation settings, you should be able 
to verify whether or not the intensity of that brightest spot scales 
linearly with the incident photons/s.
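
For example (made-up numbers, just to show the shape of the test):

    import numpy as np

    transmission = np.array([0.01, 0.03, 0.1, 0.3, 1.0])    # attenuator settings
    counts = np.array([1.0e3, 3.0e3, 9.9e3, 2.8e4, 7.6e4])  # brightest-spot counts

    # a linear detector gives constant counts/transmission;
    # pile-up makes the ratio sag as the beam gets brighter
    ratio = counts / transmission
    print(ratio / ratio[0])   # [1.00 1.00 0.99 0.93 0.76] -> pile-up at full beam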

How big is the error due to pile-up?  Well, it does depend on a number 
of things, but as long as you are below the "maximum count rate", the 
maximum possible error introduced by the "correction" is always smaller 
than the error due to applying no correction at all.  For a 100 ns 
dead-time (note that this is the "dead time" of the counting circuit, 
not the "dead time" between image read-outs) and a phi slice short 
enough for the spot intensity to be "constant", the error of not doing 
the pile-up correction is 1% for 100,000 photons/s, but only 0.1% for 
10,000 photons/s.  Note that this is not the average count rate across 
the whole detector, it is for a single pixel.  That is, if you have a 
pixel with 100,000 counts in a 1s exposure, then pile-up could be 
introducing up to a 1% error, even though the counting error (sqrt(N)) 
is only 0.3%.  But if you attenuate 10-fold and expose for 10s (100,000 
counts at 10,000 counts/pixel/s), the error due to pile-up will be only 
0.1%, and then the dominant source of error actually is the 
photon-counting limit.
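
Those numbers follow from the first-order rule of thumb that the 
uncorrected fractional loss is about (count rate x dead time):

    import math

    tau = 100e-9                   # counting-circuit dead time

    for rate in (1e5, 1e4):        # photons/s on a SINGLE pixel
        print(rate, rate * tau)    # 1e5 -> 1%, 1e4 -> 0.1% loss

    # counting error for 100,000 photons, however you collect them:
    N = 1e5
    print(math.sqrt(N) / N)        # ~0.3%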

Of course, if those 100,000 photons were all bunched up into a very 
short period of time within the exposure, you might see quite a few 
fewer "counts".  In the extreme case of a free electron laser, where the x-ray 
pulse is only 10 fs long, all of the photons arrive at the detector at 
the "same time", and Pilatus would give you a "1", even if 100,000 
photons hit.  This is why the XFEL detectors are integrators.

After I learned all this I started to wonder why we like "single photon 
counting" so much.  What was wrong with integrating detectors again?  It 
is often overlooked that fine slicing and "short exposure with high 
multiplicity" are ALSO a good idea on modern CCDs.  Yes, they have 
read-out noise, but as long as the total read-out noise (summed for all 
the pixels for a given hkl over all the images in the dataset) is ~3x 
less than the photon-counting noise (summed for those same pixels) then 
the read-out noise has basically no influence on the total noise (3^2 + 
1^2 ~= 3^2).  For weak spots, the Bragg-scattered photons approach zero, 
so the total noise is dominated by the background counts.  For an ADSC 
Q315r detector with typical settings, the read-out noise is equivalent 
to having 2 extra photons/pixel of background on each image, so for 
5-fold multiplicity you will have 10 "extra photons" worth of noise per 
pixel, and that means you want at least 30 photons/pixel of background 
to bury it.  This is equivalent to "I: 100" in the upper left corner of 
the ADXV display (the digital baseline is 40 and the "gain" is 1.8 pixel 
levels per photon).  On a Rayonix "HE" series in "slow" mode, you can 
actually get less than 1 photon of noise per read-out, and that means 
you can theoretically do a single-photon-per-image dataset with this 
detector.
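
The noise budget behind those Q315r numbers, reading "2 extra 
photons/pixel of background" as two photons-worth of variance per pixel 
per image:

    import math

    read_var_per_image = 2.0    # photons-worth of variance per pixel
    multiplicity = 5            # images contributing to each hkl
    background = 30.0           # background photons/pixel per image

    var_read = read_var_per_image * multiplicity   # 10
    var_photon = background * multiplicity         # 150
    print(math.sqrt(var_photon / var_read))        # ~3.9x: read noise buried
    # total noise exceeds the photon-counting limit by only
    # sqrt(1 + 10/150) - 1, i.e. about 3%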

But, as always, don't take my word for it.  I'm probably prejudiced 
against photon-counters.  But I do recommend that you take a bit of your 
synchrotron trip to try out new data collection strategies on YOUR 
crystals.  Particularly the "attenuate and expose longer" (attenu-wait) 
strategy.  If you get the same or better data quality going fast as you 
do going slow, then great! Keep doing it.  At least, until the next 
"upgrade".

-James Holton
MAD Scientist

On 5/9/2013 12:08 AM, Jose Brandao-Neto wrote:
> Hi all,
>
>   Graeme's message captures the core of the detector usage notes here at Diamond. He might have unearthed a can of worms or two, though ;)
>
>   CCDs still are great detectors! I am not sure about his noise comment as the more recent CCDs (post 2000-ish) are designed to operate with anode sources (and definitely image plates have a very good signal-to-noise ratio in long exposures). The mitigation of the readout deadtime is arguably the main experimental design driver when using CCDs.
>
>   One other tiny little thing to bring to the surface is that the CCD images presented to the user are not the raw images, and my personal opinion is that end users should be at least aware of what a real image looks like (zingers, taper features, tiling) and their implications for the noise per reflection. I agree that filtering, stitching and smoothing help in judging the quality of the diffraction pattern (and crystal) itself.
>
> Regards,
> Jose'
> -> try other color scales! Black and white is good, but a rainbow palette helps identify weak and strong spots because of discrete jumps in color at different intensities.
>
> ===
> Jose Brandao-Neto MPhil CPhys
> Senior Beamline Scientist - I04-1
> Diamond Light Source
>
> [log in to unmask]
> +44 (0)1235 778506
> www.diamond.ac.uk
> ===
