My "default" MAD strategy is to do single-image inverse beam with round
robin wavelength changes. That is:
energy phi
peak 0
peak 180
remote 0
remote 180
peak 1
peak 181
remote 1
etc....
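The schedule above can be sketched as a short generator. This is a minimal sketch only: the function name and the hard-coded 1-degree step are mine, not any beamline's actual queueing code.

```python
# Round-robin inverse-beam schedule: for each phi, visit every
# wavelength at both phi and phi+180 before advancing phi.
def round_robin_schedule(wavelengths=("peak", "remote"), n_degrees=180):
    for phi in range(n_degrees):
        for wl in wavelengths:
            yield (wl, phi)        # one Bijvoet mate
            yield (wl, phi + 180)  # the other, a few seconds later

schedule = list(round_robin_schedule())
# schedule begins: ('peak', 0), ('peak', 180), ('remote', 0),
# ('remote', 180), ('peak', 1), ... matching the list above
```

Running it to exhaustion gives 720 (wavelength, phi) pairs: a full "sphere" (360 images) at each of the two wavelengths.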
with one image taken for each line above. I do this until a full
"sphere" is collected for each wavelength (720 images) with an exposure
time short enough so that the final dose is less than 5 MGy. On ALS
8.3.1 that's about 1 second/image. The 5 MGy comes from the half-dose
of the fastest-decaying SeMet site I have ever seen (Holton, 2007).
Once the initial 5 MGy pass is done, then I quadruple the exposure time
and move the detector a little closer to the sample for another
"sphere". Moving the detector is an attempt to put the spots on "fresh"
pixels and average over the systematic error that comes from using
exactly the same part of the detector over and over again. This becomes
important for Bijvoet ratios less than ~2%. It is also a good idea to
always do a full "sphere" for the same reason: never use the same pixel
twice.
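As a back-of-the-envelope check on the numbers above (a sketch only; the 0.5 MGy/min dose rate is the ALS 8.3.1 figure quoted further down this thread, and real rates vary from beamline to beamline):

```python
# Exposure time per image that keeps the first pass under the dose budget.
DOSE_BUDGET_MGY = 5.0        # half-dose of the fastest-decaying SeMet site
N_IMAGES = 720               # full "sphere" at each of two wavelengths
DOSE_RATE_MGY_PER_MIN = 0.5  # assumed beamline dose rate (ALS 8.3.1-ish)

total_seconds = DOSE_BUDGET_MGY / DOSE_RATE_MGY_PER_MIN * 60
exposure_per_image = total_seconds / N_IMAGES
print(round(exposure_per_image, 2))  # ~0.83, i.e. "about 1 second/image"
```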
The quadrupling of the exposure time is mainly for expediency. Given
that any rad dam reaction will be essentially exponential, but with an
unknown half-dose, the best way to sample the curve is with a geometric
series of exposure times. Doubling the exposure time increases
signal/noise by at most ~41% (a factor of sqrt(2)), which seems hardly
worth it. Quadrupling the exposure doubles the S/N for counting
statistics. So:
1s, 4s, 16s, and then the crystal is usually pretty dead (at ~0.5
MGy/min). This then gives the user the "opportunity" to do RIP using
the long exposures as the "native". Or, if there is little damage, they
can just merge everything together and get the best signal. The
influence of read-out noise (if any) also gets effectively washed out in
the longer exposures.
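The counting-statistics argument is just the square-root law; a minimal illustration:

```python
import math

# For pure counting statistics, S/N scales as sqrt(exposure), so
# doubling exposure gains only ~41% while quadrupling doubles S/N.
def snr_gain(factor):
    return math.sqrt(factor)

print(snr_gain(2))  # ~1.41: doubling is hardly worth it
print(snr_gain(4))  # 2.0: quadrupling doubles S/N

exposures = [1 * 4 ** k for k in range(3)]  # the 1 s, 4 s, 16 s series
```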
Now, what I call "peak" is actually a compromise between the usual
"peak" and the "inflection". What I do is split the difference between
these two for an "inf-eak" or "pea-lection" wavelength. From a
signal-vs-damage point of view this seems to be optimal in my hands.
Two wavelengths are about twice as good as one, even if the f" and f' to
the remote are only 80% of what they would be at their maxima. Three
wavelengths are "better" than two, but only ~20% better. I judge this
by looking at map correlations and the number of sites I can leave
unmodeled in a 3-wavelength dataset and still get the same map quality
as a 2-wavelength dataset. I call such a 2-wavelength dataset "DAD", or
sometimes Bijvoet Anomalous and Dispersive Anomalous Scattering (BADAS).
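Splitting the difference is literally a midpoint. The energies below are placeholders: in practice both numbers come from a fluorescence scan of your actual crystal, not from a table, so treat these values as purely illustrative.

```python
# Hypothetical scanned edge energies for a SeMet crystal (eV).
peak_eV = 12660.0        # illustrative "peak" from a fluorescence scan
inflection_eV = 12656.0  # illustrative "inflection" from the same scan

# The "inf-eak"/"pea-lection" compromise: halfway between the two.
infeak_eV = (peak_eV + inflection_eV) / 2
print(infeak_eV)  # 12658.0
```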
The only time using the same pixel twice could actually be an advantage
is if you could somehow put the same spot on the same pixel at two
different wavelengths. You can "sort of" do this by moving the detector
by a distance proportional to the change in wavelength. This doesn't
work exactly, because the Ewald sphere is curved and the detector isn't,
but you can get some spots "close". This might be why Gonzalez et al.
(2007) noticed that using inflection-and-remote tended to perform better
than using just the peak. I haven't done an experiment of my own to
show this is due to pixel calibration, however.
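The "sort of" can be quantified with a little flat-detector geometry. This is a sketch under assumptions of mine (made-up detector distance and Se-edge-ish wavelengths, flat detector, no tilts): scaling the distance so that D*lambda stays constant holds the spot radius fixed in the small-angle limit, but the tan() in the projection makes the compensation break down at high resolution.

```python
import math

def spot_radius(D_mm, wavelength_A, d_A):
    # Bragg angle for a spot of d-spacing d_A, projected onto a flat
    # detector at distance D_mm: r = D * tan(2*theta), sin(theta) = lambda/(2d)
    theta = math.asin(wavelength_A / (2 * d_A))
    return D_mm * math.tan(2 * theta)

D1, lam1, lam2 = 250.0, 0.9795, 0.9537  # mm, Angstrom (illustrative)
D2 = D1 * lam1 / lam2                   # compensating move: D*lambda ~ const

mismatch = {}
for d in (10.0, 2.0):                   # a low- and a high-resolution spot
    mismatch[d] = spot_radius(D1, lam1, d) - spot_radius(D2, lam2, d)
    print(d, mismatch[d])               # residual error grows as d shrinks
```

The low-resolution spot lands within a few microns of its old position, while the high-resolution one misses by a large fraction of a millimeter, i.e. many pixels.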
Of course, for most "test crystals" it doesn't really matter how you
collect the data because the anomalous signal is so strong relative to
pixel calibration, or almost any other source of error for that matter.
The problem with differentiating the efficacy of one strategy over
another is that the transition between "solvable" and "unsolvable" is
very very sharp. Basically, phase improvement methods either make your
phases better or they don't, and then you iterate. But, in a rad
dam-limited world (such as a very very small "test" crystal), the best
strategy will prevail. The minimum crystal size you should need, if you
do everything right, is the one reported by this web page:
http://bl831.als.lbl.gov/xtalsize.html
As for the terms, "inverse beam" I think came from Stout and Jensen in
their description of absorption corrections. It is supposed to be a
variation on "normal beam" (which is where the x-ray beam is
perpendicular to the spindle). But like most things, the widespread use
of the term arises because a popular piece of software (BLU-ICE) chose
to put those words next to a button on the GUI.
The term "round robin" I take from a simple load-distribution technique
in computer science where each CPU, network card, etc. takes turns
getting the next job. This way each of the things being switched up
gets the same amount of "exposure" with minimal granularity.
Apparently, this name is derived from competitive sporting events where
the athletes do pretty much the same thing.
One final word to the wise: My strategy of single-image-round-robin is
not appropriate to all beamlines! Some shutters are better than others
(even the electronic "shutter" used for shutterless data collections can
experience some jitter), and some monochromators heat up and become
unstable if you change the wavelength too often (or too far too often).
Also, some spindles are slow, and can take a long time to turn the
crystal 180 degrees. This is the main reason why the original MAD
experiments were done in "wedges". It was all to save time. Also, if
there are reproducibility issues in the spindle or the mono, doing a
"wedge" is a way around those sources of error. On modern equipment
most of these problems have been solved, but you should still ask your
beamline scientist what they recommend. Only they know best what sort of
design compromises were made with their particular instrument. Just
remember that the advice you get for one machine may or may not apply to
the next one you use!
-James Holton
BADAS Scientist
On 8/22/2013 4:10 AM, Alexander Batyuk wrote:
> Dear James,
>
> Could you elaborate on the inverse beam protocol a little more in details, especially, on round robin, please? What would be the ideal data collection strategy with minimal rad dam for a MAD experiment?
>
> Thank you and best wishes,
>
> Alex
>
>
>
> On 22 Aug 2013, at 08:07, James Holton <[log in to unmask]> wrote:
>
>> Yafang,
>>
>> I'm afraid that just because you still have spots at the end of your dataset does not mean radiation damage was "not a problem". The reactions that disorder your heavy atom sites go to completion at doses that can be as little as 1/30th of the dose required to noticeably fade your spots. There are a number of nice reviews written about this:
>> http://dx.doi.org/10.1107/S0909049509004361
>> http://dx.doi.org/10.1107/S0909049512050418
>> http://dx.doi.org/10.1107/S0909049506048898
>> http://dx.doi.org/10.1107/S0907444907019580
>>
>> Also, if your datasets were collected one wavelength at a time, such as a complete dataset at the peak, then another complete dataset at the inflection, and then, after all that, the "reference" dataset at the remote, then what you have is not a MAD dataset. It is a series of SAD datasets (M-SAD). Of these three SAD datasets, only the "peak" is at the optimum energy for anomalous signal, and it also has the least radiation damage, so it will work better than the other two. I use the term M-SAD instead of MAD because you are effectively using a different crystal for each wavelength, and that means the inter-wavelength differences are dominated by non-isomorphism. Non-isomorphism can easily bury an anomalous signal, and radiation damage is a pretty efficient way to make a crystal non-isomorphous with its former self.
>>
>> By looking at examples in the literature (such as Banumathi et al. 2004), one can guesstimate that the degree of non-isomorphism induced by radiation damage is about 1% per MGy of dose. You can look up the nominal dose rate of the beamline you collected these data at here:
>> http://bl831.als.lbl.gov/damage_rates.pdf
>> I try to keep the numbers in this document up to date, but most beamlines are attenuated to the point where they deliver about 1 MGy per minute of shutter-open time. That's for a crystal with < ~20 mM heavy atoms, and unattenuated beam.
>>
>> So, if the dispersive signal you are looking for is 3%, then once your crystal has endured more than ~3 minutes of shutter-open time, the non-isomorphism will start to overwhelm that signal, and then trying to use dispersive (inter-wavelength) differences becomes counterproductive. This is because the software is trying to reconcile all the observed differences in terms of heavy-atom positions, and when half the differences are coming from non-isomorphism, the equations all fall apart. This is probably why treating your M-SAD dataset as a MAD experiment fails. Anomalous (Bijvoet) differences, however, tend to come up fairly close together in "phi" because once a spot passes through the Ewald sphere its Friedel mate will generally pop up on the opposite side of the beamstop a few degrees later. Basically, if you're measuring a difference, it is best to measure the two numbers you are going to subtract as close together in time as possible. This is why "inverse beam" with "round robin" wavelength changes is the approach that is most robust to damage effects. Yes, you still get damage, but at least the differences you are subtracting are close together, and therefore comparing "apples to apples".
>>
>> I suppose it was the advent of sagittal-focusing monochromators that made wavelength changes more difficult, and more recently the advent of so-called "shutterless" data collection has led to more and more M-SAD data collections instead of MAD. This is a pity, really, because as George has already said, MAD gives you significantly better phases than SAD. It just requires a little more patience to collect it properly.
>>
>> -James Holton
>> MAD Scientist
>>
>>
>>
>>
>> On Tue, Aug 20, 2013 at 2:05 PM, Yafang Chen <[log in to unmask]> wrote:
>> Hi All,
>>
>> I have three datasets of SeMet-incorporated protein at peak, infl and high wavelength respectively. SAD with the peak dataset works well to solve the phase problem. However, MAD with all three datasets didn't work at all. The completeness of all three datasets is more than 99%. So I think radiation damage should not be a problem. Does anyone have any idea about the possible reasons that MAD didn't work in this case? Thank you so much for any of your help!
>>
>> Best,
>> Yafang
>>
>> --
>> Yafang Chen
>>
>> Graduate Research Assistant
>> Mesecar Lab
>> Department of Biological Sciences
>> Purdue University
>> Hockmeyer Hall of Structural Biology
>> 240 S. Martin Jischke Drive
>> West Lafayette, IN 47907
>>
> --
> Alex Batyuk
> The Plueckthun Lab
> www.bioc.uzh.ch/plueckthun
>