CCP4BB Archives (CCP4BB@JISCMAIL.AC.UK), October 2011

Subject: Re: IUCr committees, depositing images
From: James Holton <[log in to unmask]>
Reply-To: James Holton <[log in to unmask]>
Date: Wed, 26 Oct 2011 22:04:23 -0700
Content-Type: text/plain
Parts/Attachments: text/plain (366 lines)

In the spirit of supporting a "can do" attitude, I have decided to try 
to frame the binary "images or no images" question as a gradual scale.  
Below is a list of ways to represent crystallographic data, with 
increasing amounts of "information" as you move down.  That is, moving 
down the list makes validation more robust and allows more and more 
yet-to-be-developed technologies to be applied, but it also carries 
higher costs, such as storage space and validation effort (some rough 
numbers follow the list).

a) depositing coordinates only
b) coordinates and structure factors
c) coordinates, structure factors, and their sigmas (there was a time 
when we didn't do this!)
d) scaled and merged intensities (before "truncate" or sqrt?)
e) scaled and "unmerged" intensities with combined partials (future 
absorption corrections)
f) scaled spot intensities with partials separate (correct the shutter 
jitter someday?)
g) unscaled individual spot intensities (with geometry for calculating 
Lorentz, polarization, etc corrections)
h) spot intensities with a separate column for the "local background 
level" (this would be an efficient way to "compress" the images)
i) spots, local background, and background levels partway between spots 
(for reconstructing diffuse scatter)
j) all pixels from spot areas (~1000x compression over normal 
"corrected" images)
k) spot pixels plus lossy compression of background (1000-30x over 
corrected images)
l) losslessly compressed images (~2x over corrected images)
m) "corrected" images: all pixels from spatially-corrected 
dark-and-flood applied images
n) uncorrected images with dark, flood and bad-pixel map for relevant 
detector
o) raw output from detector's ADC or counter (no idea how to capture 
this...)
p) cryo-preserved data crystal (for verification of cell, or just 
chemical forensics, such as verifying the identity of the protein)
q) cryo-preserved duplicate crystal, to be held in a vault until you are 
accused of fraud.
r) sample of pure protein with crystallization conditions
s) protein sequence, let the PDB "verify" the experiment by repeating 
it, knowing that it "can be solved".
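To put rough numbers on that trade-off, here is a minimal 
back-of-envelope sketch (Python).  The compression factors are the ones 
quoted in the list above; the raw per-dataset size and the deposition 
rate are assumptions for illustration (the ~8000 depositions/year figure 
is quoted further down this thread), so treat the output as orders of 
magnitude only.

    # Back-of-envelope storage estimates for some of the archiving
    # levels above.  All inputs are assumptions for illustration.
    RAW_GB = 10.0                # assumed size of one "corrected" image set (level m)
    DEPOSITIONS_PER_YEAR = 8000  # approximate PDB deposition rate (from this thread)

    # compression factor relative to corrected images, from the list above
    levels = {
        "m) corrected images":         1,
        "l) lossless compression":     2,
        "k) lossy background (worst)": 30,
        "k) lossy background (best)":  1000,
        "j) spot pixels only":         1000,
    }

    for name, factor in levels.items():
        per_dataset_gb = RAW_GB / factor
        per_year_tb = per_dataset_gb * DEPOSITIONS_PER_YEAR / 1024
        print(f"{name:30s} {per_dataset_gb:9.3f} GB/dataset {per_year_tb:8.2f} TB/year")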

Now, I think it is clear that both the "benefit to the community" as 
well as the burden on PDB resources increase as we move down this list.  
In such situations one looks for "inflection points" where the next, 
small increase in benefit requires a disproportionately large cost.  
Historically, going from a to b was such an inflection point.  This was 
back when the compact disc was a new thing, and the whole PDB could fit 
on one!  In fact, if you had a "multi-session" drive you could back up 
your hard drive onto the remaining space.  Those days are gone.

Recently, I heard the PDB is seriously considering jumping to somewhere 
between "e" and "g" (unmerged data).  Doing so, I think, will be an 
interesting exercise in file format "standardization" since every major 
data processing package treats partials and postrefinement differently.  
And this is perhaps the main reason why "the PDB" (aka Gerard K) is 
wary of the idea of going all the way to "m" (corrected images).  Do we 
expect PDB staff to re-process our datasets as part of the "validation" 
procedure?

Of course, if we are willing to relax the requirement of validation and 
curation, this could be a whole lot easier.  In fact, there is already 
an image deposition infrastructure in place!  It is called TARDIS:

http://tardis.edu.au/

Perhaps the best way forward would be for "the PDB" to introduce a new 
field for one or more TARDIS ids in a PDB deposition?  It would be 
optional at first, but no doubt required in the future.
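For concreteness, the deposition-side change could be as small as one 
optional list of identifiers per entry.  A minimal sketch (Python); the 
field name "raw_data_ids" and the URL pattern below are illustrative 
guesses, not an existing PDB or TARDIS interface:

    # Hypothetical sketch: attaching raw-data identifiers to a deposition
    # record.  Field name and URL scheme are guesses for illustration only.
    def tardis_url(tardis_id: str) -> str:
        """Turn a (hypothetical) TARDIS dataset id into a browsable URL."""
        return f"http://tardis.edu.au/datasets/{tardis_id}"

    deposition = {
        "pdb_id": "XXXX",                    # placeholder entry id
        "raw_data_ids": ["12345", "12346"],  # optional at first, per the proposal
    }

    for rid in deposition["raw_data_ids"]:
        print(tardis_url(rid))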


-James Holton
MAD Scientist


On 10/26/2011 4:20 PM, Colin Nave wrote:
> Dear Gerard
>
> Yes, perhaps I was getting a bit carried away with the possibilities. Although I believe that, with high resolution detectors and low divergence beams, one should be able to separate out the various lattices, it is not really relevant to the main issue - getting the best from existing data.  The point I made about "correcting" data probably comes in a similar category - taking the opportunity to air a favourite subject.
>
> Regards
>    Colin
>
> PS. While here though I realise one of my points was a bit unclear. Point 5 should be
> "5.  My view is that for data in the PDB the same release rules should apply for the images as for the other data. For data not (yet) in the PDB, the funders of the research might want to define release rules. However, we can make suggestions!"
> The original had "For other data" rather than "For data not (yet) in the PDB"
>
> -----Original Message-----
> From: Gerard Bricogne [mailto:[log in to unmask]]
> Sent: 26 October 2011 23:23
> To: Nave, Colin (DLSLtd,RAL,DIA)
> Cc: ccp4bb
> Subject: Re: [ccp4bb] IUCr committees, depositing images
>
> Dear Colin,
>
>       Thank you for accepting the heavy burden of responsibility your
> colleagues have thrown onto your shoulders ;-) . It is great that you are
> entering this discussion, and I am grateful for the support you are bringing
> to the notion of starting something at ground level and learning from it,
> rather than staying in the realm of conjecture and axiomatics, or entering
> the virility contest as to whose beamline will make raw data archiving most
> impossible.
>
>       One small point, however, about your statement regarding multiple
> lattices, that
>
>       "...  all crystals are, to a greater or lesser extent, subject to this.
>       We just might not see it easily as the detector resolution or beam
>       divergence is inadequate. Just think we could have several structures
>       (one from each lattice) each with less disorder rather than just one
>       average structure."
>
> I am not sure that what you describe in your last sentence is a realistic
> prospect, nor that it would in any case constitute the main advantage of
> better dealing with multiple lattices. The most important consequence of
> their multiplicity is that their spots overlap and corrupt each other's
> intensities, so that the main benefit of improved processing would be to
> mitigate that mutual corruption, first by correctly flagging overlaps, then
> by partially trying to resolve those overlaps internally as much as scaling
> procedures will allow (one could call that "non-merohedral detwinning" - it
> is done e.g. by small-molecule software), and finally by adapting
> refinement protocols to recognise that they may have to refine against
> measurements that are a mixture of several intensities, to a degree and
> according to a pattern that varies from one observation to another (unlike
> regular twinning).
>
>       Currently, if a "main" lattice can be identified and indexed, one tends
> to integrate the spots it successfully indexes, and to abstain from worrying
> about the accidental corruption of the resulting intensities by accidental
> overlaps with spots of the other lattices (whose existence is promptly
> forgotten). It is the undoing of that corruption that would bring the main
> benefit, not the fact that one could see several variants of the structure
> by fitting the data attached to the various lattices: that would be possible
> only if overlaps were negligible. The prospects for improving electron
> density maps by reprocessing raw images in the future are therefore
> considerable for mainstream structures, not just as a way of perhaps teasing
> interestingly different structures from each lattice in infrequent cases.
>
>       I apologise if I have laboured this point, but I am concerned that
> every slight slip of the pen that makes the benefits of future reprocessing
> look as if they will just contribute to splitting hairs does a disservice to
> this crucial discussion (and hence, potentially, to the community) by
> belittling the importance and urgency of the task.
>
>
>       With best wishes,
>
>          Gerard (B.)
>
> --
> On Wed, Oct 26, 2011 at 07:58:51PM +0000, Colin Nave wrote:
>> I have been nominated by the IUCr synchrotron commission (thanks colleagues!) to represent them for this issue. However, at the moment, this is a personal view.
>>
>> 1. For archiving "raw" diffraction image data for structures in the PDB, it should be the responsibility of the worldwide PDB. They are by far the best place to do it and as Jacob says the space requirements are trivial. Gerard K's negative statement at CCP4-2010 sounds rather ex cathedra (in increasing order of influence/power do we have the Pope, US president, the Bond Market and finally Gerard K?). Did he make the statement in a formal presentation or in the bar? More seriously, I am sure he had good reasons (e.g. PDB priorities) if he did make this statement. It would be nice if Gerard could provide some explanation.
>>
>> 2. I agree with the "can do" attitude at Madrid as supported by Gerard B. Setting up something as best one can with existing enthusiasts will get the ball rolling, provide some immediate benefit and allow subsequent improvements.
>>
>> 3. Ideally the data to be deposited should include all stages e.g. raw images, "corrected" images, MIR/SAD/MAD images, unmerged integrated intensities, scaled, merged etc. Plus the metadata and software & versions used for the various stages. Worrying too much about all of this should not of course prevent a start being made. (An aside. I put the "corrected" in quotes because the raw images have fewer errors. The subsequent processing for detector distortions etc. depends on an imperfect model for the detector. I don't like the phrase data correction).
>>
>> 4. Doing this for PDB depositions would then provide a basis for other data which did not result in PDB depositions. There seems to be a view that the archiving of this should be the responsibility of the synchrotrons which generated the data. This should be possible for some synchrotrons (e.g. Diamond) where there is pressure in any case from their funders to archive all data generated at the facility. However not all synchrotrons will be able to do this. There is also the issue of data collected at home sources. Presumably it will require a few willing synchrotrons to pioneer this in a coordinated way. Hopefully others will then follow. I don't think we can expect the PDB to archive the 99.96% of the data which did not result in structures.
>>
>> 5.  My view is that for data in the PDB the same release rules should apply for the images as for the other data. For other data, the funders of the research might want to define release rules. However, we can make suggestions!
>>
>> 6. Looking to the future, there is FEL data coming along, both single molecule and nano-crystals (assuming the FEL delivers for these areas).
>>
>> 7. I agree with Gerard B - "as far as I see it, the highest future benefit of having archived raw images will result from being able to reprocess datasets from samples containing multiple lattices"
>> My view is that all crystals are, to a greater or lesser extent, subject to this. We just might not see it easily as the detector resolution or beam divergence is inadequate. Just think we could have several structures (one from each lattice) each with less disorder rather than just one average structure.  Not sure whether Gloria's modulated structures would be as ubiquitous but her argument is along the same lines.
>>
>> Regards
>>    Colin
>>
>> -----Original Message-----
>> From: CCP4 bulletin board [mailto:[log in to unmask]] On Behalf Of Herbert J. Bernstein
>> Sent: 26 October 2011 18:55
>> To: ccp4bb
>> Subject: Re: [ccp4bb] IUCr committees, depositing images
>>
>> Dear Colleagues,
>>
>>     Gerard strikes a very useful note in pleading for a "can-do"
>> approach.  Part of going from "can-do" to "actually-done"
>> is to make realistic estimates of the costs of "doing" and
>> then to adjust plans appropriately to do what can be afforded
>> now and to work towards doing as much of what remains undone
>> as has sufficient benefit to justify the costs.
>>
>>     We appear to be in a fortunate situation in which some
>> portion of the raw data behind a significant portion of the
>> studies released in the PDB could probably be retained for some
>> significant period of time and be made available for further
>> analysis.  It would seem wise to explore these possibilities
>> and try to optimize the approaches used -- e.g. to consider
>> moves towards well documented formats, and retention of critical
>> metadata with such data to help in future analysis.
>>
>>     Please do not let the perfect be the enemy of the good.
>>
>>     Regards,
>>       Herbert
>>
>> =====================================================
>>    Herbert J. Bernstein, Professor of Computer Science
>>      Dowling College, Kramer Science Center, KSC 121
>>           Idle Hour Blvd, Oakdale, NY, 11769
>>
>>                    +1-631-244-3035
>>                    [log in to unmask]
>> =====================================================
>>
>> On Wed, 26 Oct 2011, Gerard Bricogne wrote:
>>
>>> Dear John and colleagues,
>>>
>>> There seem to be a set of centrifugal forces at play within this thread
>>> that are distracting us from a sensible path of concrete action by throwing
>>> decoys in every conceivable direction, e.g.
>>>
>>>      * "Pilatus detectors spew out such a volume of data that we can't
>>> possibly archive it all" - does that mean that because the 5th generation of
>>> Dectris detectors will be able to write one billion images a second and
>>> catch every scattered photon individually, we should not try and archive
>>> more information than is given by the current merged structure factor data?
>>> That seems a complete failure of reasoning to me: there must be a sensible
>>> form of raw data archiving that would stand between those two extremes and
>>> would retain much more information than the current merged data but would
>>> step back from the enormous degree of oversampling of the raw diffraction
>>> pattern that the Pilatus and its successors are capable of.
>>>
>>>      * "It is all going to cost an awful lot of money, therefore we need a
>>> team of grant writers to raise its hand and volunteer to apply for resources
>>> from one or more funding agencies" - there again there is an avoidance of
>>> the feasible by invocation of the impossible. The IUCr Forum already has an
>>> outline of a feasibility study that would cost only a small amount of
>>> joined-up thinking and book-keeping around already stored information, so
>>> let us not use the inaccessibility of federal or EC funding as a scarecrow
>>> to justify not even trying what is proposed there. And the idea that someone
>>> needs to decide to stake his/her career on this undertaking seems totally
>>> overblown.
>>>
>>>      Several people have already pointed out that the sets of images that
>>> would need to be archived would be a very small subset of the bulk of
>>> datasets that are being held on the storage systems of synchrotron sources.
>>> What needs to be done, as already described, is to be able to refer to those
>>> few datasets that gave rise to the integrated data against which deposited
>>> structures were refined (or, in some cases, solved by experimental phasing),
>>> to give them special status in terms of making them visible and accessible
>>> on-line at the same time as the pdb entry itself (rather than after the
>>> statutory 2-5 years that would apply to all the rest, probably in a more
>>> off-line form), and to maintain that accessibility "for ever", with a link
>>> from the pdb entry and perhaps from the associated publication. It seems
>>> unlikely that this would involve the mobilisation of such large resources as
>>> to require either a human sacrifice (of the poor person whose life would be
>>> staked on this gamble) or writing a grant application, with the indefinite
>>> postponement of action and the loss of motivation this would imply.
>>>
>>>      Coming back to the more technical issue of bloated datasets, it is a
>>> scientific problem that must be amenable to rational analysis to decide on a
>>> sensible form of compression of overly-verbose sets of thin-sliced, perhaps
>>> low-exposure images that would already retain a large fraction, if not all,
>>> of the extra information on which we would wish future improved versions of
>>> processing programs to cut their teeth, for a long time to come. This
>>> approach would seem preferable to stoking up irrational fears of not being
>>> able to cope with the most exaggerated predictions of the volumes of data to
>>> archive, and thus doing nothing at all.
>>>
>>>      I very much hope that the "can do" spirit that marked the final
>>> discussions of the DDDWG (Diffraction Data Deposition Working Group) in
>>> Madrid will emerge on top of all the counter-arguments that consist in
>>> moving the goal posts to prove that the initial goal is unreachable.
>>>
>>>
>>>      With best wishes,
>>>
>>>           Gerard.
>>>
>>> --
>>> On Wed, Oct 26, 2011 at 02:18:25PM +0100, John R Helliwell wrote:
>>>> Dear Frank,
>>>> re 'who will write the grant?'.
>>>>
>>>> This is not as easy as it sounds, would that it were!
>>>>
>>>> There are two possible business plans:-
>>>> Option 1. Specifically for MX, the PDB is the first and foremost
>>>> candidate to seek such additional funds for full diffraction data
>>>> deposition for each future PDB entry. This business plan
>>>> possibility is best answered by PDB/EBI (eg Gerard Kleywegt has
>>>> answered this in the negative thus far, at the CCP4 meeting in January 2010).
>>>>
>>>> Option 2. The Journals that host the publications could add the cost to
>>>> the subscriber and/or the author according to their funding model. As
>>>> an example and as a start a draft business plan has been written by
>>>> one of us [JRH] for IUCr Acta Cryst E; this seemed attractive because
>>>> of its simpler 'author pays' financing. This proposed business plan is
>>>> now with IUCr Journals to digest and hopefully refine. Initial
>>>> indications are that Acta Cryst C would be perceived by IUCr Journals
>>>> as a better place to start considering this in detail, as it involves
>>>> fewer crystal structures than Acta E and would thus be more
>>>> manageable. The overall advantage of the responsibility being with
>>>> Journals as we see it is that it encourages such 'archiving of data
>>>> with literature' across all crystallography related techniques (single
>>>> crystal, SAXS, SANS, Electron crystallography etc) and fields
>>>> (Biology, Chemistry, Materials, Condensed Matter Physics etc) ie not
>>>> just one technique and field, although obviously biology is dear to
>>>> our hearts here in the CCP4bb.
>>>>
>>>> Yours sincerely,
>>>> John and Tom
>>>> John Helliwell  and Tom Terwilliger
>>>>
>>>> On Wed, Oct 26, 2011 at 9:21 AM, Frank von Delft
>>>> <[log in to unmask]>  wrote:
>>>>> Since when has the cost of any project been limited by the cost of
>>>>> hardware?  Someone has to implement this -- and make a career out of it;
>>>>> thunderingly absent from this thread has been the chorus of volunteers who
>>>>> will write the grant.
>>>>> phx
>>>>>
>>>>>
>>>>> On 25/10/2011 21:10, Herbert J. Bernstein wrote:
>>>>>
>>>>> To be fair to those concerned about cost, a more conservative estimate
>>>>> from the NSF RDLM workshop last summer in Princeton is $1,000 to $3,000
>>>>> per terabyte per year for long term storage allowing for overhead in
>>>>> moderate-sized institutions such as the PDB.  Larger entities, such
>>>>> as Google are able to do it for much lower annual costs in the range of
>>>>> $100 to $300 per terabyte per year.  Indeed, if this becomes a serious
>>>>> effort, one might wish to consider involving the large storage farm
>>>>> businesses such as Google and Amazon.  They might be willing to help
>>>>> support science partially in exchange for eyeballs going to their sites.
>>>>>
>>>>> Regards,
>>>>>     H. J. Bernstein
>>>>>
>>>>> At 1:56 PM -0600 10/25/11, James Stroud wrote:
>>>>>
>>>>> On Oct 24, 2011, at 3:56 PM, James Holton wrote:
>>>>>
>>>>> The PDB only gets about 8000 depositions per year
>>>>>
>>>>> Just to put this into dollars. If each dataset is about 1.7 GB in
>>>>> size, then that's about 14 TB of storage that needs to come online
>>>>> every year to store the raw data for every structure. A two-second
>>>>> search reveals that Newegg has a 3 TB Hitachi for $200. So that's
>>>>> about $1000 / year of storage for the raw data behind PDB deposits.
>>>>>
>>>>> James
>>>>>
>>>>>
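
The arithmetic above, spelled out as a minimal sketch (Python); the 
per-dataset size, deposition rate, and drive price are the figures quoted 
in the message, with drive capacity taken as 3 TB:

    # Spell out the storage-cost estimate quoted above.
    datasets_per_year = 8000   # PDB depositions per year (from the thread)
    gb_per_dataset = 1.7       # assumed average raw-data size per deposition
    tb_per_year = datasets_per_year * gb_per_dataset / 1000  # ~13.6 TB/year

    drive_tb, drive_usd = 3, 200            # one consumer 3 TB drive, 2011 price
    drives_needed = tb_per_year / drive_tb  # ~4.5 drives
    print(f"{tb_per_year:.1f} TB/year, ~{drives_needed:.1f} drives, "
          f"~${drives_needed * drive_usd:,.0f}/year in raw disk")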
>>>>
>>>>
>>>> --
>>>> Professor John R Helliwell DSc
>>> -- 
>>>
>>>      ===============================================================
>>>      *                                                             *
>>>      * Gerard Bricogne                     [log in to unmask]  *
>>>      *                                                             *
>>>      * Global Phasing Ltd.                                         *
>>>      * Sheraton House, Castle Park         Tel: +44-(0)1223-353033 *
>>>      * Cambridge CB3 0AX, UK               Fax: +44-(0)1223-366889 *
>>>      *                                                             *
>>>      ===============================================================
