CCP4BB Archives (CCP4BB@JISCMAIL.AC.UK), March 2009

Subject: Re: RESUME: long term data backup
From:    David Aragao <[log in to unmask]>
Date:    Tue, 31 Mar 2009 12:37:24 +0100

> Dear All,
>
> I wonder how people currently do their long-term backups. I see
> DATs/DLTs slowly being dropped at the beamlines, and most people
> bring their data home on external HDs.
>
> Is anyone using Blu-ray or double-layer DVDs for long-term backups?
> If so, what kind of hardware? Do you use HDs for long-term storage?
> If so, do you make a second copy, and how do you store them?
>
> I will try to compile the answers and relay a summary back to the list.

----------------------------------------------------------------------
David Aragao (our own setup):

We currently have an online NAS server (QNAP 209 Pro -
http://www.qnap.com/pro_detail_feature.asp?p_id=93) with 2x
hot-swappable 1 TB disks. The system provides FTP, NFS and Samba over
the network and also lets us connect our USB2 transport HD directly.
The drawback is that the system is very slow (4-6 h to transfer
150 GB) and has crashed a few times, needing a reboot. We are not
using any of the RAID options on the QNAP, since we use 1 TB for
x-ray diffraction data (latest trips) and 1 TB for automatic
office/windoze backups over the network. We keep an extra 1 TB HD in
case of failures.

We have also been using extra external USB2 750 GB HDs for a second,
offline copy of the data.

One of the reasons that triggered my question is that we cannot rely
on a single HD type for backup. Unfortunately, our QNAP has exactly
this hard disk:
http://www.theinquirer.net/inquirer/news/374/1050374/seagate-barracudas-7200-11-failing

----------------------------------------------------------------------
Graeme Winter:

I have quite a lot of data, as you know, and I have a three-phase way of
handling data...

I keep the data on hard drives on computers as far as possible, with
a primary backup of everything to FireWire and a secondary backup to
DVD as bzip2'd images. This way, if I lose something I can fetch the
data from FireWire (or process from there if I run out of space); if
one of those drives fails (and they do) I can recover the data from DVD.
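
For illustration, a minimal sketch in Python of this kind of "bzip2'd
images" step, packing a dataset directory into DVD-sized tar.bz2
volumes. The paths and the 4.3 GB volume size are placeholders, not
Graeme's actual procedure.

import os
import tarfile

DATASET_DIR = "/data/xtal/2009-03_trip"       # hypothetical dataset directory
OUT_PREFIX = "/backup/staging/2009-03_trip"   # hypothetical staging area
DVD_BYTES = 4_300_000_000                     # headroom below a 4.7 GB DVD-R

def pack_to_dvd_volumes(src_dir, out_prefix, volume_bytes=DVD_BYTES):
    """Write files into sequential .tar.bz2 volumes, starting a new
    volume when the uncompressed running total would exceed the DVD
    size (compression normally keeps each volume under the limit)."""
    volume, used = 0, 0
    tar = tarfile.open(f"{out_prefix}.vol{volume:03d}.tar.bz2", "w:bz2")
    for root, _dirs, files in os.walk(src_dir):
        for name in sorted(files):
            path = os.path.join(root, name)
            size = os.path.getsize(path)
            if used and used + size > volume_bytes:
                tar.close()
                volume, used = volume + 1, 0
                tar = tarfile.open(f"{out_prefix}.vol{volume:03d}.tar.bz2",
                                   "w:bz2")
            tar.add(path, arcname=os.path.relpath(path, src_dir))
            used += size
    tar.close()

if __name__ == "__main__":
    pack_to_dvd_volumes(DATASET_DIR, OUT_PREFIX)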

Overall I have a few TB of data kept in this way.

Redundancy is good, as is orthogonality, i.e. DVD and FireWire rather
than 2x FireWire disks or 2x DVDs.

Follow up:

OK, I don't accumulate data at that kind of rate, so I write the DVDs
manually. I know they have a DVD robot at Brookhaven, which would
probably do what you want...

Otherwise, two HD backups are probably the best you are going to get.
----------------------------------------------------------------------
Roger Rowlett:
We keep our backups on the hard drives of two servers (master and
backup) in separate locations on campus. Some data is kept on CD-ROM,
but we're doing that less now.
----------------------------------------------------------------------
Stephen Graham:

If at all possible you should consider outsourcing it. You might have
access to some kind of large university or national facility for
archiving scientific/academic 'data'. Otherwise there are companies
that specialise in archiving data - for a fee they will take the
problem out of your hands (and you don't need to worry about what
format to use, what to do once the media you currently use become
scarce, etc.).

Either way, we should all lobby the PDB or someone to archive all the
images for us pronto!
----------------------------------------------------------------------
Kay Diederichs:

Burning DVDs must be a nightmare, and recovering from DVD failure even
more so. Whenever I burn a DVD with important data I also create a CD
with the ECC data (see http://www.dvdisaster.de).

I have had my synchrotron data since 1999 online on hard disk (_all_
our data, not only those datasets that gave structures). Disks are
cheap and convenient. Whenever we start to run short on disk space, I
go shopping for bigger disks.

The hardware is currently an eSATA 4 TB RAID5 in a €340,- RaidSonic
Stardom ST6600-5S-S2 5-disk case
(http://www.raidsonic.de/de/pages/search/search_list.php?we_objectID=4239&pid=0).
A terabyte disk is now less than €100, so the whole thing costs about
€800,-. RAID5 guards against single-disk failures, and I keep a spare
terabyte disk in case I have to exchange one of the five internal
ones. The unit is hooked up to a Linux machine with a recent kernel
(which supports the SATA port multiplier feature) and an eSATA
adapter (e.g. Adaptec 1225SA).

We have two of these in different buildings, and I do a daily (rsync)
copy of the master to the backup. I have been running this for over a
year and am happy with it.
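
For illustration, a minimal sketch of such a nightly master-to-backup
mirror, driven from cron; the hostnames, paths and log file are
placeholders, not Kay's actual setup.

import datetime
import subprocess

MASTER_DIR = "/data/frames/"               # trailing slash: sync directory contents
BACKUP_DEST = "backup-host:/data/frames/"  # RAID box in the other building (via ssh)
LOG_FILE = "/var/log/frame-mirror.log"

def mirror(src=MASTER_DIR, dest=BACKUP_DEST):
    """Mirror src to dest with rsync; --delete keeps the copy exact."""
    result = subprocess.run(["rsync", "-a", "--delete", src, dest],
                            capture_output=True, text=True)
    stamp = datetime.datetime.now().isoformat(timespec="seconds")
    with open(LOG_FILE, "a") as log:
        log.write(f"{stamp} exit={result.returncode}\n")
        if result.stderr:
            log.write(result.stderr)
    return result.returncode

if __name__ == "__main__":
    raise SystemExit(mirror())

(Run from a nightly cron entry on the master; rsync only transfers
files that have changed, so routine runs stay cheap.)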
----------------------------------------------------------------------
Patrick Loll:

We're currently using normal DVD-Rs. I don't know how robust these will
prove to be in the long term, but for right now it's cheap and easy and
requires no fancy hardware.
----------------------------------------------------------------------
Wladek Minor:

Hard drive.

A 1 TB drive costs around $120; 1.5 TB drives are not as reliable yet.

Plus a Thermaltake BlacX ST0005U storage enclosure, around $46.
----------------------------------------------------------------------
Sergei Strelkov:

I looked into this some time ago.
We have chosen a rather simple way.
We save all collected data on external disks (single copy).
We never delete anything from these disks.
Then students copy whatever they need to their machines.
These are backed up from time to time.
----------------------------------------------------------------------
Paul Swepston:

The Australians have something that addresses this: TARDIS is a
multi-institutional collaborative venture that aims to facilitate the
archiving and sharing of raw X-ray diffraction images (collectively
known as a 'dataset') from the Australian protein crystallography community.
http://www.tardis.edu.au/
----------------------------------------------------------------------
James Holton
MAD Scientist

At ALS beamlines 8.3.1 and 12.3.1 we use a combination of DVD-R and
LTO-4 tapes for long-term backup, and have the entire data collection
history of each beamline backed up on DVD-R disks. This comes to
about 50 TB for 8.3.1 (built in 2001) and 30 TB for 12.3.1 (built in
2004). We also make a DVD of the user's data automatically and in
near real time, using a ~$4k robot that inkjet-prints the user's name
and a dataset summary onto each disk. Portable hard disk drives for
"sneakernet" are also popular, but so is transferring the data over
the internet, which can likewise be done in near real time.

I started using LTO-4 tapes recently for two reasons: 1) the price
per TB became competitive with DVD-R, and the tape drive is only
~$4k; 2) I used to keep two copies of each DVD, but found this was
not really "redundant": if you write two DVDs one after the other on
the same day with the same writer using media from the same batch,
and you can't read one of those disks four years later, the chances
of not being able to read the other disk are pretty high. So a lesson
I learned is to store data on two very different media types so you
get "orthogonal" failure modes.

I can also tell you that it is a good idea to erase your LTO tapes
2-3 times before writing any data to them. I think this is because
the primary source of error on these tapes is the roughness of the
edge of the tape itself (which is used for alignment), and running it
back and forth a few times probably wears/folds down any big bumps.
It sounds strange, but I had some tapes I initially thought had "bad
spots" on them; after erasing them and re-writing the data, the "bad
spots" were gone, and they have stayed gone each time I have checked
those tapes over the last year. Subsequent tapes that I erased 3x
before use have never had "bad spots". Also, you need to write data
to them at a minimum of 80 MB/s, or you can actually have problems
reading back the tape. I do my writes in 2 GB chunks from the system
RAM. ALWAYS test reading back the tape, preferably more than once.
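
For illustration, a rough sketch of that write-then-verify discipline
in Python. The tape device path, the 2 GiB RAM staging buffer and the
256 KiB device writes are assumptions (the st driver limits how large
a single write() may be), and rewinding between the write and read
passes is left to the usual mt command.

import hashlib

TAPE = "/dev/nst0"             # hypothetical non-rewinding tape device
RAM_CHUNK = 2 * 1024**3        # stage ~2 GiB in memory at a time
DEV_BLOCK = 256 * 1024         # size of each write() to the device

def write_to_tape(src_path, tape=TAPE):
    """Stream a file to the tape in fixed-size blocks, returning its
    SHA-256 so the read-back pass can be checked against it."""
    digest = hashlib.sha256()
    with open(src_path, "rb") as src, open(tape, "wb") as out:
        while True:
            chunk = src.read(RAM_CHUNK)
            if not chunk:
                break
            digest.update(chunk)
            for i in range(0, len(chunk), DEV_BLOCK):
                out.write(chunk[i:i + DEV_BLOCK])
    return digest.hexdigest()

def read_back(tape=TAPE):
    """Re-read the file from tape (after 'mt -f /dev/nst0 rewind') and
    return its SHA-256; it should match the value from the write."""
    digest = hashlib.sha256()
    with open(tape, "rb") as src:
        for block in iter(lambda: src.read(DEV_BLOCK), b""):
            digest.update(block)
    return digest.hexdigest()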

DVD-R media should also be verified, preferably in a low-quality DVD
drive. This is because writers tend to have much better mechanisms
than the average reader, and I have seen many DVDs that read back
just fine in the drive that wrote them but throw all kinds of media
errors when you take them home to a dusty old DVD reader.
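
A minimal sketch of that kind of post-burn check: compare checksums
of the source files against the copies read back from the mounted
disc (ideally in a different drive from the one that burned it). The
mount points in the example are placeholders.

import hashlib
import os

def sha256_of(path, bufsize=1 << 20):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(bufsize), b""):
            h.update(block)
    return h.hexdigest()

def verify_copy(src_root, disc_root):
    """Return the relative paths of files that are missing from the
    disc or whose contents differ from the originals."""
    bad = []
    for root, _dirs, files in os.walk(src_root):
        for name in files:
            src = os.path.join(root, name)
            rel = os.path.relpath(src, src_root)
            copy = os.path.join(disc_root, rel)
            if not os.path.isfile(copy) or sha256_of(src) != sha256_of(copy):
                bad.append(rel)
    return bad

# e.g. verify_copy("/data/frames/run42", "/media/dvd/run42")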

As for getting the PDB to do image backup for us, I don't think that
will be easy.

The average data collection rate at 8.3.1 is 2 GB/hour, or ~10
TB/year. So I imagine storing all of the data from the ~100 MX
beamlines around the world would be a ~1 PB/year proposition. Since
an average of 25 to 50 data sets are collected for every one that is
published, the storage demand on the PDB, if it archived only the
published ones, would be ~30 TB/year. Why only 1 in 50, you ask? That
is a very good question, and it will probably never be answered
unless the 49 of 50 unsolved data sets can be made available to
methods developers.

I just now Froogled for media prices and got this:

$33/TB LTO-4
$60/TB DVD-R
$100/TB hard disks
$400/TB Blu-ray
$3000/TB Solid-state drives (such as USB thumbdrives)
$3M/TB clay tablets
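
As a quick check, the back-of-the-envelope arithmetic behind the
figures above, taking the quoted rates and LTO-4 price at face value
(the 1-in-33 deposition ratio is simply a point inside the quoted
25-50 range):

TB_PER_BEAMLINE_YEAR = 10        # ~2 GB/hour of collection at 8.3.1
BEAMLINES = 100                  # rough worldwide MX beamline count
COLLECTED_PER_DEPOSITED = 33     # 25-50 data sets collected per structure
LTO4_USD_PER_TB = 33             # from the media price list above

world_tb = TB_PER_BEAMLINE_YEAR * BEAMLINES          # ~1000 TB ~ 1 PB/year
deposited_tb = world_tb / COLLECTED_PER_DEPOSITED    # ~30 TB/year
print(f"worldwide:      ~{world_tb} TB/yr")
print(f"deposited only: ~{deposited_tb:.0f} TB/yr, "
      f"media ~${deposited_tb * LTO4_USD_PER_TB:,.0f}/yr on LTO-4")
print(f"everything:     media ~${world_tb * LTO4_USD_PER_TB:,}/yr on LTO-4")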

So the PDB would only need to find an "extra" ~$1k/year to buy the
media for 1 dataset/structure, or ~$30k/year for all of the data.
Unfortunately, the media is not nearly as expensive as access to it.
An LTO tape library with ~50 TB storage capacity is ~$20k on eBay,
but this is EMPTY! You have to fill it with tapes, and then write
software to make the data sets available on the web. Tape libraries
in the multiple-PB range are available, but their prices are not.
Clearly this represents a non-trivial investment in resources and
effort for the PDB. The central problem is that the per-GB prices of
storage do not scale well to PB-class systems. However, there is now
Stimulus Package money available in the US for large equipment
investments like this. Perhaps someone at Rutgers could submit a
proposal? I, for one, am very willing to write them a letter of
support.

Another approach is to try and spread the storage out across the world
and create a central registry for finding it. The TARDIS initiative in
Australia (Androulakis et al. Acta D 2008) seems to be an important step
in that direction, but I haven't been able to test it since I don't have
a Fedora Repository Server. I do, however, have a web server, and I
think a repository of URLs is probably better than nothing.
----------------------------------------------------------------------
Ed Pozharski:

DVDs. A single-layer DVD-R holds ~4.7 GB - enough for most datasets.
We do most of our data collection at SSRL, and they have a nice
option of shipping you DVDs for free.
----------------------------------------------------------------------
Mark:

We have a tiered system:
a) Personal files. Small and many, change often. Typical: CCP4, coot,
CNS and other files. Backed up daily.
b) X-ray images. Not so many, but large. Large in total. Never change
once established. Backed up every two hours.
c) Archive. Mostly X-ray images but also some personal files from people
who have left the lab. Projects that have been or are being published
and data that need to be preserved 'indefinitely'. Backed up when I have
time or when we run low on storage space (whichever comes first).

All files reside on a network-attached storage device, currently with
2 TB of space, which can be expanded to 4x the largest available HD
(currently 4x 1 TB or better, I lose track). We have two of these
devices, one primary and one backup in a different building.

We archive (are set up to archive) to external HDs. We make two
archive copies: one stays in a file cabinet and one goes home with
the PI, so there are two copies at all times. Presumably entire
projects will be archived at once (with multiple data sets,
consisting of hundreds of X-ray images).

We designed it this way because we wanted 'instant security' once the
files are established and we did not want to overwhelm the campus
network with large backups overnight when data are collected.

In the end, all our storage is on standard HDs, always in duplicate.
Our network-attached storage consists of two Infrant (now NetGear)
ReadyNAS NV+ systems (they are X-RAIDed). We have run this system for
a couple of years now and it works like a charm. Our local computers
have no disk storage other than the O/S, so no local files. Our O/S
installations are backed up once in a long while to a VM server, so
in theory everything should be disaster-proof.

I don't know that I would ask 'outsiders' like the PDB to keep copies
of files. After all, the researcher is responsible for keeping good
copies of their research data. It is not hard to do, but it requires
quite a bit of thinking, probably by an IT specialist. In particular,
I can remember when our 9-track tape system was thrown out in grad
school: all media (with data) were subsequently useless. So you have
to keep up with the times and upgrade storage once in a while, even
if I have to admit that James' clay tablets are 'almost forever'.
Technically, I think our 'forever' storage ends when the PI(s)
retire(s).

----------------------------------------------------------------------
Ashley Buckle:

We are working on a new version of TARDIS that massively simplifies
the software requirements (no database needed), using Web Stores. We
are planning to release this at the beginning of April (but not the
1st!).

See http://tardis.edu.au/wiki/index.php/TARDIS_Web_Stores

In a nutshell:


TARDIS Web Stores takes the original federated approach and makes it
far more powerful, flexible and easy to set up in individual labs and
institutions. Instead of the current requirement for data/metadata to
reside in a Fedora Digital Repository, TARDIS Web Stores indexes
files stored on any simple web server (and an optional additional FTP
server).

Aside from the greatly simplified storage setup, an added bonus of
this approach is that data sets no longer reside in large archives -
one can download individual files or an entire data set at once.
Metadata will be storable/searchable at any level
(experiment/dataset/datafile), meaning the flexibility of what
metadata can be stored for display on the TARDIS site is virtually
infinite. Shifting data from server to server, or changing the web
address pointing to the data, is no problem, as all that needs to be
done for data to show up in TARDIS is a link to an XML manifest
residing next to the data itself. A program to scan files for
metadata and produce a TARDIS-compatible manifest file for
registration will also be distributed. We believe this added
functionality, coupled with the ease of making data known to TARDIS,
will greatly increase the amount of data indexed once this next
iteration is released.
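
To illustrate the "manifest next to the data" idea, a minimal sketch
that walks a dataset directory and emits an XML listing of files with
sizes and checksums for a harvester to index. The element names and
paths are made up for illustration; they are not the actual TARDIS
manifest schema.

import hashlib
import os
import xml.etree.ElementTree as ET

def file_md5(path, bufsize=1 << 20):
    h = hashlib.md5()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(bufsize), b""):
            h.update(block)
    return h.hexdigest()

def write_manifest(dataset_dir, out_path, base_url):
    """List every regular file in dataset_dir as a <datafile> entry."""
    root = ET.Element("dataset", name=os.path.basename(dataset_dir))
    for name in sorted(os.listdir(dataset_dir)):
        path = os.path.join(dataset_dir, name)
        if not os.path.isfile(path):
            continue
        ET.SubElement(root, "datafile",
                      url=f"{base_url}/{name}",
                      size=str(os.path.getsize(path)),
                      md5=file_md5(path))
    ET.ElementTree(root).write(out_path, encoding="utf-8",
                               xml_declaration=True)

# e.g. write_manifest("/srv/www/data/run42",
#                     "/srv/www/data/run42/manifest.xml",
#                     "http://xtal.example.edu/data/run42")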
----------------------------------------------------------------------
