GRIDPP-STORAGE Archives (GRIDPP-STORAGE@JISCMAIL.AC.UK), November 2005

Subject: Re: SRM deployment at RHUL
From: Greig A Cowan <[log in to unmask]>
Reply-To: Greig A Cowan <[log in to unmask]>
Date: Mon, 21 Nov 2005 22:21:36 +0000
Content-Type: TEXT/PLAIN
Parts/Attachments: TEXT/PLAIN (245 lines)

Hi Oliver,

Here is an email I sent to Simon George about these issues. His questions 
and my comments are interleaved.

Cheers,
Greig

---------- Forwarded message ----------
Date: Fri, 18 Nov 2005 14:18:23 +0000 (GMT)
From: Greig A Cowan <[log in to unmask]>
To: Simon George <[log in to unmask]>
Cc: Olivier van der Aa <[log in to unmask]>,
     duncan rand <[log in to unmask]>
Subject: Re: SRM deployment at RHUL


Hi Simon,

Thanks for outlining your situation.

> Currently we have a front end system with no significant disk space, known 
> as se1. This is currently running our classic SE. The space it provided 
> comes from 3 networked disk arrays, each providing 2.8TB of space. They 
> are nfs-mounted on /lcg2/storage/<VO> where about three VOs are mounted 
> from each array. The arrays are actually PCs with 3-ware cards and lots of 
> disks, running SLC3 + 2.6 kernel, but they have no grid software 
> installed. se1 is the only one suitably equipped to have a mirrored system 
> disk, although it is not currently configured this way.
> 
> As I understand DPM, we should have a controller and multiple file systems
> which are pooled.  They can all be on one machine, but file systems on
> other machines can also be added. If we were starting from scratch with
> these 4 systems, what configuration would you recommend as optimal?

DPM is fairly flexible in how it can be set up. 

On the server side, there are various services that must operate (DPM
daemon, DPNS daemon, SRM, DPM database) but it is possible to set these up
to run on the same or different hosts. Running them on different hosts would 
be an advantage for large sites that expect a lot of traffic and therefore a 
large number of database queries. In your case (as with
the other Tier-2s) having all services on a single "DPM head node" will be
sufficient.

On the storage side, DPM has a set of disk pools that store the data
managed by the head node. These filesystems can be on the head node itself
(including NFS mounted on the head node) and/or on separate disk servers. 
The dpm-gridftp and dpm-rfiod daemons must run on each host that operates 
a pool. To improve access performance, it is best to use a distributed 
setup in which the storage is spread out over the available nodes, since 
you can then have multiple instances of dpm-gridftp running to allow 
simultaneous access to different pools.
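
As a rough sketch (the hostnames, pool name and mount points below are made 
up, and exact option names may vary between DPM releases), creating a pool 
on the head node and attaching filesystems from two disk servers might look 
something like:

# on the DPM head node
dpm-addpool --poolname Permanent --def_filesize 200M
dpm-addfs --poolname Permanent --server disk01.example.ac.uk --fs /storage1
dpm-addfs --poolname Permanent --server disk02.example.ac.uk --fs /storage1
# check the resulting pool/filesystem layout
dpm-qryconf

Each disk server would need the dpm-gridftp and dpm-rfiod daemons running 
before its filesystems can serve data.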

> I expect the 2.8TB file systems will be a problem for DPM. I think this
> rules out the simplest SE to SRM conversion, because DPM cannot use the
> 2.8TB file systems where the files now exist. If we could free up those
> filesystems, we could re-create the file systems in two parts, each under
> 2.8TB.

DPM can handle filesystems > 2TB. There was a problem with this a few months 
ago, but it has been fixed; I will need to update the part of the wiki that 
mentions it. The question is whether you want to keep your storage 
partitioned this way (see below).

> Luckily our classic SE is nowhere near full, so I have already 
> consolidated it down to two of the 3 storage arrays. This means we now 
> have a 2.8TB system unused to play with.
> 
> A separate point: since our WNs are on a private network and all four
> SE-related machines are multi-homed, we would like to take advantage of
> this with the DPM.

This sounds like a great idea and I would be very interested in seeing how 
this goes. I need to warn you though that no one else in the UK has tried 
this yet, so you guys would really be leading if you gave it a go. It 
really shouldn't be too big of an issue. As I mentioned above, all you 
really need to do is have the WNs running the gridftp and rfiod daemons 
and change a couple of configuration files to ensure that the WNs can talk 
to the DPM head node. It is something that I should talk to our sysadmin 
about doing (although we do only have 5 WNs).
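
For what it's worth, the configuration file usually involved here is 
/etc/shift.conf on the head node and pool nodes, where trusted hosts are 
listed. This is only a rough illustration (the hostnames are hypothetical 
and the exact set of entries needed depends on the DPM version, so check the 
wiki/admin guide):

# /etc/shift.conf: hosts allowed to talk to the DPM/DPNS/rfiod services
DPM TRUST wn01.mysite.ac.uk wn02.mysite.ac.uk
DPNS TRUST wn01.mysite.ac.uk wn02.mysite.ac.uk
RFIOD TRUST wn01.mysite.ac.uk wn02.mysite.ac.uk
RFIOD WTRUST wn01.mysite.ac.uk wn02.mysite.ac.uk
RFIOD RTRUST wn01.mysite.ac.uk wn02.mysite.ac.uk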

> Should we have one big disk pool, or one per VO? How easy is it to expand 
> a pool when more network storage becomes available?

Within DPM, you can restrict which VOs can use which pools. See the wiki:

http://wiki.gridpp.ac.uk/wiki/DPM_VO_Specific_Pools

But, I would probably recommend splitting up your partitions into smaller 
filesystems. That way you can more easily control how much storage you are 
placing in each disk pool.
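
As a hedged example (the pool and group names are hypothetical, and the 
exact flag for restricting a pool to a group/VO may differ between versions; 
the wiki page above is the reference to follow), a VO-specific pool might be 
created with something like:

# pool reserved for the atlas group/VO (illustrative only)
dpm-addpool --poolname atlas_pool --def_filesize 200M --group atlas

Filesystems added to atlas_pool with dpm-addfs would then only hold data for 
members of that group.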

It is very easy to add extra filesystems to existing pools as more network 
storage (or disk servers) becomes available. It is simply a case of running 
the command:

dpm-addfs --poolname pool_name --server fs_server --fs fs_name
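
For example, with hypothetical names (a new disk server disk03 exporting a 
filesystem mounted at /storage2, added to an existing pool called Permanent):

dpm-addfs --poolname Permanent --server disk03.example.ac.uk --fs /storage2

Running dpm-qryconf afterwards should show the new filesystem and its 
capacity under that pool.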

> So, given all this, I wonder what is the best way to proceed. I don't want 
> to just leap into the migration driven by what is the easiest way to 
> migrate, only to find that what we end up with is not optimal.
> 
> With my limited understanding, the following options have occurred to me:
> 
> 1) se1 becomes the DPM controller, and nfs-mounts 6x1.4TB file systems 
> from the disk arrays, which are put into disk pools.
> (Migration: can se1 be both a DPM-SRM and classic-SE at once?)
> 
> 2) se1 is just the controller, the 3 storage arrays each with 2x1.4TB file 
> systems are handled over the network by DPM.
> (Migration: can se1 be both a DPM-SRM and classic-SE at once?)
> 
> 3) se1 remains the classic SE. The disk array that is now spare becomes 
> the new DPM controller and has 2x1.4 TB file systems in its pool.
> 
> The migration strategy in all cases would be to move the data from the 
> classic-SE to the DPM-SRM once it is up and running, then once the other 2 
> storage arrays are freed up, add their file systems to DPM.

Regarding the migration of the Classic SE to DPM: you do not have to 
physically move the data in any way; it is purely a metadata operation 
within the DPM namespace. All you do is convert the Classic SE host into 
your DPM host and then run a script to do the metadata operation. All your 
Classic SE data will then be accessible via the DPM. It will also still be 
accessible using the traditional Classic SE data management commands (so 
it is effectively a Classic SE and SRM at once).
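
One way to sanity-check the result (a sketch only, assuming the usual DPM 
namespace layout of /dpm/<your domain>/home/<VO>; the path below is 
hypothetical) is to list the migrated entries with the DPNS tools once the 
script has run:

# the old Classic SE files should now be visible in the DPM namespace
dpns-ls -l /dpm/rhul.ac.uk/home/atlas

The files should appear there without any data having been copied on disk.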

I think the optimal situation would be performing the above migration of 
se1 to your new DPM head node. You could then install the DPM pool 
software (in addition to the other required LCG components) on the disk 
servers. This is the most scalable solution. It would probably be best to 
decrease the size of the partitions on the disk servers, just to give you 
more flexibility in deciding how much storage each VO gets. There may be 
issues with installing LCG on your disk servers; I'm not sure about the 
specifics of doing this.

Does all of the above answer your questions? Let me know if it didn't 
make sense and I will try again. Don't hesitate to ask me for more 
information.

Cheers,
Greig




> Simon George, Dept of Physics, Royal Holloway college, University of London
> Email [log in to unmask]    Tel. +44 1784 41 41 85    Fax. +44 1784 472794
> 
> On Fri, 18 Nov 2005, Greig A Cowan wrote:
> 
> > 
> > Hi Simon,
> > 
> > Today is not ideal for me to speak to you unfortunately. I am getting the 
> > keys to my new flat sometime this afternoon so I am unsure when I will be 
> > available. Would it be possible for you to send me an email with your 
> > questions? I will also be unavailable for the majority of next week since 
> > the flat needs decorating.
> > 
> > If you send me your number I will hopefully try and phone sometime this 
> > afternoon. 
> > 
> > Sorry about the inconvenience.
> > 
> > Greig
> > 
> > On Fri, 18 Nov 2005, Simon George wrote:
> > 
> > > Hi Greig,
> > > 
> > > Duncan and I have looked through the documentation. Now we have some 
> > > questions, mostly concerning the best architecture for our SE. I think it 
> > > would be most efficient if we could speak to you on the phone to explain 
> > > further. Is there a convenient time for us to call you some time today?
> > > 
> > > Cheers,
> > > Simon
> > > 
> > > ---------------------------------------------------------------------------
> > > Simon George, Dept of Physics, Royal Holloway college, University of London
> > > Email [log in to unmask]    Tel. +44 1784 41 41 85    Fax. +44 1784 472794
> > > 
> > > On Sun, 13 Nov 2005, Greig A Cowan wrote:
> > > 
> > > > 
> > > > Hi Simon,
> > > > 
> > > > > just to let you know that our GridPP-funded support post (0.25 FTE,
> > > > > pooled with Brunel) has been taken up by Duncan Rand, who started working
> > > > > at RHUL one day per week last week. He is new to the Grid but
> > > > > more generally experienced in IT so should be able to get up to speed
> > > > > fairly quickly. 
> > > > 
> > > > That's good to hear. There is plenty of information on storage related 
> > > > issues in the storage area of the GridPP wiki:
> > > > 
> > > > http://wiki.gridpp.ac.uk/wiki/Grid_Storage
> > > > 
> > > > If he has time it would be good if Duncan could have a look at some of the 
> > > > pages in the wiki. After finding out about SRM in general it will be 
> > > > necessary to decide whether you want to deploy DPM or dCache. Judging from 
> > > > your current storage capacity and the amount of time that Duncan will have 
> > > > to spend in administering your system, it would appear to me that DPM is 
> > > > most suitable for you. Let me know what you think.
> > > > 
> > > > > As I mentioned before it will be his job to migrate RHUL
> > > > > from Classic SE to SRM. I remember you offered to help advise us on this,
> > > > > so I think it will soon be productive to discuss it a bit with you if you
> > > > > agree. Duncan may be in touch with you about that soon.
> > > > 
> > > > No problem, I'll be happy to give advice at any time. If you want we can
> > > > arrange a phone call. I would also recommend that Duncan join the GridPP
> > > > storage mailing list and attend the weekly phone conferences (Wednesdays
> > > > 1000-1030). These have proven to be very useful resources for sites 
> > > > deploying SRMs. You can find joining instructions at the above URL.
> > > > 
> > > > If Duncan would like to contact me he can use email or phone my office on 
> > > > 0131 650 5300 to discuss these issues further.
> > > > 
> > > > Looking forward to working with you guys.
> > > > 
> > > > Thanks,
> > > > Greig
> > > 
> > 
> > -- 
> > ========================================================================
> > Dr Greig A Cowan                         http://www.ph.ed.ac.uk/~gcowan1
> > School of Physics, University of Edinburgh, James Clerk Maxwell Building
> > 
> > TIER-2 STORAGE SUPPORT PAGES: http://wiki.gridpp.ac.uk/wiki/Grid_Storage
> > ========================================================================
> > 
> 

-- 
 =======================================================================
Dr Greig A Cowan                         http://www.ph.ed.ac.uk/~gcowan1
School of Physics, University of Edinburgh, James Clerk Maxwell Building

TIER-2 STORAGE SUPPORT PAGES: http://wiki.gridpp.ac.uk/wiki/Grid_Storage
 =======================================================================
