This is actually pretty important.
If I understand it correctly, this means that we are limited to 2TB
filesystems (am I correct?). Our last four or five disk servers came
with just over 3TB of disk, and we have just had two 6TB disk servers
delivered. We had planned to install DPM on these machines (an action
from the dteam meeting).
So my point is that while 20TB as a single file system is still
exceptional, file systems larger than 2TB are now commonplace.
So, please can people close to the DPM developers try to persuade them
that this deserves some priority (assuming that my understanding is
correct ... and I would be grateful if people could correct it if it
isn't).
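For anyone who wants to check this on their own disk servers, below is
a quick sketch comparing plain statvfs() against the 64-bit
statvfs64() call that Jean-Philippe mentions. It is untested and
assumes Linux/glibc; on a large enough filesystem (with a 32-bit
build) the first call should fail with EOVERFLOW while the second
reports the real capacity:

  /* fscheck.c -- compare statvfs() and statvfs64() on a path.
     Build with: gcc -o fscheck fscheck.c */
  #define _LARGEFILE64_SOURCE   /* needed for statvfs64 on glibc */
  #include <errno.h>
  #include <stdio.h>
  #include <string.h>
  #include <sys/statvfs.h>

  int main(int argc, char *argv[])
  {
      const char *path = (argc > 1) ? argv[1] : "/";
      struct statvfs vbuf;
      struct statvfs64 vbuf64;

      /* 32-bit interface: expected to fail with EOVERFLOW when the
         block counts do not fit in the 32-bit struct fields */
      if (statvfs(path, &vbuf) < 0)
          printf("statvfs(%s): %s\n", path, strerror(errno));
      else
          printf("statvfs:   %llu bytes total\n",
                 (unsigned long long)vbuf.f_blocks * vbuf.f_frsize);

      /* 64-bit interface: should report the capacity correctly */
      if (statvfs64(path, &vbuf64) < 0)
          printf("statvfs64(%s): %s\n", path, strerror(errno));
      else
          printf("statvfs64: %llu bytes total\n",
                 (unsigned long long)vbuf64.f_blocks * vbuf64.f_frsize);

      return 0;
  }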
All the best,
david
On Mon, 8 Aug 2005, Alex Martin wrote:
> For info... it appears it's not currently possible to use DPM
> on our setup here:
>
>
> On Saturday 06 August 2005 09:32, you wrote:
> > On Fri, 5 Aug 2005, noreply [Alex Martin] wrote:
> > > My attempt to add a fairly large (~20TB) filesystem to dpm
> > > results in the following error:
> > >
> > > [root@se01 local]# dpm-addfs --pool Volatile --server se01
> > > --fs /pool/data/lcg
> > > dpm-addfs Volatile se01 /pool/data/lcg: Value too large for defined
> > > datatype
> >
> > We use statfs/statvfs to get the capacity and the free space on
> > the filesystems. Your filesystem is too big and the system call
> > returns EOVERFLOW.
> > We have to use statfs64/statvfs64.
> > So we have to add this method to RFIO and modify DPM to use it.
> > This is not a one-hour job (much more).
> > We will certainly do it, but later.
> > However having a huge filesystem like this does not fit very well
> > with the DPM architecture. The DPM manages filesystems distributed over
> > many servers and will try to optimize the filesystem selection
> > according to the I/O load (or even network load). Having a huge
> > filesystem like this in the DPM means that the DPM cannot optimize
> > anything and must rely on the filesystem's own optimization...
> > Maybe it's the solution for the future.
> > Jean-Philippe
>
>