Hi Kostas,
that means installing things by hand. Maybe a better solution is what the
Tier1 has done: NFS mounting the disk servers on the frontend pool nodes.
That way you keep the best of both worlds: RHEL4 on the disk servers and a
supported dCache installation on the frontend pools.
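Something along these lines should work (hostnames and paths are purely
illustrative, I haven't checked the Tier1 setup in detail):

   # on the RHEL4 disk server, /etc/exports
   /raid-data   pool-node.example.ac.uk(rw,sync,no_root_squash)

   # on the SL3 frontend pool node
   mount -t nfs disk-server.example.ac.uk:/raid-data /dcache-pool

and the dCache pool on the frontend can then use /dcache-pool as its data
directory.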
cheers
alessandra
On Wed, 25 May 2005, Kostas Georgiou wrote:
> On Wed, May 25, 2005 at 10:47:15AM +0100, Owen Synge wrote:
>
>> On Wed, 25 May 2005 09:32:45 +0100
>> Alessandra Forti <[log in to unmask]> wrote:
>>
>>> Hi Mona,
>>>
>>> I don't think it will work that easily. Someone already tried a YAIM
>>> installation on 32-bit RHEL4 and there were problems with the
>>> dependencies. Besides, you should check whether dcache, globus, srm...
>>> already have 64-bit versions.
>>>
>>> cheers
>>> alessandra
>>
>> As I said before, I would use SL3, as everyone in Tier1 and Tier0 uses
>> SL3 for most things.
>
> The RHEL3-based kernels have problems with disks above 1TB/2TB. Since the
> system has a 3ware card with 3TB of data, using it with RHEL3 was
> impossible until recently[1] without splitting the array into two separate
> RAID5 volumes, which means you lose another 250GB.
> Performance is also much better in RHEL4 compared to the old release.
>
> [1] The latest 3ware firmware/driver release supports auto-carving above
> 2TB, but I haven't tested it. It might not solve the problem though, since
> 2TB is still above the "official" 1TB device limit in RHEL3 (some drivers
> work fine with 2TB, some don't).
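If I've understood the 250GB correctly, the cost works out roughly like
this (assuming 250GB disks; the exact counts are only my guess):

   one RAID5 over 13 disks:          12 x 250GB usable, 1 disk of parity
   two RAID5 volumes (7 + 6 disks):  11 x 250GB usable, 2 disks of parity

so splitting the array costs one extra disk's worth of capacity for parity.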
>
>> I shall give you feed back from D-Cache about support on 64 bits soon as
>> I get the feedback myself.
>
> Since dCache is Java-based it shouldn't matter at all where it's running.
> Just ignore the LCG-provided rpms and hand-pick the ones that you *really*
> need (possibly building new rpms for them) and everything should be fine.
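For what it's worth, hand-picking would look something like this (the
package names are only examples, not the real LCG list):

   rpm -qpR dcache-server.rpm          # see what a package really requires
   rpmbuild --rebuild some-dep.src.rpm # rebuild a dependency from source
   rpm -Uvh --nodeps foo.rpm           # last resort, if a requirement is bogus

which is exactly the "installing by hand" I mentioned above.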
>
> Kostas
>
--
********************************************
* Dr Alessandra Forti *
* Technical Coordinator - NorthGrid Tier2 *
* http://www.hep.man.ac.uk/u/aforti *
********************************************