Barry
>If I partition the disk as described at the very bottom of the attached
>(hdc1, hdc2, hdc3), the installation works perfectly.
>If I partition the disk into four (also shown below as hdc1, hdc2, hdc3, hdc4),
>the disk is reformatted and the rpms flagged for installation.
>But at the end of actually installing the rpms, I get:
>LCFG object update: update rpms failed
>and I'm exited into a bash shell. The disk is 20-odd Gb, as I recall.
Check in /var/obj/log/updaterpm
I think this is a space problem though.
On the CE here.
$ df -klh
Filesystem            Size  Used Avail Use% Mounted on
/dev/hda2              37G  4.0G   31G  11% /
/dev/hda1              38M  5.3M   31M  15% /boot
And
$ du -skh /home
336M /home
which I really should have in a separate partition.
So that is 3.5GB for / including /usr, /cern, /opt, ... which is absolutely
massive. I am not sure why the cern, atlas and alice stuff is on the CE,
unless people want to run fork jobs, but then I expect people would want to
discourage that anyway.
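Since this smells like a space problem, a quick check along these lines would confirm it before the rpms go on. This is just a sketch of mine, not anything the LCFG tools do, and the 2 GB threshold is purely illustrative:

```shell
#!/bin/sh
# Sketch: warn when / has less free space than a chosen threshold
# before an rpm update. Threshold of 2 GB is an illustrative assumption.
check_space() {
  # $1 = available kilobytes, $2 = threshold in kilobytes
  if [ "$1" -lt "$2" ]; then
    echo "low"
  else
    echo "ok"
  fi
}

# Avail is field 4 on the second line of `df -kP /`
# (-P keeps the output on one line even for long device names).
avail_kb=$(df -kP / | awk 'NR==2 {print $4}')
check_space "$avail_kb" $((2 * 1024 * 1024))
```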
> While I'm about it, do you have any examples of use of the fstab component?
I just had a look into this: EDG does not use the fstab component (I had
not realised it, but Edinburgh does have one). Instead EDG uses the
nfsmount object.
Here is a sample export
EXTRA(nfs.exports) gridsecurity
+nfs.fs_gridsecurity /etc/grid-security
+nfs.options_gridsecurity SITE_SE_HOSTS_(rw,no_root_squash) SITE_WN_HOSTS(rw,no_root_squash)
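For what it's worth, if I have read it right that should come out in /etc/exports as something like the following once the SITE_SE_HOSTS/SITE_WN_HOSTS macros expand to your site's own host lists (the hostnames here are made up):

```
/etc/grid-security  se01.example.org(rw,no_root_squash) wn01.example.org(rw,no_root_squash)
```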
And the corresponding mount
EXTRA(nfsmount.nfsmount) gridsecurity
+nfsmount.nfsdetails_gridsecurity /mnt/grid-security CE_HOSTNAME:/etc/grid-security rw
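For the /stage mounts you mention, the same pattern would presumably be something like this (untested; the stage00/stage01 tag names are my own choice, the syntax just follows the gridsecurity example):

```
EXTRA(nfsmount.nfsmount) stage00 stage01
+nfsmount.nfsdetails_stage00 /stage/gm00/stage gm00:/stage/gm00/stage rw
+nfsmount.nfsdetails_stage01 /stage/gm01/stage gm01:/stage/gm01/stage rw
```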
It is also worth mentioning that this mount is important: you have to
share /etc/grid-security/gridmapdir and /etc/grid-security/grid-mapfile
across all your pool account machines with no_root_squash.
I am quite glad that Edinburgh does not have an nfsmount object, as I never
liked it much; it is a little unusual in that it does the mounting itself
rather than configuring the existing Unix configuration file (/etc/fstab).
Of course I could be completely wrong in my understanding.
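Just to illustrate the distinction: the same grid-security mount expressed as an ordinary static /etc/fstab entry would look something like this (CE_HOSTNAME standing in for the real CE, and `rw` being the only option I can vouch for from the example above):

```
CE_HOSTNAME:/etc/grid-security  /mnt/grid-security  nfs  rw  0 0
```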
Steve
ps. I am heading to IC tomorrow so will be around in the afternoon.
-----Original Message-----
From: Dr Barry MacEvoy [mailto:[log in to unmask]]
Sent: Tuesday, April 30, 2002 5:20 PM
To: Traylen, SM (Steve)
Subject: More partition fun
Hi Steve,
I am mystified again.
If I partition the disk as described at the very bottom of the attached
(hdc1, hdc2, hdc3), the installation works perfectly.
If I partition the disk into four (also shown below as hdc1, hdc2, hdc3, hdc4),
the disk is reformatted and the rpms flagged for installation.
But at the end of actually installing the rpms, I get:
LCFG object update: update rpms failed
and I'm exited into a bash shell. The disk is 20-odd Gb, as I recall.
Any ideas ?
While I'm about it, do you have any examples of use of the fstab component ?
I want to mount several things, for example:
gm00:/stage/gm00/stage as /stage/gm00/stage
gm01:/stage/gm01/stage as /stage/gm01/stage
Cheers,
Barry.
/*
gw30
==============================================
BARRY'S CE FARM NODE
*/
/* Host specific definitions */
#define HOSTNAME gw30
/* Some useful macros */
#include "macros-cfg.h"
/* Site specific definitions */
#include "site-cfg-farm.h.ic"
/* Linux default resources */
#include "linuxdef-cfg-gw30.h"
/* LCFG client specific resources */
#include "client_testbed-cfg-gw30.h"
/* Users */
#include "Users-cfg.h"
/* Computing Element specific resources */
#include "ComputingElement-cfg.h"
/* Specific NIC */
+update.modlist label
+update.mod_label alias eth0 eepro100
/* Partitions */
+update.partitions_hdc hdc3 hdc1 hdc2 hdc4
+update.pdetails_hdc1 32 /boot
+update.pdetails_hdc2 2048 swap
+update.pdetails_hdc3 2048 /
+update.pdetails_hdc4 free /stage/gw30/stage
/*
+update.partitions_hdc hdc2 hdc1 hdc3
+update.pdetails_hdc1 32 /boot
+update.pdetails_hdc2 free /
+update.pdetails_hdc3 512 swap
*/