> -----Original Message-----
> From: Testbed Support for GridPP member institutes [mailto:TB-
> [log in to unmask]] On Behalf Of Peter Grandi
>
> >> I am just looking around to find what is the commonly used VM
> >> Hypervisor in GridPP group (i.e Xen, KVM or etc)
>
> > KVM without a shadow of a doubt. It works, it's easy, and it's in SL5
> > as standard issue. You'd need a good positive reason to go for
> > anything else these days.
>
> I agree with this. I would, however, say that while KVM is good, in
> other contexts I have had good experiences with Xen with a
> paravirtualized kernel, since paravirtualization can have lower
> overheads than full virtualization (even with AMD or Intel hardware
> virtualization assist).
>
Not so much any more; suitably modern Linux distributions (which
in SL terms means 5.4 or later) use the 'virtio' device drivers
for networking and disk access which work on essentially the same
principle as Xen style paravirtualised drivers, rather than the
traditional 'emulated hardware' approach that full virt used to
use.
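To illustrate (a sketch rather than our exact configuration; the image
path and bridge name are placeholders), a libvirt guest definition
requests the virtio drivers via the 'bus' and 'model' attributes:

```xml
<!-- disk presented to the guest as a virtio block device (/dev/vda) -->
<disk type='file' device='disk'>
  <driver name='qemu' type='raw'/>
  <source file='/var/lib/libvirt/images/guest.img'/>
  <target dev='vda' bus='virtio'/>
</disk>
<!-- network interface using the virtio NIC model on bridge br0 -->
<interface type='bridge'>
  <source bridge='br0'/>
  <model type='virtio'/>
</interface>
```

An SL 5.4 or later guest kernel picks these up with the virtio_blk and
virtio_net modules; an older guest would need the emulated IDE/e1000
devices instead.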
> Xen also allows moving VM images around, which may be
> useful (it was designed to implement the XenoServers "cloud").
>
Not quite sure what you mean, but if you're referring to 'live
migration' of a running VM from one host to another, then KVM/libvirt
allow that too. You do, of course, still need shared storage, whether
that's iSCSI, NFS, Lustre or whatever.
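For the record, a live migration under libvirt is a one-liner along
these lines (hostnames are placeholders; it assumes both hosts see the
guest's backing storage at the same path):

```shell
# live-migrate the running guest 'ce01' to host kvm02 over ssh;
# the domain keeps running throughout, memory is copied across
virsh migrate --live ce01 qemu+ssh://kvm02/system
```

Without shared storage the migration will start but the guest's disk
won't be there on the far side, hence the iSCSI/NFS/Lustre requirement.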
>
> However, given the relatively small number of hosts and host types in
> a T2, I would strongly prefer for a new setup to just buy a number of
> smaller, low power-draw real machines.
>
There are a number of problems with that. Firstly, the availability
of such things from our normal suppliers, and under the sorts of
support terms we tend to want, isn't all that good. Secondly, and
most importantly, you have to know ahead of time what service nodes
you're going to want and what their system requirements will be, and
we usually don't. A VM setup gives you the ability to commission a
new node for production or for testing, or to increase the memory/
disk space/cpu cores allocated to an existing one, very easily. That
has been enormously useful.
> > [ ... ] CREAM CEs, in our experience, will want about 6GB of RAM each,
> > so two of those, plus say 1GB for each of the others, totals 15GB, and
> > leaves you a little over for the host OS.
>
> That seems reasonable to me, but I found to my surprise that the CREAM CE
> was lighter than the LCG one, and 2GiB seemed adequate:
>
I suspect this rather depends on the size and usage of the cluster.
> > we have our VMs storage on an old 14 drive supermicro disk server, and
> > that seems able to cope too (it's not running the VMs though, so all
> > its memory is disk cache).
>
> If that is a SAN with virtual disks allocated as chunks of the SAN it
> seems viable to me, if it is a NAS (NFS) it seems a lot less of a good
> idea.
>
Ours is iSCSI (using the software iSCSI target running on SL), but I
have heard of people getting perfectly respectable performance with
files on NFS backing stores.
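For what it's worth, an NFS backing store is nothing more exotic than
an image file on the mounted share. A sketch (paths and sizes are
illustrative only):

```shell
# create a sparse 40G raw image on an NFS-mounted VM store
qemu-img create -f raw /mnt/vmstore/ce01.img 40G
```

The usual advice for NFS-backed guests is to set cache=none on the
disk (via the libvirt <driver> element) so you aren't caching the same
blocks in both the guest and the NFS client.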
> > typical small server setup of two basic SATA disks in a RAID mirror
> > though, you won't have enough IO capacity to go round, particularly
> > for the CREAM CEs.
>
> In my experience that actually sort of worked, choosing nice disks and a
> nice compact layout, but it was just sufficient in some cases.
>
It's sufficient if you don't do much IO. It wouldn't stand up to
a pair of CREAM CEs, which do like thrashing their disks a bit.
Ewan