Sorry, my "P4" statement came out wrong.
What I meant was that one does not need a high-end modern machine for such a server.
The workstation we use is a dual-processor Dell Precision 470 with ECC RAM. As it happens, we also have an identical computer sitting around in case of a failure.
I absolutely agree that reliability is paramount.
CPU load-wise:
our software RAID6 with 8 members does not produce CPU loads above 50% when writing 1 TB. I think 50% of two 3.2 GHz Prescott-2M CPUs from 2005 is not all that much, though an older single-core processor would struggle in such a situation.
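For scale: RAID6 spends two members' worth of capacity on parity, so an 8-member array of, say, 1 TB disks yields roughly 6 TB of usable space (the 1 TB member size here is only an illustration):

```shell
# RAID6 usable capacity = (members - 2 parity disks) * member size
members=8
disk_tb=1   # example member size in TB
echo "$(( (members - 2) * disk_tb )) TB usable"
```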
Best regards,
Dmitry
On 2013-07-29, at 11:08 AM, Georg Zocher wrote:
> Dear Sergei,
>
> I agree in principle with the setup suggested by Dmitry.
>
> But I would not use an old P4 system as the central device for all the other workstations. Keep in mind that such an old system has a higher chance of failing. As this is your central unit, which keeps the work of all other members running, I would not trust such an old machine (unless there really is no money). Your lifetime is more expensive than hardware...
>
> If you do go for such a cheap setup, I would at least configure a second P4 system that you can plug in directly after a hardware failure of system 1. Depending on the RAID level and the number of hard disks, I assume that a single-core P4 will not be sufficient in a setup with several users, especially with a software RAID setup (although I do not have solid data for this).
>
> I would highly recommend buying a system that is designed for 24/7/365 operation. I installed such a machine in our workgroup three years ago, including a 3x RAID6 hardware setup for all /home/$USERNAME directories, diffraction data, and crystallographic software. Workstations are attached via 2x 1 GBit network connections (bonding) and are diskless. They get their image from the server using tftpboot. This substantially reduces administration time. In particular, it allows you to "set up" a new workstation by simply adding it to the dhcpd.conf on the server...
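> 
> A minimal dhcpd.conf host entry for a new diskless client might look like this (host name, MAC, and addresses are placeholders, not our real configuration):
> 
> ```conf
> host ws05 {
>     hardware ethernet 00:11:22:33:44:55;  # MAC of the new workstation
>     fixed-address 192.168.0.15;           # its static IP on the LAN
>     next-server 192.168.0.1;              # TFTP server holding the boot image
>     filename "pxelinux.0";                # PXE boot loader to hand out
> }
> ```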
>
> All the best,
> Georg
>
> On 29.07.2013 15:38, Dmitry Rodionov wrote:
>> Dear Sergei,
>>
>> IMO, the easiest way to achieve your goals is good old NIS and NFS with a centralized server on a wired gigabit network. You could go with LDAP instead of NIS, but it is considerably more difficult to set up.
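>> 
>> On the client side, hooking into NIS mostly comes down to a few lines in /etc/nsswitch.conf, roughly like this (a sketch; your distribution's defaults may list additional sources):
>> 
>> ```conf
>> # /etc/nsswitch.conf on a client: check local files first, then NIS
>> passwd: files nis
>> group:  files nis
>> shadow: files nis
>> ```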
>> One computer would act as a server, containing the user database, homes and programs.
>> Hardware RAID is not worth it. You are better off getting a Linux-supported SAS/SATA HBA (e.g. Dell SAS 6/iR) and using mdadm to make a software RAID 5 out of a bunch of inexpensive consumer-grade SATA disks. You need a minimum of 3 drives for RAID 5; more drives give you more usable space. An external HDD enclosure might be necessary, depending on the server's chassis and the desired number of drives.
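>> 
>> A sketch of the mdadm side with four disks (device names are examples, yours will differ, and this destroys any data on the listed disks):
>> 
>> ```shell
>> # Assemble four SATA disks into a software RAID 5 and put a filesystem on it
>> mdadm --create /dev/md0 --level=5 --raid-devices=4 \
>>       /dev/sdb /dev/sdc /dev/sdd /dev/sde
>> mkfs.ext4 /dev/md0
>> # Record the array so it is assembled automatically at boot
>> mdadm --detail --scan >> /etc/mdadm/mdadm.conf
>> ```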
>> We built our server from an old P4 workstation with a couple of gigs of RAM (8 clients). Having two or more cores is a benefit.
>> If I am not mistaken, software RAID 5 is not bootable, so you would need an extra drive (it can be very small) for the core part of the OS.
>> Export /home and /usr/local with NFS, mount them from the client machines, hook the clients up to NIS, and you are done. Some programs might not reside in /usr/local, in which case you would have to export and mount more directories.
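>> 
>> For instance, the server's /etc/exports could look like this (the 192.168.0.0/24 subnet is only an example):
>> 
>> ```conf
>> # /etc/exports on the server
>> /home       192.168.0.0/24(rw,sync,no_subtree_check)
>> /usr/local  192.168.0.0/24(rw,sync,no_subtree_check)
>> ```
>> 
>> Run exportfs -ra afterwards to apply the changes.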
>> The Ubuntu community has pretty good, easy-to-follow guides for NIS, NFS, and mdadm.
>>
>> Best regards,
>> Dmitry
>>
>> On 2013-07-29, at 6:22 AM, Sergei Strelkov wrote:
>>
>>> Dear all,
>>>
>>> In the old days I, like just about any protein crystallographer,
>>> worked on a cluster of SGI/IRIX workstations with complete NFS-based
>>> cross-mounting of hard disks.
>>>
>>> A typical operation included:
>>> 1. A single home directory location for every user:
>>> if my home directory was on workstation X, I would by default use
>>> it after logging on to any of the workstations in the cluster.
>>> 2. A single location for all software for general use.
>>> (And, obviously, 3. The ability to log on to any node from
>>> any terminal; today this is done via the 'ssh -X' command.)
>>>
>>> I wondered if someone could give us advice on a painless
>>> setup enabling 1. and 2. for a small cluster of Ubuntu computers.
>>> We (will) have about five similar Dell computers on a local (192.168.*.*)
>>> network (wired/wireless). Any tips on the hardware (especially the
>>> LAN and network disks) are also welcome.
>>>
>>> Many thanks,
>>> Sergei
>>>
>>> --
>>> Prof. Sergei V. Strelkov
>>> Laboratory for Biocrystallography
>>> Dept of Pharmaceutical and Pharmacological Sciences, KU Leuven
>>> Herestraat 49 bus 822, 3000 Leuven, Belgium
>>> Work phone: +32 16 330845 Mobile: +32 486 294132
>>> Lab pages: http://pharm.kuleuven.be/anafar
>
>
> --
> Universität Tübingen
> Interfakultäres Institut für Biochemie
> Dr. Georg Zocher
> Hoppe-Seyler-Str. 4
> 72076 Tuebingen
> Germany
> Fon: +49(0)-7071-2973374
> Mail: [log in to unmask]
> http://www.ifib.uni-tuebingen.de
>