For everyone who already downloaded the script: The main tests were
still commented out in the script, left over from testing the help function.
It is corrected now in the online version.
If you have already downloaded it, you can also go to the end of the
script and remove the # in front of the tests, or just download the
corrected script from the same link.
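In case it helps, a one-liner like the following can strip the leading # from the commented-out test calls. This is only a sketch: the script name and the test function names below are placeholders, not the actual contents of the script.

```shell
# Demo of uncommenting test lines with sed. 'fs_test_demo.sh' and the
# test names 'run_read_test'/'run_write_test' are hypothetical stand-ins.
cat > /tmp/fs_test_demo.sh <<'EOF'
echo "helper functions here"
#run_read_test
#run_write_test
EOF

# Remove the leading '#' from the commented-out test invocations
sed -i 's/^#\(run_read_test\|run_write_test\)$/\1/' /tmp/fs_test_demo.sh

cat /tmp/fs_test_demo.sh
```

Editing by hand works just as well, of course; the tests are simply the commented lines at the end of the file.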
Sorry for the confusion!
Marcus
On Thu, 26 Jan 2017, Marcus Ebert wrote:
> Hi all,
>
> I put the script I used for the XFS/ZFS/EXT4 tests on a web server.
> Maybe it could be helpful for everyone who wants to test different
> filesystems/hardware configurations.
>
> For the read test, I used a mix of log files, data files, and user output
> files from different experiments, which were stored on compressed space for
> ZFS; about 5TB in total. A mix of files was used to simulate what is used
> in real production: everything from very small files up to very large files.
>
> The write test will also write about 5TB in total by default.
> All parameters, however, can easily be varied.
>
> The link to the script can be found here:
> http://gridpp-storage.blogspot.co.uk/2017/01/file-system-tests.html
>
> Hope it can be helpful.
>
>
> Cheers,
> Marcus
>
> On Thu, 26 Jan 2017, George, Simon wrote:
>
>> Sure Matt, ok.
>>
>> If we agree what tests we want to do (even if it is several) it does not
>> matter so much who does them.
>>
>> If something is not as expected and you want to poke around, we can
>> arrange it.
>>
>> Cheers,
>>
>> Simon
>>
>> ________________________________
>> From: GRIDPP2: Deployment and support of SRM and local storage management
>> <[log in to unmask]> on behalf of Matt Doidge
>> <[log in to unmask]>
>> Sent: 26 January 2017 12:44
>> To: [log in to unmask]
>> Subject: Re: opportunity for hardware test
>>
>> Hi Simon,
>> This is good news that you're being offered this; I'm also very
>> interested in hearing the results. I'm happy to get involved too (we
>> have a good relationship with XMA, so I don't foresee a problem there),
>> but then I can't help thinking there's a certain point where "too many
>> admins would spoil the hardware test" - we can only do one test at a
>> time, and I suspect we'd all be interested in the results of the same
>> types of tests. Creating a wishlist of these tests would be a good
>> addition to Marcus' proposed wiki page (which I also think is a good idea).
>>
>> Cheers!
>> Matt
>>
>> On 26/01/17 11:03, Marcus Ebert wrote:
>> > Hi George,
>> >
>> > That's great!
>> > I would appreciate it if you could include me in the discussion and
>> > test access.
>> >
>> > I haven't had such a good offer from any vendor, but for LSST we
>> > purchased a testbed which consists of two different kinds of servers,
>> > where the only difference is the hardware RAID vs. HBA card.
>> >
>> > I think in general it would be good if we opened a wiki page where we
>> > collect all test results, incl. OS version, HBA/hardware RAID version,
>> > cache modes, RAM/CPU, disk number, kind of disks, and test modes (e.g.
>> > sequential vs. parallel), and also note any problems.
>> > This could maybe be helpful for future purchases.
>> >
>> >
>> > Thanks,
>> > Marcus
>> >
>> >
>> >
>> > On Thu, 26 Jan 2017, George, Simon wrote:
>> >
>> > > Hi,
>> > >
>> > > XMA have offered me the use of their test facility to compare
>> > > performance of HBA + sw raid vs RAID cards in a current storage
>> > > system. They are happy for other people to be involved too. If you're
>> > > interested, please let me know so I can include you in discussion of
>> > > the system setup and get you access for testing.
>> > >
>> > > Has anyone else asked for or received a similar offer from this or any
>> > > other vendor? Our local HPC sales and technical people are very keen
>> > > to have us make use of their test facility.
>> > >
>> > > Thanks,
>> > >
>> > > Simon
>> > >
>> >
>>
>
>
--
The University of Edinburgh is a charitable body, registered in
Scotland, with registration number SC005336.