Hi all,
I created a page in the wiki and already put some test results in.
Feel free to change the format (I'm not a Mediawiki expert).
We could think about what is better or how to maintain the page:
* create a new table (same style) for each set of tests
* continue the current table until it gets too wide, then start a new one
* do something completely different
The page is at: https://www.gridpp.ac.uk/wiki/ZFS-Tests
Cheers,
Marcus
On Thu, 26 Jan 2017, George, Simon wrote:
> Thanks Dan, I'll include you too then.
>
>
> Would one of the leading GridPP storage enthusiasts please make a wiki page where we can start collecting this information?
>
> I'm not sure of the best place to put it in the current structure.
>
> I am willing to fill it in with what we have so far.
>
>
> Thanks,
>
> Simon
>
> ________________________________
> From: Daniel Traynor <[log in to unmask]>
> Sent: 26 January 2017 15:07
> To: [log in to unmask]; George, Simon
> Subject: Re: opportunity for hardware test
>
> When I set up the new Lustre system at QM I went through a set of IOzone benchmark tests (which I then wrote up as a CHEP poster). I would be interested in running the same tests on other hardware for comparison.
>
> Some differences will exist. E.g. we use a modified version of ext4 which allows for very large file systems (the normal version is limited to 16TB(?)). We did the tests on SL6, but these tests should probably be done on CentOS 7. I'm not sure the tuning we have for ext4 will work for ZFS (might be interesting to check).
>
> https://indico.cern.ch/event/505613/contributions/2230962/attachments/1338189/2029076/Poster-175.pdf
>
> To test a single server, IOzone was run with 12 threads, each transferring a 24GB file in chunks of 1024kB (12 threads because the box had 12 cores; 24GB files to try to remove the impact of RAM file caching).
>
> iozone -e -+u -t 12 -r 1024k -s 24g -i 0 -i 1 -i 5 -i 8
>
> I would like to run that test and a second one if the box has more than 12 cores.
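Dan's single-server recipe scales naturally to boxes with more cores or more RAM. A minimal sketch, assuming one thread per core and a per-thread file size of roughly twice each thread's share of RAM (the thread/size heuristics are assumptions; the iozone flags are taken verbatim from his command above, and the sketch only prints the command rather than running it):

```shell
#!/bin/sh
# Scale Dan's IOzone test to the machine at hand (dry run: prints the command).
# Assumptions: one iozone thread per core; per-thread file size of about
# 2x (RAM / threads), with a 2 GB floor, to swamp the page cache.

threads=$(nproc)                                      # one thread per core
ram_kb=$(awk '/MemTotal/ {print $2}' /proc/meminfo)   # total RAM in kB

# per-thread file size in GB: twice each thread's share of RAM, min 2 GB
size_gb=$(( 2 * ram_kb / threads / 1024 / 1024 ))
[ "$size_gb" -lt 2 ] && size_gb=2

cmd="iozone -e -+u -t $threads -r 1024k -s ${size_gb}g -i 0 -i 1 -i 5 -i 8"
echo "$cmd"
```

On Dan's 12-core box this reproduces something close to his original invocation; on a bigger box it bumps the thread count and shrinks or grows the per-thread file size accordingly.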
>
> dan
>
> * Dr Daniel Traynor, Grid cluster system manager
> * Tel +44(0)20 7882 6560, Particle Physics,QMUL
>
> ________________________________________
> From: GRIDPP2: Deployment and support of SRM and local storage management <[log in to unmask]> on behalf of George, Simon <[log in to unmask]>
> Sent: 26 January 2017 14:27
> To: [log in to unmask]
> Subject: Re: opportunity for hardware test
>
> Sure Matt, ok.
>
> If we agree on which tests we want to do (even if it is several), it does not matter so much who does them.
>
> If something is not as expected and you want to poke around, we can arrange it.
>
> Cheers,
>
> Simon
>
> ________________________________
> From: GRIDPP2: Deployment and support of SRM and local storage management <[log in to unmask]> on behalf of Matt Doidge <[log in to unmask]>
> Sent: 26 January 2017 12:44
> To: [log in to unmask]
> Subject: Re: opportunity for hardware test
>
> Hi Simon,
> It's good news that you're being offered this; I'm also very
> interested in hearing the results. I'm happy to get involved too (we
> have a good relationship with XMA, so I don't foresee a problem there),
> but I can't help but think there's a certain point where "too many
> admins would spoil the hardware test" - we can only run one test at a
> time, and I suspect we'd all be interested in the results of the same
> types of tests. A wishlist of these tests would be a good addition to
> Marcus' proposed wiki page (which I also think is a good idea).
>
> Cheers!
> Matt
>
> On 26/01/17 11:03, Marcus Ebert wrote:
>> Hi George,
>>
>> That's great!
>> I would appreciate it if you could include me in the discussion and test
>> access.
>>
>> I haven't got such a good offer from any vendor, but for LSST we
>> purchased a testbed which consists of 2 different kinds of servers,
>> where the only difference is the hardware RAID vs. HBA card.
>>
>> I think in general it would be good if we opened a wiki page where we
>> collect all test results, incl. OS version, HBA/hardware RAID version,
>> cache modes, RAM/CPU, number of disks, kinds of disks, and test modes
>> (e.g. sequential vs. parallel), and also note any problems.
>> This could be helpful for future purchases.
>>
>>
>> Thanks,
>> Marcus
>>
>>
>>
>> On Thu, 26 Jan 2017, George, Simon wrote:
>>
>>> Hi,
>>>
>>> XMA have offered me the use of their test facility to compare
>>> performance of HBA + sw raid vs RAID cards in a current storage
>>> system. They are happy for other people to be involved too. If you're
>>> interested, please let me know so I can include you in discussion of
>>> the system setup and get you access for testing.
>>>
>>> Has anyone else asked for or received a similar offer from this or any
>>> other vendor? Our local HPC sales and technical people are very keen
>>> to have us make use of their test facility.
>>>
>>> Thanks,
>>>
>>> Simon
>>>
>>
>
--
The University of Edinburgh is a charitable body, registered in
Scotland, with registration number SC005336.