Hi Daniel,
I've run some benchmarks on a variety of setups:
FS is XFS with the stripe unit set to 256KB
eg for our RAID10
mkfs.xfs -d su=256k,sw=5 /dev/sda1
mount -o rw,noatime,logbufs=8,logbsize=256k /dev/sda1 /mntpoint
RAID is RAID10 with 5 pairs, or RAID6 on 24 drives
Drives SAS or SATA
OS SL6 or CentOS7
Scheduler has been cfq, deadline or noop. nr_requests 1, 128, 4096 or
10000, readahead 128 or 4096.
MegaRAID 9271-4i RAID controller (with latest firmware) has had
Readahead always/off
IO Policy Direct/Cached
Write Policy Writeback/Writethrough
Disk Cache Enabled/Disabled
The only setting I've found that significantly affects the starvation is
the writeback/writethrough setting.
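For reference, the OS-side settings were switched between runs with the
usual sysfs writes, along these lines (sdb is just a stand-in for the
array device):

echo deadline > /sys/block/sdb/queue/scheduler
echo 4096 > /sys/block/sdb/queue/nr_requests
echo 4096 > /sys/block/sdb/queue/read_ahead_kb

and the controller cache policy flipped with MegaCli, roughly as below
(I'm quoting the syntax from memory, so check it against your MegaCli64
version):

MegaCli64 -LDSetProp WT -LAll -aAll      # writethrough
MegaCli64 -LDSetProp WB -LAll -aAll      # writeback
MegaCli64 -LDGetProp -Cache -LAll -aAll  # show current cache policy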
The artificial test has been to run something like this to write a
bunch of files:

for i in `seq 1 100` ; do dd if=/dev/zero of=/mntpoint/test$i.img bs=4M count=256 ; done

while simultaneously running something like this (with test101-200
precreated):

for i in `seq 1 100` ; do dd if=/mntpoint/test$i.img of=/dev/null bs=4M count=256 ; done

The idea is to write and read a bunch of zeros to/from the array in 1GB
chunks.
Start the reads first, then after a few seconds start up the writes as
well. What I see is that the reads run just fine until the dirty pages
start to be written to disk, at which point the reads grind to a halt.
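Put together as a single script it looks roughly like this (a sketch,
not verbatim: the read loop here uses the precreated test101-200 files
and the sleep is arbitrary):

for i in `seq 101 200` ; do dd if=/mntpoint/test$i.img of=/dev/null bs=4M count=256 ; done &
sleep 10   # let the reads get up to speed first
for i in `seq 1 100` ; do dd if=/dev/zero of=/mntpoint/test$i.img bs=4M count=256 ; done &
wait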
eg these are the reads on a CentOS7 RAID10:
1073741824 bytes (1.1 GB) copied, 1.37975 s, 778 MB/s
256+0 records in
256+0 records out
1073741824 bytes (1.1 GB) copied, 1.387 s, 774 MB/s
256+0 records in
256+0 records out
1073741824 bytes (1.1 GB) copied, 1.35299 s, 794 MB/s
256+0 records in
256+0 records out
1073741824 bytes (1.1 GB) copied, 107.404 s, 10.0 MB/s
256+0 records in
256+0 records out
1073741824 bytes (1.1 GB) copied, 1.43173 s, 750 MB/s
The 10.0 MB/s is when the writes started being flushed to disk. The
write speeds never dropped below a steady 900MB/s.
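If you want to watch the crossover point while the test runs, the dirty
page counters and per-device stats are enough (nothing
controller-specific), eg:

watch -n1 'grep -E "^(Dirty|Writeback):" /proc/meminfo'
iostat -xm 2

The reads should fall off a cliff at the same moment the Dirty counter
starts being written back.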
This isn't a typical workload for a grid storage array, but when big
chunks of new data are being streamed in it's pretty similar, and it
could end up blocking lots of clients.
My feeling is that the scheduler sees the array, with its big fast
writeback cache, as a black hole it can keep shoveling writes into,
while the higher-latency reads just stall (but then why doesn't
switching to deadline or noop make any difference?). Or it could be the
controller logic/driver throwing Linux's queue ordering out of the
window and prioritising writes at the expense of everything else.
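One rough way to tell those two apart (not something I've chased down
properly) would be to check whether the stalled reads are actually
sitting in flight at the device:

cat /sys/block/sda/inflight   # two numbers: reads and writes currently issued to the device

If the read count stays high while the reads stall, the requests have
already left the scheduler and are being held up below it, ie in the
controller; if it's near zero, the block layer isn't dispatching them in
the first place.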
I've not tried with any non-LSI RAID controllers as they're in
production. Seeing if it happens on a different controller would be useful.
Cheers,
John
On 16/06/16 15:03, Daniel Traynor wrote:
> What test are you running, and what is the config of the RAID and file system?
>
> I could try and run the tests on our new HP storage which is not yet in production.
>
> for our Dell R730XD nodes we have
>
> Read Policy : Adaptive Read Ahead
> Write Policy : Write Back
> Cache Policy : Not Applicable
> Stripe Element Size : 64 KB
> Disk Cache Policy : Disabled
>
>
> together with an aligned file system for 16 disks in RAID6
>
> mkfs.ext4 -b 4096 -E stride=16,stripe-width=224 /dev/sdb
>
>
> #
> # OS io tunes for lustre
> #
> echo deadline > /sys/block/sdb/queue/scheduler
> echo 4096 > /sys/block/sdb/queue/nr_requests
> echo 4096 > /sys/block/sdb/queue/read_ahead_kb
> echo madvise > /sys/kernel/mm/redhat_transparent_hugepage/enabled
> echo madvise > /sys/kernel/mm/redhat_transparent_hugepage/defrag
> echo performance | tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor >/dev/null
> echo 1 > /proc/sys/vm/dirty_background_ratio
> echo 75 > /proc/sys/vm/dirty_ratio
> echo 262144 > /proc/sys/vm/min_free_kbytes
>
>
>
>
>
> * Dr Daniel Traynor, Grid cluster system manager
> * Tel +44(0)20 7882 6560, Particle Physics,QMUL
>
> ________________________________________
> From: GRIDPP2: Deployment and support of SRM and local storage management <[log in to unmask]> on behalf of John Bland <[log in to unmask]>
> Sent: 16 June 2016 14:30
> To: [log in to unmask]
> Subject: Heavy writes starving reads
>
> Hi,
>
> Just running some tests on our new storage after seeing very heavy load
> during some transfers.
>
> I think the main problem is that on our MegaRAID controllers write
> operations can starve out reads.
>
> Running concurrent write operations is fine, individual throughput is
> lower but total throughput isn't much less than for a single thread.
> Same for reads.
>
> But running heavy write and read operations concurrently the write
> operations run at nearly normal speed, while reads slow to a crawl
> (orders of magnitude slower).
>
> This happens regardless of which scheduler or scheduler settings I use,
> but only happens on the RAID controller. If I run the same tests on a
> local disk directly attached to the motherboard reads are affected but
> still run at a reasonable throughput.
>
> The only way of stopping this appears to be to disable the Write Back
> cache on the controller, but this impacts write performance terribly.
>
> Has anyone else seen behaviour like this or have any fixes for it? We
> noticed it recently because our new storage was being filled up with 10s
> of TBs of ATLAS data, causing far higher load than expected. Under
> normal operations the writes are more spread out so it's not so noticeable.
>
> We have the same controller on some VM storage where writing 10s of GBs
> at a time is pretty normal and blocking a load of VM images isn't good.
>
> Cheers,
>
> John
>
> ps Sounds very similar to the issue seen in this thread
>
> http://www.spinics.net/lists/target-devel/msg03885.html
>
> --
> John Bland [log in to unmask]
> Research Fellow office: 220
> High Energy Physics Division tel (int): 42911
> Oliver Lodge Laboratory tel (ext): +44 (0)151 794 2911
> University of Liverpool http://www.liv.ac.uk/physics/hep/
> "I canna change the laws of physics, Captain!"
>
--
John Bland [log in to unmask]
System Administrator office: 220
High Energy Physics Division tel (int): 42911
Oliver Lodge Laboratory tel (ext): +44 (0)151 794 2911
University of Liverpool http://www.liv.ac.uk/physics/hep/
"I canna change the laws of physics, Captain!"