StorNext File System Tuning
The Underlying Storage System
While read-ahead caching improves sequential read performance, it does
not help highly transactional performance. Furthermore, some SNFS
customers actually observe maximum large sequential read throughput
by disabling caching. While disabling read-ahead is beneficial in these
unusual cases, it severely degrades performance in typical workloads, so
it is unsuitable for most environments.
RAID Level, Segment Size, and Stripe Size
Configuration settings such as RAID level, segment size, and stripe size
are very important and cannot be changed after the file system is put into
production, so it is critical to determine appropriate settings during the
initial configuration.
The best RAID level to use for high I/O throughput is usually RAID5.
The stripe size is the product of the number of data disks in the RAID
group and the segment size. For example, a 4+1 RAID5 group with a 64K
segment size has a 256K stripe size. The stripe size is a very critical
factor for write performance because I/Os smaller than the stripe size
may incur a read/modify/write penalty. It is best to configure RAID5
settings with no more than a 512K stripe size to avoid the read/modify/
write penalty. The read/modify/write penalty is most noticeable when the
RAID controller is not performing "write-back" caching.
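To illustrate the penalty, consider a hypothetical 4+1 RAID5 group with
128K segments (a 512K stripe). A well-aligned 512K write covers the full
stripe, so the controller can compute parity from the new data alone and
write everything in one pass. A 256K write covers only two of the four
data segments, so the controller must first read the untouched segments
(or the old data and parity), recompute parity, and then write, roughly
doubling the number of disk operations for that I/O.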
The RAID stripe size configuration should typically match the SNFS
StripeBreadth configuration setting when multiple LUNs are utilized in a
stripe group. However, in some cases it might be optimal to configure the
SNFS StripeBreadth as a multiple of the RAID stripe size, such as when
the RAID stripe size is small but the user's I/O sizes are very large.
Note that this will be suboptimal for small I/O performance, so it may
not be suitable for general-purpose usage.
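As an illustration, a data stripe group definition in the legacy SNFS
configuration file format might look like the following. This is a minimal
sketch, assuming two data LUNs that each sit on a 4+1 RAID5 group with a
256K stripe; the stripe group name and disk labels (CvfsDisk2, CvfsDisk3)
are hypothetical, and the exact syntax should be confirmed against the
configuration file reference for your StorNext release.

    # Hypothetical data stripe group: StripeBreadth is set to match
    # the 256K RAID stripe size of the underlying 4+1 RAID5 LUNs.
    [StripeGroup RegularFiles]
    Status UP
    Exclusive No
    Read Enabled
    Write Enabled
    StripeBreadth 256K
    MultiPathMethod Rotate
    Node CvfsDisk2 0
    Node CvfsDisk3 1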
RAID1 mirroring is the best RAID level for metadata and journal storage
because it performs best for very small I/O sizes. Quantum
recommends using Fibre Channel or SAS disks (as opposed to SATA) for
metadata and journal due to their higher IOPS performance and reliability.
It is also very important to allocate entire physical disks for the Metadata
and Journal LUNs in order to avoid bandwidth contention with other I/
O traffic. Metadata and Journal storage requires very high IOPS rates
(low latency) for optimal performance, so contention can severely impact
IOPS (and latency) and thus overall performance. If Journal I/O exceeds
1ms average latency, you will observe significant performance
degradation.
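For example, a dedicated metadata and journal stripe group on a RAID1
LUN might be defined as follows. Again, this is a sketch in the legacy
configuration file format; the disk label CvfsDisk0 is hypothetical, and
Exclusive Yes is what keeps regular file data off these LUNs.

    # Hypothetical metadata/journal stripe group on a RAID1 mirror.
    # Exclusive keeps user file data off this LUN so metadata and
    # journal I/O do not contend with other traffic for IOPS.
    [StripeGroup MetaAndJournal]
    Status UP
    MetaData Yes
    Journal Yes
    Exclusive Yes
    Read Enabled
    Write Enabled
    StripeBreadth 256K
    MultiPathMethod Rotate
    Node CvfsDisk0 0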
It can be useful to use a tool such as lmdd to help determine the storage
system performance characteristics and choose optimal settings. For
example, varying the RAID stripe size and running lmdd with a range of
I/O sizes might be useful to determine an optimal stripe size multiple to
configure the SNFS StripeBreadth.
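As a sketch, the following commands use lmdd from the lmbench suite to
measure large sequential write and read throughput on a client with the
file system mounted at a hypothetical /stornext/snfs1; the path, block
size, and count are illustrative, and the available options should be
confirmed with the lmdd documentation on your platform.

    # Sequential write test: 1024 transfers of 4MB, flushed at the end.
    lmdd if=internal of=/stornext/snfs1/lmdd.tmp bs=4m count=1024 fsync=1

    # Sequential read test of the same file.
    lmdd if=/stornext/snfs1/lmdd.tmp of=internal bs=4m count=1024

    # Repeat with a range of bs= values (e.g. 64k, 256k, 1m, 4m) to see
    # how throughput varies with I/O size relative to the stripe size.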