Phil Harman
2011-06-07 16:12:52 UTC
Ok here's the thing ...
A customer has some big tier 1 storage, and has presented 24 LUNs (from
four RAID6 groups) to an OI148 box which is acting as a kind of iSCSI/FC
bridge (using some of the cool features of ZFS along the way). The OI
box currently has 32GB configured for the ARC, and 4x 223GB SSDs for
L2ARC. It has a dual port QLogic HBA, and is currently configured to do
round-robin MPXIO over two 4Gbps links. The iSCSI traffic is over a dual
10Gbps card (rather like the one Sun used to sell).
I've just built a fresh pool, and have created 20x 100GB zvols which are
mapped to iSCSI clients. I have initialised the first 20GB of each zvol
with random data. I've had a lot of success with write performance (e.g.
in earlier tests I had 20 parallel streams writing 100GB each at over
600MB/sec aggregate), but read performance is very poor.
Right now I'm just playing with 20 parallel streams of reads from the
first 2GB of each zvol (i.e. 40GB in all). During each run, I see lots
of writes to the L2ARC, but less than a quarter the volume of reads. Yet
my FC LUNs are hot with 1000s of reads per second. This doesn't change
from run to run. Why?
Surely 20x 2GB of data (and its associated metadata) will sit nicely in
4x 223GB SSDs?
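For what it's worth, here's the back-of-envelope arithmetic behind that
question (a trivial sketch using only the figures above; it ignores
metadata overhead and the ARC space consumed by L2ARC headers):

```python
# Working set of the read test vs. cache capacity, figures from the post.
n_streams = 20
gb_per_stream = 2                 # first 2GB of each zvol is read
working_set_gb = n_streams * gb_per_stream   # 40GB total

arc_gb = 32                       # ARC configured on the OI box
l2arc_gb = 4 * 223                # four 223GB SSDs = 892GB

print(working_set_gb)             # 40
print(working_set_gb <= l2arc_gb) # True - tiny fraction of L2ARC
print(working_set_gb <= arc_gb)   # False - slightly exceeds ARC alone
```

So the 40GB working set is under 5% of the L2ARC, but it does exceed the
32GB ARC on its own, which is exactly why I'd expect the overflow to be
served from the SSDs rather than the FC LUNs on repeat runs.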
Phil