Discussion:
Best controller card for 8 SATA drives ?
Simon Breden
2009-06-21 13:35:50 UTC
Hi, I'm trying to find out which controller card people here recommend that can drive 8 SATA hard drives and that would work with my Asus M2N-SLI Deluxe motherboard, which has following expansion slots:
2 x PCI Express x16 slot at x16, x8 speed (PCIe)

The main requirements I have are:
- drive 8 SATA drives
- rock solid reliability with x86 OpenSolaris 2009.06 or SXCE
- easy to identify failed drives and replace them (hot swap is not necessary but a bonus if supported)
- I must be able to move disks with data from one controller to another of a different brand (and back!), doing only a zpool export and import, which implies the HBA must be able to run in JBOD mode without storing or modifying anything on the disks. And preferably, the drives should show up with the format command.
- should preferably support staggered spinup of the drives
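That move-disks-between-controllers requirement is really just the standard ZFS export/import dance. A dry-run sketch (it only prints the steps rather than running them; the pool name "tank" is hypothetical):

```shell
# Print the controller-swap procedure for a given pool.
# Dry run only: echoes the commands instead of executing them.
move_pool() {
  pool=$1
  echo "zpool export $pool"   # flushes and marks the pool exported
  echo "# power down, move the disks to the new HBA, boot"
  echo "zpool import $pool"   # ZFS reassembles the pool from the disk labels
}

move_pool tank
```

Because ZFS keeps all pool metadata on the disks themselves, this only works if the HBA passes the disks through untouched, which is exactly why the JBOD-mode requirement matters.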
1. Supermicro AOC-SAT2-MV8 (PCI-X interface) (pure SATA) (~$100)
2. Supermicro AOC-USAS-L8i / AOC-USASLP-L8i (PCIe interface) (miniSAS to SATA cables) (~$100)
3. LSI SAS 3081E-R or other LSI 'MegaRAID' cards (PCIe interface) (miniSAS to SATA cables) (~$200+)

1. AOC-SAT2-MV8 :
From reading a bit, I can see that although the M2N-SLI Deluxe motherboard does not have a PCI-X slot, apparently it could take the AOC-SAT2-MV8 card in one of the standard PCI slots, although the card would then only run in 32-bit mode instead of 64-bit mode, and would therefore run slower.

2. AOC-USAS-L8i :
The AOC-USAS-L8i card looks possible too, again running in a PCIe slot, but the old threads I saw seem to talk about a device numbering issue which could make determining the right failed drive to pull a difficult task -- see here for more details:
http://www.opensolaris.org/jive/thread.jspa?messageID=271751
http://www.opensolaris.org/jive/thread.jspa?threadID=46982&tstart=90

3. LSI SAS 3081E-R or other LSI 'MegaRAID' cards :
http://www.lsi.com/storage_home/products_home/host_bus_adapters/sas_hbas/lsisas3081er/index.html?remote=1&locale
http://www.newegg.com/Product/Product.aspx?Item=N82E16816118100

This forum thread from December 2007 doesn't sound too good regarding drive numbering (for identifying failed drives etc.), but the thread is 18 months old, and perhaps the issues have been resolved by now?
Also I noticed an extra '-R' in the model number I found, but this might be an omission by the original forum poster -- see here:
http://www.opensolaris.org/jive/thread.jspa?threadID=46982&tstart=90

I saw Ben Rockwood saying good things about the LSI MegaRAID cards, although the model he references supports only 4 internal and 4 external drives so is not what I want -- see here:
http://opensolaris.org/jive/message.jspa?messageID=368445#368445

Perhaps there are better LSI MegaRAID cards that people know of and can recommend? Preferably not too expensive though, as it's for a home system :)

If anyone can throw some light on these topics, I would be pleased to hear from you. Thanks a lot.

Simon

http://breden.org.uk/2008/03/02/a-home-fileserver-using-zfs/
--
This message posted from opensolaris.org
dick hoogendijk
2009-06-21 15:12:26 UTC
On Sun, 21 Jun 2009 06:35:50 PDT
Post by Simon Breden
If anyone can throw some light on these topics, I would be pleased to
hear from you. Thanks a lot.
I follow this thread with much interest.
Curious to see what'll come out of it.
--
Dick Hoogendijk -- PGP/GnuPG key: 01D2433D
+ http://nagual.nl/ | nevada / OpenSolaris 2009.06 release
+ All that's really worth doing is what we do for others (Lewis Carroll)
Simon Breden
2009-06-21 16:36:49 UTC
After checking some more sources, it seems that if I used the AOC-SAT2-MV8 with this motherboard, I would need to run it on the standard PCI slot. Here is the full listing of the motherboard's expansion slots:

2 x PCI Express x16 slot at x16, x8 speed
2 x PCI Express x1
3 x PCI 2.2 <<----------- this one
Orvar Korvar
2009-06-21 19:01:42 UTC
I use the AOC-SAT2-MV8 in an ordinary PCI slot. The PCI slot maxes out at 150MB/sec or so; that is the fastest you will get. The card works very well with Solaris/OpenSolaris -- it is detected automatically, etc. I've heard, though, that it does not work with hot-swapping discs, so avoid that.

However, in a PCI-X slot it will max out at 1GB/sec. I have been thinking about buying a server mobo (they have PCI-X) to get 1GB/sec. Or should I buy a PCIe card instead? I don't know.
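For what it's worth, those figures match the theoretical bus ceilings; a quick back-of-the-envelope check (decimal MB/s; real throughput is lower, and the plain PCI bus is shared by every device on it):

```shell
# Theoretical peak transfer rate: bus width (bits) / 8 * clock (MHz).
pci=$((32 / 8 * 33))      # 32-bit, 33MHz conventional PCI
pcix=$((64 / 8 * 133))    # 64-bit, 133MHz PCI-X
echo "PCI=${pci}MB/s PCI-X=${pcix}MB/s"
# PCI=132MB/s PCI-X=1064MB/s
```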
roland
2009-06-21 21:07:49 UTC
I fol****this thread with much interest.
what are these "*****" for ?

why is "followed" turned into "fol*****" on this board?
dick hoogendijk
2009-06-21 21:10:19 UTC
On Sun, 21 Jun 2009 14:07:49 PDT
Post by roland
I fol****this thread with much interest.
what are these "*****" for ?
why is "followed" turned into "fol*****" on this board?
The text of my original message was:

On Sun, 21 Jun 2009 06:35:50 PDT
Post by Simon Breden
If anyone can throw some light on these topics, I would be pleased to
hear from you. Thanks a lot.
I follow this thread with much interest.
Curious to see what'll come out of it.

Does the change occur again?
--
Dick Hoogendijk -- PGP/GnuPG key: 01D2433D
+ http://nagual.nl/ | nevada / OpenSolaris 2009.06 release
+ All that's really worth doing is what we do for others (Lewis Carroll)
Simon Breden
2009-06-21 21:20:49 UTC
Hey Kebabber, long time no hear! :)

It's great to hear that you've had good experiences with the card. It's a great pity to have throughput drop from a potential 1GB/s to 150MB/s, but as most of my use of the NAS is across the network, and not local intra-NAS transfers, this should not be a problem. Of course, with a single GbE connection speeds are normally limited to around 50MB/s or so anyway...

Tell me, have you had any drives fail and had to figure out how to identify the failed drive, replace it and resilver using the AOC-SAT2-MV8, or have you tried any experiments to test resilvering? I'm just curious how easy this is to do with this controller card.

Like yourself, I was toying with the idea of upgrading and buying a shiny new mobo with dual 64-bit PCI-X slots and socket LGA1366 for Xeon 5500 series (Nehalem) processors -- the SuperMicro X8SAX here: http://www.supermicro.com/products/motherboard/Xeon3000/X58/X8SAX.cfm

Then I added up the price of all the components and decided to try to make do with the existing kit and just do an upgrade.

So I narrowed down possible SATA controllers to the above choices and I'm interested in people's experiences of using these controllers to help me decide.

Seems like the cheapest and simplest choice will be the AOC-SAT2-MV8, and I just take a hit on the reduced speed -- but that won't be a big problem.

However, as I have 2 x PCIe x16 slots available, if the AOC-USAS-L8i is reliable, no longer has issues with identifying drive IDs, and supports JBOD mode, then it looks like the better choice. It uses the more modern PCI Express (PCIe) interface rather than the ageing PCI-X interface, fine as I'm sure that still is.

Simon
Carson Gaspar
2009-06-22 02:01:31 UTC
I'll chime in as a happy owner of the LSI SAS 3081E-R PCI-E board. It
works just fine. You need to get "lsiutil" from the LSI web site to
fully access all the functionality, and they cleverly hide the download
link only under their FC HBAs on their support site, even though it
works for everything.


As for identifying disks, you can just use lsiutil:

root:gandalf 0 # lsiutil -p 1 42

LSI Logic MPT Configuration Utility, Version 1.62, January 14, 2009

1 MPT Port found

Port  Name   Chip Vendor/Type/Rev    MPT Rev  Firmware Rev  IOC
 1.   mpt0   LSI Logic SAS1068E B3     105      011a0000      0

mpt0 is /dev/cfg/c6

B___T___L Type Operating System Device Name
0 0 0 Disk /dev/rdsk/c6t0d0s2
0 1 0 Disk /dev/rdsk/c6t1d0s2
0 2 0 Disk /dev/rdsk/c6t2d0s2
0 3 0 Disk /dev/rdsk/c6t3d0s2
0 4 0 Disk /dev/rdsk/c6t4d0s2
0 5 0 Disk /dev/rdsk/c6t5d0s2
0 6 0 Disk /dev/rdsk/c6t6d0s2
0 7 0 Disk /dev/rdsk/c6t7d0s2
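If you want that mapping in script form, the fixed columns of the listing make it easy to pull out with awk; a minimal sketch, assuming output in exactly the format shown (the sample lines are pasted inline rather than calling lsiutil):

```shell
# Map LSI logical target IDs to Solaris device names by parsing
# the B/T/L table printed by 'lsiutil -p 1 42'.
lsiutil_map() {
  awk '$4 == "Disk" { printf "target %s -> %s\n", $2, $5 }'
}

# Sample lines taken from the listing above:
printf '%s\n' \
  ' 0   3   0  Disk       /dev/rdsk/c6t3d0s2' \
  ' 0   7   0  Disk       /dev/rdsk/c6t7d0s2' | lsiutil_map
# target 3 -> /dev/rdsk/c6t3d0s2
# target 7 -> /dev/rdsk/c6t7d0s2
```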
James C. McPherson
2009-06-22 02:14:21 UTC
On Sun, 21 Jun 2009 19:01:31 -0700
Post by Carson Gaspar
I'll chime in as a happy owner of the LSI SAS 3081E-R PCI-E board. It
works just fine. You need to get "lsiutil" from the LSI web site to
fully access all the functionality, and they cleverly hide the download
link only under their FC HBAs on their support site, even though it
works for everything.
As a member of the team which works on mpt(7d), I'm disappointed that
you believe you need to use lsiutil to "fully access all the functionality"
of the board.

What gaps have you found in mpt(7d) and the standard OpenSolaris
tools that lsiutil fixes for you?

What is the "full functionality" that you believe is missing?
... or use cfgadm(1m) which has had this ability for many years.
Post by Carson Gaspar
root:gandalf 0 # lsiutil -p 1 42
LSI Logic MPT Configuration Utility, Version 1.62, January 14, 2009
1 MPT Port found
Port Name Chip Vendor/Type/Rev MPT Rev Firmware Rev IOC
1. mpt0 LSI Logic SAS1068E B3 105 011a0000 0
mpt0 is /dev/cfg/c6
B___T___L Type Operating System Device Name
0 0 0 Disk /dev/rdsk/c6t0d0s2
0 1 0 Disk /dev/rdsk/c6t1d0s2
0 2 0 Disk /dev/rdsk/c6t2d0s2
0 3 0 Disk /dev/rdsk/c6t3d0s2
0 4 0 Disk /dev/rdsk/c6t4d0s2
0 5 0 Disk /dev/rdsk/c6t5d0s2
0 6 0 Disk /dev/rdsk/c6t6d0s2
0 7 0 Disk /dev/rdsk/c6t7d0s2
You can get that information from use of cfgadm(1m).



James C. McPherson
--
Senior Kernel Software Engineer, Solaris
Sun Microsystems
http://blogs.sun.com/jmcp http://www.jmcp.homeunix.com/blog
Kernel Conference Australia - http://au.sun.com/sunnews/events/2009/kernel
Carson Gaspar
2009-06-22 08:25:54 UTC
Post by James C. McPherson
On Sun, 21 Jun 2009 19:01:31 -0700
As a member of the team which works on mpt(7d), I'm disappointed that
you believe you need to use lsiutil to "fully access all the functionality"
of the board.
What gaps have you found in mpt(7d) and the standard OpenSolaris
tools that lsiutil fixes for you?
What is the "full functionality" that you believe is missing?
How does one upgrade firmware without using lsiutil?
Or toggle controller LEDs to identify which board is which, or...
Feel free to read the lsiutil docs (bad though they are) - the PDF is
available from the LSI web site.

Although both lsiutil and hd produce errors from mpt when trying to get
SMART data (specifically "Command failed with IOCStatus = 0045 (Data
Underrun)"). I haven't tried using the LSI provided drivers.
Post by James C. McPherson
... or use cfgadm(1m) which has had this ability for many years.
Great. Please provide a sample command line. Because the man page is
completely useless (no, really - try reading it). And no invocation _I_
can find provides the same information. I can only assume it's one of
the "hardware specific" options, which are documented nowhere that I can
find.

Note that my comments all relate to Solaris 10 u7 - it's certainly
possible that things are better in OpenSolaris.
--
Carson
James C. McPherson
2009-06-22 11:26:59 UTC
On Mon, 22 Jun 2009 01:25:54 -0700
Post by Carson Gaspar
Post by James C. McPherson
On Sun, 21 Jun 2009 19:01:31 -0700
As a member of the team which works on mpt(7d), I'm disappointed that
you believe you need to use lsiutil to "fully access all the functionality"
of the board.
What gaps have you found in mpt(7d) and the standard OpenSolaris
tools that lsiutil fixes for you?
What is the "full functionality" that you believe is missing?
How does one upgrade firmware without using lsiutil?
Use raidctl(1m). For fwflash(1m), this is on the "future project"
list purely because we've got much higher priority projects on the
boil - if we couldn't use raidctl(1m) this would be higher up the
list.
Post by Carson Gaspar
Or toggle controller LEDs to identify which board is which, or...
Feel free to read the lsiutil docs (bad though they are) - the PDF is
available from the LSI web site.
LED stuff ... yeah, not such an easy thing to solve.
I believe a fair amount of effort has gone into the generic
FMA topology "parser" so that we can do this, but I do not know the
status of the project.
Post by Carson Gaspar
Although both lsiutil and hd produce errors from mpt when trying to get
SMART data (specifically "Command failed with IOCStatus = 0045 (Data
Underrun)"). I haven't tried using the LSI provided drivers.
Is "hd" a utility from the x4500 software suite?
Post by Carson Gaspar
Post by James C. McPherson
... or use cfgadm(1m) which has had this ability for many years.
Great. Please provide a sample command line. Because the man page is
completely useless (no, really - try reading it). And no invocation _I_
can find provides the same information. I can only assume it's one of
the "hardware specific" options, which are documented nowhere that I can
find.
Did you try "cfgadm -lav" ? I was under the impression that the
cfgadm(1m) manpage's examples section was sufficient to provide
at least a starting point for a usable command line.

If you don't believe that is the case, I'd appreciate you filing
a bug against it (yes, we do like to get doc/manpage bugs) so that
we can make the manpage better.
Post by Carson Gaspar
Note that my comments all relate to Solaris 10 u7 - it's certainly
possible that things are better in OpenSolaris.
$ cfgadm -alv c0 c3
Ap_Id Receptacle Occupant Condition Information
When Type Busy Phys_Id
c0 connected configured unknown
unavailable scsi-bus n /devices/***@0,0/pci10de,***@a/pci1000,***@0:scsi
c0::dsk/c0t4d0 connected configured unknown ST3320620AS ST3320620AS
unavailable disk n /devices/***@0,0/pci10de,***@a/pci1000,***@0:scsi::dsk/c0t4d0
c0::dsk/c0t5d0 connected configured unknown ST3320620AS ST3320620AS
unavailable disk n /devices/***@0,0/pci10de,***@a/pci1000,***@0:scsi::dsk/c0t5d0
c0::dsk/c0t6d0 connected configured unknown ST3320620AS ST3320620AS
unavailable disk n /devices/***@0,0/pci10de,***@a/pci1000,***@0:scsi::dsk/c0t6d0
c0::dsk/c0t7d0 connected configured unknown ST3320620AS ST3320620AS
unavailable disk n /devices/***@0,0/pci10de,***@a/pci1000,***@0:scsi::dsk/c0t7d0
c3 connected configured unknown
unavailable scsi-bus n /devices/***@79,0/pci10de,***@a/pci1000,***@0:scsi
c3::dsk/c3t3d0 connected configured unknown ST3320620AS ST3320620AS
unavailable disk n /devices/***@79,0/pci10de,***@a/pci1000,***@0:scsi::dsk/c3t3d0
c3::dsk/c3t5d0 connected configured unknown SAMSUNG HD321KJ
unavailable disk n /devices/***@79,0/pci10de,***@a/pci1000,***@0:scsi::dsk/c3t5d0
c3::dsk/c3t6d0 connected configured unknown WDC WD3200AAKS-00VYA0
unavailable disk n /devices/***@79,0/pci10de,***@a/pci1000,***@0:scsi::dsk/c3t6d0
c3::dsk/c3t7d0 connected configured unknown ST3320620AS ST3320620AS
unavailable disk n /devices/***@79,0/pci10de,***@a/pci1000,***@0:scsi::dsk/c3t7d0


That functionality has been in Solaris 10 since FCS. The manpage
for cfgadm(1m) indicates that it was last changed in October 2004,
which is a good 6 months prior to FCS of Solaris 10.
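For scripting, the disk lines in that -alv output can be reduced with awk as well; a minimal sketch, assuming the whitespace-separated layout shown above (sample lines are pasted inline rather than invoking cfgadm, and only the first model token is kept):

```shell
# List the disk attachment points and the first model token
# from 'cfgadm -alv' output of the shape shown above.
list_disks() {
  awk '$1 ~ /::dsk\// { printf "%s %s\n", $1, $5 }'
}

printf '%s\n' \
  'c0::dsk/c0t4d0  connected  configured  unknown  ST3320620AS ST3320620AS' \
  'c3::dsk/c3t6d0  connected  configured  unknown  WDC WD3200AAKS-00VYA0' | list_disks
# c0::dsk/c0t4d0 ST3320620AS
# c3::dsk/c3t6d0 WDC
```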

If you don't like it, and don't tell us, how are we supposed to
know that it needs improving?


James C. McPherson
--
Senior Kernel Software Engineer, Solaris
Sun Microsystems
http://blogs.sun.com/jmcp http://www.jmcp.homeunix.com/blog
Kernel Conference Australia - http://au.sun.com/sunnews/events/2009/kernel
Carson Gaspar
2009-06-22 22:28:08 UTC
Post by James C. McPherson
Use raidctl(1m). For fwflash(1m), this is on the "future project"
list purely because we've got much higher priority projects on the
boil - if we couldn't use raidctl(1m) this would be higher up the
list.
Nice to see that raidctl can do that. Although I don't see a way to
flash the BIOS or fcode with raidctl... am I missing something, is it a
doc bug, or is it not possible? The man page intro mentions BIOS and
fcode, but the only option I can see is '-F' and it just says firmware...
Post by James C. McPherson
Post by Carson Gaspar
Although both lsiutil and hd produce errors from mpt when trying to get
SMART data (specifically "Command failed with IOCStatus = 0045 (Data
Underrun)"). I haven't tried using the LSI provided drivers.
Is "hd" a utility from the x4500 software suite?
Yes. It's the only Sun provided tool I know of that will dump detailed
SMART info (even on non-X45x0 hosts).
Post by James C. McPherson
Did you try "cfgadm -lav" ? I was under the impression that the
cfgadm(1m) manpage's examples section was sufficient to provide
at least a starting point for a usable command line.
If you don't believe that is the case, I'd appreciate you filing
a bug against it (yes, we do like to get doc/manpage bugs) so that
we can make the manpage better.
...
Post by James C. McPherson
$ cfgadm -alv c0 c3
Ap_Id Receptacle Occupant Condition Information
When Type Busy Phys_Id
c0 connected configured unknown
c0::dsk/c0t4d0 connected configured unknown ST3320620AS ST3320620AS
That gives the same data as 'ls -l /dev/dsk/c0t4d0'. It does _not_ give
the LSI HBA port number. And given the plethora of device mapping
options in the LSI controller, you really want the real port numbers.

As for the man page, for a basic "give me a list of devices" the man
page is overly complex and verbose, but sufficient. It's all the _other_
options that are documented to exist, but without any specifics. It all
basically reads as "reserved for future use".
--
Carson
James C. McPherson
2009-06-23 22:37:17 UTC
On Mon, 22 Jun 2009 15:28:08 -0700
Post by Carson Gaspar
Post by James C. McPherson
Use raidctl(1m). For fwflash(1m), this is on the "future project"
list purely because we've got much higher priority projects on the
boil - if we couldn't use raidctl(1m) this would be higher up the
list.
Nice to see that raidctl can do that. Although I don't see a way to
flash the BIOS or fcode with raidctl... am I missing something, is it a
doc bug, or is it not possible? The man page intro mentions BIOS and
fcode, but the only option I can see is '-F' and it just says firmware...
We include both bios and fcode in the definition of firmware. The
manpage for raidctl(1m) also gives an example:

Example 4 Updating Flash Images on the Controller

The following command updates flash images on the controller 0:

# raidctl -F lsi_image.fw 0

...
Post by Carson Gaspar
Post by James C. McPherson
Did you try "cfgadm -lav" ? I was under the impression that the
cfgadm(1m) manpage's examples section was sufficient to provide
at least a starting point for a usable command line.
If you don't believe that is the case, I'd appreciate you filing
a bug against it (yes, we do like to get doc/manpage bugs) so that
we can make the manpage better.
...
Post by James C. McPherson
$ cfgadm -alv c0 c3
Ap_Id Receptacle Occupant Condition Information
When Type Busy Phys_Id
c0 connected configured unknown
c0::dsk/c0t4d0 connected configured unknown ST3320620AS ST3320620AS
That gives the same data as 'ls -l /dev/dsk/c0t4d0'. It does _not_ give
the LSI HBA port number. And given the plethora of device mapping
options in the LSI controller, you really want the real port numbers.
I don't see why that makes a difference to you, and I'd be grateful
if you'd clue me in on that. I only know of two device mapping options
for the 1064/1068-based cards, which are "logical target id" and "SAS
WWN". We use the "logical target id" method with mpt(7d).
Post by Carson Gaspar
As for the man page, for a basic "give me a list of devices" the man
page is overly complex and verbose, but sufficient. It's all the _other_
options that are documented to exist, but without any specifics. It all
basically reads as "reserved for future use".
There are other manpages referred to in the SEE ALSO section of
the cfgadm(1m) manpage, just like with other manpages:


SEE ALSO
cfgadm_fp(1M), cfgadm_ib(1M), cfgadm_pci(1M), cfgadm_sbd(1M),
cfgadm_scsi(1M), cfgadm_usb(1M), ifconfig(1M), mount(1M),
prtdiag(1M), psradm(1M), syslogd(1M), config_admin(3CFGADM),
getopt(3C), getsubopt(3C), isatty(3C), attributes(5),
environ(5)


What else are you thinking of as "reserved for future use" ?



James C. McPherson
--
Senior Kernel Software Engineer, Solaris
Sun Microsystems
http://blogs.sun.com/jmcp http://www.jmcp.homeunix.com/blog
Kernel Conference Australia - http://au.sun.com/sunnews/events/2009/kernel
Andre van Eyssen
2009-06-22 02:05:24 UTC
Post by Carson Gaspar
I'll chime in as a happy owner of the LSI SAS 3081E-R PCI-E board. It works
just fine. You need to get "lsiutil" from the LSI web site to fully access
all the functionality, and they cleverly hide the download link only under
their FC HBAs on their support site, even though it works for everything.
I'll add another vote for the LSI products. I have a four port PCI-X card
in my V880, and the performance is good and the product is well behaved.
The only caveats:

1. Make sure you upgrade the firmware ASAP
2. You may need to use lsiutil to fiddle the target mappings

Andre.
--
Andre van Eyssen.
mail: ***@purplecow.org jabber: ***@interact.purplecow.org
purplecow.org: UNIX for the masses http://www2.purplecow.org
purplecow.org: PCOWpix http://pix.purplecow.org
Eric D. Mudama
2009-06-22 04:51:57 UTC
Post by Andre van Eyssen
I'll add another vote for the LSI products. I have a four port PCI-X card
in my V880, and the performance is good and the product is well behaved.
1. Make sure you upgrade the firmware ASAP
2. You may need to use lsiutil to fiddle the target mappings
We bought a Dell T610 as a fileserver, and it comes with an LSI 1068E
based board (PERC6/i SAS). Worked out of the box, no special drivers
or anything to install, everything autodetected just fine.

Hotplug works great too; I've yanked drives (it came with WD RE3 SAS
devices) while the box was under load without issues, and it took ~5
seconds to time out the device and give me full interactivity at the console.
They then show right back up when hot plugged back in, and I can
resilver without problems.

--eric
--
Eric D. Mudama
***@mail.bounceswoosh.org
Miles Nordin
2009-06-22 19:46:45 UTC
edm> We bought a Dell T610 as a fileserver, and it comes with an
edm> LSI 1068E based board (PERC6/i SAS).

which driver attaches to it?

pciids.sourceforge.net says this is a 1078 board, not a 1068 board.

please, be careful. There's too much confusion about these cards.
Eric D. Mudama
2009-06-23 02:33:37 UTC
Post by Miles Nordin
edm> We bought a Dell T610 as a fileserver, and it comes with an
edm> LSI 1068E based board (PERC6/i SAS).
which driver attaches to it?
pciids.sourceforge.net says this is a 1078 board, not a 1068 board.
please, be careful. There's too much confusion about these cards.
Sorry, that may have been confusing. We have the cheapest storage
option on the T610, with no onboard cache. I guess it's called the
"Dell SAS6i/R" while they reserve the PERC name for the ones with
cache. I had understood that they were basically identical except for
the cache, but maybe not.

Anyway, this adapter has worked great for us so far.


snippet of prtconf -D:


i86pc (driver name: rootnex)
pci, instance #0 (driver name: npe)
pci8086,3411, instance #6 (driver name: pcie_pci)
pci1028,1f10, instance #0 (driver name: mpt)
sd, instance #1 (driver name: sd)
sd, instance #6 (driver name: sd)
sd, instance #7 (driver name: sd)
sd, instance #2 (driver name: sd)
sd, instance #4 (driver name: sd)
sd, instance #5 (driver name: sd)


For this board the mpt driver is being used, and here's the prtconf
-pv info:


Node 0x00001f
assigned-addresses: 81020010.00000000.0000fc00.00000000.00000100.83020014.00000000.df2ec000.00000000.00004000.8302001c.00000000.df2f0000.00000000.00010000
reg: 00020000.00000000.00000000.00000000.00000000.01020010.00000000.00000000.00000000.00000100.03020014.00000000.00000000.00000000.00004000.0302001c.00000000.00000000.00000000.00010000
compatible: 'pciex1000,58.1028.1f10.8' + 'pciex1000,58.1028.1f10' + 'pciex1000,58.8' + 'pciex1000,58' + 'pciexclass,010000' + 'pciexclass,0100' + 'pci1000,58.1028.1f10.8' + 'pci1000,58.1028.1f10' + 'pci1028,1f10' + 'pci1000,58.8' + 'pci1000,58' + 'pciclass,010000' + 'pciclass,0100'
model: 'SCSI bus controller'
power-consumption: 00000001.00000001
devsel-speed: 00000000
interrupts: 00000001
subsystem-vendor-id: 00001028
subsystem-id: 00001f10
unit-address: '0'
class-code: 00010000
revision-id: 00000008
vendor-id: 00001000
device-id: 00000058
pcie-capid-pointer: 00000068
pcie-capid-reg: 00000001
name: 'pci1028,1f10'
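Those 'compatible' aliases encode the PCI IDs directly; a small sketch that splits one apart (the vendor names in the comment are my reading of the IDs: 0x1000 is LSI Logic, 0x1028 is Dell):

```shell
# Split a Solaris 'compatible' alias of the form
# pciexVVVV,DD.SSSS.ssss.R into its ID fields.
decode() {
  awk -F'[,.]' '{ printf "vendor=%s device=%s subven=%s subdev=%s\n",
                  substr($1, length($1) - 3), $2, $3, $4 }'
}

echo 'pciex1000,58.1028.1f10.8' | decode
# vendor=1000 device=58 subven=1028 subdev=1f10
```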


--eric
--
Eric D. Mudama
***@mail.bounceswoosh.org
Erik Ableson
2009-06-23 07:11:41 UTC
Just a side note on the PERC labelled cards: they don't have a JBOD
mode so you _have_ to use hardware RAID. This may or may not be an
issue in your configuration but it does mean that moving disks between
controllers is no longer possible. The only way to do a pseudo JBOD is
to create broken RAID 1 volumes which is not ideal.

Regards,

Erik Ableson

+33.6.80.83.58.28
Sent from my iPhone

On 23 June 2009, at 04:33, "Eric D. Mudama"
Post by Eric D. Mudama
[quoted text trimmed]
_______________________________________________
zfs-discuss mailing list
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Kyle McDonald
2009-06-23 15:49:58 UTC
Post by Erik Ableson
Just a side note on the PERC labelled cards: they don't have a JBOD
mode so you _have_ to use hardware RAID. This may or may not be an
issue in your configuration but it does mean that moving disks between
controllers is no longer possible. The only way to do a pseudo JBOD is
to create broken RAID 1 volumes which is not ideal.
It won't even let you make single drive RAID 0 LUNs? That's a shame.

The lack of portability is disappointing. The trade-off though is
battery backed cache if the card supports it.

-Kyle
[quoted text trimmed]
Erik Ableson
2009-06-23 20:01:10 UTC
The problem I had was with the single RAID 0 volumes (I miswrote RAID 1
in the original message).

This is not a straight-to-disk connection, and you'll have problems if
you ever need to move disks around or move them to another controller.

I agree that the MD1000 with ZFS is a rocking, inexpensive setup (we
have several!) but I'd recommend using a SAS card with a true JBOD
mode for maximum flexibility and portability. If I remember correctly,
I think we're using the Adaptec 3085. I've pulled 465MB/s write and
1GB/s read off the MD1000 filled with SATA drives.

Regards,

Erik Ableson

+33.6.80.83.58.28
Sent from my iPhone
Post by Kyle McDonald
Post by Erik Ableson
Just a side note on the PERC labelled cards: they don't have a
JBOD mode so you _have_ to use hardware RAID. This may or may not
be an issue in your configuration but it does mean that moving
disks between controllers is no longer possible. The only way to
do a pseudo JBOD is to create broken RAID 1 volumes which is not
ideal.
It won't even let you make single drive RAID 0 LUNs? That's a shame.
We currently have 90+ disks that are created as single drive RAID 0
LUNs
on several PERC 6/E (LSI 1078E chipset) controllers and used by ZFS.
I can assure you that they work without any problems and perform very
well indeed.
In fact, the combination of PERC 6/E and MD1000 disk arrays has worked
so well for us that we are going to double the number of disks during
this fall.
Post by Kyle McDonald
The lack of portability is disappointing. The trade-off though is
battery backed cache if the card supports it.
-Kyle
Post by Erik Ableson
Cordialement,
Erik Ableson
+33.6.80.83.58.28
Envoyé depuis mon iPhone
Post by Eric D. Mudama
Post by Miles Nordin
edm> We bought a Dell T610 as a fileserver, and it comes with an
edm> LSI 1068E based board (PERC6/i SAS).
which driver attaches to it?
pciids.sourceforge.net says this is a 1078 board, not a 1068
board.
Post by Eric D. Mudama
Post by Miles Nordin
please, be careful. There's too much confusion about these
cards.
Post by Eric D. Mudama
Sorry, that may have been confusing. We have the cheapest storage
option on the T610, with no onboard cache. I guess it's called
the
Post by Eric D. Mudama
"Dell SAS6i/R" while they reserve the PERC name for the ones with
cache. I had understood that they were basically identical
except for
Post by Eric D. Mudama
the cache, but maybe not.
Anyway, this adapter has worked great for us so far.
i86pc (driver name: rootnex)
pci, instance #0 (driver name: npe)
pci8086,3411, instance #6 (driver name: pcie_pci)
pci1028,1f10, instance #0 (driver name: mpt)
sd, instance #1 (driver name: sd)
sd, instance #6 (driver name: sd)
sd, instance #7 (driver name: sd)
sd, instance #2 (driver name: sd)
sd, instance #4 (driver name: sd)
sd, instance #5 (driver name: sd)
For this board the mpt driver is being used, and here's the
prtconf
Post by Eric D. Mudama
Node 0x00001f
assigned-addresses: >
81020010.00000000.0000fc00.00000000.00000100.83020014.00000000.
Post by Eric D. Mudama
df2ec000.00000000.00004000.8302001c.
00000000.df2f0000.00000000.00010000
reg: >
00020000.00000000.00000000.00000000.00000000.01020010.00000000.00000000.00000000.00000100.03020014.00000000.00000000.00000000.00004000.0302001c.
Post by Eric D. Mudama
00000000.00000000.00000000.00010000
compatible: 'pciex1000,58.1028.1f10.8' +
'pciex1000,58.1028.1f10' > + 'pciex1000,58.8' + 'pciex1000,58' +
'pciexclass,010000' + > 'pciexclass,0100' +
'pci1000,58.1028.1f10.8' + > 'pci1000,58.1028.1f10' +
'pci1028,1f10' + 'pci1000,58.8' + > 'pci1000,58' + 'pciclass,
010000' + 'pciclass,0100'
Post by Eric D. Mudama
model: 'SCSI bus controller'
power-consumption: 00000001.00000001
devsel-speed: 00000000
interrupts: 00000001
subsystem-vendor-id: 00001028
subsystem-id: 00001f10
unit-address: '0'
class-code: 00010000
revision-id: 00000008
vendor-id: 00001000
device-id: 00000058
pcie-capid-pointer: 00000068
pcie-capid-reg: 00000001
name: 'pci1028,1f10'
--eric
--
Eric D. Mudama
--
Med venlig hilsen / Best Regards
Henrik Johansen
Tlf. 75 53 35 00
ScanNet Group
A/S ScanNet
Henrik Johansen
2009-06-23 20:34:53 UTC
Permalink
Post by Erik Ableson
The problem I had was with the single raid 0 volumes (miswrote RAID 1
on the original message)
This is not a straight to disk connection and you'll have problems if
you ever need to move disks around or move them to another controller.
Would you mind explaining exactly what issues or problems you had? I
have moved disks between several controllers without problems. You must
remember, however, to create the RAID 0 LUN through LSI's MegaRAID CLI
tool and/or to clear any foreign config before the controller will
expose the disk(s) to the OS.

The only real problem that I can think of is that you cannot use the
autoreplace functionality of recent ZFS versions with these controllers.
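For reference, a hedged sketch of the MegaCli steps described above; the [32:4] enclosure:slot address and the -a0 adapter number are placeholders for your own hardware (list yours with MegaCli -PDList -aALL):

```shell
# Hypothetical example only: addresses below must be replaced with
# your own enclosure:slot and adapter numbers.
# Clear any foreign config a disk may carry from another controller:
MegaCli -CfgForeign -Clear -a0
# Expose the disk to the OS as a single-drive RAID 0 logical drive:
MegaCli -CfgLdAdd -r0 [32:4] -a0
```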
--
Med venlig hilsen / Best Regards

Henrik Johansen
***@scannet.dk
Tlf. 75 53 35 00

ScanNet Group
A/S ScanNet
Jorgen Lundman
2009-06-22 02:15:20 UTC
Permalink
I only have a 32-bit PCI bus on the Intel Atom 330 board, so I have no
choice but to be "slower", but I can confirm that the Supermicro
DAC-SATA-MV8 (SATA-I) card works just fine, and does show up in cfgadm.
(Hot-swapping is possible).

I have been told aoc-sat2-mv8 does as well (SATA-II) but I have not
personally tried it.

Lund
--
Jorgen Lundman | <***@lundman.net>
Unix Administrator | +81 (0)3 -5456-2687 ext 1017 (work)
Shibuya-ku, Tokyo | +81 (0)90-5578-8500 (cell)
Japan | +81 (0)3 -3375-1767 (home)
Erik Trimble
2009-06-22 03:02:50 UTC
Permalink
Post by Jorgen Lundman
I only have a 32bit PCI bus in the Intel Atom 330 board, so I have no
choice than to be "slower", but I can confirm that the Supermicro
dac-sata-mv8 (SATA-1) card works just fine, and does display in
cfgadm. (Hot-swapping is possible).
I have been told aoc-sat2-mv8 does as well (SATA-II) but I have not
personally tried it.
Lund
I have an AOC-SAT2-MV8 in an older Opteron-based system: a 2-socket
Opteron 252 box with 8GB of RAM and PCI-X slots. It's one of the newer
AOC cards with the latest Marvell chipset, and it works like a champ:
smooth, with no problems that I can see. Simple, out-of-the-box
installation that works with no tinkering at all (with OpenSolaris
2008.11 and 2009.06).

That said, there are a couple of things you should be aware of about the AOC:

(1) It uses normal SATA cables. This is really nice in terms of
availability (you can get any length you want easily at any computer
store), but it's a bit messier than the nice multi-lane cables.

(2) It's a PCI-X card and will run at 133MHz. I have a second gigabit
ethernet card on my motherboard, which limits the two PCI-X cards to
100MHz. The downside is that with 8 drives and 2 gigabit ethernet
interfaces, it's not hard to flood the PCI-X bus (which can pump 100MHz
x 64 bits = 6400 Mbit/s max, but only about 50% of that under real usage).

(3) As a PCI-X card, it's a two-thirds-length, low-profile card. It
will fit in any PCI-X slot you have. However, if you are trying to put
it in a 32-bit PCI slot, be aware that it extends about 2 inches (50mm)
beyond the back of the PCI slot, so make sure you have the proper
clearance for such a card. Also, it's a 3.3V card (it won't work in
5V-only slots). None of this should be a problem with any modern
motherboard/case setup, only with really old stuff.

(4) All the SATA connectors are on the end of the card, which means
you'll need _at least_ another 1 inch (25mm) clearance to plug the
cables in.
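The bus arithmetic in point (2) can be sketched as a quick back-of-envelope calculation (the 50% utilisation figure is an estimate from the post above, not a measurement):

```python
# Back-of-envelope PCI-X bandwidth: bus clock (MHz) x bus width (bits),
# optionally scaled by a rough real-world utilisation factor.
def pcix_bandwidth_mbps(clock_mhz, width_bits=64, efficiency=1.0):
    """Approximate PCI-X throughput in megabits per second."""
    return clock_mhz * width_bits * efficiency

print(pcix_bandwidth_mbps(100))                  # 6400.0 Mbit/s peak
print(pcix_bandwidth_mbps(100, efficiency=0.5))  # 3200.0 Mbit/s realistic
```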


As much as I like the card, these days I'd choose the LSI PCI-E model.
The PCI-E bus is just superior to PCI-X: you get much less bus
contention, which means it's easier to get full throughput from each card.


One more thing: I've found that the newest MLC-based SSDs with the
newer "Barefoot" SATA controller and 64MB or more of cache are more than
suitable for use as read cache, and they actually do OK as write cache
too, particularly for small-business server machines (those with 8-12
data drives total).

And these days there are nice little dual-2.5" drive adapters in a
floppy form factor:

http://www.addonics.com/products/mobile_rack/doubledrive.asp

Example new SSD for Readzilla/Logzilla :

http://www.ocztechnology.com/products/flash_drives/ocz_summit_series_sata_ii_2_5-ssd
--
Erik Trimble
Java System Support
Mailstop: usca22-123
Phone: x17195
Santa Clara, CA
Jorgen Lundman
2009-06-22 04:52:00 UTC
Permalink
I hesitate to post this question here, since the relation to ZFS is
tenuous at best (zfs to sata controller to LCD panel).

But maybe someone has already been down this path before me. Looking at
building a RAID, with osol and zfs, I naturally want a front-panel. I
was looking at something like;

http://www.mini-box.com/picoLCD-256x64-Sideshow-CDROM-Bay

It appears to come with open-source drivers based on lcd4linux, which I
can compile with marginal massaging. Has anyone run this successfully
with osol?

It appears to handle mrtg directly, so I should be able to graph a whole
load of ZFS data. Has someone already been down this road too?
--
Jorgen Lundman | <***@lundman.net>
Unix Administrator | +81 (0)3 -5456-2687 ext 1017 (work)
Shibuya-ku, Tokyo | +81 (0)90-5578-8500 (cell)
Japan | +81 (0)3 -3375-1767 (home)
Orvar Korvar
2009-06-24 19:58:34 UTC
Permalink
Hey sbreden! :o)

No, I haven't tried to tinker with my drives; they have been functioning all the time. I suspect (can't remember) that each SATA slot on the card has a number attached to it? Can anyone confirm this? If I am right, OpenSolaris will say something like "disc 6 is broken" and on the card there is a number 6? Then you can identify the disc?

I thought of exchanging my PCI card for a PCIe card variant instead, to reach higher speeds; PCI-X is legacy. The problem with PCIe cards is that soon SSD drives will be common. A ZFS raid with SSDs would need maybe PCIe x16 or so to reach max bandwidth, and the PCIe cards of today are all PCIe x4 or something; I need a PCIe x16 card to make it future-proof for SSD discs. Maybe the best bet would be to attach an SSD disc directly to a PCIe slot, to reach max transfer speed? Or wait for SATA 3? I don't know. I want to wait until SSD raids are tested out; then I will buy an appropriate card capable of SSD raids. Maybe SSD discs should never be used in conjunction with a card, and always connect directly to the SATA port? Until I know more on this, my PCI card will be fine. 150MB/sec is ok for my personal needs.

(My ZFS raid is connected to my desktop PC. I don't have a server that is on 24/7 using power. I want to save power. Save the earth! :o) All my 5 ZFS raid discs are connected to one Molex connector, and that Molex has a power switch. So I just turn on the ZFS raid, copy all the files I need to my system disc (which is 500GB), and then immediately reboot and turn off the ZFS raid. This way I only have one disc active, which I use as a cache. When my data are ready, I copy them to the ZFS raid and then shut down the power to the ZFS raid discs.)





However, I have a question: what speed will I get with this solution? Suppose the discs on the PCI card share 150MB/sec, and I then add 1 SSD disc to one SATA port and another SSD disc to a second SATA port. Then I have:
5 discs on PCI => 150MB/sec
1 disc on SATA => 300MB/sec (I assume SATA reaches 300MB/sec?)
1 disc on SATA => 300MB/sec.

I connect all 7 discs into one ZFS raid. What speed will I get? Will I get 150 + 300 + 300MB/sec? Or will the PCI slot strangle the SATA ports? Or will the fastest speed "win", so I only get 300MB/sec?
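A rough way to reason about it (a back-of-envelope model, not how ZFS actually schedules I/O; the 200MB/sec per-SSD figure below is purely hypothetical) is that each bus caps the discs behind it, and a striped pool's sequential ceiling is at most the sum of the per-bus ceilings:

```python
# Back-of-envelope upper bound for a striped pool spread across buses.
# Assumes pure sequential streaming; 200 MB/s per SSD is a made-up figure.
def bus_ceiling(bus_mb_s, n_disks, disk_mb_s):
    """A bus delivers at most its own bandwidth or the sum of its disks."""
    return min(bus_mb_s, n_disks * disk_mb_s)

pci = bus_ceiling(150, 5, 200)       # 5 discs sharing the 150 MB/s PCI bus
sata = 2 * bus_ceiling(300, 1, 200)  # 2 discs on their own SATA ports
print(pci + sata)  # 550: an optimistic ceiling only; since ZFS balances
                   # writes across the stripe, the PCI-attached discs will
                   # in practice drag the whole pool toward their share
```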
--
This message posted from opensolaris.org
Bob Friesenhahn
2009-06-24 20:38:39 UTC
Permalink
Post by Orvar Korvar
I thought of exchanging my PCI card with a PCIe card variant instead
to reach higher speeds. PCI-X is legacy. The problem with PCIe cards
is that soon SSD drives will be common. A ZFS raid with SSD would
need maybe PCIe x 16 or so, to reach max band width. The PCIe cards
are all PCIe x 4 or something of today. I need a PCIe x 16 card to
make it future proof for the SSD discs. Maybe the best bet would be
to attach a SSD disc directly to a PCIe slot, to reach max transfer
speed? Or wait for SATA 3? I dont know. I want to wait until SSD
I don't think this is valid thinking because it assumes that write
rates for SSDs are higher than for traditional hard drives. This
assumption is not often correct. Maybe someday.

SSDs offer much lower write latencies (no head seek!) but their bulk
sequential data transfer properties are not yet better than hard
drives.

The main purpose for using SSDs with ZFS is to reduce latencies for
synchronous writes required by network file service and databases.

Bob
--
Bob Friesenhahn
***@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/
Eric D. Mudama
2009-06-24 22:28:15 UTC
Permalink
Post by Orvar Korvar
I thought of exchanging my PCI card with a PCIe card variant instead
to reach higher speeds. PCI-X is legacy. The problem with PCIe cards
is that soon SSD drives will be common. A ZFS raid with SSD would need
maybe PCIe x 16 or so, to reach max band width. The PCIe cards are all
PCIe x 4 or something of today. I need a PCIe x 16 card to make it
future proof for the SSD discs. Maybe the best bet would be to attach a
SSD disc directly to a PCIe slot, to reach max transfer speed? Or wait
for SATA 3? I dont know. I want to wait until SSD
I don't think this is valid thinking because it assumes that write rates
for SSDs are higher than for traditional hard drives. This assumption is
not often correct. Maybe someday.
SSDs offer much lower write latencies (no head seek!) but their bulk
sequential data transfer properties are not yet better than hard drives.
The main purpose for using SSDs with ZFS is to reduce latencies for
synchronous writes required by network file service and databases.
In the "available 5 months ago" category, the Intel X25-E will write
sequentially at ~170MB/s according to the datasheets. That is faster
than most, if not all rotating media today.

--eric
--
Eric D. Mudama
***@mail.bounceswoosh.org
Bob Friesenhahn
2009-06-24 23:43:00 UTC
Permalink
Post by Eric D. Mudama
Post by Bob Friesenhahn
The main purpose for using SSDs with ZFS is to reduce latencies for
synchronous writes required by network file service and databases.
In the "available 5 months ago" category, the Intel X25-E will write
sequentially at ~170MB/s according to the datasheets. That is faster
than most, if not all rotating media today.
Sounds good. Is that after the whole device has been re-written a few
times, or just when you first use it? How many of these devices do you
own and use?

Seagate Cheetah drives can now support a sustained data rate of
204MB/second. That is with 600GB capacity rather than 64GB, and at a
similar price point (i.e. 10X lower cost per GB). Or you can just
RAID-0 a few cheaper rotating-rust drives and achieve a huge
sequential data rate.

I see that the Intel X25-E claims a sequential read performance of
250 MB/s.

Bob
--
Bob Friesenhahn
***@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/
Eric D. Mudama
2009-06-25 16:11:42 UTC
Permalink
Post by Bob Friesenhahn
Post by Eric D. Mudama
Post by Bob Friesenhahn
The main purpose for using SSDs with ZFS is to reduce latencies for
synchronous writes required by network file service and databases.
In the "available 5 months ago" category, the Intel X25-E will write
sequentially at ~170MB/s according to the datasheets. That is faster
than most, if not all rotating media today.
Sounds good. Is that is after the whole device has been re-written a
few times or just when you first use it?
Based on the various review sites, some tests experience a temporary
performance decrease when performing sequential IO over the top of
previously randomly written data, which resolves in some short time
period.

I am not convinced that simply writing the devices makes them slower.

Actual performance will be workload specific, YMMV.
Post by Bob Friesenhahn
How many of these devices do you own and use?
I own two of them personally, and work with many every day.
Post by Bob Friesenhahn
Seagate Cheetah drives can now support a sustained data rate of
204MB/second. That is with 600GB capacity rather than 64GB and at a
similar price point (i.e. 10X less cost per GB). Or you can just
RAID-0 a few cheaper rotating rust drives and achieve a huge
sequential data rate.
True. In $ per sequential GB/s, rotating rust still wins by far.
However, your comment that all flash is slower than rotating media at
sequential writes was mistaken. And even at 10x the price, if you're
working with a dataset that needs random I/O, the IOPS per $ from flash
can be significantly greater than from any amount of rust, typically
with much lower power consumption to boot.

Obviously the primary benefits of SSDs aren't in sequential
reads/writes, but they're not necessarily complete dogs there either.

--eric
--
Eric D. Mudama
***@mail.bounceswoosh.org
Nicholas Lee
2009-06-25 22:14:36 UTC
Permalink
On Fri, Jun 26, 2009 at 4:11 AM, Eric D. Mudama
Post by Eric D. Mudama
True. In $ per sequential GB/s, rotating rust still wins by far.
However, your comment that all flash is slower than rotating media at
sequential writes was mistaken. And even at 10x the price, if you're
working with a dataset that needs random I/O, the IOPS per $ from flash
can be significantly greater than from any amount of rust, typically
with much lower power consumption to boot.
Obviously the primary benefits of SSDs aren't in sequential
reads/writes, but they're not necessarily complete dogs there either.
It's all about IOPS. An HDD can do about 300 IOPS; an SSD can get up to
10k+ IOPS. On sequential writes, low IOPS is obviously not a problem:
300 x 128kB is about 40MB/s. But for small-packet random sync NFS
traffic, 300 x 4kB is hardly 1MB/s.
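The arithmetic above can be checked quickly (taking 4kB as a representative small random I/O size):

```python
# Throughput is simply IOPS x I/O size (divide by 1024 to go KB -> MB).
def mb_per_s(iops, io_size_kb):
    return iops * io_size_kb / 1024

print(mb_per_s(300, 128))    # 37.5  -> a disk doing large sequential I/O
print(mb_per_s(300, 4))      # ~1.17 -> the same disk on small random I/O
print(mb_per_s(10000, 4))    # ~39   -> an SSD at 10k IOPS on the same load
```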

Nicholas

Ian Collins
2009-06-21 21:56:17 UTC
Permalink
Post by dick hoogendijk
I fol****this thread with much
interest.
what are these "*****" for ?
why is "followed" turned into "fol*****" on this
board?
It isn't a board, it's a mailing list. All the forum does is bugger up the formatting and threading of emails!
--
Ian
Simon Breden
2009-06-22 20:36:53 UTC
Permalink
Thanks guys, keep your experiences coming.
--
This message posted from opensolaris.org
Simon Breden
2009-06-22 21:02:56 UTC
Permalink
Also, is anybody using the AOC-USAS-L8i?

If so, what's your experience with it, especially for identifying and replacing failed drives?
--
This message posted from opensolaris.org