Post by Constantin Gonzalez Schmitz
Hi,
I do agree that zpool remove is a _very_ desirable feature, no doubt about
that.
Here are a couple of thoughts and workarounds, in random order, that might
help:
- My home machine has 4 disks and a big zpool across them. Fine. But what
if a controller fails or worse, a CPU? Right, I need a second machine, if
I'm really honest with myself and serious with my data. Don't laugh, ZFS
on a Solaris server is becoming my mission-critical home storage solution
that is supposed to last beyond CDs and DVDs and other vulnerable media.
So, if I was an enterprise, I'd be willing to keep enough empty LUNs
available to facilitate at least the migration of one or more filesystems
if not complete pools. With a little bit of scripting, this can be done
quite easily and efficiently through zfs send/receive and some LUN
juggling.
If I were an enterprise's server admin and the storage guys didn't have
enough space for migrations, I'd be worried.
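For what it's worth, the send/receive-and-LUN-juggling shuffle described
above might look roughly like this. A sketch only: the pool and filesystem
names (tank, spare) are made up, and the DRYRUN guard just prints each
command so the plan can be eyeballed before anything is run for real.

```shell
#!/bin/sh
# Sketch: migrate one filesystem onto a pool built from spare LUNs.
# "tank" (source pool) and "spare" (pool on the empty LUNs) are assumed
# names. DRYRUN=1 (the default) only prints the commands.
DRYRUN=${DRYRUN:-1}
run() { if [ "$DRYRUN" = 1 ]; then echo "+ $*"; else "$@"; fi; }

# 1. Take a consistent point-in-time snapshot of the filesystem.
run zfs snapshot tank/home@mig1
# 2. Stream the bulk of the data while the filesystem stays live.
run sh -c 'zfs send tank/home@mig1 | zfs receive spare/home'
# 3. Quiesce writers, snapshot again, and send only the delta.
run zfs snapshot tank/home@mig2
run sh -c 'zfs send -i tank/home@mig1 tank/home@mig2 | zfs receive -F spare/home'
# 4. Retire the original copy once the new one is verified.
run zfs destroy -r tank/home
```

The incremental pass in step 3 keeps the outage window down to however long
the delta takes to send, but note it still needs those spare LUNs to exist.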
I think you may find in practice that many medium to large enterprise IT
departments are in this exact situation -- we do not have LUNs sitting
idle just waiting for data migrations of our largest data sets. We have
been sold (and rightly so, because it works, is cost-effective, and incurs
no downtime) on being able to move LUNs around at will without duplicating
(to tape or disk) and dumping. Are you really expecting the storage guys
to have 40 TB of disk just sitting there collecting dust for the day you
want to pull 10 disks out of a 44 TB system? This type of thinking may
very well be why Sun has had a hard time in the last few years (although
zfs, and recent products, show that the tide is turning).
Post by Constantin Gonzalez Schmitz
- We need to avoid customers thinking "Veritas can shrink, ZFS can't". That
is wrong. ZFS _filesystems_ grow and shrink all the time; it's just the
pools below them that can only grow. And Veritas does not even have pools.
Sorry, that is silly. Can we compare them if we call both "volumes or
filesystems (or any virtualization of either) which are reserved for data
and in which we want to remove and add disks online"? vxfs can grow and
shrink, and the volumes can grow and shrink. Pools may blur the
volume/filesystem line, but they still impose the same constraints on the
administrators trying to admin these boxes and the disks attached to them.
Post by Constantin Gonzalez Schmitz
People have started to follow a One-pool-to-store-them-all which I think
- One pool per zone might be a good idea if you want to migrate zones
across systems which then becomes easy through zpool export/import in
a SAN.
- One pool per service level (mirror, RAID-Z2, fast, slow, cheap, expensive)
might be another idea. Keep some cheap mirrored storage handy for your pool
migration needs and you could wiggle your life around zpool remove.
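For reference, the per-zone-pool migration suggested above would come down
to a zpool export/import dance on the SAN. Again a sketch: the zone name
(web1), pool name (webpool), and config file path are invented, and the
DRYRUN guard just prints the commands.

```shell
#!/bin/sh
# Sketch: move a zone between hosts when it lives in its own pool.
# "web1" (zone) and "webpool" (its dedicated pool) are assumed names.
DRYRUN=${DRYRUN:-1}
run() { if [ "$DRYRUN" = 1 ]; then echo "+ $*"; else "$@"; fi; }

# On the old host: stop the zone and release its pool to the SAN.
run zoneadm -z web1 halt
run zonecfg -z web1 export -f /tmp/web1.cfg   # carry the zone config along
run zpool export webpool
# ...remap the LUNs to the new host on the SAN fabric, then on that host:
run zonecfg -z web1 -f /tmp/web1.cfg          # recreate the zone config
run zpool import webpool                      # all datasets come back intact
run zoneadm -z web1 boot
```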
You went from one pool to share data (the major advantage of the pool
concept) to a bunch of constrained pools. Also, how does this resolve the
issue of online LUN migration?
Post by Constantin Gonzalez Schmitz
Switching between Mirror, RAID-Z, RAID-Z2 then becomes just a zfs
send/receive pair.
Shrinking a pool requires some more zfs send/receiving and maybe some
scripting, but these are IMHO less painful than living without ZFS'
data integrity and the other joys of ZFS.
Ohh, never mind -- dump to tape (err, disk) and restore. You do realize
that the industry has been selling products that made this behavior
obsolete close to 10 years ago?
Post by Constantin Gonzalez Schmitz
Sorry if I'm stating the obvious or stuff that has been discussed before,
but the more I think about zpool remove, the more I think it's a question
of willingness to plan/work/script/provision vs. a real show stopper.
No, it is a specific workflow that requires the disks to stay online while
allowing for economically sound use of resources -- this is not about
laziness (which is how I am reading your view) or an unwillingness to
script up solutions.
Post by Constantin Gonzalez Schmitz
Best regards,
Constantin
P.S.: Now with my big mouth I hope I'll survive a confcall next week with
a customer asking for exactly zpool remove :).
I hope so; you may want to rethink the "script it and go back in sysadmin
time 10 years" approach. ZFS buys a lot and is a great filesystem, but
there are places such as this that are still weak and need fixing before
many environments can replace vxvm/vxfs or other solutions. Sure, you
will find people who view this new pooled filesystem with old eyes, but
there are admins on this list who actually understand what they are
missing and the options for working around these issues. We don't look at
this as a feature tickmark, but as a feature that we know is missing and
that we really need before we can consider moving some of our systems from
vxvm/fs to zfs.
-Wade Stuart
Post by Constantin Gonzalez Schmitz
--
Constantin Gonzalez Sun Microsystems GmbH, Germany
Platform Technology Group, Client Solutions
http://www.sun.de/
http://blogs.sun.com/constantin/
Post by Constantin Gonzalez Schmitz
_______________________________________________
zfs-discuss mailing list
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss