Discussion:
ZFS backup configuration
Wolfraider
2010-03-24 19:20:46 UTC
Permalink
Sorry if this has been discussed before. I tried searching but I couldn't find any info about it. We would like to export our ZFS configurations in case we need to import the pool onto another box. We do not want to back up the actual data in the zfs pool; that is already handled through another program.
--
This message posted from opensolaris.org
Eric D. Mudama
2010-03-24 19:31:19 UTC
Permalink
Post by Wolfraider
Sorry if this has been discussed before. I tried searching but I
couldn't find any info about it. We would like to export our ZFS
configurations in case we need to import the pool onto another
box. We do not want to back up the actual data in the zfs pool;
that is already handled through another program.
I'm pretty sure the configuration is embedded in the pool itself.
Just import it on the new machine. You may need --force/-f if the pool
wasn't exported properly on the old system.
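
If you want to convince yourself the config really lives on the disks, zdb
can dump the vdev labels where it is stored, and the move is then just an
import on the new box (the device path and pool name below are only examples):

  # dump the on-disk label, which contains the pool/vdev configuration
  zdb -l /dev/rdsk/c1t0d0s0

  # on the new machine: list pools visible on the attached disks, then import
  zpool import
  zpool import -f tank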

--eric
--
Eric D. Mudama
***@mail.bounceswoosh.org
Khyron
2010-03-24 23:00:57 UTC
Permalink
Yes, I think Eric is correct.

Funny, this is an adjunct to the thread I started entitled "Thoughts on ZFS Pool Backup Strategies". I was going to include this point in that thread but thought better of it.

It would be nice if there were an easy way to extract a pool configuration, with all of the dataset properties, ACLs, etc., so that you could easily reload it into a new pool. I could see this being useful in a disaster recovery sense, and I'm sure people smarter than I can think of other uses.

From my reading of the documentation and man pages, I don't see that any such command currently exists. Something that would allow you to dump the config into a file and read it back from a file using typical Unix semantics like STDIN/STDOUT. I was thinking something like:

zpool dump <pool> [-o <filename>]

zpool load <pool> [-f <filename>]

Without "-o" or "-f", the output would go to STDOUT or the input would come
from STDIN, so you could use this in pipelines. If you have a particularly
long
lived and stable pool, or one that has been through many upgrades, this
might
be a nice way to save a configuration that you could restore later (if
necessary)
with a single command.
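
In the meantime, something loosely equivalent can be cobbled together from
existing commands. A rough sketch (pool name and file names are made up;
this only captures properties, not ACLs, and property values containing
spaces would need better quoting):

  # "dump": record pool-level and dataset-level properties (example pool "tank")
  zpool get all tank > /var/tmp/tank.pool.props
  zfs get -rHp all tank > /var/tmp/tank.dataset.props

  # "load": still needs scripting, e.g. turn every locally-set dataset
  # property back into a "zfs set" command to replay against a new pool
  awk '$4 == "local" {print "zfs set " $2 "=" $3 " " $1}' /var/tmp/tank.dataset.props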

Thoughts?
Post by Wolfraider
Sorry if this has been discussed before. I tried searching but I
couldn't find any info about it. We would like to export our ZFS
configurations in case we need to import the pool onto another
box. We do not want to backup the actual data in the zfs pool, that
is already handled through another program.
I'm pretty sure the configuration is embedded in the pool itself.
Just import it on the new machine. You may need --force/-f if the pool
wasn't exported properly on the old system.
--eric
--
Eric D. Mudama
_______________________________________________
zfs-discuss mailing list
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
--
"You can choose your friends, you can choose the deals." - Equity Private

"If Linux is faster, it's a Solaris bug." - Phil Harman

Blog - http://whatderass.blogspot.com/
Twitter - @khyron4eva
Freddie Cash
2010-03-25 04:04:48 UTC
Permalink
Post by Khyron
Yes, I think Eric is correct.
Funny, this is an adjunct to the thread I started entitled "Thoughts on ZFS Pool Backup Strategies". I was going to include this point in that thread but thought better of it.
It would be nice if there were an easy way to extract a pool configuration, with all of the dataset properties, ACLs, etc., so that you could easily reload it into a new pool. I could see this being useful in a disaster recovery sense, and I'm sure people smarter than I can think of other uses.
From my reading of the documentation and man pages, I don't see that any such command currently exists. Something that would allow you to dump the config into a file and read it back from a file using typical Unix semantics like STDIN/STDOUT. I was thinking something like:
zpool dump <pool> [-o <filename>]
zpool load <pool> [-f <filename>]
Without "-o" or "-f", the output would go to STDOUT or the input would come from STDIN, so you could use this in pipelines. If you have a particularly long-lived and stable pool, or one that has been through many upgrades, this might be a nice way to save a configuration that you could restore later (if necessary) with a single command.
I don't use ACLs, but you can get the pool configuration and dataset
properties via "zpool get all poolname" and "zfs get all poolname". With
some fancy scripting, you should be able to come up with something that
would take that output and recreate the pool with the same settings.
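
For the layout of the pool itself, "zpool history" is also handy: it keeps
the original "zpool create" (and later add/attach/replace) command lines,
which you could replay when rebuilding. For example (pool name made up):

  # show every zpool command ever run against the pool, oldest first
  zpool history tank

Combined with the property dump, that covers most of what a recreate script
would need, ACLs aside.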
--
Freddie Cash
***@gmail.com
Edward Ned Harvey
2010-03-25 12:30:15 UTC
Permalink
Post by Wolfraider
Sorry if this has been discussed before. I tried searching but I
couldn't find any info about it. We would like to export our ZFS
configurations in case we need to import the pool onto another box. We
do not want to backup the actual data in the zfs pool, that is already
handled through another program.
What's the question? It seems like the answer is probably "zpool export"
and "man zpool"
Wolfraider
2010-03-25 13:11:57 UTC
Permalink
It seems like zpool export will quiesce the drives and mark the pool as exported. This would be good if we wanted to move the pool at that time, but we are thinking of a disaster recovery scenario. It would be nice to export just the config, so that if our controller dies we can use zpool import on another box to get back up and running.
--
This message posted from opensolaris.org
Richard Elling
2010-03-25 14:06:10 UTC
Permalink
Post by Wolfraider
It seems like zpool export will quiesce the drives and mark the pool as exported. This would be good if we wanted to move the pool at that time, but we are thinking of a disaster recovery scenario. It would be nice to export just the config, so that if our controller dies we can use zpool import on another box to get back up and running.
IMHO, this places artificial constraints on your DR site. You care about the
data; you shouldn't have to worry about the exact configuration of the pool.
In other words, if I have 2TB of data in a mirrored pool built from 500GB
disks for performance, I can be happy with the DR site using large,
2TB disks.
-- richard

ZFS storage and performance consulting at http://www.RichardElling.com
ZFS training on deduplication, NexentaStor, and NAS performance
Las Vegas, April 29-30, 2010 http://nexenta-vegas.eventbrite.com
Wolfraider
2010-03-25 14:20:14 UTC
Permalink
This assumes that you have the storage to replicate, or at least restore, all data to a DR site. While this is another way to do it, it is not really cost-effective in our situation.

What I am thinking is basically having 2 servers. One has the zpool attached and is sharing out our data. The other is a cold spare. The zpool is stored on 3 JBOD chassis attached with Fibre Channel. I would like to export the config at specific intervals and have it ready to import on the cold spare if the primary ever dies. The eventual goal would be to configure an active/passive cluster for the zpool.
--
This message posted from opensolaris.org
David Magda
2010-03-25 14:33:09 UTC
Permalink
Post by Wolfraider
What I am thinking is basically having 2 servers. One has the zpool
attached and is sharing out our data. The other is a cold spare. The zpool
is stored on 3 JBOD chassis attached with Fibre Channel. I would like to
export the config at specific intervals and have it ready to import on the
cold spare if the primary ever dies. The eventual goal would be to
configure an active/passive cluster for the zpool.
You may want to look into Solaris Cluster / Open HA:

http://hub.opensolaris.org/bin/view/Community+Group+ha-clusters/
http://www.opensolaris.com/learn/features/availability/
http://www.sun.com/software/solaris/cluster/

Free to use; support costs extra.
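
If you go that route, the ZFS piece ends up as an HAStoragePlus resource
that imports/exports the pool as the resource group fails over between
nodes. Very roughly (resource and pool names are made up; check the
ha-clusters docs for the exact flags):

  clresourcetype register SUNW.HAStoragePlus
  clresourcegroup create tank-rg
  clresource create -g tank-rg -t SUNW.HAStoragePlus -p Zpools=tank tank-hasp
  # bring the group online (-e enables resources, -M manages the group)
  clresourcegroup online -eM tank-rg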
Richard Elling
2010-03-25 15:15:14 UTC
Permalink
Post by Wolfraider
This assumes that you have the storage to replicate or at least restore all data to a DR site. While this is another way to do it, it is not really cost effective in our situation.
If the primary and DR site aren't compatible, then it won't be much of a
DR solution... :-P
Post by Wolfraider
What I am thinking is basically having 2 servers. One has the zpool attached and sharing out our data. The other is a cold spare. The zpool is stored on 3 JBOD chassis attached with Fibrechannel. I would like to export the config at specific intervals and have it ready to import on the cold spare if the hot spare ever dies. The eventual goal would be to configure an active/passive cluster for the zpool.
You don't need to export the pool to make this work. Just import it
on the cold spare when the primary system dies. KISS. If you'd like
that to be automatic, then run the HA cluster software.
-- richard

ZFS storage and performance consulting at http://www.RichardElling.com
ZFS training on deduplication, NexentaStor, and NAS performance
Las Vegas, April 29-30, 2010 http://nexenta-vegas.eventbrite.com
Wolfraider
2010-03-25 15:28:37 UTC
Permalink
Post by Richard Elling
Post by Wolfraider
This assumes that you have the storage to replicate, or at least restore,
all data to a DR site. While this is another way to do it, it is not
really cost-effective in our situation.
If the primary and DR site aren't compatible, then it won't be much of a
DR solution... :-P
Which assumes that we have a DR site. lol We are working towards a full DR site but the funding has been a little tight. That project should be started in the next year or 2.
Post by Richard Elling
Post by Wolfraider
What I am thinking is basically having 2 servers. One has the zpool
attached and is sharing out our data. The other is a cold spare. The
zpool is stored on 3 JBOD chassis attached with Fibre Channel. I would
like to export the config at specific intervals and have it ready to
import on the cold spare if the primary ever dies. The eventual goal
would be to configure an active/passive cluster for the zpool.
You don't need to export the pool to make this work. Just import it
on the cold spare when the primary system dies. KISS. If you'd like
that to be automatic, then run the HA cluster software.
Which, when I asked the question, I wasn't sure how it all worked. I didn't know if the import process needed a config file or not. I am learning a lot, very quickly. We will be looking into the HA cluster in the future. The spare is a cold spare for a lot of different roles, so we can't dedicate it to just the Solaris box (yet :) ).
--
This message posted from opensolaris.org
David Magda
2010-03-25 16:04:08 UTC
Permalink
Post by Wolfraider
Which, when I asked the question, I wasn't sure how it all worked. I
didn't know if the import process needed a config file or not. I am
learning a lot, very quickly. We will be looking into the HA cluster in
the future. The spare is a cold spare for a lot of different roles, so we
can't dedicate it to just the Solaris box (yet :) ).
What is the service being offered from the pool(s) in question? It may be
that you could use zones to spread services around different machines, but
still keep them isolated (with their own IPs and hostnames).
Wolfraider
2010-03-25 18:16:02 UTC
Permalink
We are sharing the LUNs out with COMSTAR from one big pool. In essence, we created our own low-cost SAN. We currently have our Windows clients connected over Fibre Channel to the COMSTAR target.
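
For reference, each LUN is basically a zvol pushed through the COMSTAR
stack, something like this (the pool, volume name, and size below are made up):

  # carve a zvol out of the pool and register it as a SCSI logical unit
  zfs create -V 500g tank/lun0
  sbdadm create-lu /dev/zvol/rdsk/tank/lun0

  # expose the LU to initiators (the GUID comes from the create-lu output)
  stmfadm add-view <GUID>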
--
This message posted from opensolaris.org
Richard Elling
2010-03-25 18:39:43 UTC
Permalink
Post by Wolfraider
Which, when I asked the question, I wasn't sure how it all worked. I didn't know if the import process needed a config file or not. I am learning a lot, very quickly. We will be looking into the HA cluster in the future. The spare is a cold spare for a lot of different roles, so we can't dedicate it to just the Solaris box (yet :) ).
No worries. It can work just fine. You can forgive us for reading more into the
question than perhaps was warranted... bad habits die hard :-)
-- richard

ZFS storage and performance consulting at http://www.RichardElling.com
ZFS training on deduplication, NexentaStor, and NAS performance
Las Vegas, April 29-30, 2010 http://nexenta-vegas.eventbrite.com
Edward Ned Harvey
2010-03-26 11:56:06 UTC
Permalink
Post by Wolfraider
It seems like zpool export will quiesce the drives and mark the pool
as exported. This would be good if we wanted to move the pool at that
time, but we are thinking of a disaster recovery scenario. It would be
nice to export just the config, so that if our controller dies we can
use zpool import on another box to get back up and running.
Correct, "zpool export" will offline your disks so you can remove them and
bring them somewhere else.

I don't think you need to do anything in preparation for possible server
failure. Am I wrong about this? I believe once your first server is down,
you just move your disks to another system, and then "zpool import." I
don't believe the export is necessary in order to do an import. You would
only export if you wanted to disconnect while the system is still powered
on.

You just "export" to tell the running OS "I'm about to remove those disks,
so don't freak out." But if there is no running OS, you don't worry about
it.

Again, I'm only 98% sure of the above. So it might be wise to test on a
sandbox system.
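
A cheap way to test the import mechanics without a second server is a
throwaway pool on files (paths are arbitrary; this doesn't reproduce the
unclean-shutdown case exactly, but it exercises the import):

  mkfile 128m /var/tmp/d0 /var/tmp/d1
  zpool create sandbox mirror /var/tmp/d0 /var/tmp/d1
  zpool export sandbox
  # -d tells import where to look for vdevs that aren't real disks
  zpool import -d /var/tmp sandbox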

One thing that is worth mentioning: if you have an HBA such as 3ware, or Perc,
or whatever, it might be impossible to move the disks to a different HBA
(e.g. swapping Perc for 3ware or vice versa). If your original system
is using a Perc 6/i, only move the disks to another system with a Perc 6/i
(and, if possible, ensure the controller is using the same rev of firmware.)

If you're using a simple, unintelligent non-RAID SAS or SATA controller, you
should be good.
