Discussion:
Is unsetting the bootfs property possible? (imported a FreeBSD pool)
Reshekel Shedwitz
2010-05-25 08:58:04 UTC
Permalink
Greetings -

I am migrating a pool from FreeBSD 8.0 to OpenSolaris (Nexenta 3.0 RC1). I am in what seems to be a weird situation regarding this pool. Maybe someone can help.

I used to boot off of this pool in FreeBSD, so the bootfs property got set:

***@nexenta:~# zpool get bootfs tank
NAME  PROPERTY  VALUE  SOURCE
tank  bootfs    tank   local

The presence of this property seems to be causing me all sorts of headaches. I cannot replace a disk or add an L2ARC, because the presence of this property is how the ZFS code (libzfs_pool.c: zpool_vdev_attach and zpool_label_disk) decides whether a pool is a root pool.

***@nexenta:~# zpool add tank cache c1d0
cannot label 'c1d0': EFI labeled devices are not supported on root pools.

To replace disks, I was able to hack up libzfs_pool.c and build a custom version of the zpool command. That works, but it is a poor solution going forward because I have to remember to use my customized version every time I replace a bad disk.

Ultimately, I would like to set the bootfs property back to its default, but this seems to be beyond my ability. There are some checks in libzfs_pool.c that I can bypass in order to set the value back to its default of "-", but ultimately I am stopped by code in zfs_ioctl.c (which I believe is kernel code) that checks whether the bootfs value supplied is an existing dataset.
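Concretely, these are the two invocations that I cannot get to succeed with the stock binary (the error from the second one is shown later in this thread):

# zpool set bootfs=- tank
# zpool set bootfs= tank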

I'd compile my own kernel but hey, this is only my first day using OpenSolaris - it was a big enough feat just learning how to compile stuff in the ON source tree :D

What should I do here? Is there some obvious solution I'm missing? I'd like to get my pool back to a state where I can use the *stock* zpool command to maintain it. I don't boot off of this pool anymore, so if I could somehow set the bootfs property back to its default, I'd be all set.

BTW, for reference, here is the output of zpool status (after I hacked up zpool to let me add an L2ARC):

  pool: tank
 state: ONLINE
status: The pool is formatted using an older on-disk format. The pool can
        still be used, but some features are unavailable.
action: Upgrade the pool using 'zpool upgrade'. Once this is done, the
        pool will no longer be accessible on older software versions.
  scan: resilvered 351G in 2h44m with 0 errors on Tue May 25 23:33:38 2010
config:

        NAME          STATE     READ WRITE CKSUM
        tank          ONLINE       0     0     0
          raidz2-0    ONLINE       0     0     0
            c2t5d0p0  ONLINE       0     0     0
            c2t4d0p0  ONLINE       0     0     0
            c2t3d0p0  ONLINE       0     0     0
            c2t2d0p0  ONLINE       0     0     0
            c2t1d0p0  ONLINE       0     0     0
        cache
          c1d0        ONLINE       0     0     0

errors: No known data errors


Thanks,
Darren
Cindy Swearingen
2010-05-25 15:04:46 UTC
Permalink
Hi Reshekel,

You might review these resources for information on using ZFS without
having to hack code:

http://hub.opensolaris.org/bin/view/Community+Group+zfs/docs

ZFS Administration Guide

http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide

I will add a section on migrating from FreeBSD because this problem
comes up often enough. You might search the list archive for this
problem to see how others have resolved the partition issues.

Moving ZFS storage pools from a FreeBSD system to a Solaris system is
difficult because FreeBSD appears to use the disk's p0 (fdisk partition)
device, while in Solaris releases, ZFS storage pools are created either
with whole disks, using the d0 identifier, or, for root pools, with a
disk slice, using the s0 identifier. This is an existing boot
limitation.

For example, see the difference between the two pools:

# zpool status
  pool: rpool
 state: ONLINE
 scrub: none requested
config:

        NAME          STATE     READ WRITE CKSUM
        rpool         ONLINE       0     0     0
          mirror-0    ONLINE       0     0     0
            c1t0d0s0  ONLINE       0     0     0
            c1t1d0s0  ONLINE       0     0     0

  pool: dozer
 state: ONLINE
 scrub: none requested
config:

        NAME      STATE     READ WRITE CKSUM
        dozer     ONLINE       0     0     0
          c2t5d0  ONLINE       0     0     0
          c2t6d0  ONLINE       0     0     0

errors: No known data errors
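In other words, the same physical disk can appear under three
identifiers (c2t5d0 here is just an example name):

        c2t5d0p0   fdisk partition 0, how the FreeBSD-created pool shows up
        c2t5d0     whole disk; ZFS writes an EFI label on it
        c2t5d0s0   slice 0 of an SMI (VTOC) label, required for root pools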


If you want to boot from a ZFS storage pool, then you must create the
pool with slices. That is why you see the message about EFI labels:
pools that are created with whole disks use an EFI label, and Solaris
cannot boot from an EFI-labeled disk.
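If a disk needs to go back from an EFI label to an SMI (VTOC) label, one
way is format's expert mode (a sketch only -- the menu choices vary by
release, and relabeling destroys any data on the disk):

# format -e
(select the disk, type "label", choose "0. SMI Label", then use the
"partition" menu to set up the s0 slice before giving it to zpool)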

You can add a cache device to a pool that is used for booting, but you
must first create a disk slice and then add the cache device, like this:

# zpool add rpool cache c1t2d0s0
# zpool status
  pool: rpool
 state: ONLINE
 scrub: none requested
config:

        NAME          STATE     READ WRITE CKSUM
        rpool         ONLINE       0     0     0
          mirror-0    ONLINE       0     0     0
            c1t0d0s0  ONLINE       0     0     0
            c1t1d0s0  ONLINE       0     0     0
        cache
          c1t2d0s0    ONLINE       0     0     0


I suggest creating two pools: one small pool for booting and one larger
pool for data storage.
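For example (a sketch; the device names are placeholders, and the root
pool must be built on SMI-labeled slices, which the installer normally
sets up for you):

# zpool create rpool mirror c1t0d0s0 c1t1d0s0
# zpool create datapool raidz2 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0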

Thanks,

Cindy
Reshekel Shedwitz
2010-05-25 15:42:27 UTC
Permalink
Cindy,

Thanks for your reply. The important details may have been buried in my post, so I'll restate them more clearly:

(1) This was my boot pool in FreeBSD, but I do not think the partitioning differences are really the issue. I can import the pool to nexenta/opensolaris just fine.

Furthermore, this is *no longer* being used as a root pool in nexenta. I purchased an SSD for the purpose of booting nexenta. This pool is used purely for data storage - no booting.

(2) I had to hack the code because zpool is forbidding me from adding or replacing devices - please see my logs in the previous post.

zpool thinks this pool is a boot pool because the bootfs property is set, and it will not let me unset that property. And because it thinks the pool is a boot pool, it forbids me from creating a configuration that isn't compatible with booting.

In this situation, I am unable to add or replace devices without using my hacked version of zpool.

I was able to hack the code to allow zpool to replace and add devices, but I was not able to figure out how to set the bootfs property back to the default value.

Does this help explain my situation better? I think this is a bug, or maybe I'm missing something totally obvious.

Thanks!
Cindy Swearingen
2010-05-25 19:48:55 UTC
Permalink
Hi--

I apologize for misunderstanding your original issue.

Regardless of the original issues, and the fact that current Solaris
releases do not let you set the bootfs property on a pool that has a
disk with an EFI label, the secondary bug here is not being able to
remove the bootfs property from such a pool. Since that would help with
migrating pools, we should allow you to remove the bootfs property.

I will file this bug on your behalf.

In the meantime, I don't see how you can resolve the problem on this
pool.

Thanks,

Cindy
Reshekel Shedwitz
2010-05-25 22:59:02 UTC
Permalink
Cindy,

Thanks. Same goes to everyone else on this thread.

I actually solved the issue - I booted back into FreeBSD's "Fixit" mode and was still able to import the pool (I wouldn't have been able to if I had upgraded the pool version!). FreeBSD's zpool command allowed me to unset the bootfs property.

I guess that should have been more obvious to me. At least now I'm in good shape as far as this pool goes - zpool won't complain when I try to replace disks or add cache.
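For anyone who hits this later, the steps were roughly as follows (a
sketch from memory; the -f may or may not be needed for the import):

# zpool import -f tank
# zpool set bootfs= tank
# zpool export tank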

Might be worth documenting this somewhere as a "gotcha" when migrating from FreeBSD to OpenSolaris.

Thanks!
Cindy Swearingen
2010-05-26 15:09:41 UTC
Permalink
Hi--

I'm glad you were able to resolve this problem.

I drafted some hints in this new section:

http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide#Pool_Migration_Issues

We had all the clues; you and Brandon got it, though.

I think my brain function was missing yesterday.

Thanks,

Cindy
Lori Alt
2010-05-26 20:09:47 UTC
Permalink
Was a bug ever filed against zfs for not allowing the bootfs property to
be set to ""? We should always let that request succeed.

lori
Cindy Swearingen
2010-05-26 20:21:44 UTC
Permalink
Hi Lori,

I haven't filed it yet.

We need to file a CR that allows us to successfully set bootfs to "".

The failure case in this thread was attempting to unset bootfs on
a pool with disks that have EFI labels.

Thanks,

Cindy
Richard Elling
2010-05-26 21:25:09 UTC
Permalink
Post by Cindy Swearingen
Hi Lori,
I haven't filed it yet.
We need to file a CR that allows us to successfully set bootfs to "".
+1
Post by Cindy Swearingen
The failure case in this thread was attempting to unset bootfs on
a pool with disks that have EFI labels.
I think the community would be happier when booting from EFI is
solved :-)
-- richard
--
ZFS and NexentaStor training, Rotterdam, July 13-15, 2010
http://nexenta-rotterdam.eventbrite.com/
Brandon High
2010-05-26 21:48:33 UTC
Permalink
Post by Richard Elling
I think the community would be happier when booting from EFI is
solved :-)
+10
--
Brandon High : ***@freaks.com
Brandon High
2010-05-25 15:33:23 UTC
Permalink
Post by Reshekel Shedwitz
Ultimately, I would like to just set the bootfs property back to default, but this seems to be beyond my ability. There are some checks in libzfs_pool.c that I can bypass in order to set the value back to its default of "-", but ultimately I am stopped because there is code in zfs_ioctl.c, which I believe is kernel code, that checks to see if the bootfs value supplied is actually an existing dataset.
I'm fairly certain that I've been able to set and unset the bootfs
property on my rpool in snv_133 and snv_134. Just use an empty value
when setting it.

In fact:
***@basestar:~$ zpool get bootfs rpool
NAME   PROPERTY  VALUE               SOURCE
rpool  bootfs    rpool/ROOT/snv_134  local
***@basestar:~$ pfexec zpool set bootfs= rpool
***@basestar:~$ zpool get bootfs rpool
NAME   PROPERTY  VALUE               SOURCE
rpool  bootfs    -                   default
***@basestar:~$ pfexec zpool set bootfs=rpool/ROOT/snv_134 rpool
***@basestar:~$ zpool get bootfs rpool
NAME   PROPERTY  VALUE               SOURCE
rpool  bootfs    rpool/ROOT/snv_134  local
--
Brandon High : ***@freaks.com
Reshekel Shedwitz
2010-05-25 15:47:21 UTC
Permalink
Post by Brandon High
I'm fairly certain that I've been able to set and unset the bootfs
property on my rpool in snv_133 and snv_134. Just use an empty value
when setting it.
***@nexenta:~# zpool set bootfs= tank
cannot set property for 'tank': property 'bootfs' not supported on EFI labeled devices

***@nexenta:~# zpool get bootfs tank
NAME  PROPERTY  VALUE  SOURCE
tank  bootfs    tank   local

Could this be related to the way FreeBSD's zfs partitioned my disk? I thought ZFS used EFI by default though (except for boot pools).
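In case it helps, here is how I can inspect the labels from Nexenta (a
sketch; c2t5d0 is just one of the pool's disks):

# prtvtoc /dev/rdsk/c2t5d0
(this should print the EFI partition table if the disk is EFI-labeled;
for an SMI label, use the s2 slice, e.g. /dev/rdsk/c2t5d0s2)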

Thanks.
Andrew Gabriel
2010-05-25 16:03:49 UTC
Permalink
Brandon High
2010-05-25 18:09:55 UTC
Permalink
Post by Reshekel Shedwitz
Could this be related to the way FreeBSD's zfs partitioned my disk? I thought ZFS used EFI by default though (except for boot pools).
Looks like it. Solaris thinks that it's EFI partitioned.

By default, Solaris uses SMI for boot volumes, EFI for non-boot volumes.

You could create a new pool (or use space on another existing pool)
and move your data to it, then re-create your old pool.
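Something like this, assuming "scratch" is an empty pool with enough
space (the raidz2 layout below just mirrors your current setup, and the
snapshot name is arbitrary; double-check against backups before
destroying anything):

# zfs snapshot -r tank@migrate
# zfs send -R tank@migrate | zfs recv -Fd scratch
# zpool destroy tank
# zpool create tank raidz2 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0
# zfs send -R scratch@migrate | zfs recv -Fd tank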

-B
--
Brandon High : ***@freaks.com
Brandon High
2010-05-25 22:48:41 UTC
Permalink
Post by Reshekel Shedwitz
I am migrating a pool from FreeBSD 8.0 to OpenSolaris (Nexenta 3.0 RC1). I am in what seems to be a weird situation regarding this pool. Maybe someone can help.
I think everyone missed the completely obvious implication: FreeBSD
allows bootfs to be set on EFI partitioned disks. It might allow the
property to be unset as well.

Can you boot a FreeBSD live cd and unset the zpool property?

If not, you could comment out the check that Andrew Gabriel identified
and rebuild the zpool command. Once unset, you should be able to use
the distro-supplied binary.

-B
--
Brandon High : ***@freaks.com