Solaris Express Installation Guide: Solaris Live Upgrade and Upgrade Planning

Upgrading a System With Non-Global Zones Installed (Example)

The following procedure provides an example with abbreviated instructions for upgrading with Solaris Live Upgrade.

For detailed explanations of steps, see Upgrading With Solaris Live Upgrade When Non-Global Zones Are Installed on a System (Tasks).

Upgrading With Solaris Live Upgrade When Non-Global Zones Are Installed on a System

The following example provides abbreviated descriptions of the steps to upgrade a system with non-global zones installed. In this example, a new boot environment is created by using the lucreate command on a system that is running the Solaris 10 release. The system has non-global zones installed, and one non-global zone, zone1, has a separate file system, zone1/root/export, that resides on a shared file system. The new boot environment is upgraded to the Solaris Express 5/07 release by using the luupgrade command. The upgraded boot environment is then activated by using the luactivate command.
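
Before you start, you can confirm the zone layout that this example assumes. The commands below are a sketch only; the zone name zone1 comes from this example, and your output depends on how your own zones are configured. The zoneadm command lists the configured zones, and zonecfg displays any file systems that were added to zone1 with the add fs resource.

    # zoneadm list -cv
    # zonecfg -z zone1 info fs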


Note - This procedure assumes that the system is running removable media services. If you have questions about removable media services that manage discs, refer to System Administration Guide: Devices and File Systems for detailed information.


  1. Remove the Solaris Live Upgrade packages from the current boot environment.

    # pkgrm SUNWlucfg SUNWluu SUNWlur

    Note - Remove the SUNWlucfg package if your current boot environment is Solaris Express 2/07 build 52 or later. If your boot environment is running build 51, an earlier build, or another release, this package is not on your system.
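
    If you are not sure whether SUNWlucfg is present, you can check first. This is a sketch only; pkginfo reports an error for a package that is not installed, in which case you would omit SUNWlucfg from the pkgrm command.

    # pkginfo SUNWlucfg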


  2. Insert the Solaris DVD or CD. Then install the replacement Solaris Live Upgrade packages from the target release.

    • For SPARC based systems:

      # pkgadd -d /media/cdrom/s0/Solaris_11/Product SUNWlucfg SUNWlur SUNWluu
    • For x86 based systems:

      # pkgadd -d /media/cdrom/Solaris_11/Product SUNWlucfg SUNWlur SUNWluu
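
    (Optional) You can verify that the replacement packages were added before you continue. This is a sketch only; pkginfo prints a line for each package that is installed.

    # pkginfo SUNWlucfg SUNWlur SUNWluu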
  3. Create a boot environment.

    In the following example, a new boot environment named newbe is created. The root (/) file system is placed on c0t1d0s4. All non-global zones in the current boot environment are copied to the new boot environment. A separate file system was created for zone1 with the zonecfg add fs command. This separate file system, zone1/root/export, is placed on its own slice, c0t1d0s1. This option prevents the separate file system from being shared between the current boot environment and the new boot environment.

    # lucreate -n newbe -m /:/dev/dsk/c0t1d0s4:ufs -m /export:/dev/dsk/c0t1d0s1:ufs:zone1
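
    The separate file system for zone1 was added to the zone configuration before this procedure began. The following session is a sketch of how such a resource might have been created with zonecfg; the device name cXtYdZsN is a placeholder and is not part of this example.

    # zonecfg -z zone1
    zonecfg:zone1> add fs
    zonecfg:zone1:fs> set dir=/export
    zonecfg:zone1:fs> set special=/dev/dsk/cXtYdZsN
    zonecfg:zone1:fs> set raw=/dev/rdsk/cXtYdZsN
    zonecfg:zone1:fs> set type=ufs
    zonecfg:zone1:fs> end
    zonecfg:zone1> verify
    zonecfg:zone1> commit
    zonecfg:zone1> exit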
  4. Upgrade the new boot environment.

    In this example, /net/server/export/Solaris_11/combined.solaris_wos is the path to the network installation image.

    # luupgrade -n newbe -u -s /net/server/export/Solaris_11/combined.solaris_wos
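
    If you are upgrading from the Solaris DVD rather than a network installation image, point the -s option at the mounted media instead. This is a sketch only; the mount point follows the one used in step 2, and the exact path, including any s0 slice directory on SPARC based systems, depends on how the media is mounted on your system.

    # luupgrade -n newbe -u -s /media/cdrom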
  5. (Optional) Verify that the boot environment is bootable.

    The lustatus command reports whether boot environment creation is complete.

    # lustatus
    Boot Environment   Is        Active   Active     Can        Copy
    Name               Complete  Now      On Reboot  Delete     Status
    ------------------------------------------------------------------------
    c0t1d0s0           yes       yes      yes        no         -
    newbe              yes       no       no         yes        -
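
    (Optional) You can also list the file systems that are configured for the new boot environment to confirm that the root (/) file system and the separate file system for zone1 are on the intended slices. This is a sketch only; the output varies by release.

    # lufslist newbe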
  6. Activate the new boot environment.

    # luactivate newbe
    # init 6

    The boot environment newbe is now active.
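
    After the reboot, you can confirm that newbe is active and that the non-global zones booted. This is a sketch only; zone names and states depend on your configuration.

    # lustatus
    # zoneadm list -cv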

  7. (Optional) Fall back to a different boot environment. If the new boot environment is not viable or you want to switch to another boot environment, see Chapter 6, Failure Recovery: Falling Back to the Original Boot Environment (Tasks).
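
    In the simple case, falling back means activating the original boot environment, c0t1d0s0 in this example, and rebooting. This is a sketch only; if the new boot environment does not boot at all, you might instead need the media-based or net-based fallback procedures described in that chapter.

    # luactivate c0t1d0s0
    # init 6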
