Solaris ZFS Administration Guide

Managing Devices in ZFS Storage Pools

Most of the basic information regarding devices is covered in Components of a ZFS Storage Pool. Once a pool has been created, you can perform several tasks to manage the physical devices within the pool.

Adding Devices to a Storage Pool

You can dynamically add space to a pool by adding a new top-level virtual device. This space is immediately available to all datasets within the pool. To add a new virtual device to a pool, use the zpool add command. For example:

# zpool add zeepool mirror c2t1d0 c2t2d0

The format of the virtual devices is the same as for the zpool create command, and the same rules apply. Devices are checked to determine if they are in use, and the command cannot change the level of redundancy without the -f option. The command also supports the -n option so that you can perform a dry run. For example:

# zpool add -n zeepool mirror c3t1d0 c3t2d0
would update 'zeepool' to the following configuration:
      zeepool
        mirror
            c1t0d0
            c1t1d0
        mirror
            c2t1d0
            c2t2d0
        mirror
            c3t1d0
            c3t2d0

This command syntax would add mirrored devices c3t1d0 and c3t2d0 to zeepool's existing configuration.

For more information about how virtual device validation is done, see Detecting in Use Devices.

Example 4-1 Adding Disks to a RAID-Z Configuration

Additional disks can be added similarly to a RAID-Z configuration. The following example shows how to convert a storage pool with one RAID-Z device comprised of 3 disks to a storage pool with two RAID-Z devices, each comprised of 3 disks.

# zpool status
  pool: rpool
 state: ONLINE
 scrub: none requested
config:
        NAME         STATE     READ WRITE CKSUM
        rpool        ONLINE       0     0     0
          raidz1     ONLINE       0     0     0
            c1t2d0   ONLINE       0     0     0
            c1t3d0   ONLINE       0     0     0
            c1t4d0   ONLINE       0     0     0

errors: No known data errors
# zpool add rpool raidz c2t2d0 c2t3d0 c2t4d0
# zpool status
  pool: rpool
 state: ONLINE
 scrub: none requested
config:
        NAME         STATE     READ WRITE CKSUM
        rpool        ONLINE       0     0     0
          raidz1     ONLINE       0     0     0
            c1t2d0   ONLINE       0     0     0
            c1t3d0   ONLINE       0     0     0
            c1t4d0   ONLINE       0     0     0
          raidz1     ONLINE       0     0     0
            c2t2d0   ONLINE       0     0     0
            c2t3d0   ONLINE       0     0     0
            c2t4d0   ONLINE       0     0     0

errors: No known data errors
Example 4-2 Adding a Mirrored Log Device to a ZFS Storage Pool

The following example shows how to add a mirrored log device to a mirrored storage pool. For more information about using log devices in your storage pool, see Setting Up Separate ZFS Logging Devices.

# zpool status newpool
  pool: newpool
 state: ONLINE
 scrub: none requested
config:

        NAME         STATE     READ WRITE CKSUM
        newpool      ONLINE       0     0     0
          mirror     ONLINE       0     0     0
            c1t9d0   ONLINE       0     0     0
            c1t10d0  ONLINE       0     0     0

errors: No known data errors
# zpool add newpool log mirror c1t11d0 c1t12d0
# zpool status newpool
  pool: newpool
 state: ONLINE
 scrub: none requested
config:

        NAME         STATE     READ WRITE CKSUM
        newpool      ONLINE       0     0     0
          mirror     ONLINE       0     0     0
            c1t9d0   ONLINE       0     0     0
            c1t10d0  ONLINE       0     0     0
        logs         ONLINE       0     0     0
          mirror     ONLINE       0     0     0
            c1t11d0  ONLINE       0     0     0
            c1t12d0  ONLINE       0     0     0

errors: No known data errors

You can attach a log device to an existing log device to create a mirrored log device. This operation is identical to attaching a device in an unmirrored storage pool.
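
For example, a sketch of attaching a device to an existing unmirrored log device, assuming a hypothetical pool named logpool whose log device is c0t5d0 and a new device c0t6d0:

# zpool attach logpool c0t5d0 c0t6d0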

Example 4-3 Adding and Removing Cache Devices to Your ZFS Storage Pool

You can add cache devices to and remove them from your ZFS storage pool.

Use the zpool add command to add cache devices. For example:

# zpool add tank cache c2t5d0 c2t8d0
# zpool status tank
  pool: tank
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c2t0d0  ONLINE       0     0     0
            c2t1d0  ONLINE       0     0     0
            c2t3d0  ONLINE       0     0     0
        cache
          c2t5d0    ONLINE       0     0     0
          c2t8d0    ONLINE       0     0     0

errors: No known data errors

Cache devices cannot be mirrored or be part of a RAID-Z configuration.

Use the zpool remove command to remove cache devices. For example:

# zpool remove tank c2t5d0 c2t8d0
# zpool status tank
  pool: tank
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c2t0d0  ONLINE       0     0     0
            c2t1d0  ONLINE       0     0     0
            c2t3d0  ONLINE       0     0     0

errors: No known data errors

Currently, the zpool remove command only supports removing hot spares and cache devices. Devices that are part of the main mirrored pool configuration can be removed by using the zpool detach command. Non-redundant and RAID-Z devices cannot be removed from a pool.
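
For example, in the tank configuration shown above, one disk of the three-way mirror could be detached with a command like the following (a sketch only; detaching a device reduces the mirror's redundancy):

# zpool detach tank c2t1d0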

For more information about using cache devices in a ZFS storage pool, see Creating a ZFS Storage Pool with Cache Devices.

Attaching and Detaching Devices in a Storage Pool

In addition to the zpool add command, you can use the zpool attach command to add a new device to an existing mirrored or non-mirrored device.

Example 4-4 Converting a Two-Way Mirrored Storage Pool to a Three-way Mirrored Storage Pool

In this example, zeepool is an existing two-way mirror that is transformed to a three-way mirror by attaching c2t1d0, the new device, to the existing device, c1t1d0.

# zpool status
  pool: zeepool
 state: ONLINE
 scrub: none requested
config:
        NAME        STATE     READ WRITE CKSUM
        zeepool     ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c0t1d0  ONLINE       0     0     0
            c1t1d0  ONLINE       0     0     0
errors: No known data errors
# zpool attach zeepool c1t1d0 c2t1d0
# zpool status
  pool: zeepool
 state: ONLINE
 scrub: resilver completed with 0 errors on Fri Jan 12 14:47:36 2007
config:

        NAME        STATE     READ WRITE CKSUM
        zeepool     ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c0t1d0  ONLINE       0     0     0
            c1t1d0  ONLINE       0     0     0
            c2t1d0  ONLINE       0     0     0

If the existing device is part of a two-way mirror, attaching the new device creates a three-way mirror, and so on. In either case, the new device begins to resilver immediately.

Example 4-5 Converting a Non-Redundant ZFS Storage Pool to a Mirrored ZFS Storage Pool

In addition, you can convert a non-redundant storage pool into a redundant storage pool by using the zpool attach command. For example:

# zpool create tank c0t1d0
# zpool status
  pool: tank
 state: ONLINE
 scrub: none requested
config:
        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          c0t1d0    ONLINE       0     0     0

errors: No known data errors
# zpool attach tank c0t1d0 c1t1d0
# zpool status
  pool: tank
 state: ONLINE
 scrub: resilver completed with 0 errors on Fri Jan 12 14:55:48 2007
config:
        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c0t1d0  ONLINE       0     0     0
            c1t1d0  ONLINE       0     0     0

You can use the zpool detach command to detach a device from a mirrored storage pool. For example:

# zpool detach zeepool c2t1d0

However, this operation is refused if there are no other valid replicas of the data. For example:

# zpool detach newpool c1t2d0
cannot detach c1t2d0: only applicable to mirror and replacing vdevs

Onlining and Offlining Devices in a Storage Pool

ZFS allows individual devices to be taken offline or brought online. When hardware is unreliable or not functioning properly, ZFS continues to read or write data to the device, assuming the condition is only temporary. If the condition is not temporary, you can instruct ZFS to ignore the device by taking it offline. ZFS does not send any requests to an offlined device.


Note - Devices do not need to be taken offline in order to replace them.


You can use the offline command when you need to temporarily disconnect storage. For example, if you need to physically disconnect an array from one set of Fibre Channel switches and connect the array to a different set, you could take the LUNs offline from the array that was used in ZFS storage pools. After the array was reconnected and operational on the new set of switches, you could then bring the same LUNs online. Data that had been added to the storage pools while the LUNs were offline would resilver to the LUNs after they were brought back online.
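
For example, assuming the array presents two LUNs to the pool tank as c1t0d0 and c1t1d0 (hypothetical names), you might take them offline before the move and bring them back online after the array is reconnected:

# zpool offline tank c1t0d0 c1t1d0
# zpool online tank c1t0d0 c1t1d0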

This scenario is possible assuming that the systems in question see the storage once it is attached to the new switches, possibly through different controllers than before, and your pools are set up as RAID-Z or mirrored configurations.

Taking a Device Offline

You can take a device offline by using the zpool offline command. The device can be specified by path or by short name, if the device is a disk. For example:

# zpool offline tank c1t0d0
bringing device c1t0d0 offline
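
The same device could also be specified by its full device path, for example (assuming the default Solaris device directory):

# zpool offline tank /dev/dsk/c1t0d0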

Keep the following points in mind when taking a device offline:

  • You cannot take a pool offline to the point where it becomes faulted. For example, you cannot take offline two devices out of a RAID-Z configuration, nor can you take offline a top-level virtual device.

    # zpool offline tank c1t0d0
    cannot offline c1t0d0: no valid replicas
  • By default, the offline state is persistent. The device remains offline when the system is rebooted.

    To temporarily take a device offline, use the zpool offline -t option. For example:

    # zpool offline -t tank c1t0d0
     bringing device 'c1t0d0' offline

    When the system is rebooted, this device is automatically returned to the ONLINE state.

  • When a device is taken offline, it is not detached from the storage pool. If you attempt to use the offlined device in another pool, even after the original pool is destroyed, you will see a message similar to the following:

    device is part of exported or potentially active ZFS pool. Please see zpool(1M)

    If you want to use the offlined device in another storage pool after destroying the original storage pool, first bring the device back online, then destroy the original storage pool, as shown in the example at the end of this list.

    Another way to use a device from another storage pool if you want to keep the original storage pool is to replace the existing device in the original storage pool with another comparable device. For information about replacing devices, see Replacing Devices in a Storage Pool.
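
    For example, a sketch of that sequence, assuming the offlined device is c1t2d0 in a hypothetical pool named dozer:

    # zpool online dozer c1t2d0
    # zpool destroy dozer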

Offlined devices show up in the OFFLINE state when you query pool status. For information about querying pool status, see Querying ZFS Storage Pool Status.

For more information on device health, see Determining the Health Status of ZFS Storage Pools.

Bringing a Device Online

Once a device is taken offline, it can be restored by using the zpool online command:

# zpool online tank c1t0d0
bringing device c1t0d0 online

When a device is brought online, any data that has been written to the pool is resynchronized to the newly available device. Note that you cannot use device onlining to replace a disk. If you offline a device, replace the drive, and try to bring it online, it remains in the faulted state.
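
To replace the disk, use the zpool replace command instead. For example, a sketch assuming the drive at c1t0d0 was physically replaced in the same slot:

# zpool replace tank c1t0d0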

If you attempt to online a faulted device, a message similar to the following is displayed from fmd:

# zpool online tank c1t0d0
Bringing device c1t0d0 online
# 
SUNW-MSG-ID: ZFS-8000-D3, TYPE: Fault, VER: 1, SEVERITY: Major
EVENT-TIME: Thu Aug 31 11:13:59 MDT 2006
PLATFORM: SUNW,Ultra-60, CSN: -, HOSTNAME: neo
SOURCE: zfs-diagnosis, REV: 1.0
EVENT-ID: e11d8245-d76a-e152-80c6-e63763ed7e4f
DESC: A ZFS device failed.  Refer to http://sun.com/msg/ZFS-8000-D3 for more information.
AUTO-RESPONSE: No automated response will occur.
IMPACT: Fault tolerance of the pool may be compromised.
REC-ACTION: Run 'zpool status -x' and replace the bad device.

For more information on replacing a faulted device, see Repairing a Missing Device.

Clearing Storage Pool Devices

If a device is taken offline due to a failure that causes errors to be listed in the zpool status output, you can clear the error counts with the zpool clear command.

If specified with no arguments, this command clears all device errors within the pool. For example:

# zpool clear tank

If one or more devices are specified, this command clears only the errors associated with the specified devices. For example:

# zpool clear tank c1t0d0

For more information on clearing zpool errors, see Clearing Transient Errors.

Replacing Devices in a Storage Pool

You can replace a device in a storage pool by using the zpool replace command.

If you are physically replacing a device with another device in the same location in a redundant pool, then you need only identify the replaced device. ZFS recognizes that it is a different disk in the same location. For example, to replace a failed disk (c1t1d0) by removing the disk and replacing it in the same location, use syntax similar to the following:

# zpool replace tank c1t1d0

If you are replacing a device in a non-redundant storage pool that contains only one device, you will need to specify both devices. For example:

# zpool replace tank c1t1d0 c1t2d0

Keep the following considerations in mind when replacing devices in a ZFS storage pool:

  • The replacement device must be greater than or equal to the minimum size of all the devices in a mirrored or RAID-Z configuration.

  • If the replacement device is larger, the pool capacity is increased when the replacement is complete. Currently, you must export and import the pool to see the expanded capacity. For example:

    # zpool list tank
    NAME   SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
    tank  16.8G    94K  16.7G     0%  ONLINE  -
    # zpool replace tank c0t0d0 c0t4d0
    # zpool list tank
    NAME   SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
    tank  16.8G   112K  16.7G     0%  ONLINE  -
    # zpool export tank
    # zpool import tank
    # zpool list tank
    NAME   SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
    tank  33.9G   114K  33.9G     0%  ONLINE  -

    For more information about exporting and importing pools, see Migrating ZFS Storage Pools.

  • Currently, you must also perform the export and import steps when growing the size of an existing LUN that is part of a storage pool to see the expanded capacity.

  • Replacing many disks in a large pool is time consuming because the data must be resilvered onto each new disk. In addition, you might consider running the zpool scrub command between disk replacements to ensure that the replacement devices are operational and the data is written correctly, as shown in the example below.
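
    For example, a scrub of a hypothetical pool named tank could be started, and its progress checked, as follows:

    # zpool scrub tank
    # zpool status tank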

For more information about replacing devices, see Repairing a Missing Device and Repairing a Damaged Device.

Designating Hot Spares in Your Storage Pool

The hot spares feature enables you to identify disks that could be used to replace a failed or faulted device in one or more storage pools. Designating a device as a hot spare means that the device is not an active device in a pool, but if an active device in the pool fails, the hot spare automatically replaces the failed device.

Devices can be designated as hot spares in the following ways:

  • When the pool is created with the zpool create command

  • After the pool is created with the zpool add command

  • Hot spare devices can be shared between multiple pools

Designate devices as hot spares when the pool is created. For example:

# zpool create zeepool mirror c1t1d0 c2t1d0 spare c1t2d0 c2t2d0
# zpool status zeepool
  pool: zeepool
 state: ONLINE
 scrub: none requested
config:

        NAME         STATE     READ WRITE CKSUM
        zeepool      ONLINE       0     0     0
          mirror     ONLINE       0     0     0
            c1t1d0   ONLINE       0     0     0
            c2t1d0   ONLINE       0     0     0
        spares
          c1t2d0     AVAIL   
          c2t2d0     AVAIL   

Designate hot spares by adding them to a pool after the pool is created. For example:

# zpool add -f zeepool spare c1t3d0 c2t3d0
# zpool status zeepool
  pool: zeepool
 state: ONLINE
 scrub: none requested
config:

        NAME         STATE     READ WRITE CKSUM
        zeepool      ONLINE       0     0     0
          mirror     ONLINE       0     0     0
            c1t1d0   ONLINE       0     0     0
            c2t1d0   ONLINE       0     0     0
        spares
          c1t3d0     AVAIL   
          c2t3d0     AVAIL   

Multiple pools can share devices that are designated as hot spares. For example:

# zpool create zeepool mirror c1t1d0 c2t1d0 spare c1t2d0 c2t2d0
# zpool create tank raidz c3t1d0 c4t1d0 spare c1t2d0 c2t2d0

Hot spares can be removed from a storage pool by using the zpool remove command. For example:

# zpool remove zeepool c1t2d0
# zpool status zeepool
  pool: zeepool
 state: ONLINE
 scrub: none requested
config:

        NAME         STATE     READ WRITE CKSUM
        zeepool      ONLINE       0     0     0
          mirror     ONLINE       0     0     0
            c1t1d0   ONLINE       0     0     0
            c2t1d0   ONLINE       0     0     0
        spares
          c1t3d0     AVAIL

A hot spare cannot be removed if it is currently used by the storage pool.

Keep the following points in mind when using ZFS hot spares:

  • Currently, the zpool remove command can only be used to remove hot spares.

  • Add a disk as a spare that is equal to or larger than the size of the largest disk in the pool. Adding a smaller disk as a spare to a pool is allowed. However, when the smaller spare disk is activated, either automatically or with the zpool replace command, the operation fails with an error similar to the following:

    cannot replace disk3 with disk4: device is too small

Activating and Deactivating Hot Spares in Your Storage Pool

Hot spares are activated in the following ways:

  • Manual replacement – Replace a failed device in a storage pool with a hot spare by using the zpool replace command.

  • Automatic replacement – When a fault is received, an FMA agent examines the pool to see if it has any available hot spares. If so, it replaces the faulted device with an available spare.

    If a hot spare that is currently in use fails, the agent detaches the spare and thereby cancels the replacement. The agent then attempts to replace the device with another hot spare, if one is available. This feature is currently limited by the fact that the ZFS diagnosis engine only emits faults when a device disappears from the system.

    Currently, no automated response is available to bring the original device back online. You must explicitly take one of the actions described in the example below. A future enhancement will allow ZFS to subscribe to hotplug events and automatically replace the affected device when it is replaced on the system.

Manually replace a device with a hot spare by using the zpool replace command. For example:

# zpool replace zeepool c2t1d0 c2t3d0
# zpool status zeepool
  pool: zeepool
 state: ONLINE
 scrub: resilver completed with 0 errors on Fri Jun  2 13:44:40 2006
config:

        NAME            STATE     READ WRITE CKSUM
        zeepool         ONLINE       0     0     0
          mirror        ONLINE       0     0     0
            c1t2d0      ONLINE       0     0     0
            spare       ONLINE       0     0     0
              c2t1d0    ONLINE       0     0     0
              c2t3d0    ONLINE       0     0     0
        spares
          c1t3d0        AVAIL   
          c2t3d0        INUSE     currently in use

errors: No known data errors

A faulted device is automatically replaced if a hot spare is available. For example:

# zpool status -x
  pool: zeepool
 state: DEGRADED
status: One or more devices could not be opened.  Sufficient replicas exist for
        the pool to continue functioning in a degraded state.
action: Attach the missing device and online it using 'zpool online'.
   see: http://www.sun.com/msg/ZFS-8000-D3
 scrub: resilver completed with 0 errors on Fri Jun  2 13:56:49 2006
config:

        NAME                 STATE     READ WRITE CKSUM
        zeepool              DEGRADED     0     0     0
          mirror             DEGRADED     0     0     0
            c1t2d0           ONLINE       0     0     0
            spare            DEGRADED     0     0     0
              c2t1d0         UNAVAIL      0     0     0  cannot open
              c2t3d0         ONLINE       0     0     0
        spares
          c1t3d0             AVAIL   
          c2t3d0             INUSE     currently in use

errors: No known data errors

Currently, three ways to deactivate hot spares are available:

  • Canceling the hot spare by removing it from the storage pool

  • Replacing the original device with a hot spare

  • Permanently swapping in the hot spare

After the faulted device is replaced, use the zpool detach command to return the hot spare back to the spare set. For example:

# zpool detach zeepool c2t3d0
# zpool status zeepool
  pool: zeepool
 state: ONLINE
 scrub: resilver completed with 0 errors on Fri Jun  2 13:58:35 2006
config:

        NAME               STATE     READ WRITE CKSUM
        zeepool            ONLINE       0     0     0
          mirror           ONLINE       0     0     0
            c1t2d0         ONLINE       0     0     0
            c2t1d0         ONLINE       0     0     0
        spares
          c1t3d0           AVAIL   
          c2t3d0           AVAIL

errors: No known data errors