File Systems and Non-Global Zones
This section provides information about file system issues on a Solaris system with
zones installed. Each zone has its own section of the file system
hierarchy, rooted at a directory known as the zone root. Processes in the
zone can access only files in the part of the hierarchy that is
located under the zone root. The chroot utility can be used in a
zone, but only to restrict the process to a root path within the
zone. For more information about chroot, see chroot(1M).
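For example, a process in a zone can be confined to a directory that lies under the zone root; the path and shell shown here are hypothetical:
# chroot /export/build /usr/bin/ksh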
The -o nosuid Option
The -o nosuid option to the mount utility has the following functionality:
- Processes from a setuid binary located on a file system that is mounted using the nosetuid option do not run with the privileges of the setuid binary. The processes run with the privileges of the user that executes the binary. For example, if a user executes a setuid binary that is owned by root, the processes run with the privileges of the user.
- Opening device-special entries in the file system is not allowed. This behavior is equivalent to specifying the nodevices option.
This file system-specific option is available to all Solaris file systems that can
be mounted with mount utilities, as described in the mount(1M) man page. In this
guide, these file systems are listed in Mounting File Systems in Zones, which also describes their mounting capabilities.
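As an illustration only, the following command mounts a UFS file system with the -o nosuid option; the device and mount point are hypothetical:
# mount -F ufs -o nosuid /dev/dsk/c1t0d0s7 /export/data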
For more information about the -o nosuid option, see “Accessing Network File
Systems (Reference)” in System Administration Guide: Network Services.
Mounting File Systems in Zones
When file systems are mounted from within a zone, the nodevices option applies.
For example, if a zone is granted access to a block device
(/dev/dsk/c0t0d0s7) and a raw device (/dev/rdsk/c0t0d0s7) corresponding to a UFS file system, the
file system is automatically mounted nodevices when mounted from within a zone.
This rule does not apply to mounts specified through a zonecfg configuration. Options for mounting file systems in non-global zones are described in the following table. Procedures for these mounting alternatives are provided in Configuring, Verifying, and Committing a Zone and Mounting File Systems in Running Non-Global Zones. Any file system type not listed in the table can be specified in the configuration if it has a mount binary in /usr/lib/fs/fstype/mount.
File System | Mounting Options in a Non-Global Zone
AutoFS | Cannot be mounted using zonecfg, cannot be manually mounted from the global zone into a non-global zone. Can be mounted from within the zone.
CacheFS | Cannot be used in a non-global zone.
FDFS | Can be mounted using zonecfg, can be manually mounted from the global zone into a non-global zone, can be mounted from within the zone.
HSFS | Can be mounted using zonecfg, can be manually mounted from the global zone into a non-global zone, can be mounted from within the zone.
LOFS | Can be mounted using zonecfg, can be manually mounted from the global zone into a non-global zone, can be mounted from within the zone.
MNTFS | Cannot be mounted using zonecfg, cannot be manually mounted from the global zone into a non-global zone. Can be mounted from within the zone.
NFS | Cannot be mounted using zonecfg. V2, V3, and V4, which are the versions currently supported in zones, can be mounted from within the zone.
PCFS | Can be mounted using zonecfg, can be manually mounted from the global zone into a non-global zone, can be mounted from within the zone.
PROCFS | Cannot be mounted using zonecfg, cannot be manually mounted from the global zone into a non-global zone. Can be mounted from within the zone.
TMPFS | Can be mounted using zonecfg, can be manually mounted from the global zone into a non-global zone, can be mounted from within the zone.
UDFS | Can be mounted using zonecfg, can be manually mounted from the global zone into a non-global zone, can be mounted from within the zone.
UFS | Can be mounted using zonecfg, can be manually mounted from the global zone into a non-global zone, can be mounted from within the zone.
XMEMFS | Support for this file system has been removed from Solaris with this release.
ZFS | Can be mounted using the zonecfg dataset and fs resource types.
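The following sketch shows two of these alternatives for a UFS file system; the zone name, device, and directories are hypothetical. The first adds the mount to the zone configuration with zonecfg, and the second manually mounts the device under the zone root from the global zone:
# zonecfg -z my-zone
zonecfg:my-zone> add fs
zonecfg:my-zone:fs> set dir=/opt/data
zonecfg:my-zone:fs> set special=/dev/dsk/c0t0d0s7
zonecfg:my-zone:fs> set raw=/dev/rdsk/c0t0d0s7
zonecfg:my-zone:fs> set type=ufs
zonecfg:my-zone:fs> end
zonecfg:my-zone> commit
zonecfg:my-zone> exit
# mount -F ufs /dev/dsk/c0t0d0s7 /zones/my-zone/root/opt/data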
For more information, see How to Configure the Zone, Mounting File Systems in Running Non-Global Zones, and the mount(1M) man page.
Unmounting File Systems in Zones
The ability to unmount a file system depends on who performed
the initial mount. If a file system is specified as part of the
zone's configuration using the zonecfg command, then the global zone owns this mount and
the non-global zone administrator cannot unmount the file system. If the file system
is mounted from within the non-global zone, for example, by specifying the mount
in the zone's /etc/vfstab file, then the non-global zone administrator can unmount the
file system.
Security Restrictions and File System Behavior
There are security restrictions on mounting certain file systems from within a zone.
Other file systems exhibit special behavior when mounted in a zone. The list
of modified file systems follows.
- AutoFS
Autofs is a client-side service that automatically mounts the appropriate file system. When a client attempts to access a file system that is not presently mounted, the AutoFS file system intercepts the request and calls automountd to mount the requested directory. AutoFS mounts established within a zone are local to that zone. The mounts cannot be accessed from other zones, including the global zone. The mounts are removed when the zone is halted or rebooted. For more information on AutoFS, see How Autofs Works in System Administration Guide: Network Services.
Each zone runs its own copy of automountd. The auto maps and timeouts are controlled by the zone administrator. You cannot trigger a mount in another zone by crossing an AutoFS mount point for a non-global zone from the global zone.
Certain AutoFS mounts are created in the kernel when another mount is triggered. Such mounts cannot be removed by using the regular umount interface because they must be mounted or unmounted as a group. Note that this functionality is provided for zone shutdown.
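For example, from the global zone you can confirm that a given zone is running its own copy of automountd; the zone name is hypothetical:
# pgrep -z my-zone automountd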
- MNTFS
MNTFS is a virtual file system that provides read-only access to the table of mounted file systems for the local system. The set of file systems visible by using mnttab from within a non-global zone is the set of file systems mounted in the zone, plus an entry for root (/). Mount points with a special device that is not accessible from within the zone, such as /dev/rdsk/c0t0d0s0, have their special device set to the same as the mount point.
All mounts in the system are visible from the global zone's /etc/mnttab table. For more information on MNTFS, see Chapter 19, Mounting and Unmounting File Systems (Tasks), in System Administration Guide: Devices and File Systems.
- NFS
NFS mounts established within a zone are local to that zone. The mounts cannot be accessed from other zones, including the global zone. The mounts are removed when the zone is halted or rebooted.
As documented in the mount_nfs(1M) man page, an NFS server should not attempt to mount its own file systems. Thus, a zone should not NFS mount a file system exported by the global zone. Zones cannot be NFS servers. From within a zone, NFS mounts behave as though mounted with the nodevices option.
The nfsstat command output only pertains to the zone in which the command is run. For example, if the command is run in the global zone, only information about the global zone is reported. For more information about the nfsstat command, see nfsstat(1M).
The zlogin command will fail if any of its open files or any portion of its address space reside on NFS. For more information, see zlogin Command.
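As a sketch, with a hypothetical server and export, a zone administrator might mount an NFS file system and then inspect client-side statistics from within the zone:
# mount -F nfs nfsserver:/export/docs /mnt
# nfsstat -c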
- PROCFS
The /proc file system, or PROCFS, provides process visibility and access restrictions as well as information about the zone association of processes. Only processes in the same zone are visible through /proc. Processes in the global zone can observe processes and other objects in non-global zones. This allows such processes to have system-wide observability. From within a zone, procfs mounts behave as though mounted with the nodevices option. For more information about procfs, see the proc(4) man page.
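For example, from the global zone you can list the processes associated with a particular non-global zone; the zone name is hypothetical:
# ps -z my-zone -o pid,args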
- LOFS
The scope of what can be mounted through LOFS is limited to the portion of the file system that is visible to the zone. Hence, there are no restrictions on LOFS mounts in a zone.
- UFS, UDFS, PCFS, and other storage-based file systems
When using the zonecfg command to configure storage-based file systems that have an fsck binary, such as UFS, the zone administrator must specify a raw parameter. The parameter indicates the raw (character) device, such as /dev/rdsk/c0t0d0s7. zoneadmd automatically runs the fsck command in non-interactive check-only mode (fsck -m) on this device before it mounts the file system. If the fsck fails, zoneadmd cannot bring the zone to the ready state. The path specified by raw cannot be a relative path.
It is an error to specify a device to fsck for a file system that does not provide an fsck binary in /usr/lib/fs/fstype/fsck. It is also an error if you do not specify a device to fsck when an fsck binary exists for that file system. For more information, see The zoneadmd Daemon and the fsck(1M) man page.
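The check that zoneadmd performs is equivalent to running fsck in check-only mode against the raw device, for example (the device is hypothetical):
# fsck -F ufs -m /dev/rdsk/c0t0d0s7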
- ZFS
You can add a ZFS dataset to a non-global zone by using the zonecfg command with the add dataset resource. The dataset will be visible and mounted in the non-global zone and no longer visible in the global zone. The zone administrator can create and destroy file systems within that dataset, and modify the properties of the dataset.
The zoned attribute of zfs indicates whether a dataset has been added to a non-global zone.
# zfs get zoned tank/sales
NAME        PROPERTY  VALUE  SOURCE
tank/sales  zoned     on     local
If you want to share a dataset from the global zone, you can add an LOFS-mounted ZFS file system by using the zonecfg command with the add fs subcommand. The global administrator is responsible for setting and controlling the properties of the dataset. For more information on ZFS, see Chapter 10, ZFS Advanced Topics, in Solaris ZFS Administration Guide.
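A minimal sketch of delegating a dataset with zonecfg, assuming a zone named my-zone and the tank/sales dataset shown above:
# zonecfg -z my-zone
zonecfg:my-zone> add dataset
zonecfg:my-zone:dataset> set name=tank/sales
zonecfg:my-zone:dataset> end
zonecfg:my-zone> commit
zonecfg:my-zone> exit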
Non-Global Zones as NFS Clients
Zones can be NFS clients. Version 2, version 3, and version 4
protocols are supported. For information on these NFS versions, see Features of the NFS Service in System Administration Guide: Network Services. The default version is NFS version 4. You can enable other NFS versions on a client by using one of the following methods:
- Set the NFS_CLIENT_VERSMAX parameter in the /etc/default/nfs file so that the zone uses the specified version by default.
- Manually create a version mount by specifying the version in the mount options, as shown in the example after this list. This method overrides the setting in /etc/default/nfs.
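For example, the following command run from within the zone mounts an export using NFS version 3; the server name and paths are hypothetical:
# mount -F nfs -o vers=3 nfsserver:/export/data /mnt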
Use of mknod Prohibited in a Zone
Note that you cannot use the mknod command documented in the mknod(1M) man
page to make a special file in a non-global zone.
Traversing File Systems
A zone's file system namespace is a subset of the namespace accessible from
the global zone. Unprivileged processes in the global zone are prevented from traversing
a non-global zone's file system hierarchy through the following means:
- Specifying that the zone root's parent directory is owned, readable, writable, and executable by root only (see the example following this list)
- Restricting access to directories exported by /proc
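For example, a zone path as seen from the global zone typically looks like the following; the path and listing details are illustrative only:
# ls -ld /zones/my-zone
drwx------   4 root     root           4 Jun  1 12:00 /zones/my-zone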
Note that attempting to access AutoFS nodes mounted for another zone will fail.
The global administrator must not have auto maps that descend into other zones.
Restriction on Accessing a Non-Global Zone From the Global Zone
After a non-global zone is installed, the zone must never be accessed directly
from the global zone by any commands other than system backup utilities. Moreover,
a non-global zone can no longer be considered secure after it has been
exposed to an unknown environment. An example would be a zone placed on
a publicly accessible network, where it would be possible for the zone to
be compromised and the contents of its file systems altered. If there
is any possibility that compromise has occurred, the global administrator should treat the zone
as untrusted. Any command that accepts an alternative root by using the -R or
-b options (or the equivalent) must not be used when the following are
true:
- The command is run in the global zone.
- The alternative root refers to any path within a non-global zone, whether the path is relative to the current running system's global zone or the global zone in an alternative root.
An example is the -R root_path option to the pkgadd utility run from the global zone with a non-global zone root path; a sample of this kind of prohibited invocation follows the list below. The list of commands, programs, and utilities that use -R with an alternative root path includes the following:
auditreduce
bart
flar
flarcreate
installf
localeadm
makeuuid
metaroot
patchadd
patchrm
pkgadd
pkgadm
pkgask
pkgchk
pkgrm
prodreg
removef
routeadm
showrev
syseventadm
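For example, the following pkgadd invocation, run from the global zone with a hypothetical zone path and package name, is precisely the kind of usage that must be avoided:
# pkgadd -R /zones/my-zone/root -d /var/spool/pkg SUNWexample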
The list of commands and programs that use -b with an alternative root path includes the following:
add_drv
pprosetup
rem_drv
roleadd
sysidconfig
update_drv
useradd