Solaris ZFS and Zones



The following is a simple example of creating a ZFS filesystem and using it to hold a newly created Solaris Zone (Solaris Container). Zones are part of Solaris 10; ZFS is a new filesystem in OpenSolaris that allows for larger, more reliable filesystems. The three key advantages are:

* Simple administration
* Data integrity (64-bit checksums on data)
* Large capacity for future growth (a 128-bit filesystem addressing up to 2**128 bytes of storage). That's 256 quadrillion zettabytes.

Other features are:

* Filesystems built on virtual storage "pools"
* Copy-on-write removes need for recovery (no fsck)
* Dynamic striping and multiple block sizes (512 bytes to 128 KB) optimize throughput
* Optional compression
* No modifications needed for apps

ZFS software is in packages SUNWzfsr and SUNWzfsu.
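To confirm that these packages are installed, you can query them directly (a quick check; output will vary by release):

# pkginfo SUNWzfsr SUNWzfsu
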
Create a ZFS Pool

First, you need a virtual device for ZFS. Normally this would be a raw disk (or a raw disk slice, if you prefer). However, for testing/demonstration, I'll create a regular file. This takes a few minutes; the elapsed time is shown after the command:

# mkfile 5g /virtualDeviceForZFS
4m12.95s

Now I create a "ZFS Storage Pool" for one or more ZFS filesystems:

# zpool create poolForZones /virtualDeviceForZFS
# zpool list
NAME           SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
poolForZones  4.97G  32.5K  4.97G    0%  ONLINE  -

To create a mirrored pool, use the keyword "mirror" and specify two (or more) virtual devices.
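For example (a sketch only: the device names below are hypothetical placeholders; in practice you would mirror across two separate physical disks or slices):

# zpool create poolForZones mirror /virtualDeviceA /virtualDeviceB
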
Create a ZFS Filesystem

Now, I'll create a ZFS filesystem using the ZFS pool I just created:

# zfs create poolForZones/twilightZone
# zfs set mountpoint=/twilightZone poolForZones/twilightZone
# zpool status poolForZones
  pool: poolForZones
 state: ONLINE
 scrub: none requested
config:

        NAME                    STATE     READ WRITE CKSUM
        poolForZones            ONLINE       0     0     0
          /virtualDeviceForZFS  ONLINE       0     0     0

# mount |grep twilightZone
/twilightZone on poolForZones/twilightZone read/write/setuid/devices/exec/atime/dev=3f50004 on Mon Nov 14 12:34:37 2005
# df -k /twilightZone
Filesystem            kbytes    used   avail capacity  Mounted on
poolForZones/twilightZone
                     5169408       8 5169341     1%    /twilightZone
# ls -l /twilightZone
total 0

Note that /twilightZone is not in /etc/vfstab. Mounting is done automatically at boot time by ZFS:

# grep /twilightZone /etc/vfstab
#
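
The automatic mount is driven by the mountpoint property set earlier. You can inspect properties with zfs get and, if you like, turn on the optional compression mentioned above (a small sketch using the dataset created in this example):

# zfs get mountpoint,compression poolForZones/twilightZone
# zfs set compression=on poolForZones/twilightZone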

If you want to allow a filesystem to be managed from inside the zone, set the ZFS zoned property to on (zoned=on) when creating or modifying that filesystem.
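For example, to create a separate dataset and hand it over to the zone to manage (a sketch; poolForZones/zoneData is a hypothetical dataset, not one used elsewhere in this article):

# zfs create poolForZones/zoneData
# zfs set zoned=on poolForZones/zoneData
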
Create a Solaris Zone

Use zonecfg to set up your zone:

# zonecfg -z twilightZone
twilightZone: No such zone configured
Use 'create' to begin configuring a new zone.
zonecfg:twilightZone> create
zonecfg:twilightZone> set zonepath=/twilightZone
zonecfg:twilightZone> set autoboot=true
zonecfg:twilightZone> add net
zonecfg:twilightZone:net> set address=10.140.1.25
zonecfg:twilightZone:net> set physical=ce0
zonecfg:twilightZone:net> end
zonecfg:twilightZone> verify
zonecfg:twilightZone> commit
zonecfg:twilightZone> exit
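
Before installing, you can review the configuration you just committed (a quick sanity check):

# zonecfg -z twilightZone info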

Install a Solaris Zone

Now install packages to your Solaris Zone:

# zoneadm -z twilightZone install
/twilightZone must not be group readable.
/twilightZone must not be group executable.
/twilightZone must not be world readable.
/twilightZone must not be world executable.
could not verify zonepath /twilightZone because of the above errors.
zoneadm: zone twilightZone failed to verify

Oops. We need to set the proper permissions: the zonepath directory must not be group- or world-readable, -writable, or -executable (that is, mode 700):

# ls -ld /twilightZone
drwxr-xr-x 2 root sys 2 Nov 14 12:34 /twilightZone
# chmod go-rxw /twilightZone
# ls -ld /twilightZone
drwx------ 2 root sys 2 Nov 14 12:34 /twilightZone

Try the install with zoneadm again. This takes several minutes:

# zoneadm -z twilightZone install
Preparing to install zone <twilightZone>.
Creating list of files to copy from the global zone.
Copying <2808> files to the zone.
Initializing zone product registry.
Determining zone package initialization order.
Preparing to initialize <946> packages on the zone.
Initializing package <252> of <946>: percent complete: 26%
. . .
Initialized <946> packages on zone.
Zone is initialized.
The file contains a log of the zone installation.

Later, if you wish to halt, uninstall, or delete the zone, use these commands, respectively (note that halt and uninstall are zoneadm subcommands, while delete is a zonecfg subcommand):

zoneadm -z twilightZone halt
zoneadm -z twilightZone uninstall
zonecfg -z twilightZone delete

By default zonecfg creates a "sparse" zone; that is, read-only files are shared ("inherited") from the "global" zone. This saves a lot of space, as shown below: only about 68 MB is used (as opposed to the 4 GB or so used by the global zone):

# df -k /twilightZone
Filesystem            kbytes    used   avail capacity  Mounted on
poolForZones/twilightZone
                     5169408   68508 5100754     2%    /twilightZone

If a "sparse" zone isn't desired, use "create -b" instead of "create" in zonecfg above. This prevents the new zone from "inheriting" packages from the global zone. This is called a "whole root" configuration.

The zone has now been installed, but it won't show up in the default zoneadm list output until after the initial boot:

# zoneadm list -v
  ID NAME             STATUS         PATH
   0 global           running        /
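
To see the zone in its installed state before it has booted, include the -c flag to list configured zones as well (output omitted here):

# zoneadm list -cv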

Boot and Configure a Solaris Zone

Let's boot the zone and log in to its console with zoneadm and zlogin. The initial boot prompts for basic configuration information (language, locale, terminal type, hostname, name service, time zone, and root password):

# zoneadm -z twilightZone boot
# zlogin -C twilightZone
[Connected to zone 'twilightZone' console]
Loading smf(5) service descriptions: 1/108

twilightZone2 console login: root
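
To disconnect from the zone console when you are done, type the escape sequence ~. at the beginning of a line.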
