
CREATING A CONTAINER UNDER SOLARIS 10 (TEST ON APOLLO)

1. RESOURCE POOLS

a. Enable the "Resource Pools" facility

# pooladm
pooladm: couldn't open pools state file: Facility is not active
# pooladm -e
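
As a side note (not part of the original test), on Solaris 10 the resource pools facility is backed by the SMF service svc:/system/pools:default, so its state can also be checked with:

# svcs pools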

b. Default resource configuration

# pooladm
system apollo
string system.comment
int system.version 1
boolean system.bind-default true
int system.poold.pid 27663

pool pool_default
int pool.sys_id 0
boolean pool.active true
boolean pool.default true
int pool.importance 1
string pool.comment
pset pset_default

pset pset_default
int pset.sys_id -1
boolean pset.default true
uint pset.min 1
uint pset.max 65536
string pset.units population
uint pset.load 164
uint pset.size 1
string pset.comment

cpu
int cpu.sys_id 0
string cpu.comment
string cpu.status on-line

c. Create a resource pool "testfl_pool" that relies on the default pset
(processor set) pset_default (with more CPUs in the machine, a dedicated
pset could have been created).

# poolcfg -c "create pool testfl_pool"


poolcfg: cannot load configuration from /etc/pooladm.conf: No such file
or directory
# ls -al /etc/pooladm.conf
/etc/pooladm.conf: No such file or directory

Use the -s option of pooladm to save the active in-memory configuration to
/etc/pooladm.conf.

# pooladm -s
# poolcfg -c 'create pool testfl_pool'
# ls -al /etc/pooladm.conf
-rw-r--r-- 1 root root 1184 Feb 12 14:49 /etc/pooladm.conf
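
As noted in the comment above, on a machine with more CPUs a dedicated processor set could have been created instead of reusing pset_default. A hypothetical sketch (the pset name and CPU counts are illustrative, not from the original test):

# poolcfg -c 'create pset testfl_pset (uint pset.min = 1; uint pset.max = 2)'
# poolcfg -c 'create pool testfl_pool'
# poolcfg -c 'associate pool testfl_pool (pset testfl_pset)'
# pooladm -c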

d. Link the new pool testfl_pool to the default processor set (namely
pset_default)

global# poolcfg -c 'associate pool testfl_pool (pset pset_default)'

Activate the configuration

global# pooladm -c

Verification

global# pooladm

system apollo
string system.comment
int system.version 1
boolean system.bind-default true
int system.poold.pid 27663

pool testfl_pool
int pool.sys_id 1
boolean pool.active true
boolean pool.default false
int pool.importance 1
string pool.comment
pset pset_default

pool pool_default
int pool.sys_id 0
boolean pool.active true
boolean pool.default true
int pool.importance 1
string pool.comment
pset pset_default

pset pset_default
int pset.sys_id -1
boolean pset.default true
uint pset.min 1
uint pset.max 65536
string pset.units population
uint pset.load 153
uint pset.size 1
string pset.comment

cpu
int cpu.sys_id 0
string cpu.comment
string cpu.status on-line

2. CREATING THE ZONE ON THIS RESOURCE POOL

a. Configuration

Create the directory in the global zone that will contain the local zone

global# mkdir /util/zones/testfl_zone

Configure it…
global# zonecfg -z testfl_zone
testfl_zone: No such zone configured
Use 'create' to begin configuring a new zone.

Create…

zonecfg:testfl_zone> create

Associate the zone with a file system

zonecfg:testfl_zone> set zonepath=/util/zones/testfl_zone

The zone will boot again whenever apollo reboots

zonecfg:testfl_zone> set autoboot=true

Networking…

zonecfg:testfl_zone> add net


zonecfg:testfl_zone:net> set address=10.128.161.21
zonecfg:testfl_zone:net> set physical=hme0
zonecfg:testfl_zone:net> end

Associate this zone with the resource pool defined earlier

zonecfg:testfl_zone> set pool=testfl_pool

Verify and commit

zonecfg:testfl_zone> verify
zonecfg:testfl_zone> commit
zonecfg:testfl_zone> exit

b. Installation

global# zoneadm -z testfl_zone install


/util/zones/testfl_zone must not be group readable.
/util/zones/testfl_zone must not be group executable.
/util/zones/testfl_zone must not be world readable.
/util/zones/testfl_zone must not be world executable.
could not verify zonepath /util/zones/testfl_zone because of the above
errors.
zoneadm: zone testfl_zone failed to verify

Fix the problem…

global# chmod 700 /util/zones/testfl_zone

Try again…

global# zoneadm -z testfl_zone install


Preparing to install zone <testfl_zone>.
Creating list of files to copy from the global zone.
Copying <32617> files to the zone.
Initializing zone product registry.
Determining zone package initialization order.
Preparing to initialize <1173> packages on the zone.
Initialized <1173> packages on zone.
Zone <testfl_zone> is initialized.
The file </util/zones/testfl_zone/root/var/sadm/system/logs/install_log>
contains a log of the zone installation.

Note: this step takes roughly 30 minutes!

Booting the zone:

global# zoneadm -z testfl_zone boot


zoneadm: zone 'testfl_zone': WARNING: hme0:1: no matching subnet found in
netmasks(4) for 10.128.161.21; using default of 255.0.0.0.

Changes to the network configuration are visible:

global# ifconfig -a
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
inet 127.0.0.1 netmask ff000000
lo0:1: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
zone testfl_zone
inet 127.0.0.1 netmask ff000000
hme0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
inet 10.128.161.20 netmask ffffff00 broadcast 10.128.161.255
ether 0:3:ba:37:d8:10
hme0:1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
zone testfl_zone
inet 10.128.161.21 netmask ff000000 broadcast 10.255.255.255
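
The ff000000 netmask on hme0:1 comes from the missing netmasks(4) entry reported in the boot warning. Adding the subnet to /etc/netmasks in the global zone (a sketch, assuming a /24 network) avoids this on the next boot:

global# echo '10.128.161.0 255.255.255.0' >> /etc/netmasks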

c. Connecting to the zone
# zlogin testfl_zone
[Connected to zone 'testfl_zone' pts/12]
Sun Microsystems Inc. SunOS 5.10 Generic January 2005
#
# df -k
Filesystem kbytes used avail capacity Mounted on
/ 8445089 4983646 3376993 60% /
/dev 8445089 4983646 3376993 60% /dev
/lib 8635837 3536170 5013309 42% /lib
/platform 8635837 3536170 5013309 42% /platform
/sbin 8635837 3536170 5013309 42% /sbin
/usr 8635837 3536170 5013309 42% /usr
proc 0 0 0 0% /proc
ctfs 0 0 0 0% /system/contract
swap 1630152 216 1629936 1% /etc/svc/volatile
mnttab 0 0 0 0% /etc/mnttab
fd 0 0 0 0% /dev/fd
swap 1629936 0 1629936 0% /tmp
swap 1629936 0 1629936 0% /var/run
# ifconfig -a
lo0:1: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
inet 127.0.0.1 netmask ff000000
hme0:1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
inet 10.128.161.21 netmask ff000000 broadcast 10.255.255.255

d. Finalizing the configuration

Connecting to the zone console:

# zlogin -C testfl_zone
Select a Language

0. English
1. French

Please make a choice (0 - 1), or press h or ? for help: 0


Select a Locale

0. English (C - 7-bit ASCII)


………………
8. Netherlands (ISO8859-15 - Euro)
9. Go Back to Previous Screen
Please make a choice (0 - 9), or press h or ? for help: 0
What type of terminal are you using?
1) ANSI Standard CRT
………………
12) X Terminal Emulator (xterms)
13) CDE Terminal Emulator (dtterm)
14) Other
Type the number of your choice and press Return: 12
Creating new rsa public/private host key pair
Creating new dsa public/private host key pair

Enter the hostname of the local zone:

- Host Name for hme0:1 --------------------------------------------------------------

Enter the host name which identifies this system on the network. The name must be
unique within your domain; creating a duplicate host name will cause problems on the
network after you install Solaris.

A host name must have at least one character; it can contain letters, digits, and
minus signs (-).

Host name for hme0:1 testfl-zone

-------------------------------------------------------------------------------------
F2_Continue F6_Help

Note: the name was corrected from testfl_zone to testfl-zone because the "_"
character is not allowed.

Confirming the hostname

- Confirm Information for hme0:1 ----------------------------------------------------

> Confirm the following information. If it is correct, press F2;


to change any information, press F4.

Host name: testfl-zone

-------------------------------------------------------------------------------------
F2_Continue F4_Change F6_Help

Configuring the security policy + confirmation

- Configure Security Policy: --------------------------------------------------------

Specify Yes if the system will use the Kerberos security mechanism.

Specify No if this system will use standard UNIX security.

Configure Kerberos Security


---------------------------
[ ] Yes
[X] No

-------------------------------------------------------------------------------------
F2_Continue F6_Help

Configuring the name service + confirmation

- Name Service ----------------------------------------------------------------------

On this screen you must provide name service information. Select the name
service that will be used by this system, or None if your system will either
not use a name service at all, or if it will use a name service not listed
here.

> To make a selection, use the arrow keys to highlight the option
and press Return to mark it [X].
Name service
------------
[ ] NIS+
[ ] NIS
[ ] DNS
[ ] LDAP
[X] None

-------------------------------------------------------------------------------------
F2_Continue F6_Help

Rebooting the zone:

rebooting system due to change(s) in /etc/default/init


[NOTICE: Zone rebooting]
SunOS Release 5.10 Version Generic_118822-25 64-bit
Copyright 1983-2005 Sun Microsystems, Inc. All rights reserved.
Use is subject to license terms.
Hostname: testfl-zone

testfl-zone console login: root


Password:
Last login: Tue Feb 13 09:15:01 on pts/12
Sun Microsystems Inc. SunOS 5.10 Generic January 2005
#

Disconnecting from the console

# exit

testfl-zone console login: ~.


Closed connection.
root@adminfire #

Connecting to the local zone

# zlogin testfl_zone
[Connected to zone 'testfl_zone' pts/12]
Last login: Tue Feb 13 09:27:47 on console
Sun Microsystems Inc. SunOS 5.10 Generic January 2005
# exit

[Connection to zone 'testfl_zone' pts/12 closed]

3. RETRIEVING INFORMATION

From the global zone:

# zonecfg -z testfl_zone info


zonepath: /util/zones/testfl_zone
autoboot: true
pool: testfl_pool
inherit-pkg-dir:
dir: /lib
inherit-pkg-dir:
dir: /platform
inherit-pkg-dir:
dir: /sbin
inherit-pkg-dir:
dir: /usr
net:
address: 10.128.161.21
physical: hme0

# zoneadm list
global
testfl_zone
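
For more detail (zone ID, state, and zonepath), the configured zones can also be listed verbosely; a small aside, not part of the original test:

# zoneadm list -cv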

4. SETTING UP FSS (Fair Share Scheduler)

a. Setting up FSS on the resource pool testfl_pool

# poolcfg -c 'modify pool testfl_pool (string pool.scheduler="FSS")'


# pooladm -c

b. Verification:

# pooladm

system apollo
string system.comment
int system.version 1
boolean system.bind-default true
int system.poold.pid 27663

pool testfl_pool
int pool.sys_id 1
boolean pool.active true
boolean pool.default false
string pool.scheduler FSS
int pool.importance 1
string pool.comment
pset pset_default
………
………
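
FSS arbitrates CPU between zones according to their shares, so each zone should be given a zone.cpu-shares value. A hedged sketch (the share value 10 is illustrative, not from the original test):

global# zonecfg -z testfl_zone
zonecfg:testfl_zone> add rctl
zonecfg:testfl_zone:rctl> set name=zone.cpu-shares
zonecfg:testfl_zone:rctl> add value (priv=privileged,limit=10,action=none)
zonecfg:testfl_zone:rctl> end
zonecfg:testfl_zone> commit
zonecfg:testfl_zone> exit

The shares of a running zone can also be adjusted on the fly, e.g. with prctl -n zone.cpu-shares -v 20 -r -i zone testfl_zone.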

5. CREATING ANOTHER ZONE ON THE RESOURCE POOL

a. Configuration

# mkdir /util/zones/testfl_zone2
# zonecfg -z testfl_zone2
testfl_zone2: No such zone configured
Use 'create' to begin configuring a new zone.
zonecfg:testfl_zone2> create
zonecfg:testfl_zone2> set zonepath=/util/zones/testfl_zone2
zonecfg:testfl_zone2> set autoboot=true
zonecfg:testfl_zone2> add net
zonecfg:testfl_zone2:net> set address=10.128.161.22
zonecfg:testfl_zone2:net> set physical=hme0
zonecfg:testfl_zone2:net> end
zonecfg:testfl_zone2> set pool=testfl_pool
zonecfg:testfl_zone2> verify
zonecfg:testfl_zone2> commit
zonecfg:testfl_zone2> exit

b. Installation

# zoneadm -z testfl_zone2 install


Preparing to install zone <testfl_zone2>.
Creating list of files to copy from the global zone.
Copying <32617> files to the zone.
Initializing zone product registry.
Determining zone package initialization order.
Preparing to initialize <1173> packages on the zone.
Initialized <1173> packages on zone.
Zone <testfl_zone2> is initialized.
The file </util/zones/testfl_zone2/root/var/sadm/system/logs/install_log>
contains a log of the zone installation.

# zoneadm -z testfl_zone2 boot


zoneadm: zone 'testfl_zone2': WARNING: hme0:2: no matching subnet found
in netmasks(4) for 10.128.161.22; using default of 255.0.0.0.

# ifconfig -a
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu
8232 index 1
inet 127.0.0.1 netmask ff000000
lo0:1: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu
8232 index 1
zone testfl_zone
inet 127.0.0.1 netmask ff000000
lo0:2: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu
8232 index 1
zone testfl_zone2
inet 127.0.0.1 netmask ff000000
hme0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
inet 10.128.161.20 netmask ffffff00 broadcast 10.128.161.255
ether 0:3:ba:37:d8:10
hme0:1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index
2
zone testfl_zone
inet 10.128.161.21 netmask ff000000 broadcast 10.255.255.255
hme0:2: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index
2
zone testfl_zone2
inet 10.128.161.22 netmask ff000000 broadcast 10.255.255.255

c. Finalizing the configuration

Connecting to the zone console:

# zlogin -C testfl_zone2
Select a Language

0. English
1. French

Please make a choice (0 - 1), or press h or ? for help: 0


Select a Locale

0. English (C - 7-bit ASCII)


………………
8. Netherlands (ISO8859-15 - Euro)
9. Go Back to Previous Screen

Please make a choice (0 - 9), or press h or ? for help: 0


What type of terminal are you using?
1) ANSI Standard CRT
………………
12) X Terminal Emulator (xterms)
13) CDE Terminal Emulator (dtterm)
14) Other
Type the number of your choice and press Return: 12
Creating new rsa public/private host key pair
Creating new dsa public/private host key pair

Enter the hostname of the local zone:

- Host Name for hme0:2 --------------------------------------------------------------

Enter the host name which identifies this system on the network. The name must be
unique within your domain; creating a duplicate host name will cause problems on the
network after you install Solaris.

A host name must have at least one character; it can contain letters, digits, and
minus signs (-).

Host name for hme0:2 testfl-zone2

-------------------------------------------------------------------------------------
F2_Continue F6_Help

Confirming the hostname

- Confirm Information for hme0:2 ----------------------------------------------------

> Confirm the following information. If it is correct, press F2;


to change any information, press F4.

Host name: testfl-zone2

-------------------------------------------------------------------------------------
F2_Continue F4_Change F6_Help

Configuring the security policy + confirmation

- Configure Security Policy: --------------------------------------------------------

Specify Yes if the system will use the Kerberos security mechanism.

Specify No if this system will use standard UNIX security.

Configure Kerberos Security


---------------------------
[ ] Yes
[X] No

-------------------------------------------------------------------------------------
F2_Continue F6_Help

Configuring the name service + confirmation

- Name Service ----------------------------------------------------------------------

On this screen you must provide name service information. Select the name
service that will be used by this system, or None if your system will either
not use a name service at all, or if it will use a name service not listed
here.

> To make a selection, use the arrow keys to highlight the option
and press Return to mark it [X].

Name service
------------
[ ] NIS+
[ ] NIS
[ ] DNS
[ ] LDAP
[X] None

-------------------------------------------------------------------------------------
F2_Continue F6_Help

Rebooting the zone

rebooting system due to change(s) in /etc/default/init


[NOTICE: Zone rebooting]
SunOS Release 5.10 Version Generic_118822-25 64-bit
Copyright 1983-2005 Sun Microsystems, Inc. All rights reserved.
Use is subject to license terms.
Hostname: testfl-zone2

testfl-zone2 console login: root


Password:
Last login: Tue Feb 13 09:15:01 on pts/12
Sun Microsystems Inc. SunOS 5.10 Generic January 2005
#

Disconnecting from the console

# exit

testfl-zone2 console login: ~.


Closed connection.

Connecting to the local zone

# zlogin testfl_zone2
[Connected to zone 'testfl_zone2' pts/12]
Last login: Tue Feb 13 09:27:47 on console
Sun Microsystems Inc. SunOS 5.10 Generic January 2005
# exit

[Connection to zone 'testfl_zone2' pts/12 closed]

Creating file systems + directories in the global zone and configuring these
directories for the local zone

global# zoneadm -z testfl_zone halt


# zoneadm list
global
# newfs /dev/rdsk/c0t8d0s0
newfs: construct a new file system /dev/rdsk/c0t8d0s0: (y/n)? y
Warning: 4096 sector(s) in last cylinder unallocated
/dev/rdsk/c0t8d0s0: 8388608 sectors in 1366 cylinders of 48 tracks, 128 sect
ors
4096.0MB in 86 cyl groups (16 c/g, 48.00MB/g, 5824 i/g)
super-block backups (for fsck -F ufs -o b=#) at:
32, 98464, 196896, 295328, 393760, 492192, 590624, 689056, 787488, 885920,
7472672, 7571104, 7669536, 7767968, 7866400, 7964832, 8063264, 8161696,
8260128, 8358560

# newfs /dev/rdsk/c0t8d0s1
newfs: construct a new file system /dev/rdsk/c0t8d0s1: (y/n)? y
Warning: 4096 sector(s) in last cylinder unallocated
/dev/rdsk/c0t8d0s1: 8388608 sectors in 1366 cylinders of 48 tracks, 128 sect
ors
4096.0MB in 86 cyl groups (16 c/g, 48.00MB/g, 5824 i/g)
super-block backups (for fsck -F ufs -o b=#) at:
32, 98464, 196896, 295328, 393760, 492192, 590624, 689056, 787488, 885920,
7472672, 7571104, 7669536, 7767968, 7866400, 7964832, 8063264, 8161696,
8260128, 8358560
#

# zonecfg -z testfl_zone
zonecfg:testfl_zone> add fs
zonecfg:testfl_zone:fs> set dir=/app
zonecfg:testfl_zone:fs> set special=/dev/dsk/c0t8d0s1
zonecfg:testfl_zone:fs> set raw=/dev/rdsk/c0t8d0s1
zonecfg:testfl_zone:fs> set type=ufs
zonecfg:testfl_zone:fs> set options=logging
zonecfg:testfl_zone:fs> end
zonecfg:testfl_zone> verify
zonecfg:testfl_zone> commit
zonecfg:testfl_zone> exit
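
The fs resource added above is only mounted when the zone boots; a quick hedged check that /app then appears inside the zone:

global# zoneadm -z testfl_zone boot
global# zlogin testfl_zone df -k /app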
# zpool create -f storage_pool c0t9d0
cannot open 'c0t9d0': no such device in /dev/dsk
must be a full path or shorthand device name
# uname -a
SunOS testfl-zone 5.10 Generic_118833-17 sun4u sparc SUNW,Ultra-250
# exit

[Connection to zone 'testfl_zone' pts/1 closed]

The zpool create failed because it was run inside the local zone, where the
device is not visible. From the global zone it succeeds:
# zpool create -f storage_pool c0t9d0
# zpool list
NAME SIZE USED AVAIL CAP HEALTH ALTROOT
storage_pool 8.38G 51.5K 8.37G 0% ONLINE -
# mkdir /export/zfs
# cd /
# zfs create storage_pool/fs
# zfs list
NAME USED AVAIL REFER MOUNTPOINT
storage_pool 104K 8.24G 25.5K /storage_pool
storage_pool/fs 24.5K 8.24G 24.5K /storage_pool/fs
# zfs create storage_pool/fs/DATA1
# zfs create storage_pool/fs/DATA2
# zfs set quota=1G storage_pool/fs/DATA1
# zfs set quota=1G storage_pool/fs/DATA2

# zonecfg -z testfl_zone
zonecfg:testfl_zone> add dataset
zonecfg:testfl_zone:dataset> set name=storage_pool/fs/DATA1
zonecfg:testfl_zone:dataset> add dataset
usage:
add <resource-type>
(global scope)
add <property-name> <property-value>
(resource scope)
Add specified resource to configuration.
(The second "add dataset" was typed inside the dataset resource scope by
mistake, hence the usage message; the current resource must be ended first.)
zonecfg:testfl_zone:dataset> end
zonecfg:testfl_zone> verify
zonecfg:testfl_zone> commit
zonecfg:testfl_zone> add dataset
zonecfg:testfl_zone:dataset> set name=storage_pool/fs/DATA2
zonecfg:testfl_zone:dataset> end
zonecfg:testfl_zone> verify
zonecfg:testfl_zone> commit
zonecfg:testfl_zone> exit
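
After a zone reboot, the delegated datasets become manageable from inside the zone with the usual zfs commands; a hedged check:

global# zoneadm -z testfl_zone reboot
global# zlogin testfl_zone zfs list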

Creating the snapshots…

# zfs snapshot storage_pool/fs/DATA1@snapshotDATA1


# zfs snapshot storage_pool/fs/DATA2@snapshotDATA2

# zfs list -t snapshot


NAME USED AVAIL REFER MOUNTPOINT
storage_pool/fs/DATA1@snapshotDATA1 23.5K - 25.5K -
storage_pool/fs/DATA2@snapshotDATA2 23.5K - 25.5K -
The snapshots live under /storage_pool/fs/DATA1/.zfs/snapshot/snapshotDATA1 and
/storage_pool/fs/DATA2/.zfs/snapshot/snapshotDATA2.

Saving the snapshot to a file (yes, that is possible).


# zfs send storage_pool/fs/DATA1@snapshotDATA1 > zfs_send_snap_data1
# ls -altr
total 36
drwxr-xr-x 42 root sys 1024 Mar 2 13:48 ..
drwxrwxrwt 2 root sys 512 Mar 2 16:29 .
-rw-r--r-- 1 root root 16056 Mar 2 16:29 zfs_send_snap_data1
# file *
zfs_send_snap_data1: data
# zfs list -t snapshot
NAME USED AVAIL REFER MOUNTPOINT
storage_pool/fs/DATA1@snapshotDATA1 23.5K - 25.5K -
storage_pool/fs/DATA2@snapshotDATA2 23.5K - 25.5K -
# zfs destroy storage_pool/fs/DATA1@snapshotDATA1
# zfs list -t snapshot
NAME USED AVAIL REFER MOUNTPOINT
storage_pool/fs/DATA2@snapshotDATA2 23.5K - 25.5K -
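
Besides full streams, zfs send -i can produce an incremental stream between two snapshots, which is much smaller; a hypothetical sketch (the snap2 snapshot name is illustrative, not from the original test):

# zfs snapshot storage_pool/fs/DATA2@snap2
# zfs send -i snapshotDATA2 storage_pool/fs/DATA2@snap2 > zfs_send_incr_data2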

Restoring the snapshot from the file (note: you restore a file system from a
snapshot; you do not restore a snapshot from a snapshot!)

Unmount the ZFS file system:
# umount <mount_point>
Remount it with ZFS:
# zfs mount storage_pool/fs/DATA1
Unmount it again:
# zfs umount storage_pool/fs/DATA1

# zfs receive storage_pool/fs/DATA1@today < ./zfs_send_snap_data1


cannot receive full stream: destination filesystem storage_pool/fs/DATA1 already
exists

# zfs destroy storage_pool/fs/DATA1


cannot destroy 'storage_pool/fs/DATA1': permission denied

The destroy is denied because we are still inside the local zone; exit back to
the global zone:

# exit
# exit

[Connection to zone 'testfl_zone' pts/1 closed]


# uname -a
SunOS apollo 5.10 Generic_118833-17 sun4u sparc SUNW,Ultra-250

# zfs destroy storage_pool/fs/DATA1


# cd /export/zones/*
# ls
dev root
# cd root/var/tmp
# ls
zfs_send_snap_data1
# zfs receive storage_pool/fs/DATA1@today < ./zfs_send_snap_data1
-> The received FS is mounted in the global zone. It must be unmounted there
before it can be remounted in the local zone!

6. BACKUP FROM A SNAPSHOT + RESTORE IN THE GLOBAL ZONE

Creating a ZFS for the global zone


# zfs create storage_pool/fs/FS_ZONE_GLOBALE
# zfs list
NAME USED AVAIL REFER MOUNTPOINT
….
storage_pool/fs/FS_ZONE_GLOBALE 24.5K 8.24G 24.5K /storage_pool/fs/FS_ZONE_GLOBALE
….
Snapshot of this ZFS
# zfs snapshot storage_pool/fs/FS_ZONE_GLOBALE@today
# zfs list -t snapshot
NAME USED AVAIL REFER MOUNTPOINT
……
storage_pool/fs/FS_ZONE_GLOBALE@today 0 - 26.5K -

Saving to a file


# zfs send storage_pool/fs/FS_ZONE_GLOBALE@today > /var/tmp/FS_ZONE_GLOBALE@today
# ls -al /var/tmp/FS*
-rw-r--r-- 1 root root 17080 Mar 5 09:00 /var/tmp/FS_ZONE_GLOBALE@today
#
Restore the snapshot into a new file system, but with a different name!
# zfs receive storage_pool/fs/FS_ZONE_RESTORE@today < ./FS_ZONE_GLOBALE@today
Here is what it created:
# zfs list
NAME USED AVAIL REFER MOUNTPOINT
…..
storage_pool/fs/FS_ZONE_GLOBALE 51K 8.24G 27.5K /storage_pool/fs/FS_ZONE_GLOBALE
storage_pool/fs/FS_ZONE_GLOBALE@today 23.5K - 26.5K -
storage_pool/fs/FS_ZONE_RESTORE 26.5K 8.24G 26.5K /storage_pool/fs/FS_ZONE_RESTORE
storage_pool/fs/FS_ZONE_RESTORE@today 0 - 26.5K -
Rename the original ZFS
# zfs rename storage_pool/fs/FS_ZONE_GLOBALE storage_pool/fs/FS_ZONE_GLOBALE.old
cannot unmount '/storage_pool/fs/FS_ZONE_GLOBALE': Device busy
Unmount it!
# zfs umount /storage_pool/fs/FS_ZONE_GLOBALE
Rename it:
# zfs rename storage_pool/fs/FS_ZONE_GLOBALE storage_pool/fs/FS_ZONE_GLOBALE.old
Note that the name of the associated snapshot was changed as well!
# zfs list
NAME USED AVAIL REFER MOUNTPOINT

storage_pool/fs/FS_ZONE_GLOBALE.old 51K 8.24G 27.5K /storage_pool/fs/FS_ZONE_GLOBALE.old
storage_pool/fs/FS_ZONE_GLOBALE.old@today 23.5K - 26.5K -
…..
Replace the FS
# zfs rename storage_pool/fs/FS_ZONE_RESTORE storage_pool/fs/FS_ZONE_GLOBALE
Verify…
# zfs list
NAME USED AVAIL REFER MOUNTPOINT
…….
storage_pool/fs/FS_ZONE_GLOBALE 26.5K 8.24G 26.5K /storage_pool/fs/FS_ZONE_GLOBALE
storage_pool/fs/FS_ZONE_GLOBALE@today 0 - 26.5K -
storage_pool/fs/FS_ZONE_GLOBALE.old 51K 8.24G 27.5K /storage_pool/fs/FS_ZONE_GLOBALE.old
storage_pool/fs/FS_ZONE_GLOBALE.old@today 23.5K - 26.5K -
……
We do indeed get our original data back!

7. APPENDIX

a. Clean shutdown of a local zone


global# zlogin <local zone name> shutdown
or
global# zlogin <local zone name> init 0

b. (Somewhat less) clean shutdown of a local zone

global# zoneadm -z <local zone name> halt
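
A halted zone can be started again with:

global# zoneadm -z <local zone name> boot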

c. Listing the zones on a machine


global# zoneadm list

d. Detailed configuration of a zone


global# zonecfg -z <local zone name> info

Example:
root@udksepp1 # zonecfg -z udkserc1 info
zonepath: /export/zones/udkserc1
autoboot: true
pool: pool_default
fs:
dir: /var
special: /dev/md/dsk/d140
raw: /dev/md/rdsk/d140
type: ufs
options: [logging]
fs:
dir: /app
special: /dev/md/dsk/d150
raw: /dev/md/rdsk/d150
type: ufs
options: [logging]
net:
address: 10.128.25.23
physical: ce0
attr:
name: comment
type: string
value: "Sesame - Zone udkserc1"

e. Making the snapshot directory visible

# zfs get all storage_pool/fs/FS_ZONE_GLOBALE


…..
storage_pool/fs/FS_ZONE_GLOBALE snapdir hidden default
…..
# zfs set snapdir=visible storage_pool/fs/FS_ZONE_GLOBALE
….
storage_pool/fs/FS_ZONE_GLOBALE snapdir visible local
…..
# pwd
/storage_pool/fs/FS_ZONE_GLOBALE
# ls -al
total 10
drwxr-xr-x 2 root sys 4 Mar 5 08:34 .
drwxr-xr-x 4 root sys 4 Mar 5 09:19 ..
dr-xr-xr-x 3 root root 3 Mar 5 08:30 .zfs
-r--r--r-- 1 root root 75 Mar 5 08:33 0.dbf
-r--r--r-- 1 root root 372 Mar 5 08:34 1.dbf
