Sunday, February 12, 2017

CVM and CFS-Basic



Cluster Volume Manager (CVM)

====================


CVM is an extension of Veritas Volume Manager, the industry-standard storage virtualization platform. CVM extends the concepts of VxVM across multiple nodes. Each node recognizes the same logical volume layout, and more importantly, the same state of all volume resources.
CVM supports performance-enhancing capabilities, such as striping, mirroring, and mirror break-off (snapshot) for off-host backup. You can use standard VxVM commands from one node in the cluster to manage all storage. All other nodes immediately recognize any changes in disk group and volume configuration with no user interaction.

CVM architecture
==============
CVM is designed with a "master and slave" architecture. One node in the cluster acts as the configuration master for logical volume management, and all other nodes are slaves. Any node can take over as master if the existing master fails. The CVM master exists on a per-cluster basis and uses GAB and LLT to transport its configuration data.

Just as with VxVM, the Volume Manager configuration daemon, vxconfigd, maintains the configuration of logical volumes. This daemon handles changes to the volumes by updating the operating system at the kernel level. For example, if a mirror of a volume fails, the mirror detaches from the volume and vxconfigd determines the proper course of action, updates the new volume layout, and informs the kernel of a new volume layout. CVM extends this behavior across multiple nodes and propagates volume changes to the master vxconfigd.

Note:
You must perform operator-initiated changes on the master node.
The vxconfigd process on the master pushes these changes out to slave vxconfigd processes, each of which updates the local kernel. The kernel module for CVM is kmsg.
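
To see which node currently holds the CVM master role, query the cluster mode from any node (sample output; node names will differ in your environment):

# vxdctl -c mode
mode: enabled: cluster active - SLAVE
master: node01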

CVM does not impose any write locking between nodes. Each node is free to update any area of the storage. All data integrity is the responsibility of the upper application. From an application perspective, standalone systems access logical volumes in the same way as CVM systems.


CVM imposes a "Uniform Shared Storage" model. All nodes must connect to the same disk sets for a given disk group. Any node unable to detect the entire set of physical disks for a given disk group cannot import the group. If a node loses contact with a specific disk, CVM excludes the node from participating in the use of that disk.

CVM communication
===============
CVM communication involves the following GAB ports:

Port w

Most CVM communication uses port w for vxconfigd communications. During any change in volume configuration, such as volume creation, plex attachment or detachment, and volume resizing, vxconfigd on the master node uses port w to share this information with slave nodes.

When all slaves use port w to acknowledge the new configuration as the next active configuration, the master updates this record to the disk headers in the VxVM private region for the disk group as the next configuration.

Port v

CVM uses port v for kernel-to-kernel communication. During specific configuration events, certain actions require coordination across all nodes. An example of such a synchronized event is a volume resize: CVM must ensure that all nodes see either the new size or the old size, but never a mix of sizes among members.

CVM also uses this port to obtain cluster membership from GAB and determine the status of other CVM members in the cluster.

Port u



CVM uses the Group Atomic Broadcast (GAB) transport mechanism of VCS, over GAB port u, to ship commands from the slave nodes to the master node.


CVM processes one node joining the cluster at a time. If multiple nodes want to join the cluster simultaneously, each node attempts to open port u in exclusive mode. (GAB allows only one node to open a port in exclusive mode.) As each node joins the cluster, GAB releases the port so that the next node can open it and join. When multiple nodes are waiting, each node retries at pseudo-random intervals until it wins the port.
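
To check which GAB ports currently have membership on a node, query GAB directly; the generation numbers and membership bitmaps below are illustrative only (port f, used by CFS, is covered later):

# gabconfig -a
GAB Port Memberships
===============================================================
Port a gen   a36e0003 membership 01
Port b gen   a36e0006 membership 01
Port f gen   a36e000f membership 01
Port h gen   a36e0009 membership 01
Port u gen   a36e000d membership 01
Port v gen   a36e000b membership 01
Port w gen   a36e000c membership 01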


CVM recovery
============
When a node leaves the cluster, GAB delivers the new membership to CVM on the remaining cluster nodes. The fencing driver (VXFEN) ensures that any split-brain scenario is resolved before CVM is notified. CVM then initiates recovery of mirrors of shared volumes that might have been left in an inconsistent state by the departing node.


For database files, when ODM is enabled with the SmartSync option, Oracle Resilvering handles recovery of mirrored volumes. For non-database files, recovery is optimized using Dirty Region Logging (DRL). The DRL is a map stored in a special-purpose VxVM subdisk and attached as an additional plex to the mirrored volume. When a DRL subdisk is created for a shared volume, its length is automatically sized to accommodate the number of cluster nodes. If the shared volume has Fast Mirror Resync (FlashSnap) enabled, the automatically created DCO (Data Change Object) log volume has DRL embedded in it. In the absence of DRL or DCO, CVM performs a full mirror resynchronization.
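
To confirm whether a shared volume has a DRL log or DCO volume attached (and will therefore avoid a full resynchronization), the volume layout can be inspected; the disk group and volume names here are placeholders:

# vxprint -g shared_dg -ht shared_vol

The output lists any DCO volume or DRL log plex associated with the volume.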


Configuration differences with VxVM
=======================
CVM configuration differs from VxVM configuration in the following areas (a brief command sketch follows the list):


  1. Configuration commands occur on the master node.
  2. Disk groups are created (could be private) and imported as shared disk groups.
  3. Disk groups are activated per node.
  4. Shared disk groups are automatically imported when CVM starts.
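
For example, points 2 and 3 above translate into commands like the following, run on the CVM master; the disk group, disk, and activation mode names are placeholders:

# vxdg -s init oradatadg disk01 disk02
# vxdg -g oradatadg set activation=sw
# vxdg list oradatadg

The flags field in the vxdg list output shows 'shared' for a shared disk group, and the activation mode (here sw, shared-write) is set on a per-node basis.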

Cluster File System (CFS)
==================



CFS enables you to simultaneously mount the same file system on multiple nodes and is an extension of the industry-standard Veritas File System. Unlike other file systems which send data through another node to the storage, CFS is a true SAN file system. All data traffic takes place over the storage area network (SAN), and only the metadata traverses the cluster interconnect.



In addition to using the SAN fabric for reading and writing data, CFS offers storage checkpoints and rollback for backup and recovery.




Access to cluster storage in typical SF Oracle RAC configurations uses CFS. Raw access to CVM volumes is also possible, but it is not part of a common configuration.

CFS architecture
===========

SF Oracle RAC uses CFS to manage a file system in a large database environment. Since CFS is an extension of VxFS, it operates in a similar fashion and caches metadata and data in memory (typically called buffer cache or vnode cache). CFS uses a distributed locking mechanism called Global Lock Manager (GLM) to ensure all nodes have a consistent view of the file system. GLM provides metadata and cache coherency across multiple nodes by coordinating access to file system metadata, such as inodes and free lists. The role of GLM is set on a per-file system basis to enable load balancing.


CFS involves a primary/secondary architecture. One of the nodes in the cluster is the primary node for a file system. Though any node can initiate an operation to create, delete, or resize data, the GLM master node carries out the actual operation. After creating a file, the GLM master node grants locks for data coherency across nodes. For example, if a node tries to modify a block in a file, it must obtain an exclusive lock to ensure that any other nodes caching the same file invalidate their cached copies.

SF Oracle RAC configurations minimize the use of GLM locking. Oracle RAC accesses the file system through the ODM interface and handles its own locking; only Oracle (and not GLM) buffers data and coordinates write operations to files. A single point of locking and buffering ensures maximum performance. GLM locking is only involved when metadata for a file changes, such as during create and resize operations.
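
To see which node currently holds the CFS primary role for a mounted cluster file system, or to move that role to the local node, the fsclustadm utility can be used (the mount point below is a placeholder):

# fsclustadm -v showprimary /oradata
# fsclustadm -v setprimary /oradata

The setprimary form makes the node where the command is run the new primary for that file system.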

CFS communication
=============
CFS uses port f for GLM lock and metadata communication. SF Oracle RAC configurations minimize the use of GLM locking except when metadata for a file changes.

CFS file system benefits
===============

Many features available in VxFS do not come into play in an SF Oracle RAC environment because ODM handles such features. CFS adds such features as high availability, consistency and scalability, and centralized management to VxFS. Using CFS in an SF Oracle RAC environment provides the following benefits:
Increased manageability, including easy creation and expansion of file systems

In the absence of CFS, you must provide Oracle with fixed-size partitions. With CFS, you can grow file systems dynamically to meet future requirements.

Less prone to user error

Raw partitions are not visible as file systems, and administrators can compromise them by mistakenly creating file systems over them. Nothing in Oracle prevents you from making such a mistake.

Data center consistency

If you have raw partitions, you are limited to a RAC-specific backup strategy. CFS enables you to implement your backup strategy across the data center.

CFS recovery
=========
The vxfsckd daemon is responsible for ensuring file system consistency when a node that was the primary for a shared file system crashes. If the local node is a secondary node for a given file system and a reconfiguration makes it the new primary, the kernel requests vxfsckd on the new primary node to initiate a replay of the intent log of the underlying volume. The vxfsckd daemon forks a special call to fsck that ignores the volume reservation protection normally respected by fsck and other VxFS utilities. The vxfsckd daemon can check several volumes at once if the node takes on the primary role for multiple file systems.

After a secondary node crash, no action is required to recover file system integrity. As with any crash on a file system, internal consistency of application data for applications running at the time of the crash is the responsibility of the applications.

Comparing raw volumes and CFS for data files
=============================
Keep these points in mind about raw volumes and CFS for data files:
If you use file-system-based data files, the file systems containing these files must be located on shared disks. Create the same file system mount point on each node.

If you use raw devices, such as VxVM volumes, set the permissions for the volumes to be owned permanently by the database account.

VxVM sets volume permissions on import. The VxVM volume, and any file system that is created in it, must be owned by the Oracle database user.
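
A minimal sketch of setting that ownership on a raw volume, assuming a disk group named oradg and a volume named ora_raw_vol (both placeholders):

# vxedit -g oradg set user=oracle group=dba mode=660 ora_raw_vol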

Friday, February 3, 2017

Symantec Cluster Server VCS Moving Resource from One Service Group to Another





Unfortunately our Veritas clusters are rather new, and when setting them up we didn't have a clear plan as to how things were to be laid out. This means that we have far too many Resource Groups, making the linking of the resources difficult. Some details I found covered parts of the job, but not everything, such as the clean-up and removal of the redundant RGs, so I decided to include that extra information here for my own reference later. First a note: if you have access to the GUI, this job is 100% easier, as it's just a move-and-copy affair. In this case we don't have that access due to resources, change requirements and firewall permissions.
Using the tools that VCS gives you, take a snapshot of the completed cluster configuration:
# cd /etc/VRTSvcs/conf/config; hacf -verify .
This will create a main.cmd file in the local directory. From this we collect the info we need about the RGs, so first list them:
# hagrp -state
#Group         Attribute             System     Value
cfs_backup     State                 server1    |ONLINE|
cfs_other      State                 server1    |ONLINE|
cvm            State                 server1    |ONLINE|
sg-ip          State                 server1    |OFFLINE|
sg-weblogic    State                 server1    |OFFLINE|
vxfen          State                 server1    |ONLINE|
> hagrp -resources cfs_backup
backup
In the example above we need to move cfs_backup and sg-ip into the sg-weblogic group, and we can see the resource 'backup' belongs to the group 'cfs_backup'. The next command shows the output we are going to be working with:
# grep "^hares.* backup*" main.cmd
hares -add backup CFSMount cfs_backup
hares -modify backup Critical 0
hares -modify backup MountPoint "/backup"
hares -modify backup BlockDevice "/dev/vx/dsk/backupvg/backupvol"
hares -local backup MountOpt
hares -modify backup MountOpt "cluster" -sys server1
hares -modify backup NodeList  server1
hares -modify backup CloneSkip no
hares -modify backup Policy -delete -keys
hares -modify backup Enabled 1
hares -modify backup_vol CVMDiskGroup backupvg
hares -modify vxfsckd ActivationMode  backupvg sw othervg sw -sys server1
hares -link backup backup_vol
# grep "^hares.* backup*" main.cmd > backup.cmd
Now that we have captured the info, we need to create a delete script:
# awk '/^hares -add backup/ {print "hares -delete "$3}' backup.cmd > backup_del.cmd
Once created, edit backup.cmd and replace the old group name 'cfs_backup' on the 'hares -add' line with the new group name 'sg-weblogic'.
# cat backup_del.cmd
hares -delete backup
Update the permissions of the new scripts:
# chmod a+x backup*.cmd
Make the Cluster ready for the edit:
# haconf -makerw
Then run the scripts:
# ./backup_del.cmd
# ./backup.cmd
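Before closing the configuration, it is worth confirming that the resource now belongs to the new group and that the resource link was recreated; the output shown is what we would expect in this example:
# hagrp -resources sg-weblogic
backup
# hares -dep backup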
In our case, we also did the same for the IP group, using the same method above. Once complete you will need to set the cluster back to read-only:
# haconf -dump -makero
Then we stop and start the cluster to make sure it all looks okay:
# hagrp -state
#Group         Attribute             System     Value
cfs_backup     State                 server1    |OFFLINE|
cfs_other      State                 server1    |ONLINE|
cvm            State                 server1    |ONLINE|
sg-ip          State                 server1    |OFFLINE|
sg-weblogic    State                 server1    |ONLINE|
vxfen          State                 server1    |ONLINE|
Once it looks okay, we need to clean up any left-over parts of the now-defunct RGs, so list their information and dependencies:
# hagrp -resources cfs_backup
<empty>
# hagrp -dep cfs_backup
#Parent      Child      Relationship
cfs_backup   cvm        online local firm
# hagrp -unlink cfs_backup cvm
Finally, delete the old RG:
# hagrp -delete cfs_backup

Remember to repeat this for any other RGs being cleaned up; in this case, the sg-ip group.

Symantec Cluster Server. VCS Moving a Mount Point




Today, after getting all the filesystem mount points configured and set up in a cluster, I was asked to change them. Annoying, but not something I was initially sure how to do. After a little bit of searching and clarification of the VCS commands, this is what I was able to figure out.

First lets take a look at the mount we are about to move:

# hares -display cfsmount1 | grep -i MountPoint
cfsmount1    ArgListValues         server1   MountPoint        1       /mount/path  BlockDevice     1       /dev/vx/dsk/mountdg/mount_vol MountOpt        1       cluster CloneSkip       1       no      Primary 1       server2        AMFMountType    1       vxfs
cfsmount1    ArgListValues         server2   MountPoint        1       /mount/path  BlockDevice     1       /dev/vx/dsk/mountdg/mount_vol MountOpt        1       cluster CloneSkip       1       no      Primary 1       server2        AMFMountType    1       vxfs

The plan is simply to move /mount/path to our new path of /mount. So first we need to unmount the file system from the nodes in the cluster:

# cfsumount /mount/path
  Unmounting...
  /mount/path got successfully unmounted from server1
  /mount/path got successfully unmounted from server2

Once this is done we need to update and clean up the filesystems and old/new paths:
# ls -dl /mount/path
drwxr-xr-x    2 oracle   dba             256 06 Jun 16:26 /mount/path
# ls -dl /mount
drwxrwxr-x    4 root   dba             256 08 Jun 14:41 /mount

In this example the path needs to be owned by the Oracle user:
# chown -R oracle:dba /mount

Then we are going to check that the old mount point is empty and delete it.
NOTE* Make sure you've unmounted the FS here, else you could destroy all your data. Ours is empty, a good sign that nothing is sitting under the mount point or that nothing is mounted.
NOTE** These filesystem updates need to be actioned on ALL the nodes in the cluster!
# ls -l /mount/path
total 0
# rmdir /mount/path
# ls -dl /mount
drwxrwxr-x    3 oracle   dba             256 14 Jun 10:41 /mount

Our new mount point looks good, so we can now make the required change:
# hares -modify cfsmount1 MountPoint "/mount"
# hares -display cfsmount1 | grep -i MountPoint
cfsmount1    ArgListValues         server1   MountPoint        1       /mount BlockDevice     1       /dev/vx/dsk/mountdg/mount_vol MountOpt        1       cluster CloneSkip       1       no      Primary 1       server2        AMFMountType    1       vxfs
cfsmount1    ArgListValues         server2   MountPoint        1       /mount BlockDevice     1       /dev/vx/dsk/mountdg/mount_vol MountOpt        1       cluster CloneSkip       1       no      Primary 1       server2        AMFMountType    1       vxfs
cfsmount1    MountPoint            global     /mount

The information above looks perfect, so go ahead and remount it all.
# cfsmount /mount
  Mounting...
  [/dev/vx/dsk/mountdg/mount_vol] mounted successfully at /mount on server1
  [/dev/vx/dsk/mountdg/mount_vol] mounted successfully at /mount on server2
# df -g /mount
Filesystem    GB blocks      Free %Used    Iused %Iused Mounted on
/dev/vx/dsk/mountdg/mount_vol     99.00     92.66    7%      339     1% /mount

How to expand Solaris Volume Manager filesystem which is exported to zones from Global Zone

How to expand Solaris Volume Manager filesystem which is exported to zones from Global Zone
OK, it's been a long time with no updates on the blog... Anyway, today I have some good information on: "How to expand a Solaris Volume Manager (metadevice) filesystem which is exported to zones from the global zone".

I have a unique system which has a slightly different configuration than the other systems: a SPARC Enterprise M4000 with 2 zones running on it. Here is the zone configuration example for one of them.

# zonecfg -z zone1 info
zonename: zone1
zonepath: /zone1/zonepath
brand: native
autoboot: true
bootargs:
pool: oracpu_pool
limitpriv: default,dtrace_proc,dtrace_user
scheduling-class:
ip-type: shared
[cpu-shares: 32]
fs:
dir: /oracle
special: /dev/md/dsk/d56
raw: /dev/md/rdsk/d56
type: ufs
options: []
fs:
dir: /oradata1
special: /dev/md/dsk/d59
raw: /dev/md/rdsk/d59
type: ufs
options: []
fs:
dir: /oradata2
special: /dev/md/dsk/d62
raw: /dev/md/rdsk/d62
type: ufs
options: []
fs:
dir: /oradata3
special: /dev/md/dsk/d63
raw: /dev/md/rdsk/d63
type: ufs
options: []
[...]

OK, so here you can see that I have metadevices which are exported to the zone from the global zone. I need to expand one of the filesystems, say /oradata1, by XXG, so how am I going to perform this? Take a look at the procedure below to understand how we can do it.

global:/
# zonecfg -z zone1 info fs dir=/oradata1
fs:
dir: /oradata1
special: /dev/md/dsk/d59
raw: /dev/md/rdsk/d59
type: ufs
options: []
global:/
# metattach d59 Storage_LUN_ID

global:/
# growfs -M /zone1/zonepath/root/oradata1 /dev/md/rdsk/d59

All of these operations need to be performed from the global zone.
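
To verify the grow, a quick check from the global zone should be enough; the zone and metadevice names match the example above, and df is run inside the zone via zlogin:

global:/
# metastat d59 | grep -i size
global:/
# zlogin zone1 df -k /oradata1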

How to Increase Cluster File System - CVM Cluster



In this post I would like to discuss and demonstrate how to increase a Cluster File System (CFS) in a CVM environment.

    - The requirement is to increase the filesystem by 1TB.

As a best practice in a CVM/CFS environment, the volume should be grown on the CVM master and the file system should be grown on the CFS primary. Please note that the CVM master and the CFS primary can be two different nodes.

Just off topic, here is how to grow the volume on the CVM master and then grow the filesystem on the CFS primary.

To increase the size of the file system, execute the following on the CVM master:

# vxassist -g shared_disk_group growto volume_name newlength

And then on the CFS primary node, execute:

# fsadm -F vxfs -b newsize -r device_name mount_point

On the other hand, if the system is both the CVM master and the CFS primary, then the "vxresize" command can be executed on that system without any issues.

The above statement is true below VERITAS version 3.5, but from version 3.5 onwards vxresize can be run on any node within the cluster, provided the attribute named 'HacliUserLevel' is set to the value "COMMANDROOT". The default value of this attribute is 'NONE', which prevents users from running the vxresize command from all nodes in the cluster.

In my case it is set to "COMMANDROOT"

# /opt/VRTSvcs/bin/haclus -display | grep Hacli
HacliUserLevel         COMMANDROOT

But if the value of the 'HacliUserLevel' attribute is set to "NONE", then the following method can be used to change it.

To change the value to COMMANDROOT, run:

# /opt/VRTSvcs/bin/haconf -makerw
# /opt/VRTSvcs/bin/haclus -modify HacliUserLevel COMMANDROOT
# /opt/VRTSvcs/bin/haconf -dump -makero

This change allows vxresize to call the hacli command, which in turn allows any command to be run on any system within the cluster.
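
The new value can then be confirmed with (sample output):

# /opt/VRTSvcs/bin/haclus -value HacliUserLevel
COMMANDROOT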

Okay, back to our requirement -

So first let's find out the CFS primary node -

# fsclustadm -v showprimary /u01-zz/oradata
XXXXX

Now let's find out the master CVM node in the cluster by -

# vxdctl -c mode
mode: enabled: cluster active - SLAVE
master: XXXXX

Since I need to increase the filesystem by 1TB, I'm first checking whether the disk group has enough space in it.

# vxassist -g dde1ZZGO0 maxsize
Maximum volume size: 2136743936 (1043332Mb)

Well, we have enough space available under disk group dde1ZZGO0.

Let's increase the FS by 1TB now:

# vxresize -b -F vxfs -g dde1ZZGO0 ZZGO0_v0 +1000g

BEFORE:

# df -kh /u01-zz/oradata
Filesystem             size   used  avail capacity  Mounted on
/dev/vx/dsk/dde1ZZGO0/ZZGO0_v0
                       5.0T   5.0T    17G   100%    /u01-zz/oradata

AFTER:

# df -kh /u01-zz/oradata
Filesystem             size   used  avail capacity  Mounted on
/dev/vx/dsk/dde1ZZGO0/ZZGO0_v0
                       6.0T   5.0T   954G    85%    /u01-zz/oradata

Changing the CVM master manually




You can change the CVM master manually from one node in the cluster to another node, while the cluster is online. CVM migrates the master node, and reconfigures the cluster.

Symantec recommends that you switch the master when the cluster is not handling VxVM configuration changes or cluster reconfiguration operations. In most cases, CVM aborts the operation to change the master, if CVM detects that any configuration changes are occurring in the VxVM or the cluster. After the master change operation starts reconfiguring the cluster, other commands that require configuration changes will fail.


To change the master online, the cluster must be running cluster protocol version 100 or greater.
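
To check the protocol version your cluster is currently running (sample output; the number varies by release):

# vxdctl protocolversion
Cluster running at protocol 130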

To change the CVM master manually

To view the current master, use one of the following commands:

# vxclustadm nidmap
Name              CVM Nid    CM Nid    State
system01            0        0         Joined: Slave
system02            1          1         Joined: Master
# vxdctl -c mode
mode: enabled: cluster active - MASTER
master: system02
In this example, the CVM master is system02.

From any node on the cluster, run the following command to change the CVM master:

# vxclustadm setmaster nodename
where nodename specifies the name of the new CVM master.

The following example shows changing the master on a cluster from system02 to system01:

# vxclustadm setmaster system01
To monitor the master switching, use the following command:

# vxclustadm -v nodestate
 state: cluster member
        nodeId=0
        masterId=0
        neighborId=1
        members[0]=0xf
        joiners[0]=0x0
        leavers[0]=0x0
        members[1]=0x0
        joiners[1]=0x0
        leavers[1]=0x0
        reconfig_seqnum=0x9f9767
        vxfen=off
state: master switching in progress
reconfig: vxconfigd in join
In this example, the state indicates that master is being changed.

To verify whether the master has successfully changed, use one of the following commands:

# vxclustadm nidmap
Name              CVM Nid    CM Nid    State
system01            0        0         Joined: Master
system02            1         1         Joined: Slave
# vxdctl -c mode
mode: enabled: cluster active - MASTER

master: system01

Migrating VCS clusters on Solaris 10 systems - Solaris 11 branded zones

Configuring VCS/SF in a branded zone environment

You must perform the following steps on the Solaris 11 systems.

To configure VCS/SF in a branded zone environment
1.       Install VCS, SF, or SFHA as required in the global zone.
See the Symantec Cluster Server Installation Guide.
See the Symantec Storage Foundation and High Availability Installation Guide.
2.       Configure a solaris10 branded zone. For example, this step configures a solaris10 zone.
•        Run the following command in the global zone as the global administrator:
# zonecfg -z sol10-zone
sol10-zone: No such zone configured
Use 'create' to begin configuring a new zone.
•        Create the solaris10 branded zone using the SYSsolaris10 template:
zonecfg:sol10-zone> create -t SYSsolaris10
•        Set the zone path. For example:
zonecfg:sol10-zone> set zonepath=/zones/sol10-zone
Note that the zone root for the branded zone can be either on local storage or on shared storage (VxFS, UFS, or ZFS).
•        Add a virtual network interface:
zonecfg:sol10-zone> add net
zonecfg:sol10-zone:net> set physical=net1
zonecfg:sol10-zone:net> set address=192.168.1.20
zonecfg:sol10-zone:net> end
•        Verify the zone configuration for the zone and exit the zonecfg command prompt:
zonecfg:sol10-zone> verify
zonecfg:sol10-zone> exit
The zone configuration is committed.
3.       Verify the zone information for the solaris10 zone you configured.
# zonecfg -z sol10-zone info
Review the output to make sure the configuration is correct.
4.       Install the solaris10 zone that you created using the flash archive you created previously.
See Preparing to migrate a VCS cluster.
# zoneadm -z sol10-zone install -p -a /tmp/sol10image.flar
After the zone installation is complete, run the following command to list the installed zones and to verify the status of the zones.
# zoneadm list -iv
5.       Run the p2v conversion on the installed zone image:
# /usr/lib/brand/solaris10/p2v sol10-zone
6.       Boot the solaris10 branded zone:
# zoneadm -z sol10-zone boot
After the zone booting is complete, run the following command to verify the status of the zones.
# zoneadm list -v
7.       Configure the zone with the following command:
# zlogin -C sol10-zone
8.       Install VCS in the branded zone:
•        Install only the following VCS 6.1 packages:
•        VRTSperl
•        VRTSvlic
•        VRTSvcs
•        VRTSvcsag
9.       If you configured Oracle to run in the branded zone, then install the VCS agent for Oracle packages (VRTSvcsea) and the patch in the branded zone.
See the Symantec Cluster Server Agent for Oracle Installation and Configuration Guide for installation instructions.
10.     For ODM support, install the following additional packages and patches in the branded zone:
•        Install the following 6.1 packages:
•        VRTSvlic
•        VRTSvxfs
•        VRTSodm
11.     If using ODM support, relink Oracle ODM library in solaris10 branded zones:
•        Log into Oracle instance.
•        Relink Oracle ODM library.
If you are running Oracle 10gR2:
$ rm $ORACLE_HOME/lib/libodm10.so
$ ln -s /opt/VRTSodm/lib/sparcv9/libodm.so \
$ORACLE_HOME/lib/libodm10.so
If you are running Oracle 11gR1:
$ rm $ORACLE_HOME/lib/libodm11.so
$ ln -s /opt/VRTSodm/lib/sparcv9/libodm.so \ 
$ORACLE_HOME/lib/libodm11.so
•        To enable ODM inside the branded zone, first enable ODM in the global zone using the SMF scripts as described below:
global# svcadm enable vxfsldlic
global# svcadm enable vxodm
To use ODM inside the branded zone, export the /dev/odm, /dev/fdd, and /dev/vxportal devices and the /etc/vx/licenses/lic directory:
global# zoneadm -z myzone halt 
global# zonecfg -z myzone 
zonecfg:myzone> add device 
zonecfg:myzone:device> set match=/dev/vxportal 
zonecfg:myzone:device> end 
zonecfg:myzone> add device 
zonecfg:myzone:device> set match=/dev/fdd 
zonecfg:myzone:device> end 
zonecfg:myzone> add device 
zonecfg:myzone:device> set match=/dev/odm 
zonecfg:myzone:device> end 
zonecfg:myzone> add device 
zonecfg:myzone:device> set match=/dev/vx/rdsk/dg_name/vol_name 
zonecfg:myzone:device> end 
zonecfg:myzone> add device 
zonecfg:myzone:device> set match=/dev/vx/dsk/dg_name/vol_name 
zonecfg:myzone:device> end 
zonecfg:myzone> add fs 
zonecfg:myzone:fs> set dir=/etc/vx/licenses/lic 
zonecfg:myzone:fs> set special=/etc/vx/licenses/lic 
zonecfg:myzone:fs> set type=lofs 
zonecfg:myzone:fs> end 
zonecfg:myzone> set fs-allowed=vxfs,odm 
zonecfg:myzone> verify 
zonecfg:myzone> commit 
zonecfg:myzone> exit 
global# zoneadm -z myzone boot
Configure the resources in the VCS configuration file in the global zone. The following example shows the VCS configuration when VxVM volumes are exported to zones via the zone configuration file:
group zone-grp (
        SystemList = { vcs_sol1 = 0, vcs_sol2 = 1 }
        ContainerInfo @vcs_sol1 = { Name = sol10-zone, Type = Zone, Enabled = 1 }
        ContainerInfo @vcs_sol2 = { Name = sol10-zone, Type = Zone, Enabled = 1 }
        AutoStartList = { vcs_sol1 }
        Administrators = { "z_z1@vcs_lzs@vcs_sol2.symantecexample.com" }
        )

        DiskGroup zone-oracle-dg (
                DiskGroup = zone_ora_dg
                )

        Volume zone-oracle-vol (
                Volume = zone_ora_vol
                DiskGroup = zone_ora_dg
                )

        Netlsnr zone-listener (
                Owner = oracle
                Home = "/u01/oraHome"
                )

        Oracle zone-oracle (
                Owner = oracle
                Home = "/u01/oraHome"
                Sid = test1
                )

        Zone zone-res (
                )

        zone-res requires zone-oracle-vol
        zone-oracle-vol requires zone-oracle-dg
        zone-oracle requires zone-res
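
If main.cf is edited by hand in this way, verify the syntax from the configuration directory before restarting VCS:

# cd /etc/VRTSvcs/conf/config
# hacf -verify .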


Migrating VCS clusters on Solaris 10 systems


You can migrate VCS clusters that run on Solaris 10 systems to solaris10 branded zones on Solaris 11 systems. With branded zones, you can emulate a Solaris 10 operating environment as a Solaris 10 container running inside a Solaris 11 branded zone. This Solaris 10 non-global zone acts as a complete runtime environment for Solaris 10 applications on Solaris 11 systems. You can directly migrate an existing Solaris 10 system into a Solaris 10 container.
(Figure omitted: workflow to migrate a VCS cluster on Solaris 10 systems to branded zones on Solaris 11 systems.)

Assign the storage used for the VxVM volumes and VxFS file systems used by the application inside the Solaris 10 physical system to the new system. Mount the VxFS file systems using a loopback mount or a direct mount.