Sunday, November 6, 2016

Solaris OS patching on a VCS or SVM setup




Here is a step-by-step patching plan for a VCS cluster environment.


General Note:

1. Download the latest patch bundle from the Oracle support site.

2. Take a full OS backup.

3. Take screenshots of all configuration output as well as an Oracle Explorer report (save them to your local laptop).

4. Check server console access.

5. Check the server's hardware health (see the sketch after this list).

6. Collect application and database team contact details.
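
As an illustrative sketch (assuming standard Solaris 10 tooling), hardware and SVM health can be checked with commands such as:

[root@tpt01]# /usr/platform/`uname -i`/sbin/prtdiag -v | more

[root@tpt01]# fmadm faulty

[root@tpt01]# iostat -En | grep -i error

[root@tpt01]# metadb -i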


Pre-Requisite:

I. Download the patch bundle (10_Recommended.zip) from the Oracle support site and copy it to /var/tmp/ on the server.

II. Ensure sufficient free space on the /, /opt, and /var file systems (a quick check is shown below).
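
An illustrative check of all three at once:

[root@tpt01]# df -h / /opt /var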


Pre-Implementation Plan:

1. Take screenshots of all configuration output as well as an Oracle Explorer report (save them to your local laptop).

Here is a list of general commands to capture the current configuration -
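
(An illustrative set; adjust to your environment and save each output for later comparison:)

[root@tpt01]# df -h

[root@tpt01]# cat /etc/vfstab

[root@tpt01]# metastat -p

[root@tpt01]# metadb

[root@tpt01]# ifconfig -a

[root@tpt01]# netstat -rn

[root@tpt01]# hastatus -sum

[root@tpt01]# uname -a

[root@tpt01]# cat /etc/release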

2. Suppress the server's monitoring alerts.


These are the steps to patch VCS cluster servers:

Ensure the application and databases are brought down (if they are running), then check which node the cluster service groups are running on:


[root@tpt01]# hastatus -sum | grep -i online

B  ClusterService  tpt01  Y  N  ONLINE

B  nbu_sg          tpt01  Y  N  ONLINE

[root@tpt01]#

If the cluster service groups are running on the node being patched, switch them over to the other node. The following command fails over all running service groups to the other node and stops the cluster on the current node (the one where the command is executed):


[root@tpt01]# hastop -local -evacuate

As a precaution, reboot the server before starting the patching activity and check that it boots without any issues.


[root@tpt01]# sync;sync;sync

[root@tpt01]# init 6

Unzip the patch bundle:

[root@tpt01]# /usr/local/bin/unzip 10_Recommended.zip
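
You can also verify the archive's integrity by comparing its checksum with the value published on the Oracle support site (illustrative):

[root@tpt01]# digest -a md5 /var/tmp/10_Recommended.zip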

Move the VCS startup files aside (to prevent the cluster from starting during reboot):


[root@tpt01]# mv /etc/rc3.d/S99vcs /etc/rc3.d/back_S99vcs

[root@tpt01]# mv /etc/llthosts /etc/back_llthosts

[root@tpt01]# mv /etc/llttab /etc/back_llttab

[root@tpt01]# mv /etc/gabtab /etc/back_gabtab

Stop the cluster on the current node (the patching node):


[root@tpt01]# hastop -local
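
To confirm the VCS engine (had) is no longer running on this node (illustrative):

[root@tpt01]# ps -ef | grep -i had | grep -v grep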

Compare the three outputs below to identify the root disk and the root-mirror disk before detaching the secondary mirror:


[root@tpt01]# prtconf -pv | grep -i bootpath

bootpath: '/pci@9,600000/SUNW,qlc@2/fp@0,0/disk@w2100001862f7d137,0:a'

[root@tpt01]#

 

[root@tpt01]# metastat d10

d10: Mirror
    Submirror 0: d11
      State: Okay
    Submirror 1: d12
      State: Okay
    Pass: 1
    Read option: roundrobin (default)
    Write option: parallel (default)
    Size: 75505920 blocks (36 GB)

d11: Submirror of d10
    State: Okay
    Size: 75505920 blocks (36 GB)
    Stripe 0:
        Device     Start Block  Dbase  State  Reloc  Hot Spare
        c1t0d0s0             0  No     Okay   Yes

d12: Submirror of d10
    State: Okay
    Size: 75505920 blocks (36 GB)
    Stripe 0:
        Device     Start Block  Dbase  State  Reloc  Hot Spare
        c1t1d0s0             0  No     Okay   Yes

Device Relocation Information:
Device   Reloc  Device ID
c1t0d0   Yes    id1,ssd@n2000001862f7d137
c1t1d0   Yes    id1,ssd@n2000001862f7f113

[root@tpt01]#

 

[root@tpt01]# echo | format

Searching for disks...done

AVAILABLE DISK SELECTIONS:
    0. c1t0d0 <SUN146G cyl 14087 alt 2 hd 24 sec 848>
       /pci@9,600000/SUNW,qlc@2/fp@0,0/ssd@w2100001862f7d137,0
    1. c1t1d0 <SUN146G cyl 14087 alt 2 hd 24 sec 848>
       /pci@9,600000/SUNW,qlc@2/fp@0,0/ssd@w2100001862f7f113,0

Detach the secondary root-mirror disk from the root disk (this provides a fallback if the change is not successful). First validate which metadevices need to be detached; on this server only / and swap are on the root disk. (Also consider /var, /opt, and /export/home if those file systems are on separate metadevices.) To confirm the swap device name:


[root@tpt01]# swap -l

swapfile             dev    swaplo    blocks      free
/dev/md/dsk/d20      85,20      16  16790384  15817232

[root@tpt01]#

To confirm the root device name:


[root@tpt01]# df -h /

Filesystem           size  used  avail  capacity  Mounted on
/dev/md/dsk/d10       35G   29G   6.2G       83%  /

[root@tpt01]#

Detach the secondary submirrors from the root and swap mirrors:


[root@tpt01]# metadetach d10 d12

[root@tpt01]# metadetach d20 d22
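
After detaching, each mirror should report only one remaining submirror (illustrative check):

[root@tpt01]# metastat -p d10 d20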

Run fsck on the detached secondary submirror:


[root@tpt01]# fsck -y /dev/md/rdsk/d12

Mount the secondary disk on /mnt: 


[root@tpt01]# mount /dev/dsk/c1t1d0s0 /mnt

Take a backup of the system and vfstab files:


[root@tpt01]# cd /mnt/etc

[root@tpt01]# cp system system.29JAN2014

[root@tpt01]# cp vfstab vfstab.29JAN2014

Edit the vfstab on the mounted secondary disk so that the / and swap entries point to the physical device slices:


[root@tpt01]# vi vfstab

/dev/dsk/c1t1d0s0 /dev/rdsk/c1t1d0s0 / ufs 1 no -

/dev/dsk/c1t1d0s1 - - swap - no -
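
These lines typically replace metadevice entries such as the following (illustrative):

/dev/md/dsk/d10 /dev/md/rdsk/d10 / ufs 1 no -

/dev/md/dsk/d20 - - swap - no -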

Comment out the rootdev entry in the /mnt/etc/system file (so that the system can boot from the physical slice without the metadevice):


[root@tpt01]# vi system

*rootdev:/pseudo/md@0:0,10,blk

[root@tpt01]# sync;sync;sync

Unmount /mnt before booting the system from the secondary disk:


[root@tpt01]# umount /mnt

Shut down the server and bring it to the OBP prompt.

Note: It is good practice to test the secondary mirror disk. Boot the system from the secondary mirror disk to make sure it comes up properly:


[root@tpt01]# shutdown -i0 -g0 -y

ok> devalias

ok> boot disk1
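
Once the system is up from the secondary disk, the root file system should show the physical slice (for example /dev/dsk/c1t1d0s0) instead of the metadevice (illustrative check):

[root@tpt01]# df -h /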

Shut down the system again, then boot the primary disk into single-user mode for patching:


[root@tpt01]# shutdown -i0 -g0 -y

ok> boot -s
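
You can confirm the system came up in single-user mode (illustrative):

[root@tpt01]# who -r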

Go to the patch directory: 


[root@tpt01]# cd /var/tmp/patches

[root@tpt01]# cd 10_Recommended 

[root@tpt01]# ./installpatchset --s10patchset

 

Setup ………………………..

Once the patch installation is done, reboot the server:


[root@tpt01]# init 6

Do a health check and confirm the new patch level:


[root@tpt01]# uname -a
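
The kernel patch revision and release level can also be confirmed with (illustrative):

[root@tpt01]# cat /etc/release

[root@tpt01]# showrev -p | tail -5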

Restore the configuration files that were moved earlier:


[root@tpt01]# mv /etc/rc3.d/back_S99vcs /etc/rc3.d/S99vcs

[root@tpt01]# mv /etc/back_llthosts /etc/llthosts

[root@tpt01]# mv /etc/back_llttab /etc/llttab

[root@tpt01]# mv /etc/back_gabtab /etc/gabtab

Start the VCS cluster: 


[root@tpt01]# hastart
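
Verify that the node has rejoined the cluster and the service groups are in the expected state (illustrative):

[root@tpt01]# hastatus -sum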


Determine whether the upgrade was successful. After a few days of stable operation, reattach the submirrors:

[root@tpt01]# metattach d10 d12

[root@tpt01]# metattach d20 d22
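
The mirror resync progress can be monitored until it completes (illustrative):

[root@tpt01]# metastat d10 | grep -i resync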

If the patching is not successful, boot the server from the secondary disk to roll back. A dedicated post covers the procedure for reverting the OS patching.
