Veritas Cluster VCS Cheat Sheet





Concepts



VCS is built on three components: LLT, GAB, and VCS itself. LLT
handles low-level, kernel-to-kernel communication over the LAN heartbeat
links, GAB handles cluster membership and messaging between cluster
members, and VCS handles the management of services.



Once cluster members can communicate via LLT and GAB, VCS is started.

In the VCS configuration, each Cluster contains systems, Service

Groups, and Resources. Service Groups contain a list of systems

belonging to that group, a list of systems on which the Group should

be started, and Resources. A Resource is something controlled or

monitored by VCS, like network interfaces, logical IPs, mount points,

physical/logical disks, processes, files, etc. Each resource

corresponds to a VCS agent which actually handles VCS control over the

resource.



VCS configuration can be set either statically through a configuration

file, dynamically through the CLI, or both. LLT and GAB configurations

are primarily set through configuration files.



Configuration



VCS configuration is fairly simple. The three configurations to worry

about are LLT, GAB, and VCS resources.



LLT

LLT configuration requires two files: /etc/llttab and /etc/llthosts.

llttab contains information on node-id, cluster membership, and

heartbeat links. It should look like this:



# llttab -- low-latency transport configuration file



# this sets our node ID, must be unique in cluster

set-node 0



# set the heartbeat links

link hme1 /dev/hme:1 - ether - -

# link-lowpri is for public networks

link-lowpri hme0 /dev/hme:0 - ether - -



# set cluster number, must be unique

set-cluster 0



start



The "link" directive should only be used for private links.

"link-lowpri" is better suited to public networks used for heartbeats,

as it uses less bandwidth. VCS requires at least two heartbeat signals

(although one of these can be a communication disk) to function

without complaints.
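
For example, a cluster with two dedicated private heartbeat networks would
use two "link" lines in llttab (a sketch only; the second interface name,
qfe1, is hypothetical and follows the device pattern shown above):

# two private heartbeat links, no low-priority public link needed
link hme1 /dev/hme:1 - ether - -
link qfe1 /dev/qfe:1 - ether - -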



The "set-cluster" directive tells LLT which cluster to listen to. The

llttab needs to end in "start" to tell LLT to actually run.



The second file is /etc/llthosts. This file is just like /etc/hosts,
except that instead of mapping IP addresses to hostnames, it maps LLT
node numbers (as set in set-node) to hostnames. You need this file for
VCS to start. It should look like this:



0 daldev05

1 daldev06



GAB

GAB requires only one configuration file, /etc/gabtab. This file lists

the number of nodes in the cluster and also, if there are any

communication disks in the system, configuration for them. Ex:



/sbin/gabconfig -c -n2



tells GAB to start and to seed the cluster once 2 nodes have joined. To
specify VCS communication disks:



/sbin/gabdisk -a /dev/dsk/cXtXdXs2 -s 16 -p a

/sbin/gabdisk -a /dev/dsk/cXtXdXs2 -s 144 -p h

/sbin/gabdisk -a /dev/dsk/cYtYdYs2 -s 16 -p a

/sbin/gabdisk -a /dev/dsk/cYtYdYs2 -s 144 -p h



-a specifies the disk, -s specifies the start block for each

communication region, and -p specifies the port to use, "a" being the

GAB seed port and "h" the VCS port. The ports are the same as the

network ports used by LLT and GAB, but are simulated on a disk.



VCS

The VCS configuration file(s) are in /etc/VRTSvcs/conf/config. The two

most important files are main.cf and types.cf. I like to set $VCSCONF

to that directory to make my life easier. main.cf contains the actual

VCS configuration for Clusters, Groups, and Resources, while types.cf

contains C-like prototypes for each possible Resource.



The VCS configuration language is very similar to C, but all you
are doing is defining variables. Comments are "//" (if you try to use
#'s, you'll be unhappy with the result), and you can use "include"
statements if you want to break up your configuration to make it more
readable. One file you must include is types.cf.
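
For example, the very top of main.cf typically starts like this:

// main.cf -- VCS cluster configuration (C++-style comments only)
include "types.cf"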



In main.cf, you need to specify a Cluster definition:



cluster iMS ( )



You can specify variables within this cluster definition, but for the
most part, the defaults are acceptable. Cluster variables include the
maximum number of groups per cluster, link monitoring, log size,
maximum number of resources, maximum number of types, and a list of
user names for the GUI that you will never use and shouldn't install.



You then need to specify the systems in the cluster:



system daldev05 ( )

system daldev06 ( )



These systems must be in /etc/llthosts for VCS to start.



You can also specify SNMP settings for VCS:



snmp vcs (

Enabled = 1

IPAddr = 0.0.0.0

TrapList = { 1 = "A new system has joined the VCS Cluster",

2 = "An existing system has changed its state",

3 = "A service group has changed its state",

4 = "One or more heartbeat links has gone down",

5 = "An HA service has done a manual restart",

6 = "An HA service has been manually idled",

7 = "An HA service has been successfully started" }

)



IPAddr is the IP address of the trap listener. Enabled defaults to 0,

so you need to include this if you want VCS to send traps. You can

also specify a list of numerical traps; listed above are the VCS

default traps.



Each cluster can have multiple Service Group definitions. The most

basic Service Group looks like this:



group iMS5a (

SystemList = { daldev05, daldev06 }

AutoStartList = { daldev05 }

)



You can also set the following variables (not a complete list; see the
sketch after the list):



* FailOverPolicy - sets the policy used to determine which system to
fail over to: Priority (based on the numeric priority each system is
given in the group's SystemList, lowest number wins), Load (the system
with the lowest system load gets the failover), or RoundRobin (the
system running the fewest active service groups is chosen).

* ManualOps - whether VCS allows manual (CLI) operations on this Group
* Parallel - indicates whether the service group is parallel or failover
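
A hedged sketch of a group definition with some of these attributes set
(the attribute values here are purely illustrative):

group iMS5a (
SystemList = { daldev05, daldev06 }
AutoStartList = { daldev05 }
FailOverPolicy = Priority
Parallel = 0
)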



Inside each Service Group you need to define Resources. These are the

nuts and bolts of VCS. A full description of the bundled Resources can

be found in the Install Guide and a full description of the

configuration language can be found in the User's Guide.



Here are a couple of Resource examples:



NIC networka (

Device = hme0

NetworkType = ether

)



IP logical_IPa (

Device = hme0

Address = "10.10.30.156"

)



The first line begins with a Resource type (e.g. NIC or IP) and then a

globally unique name for that particular resource. Inside the paren

block, you can set the variables for each resource.



Once you have set up resources, you need to build a resource
dependency tree for the group. The syntax is "parent_resource requires
child_resource"; the parent resource depends on the child and is only
brought online once the child is up. A dependency tree for the above
resources would look like this:



logical_IPa requires networka



The dependency tree tells VCS which resources need to be started
before other resources can be activated. In this case, VCS knows that
the NIC hme0 has to be working before resource logical_IPa can be
started. This works well with things like volumes and volume groups;
without a dependency tree, VCS could try to mount a volume before
importing the volume group. VCS deactivates all VCS-controlled
resources when it shuts down, so all virtual interfaces (resource type
IP) are unplumbed and volumes are unmounted/exported at VCS shutdown.
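
As an illustration of the volume/volume-group case, a main.cf fragment for
a disk group, volume, and mount might look roughly like this (a sketch
only; the resource names and attribute values are hypothetical, using the
bundled DiskGroup, Volume, and Mount types that appear later in this
document):

DiskGroup appDG (
DiskGroup = appdg
)

Volume appVOL (
Volume = app01
DiskGroup = appdg
)

Mount appMOUNT (
MountPoint = "/apps"
BlockDevice = "/dev/vx/dsk/appdg/app01"
FSType = vxfs
)

appVOL requires appDG
appMOUNT requires appVOL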



Once the configuration is built, you can verify it by running
/opt/VRTSvcs/bin/hacf -verify /etc/VRTSvcs/conf/config, and then you can
start VCS by running /opt/VRTSvcs/bin/hastart.



Commands and Tasks



Here are some important commands in VCS. They are in /opt/VRTSvcs/bin

unless otherwise noted. It's a good idea to set your PATH to include

that directory.



Manpages for these commands are all installed in /opt/VRTS/man.



* hastart starts VCS using the current seeded configuration.

* hastop stops VCS. -all stops it on all VCS nodes in the cluster,
-force keeps the service groups up but stops VCS, -local stops VCS
on the current node only, and -sys systemname stops VCS on a remote system.

* hastatus shows VCS status for all nodes, groups, and resources.

It waits for new VCS status, so it runs forever unless you run it with

the -summary option.

* /sbin/lltstat shows network statistics (for only the local host)

much like netstat -s. Using the -nvv option shows detailed information

on all hosts on the network segment, even if they aren't members of

the cluster.

* /sbin/gabconfig sets the GAB configuration just like in
/etc/gabtab. /sbin/gabconfig -a shows the current GAB port status. Output
should look like this:



daldev05 # /sbin/gabconfig -a

GAB Port Memberships

===============================================================

Port a gen f6c90005 membership 01

Port h gen 3aab0005 membership 01



The last digits in each line are the node IDs of the cluster

members. Any mention of "jeopardy" ports means there's a problem with

that node in the cluster.

* haclus displays information about the VCS cluster. It's not

particularly useful because there are other, more detailed tools you

can use:

* hasys controls information about VCS systems. hasys -display
shows each host in the cluster and its current status. You can also
use it to add, delete, or modify systems in the cluster.

* hagrp controls Service Groups. It can offline, online (or swing)

groups from host to host. This is one of the most useful VCS tools.

* hares controls Resources. This is the finest granular tool for

VCS, as it can add, remove, or modify individual resources and

resource attributes.



Here are some useful things you can do with VCS:



Activate VCS: run "hastart" on one system. All members of the cluster

will use the seeded configuration. All the resources come up.



Swing a whole Group administratively:



Assuming the system you're running GroupA on is sysa, and you want to
swing it to sysb:



hagrp -switch GroupA -to sysb



Turn off a particular resource (say, ResourceA on sysa):



hares -offline ResourceA -sys sysa



In a failover Group, you can only online a resource on the system on
which the group is online, so if ResourceA is a member of GroupA, you
can only bring ResourceA online on the system that is running GroupA.
To online a resource:



hares -online ResourceA -sys sysa



If you get a fault on any resource or group, you need to clear the

Fault on a system before you can bring that resource/group up on it.

To clear faults:



hagrp -clear GroupA

hares -clear ResourceA



Caveats



Here are some tricks for VCS:



VCS likes to have complete control of all its resources. It brings up

all its own virtual interfaces, so don't bother to do that in your

init scripts. VCS also likes to have complete control of all the

Veritas volumes and groups, so you shouldn't mount them at boot. VCS

will fail to mount a volume unless it is responsible for importing the

Volume Group; if you import the VG and then start VCS, it will fail

after about 5 minutes and drop the volume without cleaning the FS. So
make sure all VCS-controlled VGs are exported (deported) before starting VCS.
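
For example, on a node where a VCS-managed disk group is already imported,
you might deport it before starting the cluster (a sketch; "appdg" is a
hypothetical disk group name):

# vxdg list            (list currently imported disk groups)
# vxdg deport appdg    (deport the VCS-managed group so VCS can import it itself)
# hastart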



Resource and Group names have no scope in VCS, so each must be a

unique identifier or VCS will fail to load your new configuration.

There is no equivalent to perl's my or local. VCS is also very case

sensitive, so all Types, Groups, Resources, and Systems must be the

same every time. To make matters worse, most of the VCS bundled types

use random capitalization to try to fool you. Copy and paste is your

friend.



Make sure to create your Resource Dependency Tree before you start
VCS or you could wreck your whole cluster.



The default time-out for LLT/GAB communication is 15 seconds. If VCS
detects that a system is down on all communication channels for 15 seconds,
it fails all of that system's resource groups over to a new system.
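
If you need a different detection window, the LLT peer-inactivity timer can
usually be tuned in /etc/llttab before the "start" line (a hedged sketch;
the directive name and its units, hundredths of a second, should be checked
against your VCS version's llttab documentation):

# declare a peer dead after 16 seconds of silence (value in 1/100 sec)
set-timer peerinact:1600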



If you use Veritas VM, VCS can't manage volumes in rootdg, so what I
do is encapsulate the root disk into rootdg and create new volumes in
their own VCS-managed VGs. Don't put VCS and non-VCS volumes in the
same VG.



Don't let VCS manage non-virtual interfaces. I did this in testing,
and if you fail a real interface, VCS will unplumb it and fail it over
as a virtual interface on the fail-over system. Then when you try to
swing it back, it will fail.



Notes on how the configuration is loaded



Because VCS doesn't have any fixed notion of a primary/slave node for
the cluster, VCS needs to determine which system has the valid
configuration for the cluster. As far as I can tell (because of course
it's not documented), this is how it works: when VCS starts, GAB waits a
predetermined timeout for the number of systems in /etc/gabtab to join
the cluster. At that point, all the systems in the cluster compare
local configurations, and the system with the newest config tries to
load it. If that config is invalid, it pulls in the next-newest valid
config. If it is valid, all the systems in the cluster load that config.



#######################################################################################



Creating Service Group



# haconf -makerw
To open the cluster configuration (make it writable)

# hagrp -add <service_group>
To create the service group

# hagrp -modify <service_group> SystemList <system1> 1 <system2> 2
To allow the service group to run on two systems, with priority 1 on system1

# hagrp -modify <service_group> AutoStartList <system1>
To allow system1 to auto-start the specified service group

# hagrp -display <service_group>
To verify that the service group can auto-start and that it is a failover service group

# haconf -dump -makero
To save the cluster configuration and make the configuration file main.cf read-only

# view /etc/VRTSvcs/conf/config/main.cf
To view the configuration file and check the changes
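
Putting these steps together with concrete (hypothetical) group and system
names, creating a two-node failover group might look like this:

# haconf -makerw
# hagrp -add websg
# hagrp -modify websg SystemList sysa 1 sysb 2
# hagrp -modify websg AutoStartList sysa
# haconf -dump -makero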



Adding Resources to a Service Group – NIC



#haconf -makerw

#hares -add <nameNIC1> NIC <service_group>

#hares -modify <nameNIC1> Critical 0 (zero)

#hares -modify <nameNIC1> Device <device>

#hares -modify <nameNIC1> Enabled 1

#hares -online <nameNIC1> -sys <system>

#hares -display <nameNIC1>

#haconf -dump

#view /etc/VRTSvcs/conf/config/main.cf



Adding Resources to a Service Group — IP



#hares -add <nameIP1> IP <service_group>

#hares -modify <nameIP1> Critical 0 (zero)

#hares -modify <nameIP1> Device <device>

#hares -modify <nameIP1> Address <ip_address>

#hares -modify <nameIP1> Enabled 1

#hares -online <nameIP1> -sys <system>

#hares -display <nameIP1>

#ifconfig -a

#haconf -dump

#view /etc/VRTSvcs/conf/config/main.cf



Adding Resources to a Service Group – Disk Group



#hares -add <nameDG1> DiskGroup <service_group>

#hares -modify <nameDG1> Critical 0 (zero)

#hares -modify <nameDG1> DiskGroup <diskgroup_name>

#hares -modify <nameDG1> Enabled 1

#hares -online <nameDG1> -sys <system>

#hares -display <nameDG1>

#vxprint -g <diskgroup_name>

#haconf -dump

#view /etc/VRTSvcs/conf/config/main.cf



Adding Resources to a Service Group – Disk Volume



#hares -add <nameVOL1> Volume <service_group>

#hares -modify <nameVOL1> Critical 0 (zero)

#hares -modify <nameVOL1> Volume <volume_name>

#hares -modify <nameVOL1> DiskGroup <diskgroup_name>

#hares -modify <nameVOL1> Enabled 1

#hares -online <nameVOL1> -sys <system>

#hares -display <nameVOL1>

#vxprint -g <diskgroup_name>

#haconf -dump

#view /etc/VRTSvcs/conf/config/main.cf



Adding Resources to a Service Group – Mount



#hares -add <nameMount1> Mount <service_group>

#hares -modify <nameMount1> Critical 0 (zero)

#hares -modify <nameMount1> BlockDevice <path_nameVOL1>

#hares -modify <nameMount1> MountPoint <mount_point>

#hares -modify <nameMount1> FSType vxfs

#hares -modify <nameMount1> FsckOpt %<-y/-n>

#hares -modify <nameMount1> Enabled 1

#hares -online <nameMount1> -sys <system>

#hares -display <nameMount1>

#mount

#haconf -dump

#view /etc/VRTSvcs/conf/config/main.cf



Adding Resources to a Service Group – Process Resource



#hares -add <nameProcess1> Process <service_group>

#hares -modify <nameProcess1> Critical 0 (zero)

#hares -modify <nameProcess1> Pathname </bin/sh>

#hares -modify <nameProcess1> Arguments <"/name1/loopy spacename 1">

#hares -modify <nameProcess1> Enabled 1

#hares -online <nameProcess1> -sys <system>

#hares -display <nameProcess1>

#haconf -dump

#view /etc/VRTSvcs/conf/config/main.cf



Linking Resources in Service Group



#hares -link <parent_resource> <child_resource>
(run once for each dependency, parent first; e.g. the IP resource requires
the NIC, the Volume requires the DiskGroup, the Mount requires the Volume)

#hares -dep

#haconf -dump

#view /etc/VRTSvcs/conf/config/main.cf



Testing the Service Group



#hagrp -switch <service_group> -to <system2>



#hastatus -system



#hagrp -switch <service_group> -to <system1>



#hastatus -summary



Setting Resources to Critical



#hares -modify <resource_name> Critical 1
(repeat for each resource in the service group, e.g. the NIC, IP,
DiskGroup, Volume, Mount, and Process resources created above)



#haconf -dump



#view /etc/VRTSvcs/conf/config/main.cf



#haconf -dump -makero





########################################################################################

Veritas Cluster Tasks



Create a Service Group



hagrp -add groupw

hagrp -modify groupw SystemList sun1 1 sun2 2

hagrp -autoenable groupw -sys sun1



Create a disk group resource, volume and filesystem resource



We have to create a disk group resource; this will ensure that the disk group has been imported before we start any volumes.

hares -add appDG DiskGroup groupw

hares -modify appDG Enabled 1

hares -modify appDG DiskGroup appdg

hares -modify appDG StartVolumes 0



Once the disk group resource has been created we can create the volume resource

hares -add appVOL Volume groupw

hares -modify appVOL Enabled 1

hares -modify appVOL Volume app01

hares -modify appVOL DiskGroup appdg



Now that the volume resource has been created we can create the filesystem mount resource

hares -add appMOUNT Mount groupw

hares -modify appMOUNT Enabled 1

hares -modify appMOUNT MountPoint /apps

hares -modify appMOUNT BlockDevice /dev/vx/dsk/appdg/app01

hares -modify appMOUNT FSType vxfs



To ensure that all resources are started in the correct order, we create dependencies between them

hares -link appVOL appDG

hares -link appMOUNT appVOL
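
To double-check the links, hares -dep lists the parent/child pairs for the
group; the corresponding lines written to main.cf will read:

appVOL requires appDG
appMOUNT requires appVOL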



Create an application resource



Once the filesystem resource has been created we can add an application resource; this will start, stop and monitor the application.

hares -add sambaAPP Application groupw

hares -modify sambaAPP Enabled 1

hares -modify sambaAPP User root

hares -modify sambaAPP StartProgram "/etc/init.d/samba start"

hares -modify sambaAPP StopProgram "/etc/init.d/samba stop"

hares -modify sambaAPP CleanProgram "/etc/init.d/samba clean"

hares -modify sambaAPP PidFiles "/usr/local/samba/var/locks/smbd.pid" "/usr/local/samba/var/locks/nmbd.pid"

hares -modify sambaAPP MonitorProcesses "smbd -D" "nmbd -D"



Create a single virtual IP resource



Create a single NIC resource

hares -add appNIC NIC groupw

hares -modify appNIC Enabled 1

hares -modify appNIC Device qfe0



Create the single application IP resource

hares -add appIP IP groupw

hares -modify appIP Enabled 1

hares -modify appIP Device qfe0

hares -modify appIP Address 192.168.0.3

hares -modify appIP NetMask 255.255.255.0

hares -modify appIP IfconfigTwice 1



Create a multi virtual IP resource



Create a multi NIC resource

hares -add appMultiNICA MultiNICA groupw

hares -local appMultiNICA Device

hares -modify appMultiNICA Enabled 1

hares -modify appMultiNICA Device qfe0 192.168.0.3 qfe1 192.168.0.3 -sys sun1 sun2

hares -modify appMultiNICA NetMask 255.255.255.0

hares -modify appMultiNICA ArpDelay 5

hares -modify appMultiNICA IfconfigTwice 1



Create the multi IP address resource; this will monitor the virtual IP addresses.

hares -add appIPMultiNIC IPMultiNIC groupw

hares -modify appIPMultiNIC Enabled 1

hares -modify appIPMultiNIC Address 192.168.0.3

hares -modify appIPMultiNIC NetMask 255.255.255.0

hares -modify appIPMultiNIC MultiNICResName appMultiNICA

hares -modify appIPMultiNIC IfconfigTwice 1



Clear resource fault



# hastatus -sum



-- SYSTEM STATE

-- System State Frozen



A sun1 RUNNING 0

A sun2 RUNNING 0



-- GROUP STATE

-- Group System Probed AutoDisabled State



B groupw sun1 Y N OFFLINE

B groupw sun2 Y N STARTING|PARTIAL



-- RESOURCES ONLINING

-- Group Type Resource System IState



E groupw Mount app02MOUNT sun2 W_ONLINE



# hares -clear app02MOUNT



Flush a group



# hastatus -sum



-- SYSTEM STATE

-- System State Frozen



A sun1 RUNNING 0

A sun2 RUNNING 0



-- GROUP STATE

-- Group System Probed AutoDisabled State



B groupw sun1 Y N STOPPING|PARTIAL

B groupw sun2 Y N OFFLINE|FAULTED



-- RESOURCES FAILED

-- Group Type Resource System



C groupw Mount app02MOUNT sun2



-- RESOURCES ONLINING

-- Group Type Resource System IState



E groupw Mount app02MOUNT sun1 W_ONLINE_REVERSE_PROPAGATE



-- RESOURCES OFFLINING

-- Group Type Resource System IState



F groupw DiskGroup appDG sun1 W_OFFLINE_PROPAGATE



# hagrp -flush groupw -sys sun1



##################################################################################################################################################################################

1 VERITAS CLUSTER SOLARIS

1.1 Overview

* Conf files:

LLT conf: /etc/llttab [should NOT need to access this]

GAB conf: /etc/gabtab

If it has /sbin/gabconfig -c -n2, you will need to run
/sbin/gabconfig -c -x to seed the cluster manually if only one system
comes up after both systems were down.

Cluster conf: /etc/VRTSvcs/conf/config/main.cf

Has exact details on what the cluster contains.

* Most executables are in: /opt/VRTSvcs/bin or /sbin

1.2 License status:

* /opt/VRTS/bin/vxlicrep

* To add licenses:

cd /opt/VRTSvcs/install

./licensevcs

1.3 Web administration (port 8181)

* http://10.74.24.122:8181/vcs/index

1.4 Administration with the graphical interface:

* hagui

1.5 Useful cluster information:

* hastatus -summary

* haclus -display

* hares -list

* hasys -display

* hatype -list

* hagrp -list

* hagrp -display

* For the hardware configuration, it can also be useful to run
prtdiag. On Fujitsu systems there are:

* /opt/FJSVmadm/sbin/hrdconf -l

* /usr/platform/FJSV,GPUSK/sbin/prtdiag

1.6 Checking LLT - Low Latency Transport

* /etc/llthosts

* /etc/llttab

* To check the active LLT links:

lltstat -n (run on every system)

You can also use lltstat -nvv. It shows the systems in the
cluster and the network heartbeats (the links, normally 2 on
well-configured systems).

* For port status: lltstat -p

* Check that the module is loaded in the kernel:

modinfo | grep llt

* To unload the module from the kernel:

modunload -i llt_id

1.7 Checking GAB - Group Membership and Atomic Broadcast

* /etc/gabtab (also contains the disk heartbeat configuration)

* Example gabtab:

/sbin/gabdiskhb -a /dev/dsk/c2t1d2s3 -s 16 -p a

/sbin/gabdiskhb -a /dev/dsk/c2t1d2s3 -s 144 -p h

/sbin/gabconfig -c -n2 (the number after -n is the quorum, i.e. the
number of systems that must be up to form the VCS cluster and allow it
to start)

* Run the command /sbin/gabconfig -a

Some particular situations:

* Empty output: GAB is not running

* If jeopardy appears instead of just membership, then a link is broken

* Checking GAB on the disks: shows the disk heartbeat.

gabdiskhb -l

* Check that the module is loaded in the kernel:

modinfo | grep gab

* To unload the module from the kernel:

modunload -i gab_id

1.8 Checking main.cf

* To check the syntax of the file
/etc/VRTSvcs/conf/config/main.cf, use hacf:

# cd /etc/VRTSvcs/conf/config

# hacf -verify .

1.9 Global configuration of the groups managed by the cluster

* Just look at the file
/etc/VRTSvcs/conf/config/main.cf. You can see many details, the
default parameters for example, with the following command:

* /opt/VRTS/bin/hagrp -display

This checks the configuration. Package (service group) dependencies
are included. For example:

#Group Attribute System Value

ClusterService Administrators global

ClusterService AutoFailOver global 1

ClusterService AutoRestart global 1

ClusterService AutoStart global 1

ClusterService AutoStartIfPartial global 1

ClusterService AutoStartList global prodsshr1 prodsshr0

ClusterService AutoStartPolicy global Order

ClusterService Evacuate global 1

ClusterService ExtMonApp global

ClusterService ExtMonArgs global

ClusterService FailOverPolicy global Priority

ClusterService FaultPropagation global 1

ClusterService Frozen global 0

ClusterService GroupOwner global

ClusterService IntentOnline global 1

ClusterService Load global 0

ClusterService ManageFaults global ALL

ClusterService ManualOps global 1

ClusterService NumRetries global 0

ClusterService OnlineRetryInterval global 0

ClusterService OnlineRetryLimit global 0

ClusterService Operators global

ClusterService Parallel global 0

ClusterService PreOffline global 0

ClusterService PreOnline global 0

ClusterService PreonlineTimeout global 300

ClusterService Prerequisites global

ClusterService PrintTree global 1

ClusterService Priority global 0

ClusterService Restart global 0

ClusterService SourceFile global ./main.cf

ClusterService SystemList global prodsshr1 1 prodsshr0 2

ClusterService SystemZones global

ClusterService TFrozen global 0

ClusterService Tag global

ClusterService TriggerEvent global 1

ClusterService TriggerResStateChange global 0

ClusterService TypeDependencies global

ClusterService UserIntGlobal global 0

ClusterService UserStrGlobal global

ClusterService AutoDisabled prodsshr0 0

ClusterService AutoDisabled prodsshr1 0

ClusterService Enabled prodsshr0 1

ClusterService Enabled prodsshr1 1

ClusterService PreOfflining prodsshr0 0

ClusterService PreOfflining prodsshr1 0

ClusterService PreOnlining prodsshr0 0

ClusterService PreOnlining prodsshr1 0

ClusterService Probed prodsshr0 1

ClusterService Probed prodsshr1 1

ClusterService ProbesPending prodsshr0 0

ClusterService ProbesPending prodsshr1 0

ClusterService State prodsshr0 |OFFLINE|

ClusterService State prodsshr1 |ONLINE|

ClusterService UserIntLocal prodsshr0 0

ClusterService UserIntLocal prodsshr1 0

ClusterService UserStrLocal prodsshr0

ClusterService UserStrLocal prodsshr1

#

beadm_sg Administrators global

beadm_sg AutoFailOver global 1

beadm_sg AutoRestart global 1

beadm_sg AutoStart global 1

beadm_sg AutoStartIfPartial global 1

beadm_sg AutoStartList global prodsshr1 prodsshr0

beadm_sg AutoStartPolicy global Order

beadm_sg Evacuate global 1

beadm_sg ExtMonApp global

beadm_sg ExtMonArgs global

beadm_sg FailOverPolicy global Priority

beadm_sg FaultPropagation global 1

beadm_sg Frozen global 0

beadm_sg GroupOwner global

beadm_sg IntentOnline global 1

beadm_sg Load global 0

beadm_sg ManageFaults global ALL

beadm_sg ManualOps global 1

beadm_sg NumRetries global 0

beadm_sg OnlineRetryInterval global 0

beadm_sg OnlineRetryLimit global 0

beadm_sg Operators global

beadm_sg Parallel global 0

beadm_sg PreOffline global 0

beadm_sg PreOnline global 0

beadm_sg PreonlineTimeout global 300

beadm_sg Prerequisites global

beadm_sg PrintTree global 1

beadm_sg Priority global 0

beadm_sg Restart global 0

beadm_sg SourceFile global ./main.cf

beadm_sg SystemList global prodsshr1 1 prodsshr0 2

beadm_sg SystemZones global

beadm_sg TFrozen global 0

beadm_sg Tag global

beadm_sg TriggerEvent global 1

beadm_sg TriggerResStateChange global 0

beadm_sg TypeDependencies global

beadm_sg UserIntGlobal global 0

beadm_sg UserStrGlobal global
beadm_sg AutoDisabled prodsshr0 0

beadm_sg AutoDisabled prodsshr1 0

beadm_sg Enabled prodsshr0 1

beadm_sg Enabled prodsshr1 1

beadm_sg PreOfflining prodsshr0 0

beadm_sg PreOfflining prodsshr1 0

beadm_sg PreOnlining prodsshr0 0

beadm_sg PreOnlining prodsshr1 0

beadm_sg Probed prodsshr0 1

beadm_sg Probed prodsshr1 1

beadm_sg ProbesPending prodsshr0 0

beadm_sg ProbesPending prodsshr1 0

beadm_sg State prodsshr0 |ONLINE|

beadm_sg State prodsshr1 |OFFLINE|

beadm_sg UserIntLocal prodsshr0 0

beadm_sg UserIntLocal prodsshr1 0

beadm_sg UserStrLocal prodsshr0

beadm_sg UserStrLocal prodsshr1

#

1.10 VERITAS Cluster Basic Administrative Operations

1.10.1 Administering Service Groups

* To start a service group and bring its resources
online

# hagrp -online service_group -sys system

* To start a service group on a system (System 1) and
bring online only the resources already online on
another system (System 2)

# hagrp -online service_group -sys system

-checkpartial other_system

If the service group does not have resources online
on the other system, the service group is brought
online on the original system and the checkpartial
option is ignored. Note that the checkpartial option
is used by the Preonline trigger during failover.
When a service group configured with PreOnline = 1 fails on one system
(system 1) and fails over to another system (system 2), the only
resources brought online on system 2 are those that were previously
online on system 1 prior to failover.

* To stop a service group and take its resources
offline

# hagrp -offline service_group -sys system

* To stop a service group only if all resources are
probed on the system

# hagrp -offline [-ifprobed] service_group -sys
system

* To switch a service group from one system to another

# hagrp -switch service_group -to system

The -switch option is valid for failover groups only.
A service group can be switched only if it is fully
or partially online.


* To freeze a service group (disable onlining,
offlining, and failover)

# hagrp -freeze service_group [-persistent]

The option -persistent enables the freeze to be
remembered when the cluster is rebooted.

* To thaw a service group (reenable onlining,
offlining, and failover)

# hagrp -unfreeze service_group [-persistent]

* To enable a service group

# hagrp -enable service_group [-sys system]

A group can be brought online only if it is enabled.

* To disable a service group

# hagrp -disable service_group [-sys system]

A group cannot be brought online or switched if it is
disabled.

* To enable all resources in a service group

# hagrp -enableresources service_group

* To disable all resources in a service group

# hagrp -disableresources service_group

Agents do not monitor group resources if resources
are disabled.

* To clear faulted, non-persistent resources in a
service group

# hagrp -clear [service_group] -sys [system]

Clearing a resource automatically initiates the
online process previously blocked while waiting for
the resource to become clear. - If system is
specified, all faulted, non-persistent resources are
cleared from that system only. - If system is not
specified, the service group is cleared on all
systems in the group s SystemList in which at least
one non-persistent resource has faulted.

1.10.2 Resources of a service group

* prodsshr0:{root}:/>hagrp -resources beadm_sg

tws_client

beadm_dg

beadm_mip

beadm_mnt

twsdm_mnt

prodsshr_mnic

bea_admin

TWS_vol

beadm_vol

prodsshr0:{root}:/>

1.11 VERITAS: VCS Administration Commands

1.11.1 Cluster Start/Stop:

* stop VCS on all systems:

# hastop -all

* stop VCS on bar_c and move all groups out:

# hastop [ -local ] -sys bar_c -evacuate

* start VCS on local system:

# hastart

1.11.2 Users:

* add gui root user:

# haconf -makerw

# hauser -add root

# haconf -dump -makero

1.11.3 Set/update VCS super user password:

* add root user:

# haconf -makerw

# hauser -add root

password:...

# haconf -dump -makero

* change root password:

# haconf -makerw

# hauser -update root

password:...

# haconf -dump -makero

1.11.4 Group:

* group start, stop:

# hagrp -offline groupx -sys foo_c

# hagrp -online groupx -sys foo_c

* switch a group to other system:

# hagrp -switch groupx -to bar_c

* freeze a group:

# hagrp -freeze groupx

* unfreeze a group:

# hagrp -unfreeze groupx

* enable a group:

# hagrp -enable groupx

* disable a group:

# hagrp -disable groupx

* enable all resources in a group:

# hagrp -enableresources groupx

* disable all resources in a group:

# hagrp -disableresources groupx

* flush a group:

# hagrp -flush groupx -sys bar_c

1.11.5 Node:

* freeze node:

# hasys -freeze bar_c

* thaw node:

# hasys -unfreeze bar_c

1.11.6 Resources:

* online a resource:

# hares -online IP_192_168_1_54 -sys bar_c

* offline a resource:

# hares -offline IP_192_168_1_54 -sys bar_c

* offline a resource and propagate it to its children:

# hares -offprop IP_192_168_1_54 -sys bar_c

* probe a resource:

# hares -probe IP_192_168_1_54 -sys bar_c

* clear faulted resource:

# hares -clear IP_192_168_1_54 -sys bar_c

1.11.7 Agents:

* start agent:

# haagent -start IP -sys bar_c

* stop agent:

# haagent -stop IP -sys bar_c

1.11.8 Reboot a node with evacuation of all service groups:

* (groupy is running on bar_c)

* # hastop -sys bar_c -evacuate

* # init 6

* # hagrp -switch groupy -to bar_c

1.12 Starting Cluster Manager (Java Console) and
Configuration Editor

1. After establishing a user account and setting the
display, type the following commands to start Cluster
Manager and Configuration Editor:

* # hagui

* # hacfed

2. Run /opt/VRTSvcs/bin/hagui.



##############################################################################################################################################################################

Hi All,

I'm a newbie in VCS and trying to learn the CLI before switching over to learning hagui. If you don't mind, could you verify the commands below and their sequence?
DiskGroup ora9idg (
DiskGroup = ora9idg
StartVolumes = 1
StopVolumes = 0
)
IP ora9idg_IP (
Device = qfe3
Address = "192.168.1.2"
NetMask = "255.255.255.0"
)

Netlsnr ora9idb_lsnr (
Owner = oracle
Home = "/oraclesw/product/9.2.0.1.0"
TnsAdmin = "/oraclesw/product/9.2.0.1.0/network/admin"
Listener = ora9idb_lsnr
MonScript = "./bin/Netlsnr/LsnrTest.pl"
)
..

# haconf -makerw

# hares -add ora9idg DiskGroup ora9idb-Group (assuming ora9idb-Group already exists)
# hares -add ora9idg_IP IP ora9idb-Group
# hares -add ora9idb_lsnr Netlsnr ora9idb-Group

# hatype -add DiskGroup
# hatype -add IP
# hatype -add Netlsnr

# hares -modify ora9idg DiskGroup ora9idg -sys vcs-test-phys2
# hares -modify ora9idg StartVolumes 1 -sys vcs-test-phys2
# hares -modify ora9idg StopVolumes 0 -sys vcs-test-phys2

# hares -modify ora9idg_IP Device -add qfe3 192.168.1.2 -sys vcs-test-phys2
# hares -modify ora9idg_IP NetMask 255.255.255.0

# hares -modify ora9idb_lsnr Owner oracle -sys vcs-test-phys2
# hares -modify ora9idb_lsnr Home "/oraclesw/product/9.2.0.1.0" -sys vcs-test-phys2
# hares -modify ora9idb_lsnr TnsAdmin "/oraclesw/product/9.2.0.1.0/network/admin" -sys vcs-test-phys2
# hares -modify ora9idb_lsnr Listener ora9idb_lsnr -sys vcs-test-phys2
# hares -modify ora9idb_lsnr MonScript "./bin/Netlsnr/LsnrTest.pl" -sys vcs-test-phys2

# haconf -dump -makero
After that, can I online either the service group or each resource to make sure they are fine before I fail over? In that case, will it automatically copy the new resource entries in main.cf on the 2nd node to the 1st node? I'm trying to add and bring a new service group online without disturbing a running cluster.
I also see that the User's Guide mentions that after creating a resource type, you use the haattr command to add its attributes. However, I see the hares CLI also has the capability to add attributes. So I'm wondering which one you would recommend? Any help is appreciated. Thx.
TIA,
-Chris
