Data Services in the Cluster
HAStoragePlus makes a local filesystem highly available. It provides the following capabilities:
- additional filesystem checks
- mounts and unmounts
- enables Sun Cluster to fail over local file systems (to fail over, the local file system must reside on global disk groups with affinity switchovers enabled); see the sketch below
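A minimal sketch of making a local filesystem highly available with HAStoragePlus (resource group, resource, node, and mount point names are hypothetical):
scrgadm -a -t SUNW.HAStoragePlus #register the resource type once per cluster
scrgadm -a -g app-rg -h node1,node2
scrgadm -a -j app-stor-rs -g app-rg -t SUNW.HAStoragePlus \
-x FilesystemMountPoints=/app -x AffinityOn=True
scswitch -Z -g app-rg #bring the group, and with it the filesystem, online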
A Data Service Agent is specially written software that allows a data service in a cluster to operate properly.
The Data Service Agent (or Agent) does the following for a standard application:
- starts and stops the application
- monitors faults
- validates the configuration
- provides a registration information file that allows Sun Cluster to store all the information about the agent's methods (see the sketch below)
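As a rough illustration only (not a Sun-supplied agent), a START method is essentially a script that brings the application up and returns 0 on success; real agents also use the scha_* API and the Process Monitor Facility (pmfadm) to handle restarts. Paths and names below are hypothetical:
#!/bin/ksh
# illustrative START method for a hypothetical agent
/opt/myapp/bin/myappd & #start the application daemon in the background
exit 0 #exit status 0 tells the RGM that the START method succeeded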
Sun Cluster 2.x runs fault-monitoring components on the failover (backup) node, which can initiate a takeover. In Sun Cluster 3.x software this is not allowed: the fault monitor runs on the primary (active) node and can either restart the service there or request a failover.
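The restart-versus-failover behavior is commonly tuned through standard resource properties such as Retry_count and Retry_interval. The resource name and values below are only a hypothetical example:
scrgadm -c -j nfs-res -y Retry_count=2 -y Retry_interval=300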
Failover resource groups:
- Logical host resource: SUNW.LogicalHostname
- Data storage resource: SUNW.HAStoragePlus
- NFS resource: SUNW.nfs
Shut down a resource group:
scswitch -F -g <resource-group>
Turn on a resource group:
scswitch -Z -g <resource-group>
Switch a failover group over to another node:
scswitch -z -g <resource-group> -h <node>
Restart a resource group:
scswitch -R -h <node> -g <resource-group>
Evacuate all resources and resource groups from a node:
scswitch -S -h <node>
Disable a resource and its fault monitor:
scswitch -n -j <resource>
Enable a resource and its fault monitor:
scswitch -e -j <resource>
Clear the STOP_FAILED flag:
scswitch -c -j <resource> -h <node> -f STOP_FAILED
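A concrete sequence with a hypothetical group nfs-rg on nodes node1/node2:
scswitch -z -g nfs-rg -h node2 #move nfs-rg to node2
scswitch -F -g nfs-rg #take it offline on all nodes
scswitch -Z -g nfs-rg #bring it back online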
How to add a disk group and volume to the cluster configuration
1. Create the disk group and volume (see the sketch after this procedure).
2. Register the local disk group with the cluster.
root@aesnsra1:../ # scconf -a -D type=vxvm,name=patroldg2,nodelist=aesnsra2
root@aesnsra2:../ # scswitch -z -h aesnsra2 -D patroldg2
3. Create your file system (see the sketch after this procedure).
4. Update /etc/vfstab with the new filesystem entry; set the mount-at-boot field to no so that HAStoragePlus, not the boot process, mounts it.
- example:
/dev/vx/dsk/patroldg2/patroldg02 /dev/vx/rdsk/patroldg2/patroldg02 /patrol02 vxfs 3 no suid
5. Set up a resource group with an HAStoragePlus resource for the local filesystem:
root@aesnsra2:../ # scrgadm -a -g aescib1-hastp-rg -h aescib1
root@aesnsra2:../ # scrgadm -a -g aescib1-hastp-rg -j sapmntdg01-rs -t SUNW.HAStoragePlus -x FilesystemMountPoints=/sapmnt
6. Bring the resource group online which will mount the specified filesystem:
root@aesnsra2:../ # scswitch -Z -g hastp-aesnsra2-rg
7. Enable the resource:
root@aesnsra2:../# scswitch -e -j osdumps-dev-rs
Optional step:
8. reboot and test.
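A sketch of steps 1 and 3 above, assuming the disk group patroldg2 already exists under VxVM and using a hypothetical 10 GB volume (sizes and names are illustrative):
vxassist -g patroldg2 make patroldg02 10g #step 1: create the volume in the disk group
mkfs -F vxfs /dev/vx/rdsk/patroldg2/patroldg02 #step 3: create the VxFS filesystem
mkdir -p /patrol02 #mount point referenced in /etc/vfstab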
Fault monitor operations
Disable the fault monitor for a resource:
scswitch -n -M -j <resource>
Enable the fault monitor for a resource:
scswitch -e -M -j <resource>
scstat -g #shows status of all resource groups
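For example, to stop monitoring a hypothetical resource nfs-res during maintenance and turn it back on afterwards:
scswitch -n -M -j nfs-res #monitoring off; the resource itself stays online
scswitch -e -M -j nfs-res #monitoring back on
scstat -g #confirm resource and group states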
Using scrgadm to register and configure data service software
e.g.:
scrgadm -a -t SUNW.nfs
scrgadm -a -t SUNW.HAStoragePlus
scrgadm -p
Create a failover resource group:
scrgadm -a -g nfs-rg -h node1,node2 \
-y Pathprefix=/global/nfs/admin
Add a logical hostname resource to the rg:
scrgadm -a -L -g nfs-rg -l clustername-nfs
Create an HAStoragePlus resource:
scrgadm -a -j nfs-stor -g nfs-rg \
-t SUNW.HAStoragePlus \
-x FilesystemMountPoints=/global/nfs -x AffinityOn=True
Create SUNW.nfs resource:
scrgadm -a -j nfs-res -g nfs-rg \
-t SUNW.nfs -y Resource_dependencies=nfs-stor
Print the various resource/resource group dependencies via scrgadm:
scrgadm -pvv|grep -i depend #And then parse this output
Enable resources and resource monitors, manage the resource group, and switch it to the online state:
scswitch -Z -g nfs-rg
scstat -g
Show current RG configuration:
scrgadm -p[v[v]] [ -t resource_type_name ] [ -g resgrpname ] \
[ -j resname ]
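For example, with the hypothetical NFS setup above:
scrgadm -pvv -g nfs-rg #full detail for one resource group
scrgadm -pv -j nfs-res #properties of a single resource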
Resizing a VxVM/VxFS volume and filesystem under Sun Cluster
# vxassist -g aesnfsp growby saptrans 5g
# scconf -c -D name=aesnfsp,sync
root@aesrva1:../ # vxprint -g aesnfsp -v saptrans
TY NAME ASSOC KSTATE LENGTH PLOFFS STATE TUTIL0 PUTIL0
v saptrans fsgen ENABLED 188743680 - ACTIVE - -
root@aesrva1:../ # fsadm -F vxfs -b 188743680 /saptrans #-b takes the new volume size in sectors (the LENGTH column from vxprint)
UX:vxfs fsadm: INFO: /dev/vx/rdsk/aesnfsp/saptrans is currently 178257920 sectors - size will be increased
root@aesrva1:../ # scconf -c -D name=aesnfsp,sync
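The new size can then be confirmed from the mounted filesystem, e.g.:
df -k /saptrans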
Command Quick Reference
scstat #show cluster status
scconf #view and change cluster configuration
scrgadm #administer resource types, resource groups, and resources
scha_* commands (e.g. scha_resource_get, scha_control) #query/control interface used by data service agents
scdidadm #administer DID devices
Sun Terminal Concentrator (Annex NTS)
Enable Setup mode by pressing the TC Test button until the TC power indicator starts to blink rapidly, then release the button and press it again briefly.
On entering Setup mode, a "monitor::" prompt is displayed.
Set up IP address using:
monitor::addr
Set up the load source:
monitor::seq
Specify the boot image:
monitor::image
Telnet to the TC IP address and enter "cli" when prompted for a port.
Elevate to the privileged account using "su".
Run "admin" at the TC OS prompt to get the "admin:" subprompt, then:
show port=1 type mode
set port=<port> type <type> mode <mode> #choose the appropriate options
quit (to exit the admin subprompt)
boot (to reboot the TC so the changes take effect)
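As an illustration, the port settings commonly used when the TC ports drive cluster node consoles look like the following; treat the exact values as an assumption and verify them against the TC documentation:
set port=1-8 type dial_in #at the admin: subprompt
set port=1-8 mode slave
quit
boot #reboot the TC so the new settings take effect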