Sun Cluster Cheat Sheet — 3

Displaying existing DG resources in the cluster

scstat -D

Registering VxVM DGs

scconf -a -D type=vxvm,name=<dgname>,\
nodelist=<node1>:<node2>,\
preferenced=true,failback=enabled

  • nodelist should contain only nodes that are physically connected to the disks of that dg.
  • preferenced=true/false affects whether nodelist indicates an order of failover preference. On a two-node cluster, this option is only meaningful if failback is enabled.
  • failback=enabled/disabled affects whether a preferred node “takes back” its device group when it rejoins the cluster. The default value is disabled. When failback is disabled, preferenced is set to false; when failback is enabled, preferenced must also be set to true.
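For example, a hypothetical registration of a dg named webdg across nodes node1 and node2 (all names illustrative only):

scconf -a -D type=vxvm,name=webdg,nodelist=node1:node2,preferenced=true,failback=enabled
scstat -D      # verify that the new device group is listed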

Moving DGs across nodes of a cluster

When VxVM dgs are registered as Sun Cluster resources, NEVER use the vxdg import/deport commands to change which node owns a dg. Doing so causes Sun Cluster to treat the dg as a failed resource.

Use the following command instead:

# scswitch -z -D <dgname> -h <nodename>
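
For example, to move a hypothetical dg named webdg to node2:

# scswitch -z -D webdg -h node2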

Resyncing Device Groups

scconf -c -D name=<dgname>,sync

Changing DG configuration

scconf -c -D name=<dgname>,preferenced=<true|false>,failback=<enabled|disabled>

Maintenance mode

scswitch -m -D <dgname>

NOTE: all volumes in the dg must be closed or unmounted (not in use) before the dg can be placed in maintenance mode.
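
A minimal sketch, assuming a hypothetical dg webdg with a volume mounted on /web:

umount /web              # make sure no volume in the dg is in use
scswitch -m -D webdg     # place the device group in maintenance mode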

To come back out of maintenance mode

scswitch -z -D <dgname> -h <nodename>

Repairing DID device database after replacing JBOD disks

  • Make sure you know which disk to update first:

scdidadm -l c1t1d0

returns node1:/dev/rdsk/c1t1d0 /dev/did/rdsk/d7

scdidadm -l d7

returns node1:/dev/rdsk/c1t1d0 /dev/did/rdsk/d7

Then use the following commands to update and verify the DID information:

scdidadm -R d7
scdidadm -l -o diskid d7

returns a long string containing the disk ID.

Replacing a failed disk in an A5200 array (the concept is similar for other FC disk arrays)

vxdisk list      # identify the failed disk

vxprint -g <dgname>      # determine the state of the volume(s) that might be affected

On the hosting node, replace the failed disk:

luxadm remove_device <enclosure>,<position>
luxadm insert_device <enclosure>,<position>

On either node of the cluster (that hosts the dg):

scdidadm -l c#t#d#
scdidadm -R d#

On the hosting node:

vxdctl enable

vxdiskadm (replace failed disk in vxvm)

vxprint -g <dgname>
vxtask list      # ensure that resyncing has completed

Move back any relocated subdisks/plexes (if hot-relocation had to move something out of the way):

vxunreloc <repaired-diskname>

Solaris Volume Manager (SVM/SDS) in a Sun Cluster Environment

The preferred way to use soft partitions is to build mirrors from single full-disk slices and then create the volumes (soft partitions) on top of those mirrors (conceptually similar to the VxVM public region on an initialized disk).
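
A minimal sketch of that approach, assuming a hypothetical shared diskset nfsds and DID devices d9 and d17 (all names illustrative):

metainit -s nfsds d11 1 1 /dev/did/rdsk/d9s0       # submirror built from a single full-disk slice
metainit -s nfsds d12 1 1 /dev/did/rdsk/d17s0      # second submirror
metainit -s nfsds d10 -m d11                       # create the mirror with one submirror
metattach -s nfsds d10 d12                         # attach the second submirror
metainit -s nfsds d100 -p d10 2g                   # carve a 2 GB soft partition (volume) out of the mirror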

Shared Disksets and Local Disksets

Only disks that are physically located in multi-ported storage can be members of shared disksets. Disks in the same diskset operate as a unit: they can be used together to build mirrored volumes, and primary ownership of the diskset transfers as a whole from node to node.

Boot disks belong to the local diskset. Having a local diskset (with its state database replicas) is a prerequisite for creating shared disksets.

Replica management

  • Add local replicas manually (see the example after this list).
  • Put local state database replicas on slice 7 of the disks (by convention) to maintain uniformity; shared disksets must have their replicas on slice 7.
  • Spread local replicas evenly across disks and controllers.
  • Support for shared disksets is provided by the SUNWmdm package.
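
A minimal example of adding local replicas by hand, assuming hypothetical local disks c0t0d0 and c1t0d0 with slice 7 set aside for state database replicas:

metadb -a -f -c 3 c0t0d0s7      # -f is required when creating the very first replicas
metadb -a -c 3 c1t0d0s7         # spread replicas across a second disk/controller
metadb                          # verify replica status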

Modifying /kernel/drv/md.conf

nmd = maximum number of volumes (default 128)
md_nsets = maximum number of disksets (default 4, maximum 32)
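
For example, the relevant line in /kernel/drv/md.conf might be edited as follows (values illustrative; a reconfiguration reboot is required afterwards):

name="md" parent="pseudo" nmd=256 md_nsets=8;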

Creating shared disksets and mediators

scdidadm -l c1t3d0      # returns d17 as the DID device
scdidadm -l d17
metaset -s <setname> -a -h <node1> <node2>      # creates the diskset
metaset -s <setname> -a -m <node1> <node2>      # adds the mediator hosts
metaset -s <setname> -a /dev/did/rdsk/d9 /dev/did/rdsk/d17      # adds disks to the diskset
metaset                 # displays diskset status
metadb -s <setname>
medstat -s <setname>    # reports mediator status

Remaining syntax vis-a-vis Sun Cluster is identical to that for VxVM.

IPMP and Sun Cluster

IPMP itself is cluster-unaware. To work around that, Sun Cluster uses a cluster-specific public network management daemon (pnmd) to integrate IPMP into the cluster.
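
A minimal sketch of a two-adapter IPMP group on a node's public network, assuming hypothetical adapters qfe0/qfe1, group name ipmp0, and test hostnames already present in /etc/hosts (Solaris 8/9 /etc/hostname.* syntax):

# /etc/hostname.qfe0 (data address plus test address)
node1 netmask + broadcast + group ipmp0 up \
addif node1-qfe0 deprecated -failover netmask + broadcast + up

# /etc/hostname.qfe1 (standby adapter with test address)
node1-qfe1 netmask + broadcast + deprecated group ipmp0 -failover standby up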

The pnmd daemon has two capabilities:

  • populate the CCR with public network adapter status
  • facilitate application failover

When pnmd detects that all members of a local IPMP group have failed, it consults a file called /var/cluster/run/pnm_callbacks. This file contains entries that would have been created by the activation of LogicalHostname and SharedAddress resources. It is the job of hafoip_ipmp_callback to decide whether to migrate resources to another node.

scstat -i       #view IPMP configuration
