Jul 18 2007
 

Displaying existing DG resources in the cluster

scstat -D

Registering VxVM DGs

scconf -a -D type=vxvm,name=<dgname>, \
nodelist=<node1>:<node2>, \
preferenced=true,failback=enabled

  • nodelist should contain only nodes that are physically connected to the disks of that DG.
  • preferenced=true/false controls whether nodelist indicates an order of failover preference. On a two-node cluster, this option is only meaningful if failback is enabled.
  • failback=enabled/disabled controls whether a preferred node "takes back" its device group when it rejoins the cluster. The default value is disabled. When failback is disabled, preferenced is set to false; when failback is enabled, preferenced must also be set to true. A concrete registration example follows this list.
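For example, a minimal registration for a hypothetical device group named webdg shared by nodes node1 and node2 (all names here are placeholders, not taken from the notes above) might look like this:

# register the VxVM disk group "webdg" as a Sun Cluster device group
scconf -a -D type=vxvm,name=webdg,nodelist=node1:node2,preferenced=true,failback=enabled

# verify that the new device group is known to the cluster
scstat -D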

Moving DGs across nodes of a cluster

When VxVM DGs are registered as Sun Cluster resources, NEVER use vxdg import/deport commands to change ownership (node-wise) of the DGs. Doing so will cause Sun Cluster to treat the DG as a failed resource.

Use the following command instead:

# scswitch -z -D <dgname> -h <node_to_switch_to>
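For instance (reusing the hypothetical webdg and node names from the earlier example), switching the device group and confirming its new primary could look like:

# move the device group to node2, then check which node is now primary
scswitch -z -D webdg -h node2
scstat -D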

Resyncing Device Groups

scconf -c -D name=<dgname>,sync

Changing DG configuration

scconf -c -D name=<dgname>,preferenced=<true|false>,failback=<enabled|disabled>

Maintenance mode

scswitch -m -D <dgname>

NOTE: all volumes in the DG must be unopened or unmounted (not in use) for this to work.

To come back out of maintenance mode

scswitch -z -D <dgname> -h <new_primary_node>
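A sketch of the full cycle for the hypothetical webdg (assuming its volumes are already unmounted and unopened):

# place the device group in maintenance mode
scswitch -m -D webdg

# ... perform the maintenance work ...

# bring it back online with node1 as the new primary and verify
scswitch -z -D webdg -h node1
scstat -D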

Repairing DID device database after replacing JBOD disks

Make sure you know which disk to update:

scdidadm -l c1t1d0

returns node1:/dev/rdsk/c1t1d0 /dev/did/rdsk/d7

scdidadm -l d7

returns node1:/dev/rdsk/c1t1d0 /dev/did/rdsk/d7

Then use the following commands to update and verify the DID information:

scdidadm -R d7
scdidadm -l -o diskid d7

returns a long string containing the disk ID.

Replacing a failed disk in an A5200 Array (similar concept with other FC disk arrays)

vxdisk list             # get the failed disk name

vxprint -g <dgname>     # determine the state of the volume(s) that might be affected

On the hosting node, replace the failed disk:

luxadm remove <enclosure>,<position>
luxadm insert <enclosure>,<position>

On either node of the cluster (that hosts the dg):

scdidadm -l c#t#d#
scdidadm -R d#

On the hosting node:

vxdctl enable

vxdiskadm               # replace the failed disk in VxVM

vxprint -g <dgname>
vxtask list             # ensure that resyncing is completed

Remove any relocated submirrors/plexes (if hot-relocation had to move something out of the way):

vxunreloc <repaired-diskname>
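Putting those steps together for a hypothetical failed disk c1t3d0 (DID device d7, disk media name webdg01) in the disk group webdg, located in an enclosure named box1 at front slot 3 (every name and slot here is a placeholder):

# on the hosting node: identify the failed disk and the affected volumes
vxdisk list
vxprint -g webdg

# physically replace the disk
luxadm remove box1,f3
luxadm insert box1,f3

# on either node connected to the array: refresh the DID entry
scdidadm -l c1t3d0
scdidadm -R d7

# back on the hosting node: rescan, replace the disk in VxVM, watch the resync
vxdctl enable
vxdiskadm
vxprint -g webdg
vxtask list

# undo hot-relocation if it moved any plexes
vxunreloc -g webdg webdg01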

Solaris Volume Manager (SDS) in a Sun Cluster Environment

The preferred way to use soft partitions is to build mirrors from single whole-disk slices and then create the volumes (soft partitions) on top of those mirrors, conceptually similar to the VxVM public region on an initialized disk.
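A minimal sketch of that layering, assuming a shared diskset named ds1 built on DID devices d9 and d17 (the set name, metadevice numbers, and sizes are placeholders):

# one-slice submirrors from whole-disk slices
metainit -s ds1 d11 1 1 /dev/did/rdsk/d9s0
metainit -s ds1 d12 1 1 /dev/did/rdsk/d17s0

# create the mirror and attach the second submirror
metainit -s ds1 d10 -m d11
metattach -s ds1 d10 d12

# carve the actual volumes out of the mirror as soft partitions
metainit -s ds1 d100 -p d10 1g
metainit -s ds1 d101 -p d10 2g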

Shared Disksets and Local Disksets

Only disks that are physically located in multi-ported storage can be members of shared disksets. Disks in the same diskset operate as a unit: they can be used together to build mirrored volumes, and primary ownership of the diskset transfers as a whole from node to node.

Boot disks belong to the local diskset. Having local state database replicas in place is a prerequisite for creating shared disksets.

Replica management

  • Add local replicas manually (see the example after this list).
  • Put local state database replicas on slice 7 of the disks (as a convention) in order to maintain uniformity; shared disksets must have their replicas on slice 7.
  • Spread local replicas evenly across disks and controllers.
  • Support for shared disksets is provided by the SUNWmdm package.
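For example, adding the local replicas by hand on two hypothetical local disks (the device names are placeholders) could look like:

# three local state database replicas on slice 7 of each local disk
metadb -a -f -c 3 c0t0d0s7
metadb -a -c 3 c1t0d0s7

# verify replica placement
metadb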

Modifying /kernel/drv/md.conf

nmd      == maximum number of volumes (default 128)
md_nsets == maximum number of disksets (maximum 32, default 4)
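For illustration, raising those limits might involve a line like the following in /kernel/drv/md.conf (the values shown are only an example, and a reconfiguration reboot is required for the change to take effect):

# /kernel/drv/md.conf (excerpt)
name="md" parent="pseudo" nmd=256 md_nsets=8;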

Creating shared disksets and mediators

scdidadm -l c1t3d0

returns d17 as the DID device

scdidadm -l d17
metaset -s <disksetname> -a -h <node1> <node2>                     # creates the metaset
metaset -s <disksetname> -a -m <node1> <node2>                     # adds mediator hosts
metaset -s <disksetname> -a /dev/did/rdsk/d9 /dev/did/rdsk/d17     # adds the disks
metaset                                                            # returns values
metadb -s <disksetname>
medstat -s <disksetname>                                           # reports mediator status

Remaining syntax vis-a-vis Sun Cluster is identical to that for VxVM.
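In other words, once the diskset exists it is managed like any other device group; for a hypothetical diskset ds1 that would look like:

scstat -D                      # the shared diskset appears as a device group
scswitch -z -D ds1 -h node2    # switch its primary to node2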

IPMP and Sun Cluster

IPMP is cluster-unaware. To work around that, Sun Cluster uses a cluster-specific public network management daemon (pnmd) to integrate IPMP into the cluster.

The pnmd daemon has two capabilities:

  • populate CCR with public network adapter status
  • facilitate application failover

When pnmd detects that all members of a local IPMP group have failed, it consults a file called /var/cluster/run/pnm_callbacks. This file contains entries that would have been created by the activation of LogicalHostname and SharedAddress resources. It is the job of hafoip_ipmp_callback to decide whether to migrate resources to another node.

scstat -i       # view IPMP configuration
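For reference, the public adapters themselves are grouped using standard Solaris IPMP configuration; a sketch for a hypothetical adapter qfe0 in an IPMP group named sc_ipmp0 (the hostnames, group name, and adapter are placeholders) might be:

# /etc/hostname.qfe0 (excerpt)
node1 netmask + broadcast + group sc_ipmp0 up addif node1-qfe0-test deprecated -failover netmask + broadcast + up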
