Monday, September 13, 2010

Auto LUN Migration in HP XP24000

Auto LUN migration lets you move V-Vols between pools in the XP. The migration is transparent to the end users and sysadmins, and the CU:LDEV remains the same as it was before the migration.

Create the V-Vol in the desired pool. While creating the V-Vol, keep in mind that the block size must be exactly the same as that of the source V-Vol. Check this by navigating through
GO -> Customized Volume and selecting the size in blocks.
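If the source LUN is also presented to a Linux host, the block count can be cross-checked from the host side too (a quick sketch; /dev/sdc is a hypothetical device name, and the value returned is in 512-byte blocks):
#blockdev --getsz /dev/sdc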

Once that's done, mark the new LUN as a reserved LUN. Navigate through
AutoLUN --> AutoLUN --> Physical --> Plan --> Attribute --> V-Vol group (V-Vol), then right-click the LUN and set it as a reserved LUN.

Now we are ready for the migration.

Manual Migration --> Source V-Vol -->Target -->Destination V-Vol -->Set

Once it's applied it will show the migration status. (It took around 45 minutes to migrate a 250GB LUN from an internal to an external pool.)

The migration status can also be checked from the History tab.

Once the migration is done, keep in mind that the CU:LDEV will not change on the host. The xBox will change, i.e. the source CU:LDEV will now be shown under the new destination xBox that was created for the migration. The same can also be seen from the Migration History tab.

Now we need to delete the source LUN (inside the old xBox). But before that we need to make it a normal LUN again. Navigate through
AutoLUN --> AutoLUN --> Physical --> Plan --> Attribute --> V-Vol group (V-Vol), then right-click and set it as a normal LUN.

Then go to Release V-Vol from a pool, select the appropriate cache partition & pool of the source LUN, select the LUN and hit SET.
Finally delete the V-Vol group and apply once done.


Finally, before we leave the XP we need to run Discard Zero Data on the LUN (the one on which we did the Auto LUN migration) to reclaim the unused space.



Thursday, September 9, 2010

LINUX - New Device Addition & Deletion

Often we come across a situation where storage was presented to a Linux machine but the system admins are not able to find the new device. There are some utilities from QLogic and HP which help to scan for the new device, but they might not help in every situation. Use the below command to scan for the new LUN. It helped me in almost 95% of the cases (I had to reboot in the remaining ones).

#ls /sys/class/scsi_host

You will receive the list of SCSI hosts on the Linux machine.

After that, run the following command:
#echo "- - -" > /sys/class/scsi_host/host#/scan
where 'host#' should be replaced with the host name and number returned by the previous command.
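If the server has several HBAs, a small loop (a sketch using the same sysfs path as above) rescans all of them in one go:
#for h in /sys/class/scsi_host/host*; do echo "- - -" > "$h/scan"; done
Then verify that the new LUN is visible: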

#fdisk –l
#lvmdiskscan
#multipath -ll


Deleting a LUN from the host.

#multipath -ll
mpath5 (36006016037e02200a800e0558015df11) dm-9 DGC,RAID 5
[size=50G][features=1 queue_if_no_path][hwhandler=1 emc][rw]
\_ round-robin 0 [prio=2][active]
\_ 2:0:1:1 sdg 8:96 [active][ready]
\_ 3:0:1:1 sdm 8:192 [active][ready]
\_ round-robin 0 [prio=0][enabled]
\_ 2:0:0:1 sdd 8:48 [active][ready]
\_ 3:0:0:1 sdj 8:144 [active][ready]

Note down the device name (dm) and the complete SCSI address (host:channel:id:lun) and execute the following:
#pvremove /dev/dm-9
#echo "scsi remove-single-device 3 0 0 1" >/proc/scsi/scsi
Repeat the above for all SCSI addresses.
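Alternatively, the paths can be dropped through sysfs after flushing the multipath map first (a sketch reusing the device names from the example above):
#multipath -f mpath5
#for d in sdg sdm sdd sdj; do echo 1 > /sys/block/$d/device/delete; done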

Once done, lvmdiskscan & multipath -ll won't show the disk, and it can now be removed from the storage.

Friday, September 3, 2010

EMC Symmetrix DMX

EMC Symmetrix DMX-3 brings one of the most powerful networked storage solutions available. It builds on the proven success of the Symmetrix Direct Matrix Architecture while bringing new levels of capacity and performance, so it benefits from a powerful combination of incremental scalability, constant availability, and exceptional data mobility.
Symmetrix DMX-3 enables massive consolidation to deliver all the benefits of tiered storage in one system. And it offers the flexibility to address the changing needs of business—quickly and effectively. For the most extreme, demanding storage environments, Symmetrix DMX-3 provides a powerful solution that’s also remarkably simple to manage.

The DMX-3 consists of a single system bay and from one to eight storage bays. The system bay contains the 24-slot card cage, service processor, power modules, and battery backup unit (BBU) assemblies. The storage bays contain disk drives and associated BBU modules. In a highly scalable component and cabinet configuration, the DMX-3 has the capacity, connectivity, and throughput to handle a wide range of high-end storage applications.

It's an active-active storage system. In an active-active storage system, if there are multiple interfaces to a logical device, they all provide equal access to it; all interfaces to a device are active simultaneously.

The DMX-3 applies a high degree of virtualization between what the host sees and the actual disk drives. A Symmetrix device is not a physical disk; it is a logical volume address that the host can address. Before a host can actually see a Symmetrix device, a path needs to be defined, which means mapping the device to a front-end director (FA) port and then setting the FA port attributes for the specific host.

We can create up to four mirrors for each Symmetrix device. The mirror positions are designated M1, M2, M3 and M4. When we create a device and specify its configuration type, the Symmetrix system maps the device to one or more complete disks or parts of disks known as hyper volumes (hypers). As a rule, a device maps to at least two mirrors, i.e. hypers on two different disks, to maintain multiple copies of data.
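To see how the mirror positions of a particular device map onto hypers, symdev show can be used (a hedged example; the SID and device ID here are simply reused from the scripts later in this post):
#symdev -sid 888 show 052A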

HP-UX: With HP-UX, the DMX supports PV Links, the native 11.31 multipathing, and PowerPath. Unlike the CX, we can use any of the paths as the first device in a volume group since all paths are active. But keep in mind that the load should be distributed between the HBAs, so check the complete hardware path and then extend the volume group.

Set the below flag for HP-UX 11.11 & 11.23 on FA port

#set port 7a:0 Volume_Set_Addressing=enable;
#set port 8a:0 Volume_Set_Addressing=enable;

Set the below flag for HP-UX 11.31 on FA port

#set port 7a:1 Volume_Set_Addressing=enable, SPC2_Protocol_Version=enable, SCSI_Support1=enable;
#set port 9a:1 Volume_Set_Addressing=enable, SPC2_Protocol_Version=enable, SCSI_Support1=enable;

As shown above, port flags have traditionally been set per FA port based on the specific flavor of UNIX, HP-UX in particular. The same can also be done per host HBA WWN rather than per FA port, which lets us map different host platforms to the same storage ports. Flagging per HBA WWN does not solve all problems with HP-UX, though: this platform still does not understand LUN IDs greater than 7, and when mapping LUNs for HP-UX to storage ports already in use by other platforms we will in most cases get LUN IDs greater than 7. If we use ordinary mapping/masking scripts, those LUNs will be invisible to HP-UX.
A solution is to use the '-dynamic_lun' option in the masking script. It assigns the host LUN ID according to the HBA and not according to the storage-port LUN ID:

#cat mask.hpux1
symmask -sid 888 -wwn 50060b0000306d6e add devs 021F -dynamic_lun -dir 7B -p 1 -noprompt
symmask -sid 888 -wwn 50060b0000306de2 add devs 021F -dynamic_lun -dir 9B -p 1 -noprompt
symmask refresh -nop
symcfg discover

Generally we can assign specific host LUN IDs with the '-lun' option, but in most cases this is not required.
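A hedged example of the '-lun' option (same WWN and device as in the script above; the host LUN ID 005 is just an illustration):
symmask -sid 888 -wwn 50060b0000306d6e add devs 021F -lun 005 -dir 7B -p 1 -noprompt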

When connecting a new HP-UX 11.31 host we need to set the below port flags, which help us use the agile devices.

symmask -wwn 500110a00085bcf6 set hba_flags on SC3,SPC2,OS2007 -enable -dir 8a -p 0
symmask -wwn 500110a00085bcf4 set hba_flags on SC3,SPC2,OS2007 -enable -dir 10a -p 0
symmask refresh -nop
symcfg discover

Verify the above by using
#symmaskdb list database -dir 8a -p 0 -v
It should show the following for the specific WWN:

Port Flag Overrides : Yes
Enabled : SPC2_Protocol_Version(SPC2)
SCSI_Support1(OS2007)

To mask a new device to a host we first need to check the free devices on the DMX. The below command is used to see the free devices:
#symdev list -noport

Make a note of the required Sym device and also check the Config column to see whether it's a 2-Way Mir or RAID-5 device. Next we need to find on which FA ports the host is zoned. The below commands will tell us:

#symmask list hba
#symmaskdb list devs -wwn 500110a00085bcf4

Once the FA ports are found, we need to map the device to them. The below commands illustrate both meta creation and mapping.

#vi meta.server
form meta from dev 052A, config=striped, stripe_size=2cyl;
add dev 052B to meta 052A;
form meta from dev 052C, config=striped, stripe_size=2cyl;
add dev 052D to meta 052C;

And then execute it using
#symconfigure -sid 888 -f meta.server preview
#symconfigure -sid 888 -f meta.server prepare
#symconfigure -sid 888 -f meta.server commit

Once done, find the free LUN addresses using the below command:

#symcfg list -dir 8a -p 0 -addr -avail
Make a note of the vbus, TID and LUN; the free addresses are marked with *. Create the map file as below. You can also use ':' and the keyword 'starting target' when you want to map a range of Sym devices.

#vi map.server
map dev 05AA to dir 9D:0, target=0, lun=02C;
map dev 05AA to dir 7D:0, target=0, lun=02B;
map dev 05AC:05AE to dir 9D:0 starting target=0, lun=035;
map dev 05AC:05AE to dir 7D:0 starting target=0, lun=035;

Save the file and commit the mapping using the below commands:
#symconfigure -sid 888 -f map.server preview
#symconfigure -sid 888 -f map.server prepare
#symconfigure -sid 888 -f map.server commit

If there is any problem in the map file, the preview itself will throw the errors.

Once the mapping is done, just mask the devices to the specific host WWNs:

#vi mask.server
symmask -sid 888 -wwn 2100001b321a8fef add devs 02AD,0582 -dir 7D -p 0 -noprompt
symmask -sid 888 -wwn 2101001b323a8fef add devs 02AD,0582 -dir 9D -p 0 -noprompt
symmask refresh -nop
symcfg discover

Just execute the command using
#sh -x mask.server

You can also check the masking again by using the below command. It will show on which FA port and WWNs the Sym device is mapped and masked.
#symmaskdb list assignments -dev 01CE

To delete a device from the host side, make a note of the Sym device from the below command:
#sympd list
The above command shows both the physical path of the disk and the corresponding Sym device ID.

Make sure that the device is mapped & masked to the specific FA and host WWN respectively using the above symmaskdb command. There may be a situation where the device is mapped and masked to multiple hosts of a cluster, and decommissioning one node of the cluster doesn't mean the LUN should be deleted.

Create the unmask file as below:
#vi unmask.server
symmask -sid 888 -wwn 2100001b321a39ef -dir 7D -p 0 remove devs 059C,059D,05A5,05A0,059F,05A4
symmask -sid 888 -wwn 2101001b323a39ef -dir 9D -p 0 remove devs 059C,059D,05A5,05A0,059F,05A4
symmask refresh -nop
symcfg discover

Execute it using
#sh -x unmask.server

Once it's unmasked, do a write disable of the device. (Note: if the device is still used by other nodes of a cluster, just exit after the unmask.)

#vi write_disable.server
symdev -sid 888 -nop write_disable 052A
symdev -sid 888 -nop write_disable 052B

Execute it using
#sh -x write_disable.server

Once done, go ahead with the unmap:
#vi unmap.server
unmap dev 052A from dir 9D:0;
unmap dev 052A from dir 7D:0;

Execute it using
#symconfigure -sid 888 -f unmap.server preview
#symconfigure -sid 888 -f unmap.server prepare
#symconfigure -sid 888 -f unmap.server commit

Now the below command will show if the device is free or not
#symdev list -noport

A ???:? will be shown against the device if it's free (an * represents that the device is still mapped to an FA port).

If the device that we unmapped is a meta, we can also dissolve it using the below command:

#vi dissolve.server
dissolve meta dev 052A;

Execute it using
#symconfigure -sid 888 -f dissolve.server preview
#symconfigure -sid 888 -f dissolve.server prepare
#symconfigure -sid 888 -f dissolve.server commit

Solaris: With Solaris, the DMX supports Veritas DMP & PowerPath. Check with Veritas for the Symmetrix-specific ASL.

LINUX: With Linux we can use PowerPath or the inbuilt device mapper. To set the host flags per WWN on the FA ports:

#vi port.server
symmask -wwn 2101001b32a8c3d5 set hba_flags on D -enable -dir 7b -p 1
symmask -wwn 2100001b3288c3d5 set hba_flags on D -enable -dir 9b -p 1
symmask refresh -nop
symcfg discover

Execute it using
#sh -x port.server
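For the device-mapper option, the in-box multipath tools already carry defaults for Symmetrix; a minimal /etc/multipath.conf sketch (the friendly-names setting is only a preference, not a requirement) would be:
defaults {
        user_friendly_names yes
}
Reload the maps with #multipath -r and verify with #multipath -ll.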



Thursday, September 2, 2010

Veritas DMP & Volume Manager

Veritas DMP & Volume Manager are mainly used on Solaris (though they support all flavors of UNIX). DMP supports almost all storage arrays.
Before we actually start using DMP, admins first need to check whether the Veritas ASL (Array Support Library) for the specific array is installed.

#vxddladm listsupport
The above command will list all the libraries that are installed and the storage arrays that are supported. If the specific storage library is not present, get it from Veritas. Once installed you can check it with the below command:

#vxddladm listsupport libname=
It will show you the exact storage model and the controller version that is being supported.

Once the above is done, the admin can go ahead with using the disk.

Commands to configure the new disk and see it:
#devfsadm
#format

Format the above disk and run
#vxdctl enable
#vxdisk list
Now the new disk is under Veritas control and can be used.

Other primarily used DMP commands:
#vxdisk list
#vxdisk path
#vxdmpadm iostat show all
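To drill down to the individual paths behind a single DMP node (the device name here is hypothetical):
#vxdmpadm getsubpaths dmpnodename=c2t0d0s2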


The below commands are used for initializing disks and creating disk groups, volumes, snapshots etc.
#vxdisksetup -i c1t0d0s0
#vxdg init datadg datadg01=c1t0d0s0
#vxdg -g datadg add disk datadg02=c2t0d0s0
#vxassist -g datadg make datavol 10m layout=mirror (can use stripe also)
#mkfs -F vxfs /dev/vx/rdsk/datadg/datavol
#mount -F vxfs /dev/vx/dsk/datadg/datavol /FS

To create a mirror volume
#vxassist -g datadg mirror datavol

To create mirror to a specific disk
#vxassist -g datadg mirror datavol datadg03

To remove a plex that contains subdisks from disk datadg02:
#vxassist -g datadg remove mirror datavol !datadg02

Resizing commands
#vxresize -g datadg datavol 50m
#vxresize -g datadg datavol +50m

Removing a disk
#vxassist -g datadg remove volume datavol
#vxdg -g datadg rmdisk datadg02
#vxdg destroy datadg (for the last disk)

Snapshots
#vxassist -g datadg -b snapstart datavol
#vxassist -g datadg snapshot datavol snapvol

Reassociate
#vxassist -g datadg snapback snapvol

Deassociate
#vxassist -g datadg snapclear snapvol

Destroy
#vxassist -g datadg remove volume snapvol
Checking the size of a DG:
#vxprint -g vg01 -dF "%publen" | awk 'BEGIN {s = 0} {s += $1} END {print s/2097152, "GB"}'

Solaris Volume Manager

The Solaris Volume Manager (SVM) is a free component of Solaris 9 and Solaris 10. It was previously known as Solstice DiskSuite. SVM provides mechanisms to configure physical slices of hard drives into logical volumes. As such it introduces an additional level of complexity and should not be used unless absolutely necessary; on Solaris 10, ZFS is a better alternative. Logical volumes can be configured to provide mirroring and RAID-5. In its simplest form SVM uses traditional Solaris disk partitioning (up to eight partitions, or slices in Solaris terminology) to build virtual disks called volumes.

Any partition can be used to create volumes, but it is common practice to reserve slice s7 for the state database replicas. Database replicas are created on selected disks and hold the SVM configuration data. It is the administrator’s responsibility to create these state databases (using the metadb command) and distribute them across disks and controllers to avoid any single points of failure.
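For example (the slice names are hypothetical), three replicas on each of two slices sitting on different controllers can be created in one shot:
#metadb -a -f -c 3 c0t0d0s7 c1t0d0s7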

Commands used in SVM

#metadb -a -f c0t0d0s3
a - add
f - force

#metadb
#metainit d0 3 1 c0t0d0s3 1 c0t0d0s4 1 c0t0d0s5
d0 - metadevice name
The above command creates a concatenation of the 3 slices.

#metainit d0 1 3 c0t0d0s3 c0t0d0s4 c0t0d0s5
The above command creates a stripe across the 3 slices.

#newfs /dev/md/rdsk/d0
#mkdir /FS
#mount /dev/md/dsk/d0 /FS
#metastat
#metaclear d0
#metadb -d -f c0t0d0s3
#metastat

Now if you want to create a soft partition from a metadevice:

#metainit d10 -p d1 1500m
d10 - soft partition name
d1 - source metadevice
#metainit d11 -p d1 1500m
#newfs /dev/md/rdsk/d10
#newfs /dev/md/rdsk/d11
#mkdir /FS
#mount /dev/md/dsk/d10 /FS

Adding 1500m to d10
#metattach d10 1500m
#growfs -M /FS /dev/md/rdsk/d10

For adding a disk to meta
#metattach d0 c0t0d0s6
#growfs -M /FS /dev/md/rdsk/d0

Mirroring
#metainit d0 2 1 c0t0d0s3 1 c0t0d1s3
#metainit d1 2 1 c0t0d2s3 1 c0t0d3s3
#metainit d10 -m d0
#init 6
#metattach d10 d1
#metastat
#newfs /dev/md/rdsk/d10

To clear the meta information
#metaclear d10
#metaclear d0 d1

RAID-5
#metainit d10 -r c0t0d0s3 c0t0d1s3 c0t0d2s3 c0t0d3s3
#metastat
#newfs /dev/md/rdsk/d10



EMC Symmetrix VMAX

EMC Symmetrix VMAX provides high-end storage for the virtual data center. Symmetrix VMAX scales up to 2 PB of usable protected capacity and consolidates more workloads with a much smaller footprint than alternative arrays.

Its innovative EMC Symmetrix Virtual Matrix Architecture seamlessly scales performance, capacity, and connectivity on demand to meet all application requirements. Symmetrix VMAX can be deployed with Flash Drives, Fibre Channel, and Serial Advanced Technology Attachment (SATA) drives, with tiering fully automated with FAST. It supports virtualized and physical servers, including open systems, mainframe, and IBM i hosts. (Source www.emc.com)

VMAX follows a virtual matrix architecture. The processors are called engines and they share a global memory. So if one engine receives a request, the computation can be done by any of the engines (using the global memory), and once done the result is returned to the engine that received the original request from the server.

Unlike the DMX, there is no need to set port-specific flags in VMAX; UNIX, Windows and ESX hosts can all be connected to the same FA port. Also there is no need to check the vbus or LUN ID while mapping a device. In fact there is no mapping step in VMAX, just masking. But for HP-UX we still need to set flags specific to the host WWN.


When integrating a new host we need to make sure that a port group already exists.

So first, create a port group of the FA ports before starting any storage allocation:

# symaccess -sid 0444 create -name VMAX_0444_PG_9E0 -type port -dirport 9E:0
# symaccess -sid 0444 create -name VMAX_0444_PG_7E0 -type port -dirport 7E:0

Once the port group is created, the initiator group, aliases, storage group and view need to be created. Below is an example script for integrating two hosts (fe1 & fe2, in a cluster named fecl).

WWN of fe1 hba0 : 5001438003bc3480
WWN of fe2 hba0 : 5001438003bc3a3c
SID of VMAX : 0444

#vi symaccess.fecl

# Initiator Groups
symaccess -sid 0444 create -name fecl_IG_c61 -type init -wwn 5001438003bc3480
symaccess -sid 0444 -name fecl_IG_c61 add -type init -wwn 5001438003bc3a3c

# Aliases
symaccess -sid 0444 rename -wwn 5001438003bc3480 -alias fe1/fe1_vmhba0
symaccess -sid 0444 rename -wwn 5001438003bc3a3c -alias fe2/fe2_vmhba0

# Storage Groups (after device created and bound)
symaccess -sid 0444 create -name fecl_SG -type storage devs 081C

# VIEWs (in condition that port group exists)
symaccess create view -name fecl_VW_c61 -ig fecl_IG_c61 -sg fecl_SG -pg VMAX_0444_PG_9E0


But before we actually run the above script we need to create the thin devices so that their IDs can be added to the storage group.

#vi symconfigure.fecl
create dev count=2, size=100GB emulation=FBA, config=TDEV, binding to pool Tier1_400GB;

In the above script we are creating 2 thin devices of 100GB each from the Tier1_400GB pool. The maximum single device size that can be created is 240GB, so if someone needs a 500GB LUN it will be a meta. If we want the meta member size to be a specific value, we need to add the below keyword before the create dev command.

set symmetrix auto_meta_member_size=100GB;
create dev count=4, size=500GB emulation=FBA, config=TDEV, binding to pool Tier1_400GB;

The above commands first set the meta member size to 100GB and then create 4 thin devices (LUNs) of 500GB each, so each 500GB LUN contains 5 x 100GB meta members.
To execute it just run as below:

#setenv SYMCLI_SID 0444 (Sets the env for Symmetrix SID 0444, if you have more than 1 Symmetrix)

#symconfigure -f symconfigure.fecl preview -nop
#symconfigure -f symconfigure.fecl prepare -nop
#symconfigure -f symconfigure.fecl commit -nop

Now once you have the thin device, just add its ID to the storage group:

symaccess -sid 0444 -name fecl_SG -type storage add devs 0BC8

To execute the script just run

#sh -x symaccess.fecl
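To verify that the view was created with the expected initiators, ports and devices (the view name is the one from the script above):
#symaccess -sid 0444 show view fecl_VW_c61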

If we are integrating an HP-UX server with the VMAX, we need to set flags on its initiator group, i.e. based on the HP-UX server's port WWNs:

# Flags for HP-UX
symaccess -sid 0444 -name hpux1_IG_c61 -type initiator set ig_flags on V -enable
symaccess -sid 0444 -name hpux1_IG_c100 -type initiator set ig_flags on V -enable
symcfg discover


To remove a device from a server, it first needs to be unmapped (removed from the storage group):

#vi symaccess.fecl
symaccess -sid 0444 -name fecl_SG -type storage remove devs 0A36 -unmap


#sh -x symaccess.fecl


Once it's removed from the storage group, unbind the device from the storage pool, dissolve it if it's a meta, and finally delete it:

#vi symconfigure.fecl
unbind tdev 0A36 from pool Tier1_400GB;
dissolve meta dev 0A36;
delete dev 0A36;

Run the script as
#symconfigure -f symconfigure.fecl preview -nop (then prepare and commit)

Some of the primarily used commands in VMAX

#symcfg list -pools
#symcfg list -pool -thin -v -gb
#symcfg list -pool -gb
#symaccess list -type init
#symaccess show #ig-name# -type init
#symaccess list -type port
#symaccess show #port-name# -type port
#symaccess list -type storage
#symcfg list -pool -tdev -gb
#symaccess list assignment -dev 0807
#symaccess list view
#symaccess show view
#symaccess -name #sg-name# -type storage remove devs 0807 -unmap
#symconfigure -cmd "unbind tdev 0807 from pool ;" preview -nop
#symconfigure -cmd "unbind tdev 0807 from pool ;" prepare -nop
#symconfigure -cmd "unbind tdev 0807 from pool ;" commit -nop
#symconfigure -cmd "delete dev 0807;" preview -nop
#symconfigure -cmd "delete dev 0807;" prepare -nop
#symconfigure -cmd "delete dev 0807;" commit -nop
#symdev list -noport (To check for the traditional thick LUN mainly used for redo logs)






HP StorageWorks XP24000

HP StorageWorks XP24000/XP20000 Disk Arrays are large enterprise-class storage systems designed for organizations that simply cannot afford any downtime. The XP combines a fully redundant hardware platform with unique data replication capabilities that are integrated with clustering solutions for complete business continuity.

XP Software decreases the complexities of data management. Through thin provisioning, organizations can improve storage utilization. Virtualization simplifies the management of diverse systems. Dynamic partitioning allows for the distribution of storage resources. And consolidation becomes a reality by managing open systems, mainframe, and HP NonStop applications all on a single XP. With the XP array, organizations can confidently manage mission-critical IT.

With the XP24K we create thin pools and allocate the disks to the hosts whenever required. Create different pools for internal and external storage. Also select the appropriate cache partition while creating the V-Vol group.

Solaris: With Solaris we mostly use Veritas DMP. Check for the appropriate ASL. Once done, select the appropriate host mode and check the settings as per the below recommendations.

XP24000 Host mode: 09[Solaris]
Host mode option: 7

Recommended Kernel parameters:
Verification of timeout & queue depth
#adb -k
physmem 7d6f94
ssd_max_throttle/D # if ssd_max_throttle is defined, show its value
ssd_max_throttle:
ssd_max_throttle: 256
sd_max_throttle/D
sd_max_throttle:
sd_max_throttle: 256
ssd_io_time/D
ssd_io_time:
ssd_io_time: 60
sd_io_time/D
sd_io_time"
sd_io_time:60

$q

Changing the timeout and queue depth values through /etc/system. Do it only if the values shown by adb differ from the recommended ones. New values are activated after a reboot.
set ssd:ssd_io_time=0x3c
# if ssd_io_time is defined in the kernel and its value is not the default
set sd:sd_io_time=0x3c
# if sd_io_time is defined in the kernel and its value is not the default
set ssd:ssd_max_throttle=8
# if ssd_max_throttle is defined in the kernel and its value is not the default
set sd:sd_max_throttle=8
# if sd_max_throttle is defined in the kernel and its value is not the default

Load balancing (currently not changed; we use the default).
The configuration guide for Sun/Solaris does not include a recommendation for specific load balancing between HBAs with VxDMP.
Default iopolicy=minimumq : I/O is sent on paths that have the minimum number of I/O requests in the queue.
To verify current policy:
# vxdmpadm listenclosure
ENCLR_NAME ENCLR_TYPE ENCLR_SNO STATUS ARRAY_TYPE LUN_COUNT
=======================================================================================
xp24k0 xp24k 133BE CONNECTED A/A 3
# vxdmpadm getattr enclosure xp24k0 iopolicy
ENCLR_NAME DEFAULT CURRENT
============================================
xp24k0 MinimumQ MinimumQ
To set another policy:
vxdmpadm setattr enclosure xp24k0 iopolicy=round-robin
Re-label disk imported from external STU (only for imported external LUNs):
Collect the VTOC into a file:
prtvtoc -h /dev/rdsk/c#t#d#s2 > c#t#d#s2.vtoc.before
Autoconfigure
format -> pick LUN -> type -> auto configure
Label the drive from format prompt
format> label
Verify updated drive information from format prompt
format> current
format> quit
Print the new VTOC into a file:
prtvtoc -h /dev/rdsk/c#t#d#s2 > c#t#d#s2.vtoc.after
Compare the vtoc.before and vtoc.after files. If they are not the same, edit the vtoc.before file so that it reflects the number of sectors in the vtoc.after file.
Write the previous partition table to the LUN
fmthard -s c#t#d#s2.vtoc.before /dev/rdsk/c#t#d#s2
Confirm the geometry is the same by checking the partition table against the one from step 1.
format -> pick LUN -> partition -> print
Migration with vxassist mirror (after the new disks are allocated and visible):
Verify whether a swap disk is on the external storage. If yes, initialize a new disk, add it to the VG, create an LV and add it to the swap configuration (swap -a). Swap disks do not require migration.
root # swap -l
root# vxdctl enable
root# vxdisk list
root# vxdisk init XP10K-12K0_0
root # vxdisk init XP10K-12K0_1
root # vxdg -g vgeva01 adddisk vgeva0103=XP10K-12K0_0 vgeva0104=XP10K-12K0_1
root # vxdisk list
DEVICE TYPE DISK GROUP STATUS
EMC_CLARiiON0_0 auto:cdsdisk vgeva0101 vgeva01 online
EMC_CLARiiON0_1 auto:cdsdisk vgeva0102 vgeva01 online
XP10K-12K0_0 auto:cdsdisk vgeva0103 vgeva01 online
XP10K-12K0_1 auto:cdsdisk vgeva0104 vgeva01 online
c1t1d0s2 auto:none - - online invalid
root # vxprint -g vgeva01 -v
TY NAME ASSOC KSTATE LENGTH PLOFFS STATE TUTIL0 PUTIL0
v WebSphere7.0 fsgen ENABLED 10485760 - ACTIVE - -
v WebSphere7_64 fsgen ENABLED 10485760 - ACTIVE - -
v omsusers fsgen ENABLED 41943040 - ACTIVE - -
v omsuser01 fsgen ENABLED 41943040 - ACTIVE - -
v oravl01 fsgen ENABLED 35651584 - ACTIVE - -
The following should be done for each volume:
root # vxassist -g vgeva01 mirror oravl01 alloc=vgeva0103,vgeva0104
root # vxprint -g vgeva01 -ht oravl01
V NAME RVG/VSET/CO KSTATE STATE LENGTH READPOL PREFPLEX UTYPE
PL NAME VOLUME KSTATE STATE LENGTH LAYOUT NCOL/WID MODE
SD NAME PLEX DISK DISKOFFS LENGTH [COL/]OFF DEVICE MODE
SV NAME PLEX VOLNAME NVOLLAYR LENGTH [COL/]OFF AM/NM MODE
SC NAME PLEX CACHE DISKOFFS LENGTH [COL/]OFF DEVICE MODE
DC NAME PARENTVOL LOGVOL
SP NAME SNAPVOL DCO
EX NAME ASSOC VC PERMS MODE STATE
SR NAME KSTATE
v oravl01 - ENABLED ACTIVE 35651584 SELECT - fsgen
pl oravl01-01 oravl01 ENABLED ACTIVE 35651584 CONCAT - RW
sd vgeva0101-01 oravl01-01 vgeva0101 0 20971520 0 EMC_CLARiiON0_0 ENA
sd vgeva0101-03 oravl01-01 vgeva0101 62914560 14680064 20971520 EMC_CLARiiON0_0 ENA
pl oravl01-02 oravl01 ENABLED ACTIVE 35651584 CONCAT - RW
sd vgeva0103-03 oravl01-02 vgeva0103 52428800 35651584 0 XP10K-12K0_0 ENA
root # vxplex -g vgeva01 det oravl01-01
root# vxplex -g vgeva01 -o rm dis oravl01-01
After all volumes are migrated, check that the old disks are not in use (no vgeva0101, vgeva0102) and remove them from the VG.
If SmartMove was not active (due to the VxVM version), perform Discard Zero Data on the new V-Vols.

In order for Space Reclamation / SmartMove to be available, the disks should be visible as thinrclm with 'vxdisk list'. This is supported starting with VRTS SF 5.0 MP3:
root- # vxdisk list
DEVICE TYPE DISK GROUP STATUS
c1t0d0s2 auto:none - - online invalid
c1t1d0s2 auto:none - - online invalid
c2t50001FE1500FD709d1s2 auto:cdsdisk c2t50001FE1500FD709d1s2 xp24k online
c2t50001FE1500FD709d2s2 auto:cdsdisk - - online
c2t50060E801533BE01d0s2 auto:cdsdisk xp24k01 xp24k online thinrclm
c2t50060E801533BE01d1s2 auto:cdsdisk xp24k02 xp24k online thinrclm
Disks that are visible as 'thinrclm' enable both space reclamation and SmartMove operations.
Example of space reclamation on both disks:
root- # vxdisk reclaim c2t50060E801533BE01d0s2 c2t50060E801533BE01d1s2
Reclaiming thin storage on:
Disk c2t50060E801533BE01d1s2 : Done!
Note: although only the last disk is reported, both disks are included in the reclaim process.

Example of space reclamation at the file-system level:
root- # fsadm -R /xplv3

SmartMove operations for 'thinrclm' disks are switched on by default. They are relevant for all VRTS mirror/move commands, like:
root- # vxassist -g xp24k mirror lv3 xp24k01
One can explicitly turn the SmartMove feature on/off by setting the appropriate attribute value in the /etc/default/vxsf file:
usefssmartmove=none # SmartMove is disabled on the host
usefssmartmove=all # SmartMove is enabled for all copy operations on the host
usefssmartmove=thinonly # SmartMove is enabled only for volumes that contain thin/thinrclm devices


AIX:
XP24K Host Mode: 0F[AIX]
Host Mode Option: N/A.
For AIX HACMP check on Mode Option: 15
Prerequisites for XP24K connection to AIX: installation of HP MPIO for XP.
Prerequisites for the HP MPIO for XP 5.4.0.2 installation:
The MPIO solution cannot coexist with other non-MPIO multipathing products on the same server. The MPIO solution cannot coexist with HDLM on the same server (lslpp -al | egrep -i dlm)
SCSI-3 persistent reservations are not supported.
HACMP is supported when using enhanced concurrent volume groups within the cluster. Volume groups not controlled by the cluster can be standard volume groups.
After re-enabling previously failed paths it can take a while (a few minutes) until MPIO takes all IO paths online again. This is not a defect.
During an initial open of a raw MPIO disk device, MPIO will check all IO paths and try to re-enable previously failed paths automatically. This is also not considered a defect.
"AIX 5.3 TL04, SP01 or greater. For AIX 5.3, TL06, it is necessary to
install SP02, or greater (check with ‘oslevel –s’)."
For AIX 5.3:
IY67625, IY79741 and IY79862 must be installed for MPIO ODM (check with ‘instfix -i | egrep 'IY67625|IY79741|IY79862')
MPIO can be used with HACMP only when Enhanced Concurrent Volume Groups are used with ML03 and IY73087 (or higher).
Base XP ODM must not be installed (the same a Single Path ODM – see below).
"AIX 6.1 TL00, SP01 or greater. AIX 6.1 is not supported on all XP
disk arrays. Check the Streams
documents for supported arrays."
"P5 VIOS ioslevel 1.2.1.1. Check with IBM for the currently
supported versions of VIOS."
The Single Path ODM fileset cannot coexist with the fileset of the HP XP MPIO solution on the same server.
"The HP XP MPIO solution can coexist with the HP EVA MPIO solution on the same server utilizing the same HBA (lslpp -al | egrep -i hsv)
"
Installation of MPIO for XP 5.4.0.2
# mkdir /tmp/hpmpio # or any working directory
# cp XPMPIO2.tar /tmp/hpmpio
# cd /tmp/hpmpio
# tar xvf XPMPIO2.tar
# inutoc $PWD
# installp -acd . -e installp.log ALL
# lslpp -L devices.fcp.disk.HP.xparray.mpio.rte # verification
Change XP24K adapter/disks attributes
# lsdev -Ct "*scsi*" # Find SCSI adapters
# chdev -a fc_err_recov=fast_fail -l fscsi0 [-P] # -P is when actual change will occur during reboot only
# chdev -a fc_err_recov=fast_fail -l fscsi1 [-P] # -P is when actual change will occur during reboot only
# cfgmgr -v
# chdev -l hdiskX -a reserve_policy=no_reserve
# chdev -l hdiskX -a algorithm=round_robin
# chdev -l hdiskX -a queue_depth=8
# chdev -l hdiskX -a rw_timeout=60 # usually not required since this is the default
# lsattr -El hdiskX
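When many XP hdisks are presented, a small loop saves typing (a sketch only; in practice filter the disk list down to the XP devices, otherwise chdev will fail on disks that lack these attributes):
# for d in $(lsdev -Cc disk -F name); do chdev -l $d -a reserve_policy=no_reserve -a algorithm=round_robin -a queue_depth=8; done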
Migration of data volumes with migratepv (after the new disks are visible and configured - see above):
Verify whether a swap disk is on the external storage. If yes, initialize a new disk, add it to the VG, create an LV and add it to the swap configuration (swap -a). Swap disks do not require migration.
root # swap -l
# lsvg | egrep -v 'root'
vgeva
# lsvg -p vgeva
hdisk2
hdisk3
# extendvg vgeva hdisk6 hdisk7
# migratepv hdisk2 hdisk6
# migratepv hdisk3 hdisk7
# lspv hdisk2
# lspv hdisk3
# reducevg vgeva hdisk2 hdisk3
# rmdev -l hdisk2 -dR
# rmdev -l hdisk3 -dR
# cfgmgr
# lsdev -Ccdisk
AIX VIO:
XP24K Host Mode: 0F[AIX]
Host Mode Option: N/A.
For AIX HACMP check on Mode Option: 15
Installation of MPIO for XP and changing devices (on physical host) - the same as on ordinary AIX host.
Migration Plan (based on migratepv at the guest level)
Install MPIO for XP
Zone host with XP24K. If original STU is CX, then one HBA should be un-zoned with CX and zoned with XP24K only.
Allocate and present LUNs of the same sizes as original ones.
Discover new disks
#cfgmgr

Change the attributes of the HBAs and the new disks according to the appropriate procedure.
Verify the current mapping under padmin:
$ lsmap -all
SVSA Physloc Client Partition ID
--------------- -------------------------------------------- ------------------
vhost0 U7778.23X.068B34A-V1-C11 0x00000002
VTD vtscsi0
Status Available
LUN 0x8100000000000000
Backing device hdisk2
Physloc U78A5.001.WIH70AE-P1-C12-T1-W50001FE15012AADF-L1000000000000
VTD vtscsi1
Status Available
LUN 0x8200000000000000
Backing device hdisk3
Physloc U78A5.001.WIH70AE-P1-C12-T1-W50001FE15012AADF-L2000000000000
VTD vtscsi2
Status Available
LUN 0x8300000000000000
Backing device hdisk4
Physloc U78A5.001.WIH70AE-P1-C12-T1-W50001FE15012AADF-L3000000000000
VTD vtscsi3
Status Available
LUN 0x8400000000000000
Backing device hdisk5
Physloc U78A5.001.WIH70AE-P1-C12-T1-W50001FE15012AADF-L4000000000000

Map the new LUNs (of the same sizes) to the appropriate server virtual adapters:
mkvdev -vdev hdisk6 -vadapter vhost0 -dev hdisk4_050
mkvdev -vdev hdisk7 -vadapter vhost0 -dev hdisk5_050
mkvdev -vdev hdisk8 -vadapter vhost0 -dev hdisk6_050
mkvdev -vdev hdisk9 -vadapter vhost0 -dev hdisk7_050
Login as root on a guest (ibm101) and discover new disks
ibm101:root: cfgmgr
ibm101:root: lsdev -Ccdisk
hdisk0 Available Virtual SCSI Disk Drive
hdisk1 Available Virtual SCSI Disk Drive
hdisk2 Available Virtual SCSI Disk Drive
hdisk3 Available Virtual SCSI Disk Drive
hdisk4 Available Virtual SCSI Disk Drive
hdisk5 Available Virtual SCSI Disk Drive
hdisk6 Available Virtual SCSI Disk Drive
hdisk7 Available Virtual SCSI Disk Drive
Add new disks to appropriate data VGs (not rootvg and not altinst_rootvg)
ibm101:root: extendvg vg501 hdisk6 hdisk7
Migrate original disks to new ones on a guest, verify that all PEs were really copied, exclude original disks from VG
ibm101:root: time migratepv hdisk2 hdisk6
ibm101:root: time migratepv hdisk3 hdisk7
ibm101:root: lspv hdisk2
ibm101:root: lspv hdisk3
ibm101:root: lspv hdisk6
ibm101:root: lspv hdisk7
ibm101:root: reducevg vg501 hdisk2 hdisk3
Inactivate and export altinst_rootvg group, then import it under another name
ibm101:root: varyoffvg altinst_rootvg
ibm101:root: exportvg altinst_rootvg
ibm101:root: importvg -y newinst_rootvg -n hdisk1
Create a clone of rootvg on a new disk and reboot from it.
(Note: via 'Software Installation and Maintenance -> Alternate Disk Installation' I did not succeed to boot with the new disk.)
(To open a console on the guest: padmin -> View/Modify Partitions -> select the physical host -> Open Terminal Window -> password of padmin. Then run 'mkvt -id 2', where 2 is the ID of the virtual guest LPAR.) To check it, use
/usr/local/ADMIN/scripts/Clone_Rootvg.ksh
ibm101:root: bootlist -m normal -o hdisk4
ibm101:root: reboot
Create a new altinst_rootvg and run the crontab script to copy the system image.
(I used 'Software Installation and Maintenance -> Alternate Disk Installation'. It's convenient since the system creates altinst_rootvg with the required arguments.)
/usr/local/ADMIN/scripts/Clone_Rootvg.ksh
Clean old_rootvg, newinst_rootvg and remove their disks on virtual guest
ibm101:root: varyoffvg old_rootvg
ibm101:root: exportvg old_rootvg
ibm101:root: varyoffvg newinst_rootvg
ibm101:root: exportvg newinst_rootvg
ibm101:root: rmdev -l hdisk0 -dR
ibm101:root: rmdev -l hdisk1 -dR
ibm101:root: rmdev -l hdisk2 -dR
ibm101:root: rmdev -l hdisk3 -dR
Unmap old disks on physical host vio101 from virtual guest (under padmin account)
$ rmvdev -vdev hdisk2
$ rmvdev -vdev hdisk3
$ rmvdev -vdev hdisk4
$ rmvdev -vdev hdisk5
Remove old disks on physical host vio101 (under root account)
vio101:root: rmdev -l hdisk2 -dR
vio101:root: rmdev -l hdisk3 -dR
vio101:root: rmdev -l hdisk4 -dR
vio101:root: rmdev -l hdisk5 -dR
Un-zone the old STU and reboot for verification. Note: first shut down the virtual guest, then reboot the physical host, then start the guest.
Perform Discard Zero Data on new V-Vols.


HP-UX:
XP24K Host Mode: 08[HP]
Host Mode Option: 12

Change XP24K disks attributes
The Configuration Guide for HP-UX recommends setting timeout=60 for volumes on the XP24K. So after the disks on the XP24K are allocated, perform:
# pvcreate /dev/dsk/c10t0d0
# pvchange -t 60 /dev/dsk/c10t0d0
# pvdisplay /dev/dsk/c10t0d0
Other attributes (queue depth and load balancing) are left at their defaults.
The Configuration Guide for HP-UX does not include any recommendations on queue depth and load balancing configuration.
On HP-UX before 11i v3 (11.31), PV Links do not provide any load balancing between paths to a LUN, so we suggest manually distributing the LUNs between different primary path links.
On HP-UX 11i v3 (11.31), the native MPIO provides load balancing to a LUN. Load balancing can be managed with the scsimgr command. The default load-balancing policy for high-end STUs is round-robin.
On HP-UX 11v3 (11.31) to verify current load-balancing policy:
# scsimgr get_attr -D /dev/rdisk/disk151 -a load_bal_policy
SCSI ATTRIBUTES FOR LUN : /dev/rdisk/disk151
name = load_bal_policy
current = round_robin
default = round_robin
saved =
I do not know how to manage queue depth on HP-UX versions earlier than 11i v3 (11.31).
On HP-UX 11.31 it can be managed with scsimgr. The default is 8.
On HP-UX 11v3 (11.31) to verify current queue depth:
# scsimgr get_attr -D /dev/rdisk/disk151 -a max_q_depth
SCSI ATTRIBUTES FOR LUN : /dev/rdisk/disk151
name = max_q_depth
current = 8
default = 8
saved =
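To change the queue depth (a sketch; the device file and the value of 16 are only examples), scsimgr can save a new value for the attribute:
# scsimgr save_attr -D /dev/rdisk/disk151 -a max_q_depth=16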
Migration of data volumes with pvmove (after the new disks are visible and configured - see above):
vgdisplay | egrep 'VG Name' | egrep -v 'vg00|vgroot'
VG Name /dev/vgeva01
vgdisplay -v /dev/vgeva01 | grep 'PV Name'
PV Name /dev/dsk/c0t0d1
PV Name /dev/dsk/c4t0d1
PV Name /dev/dsk/c0t0d2
PV Name /dev/dsk/c4t0d2
pvcreate /dev/dsk/c9t0d0
pvcreate /dev/dsk/c10t0d1
vgextend /dev/vgeva01 /dev/dsk/c9t0d0
vgextend /dev/vgeva01 /dev/dsk/c10t0d0
vgextend /dev/vgeva01 /dev/dsk/c10t0d1
vgextend /dev/vgeva01 /dev/dsk/c9t0d1
pvmove /dev/dsk/c0t0d1 /dev/dsk/c9t0d0
pvmove /dev/dsk/c0t0d2 /dev/dsk/c10t0d1
pvdisplay /dev/dsk/c0t0d1
pvdisplay /dev/dsk/c0t0d2
vgreduce vgeva01 /dev/dsk/c4t0d1
vgreduce vgeva01 /dev/dsk/c0t0d1
vgreduce vgeva01 /dev/dsk/c4t0d2
vgreduce vgeva01 /dev/dsk/c0t0d2
ioscan -nfCdisk | grep NO_HW
rmsf -H X.X.X…
HP-UX VM:

XP24K Host Mode: 08[HP] This value is set for physical host.
Host Mode Option: 12
Disk attributes are set exactly as on HP-UX. The timeout is defined for PVs and thus is set on the guest (not on the physical host).
Migration Plan (based on importing external LUNs):
Create zones in fabrics with XP24K, but do not activate new zones (don't include them in active zoneset).
On the external STU, present the original LUNs to the XP24K. Don't unpresent them from the original host.

On the XP24K, perform the import of the new external LUNs. Attributes: path group = 0 (right-click on an existing path!), ExG=1, CU=20. Fill in the external disk IDs (CU:LDEV - 20:X1, 2, ...).
On the XP24K, create V-Vols with the same capacities (by blocks!) as the original disks. Allocate all new V-Vols under CU FE (FE:Z1, FE:Z2, ...). Pay attention to the pools.
Fill in the new V-Vol IDs in the Excel map. Declare the new V-Vols as Reserved through AutoLUN.

After getting outage time: stop all guests
hpvm101# shutdown -h now
# hpvmstatus

Change guest startup to manual
# hpvmmodify -P hpvm101 -B manual
hpvm102:/opt/hpvm/bin > hpvmstatus -P hpvm101 -V | egrep -i 'start type'
Start type : Manual

Remove old disk mappings from guest configuration
# hpvmmodify -P hpvm101 -d disk:avio_stor:0,2,0:disk:/dev/rdisk/disk24
# hpvmmodify -P hpvm101 -d disk:avio_stor:0,2,3:disk:/dev/rdisk/disk31
# hpvmmodify -P hpvm101 -d disk:avio_stor:0,2,1:disk:/dev/rdisk/disk41
# hpvmmodify -P hpvm101 -d disk:avio_stor:0,2,2:disk:/dev/rdisk/disk46

Un-zone the physical host from the original STU and remove the NO_HW disks:
# ioscan -nNfCdisk
# rmsf -H …
Zone the physical host with the XP24K. Perform a scan so that the host does a PLOGI to the XP24K.
# ioscan -nNfCdisk
Create a new host group on the XP24K and map the external disks to the host group.
Scan for the new disks on the host. Fill in the external disk names in the Excel map.
# ioscan -nNfCdisk
# xpinfo -i
Map external disks according to Excel map to appropriate guests:
# hpvmmodify -P hpvm101 -a disk:avio_stor:0,2,0:disk:/dev/rdisk/diskY1
# hpvmmodify -P hpvm101 -a disk:avio_stor:0,2,3:disk:/dev/rdisk/diskY2
# hpvmmodify -P hpvm101 -a disk:avio_stor:0,2,1:disk:/dev/rdisk/diskY3
# hpvmmodify -P hpvm101 -a disk:avio_stor:0,2,2:disk:/dev/rdisk/diskY4
Start a guest
# hpvmstart -P hpvm101
Verify a guest
hpvm101# ioscan -nfCdisk
hpvm101# bdf
On each guest change timeout for all PVs to 60:
hpvm101# vgdisplay -v | grep 'PV Name'
hpvm101# pvchange -t 60 /dev/disk/diskXX
Change guest startup to automatic
# hpvmmodify -P hpvm101 -B auto
Verify with reboot
hpvm101# shutdown -h now
# reboot
Perform the AutoLUN migrations (parallel or sequential) according to the Excel map.
Change the threshold on the V-Vols to 300% (after V-Vol creation it is set to 5%).
Perform Discard Zero Data on the migrated V-Vols (after the migration the V-Vol usage rate is always 100%).
Verify the guest.


LINUX:
XP24K Host Mode: 00[Standard]
Host Mode Option: N/A


VMware:
XP24K Host Mode: 01[VMware]. There are also 21[VMware extension] and 40; currently we prefer to use 01.
Host Mode Option: N/A.

Set Round Robin as Default Path Selection Policy (on ESX with CLI)
# esxcli nmp device list
# see the current disks' SATP (Storage Array Type Plug-in) and PSP (Path Selection Policy)

# esxcli nmp satp setdefaultpsp --satp VMW_SATP_DEFAULT_AA --psp VMW_PSP_RR
# Change the default PSP for a specific SATP. This may also be performed by editing the appropriate default PSP in /etc/vmware/esx.conf

# esxcli nmp satp listrule
# Information only: see mapping between STUs and SATP (Storage Array Type Plug-Ins)

Set Round Robin as the PSP for existing disks (on ESX with the CLI)
# esxcli nmp device setpolicy --psp VMW_PSP_RR --device naa.600508b4001063af0001e00005390000
# Change the PSP for a specific disk

To change Queue Depth (on ESX with CLI)
[root@vmh ~]# cat /proc/scsi/qla2xxx/1 # Verify current queue depth
...........
Device queue depth = 0x20 # By default queue depth = 32
......................
[root@vmh1 ~]# vmkload_mod -l | grep qla # Find module name
[root@vmh1 ~]# vmkload_mod -s qla2xxx | grep depth # Find parameter name
ql2xmaxqdepth: int
Maximum queue depth to report for target devices.
[root@vmh1 ~]# esxcfg-module -s ql2xmaxqdepth=8 qla2xxx # Set queue depth = 8. Note: the actual queue depth is not changed until reboot.