Sunday, July 3, 2011

HDS Virtual Storage Platform (VSP)

Unlike most storage arrays from other vendors, the VSP is a purpose-built system designed from the ground up to be a storage array, built (where appropriate) from custom logic and processor ASICs. All Hitachi midrange and enterprise storage arrays are purpose built. In the VSP (like all Hitachi designs), there are separate intelligent components from which the array is created. These components operate in parallel to achieve high performance, scalability, and reliability. Products such as IBM’s DS8000 series and EMC’s CX and VMAX series are essentially small servers running standard operating systems (AIX or Linux). [EMC’s DMX series was also a purpose-built design.] The “storage engine” in these systems is software running in the server, controlling simple host and disk interface cards (usually FC). The server RAM holds the operating system, the storage software, the storage system metadata, and all user data in a cache.

Briefly (the boards are discussed below in greater detail), the VSP is built as a single- or dual-chassis array, each chassis having from one to three racks. Each chassis can have one control rack and up to two disk expansion racks. The control rack has the logic box that holds all of the control boards for a chassis, along with one or two disk container boxes (DKUs). The disk expansion racks can hold three DKUs each. There are two types of DKUs: Small Form Factor (128 2.5” disks) and Large Form Factor (80 3.5” disks). When using two chassis as a single integrated array, the two units are cross-connected at the Grid Switch level. The array behaves as a single unit, not as a pair of units operating as a cluster.

The VSP uses five types of logic boards:
• Grid Switches
• Data Cache Adapters (cache boards)
• Front‐end Directors (FC or FICON ports)
• Back‐end Directors (disk controllers)
• Virtual Storage Directors (processor boards).

Monday, September 13, 2010

Auto LUN Migration in HP XP24000

Auto LUN migration lets you move V-Vols around in the XP (for example, between internal and external pools). It is transparent to the end user and to sysadmins: the CU:LDEV remains the same as it was before the migration.

Create the V-Vol in the desired pool. While creating the V-Vol, keep in mind that its block size must be exactly the same as that of the source V-Vol. Check this by navigating through
GO -> Customized Volume and selecting the size in blocks.
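As a worked example (assuming the usual 512-byte blocks), a 250GB source V-Vol is 250 x 1024 x 1024 x 2 = 524,288,000 blocks, so the target V-Vol must be created with exactly that block count.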

Once that is done, make the new LUN a reserve LUN. Navigate through
AutoLUN --> AutoLUN --> Physical --> Plan --> Attribute --> V-Vol group (V-Vol), then right-click on the LUN and make it a reserve LUN.

Now we are ready for the migration.

Manual Migration --> Source V-Vol --> Target --> Destination V-Vol --> Set

Once it is applied, it will show the migration status. (It took around 45 minutes to migrate a 250GB LUN from an internal to an external pool.)

The migration status can also be checked from the History tab.

Once the migration is done, keep in mind that the CU:LDEV will not change on the host. The xBox will change, i.e. the source CU:LDEV will now be shown under the new destination xBox that was created for the migration. The same can also be seen from the Migration History tab.

Now we need to delete the source LUN (inside the old xBox). But before that we need to make it a normal LUN. Navigate through
AutoLUN --> AutoLUN --> Physical --> Plan --> Attribute --> V-Vol group (V-Vol), then right-click and make it a normal LUN.

Then go to Release V-Vol from a pool, select the appropriate cache partition & pool of the source LUN, select it, and do SET.
And finally delete the V-Vol group. Apply it once done.


Now, finally, before we leave the XP we need to run Discard Zero Data on the LUN on which we did the Auto LUN migration, to reclaim the unused space.



Thursday, September 9, 2010

LINUX - New Device Addition & Deletion

Often we come across a situation where storage has been presented to a Linux machine but the system admins are not able to find the new device. There are utilities from QLogic and HP that help scan for the new device, but they might not help in every situation. Use the command below to scan for the new LUN. It helped me in almost 95% of the cases (I had to reboot in the remaining ones).

#ls /sys/class/scsi_host

You will get the list of SCSI hosts on the Linux machine.

After that run the following command:
#echo "- - -" > /sys/class/scsi_host/host#/scan
where ‘host#’ should be replaced with one of the host entries (e.g. host0) returned by the previous command.

#fdisk –l
#lvmdiskscan
#multipath -ll
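If the server has several SCSI hosts, a one-liner (a minimal sketch of the same echo trick applied to every host) saves typing each number by hand:

#for h in /sys/class/scsi_host/host*; do echo "- - -" > $h/scan; done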


Deleting a LUN from the host.

#multipath -ll
mpath5 (36006016037e02200a800e0558015df11) dm-9 DGC,RAID 5
[size=50G][features=1 queue_if_no_path][hwhandler=1 emc][rw]
\_ round-robin 0 [prio=2][active]
\_ 2:0:1:1 sdg 8:96 [active][ready]
\_ 3:0:1:1 sdm 8:192 [active][ready]
\_ round-robin 0 [prio=0][enabled]
\_ 2:0:0:1 sdd 8:48 [active][ready]
\_ 3:0:0:1 sdj 8:144 [active][ready]

Note down the device name (dm-#) and the complete SCSI address (host # : SCSI channel : target ID : LUN #) of each path, then execute the following:
#pvremove /dev/dm-9
#echo "scsi remove-single-device 3 0 0 1" >/proc/scsi/scsi
Repeat the above for every SCSI address listed by multipath -ll.
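Alternatively (a minimal sketch, assuming a 2.6 kernel with sysfs), flush the multipath map first and then delete each underlying path through /sys/block; the device names here are the ones from the example output above:

#multipath -f mpath5
#for dev in sdg sdm sdd sdj; do echo 1 > /sys/block/$dev/device/delete; done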

Once done, lvmdiskscan & multipath -ll won't show the disk, and it can now be removed from the storage side.

Friday, September 3, 2010

EMC Symmetrix DMX

EMC Symmetrix DMX-3 is one of the most powerful networked storage solutions available. It builds on the proven success of the Symmetrix Direct Matrix Architecture while bringing new levels of capacity and performance, so it benefits from a powerful combination of incremental scalability, constant availability, and exceptional data mobility.
Symmetrix DMX-3 enables massive consolidation to deliver all the benefits of tiered storage in one system, and it offers the flexibility to address the changing needs of business quickly and effectively. For the most extreme, demanding storage environments, Symmetrix DMX-3 provides a powerful solution that’s also remarkably simple to manage.

The DMX-3 consists of a single system bay and from one to eight storage bays. The system bay contains the 24-slot card cage, service processor, power modules, and battery backup unit (BBU) assemblies. The storage bays contain disk drives and associated BBU modules. In a highly scalable component and cabinet configuration, the DMX-3 has the capacity, connectivity, and throughput to handle a wide range of high-end storage applications.

It is an active-active storage system. In an active-active storage system, if there are multiple interfaces to a logical device, they all provide equal access to it; all interfaces to a device are active simultaneously.

The DMX-3 applies a high degree of virtualization between what the host sees and the actual disk drives. A Symmetrix device is a logical volume address that the host can address; it is not a physical disk. Before a host can actually see a Symmetrix device, a path must be defined, meaning the device is mapped to a front-end director (FA) port, and the FA port attributes must be set for the specific host.

We can create up to four mirrors for each Symmetrix device. The mirror positions are designated M1, M2, M3 and M4. When we create a device and specify its configuration type, the Symmetrix system maps the device to one or more complete disks, or parts of disks, known as hypervolumes (hypers). As a rule, a device maps to at least two mirrors, i.e. hypers on two different disks, to maintain multiple copies of the data.

HP-UX: With HP-UX the DMX supports PV-Links, native 11.31 multipathing & PowerPath. Unlike the CX, we can use any of the paths as the first device in a volume group, since all paths are active. But keep in mind that the load should be distributed between the HBAs, so check the complete hardware path and then extend the volume group.
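With PV-Links, for example, the alternate path is simply added to the volume group after the primary path. A minimal sketch with hypothetical device files, both pointing at the same LUN through different HBAs:

#vgcreate /dev/vg01 /dev/dsk/c5t0d1
#vgextend /dev/vg01 /dev/dsk/c7t0d1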

Set the below flag for HP-UX 11.11 & 11.23 on FA port

#set port 7a:0 Volume_Set_Addressing=enable;
#set port 8a:0 Volume_Set_Addressing=enable;

Set the below flag for HP-UX 11.31 on FA port

#set port 7a:1 Volume_Set_Addressing=enable, SPC2_Protocol_Version=enable, SCSI_Support1=enable;
#set port 9a:1 Volume_Set_Addressing=enable, SPC2_Protocol_Version=enable, SCSI_Support1=enable;

As described above, port flag settings were traditionally based on the specific flavor of UNIX, in particular for HP-UX. We can do the same per host port WWN instead of per FA port, which enables us to map different host platforms to the same storage ports. Flagging per HBA WWN does not solve all problems with HP-UX, though: this platform still does not understand LUN IDs greater than 7, and when mapping LUNs for HP-UX to storage ports that are already in use by other platforms, in most cases we get LUN IDs greater than 7. If we use ordinary mapping/masking scripts, those LUNs will be invisible to HP-UX.
A solution is to use the ‘-dynamic_lun’ option in the masking script. It assigns the host LUN ID according to the HBA and not according to the storage port LUN ID:

#cat mask.hpux1
symmask -sid 888 -wwn 50060b0000306d6e add devs 021F -dynamic_lun -dir 7B -p 1 -noprompt
symmask -sid 888 -wwn 50060b0000306de2 add devs 021F -dynamic_lun -dir 9B -p 1 -noprompt
symmask refresh -nop
symcfg discover

Generally we can assign specific host LUN IDs with the -lun option, but in most cases it is not required.
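For instance, the masking line takes -lun in place of -dynamic_lun when a fixed host LUN ID is wanted; a sketch reusing the device from the script above, with a hypothetical LUN ID:

symmask -sid 888 -wwn 50060b0000306d6e add devs 021F -lun 5 -dir 7B -p 1 -noprompt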

When connecting a new HP-UX 11.31 host we need to set the port flags below, which help us in using the agile devices.

symmask -wwn 500110a00085bcf6 set hba_flags on SC3,SPC2,OS2007 -enable -dir 8a -p 0
symmask -wwn 500110a00085bcf4 set hba_flags on SC3,SPC2,OS2007 -enable -dir 10a -p 0
symmask refresh -nop
symcfg discover

Verify the above by using
#symmaskdb list db -dir 8a -p 0 -v
It should show as below for the specific WWN

Port Flag Overrides : Yes
Enabled : SPC2_Protocol_Version(SPC2)
SCSI_Support1(OS2007)

To mask a new device to a host, we first need to check the free devices on the DMX. The command below is used to see the free devices:
#symdev list -noport

Make a note of the required Sym device and also check the Config column to see whether it's a 2-Way Mir or RAID-5 device. Then we need to find which FA ports the host is zoned to; the commands below will tell us:

#symmask list hba
#symmaskdb list devs -wwn 500110a00085bcf4

Once the FA ports are found, we need to map the device to them. The commands below illustrate both meta creation and mapping.

#vi meta.server
form meta from dev 052A, config=striped, stripe_size=2cyl;
add dev 052B to meta 052A;
form meta from dev 052C, config=striped, stripe_size=2cyl;
add dev 052D to meta 052C;

And then execute it using
#symconfigure -sid 888 -f meta.server preview
#symconfigure -sid 888 -f meta.server prepare
#symconfigure -sid 888 -f meta.server commit

Once done, find the free LUN addresses using the command below:

#symcfg list -dir 8a -p 0 -addr -avail
Make a note of the vbus, TID and LUN; the free addresses are marked with *. Create the map file as below. You can also use ':' and the keyword 'starting target' when you want to map a range of Sym devices:

#vi map.server
map dev 05AA to dir 9D:0, target=0, lun=02C;
map dev 05AA to dir 7D:0, target=0, lun=02B;
map dev 05AC:05AE to dir 9D:0 starting target=0, lun=035;
map dev 05AC:05AE to dir 7D:0 starting target=0, lun=035;

Save the file and commit the mapping using the commands below:
#symconfigure -sid 888 -f map.server preview
#symconfigure -sid 888 -f map.server prepare
#symconfigure -sid 888 -f map.server commit

If there is any problem in the map file, the preview step itself will throw the errors.

Once mapping is done, mask the devices to the specific host WWN:

#vi mask.server
symmask -sid 888 -wwn 2100001b321a8fef add devs 02AD,0582 -dir 7D -p 0 -noprompt
symmask -sid 888 -wwn 2101001b323a8fef add devs 02AD,0582 -dir 9D -p 0 -noprompt
symmask refresh -nop
symcfg discover

Just execute the command using
#sh -x mask.server

You can also verify the masking using the command below. It will show which FA ports and WWNs the Sym device is mapped and masked to.
#symmaskdb list assignments -dev 01CE

To delete a device from the host side, make a note of the Sym device from the command below:
#sympd list
The above command shows both the physical path of the disk and the corresponding Sym device ID.

Make sure that the device is mapped & masked to the specific FA and host WWN respectively, using the symmaskdb command above. There may be a situation where the device is mapped and masked to multiple hosts of a cluster, and decommissioning one node of the cluster doesn't mean the LUN should be deleted.

Create the unmask file as below:
#vi unmask.server
symmask -sid 888 -wwn 2100001b321a39ef -dir 7D -p 0 remove devs 059C,059D,05A5,05A0,059F,05A4
symmask -sid 888 -wwn 2101001b323a39ef -dir 9D -p 0 remove devs 059C,059D,05A5,05A0,059F,05A4
symmask refresh -nop
symcfg discover

Execute it using
#sh -x unmask.server

Once it's unmasked, write-disable the device. (Note: if the device is still used by the cluster, just exit after the unmask.)

#vi write_disable.server
symdev -sid 888 -nop write_disable 052A
symdev -sid 888 -nop write_disable 052B

Execute it using
#sh -x write_disable.server

Once done, go ahead with the unmap:
#vi unmap.server
unmap dev 052A from dir 9D:0;
unmap dev 052A from dir 7D:0;

Execute it using
#symconfigure -sid 888 -f unmap.server preview
#symconfigure -sid 888 -f unmap.server prepare
#symconfigure -sid 888 -f unmap.server commit

Now the command below will show whether the device is free:
#symdev list -noport

A ???:? is shown against a device if it is free (an * indicates that the device is mapped to an FA port).

If the device that we unmapped is a meta, we can also dissolve it using the command below:

#vi dissolve.server
dissolve meta dev 052A;

Execute it using
#symconfigure -sid 888 -f dissolve.server preview
#symconfigure -sid 888 -f dissolve.server prepare
#symconfigure -sid 888 -f dissolve.server commit

Solaris: Solaris supports Veritas DMP & PowerPath. Check with Veritas for the Symmetrix-specific ASL.

LINUX: With Linux we can use PowerPath or the inbuilt device mapper.
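For the device-mapper route, a minimal /etc/multipath.conf sketch for Symmetrix; the values here are assumptions, so verify them against EMC's host connectivity guide for your distribution:

devices {
        device {
                vendor "EMC"
                product "SYMMETRIX"
                path_grouping_policy multibus
        }
}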

#vi port.server
symmask -wwn 2101001b32a8c3d5 set hba_flags on D -enable -dir 7b -p 1
symmask -wwn 2100001b3288c3d5 set hba_flags on D -enable -dir 9b -p 1
symmask refresh -nop
symcfg discover

Execute it using
#sh -x port.server



Thursday, September 2, 2010

Veritas DMP & Volume Manager

Veritas DMP & Volume Manager are mainly used on Solaris (though they support all flavors of UNIX). DMP supports almost all storage arrays.
Before we actually start using DMP, admins first need to check whether the Veritas ASL (Array Support Library) for the specific array is installed.

#vxddladm listsupport
The above command lists all the libraries that are installed and the storage arrays that are supported. If the specific storage library is not present, get it from Veritas. Once installed, you can check it with the command below:

#vxddladm listsupport libname=
It will show the exact storage model and controller version that is supported.

Once the above is done, the admin can go ahead with using the disk.

Commands to configure the new disk and see it:
#devfsadm
#format

Format the above disk and run
#vxdctl enable
#vxdisk list
Now the new disk is under Veritas control and can be used.

Other primarily used DMP commands:
#vxdisk list
#vxdisk path
#vxdmpadm iostat show all


The commands below are used for initializing disks and creating disk groups, volumes, snapshots, etc.
#vxdisksetup -i c1t0d0s0
#vxdg init datadg datadg01=c1t0d0s0
#vxdg -g datadg add disk datadg02=c2t0d0s0
#vxassist -g datadg make datavol 10m layout=mirror (can use stripe also)
#mkfs -F vxfs /dev/vx/rdsk/datadg/datavol
#mount -F vxfs /dev/vx/dsk/datadg/datavol /FS
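The layout=mirror option above can be swapped for a stripe. A sketch with an assumed two-column stripe across the two disks added earlier:

#vxassist -g datadg make stripevol 10m layout=stripe ncol=2 datadg01 datadg02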

To create a mirror volume
#vxassist -g datadg mirror datavol

To create mirror to a specific disk
#vxassist -g datadg mirror datavol datadg03

To remove a plex that contains subdisks from disk datadg02:
#vxassist -g datadg remove mirror datavol !datadg02

Resizing commands
#vxresize -g datadg datavol 50m
#vxresize -g datadg datavol +50m

Removing a volume, a disk, and the disk group
#vxassist -g datadg remove volume datavol
#vxdg -g datadg rmdisk datadg02
#vxdg destroy datadg (for the last disk)

Snapshots
#vxassist -g datadg -b snapstart datavol
#vxassist -g datadg snapshot datavol snapvol

Reassociate
#vxassist -g datadg snapback snapvol

Deassociate
#vxassist -g datadg snapclear snapvol

Destroy
#vxassist -g datadg remove volume snapvol
Checking the size of a DG:
#vxprint -g vg01 -dF "%publen" | awk 'BEGIN {s = 0} {s += $1} END {print s/2097152, "GB"}'
(2097152 sectors of 512 bytes = 1 GB)

Solaris Volume Manager

The Solaris Volume Manager (SVM) is a free component of Solaris 9 and Solaris 10. It was previously known as Solstice DiskSuite. SVM provides mechanisms to configure physical slices of hard drives into logical volumes. As such it introduces an additional level of complexity and should not be used unless absolutely necessary; on Solaris 10, ZFS is a better alternative. Logical volumes can be configured to provide mirroring and RAID-5. In its simplest form SVM uses traditional Solaris disk partitioning (up to eight partitions, or slices in Solaris terminology) to build virtual disks called volumes.

Any partition can be used to create volumes, but it is common practice to reserve slice s7 for the state database replicas. Database replicas are created on selected disks and hold the SVM configuration data. It is the administrator’s responsibility to create these state databases (using the metadb command) and distribute them across disks and controllers to avoid any single point of failure.
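For example (a sketch; the slices are hypothetical), two replicas on each of two disks on different controllers:

#metadb -a -f -c 2 c0t0d0s7 c1t0d0s7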

Commands used in SVM

#metadb -a -f c0t0d0s3
a - add
f - force

#metadb
#metainit d0 3 1 c0t0d0s3 1 c0t0d0s4 1 c0t0d0s5
d0 - metadevice name
The above command creates a concatenation of the 3 slices.

#metainit d0 1 3 c0t0d0s3 c0t0d0s4 c0t0d0s5
The above command uses striping (one stripe of 3 slices).

#newfs /dev/md/rdsk/d0
#mkdir /FS
#mount /dev/md/dsk/d0 /FS
#metastat
#metaclear d0
#metadb -d -f c0t0d0s3
#metastat

Now if you want to create a soft partition from a metadevice

#metainit d10 -p d1 1500m
d10 - soft partition name
d1 - source metadevice
#metainit d11 -p d1 1500m
#newfs /dev/md/rdsk/d10
#newfs /dev/md/rdsk/d11
#mkdir /FS
#mount /dev/md/dsk/d10 /FS

Adding 1500m to d10
#metattach d10 1500m
#growfs -M /FS /dev/md/rdsk/d10

For adding a slice to a metadevice
#metattach d0 c0t0d0s6
#growfs -M /FS /dev/md/rdsk/d0

Mirroring
#metainit d0 2 1 c0t0d0s3 1 c0t0d1s3
#metainit d1 2 1 c0t0d2s3 1 c0t0d3s3
#metainit d10 -m d0
#init 6
#metattach d10 d1
#metastat
#newfs /dev/md/rdsk/d10

To clear the meta information
#metaclear d10
#metaclear d0 d1

RAID-5
#metainit d10 -r c0t0d0s3 c0t0d1s3 c0t0d2s3 c0t0d3s3
#metastat
#newfs /dev/md/rdsk/d10



EMC Symmetrix VMAX

EMC Symmetrix VMAX provides high-end storage for the virtual data center. Symmetrix VMAX scales up to 2 PB of usable protected capacity and consolidates more workloads with a much smaller footprint than alternative arrays.

Its innovative EMC Symmetrix Virtual Matrix Architecture seamlessly scales performance, capacity, and connectivity on demand to meet all application requirements. Symmetrix VMAX can be deployed with Flash Drives, Fibre Channel, and Serial Advanced Technology Attachment (SATA) drives, with tiering fully automated with FAST. It supports virtualized and physical servers, including open systems, mainframe, and IBM i hosts. (Source www.emc.com)

VMAX follows a virtual matrix architecture. The processing units are called engines, and global memory is shared across them. If one engine receives a request, the computation can be done by any of the engines (through global memory); once done, the result is returned to the engine that received the original request from the server.

Unlike the DMX, there is no need to set port-specific flags on the VMAX; UNIX, Windows and ESX hosts can all be connected to the same FA port. There is also no need to check the vbus or LUN ID while presenting a device. In fact there is no mapping step in VMAX, just masking. But for HP-UX we still need to set flags specific to the host WWN.


When integrating a new host we need to make sure that a port group already exists. First, create a port group of the FA ports before starting any storage allocation:

# symaccess -sid 0444 create -name VMAX_0444_PG_9E0 -type port -dirport 9E:0
# symaccess -sid 0444 create -name VMAX_0444_PG_7E0 -type port -dirport 7E:0

Once the port group is created, the initiator group, aliases, storage group and view need to be created. Below is an example where we create a script for integrating two hosts (fe1 & fe2, in a cluster named fecl).

WWN of fe1 hba0 : 5001438003bc3480
WWN of fe2 hba0 : 5001438003bc3a3c
SID of VMAX : 0444

#vi symaccess.fecl

# Initiator Groups
symaccess -sid 0444 create -name fecl_IG_c61 -type init -wwn 5001438003bc3480
symaccess -sid 0444 -name fecl_IG_c61 add -type init -wwn 5001438003bc3a3c

# Aliases
symaccess -sid 0444 rename -wwn 5001438003bc3480 -alias fe1/fe1_vmhba0
symaccess -sid 0444 rename -wwn 5001438003bc3a3c -alias fe2/fe2_vmhba0

# Storage Groups (after device created and bound)
symaccess -sid 0444 create -name fecl_SG -type storage devs 081C

# VIEWs (in condition that port group exists)
symaccess create view -name fecl_VW_c61 -ig fecl_IG_c61 -sg fecl_SG -pg VMAX_0444_PG_9E0


But before we actually run the above script, we need to create the thin devices so that their IDs can be added to the storage group.

#vi symconfigure.fecl
create dev count=2, size=100GB, emulation=FBA, config=TDEV, binding to pool Tier1_400GB;

In the above script we are creating 2 thin devices of 100GB each from the Tier1_400GB pool. The maximum device size that can be created is 240GB, so if someone needs a 500GB LUN it has to be a meta. If we want the meta members to be a specific size, we need to add the keyword below before the create dev command.

set symmetrix auto_meta_member_size=100GB;
create dev count=4, size=500GB, emulation=FBA, config=TDEV, binding to pool Tier1_400GB;

The above first sets the meta member size to 100GB and then creates 4 thin devices (LUNs) of 500GB each, so each 500GB LUN contains 5 x 100GB meta members.
To execute it just run as below

#setenv SYMCLI_SID 0444 (Sets the env for Symmetrix SID 0444, if you have more than 1 Symmetrix)

#symconfigure -f symconfigure.fecl preview -nop
#symconfigure -f symconfigure.fecl prepare -nop
#symconfigure -f symconfigure.fecl commit -nop

Now once you have the thin device, just add its ID to the storage group:

symaccess -sid 0444 -name fecl_SG -type storage add devs 0BC8

To execute the script just run

#sh -x symaccess.fecl

If we are integrating an HP-UX server with the VMAX, we need to set flags based on the HP-UX server's PWWN:

# Flags for HP-UX
symaccess -sid 0444 -name hpux1_IG_c61 -type initiator set ig_flags on V -enable
symaccess -sid 0444 -name hpux1_IG_c100 -type initiator set ig_flags on V -enable
symcfg discover


To remove a device from a server, it first needs to be unmapped:

#vi symaccess.fecl
symaccess -sid 0444 -name fecl_SG -type storage remove devs 0A36 -unmap


#sh -x symaccess.fecl


Once it's removed from the storage group, unbind the device from the storage pool, dissolve it if it's a meta, and finally delete it:

#vi symconfigure.fecl
unbind tdev 0A36 from pool Tier1_400GB;
dissolve meta dev 0A36;
delete dev 0A36;

Run the script as
#symconfigure -f symconfigure.fecl preview -nop (then prepare and commit)

Some of the primarily used commands in VMAX

#symcfg list -pools
#symcfg list -pool -thin -v -gb
#symcfg list -pool -gb
#symaccess list -type init
#symaccess show #ig-name# -type init
#symaccess list -type port
#symaccess show #port-name# -type port
#symaccess list -type storage
#symcfg list -pool -tdev -gb
#symaccess list assignment -dev 0807
#symaccess list view
#symaccess show view
#symaccess -name #sg-nam# -type storage remove devs 0807 -unmap
#symconfigure -cmd "unbind tdev 0807 from pool ;" preview -nop
#symconfigure -cmd "unbind tdev 0807 from pool ;" prepare -nop
#symconfigure -cmd "unbind tdev 0807 from pool ;" commit -nop
#symconfigure -cmd "delete dev 0807;" preview -nop
#symconfigure -cmd "delete dev 0807;" prepare -nop
#symconfigure -cmd "delete dev 0807;" commit -nop
#symdev list -noport (To check for the traditional thick LUN mainly used for redo logs)