HP StorageWorks XP24000/XP20000 Disk Arrays are large enterprise-class storage systems designed for organizations that simply cannot afford any downtime. The XP combines a fully redundant hardware platform with unique data replication capabilities that are integrated with clustering solutions for complete business continuity.
XP Software decreases the complexities of data management. Through thin provisioning, organizations can improve storage utilization. Virtualization simplifies the management of diverse systems. Dynamic partitioning allows for the distribution of storage resources. And consolidation becomes a reality by managing open systems, mainframe, and HP NonStop applications all on a single XP. With the XP array, organizations can confidently manage mission-critical IT.
On the XP24K we create thin pools and allocate disks to hosts whenever required. Create separate pools for internal and external storage. Also select the appropriate cache partition when creating the V-Vol group.
Solaris: On Solaris we mostly use Veritas DMP. Check that the appropriate ASL is installed. Once done, select the appropriate host mode and verify the settings against the recommendations below.
XP24000 Host mode: 09[Solaris]
Host mode option: 7
Recommended Kernel parameters:
Verification of timeout & queue depth
#adb -k
physmem 7d6f94
ssd_max_throttle/D          # if ssd_max_throttle is defined, this shows its value
ssd_max_throttle:
ssd_max_throttle: 256
sd_max_throttle/D
sd_max_throttle:
sd_max_throttle: 256
ssd_io_time/D
ssd_io_time:
ssd_io_time: 60
sd_io_time/D
sd_io_time:
sd_io_time: 60
$q
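The same symbols can also be read non-interactively by piping the commands into adb; a minimal sketch, assuming a live kernel (/dev/ksyms and /dev/mem):
# echo "sd_max_throttle/D" | adb -k /dev/ksyms /dev/mem
# echo "ssd_io_time/D" | adb -k /dev/ksyms /dev/mem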
Changing timeout and queue depth values through /etc/system. Do this only if the values shown by adb differ from the recommended ones. The new values take effect after a reboot.
set ssd:ssd_io_time=0x3c
# only if ssd_io_time is defined in the kernel and its value differs from the recommended one
set sd:sd_io_time=0x3c
# only if sd_io_time is defined in the kernel and its value differs from the recommended one
set ssd:ssd_max_throttle=8
# only if ssd_max_throttle is defined in the kernel and its value differs from the recommended one
set sd:sd_max_throttle=8
# only if sd_max_throttle is defined in the kernel and its value differs from the recommended one
Load balancing (currently not changed; we use the default)
"The Configuration Guide for Sun/Solaris does not include a recommendation for a specific load-balancing policy between HBAs with VxDMP."
Default iopolicy=minimumq: I/O is sent on the paths that have the minimum number of outstanding I/O requests in the queue.
To verify current policy:
# vxdmpadm listenclosure
ENCLR_NAME ENCLR_TYPE ENCLR_SNO STATUS ARRAY_TYPE LUN_COUNT
=======================================================================================
xp24k0 xp24k 133BE CONNECTED A/A 3
# vxdmpadm getattr enclosure xp24k0 iopolicy
ENCLR_NAME DEFAULT CURRENT
============================================
xp24k0 MinimumQ MinimumQ
To set another policy:
vxdmpadm setattr enclosure xp24k0 iopolicy=round-robin
Re-label disk imported from external STU (only for imported external LUNs):
Collect the VTOC into a file:
prtvtoc -h /dev/rdsk/c#t#d#s2 > c#t#d#s2.vtoc.before
Autoconfigure
format -> pick LUN -> type -> auto configure
Label the drive from format prompt
format> label
Verify updated drive information from format prompt
format> current
format> quit
Print the new VTOC into a file:
prtvtoc -h /dev/rdsk/c#t#d#s2 > c#t#d#s2.vtoc.after
Compare the vtoc.before and vtoc.after files. If they differ, edit the vtoc.before file so that it reflects the number of sectors shown in vtoc.after.
Write the previous partition table to the LUN
fmthard -s c#t#d#s2.vtoc.before /dev/rdsk/c#t#d#s2
Confirm the geometry is the same by checking the partition table against the one collected in the first step.
format -> pick LUN -> partition -> print
Migration with vxassist mirror (after new disks are allocated and visible):
Verify whether a swap disk is on the external storage. If so, initialize a new disk, add it to the VG, create an LV, and add it to the swap configuration (swap -a) - see the sketch below. Swap disks do not require migration.
root # swap -l
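If a swap volume does need to be created on the new storage, a minimal VxVM sketch (the volume name, size and disk name are examples only; add a vfstab entry to make it persistent):
root # vxassist -g vgeva01 make swapvol01 8g alloc=vgeva0103
root # swap -a /dev/vx/dsk/vgeva01/swapvol01
root # swap -l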
root# vxdctl enable
root# vxdisk list
root# vxdisk init XP10K-12K0_0
root # vxdisk init XP10K-12K0_1
root # vxdg -g vgeva01 adddisk vgeva0103=XP10K-12K0_0 vgeva0104=XP10K-12K0_1
root # vxdisk list
DEVICE TYPE DISK GROUP STATUS
EMC_CLARiiON0_0 auto:cdsdisk vgeva0101 vgeva01 online
EMC_CLARiiON0_1 auto:cdsdisk vgeva0102 vgeva01 online
XP10K-12K0_0 auto:cdsdisk vgeva0103 vgeva01 online
XP10K-12K0_1 auto:cdsdisk vgeva0104 vgeva01 online
c1t1d0s2 auto:none - - online invalid
root # vxprint -g vgeva01 -v
TY NAME ASSOC KSTATE LENGTH PLOFFS STATE TUTIL0 PUTIL0
v WebSphere7.0 fsgen ENABLED 10485760 - ACTIVE - -
v WebSphere7_64 fsgen ENABLED 10485760 - ACTIVE - -
v omsusers fsgen ENABLED 41943040 - ACTIVE - -
v omsuser01 fsgen ENABLED 41943040 - ACTIVE - -
v oravl01 fsgen ENABLED 35651584 - ACTIVE - -
The following should be done for each volume:
root # vxassist -g vgeva01 mirror oravl01 alloc=vgeva0103,vgeva0104
root # vxprint -g vgeva01 -ht oravl01
V NAME RVG/VSET/CO KSTATE STATE LENGTH READPOL PREFPLEX UTYPE
PL NAME VOLUME KSTATE STATE LENGTH LAYOUT NCOL/WID MODE
SD NAME PLEX DISK DISKOFFS LENGTH [COL/]OFF DEVICE MODE
SV NAME PLEX VOLNAME NVOLLAYR LENGTH [COL/]OFF AM/NM MODE
SC NAME PLEX CACHE DISKOFFS LENGTH [COL/]OFF DEVICE MODE
DC NAME PARENTVOL LOGVOL
SP NAME SNAPVOL DCO
EX NAME ASSOC VC PERMS MODE STATE
SR NAME KSTATE
v oravl01 - ENABLED ACTIVE 35651584 SELECT - fsgen
pl oravl01-01 oravl01 ENABLED ACTIVE 35651584 CONCAT - RW
sd vgeva0101-01 oravl01-01 vgeva0101 0 20971520 0 EMC_CLARiiON0_0 ENA
sd vgeva0101-03 oravl01-01 vgeva0101 62914560 14680064 20971520 EMC_CLARiiON0_0 ENA
pl oravl01-02 oravl01 ENABLED ACTIVE 35651584 CONCAT - RW
sd vgeva0103-03 oravl01-02 vgeva0103 52428800 35651584 0 XP10K-12K0_0 ENA
root # vxplex -g vgeva01 det oravl01-01
root# vxplex -g vgeva01 -o rm dis oravl01-01
After all volumes are migrated, check that the old disks are no longer in use (no subdisks on vgeva0101, vgeva0102) and remove them from the VG:
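For example (a sketch, assuming the old disk and device names shown in the vxdisk list output above):
root # vxdg -g vgeva01 rmdisk vgeva0101 vgeva0102
root # vxdisk rm EMC_CLARiiON0_0 EMC_CLARiiON0_1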
If SmartMove was not active (due to the VxVM version), perform Discard Zero Data on the new V-Vols.
For Space Reclamation / SmartMove to be available, the disks should be visible as thinrclm with 'vxdisk list'. This is supported starting with VRTS SF 5.0 MP3:
root- # vxdisk list
DEVICE TYPE DISK GROUP STATUS
c1t0d0s2 auto:none - - online invalid
c1t1d0s2 auto:none - - online invalid
c2t50001FE1500FD709d1s2 auto:cdsdisk c2t50001FE1500FD709d1s2 xp24k online
c2t50001FE1500FD709d2s2 auto:cdsdisk - - online
c2t50060E801533BE01d0s2 auto:cdsdisk xp24k01 xp24k online thinrclm
c2t50060E801533BE01d1s2 auto:cdsdisk xp24k02 xp24k online thinrclm
Disks that are visible as 'thinrclm' enable both space reclamation and SmartMove operations.
Example of Space Reclamation on both disks:
root- # vxdisk reclaim c2t50060E801533BE01d0s2 c2t50060E801533BE01d1s2
Reclaiming thin storage on:
Disk c2t50060E801533BE01d1s2 : Done!
Note: although only the last disk is reported, both disks are included in the reclaim process.
Example of Space Reclamation on a file system:
root- # fsadm -R /xplv3
SmartMove operations for 'thinrclm' disks are turned on by default. They are relevant for all VRTS mirror and move commands, for example:
root- # vxassist -g xp24k mirror lv3 xp24k01
One can explicitly turn the SmartMove feature on or off by setting the appropriate attribute value in the /etc/default/vxsf file:
usefssmartmove=none # Smartmove is disabled on the host
usefssmartmove=all # Smartmove is enabled for all copy operations on the host
usefssmartmove=thinonly # Smartmove is enabled only for volumes that contain thin/thinrclm devices
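To check what is currently configured on the host (a sketch):
# grep usefssmartmove /etc/default/vxsf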
AIX:
XP24K Host Mode: 0F[AIX]
Host Mode Option: N/A.
For AIX HACMP, check (enable) Host Mode Option 15
Prerequisites for XP24K connection to AIX: Installation of HP MPIO for XP
Prerequisites for HP MPIO for XP 5.4.0.2 installation:
The MPIO solution cannot coexist with other non-MPIO multipathing products on the same server. The MPIO solution cannot coexist with HDLM on the same server (lslpp -al | egrep -i dlm)
SCSI-3 persistent reservations are not supported.
HACMP is supported when using enhanced concurrent volume groups within the cluster. Volume groups not controlled by the cluster can be standard volume groups.
After previously failed paths are re-enabled, it can take a while (a few minutes) until MPIO brings all I/O paths online again. This is not a defect.
During the initial open of a raw MPIO disk device, MPIO checks all I/O paths and tries to re-enable previously failed paths automatically. This is also not considered a defect. (Path state can be checked with lspath - see the sketch after the attribute-change commands below.)
"AIX 5.3 TL04, SP01 or greater. For AIX 5.3, TL06, it is necessary to
install SP02, or greater (check with ‘oslevel –s’)."
For AIX 5.3:
IY67625, IY79741 and IY79862 must be installed for the MPIO ODM (check with: instfix -i | egrep 'IY67625|IY79741|IY79862')
MPIO can be used with HACMP only when Enhanced Concurrent Volume Groups are used with ML03 and IY73087 (or higher).
The base XP ODM must not be installed (the same as the Single Path ODM - see below).
"AIX 6.1 TL00, SP01 or greater. AIX 6.1 is not supported on all XP
disk arrays. Check the Streams
documents for supported arrays."
"P5 VIOS ioslevel 1.2.1.1. Check with IBM for the currently
supported versions of VIOS."
The Single Path ODM fileset cannot coexist with the fileset of the HP XP MPIO solution on the same server.
"The HP XP MPIO solution can coexist with the HP EVA MPIO solution on the same server utilizing the same HBA (lslpp -al | egrep -i hsv)
"
Installation of MPIO for XP 5.4.0.2
# mkdir /tmp/hpmpio # or any working directory
# cp XPMPIO2.tar /tmp/hpmpio
# cd /tmp/hpmpio
# tar xvf XPMPIO2.tar
# inutoc $PWD
# installp -acd . -e installp.log ALL
# lslpp -L devices.fcp.disk.HP.xparray.mpio.rte # verification
Change XP24K adapter and disk attributes
# lsdev -Ct "*scsi*" # Find SCSI adapters
# chdev -a fc_err_recov=fast_fail -l fscsi0 [-P] # with -P the change takes effect only after reboot
# chdev -a fc_err_recov=fast_fail -l fscsi1 [-P] # with -P the change takes effect only after reboot
# cfgmgr -v
# chdev -l hdiskX -a reserve_policy=no_reserve
# chdev -l hdiskX -a algorithm=round_robin
# chdev -l hdiskX -a queue_depth=8
# chdev -l hdiskX -a rw_timeout=60 # usually not required since this is default
# lsattr -El hdiskX
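To apply the same attributes to every XP hdisk in one pass and then review path state, a sketch - the grep pattern used to select the XP disks is an assumption and should be adjusted to the lsdev description on the host:
# for d in $(lsdev -Cc disk | grep -i xp | awk '{print $1}'); do chdev -l $d -a reserve_policy=no_reserve -a algorithm=round_robin -a queue_depth=8; done
# lspath -l hdiskX   # paths of one disk with their state
# lspath -s failed   # only paths currently in Failed state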
Migrating data volumes with migratepv (after the new disks are visible and configured - see above)
Verify whether a swap disk is on the external storage. If so, initialize a new disk, add it to the VG, create an LV, and add it to the swap configuration (swap -a). Swap disks do not require migration.
root # swap -l
# lsvg | egrep -v 'root'
vgeva
# lsvg -p vgeva
hdisk2
hdisk3
# extendvg vgeva hdisk6 hdisk7
# migratepv hdisk2 hdisk6
# migratepv hdisk3 hdisk7
# lspv hdisk2
# lspv hdisk3
# reducevg vgeva hdisk2 hdisk3
# rmdev -l hdisk2 -dR
# rmdev -l hdisk3 -dR
# cfgmgr
# lsdev -Ccdisk
AIX VIO:
XP24K Host Mode: 0F[AIX]
Host Mode Option: N/A.
For AIX HACMP, check (enable) Host Mode Option 15
Installation of MPIO for XP and changing device attributes (on the physical host) is the same as on an ordinary AIX host.
Migration Plan (based on migratepv at the guest level)
Install MPIO for XP
Zone the host with the XP24K. If the original STU is a CX, then one HBA should be unzoned from the CX and zoned with the XP24K only.
Allocate and present LUNs of the same sizes as original ones.
Discover new disks
#cfgmgr
Change attributes of HBAs and new disks according to appropriate procedure.
Verify current mapping under padmin
$ lsmap -all
SVSA Physloc Client Partition ID
--------------- -------------------------------------------- ------------------
vhost0 U7778.23X.068B34A-V1-C11 0x00000002
VTD vtscsi0
Status Available
LUN 0x8100000000000000
Backing device hdisk2
Physloc U78A5.001.WIH70AE-P1-C12-T1-W50001FE15012AADF-L1000000000000
VTD vtscsi1
Status Available
LUN 0x8200000000000000
Backing device hdisk3
Physloc U78A5.001.WIH70AE-P1-C12-T1-W50001FE15012AADF-L2000000000000
VTD vtscsi2
Status Available
LUN 0x8300000000000000
Backing device hdisk4
Physloc U78A5.001.WIH70AE-P1-C12-T1-W50001FE15012AADF-L3000000000000
VTD vtscsi3
Status Available
LUN 0x8400000000000000
Backing device hdisk5
Physloc U78A5.001.WIH70AE-P1-C12-T1-W50001FE15012AADF-L4000000000000
Map new LUNs of the same sizes to the appropriate Server Virtual Adapters:
mkvdev -vdev hdisk6 -vadapter vhost0 -dev hdisk4_050
mkvdev -vdev hdisk7 -vadapter vhost0 -dev hdisk5_050
mkvdev -vdev hdisk8 -vadapter vhost0 -dev hdisk6_050
mkvdev -vdev hdisk9 -vadapter vhost0 -dev hdisk7_050
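To confirm the new mappings on the VIO server (a sketch, run under padmin; vhost0 as above):
$ lsmap -vadapter vhost0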
Login as root on a guest (ibm101) and discover new disks
ibm101:root: cfgmgr
ibm101:root: lsdev -Ccdisk
hdisk0 Available Virtual SCSI Disk Drive
hdisk1 Available Virtual SCSI Disk Drive
hdisk2 Available Virtual SCSI Disk Drive
hdisk3 Available Virtual SCSI Disk Drive
hdisk4 Available Virtual SCSI Disk Drive
hdisk5 Available Virtual SCSI Disk Drive
hdisk6 Available Virtual SCSI Disk Drive
hdisk7 Available Virtual SCSI Disk Drive
Add new disks to appropriate data VGs (not rootvg and not altinst_rootvg)
ibm101:root: extendvg vg501 hdisk6 hdisk7
Migrate the original disks to the new ones on the guest, verify that all PPs were really copied, and remove the original disks from the VG
ibm101:root: time migratepv hdisk2 hdisk6
ibm101:root: time migratepv hdisk3 hdisk7
ibm101:root: lspv hdisk2
ibm101:root: lspv hdisk3
ibm101:root: lspv hdisk6
ibm101:root: lspv hdisk7
ibm101:root: reducevg vg501 hdisk2 hdisk3
Inactivate and export altinst_rootvg group, then import it under another name
ibm101:root: varyoffvg altinst_rootvg
ibm101:root: exportvg altinst_rootvg
ibm101:root: importvg -y newinst_rootvg -n hdisk1
Create a clone of rootvg on a new disk and reboot from it.
/usr/local/ADMIN/scripts/Clone_Rootvg.ksh
ibm101:root: bootlist -m normal -o hdisk4
ibm101:root: reboot
Create new altinst_rootvg and run crontab script to copy system image
/usr/local/ADMIN/scripts/Clone_Rootvg.ksh
Clean old_rootvg, newinst_rootvg and remove their disks on virtual guest
ibm101:root: varyoffvg old_rootvg
ibm101:root: exportvg old_rootvg
ibm101:root: varyoffvg newinst_rootvg
ibm101:root: exportvg newinst_rootvg
ibm101:root: rmdev -l hdisk0 -dR
ibm101:root: rmdev -l hdisk1 -dR
ibm101:root: rmdev -l hdisk2 -dR
ibm101:root: rmdev -l hdisk3 -dR
Unmap old disks on physical host vio101 from virtual guest (under padmin account)
$ rmvdev -vdev hdisk2
$ rmvdev -vdev hdisk3
$ rmvdev -vdev hdisk4
$ rmvdev -vdev hdisk5
Remove old disks on physical host vio101 (under root account)
vio101:root: rmdev -l hdisk2 -dR
vio101:root: rmdev -l hdisk3 -dR
vio101:root: rmdev -l hdisk4 -dR
vio101:root: rmdev -l hdisk5 -dR
Unzone the old STU and reboot for verification. Note: first shut down the virtual guest, then reboot the physical host, then start the guest.
Perform Discard Zero Data on new V-Vols.
HP-UX:
XP24K Host Mode: 08[HP]
Host Mode Option: 12
Change XP24K disks attributes
Configuration Guide for HP-UX recommends setting timeout=60 for volumes on XP24K. So after disks on XP24K are allocated, perform:
# pvcreate /dev/dsk/c10t0d0
# pvchange -t 60 /dev/dsk/c10t0d0
# pvdisplay /dev/dsk/c10t0d0
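Where a VG contains many XP PVs, the same timeout can be applied in a loop; a sketch, assuming the VG name used later in this section:
# for pv in $(vgdisplay -v /dev/vgeva01 | grep 'PV Name' | awk '{print $3}'); do pvchange -t 60 $pv; done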
Other attributes - queue depth and load balancing - are left at their defaults.
The Configuration Guide for HP-UX does not include any recommendations on queue depth or load-balancing configuration.
On HP-UX before 11i v3 (11.31), PVLinks do not provide any load balancing between paths to a LUN, so we suggest manually distributing LUNs between different primary path links.
On HP-UX 11i v3 (11.31), native multipathing provides load balancing to a LUN. Load balancing can be managed with the scsimgr command. The default load-balancing policy for high-end STUs is round-robin.
On HP-UX 11v3 (11.31) to verify current load-balancing policy:
# scsimgr get_attr -D /dev/rdisk/disk151 -a load_bal_policy
SCSI ATTRIBUTES FOR LUN : /dev/rdisk/disk151
name = load_bal_policy
current = round_robin
default = round_robin
saved =
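To change the policy (a sketch; round_robin and least_cmd_load are among the valid values, and save_attr makes the setting persistent across reboots):
# scsimgr set_attr -D /dev/rdisk/disk151 -a load_bal_policy=least_cmd_load
# scsimgr save_attr -D /dev/rdisk/disk151 -a load_bal_policy=least_cmd_load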
I do not know how to manage queue depth on HP-UX versions earlier than 11i v3 (11.31).
On HP-UX 11.31 it can be managed with scsimgr. The default is 8.
On HP-UX 11v3 (11.31) to verify current queue depth:
# scsimgr get_attr -D /dev/rdisk/disk151 -a max_q_depth
SCSI ATTRIBUTES FOR LUN : /dev/rdisk/disk151
name = max_q_depth
current = 8
default = 8
saved =
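To change the queue depth on 11.31 (a sketch, using the same example device; save_attr persists the value across reboots):
# scsimgr set_attr -D /dev/rdisk/disk151 -a max_q_depth=8
# scsimgr save_attr -D /dev/rdisk/disk151 -a max_q_depth=8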
Migrating data volumes with pvmove (after the new disks are visible and configured - see above)
vgdisplay | egrep 'VG Name' | egrep -v 'vg00|vgroot'
VG Name /dev/vgeva01
vgdisplay -v /dev/vgeva01 | grep 'PV Name'
PV Name /dev/dsk/c0t0d1
PV Name /dev/dsk/c4t0d1
PV Name /dev/dsk/c0t0d2
PV Name /dev/dsk/c4t0d2
pvcreate /dev/dsk/c9t0d0
pvcreate /dev/dsk/c10t0d1
vgextend /dev/vgeva01 /dev/dsk/c9t0d0
vgextend /dev/vgeva01 /dev/dsk/c10t0d0
vgextend /dev/vgeva01 /dev/dsk/c10t0d1
vgextend /dev/vgeva01 /dev/dsk/c9t0d1
pvmove /dev/dsk/c0t0d1 /dev/dsk/c9t0d0
pvmove /dev/dsk/c0t0d2 /dev/dsk/c10t0d1
pvdisplay /dev/dsk/c0t0d1
pvdisplay /dev/dsk/c0t0d2
vgreduce vgeva01 /dev/dsk/c4t0d1
vgreduce vgeva01 /dev/dsk/c0t0d1
vgreduce vgeva01 /dev/dsk/c4t0d2
vgreduce vgeva01 /dev/dsk/c0t0d2
ioscan -nfCdisk | grep NO_HW
rmsf -H X.X.X…
HP-UX VM:
XP24K Host Mode: 08[HP] This value is set for physical host.
Host Mode Option: 12
Disk attributes are set exactly as on HP-UX. Timeout is defined for PVs and thus is set on the guest (not on the physical host).
Migration Plan (based on importing external LUNs)
Create zones in fabrics with XP24K, but do not activate new zones (don't include them in active zoneset).
On the external STU, present the original LUNs to the XP24K. Do not unpresent them from the original host.
On the XP24K, import the new external LUNs. Attributes: path group = 0 (right-click on an existing path!), ExG=1, CU=20. Fill in the external disk IDs (CU:LDEV - 20:X1, 2, ...).
On the XP24K, create V-Vols with the same capacities (by blocks!) as the original disks. Allocate all new V-Vols under CU FE (FE:Z1, FE:Z2, ...). Pay attention to the pools.
Fill in the new V-Vol IDs in the Excel map. Declare the new V-Vols as Reserved through AutoLUN.
Once the outage window starts, stop all guests
hpvm101# shutdown -h now
Change guest startup to manual
# hpvmmodify -P hpvm101 -B manual
Remove old disk mappings from guest configuration
# hpvmmodify -P hpvm101 -d disk:avio_stor:0,2,0:disk:/dev/rdisk/disk24
# hpvmmodify -P hpvm101 -d disk:avio_stor:0,2,3:disk:/dev/rdisk/disk31
# hpvmmodify -P hpvm101 -d disk:avio_stor:0,2,1:disk:/dev/rdisk/disk41
# hpvmmodify -P hpvm101 -d disk:avio_stor:0,2,2:disk:/dev/rdisk/disk46
Unzone the physical host from the original STU and remove the NO_HW disks
# ioscan -nNfCdisk
# rmsf -H …
Zone a physical host with XP24K. Perform scanning to PLOGI on XP24K.
# ioscan -nNfCdisk
Create a new host group on XP24K and map external disks to the host group.
Scan for new disks on the host. Fill external disk names in Excel map.
# ioscan -nNfCdisk
# xpinfo -i
Map external disks according to Excel map to appropriate guests:
# hpvmmodify -P hpvm101 -a disk:avio_stor:0,2,0:disk:/dev/rdisk/diskY1
# hpvmmodify -P hpvm101 -a disk:avio_stor:0,2,3:disk:/dev/rdisk/diskY2
# hpvmmodify -P hpvm101 -a disk:avio_stor:0,2,1:disk:/dev/rdisk/diskY3
# hpvmmodify -P hpvm101 -a disk:avio_stor:0,2,2:disk:/dev/rdisk/diskY4
Start a guest
# hpvmstart -P hpvm101
Verify a guest
hpvm101# ioscan -nfCdisk
hpvm101# bdf
On each guest change timeout for all PVs to 60:
hpvm101# vgdisplay -v | grep 'PV Name'
hpvm101# pvchange -t 60 /dev/disk/diskXX
Change guest startup to automatic
# hpvmmodify -P hpvm101 -B auto
Verify with reboot
hpvm101# shutdown -h now
# reboot
Perform AutoLUN migrations (parallel or sequential) according to Excel Map.
Change the threshold on the V-Vols to 300% (after V-Vol creation it is set to 5%).
Perform Discard Zero Data on the migrated V-Vols (after AutoLUN migration the V-Vol usage rate is always 100%).
Verify a guest
LINUX:
XP24K Host Mode: 00[Standard]
Host Mode Option: N/A
VMware:
XP24K Host Mode: 01[VMware]. There are also 21[VMware extension] and 40. We currently prefer 01.
Host Mode Option: N/A.
Set Round Robin as Default Path Selection Policy (on ESX with CLI)
# esxcli nmp device list
# shows the current disks' SATP (Storage Array Type Plug-in) and PSP (Path Selection Policy)
# esxcli nmp satp setdefaultpsp --satp VMW_SATP_DEFAULT_AA --psp VMW_PSP_RR
# Change the default PSP for a specific SATP. This may also be performed by editing the appropriate default PSP in /etc/vmware/esx.conf
# esxcli nmp satp listrule
# Information only: shows the mapping between STUs and SATPs (Storage Array Type Plug-ins)
Set Round Robin as PSP for existent disks (on ESX with CLI)
# esxcli nmp device setpolicy --psp VMW_PSP_RR --device naa.600508b4001063af0001e00005390000
# Change the PSP for a specific disk
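To confirm the change on that device (a sketch; the naa ID is the example above):
# esxcli nmp device list | grep -A 3 naa.600508b4001063af0001e00005390000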
To change Queue Depth (on ESX with CLI)
[root@vmh ~]# cat /proc/scsi/qla2xxx/1 # Verify current queue depth
...........
Device queue depth = 0x20 # By default queue depth = 32
......................
[root@vmh1 ~]# vmkload_mod -l | grep qla # Find module name
[root@vmh1 ~]# vmkload_mod -s qla2xxx | grep depth # Find parameter name
ql2xmaxqdepth: int
Maximum queue depth to report for target devices.
[root@vmh1 ~]# esxcfg-module -s ql2xmaxqdepth=8 qla2xxx # Set queue depth = 8. Note: the actual queue depth does not change until reboot.
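After the reboot, the module option and the effective queue depth can be re-checked (a sketch):
[root@vmh1 ~]# esxcfg-module -g qla2xxx                   # show the configured module options
[root@vmh1 ~]# cat /proc/scsi/qla2xxx/1 | grep -i depth   # confirm the new queue depth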