HP ENTERPRISE VIRTUAL ARRAY FAMILY WITH VMWARE VSPHERE 4.0, 4.1 AND 5.0 CONFIGURATION BEST PRACTICES
Technical white paper
The Storage Module for vCenter also provides automated provisioning for datastores and VMs. After the storage administrator has configured the plug-in
The resulting topology should be similar to that presented in Figure 3, which shows a vSphere 4.x server attached to an EVA4400 array through a redundant
In a direct-connect environment, the same principles can be achieved with two more HBAs or HBA ports; however, the configuration is slightly different
Note
When configuring VMware Consolidated Backup (VCB) with an EVA array, all vSphere hosts must have their host operating system type set to VMware. However, the VCB proxy host, which
The overhead created by sparing is calculated as follows:

Sparing capacity = (size of largest disk in the disk group × 2) × protection level
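The formula above can be sketched as a small shell function; the disk size (in GB) and protection level used below are example inputs, not figures from the paper.

```shell
# Minimal sketch of the sparing-capacity formula: reserved capacity is
# (largest disk in the group * 2) multiplied by the protection level.
sparing_capacity() {
  local largest_disk_gb=$1 protection_level=$2
  echo $(( largest_disk_gb * 2 * protection_level ))
}

sparing_capacity 450 1   # prints 900 (GB reserved for single protection)
```

With double protection (protection level 2), the same 450 GB largest disk would reserve 1800 GB.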
Sizing storage for any application that is being virtualized begins with understanding the characteristics of the workload. In the white paper, “Be
Optimizing for availability
When optimizing for availability, your goal is to accommodate particular levels of failures in the array. Availability
Vdisk provisioning
All EVA active-active arrays are asymmetrical and comply with the SCSI ALUA standard. When creating a Vdisk on an EVA array, you
Vdisks should be created with their controller failover/failback preference alternating between Controllers A and B. The above recommendations provide
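The alternating-preference recommendation can be scripted. The sketch below is a dry run that only prints SSSU-style commands rather than executing them; the Vdisk names and the exact PREFERRED_PATH keyword values are illustrative assumptions and should be checked against the SSSU reference for your firmware.

```shell
# Dry-run sketch: print commands that alternate each Vdisk's
# failover/failback preference between Controller A and Controller B.
alternate_prefs() {
  local i=0 vdisk pref
  for vdisk in "$@"; do
    if [ $((i % 2)) -eq 0 ]; then
      pref="PATH_A_FAILOVER_FAILBACK"   # assumed keyword, verify in SSSU docs
    else
      pref="PATH_B_FAILOVER_FAILBACK"
    fi
    printf 'SET VDISK "\\Virtual Disks\\%s" PREFERRED_PATH=%s\n' "$vdisk" "$pref"
    i=$((i + 1))
  done
}

alternate_prefs Vdisk001 Vdisk002 Vdisk003 Vdisk004
```

Because the commands are only printed, the output can be reviewed before being fed to SSSU.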
Figure 7 shows a typical multi-pathing implementation using ESX 3.5 or earlier.
Figure 7. EVA connectivity with ESX 3.5 or earlier
Here, Vdisks
Multi-pathing in vSphere 4.x and 5.0
vSphere 4 introduced the concept of path selection plug-ins (PSPs), which are essentially I/O multi-pathing op
Fixed_AP
Introduced in vSphere 4.1, the Fixed_AP I/O path policy extends the functionality of the Fixed I/O path policy to active-passive and ALUA-compliant
Figure 8 shows a typical multi-pathing implementation using vSphere 4.x/5.
Figure 8. EVA connectivity with vSphere 4.x/5
All I/Os to Vdisks 1 –
Note that the preferred controller for accessing a Vdisk in an ALUA-capable array is defined in SCSI by the PREF bit, which is found in byte 0, bit
Summary
In vSphere 4.x/5, ALUA compliance and support for round robin I/O path policy have eliminated the intricate configuration required to imple
Figure 10 outlines key components of the multi-pathing stack.
Figure 10. vSphere 4.x and 5 multi-pathing stack
The key features of the multi-pa
NMP
The NMP ties together the functionality delivered by the SATP and PSP by handling many non-array-specific activities, including:
– Periodic
Table 3. vSphere 4.x and 5 SATP rules table, with entries that are relevant to EVA arrays denoted by an asterisk
SATP  Default PSP  Description
VMW_S
Connecting to an active-active EVA array in vSphere 4
When connecting a vSphere 4 host to an active-active EVA array, you should use the VMW_SATP_ALUA
for i in `esxcli nmp device list | grep naa.600` ; do esxcli nmp roundrobin setconfig -t iops -I 1 -d $i; done

For ESXi 5:

for i in `esxcli storage n
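The ESXi 5 loop is truncated in the text; it follows the same pattern as the vSphere 4 loop but uses the `esxcli storage nmp` namespace. The sketch below is a dry run in which `echo` prints each command instead of executing it; the device IDs are sample values and the option names should be verified against your ESXi 5 host before running for real.

```shell
# Dry-run sketch of the ESXi 5 equivalent: for each EVA device, set
# round robin path policy to switch paths after every I/O (iops=1).
# "echo" keeps this a dry run; remove it only after verifying the syntax.
for i in naa.60011111 naa.60022222; do
  echo esxcli storage nmp psp roundrobin deviceconfig set -t iops -I 1 -d "$i"
done
```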
Executive summary
The HP Enterprise Virtual Array (EVA) family has been designed for mid-range and enterprise customers with critical requirements t
Connecting to an active-active EVA array in vSphere 4.1/5
In vSphere 4.1, VMware introduced more granular SATP and PSP configuration options. As i
– Create a new rule in the SATP rule table for the array specified with --vendor and --model
– Set the default SATP for this array to VMW_SATP_ALUA
Deleting a manually-added rule
To delete a manually-added rule, use the esxcli nmp satp deleterule command; specify the same options used to create
– Arrays from two or more vendors are ALUA-compliant
– There are different default recommendations for PSPs
vSphere 4.1/5 deployment
If the multi
Figure 11. Relationship between Vdisks and the DR group
Just like a Vdisk, a DR group is managed through one controller or the other; in turn,
Upgrading EVA microcode
An online upgrade of EVA microcode is supported with vSphere 4.x. When performing such upgrades, it is critical to follow the
Using VMFS
VMFS is a high-performance cluster file system designed to eliminate single points of failure while balancing storage resources. This f
– pRDM requires the guest to use the virtual LSI Logic SAS controller
– pRDM is most commonly used when configuring MSCS clustering
There are som
naming convention because the number for a Vdisk in Datacenter A may not be maintained when the Vdisk is replicated and presented to a host in Data
Table 5 outlines the various components of this naming convention.
Table 5. Sample naming convention
Component  Description  Example
<Location>
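A naming convention like the one in Table 5 can be generated rather than typed by hand. Only the <Location> component survives in the extracted text, so the array, Vdisk number, and Vraid fields in this sketch are illustrative assumptions about the remaining components, not the paper's actual scheme.

```shell
# Sketch: compose a datastore name from convention components.
# Field order and separators are assumptions for illustration only.
datastore_name() {
  printf '%s_%s_VD%s_%s\n' "$1" "$2" "$3" "$4"
}

datastore_name NYC EVA4400 001 VR5   # prints NYC_EVA4400_VD001_VR5
```

Generating names this way keeps them consistent between Command View and vCenter, which is the point of the convention.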
Successfully addressing these challenges is imperative if you wish to maximize the return on investment (ROI) for your SAN while continuing to meet
Best practice for naming datastores
– When naming a datastore, utilize the same name used in Command View when the Vdisk was created
– Use simple
Best practices for aligning the file system
– No alignment is required with Windows Vista, Windows 7, or Windows Server 2008.
– Use the vSphere c
vSphere administrators often enable adaptive queuing as a means to address storage congestion issues. However, while this approach can temporarily
single controller despite the environment being balanced from the perspective of Vdisk access. Each port on Controller 2 is processing 300 MB/s of
Figure 15 shows a better-balanced environment, achieved by moving the controller ownerships of VDISK5 and VDISK6 to Controller 1 and of VDISK1 and VDISK2 to
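The imbalance described above is easy to spot by totalling throughput per controller. The sketch below assumes a simple "vdisk,controller,MB/s" input format for illustration; it is not the actual evaperf output format, which would need its own parsing.

```shell
# Sketch: sum per-controller throughput from "vdisk,controller,MB/s"
# records to reveal imbalance before moving Vdisk ownership.
balance_report() {
  awk -F, '{ total[$2] += $3 }
           END { for (c in total) printf "%s %d MB/s\n", c, total[c] }' | sort
}

# Mirrors the example in the text: four busy Vdisks all on Controller 2.
printf 'VDISK1,Controller2,300\nVDISK2,Controller2,300\nVDISK5,Controller2,300\nVDISK6,Controller2,300\n' | balance_report
# prints: Controller2 1200 MB/s
```

A balanced layout would show roughly equal totals for Controller 1 and Controller 2.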
Note
VMware makes a similar recommendation in their knowledge base article 1003469.
Best practice for improving the performance of VMs that generat
In an exclusively EVA environment, change the default PSP option for VMW_SATP_ALUA to VMW_PSP_RR. Round robin I/O path policy is recommended
Summary
In most environments, the best practices highlighted in this document can help you reduce configuration time and improve storage performanc
Glossary
Array  In the context of this document, an array is a group of disks that is housed in one or more disk enclosures. The disks are connecte
Management server  A management server runs management software such as HP Command View EVA and HP Replication Solutions Manager.
RDM  VMwar
Explicit ALUA mode (explicit transition) – A host driver can set or change the managing controller for the Vdisk
EVA arrays also support the fol
Appendix A – Using SSSU to configure the EVA
The sample SSSU script provided in this appendix creates and presents multiple Vdisks to vSphere hosts.
ADD VDISK "\Virtual Disks\DATA_DISKS\Vdisk004" DISK_GROUP="\Disk Groups\DG1" SIZE=180 REDUNDANCY=VRAID5 WRITECACHE=WRITEBACK MI
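Repetitive ADD VDISK lines like the one above can be generated with a small loop. This is a dry-run sketch that only prints the commands, patterned on the line shown; the Vdisk names and count are examples, and nothing is sent to SSSU.

```shell
# Dry-run sketch: emit ADD VDISK commands for several Vraid5 Vdisks,
# following the pattern of the sample SSSU line above.
for n in 001 002 003; do
  printf 'ADD VDISK "\\Virtual Disks\\DATA_DISKS\\Vdisk%s" DISK_GROUP="\\Disk Groups\\DG1" SIZE=180 REDUNDANCY=VRAID5 WRITECACHE=WRITEBACK\n' "$n"
done
```

The printed lines can be reviewed and then pasted into an SSSU session or script file.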
Appendix B – Miscellaneous scripts/commands
This appendix provides scripts/utilities/commands for the following actions:
– Set I/O path policy to
Linux guest
Use one of the following commands to verify that the SCSI disk timeout has been set to a minimum of 60 seconds:
cat /sys/bus/scsi/d
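The timeout check can be wrapped in a reusable function. On a real Linux guest the value usually lives in a sysfs file such as /sys/block/<disk>/device/timeout; the sketch takes the file path as a parameter so it can be pointed at whichever path your distribution exposes, and the demo below uses a temporary file instead of a live guest.

```shell
# Sketch: verify a SCSI disk timeout value is at least 60 seconds.
check_timeout() {
  local t
  t=$(cat "$1")
  if [ "$t" -ge 60 ]; then
    echo "OK: timeout=${t}s"
  else
    echo "LOW: timeout=${t}s (raise to at least 60)"
  fi
}

# Demo against a temporary file standing in for the sysfs entry.
tmp=$(mktemp); echo 60 > "$tmp"
check_timeout "$tmp"   # prints: OK: timeout=60s
rm -f "$tmp"
```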
Appendix C – Balancing I/O throughput between controllers
The example described in this appendix is based on an environment (shown in Figure C-1) w
Figure C-2. I/O routes
In this example, even though the EVA array has a total of eight controller ports (four on each controller), all I/O seems
Figure C-4. Path information for VDISK5
Alternatively, you can review Vdisk properties in Command View EVA to determine controller ownership, as
Figure C-6. Vdisk properties for VDISK5
For a more granular view of throughput distribution, use the following command:
evaperf vd -sz <ar
After a rescan or vCenter refresh, you can verify that the change has been implemented, as shown in Figure C-8.
Figure C-8. Confirming that owners
Appendix D – Caveat for data-in-place upgrades and Continuous Access EVA
The vSphere datastore may become invisible after one of the following actions
Configuring EVA arrays
HP provides tools to help you configure and maintain EVA arrays. For example, intuitive Command View EVA can be used to simpl
Note
A similar mismatch would occur if you attempted to use Continuous Access EVA to replicate from the EVA4400 to the EVA8400. When such a mismatc
Appendix E – Configuring VMDirectPath I/O for Command View EVA in a VM
This appendix describes how to configure VMDirectPath I/O in a vSphere 4.x e
Figure E-1. Storage Adapters view, available under the Configuration tab of vSphere Client
This appendix shows how to assign HBA3 to VM2 in vSp
Table E-2. EVA array configuration summary
Component  Description
EVA disk group  Default disk group, with 13 physical disks
Vdisks  \VMDirectPath
Fibre Channel configuration
This example uses two HP 4/64 SAN switches, with a zone created on each. The Fibre Channel configuration is summarized
The procedure is as follows:
1. Identify which HBAs are present on the vSphere server by issuing the following command:
[root@lx100 ~]# lspci |
However, if your server hardware does not support Intel® Virtualization Technology for Directed I/O (VT-d) or AMD Extended Page Tables (EPT), Neste
3. If your server has compatible hardware, click on the Configure Passthrough… link to move to the Mark devices for passthrough page, as shown in
4. Select the desired devices for VMDirectPath; select and accept the passthrough device dependency check shown in Figure E-6.
IMPORTANT
If you se
As shown in Figure E-7, the VMDirectPath Configuration screen reflects the changes you have made. Device icons indicate that the changes will only
Running Command View EVA within a VM
Your ability to deploy Command View EVA within a VM may be impacted by the following:
– EVA model
– Command V
6. After the reboot, confirm that device icons are green, as shown in Figure E-8, indicating that the VMDirectPath-enabled HBA ports are ready to
Note
The changes you have just made are stored in the file /etc/vmware/esx.conf.
Configuring the array
Use Command View EVA to perform the following st
Procedure
Carry out the following steps to add VMDirectPath devices to a selected VM:
1. From the vSphere client, select VM2 from the inventory, e
5. From the list of VMDirectPath devices, select the desired device to assign to the VM, as shown in Figure E-12. In the example, select Port 1 o
For more information
Data storage from HP: http://welcome.hp.com/country/us/en/prodserv/storage.html
HP virtualization with VMware: http://h18004.www
Best practices for deploying Command View EVA in a VM with VMDirectPath I/O
– Deploy Command View EVA on the local datastore of the particular vSph
Figure 1 shows the overview screen for the HP Insight Control Storage Module for vCenter plug-in.
Figure 1. Typical overview screen for the vCenter