HP Scalable File Share User's Guide G3.0-0
HP Part Number: SFSUGG3-D
Published: December 2008
Edition: 1
1 What's In This Version
1.1 About This Product
HP SFS G3.0-0 uses the Lustre File System on MSA2000fc hardware to provide a storage system for stan
• Keyboard, video, and mouse (KVM) switch
• TFT console display
All of the DL380 G5 file system servers must have their eth0 Ethernet interfaces connect
Figure 1-2 MDS and Administration Server
Figure 1-2 shows a block diagram of an MDS server and an administration server with two MSA2000fc enclosures. T
Figure 1-3 OSS Server
Figure 1-3 shows a block diagram of a pair of OSS servers with two HP MSA2000fc enclosures.
1.3.1.1 Fibre Channel Switch Zoning
If
2 Installing and Configuring MSA2000fc Arrays
This chapter provides a summary of steps to install and configure MSA2000fc arrays for use in HP SFS G3.0-
2.3.2 Creating New Volumes
To create new volumes on a set of MSA2000 arrays, follow these steps:
1. Power on all the MSA2000 shelves.
2. Define an alias.
# forostmsas create vdisk level raid6 disks 16-26 spare \
27 vdisk2 ; done
NOTE: For MGS and MDS nodes, HP SFS uses RAID10. An example MSA2000 CLI comman
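As an illustration only (the alias name, disk numbers, and vdisk name below are hypothetical and are not this guide's own example), a RAID10 vdisk for an MGS/MDS array could be created with the same CLI pattern:
# formgsmsas create vdisk level raid10 disks 0-3 spare 4 vdisk1 ; done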
3 Installing and Configuring HP SFS Software on Server Nodes
This chapter provides information about installing and configuring HP SFS G3.0-0 Software o
© Copyright 2008 Hewlett-Packard Development Company, L.P.
Confidential computer software. Valid license from HP required for possession, use or copyin
For the minimum firmware versions supported, see Table 3-1. Upgrade the firmware versions, if necessary. You can download firmware from the HP IT Resour
• Provide root password information.
These edits must be made, or the Kickstart process will halt, prompt for input, and/or fail.
There are also some op
## Template ADD bootloader --location=mbr --driveorder=%{ks_harddrive}
## Template ADD ignoredisk --drives=%{ks_ignoredisk}
## Template ADD clearpart
On another system, if it has not already been done, you must create and mount a Linux file system on the thumb drive. After you insert the thumb drive
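For example, assuming the thumb drive appears as the hypothetical device /dev/sdb1 and an ext3 file system is used, a minimal sketch is:
# mkfs.ext3 /dev/sdb1
# mkdir -p /media/usb
# mount /dev/sdb1 /media/usb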
%{nfs_server} must be replaced by the installation NFS server address or FQDN.
%{nfs_iso_path} must be replaced by the NFS path to the RHEL5U2 ISO dire
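As an illustration only, assuming the template uses these placeholders in a Kickstart nfs installation-source line, the substituted result might look like the following (the address and path are hypothetical):
nfs --server=192.168.1.10 --dir=/export/rhel5u2/iso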
HOSTNAME=mynode1
3.5.2 Creating the /etc/hosts file
Create an /etc/hosts file with the names and IP addresses of all the Ethernet interfaces on each syst
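A minimal sketch of such a file, with hypothetical node names and addresses, is shown below; note that the plain node name appears first on each line, as required by the Heartbeat configuration scripts described later:
192.168.16.1   n1   n1-eth0
192.168.16.2   n2   n2-eth0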
4 Installing and Configuring HP SFS Software on Client Nodes
This chapter provides information about installing and configuring HP SFS G3.0-0 Software o
NOTE: The network address shown above is the InfiniBand IPoIB ib0 interface for the HP SFS G3.0-0 Management Server (MGS) node, which must be accessibl
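For reference, a client reaches the file system through that interface. A minimal sketch of a client mount, assuming a hypothetical MGS IPoIB address (172.31.16.1) and a hypothetical file system name (testfs):
# mount -t lustre 172.31.16.1@o2ib:/testfs /mnt/testfs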
• kernel-source-xxx RPM to go with the installed kernel
1. Install the Lustre source RPM as provided on the HP SFS G3.0-0 Software tarball in the /opt/h
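A minimal sketch of that installation step, run from the directory containing the RPM (the wildcard stands in for the exact package version, which is not shown here):
# rpm -ivh lustre-source-*.rpm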
5 Using HP SFS Software
This chapter provides information about creating, configuring, and using the file system.
5.1 Creating a Lustre File System
The f
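As background only, and not this guide's exact procedure, a minimal sketch of formatting Lustre targets with mkfs.lustre, using hypothetical device paths, a hypothetical MGS address, and a hypothetical file system name:
# mkfs.lustre --mgs /dev/mapper/mpath0
# mkfs.lustre --fsname=testfs --mdt --mgsnode=172.31.16.1@o2ib /dev/mapper/mpath1
# mkfs.lustre --fsname=testfs --ost --mgsnode=172.31.16.1@o2ib /dev/mapper/mpath2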
mpath5 (3600c0ff000d5455bc8c95f4801000000) dm-3 HP,MSA2212fc
[size=4.1T][features=1 queue_if_no_path][hwhandler=0]
\_ round-robin 0 [prio=50][active]
how Heartbeat is configured. Manual fail back can prevent system oscillation if, for example, a bad node reboots continuously. Heartbeat nodes send mess
NOTE: The gen_hb_config_files.pl script only works if the host names in the /etc/hosts file appear with the plain node name first, as follows:
192.168
It is possible to generate the simple files ha.cf, haresources, and authkeys by hand if necessary. One set of ha.cf with haresources is needed for each
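As an illustration only, a minimal sketch of a hand-written set for a hypothetical failover pair n3/n4 follows; the timing values, interface, device, and mount point are assumptions, not the values shipped with HP SFS G3.0-0:
# ha.cf
keepalive 2
deadtime 30
udpport 694
bcast eth0
auto_failback off
node n3
node n4

# haresources
n3 Filesystem::/dev/mapper/mpath2::/mnt/ost2::lustre

# authkeys
auth 1
1 sha1 ChangeThisSecret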
5.2.3.2 Editing cib.xml
The haresources2cib.py script places a number of default values in the cib.xml file that are unsuitable for HP SFS G3.0-0.
• By d
10 Resources configured.
============
Node: n4 (0236b688-3bb7-458a-839b-c19a69d75afa): online
Node: n3 (48610537-c58e-48c5-ae4c-ae44d56527c6): online
File
• Heartbeat uses iLO for STONITH I/O fencing. If a Heartbeat configuration has two nodes in a failover pair, Heartbeat would like both of those nodes t
5.5 Testing Your Configuration
The best way to sanity test your Lustre file system is to perform normal file system operations, such as normal Linux fil
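For example, one simple check is to write and then read back a large file from a client, using a hypothetical mount point:
# dd if=/dev/zero of=/mnt/testfs/ddtest bs=1M count=1024
# dd if=/mnt/testfs/ddtest of=/dev/null bs=1M
# rm /mnt/testfs/ddtest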
13 UP osc hpcsfsc-OST0004-osc hpcsfsc-mdtlov_UUID 5
14 UP osc hpcsfsc-OST0006-osc hpcsfsc-mdtlov_UUID 5
15 UP osc hpcsfsc-OST0007-osc hpcsfsc-mdtlov_U
Run this command on each server node for all the mpaths which that node normally mounts.
4. Run chkconfig heartbeat off on all server nodes and reboot them
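For example, on each server node:
# chkconfig heartbeat off
# reboot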
Also see man collectl.
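As an illustration, collectl can sample the common subsystems during a test run; the options below are an assumption of typical usage, not a prescribed invocation:
# collectl -scdn -i 5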
6 Licensing Information
When you ordered the licenses for your HP SFS G3.0-0 system, you received letters with the title License-To-Use from HP. There w
7 Known Issues and Workarounds
The following items are known issues and workarounds.
7.1 Server Reboot
After the server reboots, it checks the file syste
4. Manually mount mgs on the MGS node:
# mount /mnt/mgs
5. Manually mount mds on the MDS node:
# mount /mnt/mds
In the MDS /var/log/messages file, look fo
A HP SFS G3.0-0 Performance
A.1 Benchmark Platform
HP SFS G3.0-0, based on Lustre File System Software, is designed to provide the performance and scalab
The Lustre servers were DL380 G5s with two quad-core processors and 16GB of memory, running RHEL v5.1. These servers were configured in failover pairs
Figure A-3 shows single stream performance for a single process writing and then reading a single 8GB file. The file was written in a directory with a
The test shown in Figure A-5 did not use direct I/O. Nevertheless, it shows the cost of client cache management on throughput. In this test, two proces
Figure A-6 Multi-Client Throughput Scaling
In general, Lustre scales quite well with additional OSS servers if the workload is evenly distributed over t
A.4 One Shared File
Frequently in HPC clusters, a number of clients share one file either for read or for write. For example, each of N clients could wr
Another way to measure throughput is to average only over the time while all the clients are active. This is represented by the taller, narrower box in
For workloads that require a lot of disk head movement relative to the amount of data moved, SAS disk drives provide a significant performance benefit.
About This Document
This document provides installation and configuration information for HP Scalable File Share (SFS) G3.0-0. Overviews of installing a
WARNING A warning calls attention to important information that if not understood or followed will result in personal injury or nonrecoverable system pr
• HP StorageWorks Scalable File Share for SFS20 Enclosure Hardware Installation Guide Version 2.2 at:
http://docs.hp.com/en/8958/HP_StorageWorks_SFS_SFS