HP M0T30A manual

  • Brand: HP
  • Product: Disk array
  • Model/name: M0T30A
  • Filetype: PDF
  • Available languages: English, Spanish

Table of Contents

HP MSA 2040
User Guide
For firmware release GL200 or later
HP Part Number: 723983-003
Published: September 2014
Edition: 1
Abstract
This document describes initial hardware setup for HP MSA 2040 controller enclosures, and is intended for use by storage system
administrators familiar with servers and computer networks, network administration, storage system installation and configuration,
storage area network management, and relevant protocols.
© Copyright 2014 Hewlett-Packard Development Company, L.P.
Confidential computer software. Valid license from HP required for possession, use or copying. Consistent with FAR 12.211 and 12.212, Commercial
Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under
vendor's standard commercial license.
The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express
warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall
not be liable for technical or editorial errors or omissions contained herein.
Acknowledgments
Microsoft® and Windows® are U.S. registered trademarks of Microsoft Corporation.
UNIX® is a registered trademark of The Open Group.
Warranty
WARRANTY STATEMENT: To obtain a copy of the warranty for this product, see the warranty information website:
http://www.hp.com/go/storagewarranty
Contents
1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
MSA 2040 Storage models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
MSA 2040 SAN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
MSA 2040 SAS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
Features and benefits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2 Components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
Front panel components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
MSA 2040 Array SFF enclosure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
MSA 2040 Array LFF or supported drive expansion enclosure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
Disk drives used in MSA 2040 enclosures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
Controller enclosure—rear panel layout. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
MSA 2040 SAN controller module—rear panel components. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
MSA 2040 SAS controller module—rear panel components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
Drive enclosures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
LFF drive enclosure — rear panel layout . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
SFF drive enclosure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
Cache. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
Transportable CompactFlash . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
Supercapacitor pack. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
Upgrading to MSA 2040 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
3 Installing the enclosures. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
Installation checklist . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
FDE considerations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
Connecting controller and drive enclosures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
Connecting the MSA 2040 controller to the SFF drive enclosure . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
Connecting the MSA 2040 controller to the LFF drive enclosure. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
Connecting the MSA 2040 controller to mixed model drive enclosures. . . . . . . . . . . . . . . . . . . . . . . . 21
Cable requirements for MSA 2040 enclosures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
Testing enclosure connections. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
Powering on/powering off. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
AC power supply. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
DC and AC power supplies equipped with a power switch. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
Connect power cable to DC power supply. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
Connect power cord to legacy AC power supply . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
Power cycle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
4 Connecting hosts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
Host system requirements. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
Connecting the enclosure to data hosts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
MSA 2040 SAN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
Fibre Channel protocol . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
10GbE iSCSI protocol . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
1 Gb iSCSI protocol . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
MSA 2040 SAS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
Connecting direct attach configurations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
Single-controller configurations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
One server/one HBA/single path . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
Dual-controller configurations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
One server/one HBA/dual path . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
Two servers/one HBA per server/dual path . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
Four servers/one HBA per server/dual path . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
Connecting switch attach configurations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
Dual controller configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
Two servers/two switches . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
Four servers/multiple switches/SAN fabric . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
Connecting remote management hosts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
Connecting two storage systems to replicate volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
Cabling for replication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
Host ports and replication. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
Single-controller configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
One server/single network/two switches . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
Dual-controller configuration. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
Multiple servers/single network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
Multiple servers/different networks/multiple switches . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
Updating firmware . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
5 Connecting to the controller CLI port . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
Device description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
Preparing a Linux computer before cabling to the CLI port . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
Downloading a device driver for Windows computers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
Obtaining IP values . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
Setting network port IP addresses using DHCP. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
Setting network port IP addresses using the CLI port and cable . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
Using the CLI port and cable—known issues on Windows . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
Workaround . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
6 Basic operation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
Accessing the SMU. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
Configuring and provisioning the storage system . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
7 Troubleshooting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
USB CLI port connection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
Fault isolation methodology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
Basic steps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
Options available for performing basic steps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
Use the SMU . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
Use the CLI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
Monitor event notification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
View the enclosure LEDs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
Performing basic steps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
Gather fault information. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
Determine where the fault is occurring. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
Review the event logs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
Isolate the fault . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
If the enclosure does not initialize . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
Correcting enclosure IDs. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
Stopping I/O. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
Diagnostic steps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
Is the enclosure front panel Fault/Service Required LED amber?. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
Is the enclosure rear panel FRU OK LED off? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
Is the enclosure rear panel Fault/Service Required LED amber? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
Are both disk drive module LEDs off (Online/Activity and Fault/UID)? . . . . . . . . . . . . . . . . . . . . . . . . 57
Is the disk drive module Fault/UID LED blinking amber? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
Is a connected host port Host Link Status LED off? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
Is a connected port Expansion Port Status LED off?. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
Is a connected port Network Port Link Status LED off?. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
Is the power supply Input Power Source LED off? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
Is the power supply Voltage/Fan Fault/Service Required LED amber? . . . . . . . . . . . . . . . . . . . . . . . . 59
Controller failure in a single-controller configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
If the controller has failed or does not start, is the Cache Status LED on/blinking? . . . . . . . . . . . . . . . . 60
Transporting cache . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
Isolating a host-side connection fault . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
Host-side connection troubleshooting featuring host ports with SFPs . . . . . . . . . . . . . . . . . . . . . . . . . . 60
Host-side connection troubleshooting featuring SAS host ports . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
Isolating a controller module expansion port connection fault . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
Isolating Remote Snap replication faults. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
Cabling for replication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
Replication setup and verification. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
Diagnostic steps for replication setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
Can you successfully use the Remote Snap feature?. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
Can you view information about remote links? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
Can you create a replication set? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
Can you replicate a volume? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
Can you view a replication image?. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
Can you view remote systems? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
Resolving voltage and temperature warnings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
Sensor locations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
Power supply sensors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
Cooling fan sensors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
Temperature sensors. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
Power supply module voltage sensors. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
8 Support and other resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
Contacting HP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
Subscription service . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
Product advisories . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
Related information. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
Prerequisites . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
Troubleshooting resources. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
Typographic conventions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
Rack stability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
Customer self repair . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
Product warranties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
9 Documentation feedback. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
A Regulatory information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
Belarus Kazakhstan Russia marking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
Turkey RoHS material content declaration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
Ukraine RoHS material content declaration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
Warranty information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
HP ProLiant Servers. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
HP Enterprise Servers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
HP Storage Products. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
HP Networking Products . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
B LED descriptions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
Front panel LEDs. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
MSA 2040 Array SFF enclosure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
MSA 2040 Array LFF or supported 12-drive expansion enclosure . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
Disk drive LEDs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
Rear panel LEDs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
Controller enclosure—rear panel layout . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
MSA 2040 SAN controller module—rear panel LEDs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
MSA 2040 SAS controller module—rear panel LEDs. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
Power supply LEDs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
MSA 2040 6 Gb 3.5" 12-drive enclosure—rear panel layout. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
D2700 6Gb drive enclosure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
C Specifications and requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
Safety requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
Site requirements and guidelines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
Site wiring and AC power requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
Site wiring and DC power requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
Weight and placement guidelines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
Electrical guidelines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
Ventilation requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
Cabling requirements. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
Management host requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
Physical requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
Environmental requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
Electrical requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
Site wiring and power requirements. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
Power cord requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
D Electrostatic discharge . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
Preventing electrostatic discharge . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
Grounding methods to prevent electrostatic discharge. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
E SFP option for host ports. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
Locate the SFP transceivers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
Install an SFP transceiver . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
Verify component operation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
Figures
1 MSA 2040 Array SFF enclosure: front panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2 MSA 2040 Array LFF or supported 12-drive enclosure: front panel . . . . . . . . . . . . . . . . . . . . . . . . . 13
3 MSA 2040 Array: rear panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
4 MSA 2040 SAN controller module face plate (FC or 10GbE iSCSI). . . . . . . . . . . . . . . . . . . . . . . . . 15
5 MSA 2040 SAN controller module face plate (1 Gb RJ-45) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
6 MSA 2040 SAS controller module face plate (HD mini-SAS) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
7 LFF 12-drive enclosure: rear panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
8 MSA 2040 CompactFlash card . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
9 Cabling connections between the MSA 2040 controller and a single drive enclosure. . . . . . . . . . . . . 22
10 Cabling connections between the MSA 2040 controller and a single drive enclosure. . . . . . . . . . . . . 22
11 Cabling connections between MSA 2040 controllers and LFF drive enclosures . . . . . . . . . . . . . . . . . 23
12 Cabling connections between MSA 2040 controllers and SFF drive enclosures . . . . . . . . . . . . . . . . . 24
13 Cabling connections between MSA 2040 controllers and drive enclosures of mixed model type . . . . . 25
14 Fault-tolerant cabling connections showing maximum number of enclosures of same type . . . . . . . . . . 26
15 Cabling connections showing maximum enclosures of mixed model type . . . . . . . . . . . . . . . . . . . . . 27
16 AC power supply. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
17 DC and AC power supplies with power switch . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
18 DC power cable featuring sectioned D-shell and lug connectors. . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
19 Connecting hosts: direct attach—one server/one HBA/single path . . . . . . . . . . . . . . . . . . . . . . . . . 36
20 Connecting hosts: direct attach—one server/one HBA/dual path . . . . . . . . . . . . . . . . . . . . . . . . . . 36
21 Connecting hosts: direct attach—two servers/one HBA per server/dual path . . . . . . . . . . . . . . . . . . 36
22 Connecting hosts: direct attach—four servers/one HBA per server/dual path . . . . . . . . . . . . . . . . . . 37
23 Connecting hosts: switch attach—two servers/two switches. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
24 Connecting hosts: switch attach—four servers/multiple switches/SAN fabric. . . . . . . . . . . . . . . . . . . 38
25 Connecting two storage systems for Remote Snap: one server/two switches/one location. . . . . . . . . . 40
26 Connecting two storage systems for Remote Snap: multiple servers/one switch/one location. . . . . . . . 40
27 Connecting two storage systems for Remote Snap: multiple servers/switches/one location . . . . . . . . . 41
28 Connecting two storage systems for Remote Snap: multiple servers/switches/two locations. . . . . . . . . 41
29 Connecting two storage systems for Remote Snap: multiple servers/SAN fabric/two locations . . . . . . 42
30 Connecting a USB cable to the CLI port . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
31 LEDs: MSA 2040 Array SFF enclosure front panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
32 LEDs: MSA 2040 Array LFF enclosure front panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
33 LEDs: Disk drive combinations — enclosure front panel. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
34 MSA 2040 SAN Array: rear panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
35 LEDs: MSA 2040 SAN controller module (FC and 10GbE SFPs) . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
36 LEDs: MSA 2040 SAN controller module (1 Gb RJ-45 SFPs) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
37 LEDs: MSA 2040 SAS controller module. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
38 LEDs: MSA 2040 Storage system enclosure power supply modules . . . . . . . . . . . . . . . . . . . . . . . . . 86
39 LEDs: MSA 2040 6 Gb 3.5" 12-drive enclosure rear panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
40 Install a qualified SFP option . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
Tables
1 Installation checklist . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
2 Terminal emulator display settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
3 Terminal emulator connection settings. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
4 Diagnostics LED status: Front panel “Fault/Service Required” . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
5 Diagnostics LED status: Rear panel “FRU OK” . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
6 Diagnostics LED status: Rear panel “Fault/Service Required”. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
7 Diagnostics LED status: Front panel disks “Online/Activity” and “Fault/UID” . . . . . . . . . . . . . . . . . . . . 57
8 Diagnostics LED status: Front panel disks “Fault/UID” . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
9 Diagnostics LED status: Rear panel “Host Link Status” . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
10 Diagnostics LED status: Rear panel “Expansion Port Status” . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
11 Diagnostics LED status: Rear panel “Network Port Link Status” . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
12 Diagnostics LED status: Rear panel power supply “Input Power Source” . . . . . . . . . . . . . . . . . . . . . . . 59
13 Diagnostics LED status: Rear panel power supply: “Voltage/Fan Fault/Service Required” . . . . . . . . . . . 59
14 Diagnostics LED status: Rear panel “Cache Status”. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
15 Diagnostics for replication setup: Using Remote Snap feature . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
16 Diagnostics for replication setup: Viewing information about remote links . . . . . . . . . . . . . . . . . . . . . . 64
17 Diagnostics for replication setup: Creating a replication set. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
18 Diagnostics for replication setup: Replicating a volume. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
19 Diagnostics for replication setup: Viewing a replication image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
20 Diagnostics for replication setup: Viewing a remote system . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
21 Power supply sensor descriptions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
22 Cooling fan sensor descriptions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
23 Controller module temperature sensor descriptions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
24 Power supply temperature sensor descriptions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
25 Voltage sensor descriptions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
26 Document conventions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
27 Rackmount enclosure dimensions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
28 Rackmount enclosure weights . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
1 Overview
HP MSA Storage models are high-performance storage solutions combining outstanding performance with
high reliability, availability, flexibility, and manageability. MSA 2040 enclosure models are designed to
meet NEBS Level 3, MIL-STD-810G (storage requirements), and European Telco specifications.
MSA 2040 Storage models
The MSA 2040 enclosures support either large form factor (LFF 12-disk) or small form factor (SFF 24-disk)
2U chassis, using either AC or DC power supplies. HP MSA Storage models include MSA 2040 SAN and
MSA 2040 SAS controllers, which are introduced below.
NOTE: For additional information about MSA 2040 controller modules, see the following subsections:
• "Controller enclosure—rear panel layout" (page 14)
• "MSA 2040 SAN controller module—rear panel components" (page 15)
• "MSA 2040 SAS controller module—rear panel components" (page 16)
The MSA 2040 enclosures support both traditional linear storage and new virtual storage, which uses
paged-storage technology. For linear storage, a group of disks with an assigned RAID level is called a
vdisk or linear disk group. For virtual storage, a group of disks with an assigned RAID level is called a
virtual disk group. This guide uses the term vdisk when specifically referring to linear storage, and uses the
term disk group otherwise.
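As an illustration of the terminology, the sketch below creates one disk group of each type from the CLI. The add disk-group command shown, its parameters, and the account, address, and disk ranges are assumptions included for illustration only; consult the HP MSA 2040 CLI Reference Guide for the exact syntax supported by your firmware release.
  # Illustrative sketch only -- command parameters, credentials, and disk IDs are assumptions.
  # Create a linear disk group (vdisk) on disks 1.1 through 1.4 with RAID 5:
  ssh manage@192.168.0.10 'add disk-group type linear level raid5 disks 1.1-4 dgLinear1'
  # Create a virtual disk group on disks 1.5 through 1.10 with RAID 6, assigned to pool A:
  ssh manage@192.168.0.10 'add disk-group type virtual level raid6 disks 1.5-10 pool a dgVirtual1'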
MSA 2040 enclosure user interfaces
The MSA 2040 enclosures support two versions of the Storage Management Utility (SMU), which is a
web-based application for configuring, monitoring, and managing the storage system. Both SMU versions
(v3 and v2) and the command-line interface are briefly described.
• v3 is the new primary web interface for the enclosures, providing access to all common management
functions for both linear and virtual storage.
• v2 is a secondary web interface for the enclosures, providing access to traditional linear storage
functions. This legacy interface provides certain functionality that is not available in the primary
interface.
• The command-line interface (CLI) enables you to interact with the storage system using command syntax
entered via the keyboard or scripting. You can set a CLI preference to use v3 or v2 terminology in
command output and system messages.
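For example, because the management CLI is reachable over the network, status commands can be scripted from a management host. The manage account and address below are placeholder assumptions; substitute the values for your installation.
  # Minimal scripting sketch (placeholder account and address):
  # query overall system status, then list the installed disks.
  ssh manage@192.168.0.10 'show system'
  ssh manage@192.168.0.10 'show disks'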
NOTE: For more information about the web-based application, see the HP MSA 2040 SMU Reference
Guide or online help. For more information about the CLI, see the HP MSA 2040 CLI Reference Guide.
MSA 2040 SAN
MSA 2040 SAN models use Converged Network Controller technology, allowing you to select the desired
host interface protocol from the available Fibre Channel (FC) or Internet SCSI (iSCSI) host interface
protocols supported by the system. You can use the CLI to set all controller module host ports to use one of
these host interface protocols:
• 16 Gb FC
• 8 Gb FC
• 4 Gb FC
• 10 GbE iSCSI
• 1 GbE iSCSI
Alternatively, you can use the management interfaces to set Converged Network Controller ports to support
a combination of host interface protocols. When configuring a combination of host interface protocols,
host ports 1 and 2 are set to FC (either both 16 Gbit/s or both 8 Gbit/s), and host ports 3 and 4 must be
set to iSCSI (either both 10 GbE or both 1 GbE), provided the Converged Network Controller ports use the
qualified SFP connectors and cables required for supporting the selected host interface protocol. See
"MSA 2040 SAN controller module—rear panel LEDs" (page 83) for more information.
IMPORTANT: See the “HP MSA 2040 SAN Storage array and iSCSI SFPs Read This First” document for
important information pertaining to iSCSI SFPs.
TIP: See the “Configuring host ports” topic within the SMU Reference Guide for information about
configuring Converged Network Controller ports with host interface protocols of the same type or a
combination of types.
MSA 2040 SAS
MSA 2040 SAS models provide four high density mini-SAS (HD mini-SAS) ports per controller module. The
HD mini-SAS host interface protocol uses the SFF-8644 external connector interface defined for SAS3.0 to
support a link rate of 12 Gbit/s using the qualified connectors and cable options. See "MSA 2040 SAS
controller module—rear panel LEDs" (page 85) for more information.
Features and benefits
Product features and supported options are subject to change. Online documentation describes the latest
product and product family characteristics, including currently supported features, options, technical
specifications, configuration data, related optional software, and product warranty information.
NOTE: Check the QuickSpecs for a complete list of supported servers, operating systems, disk drives, and
options. See http://www.hp.com/support/msa2040/QuickSpecs.
2 Components
Front panel components
HP MSA 2040 models support small form factor (SFF) and large form factor (LFF) enclosures. The SFF
chassis, configured with 24 2.5" SFF disks, is used as a controller enclosure. The LFF chassis, configured
with 12 3.5" LFF disks, is used as either a controller enclosure or a drive enclosure.
Supported drive enclosures, used for adding storage, are available in LFF or SFF chassis. The MSA 2040
6 Gb 3.5" 12-drive enclosure is the large form factor drive enclosure used for storage expansion. The HP
D2700 6 Gb enclosure, configured with 25 2.5" SFF disks, is the small form factor drive enclosure used for
storage expansion. See "SFF drive enclosure" (page 17) for a description of the D2700.
MSA 2040 Array SFF enclosure
Figure 1 MSA 2040 Array SFF enclosure: front panel
MSA 2040 Array LFF or supported drive expansion enclosure
Figure 2 MSA 2040 Array LFF or supported 12-drive enclosure: front panel
1 Enclosure ID LED
2 Disk drive Online/Activity LED
3 Disk drive Fault/UID LED
4 Unit Identification (UID) LED
5 Heartbeat LED
6 Fault ID LED
Note: Integers on disks indicate drive slot numbering sequence.
1 Enclosure ID LED
2 Disk drive Online/Activity LED
3 Disk drive Fault/UID LED
4 Unit Identification (UID) LED
5 Heartbeat LED
6 Fault ID LED
Note: Integers on disks indicate drive slot numbering sequence.
Disk drives used in MSA 2040 enclosures
MSA 2040 enclosures support LFF/SFF Midline SAS, LFF/SFF Enterprise SAS, and SFF SSD disks. They
also support LFF/SFF Midline SAS and LFF/SFF Enterprise self-encrypting disks that work with the Full Disk
Encryption (FDE) features. For information about creating disk groups and adding spares using these
different disk drive types, see the HP MSA 2040 SMU Reference Guide and HP MSA 2040 Solid State
Drive Read This First document. Also see "FDE considerations" (page 19).
Controller enclosure—rear panel layout
The diagram and table below display and identify important component items comprising the rear panel
layout of the MSA 2040 controller enclosure (MSA 2040 SAN is shown in the example).
Figure 3 MSA 2040 Array: rear panel
A controller enclosure accommodates two power supply FRUs of the same type—either both AC or both
DC—within the two power supply slots (see two instances of callout 1 above). The controller enclosure
accommodates two controller module FRUs of the same type within the I/O module slots (see callouts 2
and 3 above).
IMPORTANT: If the MSA 2040 controller enclosure is configured with a single controller module, the
controller module must be installed in the upper slot (see callout 2 above), and an I/O module blank must
be installed in the lower slot (see callout 3 above). This configuration is required to allow sufficient air flow
through the enclosure during operation.
The diagrams with tables that immediately follow provide descriptions of the different controller modules
and power supply modules that can be installed into the rear panel of an MSA 2040 controller enclosure.
Showing controller modules and power supply modules separately from the enclosure provides improved
clarity in identifying the component items called out in the diagrams and described in the tables.
Descriptions are also provided for optional drive enclosures supported by MSA 2040 controller enclosures
for expanding storage capacity.
1 AC Power supplies
2 Controller module A (see face plate detail figures)
3 Controller module B (see face plate detail figures)
4 DC Power supply (2) — (DC model only)
5 DC Power switch
NOTE: MSA 2040 controller enclosures support hot-plug replacement of redundant controller modules,
fans, power supplies, and I/O modules. Hot-add of drive enclosures is also supported.
MSA 2040 SAN controller module—rear panel components
Figure 4 shows host ports configured with either 8/16 Gb FC or 10GbE iSCSI SFPs. The SFPs look
identical. Refer to the LEDs that apply to the specific configuration of your Converged Network Controller ports.
Figure 4 MSA 2040 SAN controller module face plate (FC or 10GbE iSCSI)
Figure 5 shows Converged Network Controller ports configured with 1 Gb RJ-45 SFPs.
Figure 5 MSA 2040 SAN controller module face plate (1 Gb RJ-45)
1 Host ports: used for host connection or replication
[see "Install an SFP transceiver" (page 95)]
2 CLI port (USB - Type B)
3 Service port 2 (used by service personnel only)
4 Reserved for future use
5 Network port
6 Service port 1 (used by service personnel only)
7 Disabled button (used by engineering only)
(Sticker shown covering the opening)
8 SAS expansion port
NOTE: See "MSA 2040 SAN" (page 11) for more information about Converged Network Controller
technology. For port configuration, see the “Configuring host ports” topic within the
HP MSA 2040 SMU Reference Guide or online help.
MSA 2040 SAS controller module—rear panel components
Figure 6 shows host ports configured with 12 Gbit/s HD mini-SAS connectors.
Figure 6 MSA 2040 SAS controller module face plate (HD mini-SAS)
IMPORTANT: See Connecting to the controller CLI port for information about enabling the controller
enclosure USB Type-B CLI port for accessing the command-line interface via a terminal emulator.
Drive enclosures
Drive enclosure expansion modules attach to MSA 2040 controller modules via the mini-SAS expansion
port, allowing addition of disk drives to the system. MSA 2040 controller enclosures support adding the
6 Gb drive enclosures described below.
1 HD mini-SAS ports: used for host connection
2 CLI port (USB - Type B)
3 Service port 2 (used by service personnel only)
4 Reserved for future use
5 Network port
6 Service port 1 (used by service personnel only)
7 Disabled button (used by engineering only)
(Sticker shown covering the opening)
8 SAS expansion port
LFF drive enclosure — rear panel layout
MSA 2040 controllers support the MSA 2040 6 Gb 3.5" 12-drive enclosure shown below.
Figure 7 LFF 12-drive enclosure: rear panel
SFF drive enclosure
MSA 2040 controllers support the D2700 6 Gb drive enclosure for adding storage. For information about
this product, visit http://www.hp.com/support. Pictorial representations of this drive enclosure are also
provided in the MSA 2040 Quick Start Instructions and MSA 2040 Cable Configuration Guide.
Cache
To enable faster data access from disk storage, the following types of caching are performed:
• Write-back or write-through caching. The controller writes user data in the cache memory on the
module rather than directly to the drives. Later, when the storage system is either idle or aging—and
continuing to receive new I/O data—the controller writes the data to the drive array.
• Read-ahead caching. The controller detects sequential array access, reads ahead into the next
sequence of data, and stores the data in the read-ahead cache. Then, if the next read access is for
cached data, the controller immediately loads the data into the system memory, avoiding the latency of
a disk access.
NOTE: See the HP MSA 2040 SMU Reference Guide for more information about volume cache options.
Transportable CompactFlash
During a power loss or array controller failure, data stored in cache is saved off to non-volatile memory
(CompactFlash). The data is then written to disk after the issue is corrected. To protect against writing
incomplete data to disk, the image stored on the CompactFlash is verified before committing to disk.
1 Power supplies (AC shown)
2 I/O module A
3 I/O module B
4 Disabled button (used by engineering only)
5 Service port (used by service personnel only)
6 SAS In port
7 SAS Out port
The CompactFlash card is located at the midplane-facing end of the controller module as shown below.
Figure 8 MSA 2040 CompactFlash card (midplane-facing rear view of the controller module; the card is labeled "Do not remove. Used for cache recovery only.")
In single-controller configurations, if the controller has failed or does not start, and the Cache Status LED is
on or blinking, the CompactFlash will need to be transported to a replacement controller to recover data
not flushed to disk (see "Controller failure in a single-controller configuration" (page 59) for more
information).
CAUTION: The CompactFlash card should only be removed for transportable purposes. To preserve the
existing data stored in the CompactFlash, you must transport the CompactFlash from the failed controller to
the replacement controller using a procedure outlined in the HP MSA Controller Module Replacement
Instructions shipped with the replacement controller module. Failure to use this procedure will result in the
loss of data stored in the cache module. The CompactFlash must stay with the same enclosure. If the
CompactFlash is used/installed in a different enclosure, data loss/data corruption will occur.
IMPORTANT: In dual controller configurations featuring one healthy partner controller, there is no need to
transport failed controller cache to a replacement controller because the cache is duplicated between the
controllers (subject to volume write optimization setting).
Supercapacitor pack
To protect RAID controller cache in case of power failure, MSA 2040 controllers are equipped with
supercapacitor technology, in conjunction with CompactFlash memory, built into each controller module to
provide extended cache memory backup time. The supercapacitor pack provides energy for backing up
unwritten data in the write cache to the CompactFlash in the event of a power failure. Unwritten data in
CompactFlash memory is automatically committed to disk media when power is restored. While the cache
is being maintained by the supercapacitor, the Cache Status LED flashes at a rate of 1/10 second on and
9/10 second off.
Upgrading to MSA 2040
For information about upgrading components for use with MSA 2040 controllers, refer to: Upgrading to
the HP MSA 2040.
3 Installing the enclosures
Installation checklist
The following table outlines the steps required to install the enclosures and initially configure the system. To
ensure a successful installation, perform the tasks in the order they are presented.
Table 1 Installation checklist
Step 1. Install the controller enclosure and optional drive enclosures in the rack, and attach ear caps. See the racking instructions poster.
Step 2. Connect the controller enclosure and LFF/SFF drive enclosures. See "Connecting controller and drive enclosures" (page 20).
Step 3. Connect power cords. See the quick start instructions.
Step 4. Test enclosure connections. See "Testing enclosure connections" (page 28).
Step 5. Install required host software. See "Host system requirements" (page 33).
Step 6. Connect data hosts. See "Connecting the enclosure to data hosts" (page 33). If using the optional Remote Snap feature, also see "Connecting two storage systems to replicate volumes" (page 38).
Step 7. Connect remote management hosts. See "Connecting remote management hosts" (page 38).
Step 8. Obtain IP values and set management port IP properties on the controller enclosure. See "Obtaining IP values" (page 45), and see Connecting to the controller CLI port (Linux and Windows topics).
Step 9. Perform initial configuration tasks¹:
  • Sign in to the web-based Storage Management Utility (SMU). See “Getting Started” in the HP MSA 2040 SMU Reference Guide.
  • Initially configure and provision the storage system using the SMU.² See the “Configuring the System” and “Provisioning the System” topics (SMU Reference Guide or online help).
1. The SMU is introduced in "Accessing the SMU" (page 51). See the SMU Reference Guide or online help for additional information.
2. If the systems are cabled for replication and licensed to use the Remote Snap feature, you can use the Replication Setup Wizard to prepare to replicate an existing volume to another vdisk. See the SMU Reference Guide for additional information.
FDE considerations
The Full Disk Encryption feature available via the management interfaces requires use of self-encrypting
drives (SED) which are also referred to as FDE-capable disk drive modules. When installing FDE-capable
disk drive modules, follow the same procedures for installing disks that do not support FDE. The exception
occurs when you move FDE-capable disk drive modules for one or more disk groups to a different system,
which requires additional steps.
The procedures for using the FDE feature, such as securing the system, viewing disk FDE status, and
clearing and importing keys are performed using the SMU or CLI commands (see the SMU Reference
Guide or CLI Reference Guide for more information).
NOTE: When moving FDE-capable disk drive modules for a disk group, stop I/O to any volumes in the
disk group before removing the disk drive modules. Follow the “Removing the failed drive” and “Installing
the replacement drive” procedures within the HP MSA Drive Module Replacement Instructions. Import the
keys for the disks so that the disk content becomes available.
While replacing or installing FDE-capable disk drive modules, consider the following:
• If you are installing FDE-capable disk drive modules that do not have keys into a secure system, the
system will automatically secure the disks after installation. Your system will associate its existing key
with the disks, and you can transparently use the newly-secured disks.
• If the FDE-capable disk drive modules originate from another secure system, and contain that system’s
key, the new disks will have the Secure, Locked status. The data will be unavailable until you enter the
passphrase for the other system to import its key. Your system will then recognize the metadata of the
disk groups and incorporate it. The disks will have the status of Secure, Unlocked and their contents will
be available.
• To view the FDE status of disks, use the SMU or the show fde-state CLI command.
• To import a key and incorporate the foreign disks, use the SMU or the set fde-import-key CLI
command.
NOTE: If the FDE-capable disks contain multiple keys, you will need to perform the key importing
process for each key to make the content associated with each key become available.
• If you do not want to retain the disks’ data, you can repurpose the disks. Repurposing disks deletes all
disk data, including lock keys, and associates the current system’s lock key with the disks.
To repurpose disks, use the SMU or the set disk CLI command.
• You need not secure your system to use FDE-capable disks. If you install all FDE-capable disks into a
system that is not secure, they will function exactly like disks that do not support FDE. As such, the data
they contain will not be encrypted. If you decide later that you want to secure the system, all of the disks
must be FDE-capable.
• If you install a disk module that does not support FDE into a secure system, the disk will have the
Unusable status and will be unavailable for use.
If you are re-installing your FDE-capable disk drive modules as part of the process to replace the chassis
FRU, you must insert the original disks and re-enter their FDE passphrase.
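The FDE-related CLI commands named above can be combined into a short session such as the sketch below. The passphrase value, disk identifier, and the repurpose form of the set disk command are illustrative assumptions; see the HP MSA 2040 CLI Reference Guide for the exact parameters.
  # FDE sketch using the commands named in this section (values are placeholders).
  # View the FDE state of the system and its disks:
  ssh manage@192.168.0.10 'show fde-state'
  # Import the lock key from the disks' originating secure system:
  ssh manage@192.168.0.10 'set fde-import-key passphrase "original-system-passphrase"'
  # Repurpose a disk, deleting all data and lock keys on it:
  ssh manage@192.168.0.10 'set disk 1.12 repurpose'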
Connecting controller and drive enclosures
MSA 2040 controller enclosures support up to eight enclosures (including the controller enclosure). You
can cable drive enclosures of the same type or of mixed LFF/SFF model type.
The firmware supports both straight-through and fault-tolerant SAS cabling. Fault-tolerant cabling allows
any drive enclosure to fail—or be removed—while maintaining access to other enclosures. Fault tolerance
and performance requirements determine whether to optimize the configuration for high availability or
high performance when cabling. MSA 2040 controller enclosures support 6 Gbit/s internal disk drive
speeds, together with 6 Gbit/s (SAS2.0) expander link speeds. When connecting multiple drive
enclosures, use fault-tolerant cabling to ensure the highest level of fault tolerance.
For example, the illustration on the left in Figure 11 (page 23) shows controller module 1A connected to
expansion module 2A, with a chain of connections cascading down (blue). Controller module 1B is
connected to the lower expansion module (5B) of the last drive enclosure, with connections moving in
the opposite direction (green).
Connecting the MSA 2040 controller to the SFF drive enclosure
The SFF D2700 25-drive enclosure, supporting 6 Gb internal disk drive and expander link speeds, can be
attached to an MSA 2040 controller enclosure using supported mini-SAS to mini-SAS cables of 0.5 m
(1.64') to 2 m (6.56') length [see Figure 10 (page 22)].
Connecting the MSA 2040 controller to the LFF drive enclosure
The LFF MSA 2040 6 Gb 3.5"12-drive enclosure, supporting 6 Gb internal disk drive and expander link
speeds, can be attached to an MSA 2040 controller enclosure using supported mini-SAS to mini-SAS
cables of 0.5 m (1.64') to 2 m (6.56') length [see Figure 10 (page 22)].
Connecting the MSA 2040 controller to mixed model drive enclosures
MSA 2040 controllers support cabling of 6 Gb SAS link-rate LFF and SFF expansion modules—in mixed
model fashion—as shown in Figure 13 (page 25), and further described in the HP MSA 2040 Cable
Configuration Guide; the HP MSA 2040 Quick Start Instructions; the QuickSpecs; and HP white papers
(listed below).
Cable requirements for MSA 2040 enclosures
IMPORTANT:
• When installing SAS cables to expansion modules, use only supported mini-SAS x4 cables with
SFF-8088 connectors supporting your 6 Gb application.
• Mini-SAS to mini-SAS 0.5 m (1.64') cables are used to connect cascaded enclosures in the rack.
• See the QuickSpecs for information about which cables are provided with your MSA 2040 products.
http://www.hp.com/support/msa2040/QuickSpecs
• If additional or longer cables are required, they must be ordered separately (see relevant MSA 2040
QuickSpecs or P2000 G3 QuickSpecs for your products).
• The maximum expansion cable length allowed in any configuration is 2 m (6.56').
• Cables required, if not included, must be separately purchased.
• When adding more than two drive enclosures, you may need to purchase additional 1 m or 2 m
cables, depending upon number of enclosures and cabling method used:
• Spanning 3, 4, or 5 drive enclosures requires 1 m (3.28') cables.
• Spanning 6 or 7 drive enclosures requires 2 m (6.56') cables.
• See the QuickSpecs (link provided above) regarding information about cables supported for host
connection:
• Qualified Fibre Channel SFP and cable options
• Qualified 10GbE iSCSI SFP and cable options
• Qualified 1 Gb RJ-45 SFP and cable options
• Qualified HD mini-SAS cable options
For additional information concerning cabling of MSA 2040 controllers and D2700 drive enclosures, visit:
http://www.hp.com/support/msa2040
Browse for the following reference documents:
• HP MSA 2040 Cable Configuration Guide
• HP Remote Snap technical white paper
• HP MSA 2040 best practices
NOTE: For clarity, the schematic illustrations of controller and expansion modules shown in this section
provide only relevant details such as expansion ports within the module face plate outline. For detailed
illustrations showing all components, see "Controller enclosure—rear panel layout" (page 14).
Figure 9 Cabling connections between the MSA 2040 controller and a single drive enclosure
The figure above shows examples of the MSA 2040 controller enclosure—equipped with a single
controller module—cabled to a single drive enclosure equipped with a single expansion module. The
empty I/O module slot in each of the enclosures is covered with an IOM blank to ensure sufficient air flow
during enclosure operation. The remaining illustrations in the section feature enclosures equipped with dual
IOMs.
IMPORTANT: If the MSA 2040 controller enclosure is configured with a single controller module, the
controller module must be installed in the upper slot, and an I/O module blank must be installed in the
lower slot (shown above). This configuration is required to allow sufficient air flow through the enclosure
during operation.
Figure 10 Cabling connections between the MSA 2040 controller and a single drive enclosure
Figure 11 Cabling connections between MSA 2040 controllers and LFF drive enclosures
The diagram at left (above) shows fault-tolerant cabling of a dual-controller enclosure cabled to MSA 2040
6 Gb 3.5" 12-drive enclosures featuring dual-expansion modules. Controller module 1A is connected to
expansion module 2A, with a chain of connections cascading down (blue). Controller module 1B is
connected to the lower expansion module (5B) of the last drive enclosure, with connections moving in the
opposite direction (green). Fault-tolerant cabling allows any drive enclosure to fail—or be removed—while
maintaining access to other enclosures.
The diagram at right (above) shows the same storage components connected using straight-through
cabling. Using this method, if a drive enclosure fails, the enclosures that follow the failed enclosure in the
chain are no longer accessible until the failed enclosure is repaired or replaced.
Figure 12 Cabling connections between MSA 2040 controllers and SFF drive enclosures
The figure above provides sample diagrams reflecting cabling of MSA 2040 controller enclosures and
D2700 6 Gb drive enclosures.
The diagram at left shows fault-tolerant cabling of a dual-controller enclosure and D2700 6 Gb drive
enclosures featuring dual-expansion modules. Controller module 1A is connected to expansion module
2A, with a chain of connections cascading down (blue). Controller module 1B is connected to the lower
expansion module (5B) of the last drive enclosure, with connections moving in the opposite direction
(green). Fault-tolerant cabling allows any drive enclosure to fail—or be removed—while maintaining
access to other enclosures.
The diagram at right shows the same storage components connected using straight-through cabling. Using
this method, if a drive enclosure fails, the enclosures that follow the failed enclosure in the chain are no
longer accessible until the failed enclosure is repaired or replaced.
Figure 13 Cabling connections between MSA 2040 controllers and drive enclosures of mixed model type
The figure above provides sample diagrams reflecting cabling of MSA 2040 controller enclosures and
supported mixed model drive enclosures. In this example, the SFF drive enclosures follow the LFF drive
enclosures. Given that both drive enclosure models use 6 Gb SAS link-rate and SAS2.0 expanders, they
can be ordered in desired sequence within the array, following the controller enclosure.
The diagram at left shows fault-tolerant cabling of a dual-controller enclosure and mixed model drive
enclosures, and the diagram at right shows the same storage components connected using straight-through
cabling.
Figure 14 Fault-tolerant cabling connections showing maximum number of enclosures of same type
The figure above provides sample diagrams reflecting fault-tolerant cabling of a maximum number of
supported MSA 2040 enclosures. The diagram at left shows fault-tolerant cabling of an MSA 2040
controller enclosure and seven LFF drive enclosures; whereas the diagram at right shows fault-tolerant
cabling of an MSA 2040 controller enclosure and seven D2700 drive enclosures.
Note: The maximum number of supported drive enclosures (7) may require purchase of additional longer
cables.
Figure 15 Cabling connections showing maximum enclosures of mixed model type
The illustration above shows a sample maximum enclosures configuration. The diagram shows mixed
model drive enclosures within the dual-controller array using fault-tolerant cabling. In this example, the LFF
drive enclosures follow the SFF drive enclosures. Given that both drive enclosure models use 6 Gb SAS
link-rate and SAS2.0 expanders, they can be ordered in desired sequence within the array, following the
controller enclosure. MSA 2040 controller enclosures support up to eight enclosures (including the
controller enclosure) for adding storage.
IMPORTANT: For comprehensive configuration options and associated illustrations, refer to the HP MSA
2040 Cable Configuration Guide.
Testing enclosure connections
NOTE: Once the power-on sequence for enclosures succeeds, the storage system is ready to be
connected to hosts, as described in "Connecting the enclosure to data hosts" (page 33).
Powering on/powering off
Before powering on the enclosure for the first time:
• Install all disk drives in the enclosure so the controller can identify and configure them at power-up.
• Connect the cables and power cords to the enclosures as explained in the quick start instructions.
NOTE: MSA 2040 controller enclosures and drive enclosures do not have power switches (they are
switchless). They power on when connected to a power source, and they power off when disconnected.
• Generally, when powering up, make sure to power up the enclosures and associated data host in the
following order:
• Drive enclosures first
This ensures that disks in each drive enclosure have enough time to completely spin up before being
scanned by the controller modules within the controller enclosure.
While enclosures power up, their LEDs blink. After the LEDs stop blinking—if no LEDs on the front
and back of the enclosure are amber—the power-on sequence is complete, and no faults have been
detected. See "LED descriptions" (page 79) for descriptions of LED behavior.
• Controller enclosure next
Depending upon the number and type of disks in the system, it may take several minutes for the
system to become ready.
• Data host last (if powered down for maintenance purposes)
TIP: Generally, when powering off, you will reverse the order of steps used for powering on.
Power cycling procedures vary according to the type of power supply unit included with the enclosure. For
controller and drive enclosures configured with the switchless AC power supplies, refer to the procedure
described under AC power supply below. For procedures pertaining to a) controller enclosures configured
with DC power supplies, or b) previously installed drive enclosures featuring power switches, see "DC and
AC power supplies equipped with a power switch" (page 29).
IMPORTANT: See "Power cord requirements" (page 92) and the QuickSpecs for more information about
power cords supported by MSA 2040 enclosures.
AC power supply
Enclosures equipped with switchless power supplies rely on the power cord for power cycling. Connecting
the cord from the power supply power cord connector to the appropriate power source facilitates power
on, whereas disconnecting the cord from the power source facilitates power off.
Figure 16 AC power supply
To power on the system:
1. Obtain a suitable AC power cord for each AC power supply that will connect to a power source.
2. Plug the power cord into the power cord connector on the back of the drive enclosure (see Figure 16).
Plug the other end of the power cord into the rack power source. Wait several seconds to allow the
disks to spin up.
Repeat this sequence for each power supply within each drive enclosure.
3. Plug the power cord into the power cord connector on the back of the controller enclosure (see
Figure 16). Plug the other end of the power cord into the rack power source.
Repeat the sequence for the controller enclosure’s other switchless power supply.
To power off the system:
1. Stop all I/O from hosts to the system [see "Stopping I/O" (page 55)].
2. Shut down both controllers using either method described below:
• Use the SMU to shut down both controllers, as described in the online help and web-posted HP
MSA 2040 SMU Reference Guide.
Proceed to step 3.
• Use the CLI to shut down both controllers, as described in the HP MSA 2040 CLI Reference Guide
(a brief example follows these steps).
3. Disconnect the power cord male plug from the power source.
4. Disconnect the power cord female plug from the power cord connector on the power supply.
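For reference, the CLI shutdown in step 2 can be performed with a single command. This is a minimal
sketch assuming you are signed in at the CLI prompt; wait for the CLI to confirm the shutdown before
continuing with step 3:
# shutdown both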
NOTE: Power cycling for enclosures equipped with a power switch is described below.
DC and AC power supplies equipped with a power switch
DC power supplies and legacy AC power supplies are shown below. Each model has a power switch.
Figure 17 DC and AC power supplies with power switch
Connect power cable to DC power supply
Locate two DC power cables that are compatible with your controller enclosure.
Figure 18 DC power cable featuring sectioned D-shell and lug connectors
See Figure 18 and the illustration at left (in Figure 17) when performing the following steps:
1. Verify that the enclosure power switches are in the Off position.
2. Connect a DC power cable to each DC power supply using the D-shell connector.
Use the UP arrow on the connector shell to ensure proper positioning (see the left side
view of the D-shell connector in Figure 18).
3. Tighten the screws at the top and bottom of the shell, applying a torque between 1.7
N-m (15 in-lb) and 2.3 N-m (20 in-lb), to securely attach the cable to the DC power
supply module.
4. To complete the DC connection, secure the other end of each cable wire component
of the DC power cable to the target DC power source.
Check the three individual DC cable wire labels before connecting each cable wire lug to its power
source. One cable wire is labeled ground (GND) and the other two wires are labeled positive (+L) and
negative (-L), respectively (shown in Figure 18 above).
CAUTION: Connecting to a DC power source outside the designated -48V DC nominal range
(-36V DC to -72V DC) may damage the enclosure.
Connect power cord to legacy AC power supply
Obtain two suitable AC power cords: one for each AC power supply that will connect to a separate power
source. See the illustration at right [in Figure 17 (page 29)] when performing the following steps:
1. Verify that the enclosure power switches are in the Off position.
2. Identify the power cord connector on the power supply, and locate the target power source.
3. For each power supply, perform the following actions:
a. Plug one end of the cord into the power cord connector on the power supply.
b. Plug the other end of the power cord into the rack power source.
4. Verify connection of primary power cords from the rack to separate external power sources.
Power cycle
To power on the system:
1. Power up drive enclosure(s).
Press the power switches at the back of each drive enclosure to the On position. Allow several seconds
for the disks to spin up.
2. Power up the controller enclosure next.
Press the power switches at the back of the controller enclosure to the On position. Allow several
seconds for the disks to spin up.
To power off the system:
1. Stop all I/O from hosts to the system [see "Stopping I/O" (page 55)].
2. Shut down both controllers using either method described below:
• Use the SMU to shut down both controllers, as described in the online help and HP MSA 2040
SMU Reference Guide.
Proceed to step 3.
• Use the CLI to shut down both controllers, as described in the HP MSA 2040 CLI Reference Guide.
3. Press the power switches at the back of the controller enclosure to the Off position.
4. Press the power switches at the back of each drive enclosure to the Off position.
4 Connecting hosts
Host system requirements
Data hosts connected to HP MSA 2040 arrays must meet the requirements described herein. Depending on
your system configuration, data host operating systems may require multi-pathing support.
If fault-tolerance is required, then multi-pathing software may be required. Host-based multi-path software
should be used in any configuration where two logical paths between the host and any storage volume
may exist at the same time. This would include most configurations where there are multiple connections to
the host or multiple connections between a switch and the storage.
• Use native Microsoft MPIO DSM support with Windows Server 2008 and Windows Server 2012. Use
either Server Manager or the command-line interface (the mpclaim CLI tool) to perform the installation
(a hedged example follows this list).
Refer to the following web sites for information about using Windows native MPIO DSM:
http://support.microsoft.com
http://technet.microsoft.com (search the site for “multipath I/O overview”)
• Use the HP Multi-path Device Mapper for Linux Software with Linux servers. To download the
appropriate device mapper multi-path enablement kit for your specific enterprise Linux operating
system, go to http://www.hp.com/storage/spock.
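As an illustration of the command-line path on Windows, the following mpclaim invocation claims all
MPIO-capable devices for the Microsoft DSM. It is a generic example rather than an HP-specific procedure;
confirm the options against Microsoft's MPIO documentation before use:
mpclaim -r -i -a ""
The -r option reboots the server automatically to complete the claim; substitute -n to defer the reboot.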
Connecting the enclosure to data hosts
A host identifies an external port to which the storage system is attached. The external port may be a port
in an I/O adapter (such as an FC HBA) in a server. Cable connections vary depending on configuration.
Common cable configurations are shown in this section. A list of supported configurations resides on the
MSA 2040 manuals site at http://www.hp.com/support/msa2040/manuals:
• HP MSA 2040 Quick Start Instructions
• HP MSA 2040 Cable Configuration Guide
These documents provide installation details and describe newly-supported direct attach, switch-connect,
and storage expansion configuration options for MSA 2040 products. For specific information about
qualified host cabling options, see "Cable requirements for MSA 2040 enclosures" (page 21).
Any number or combination of LUNs can be shared among a maximum of 64 host ports (initiators),
provided the total does not exceed 1,024 LUNs per MSA 2040 SAN storage system (single-controller or
dual-controller configuration).
MSA 2040 SAN
MSA 2040 SAN models use Converged Network Controller technology, allowing you to select the desired
host interface protocol(s) from the available FC or iSCSI host interface protocols supported by the system.
The small form-factor pluggable (SFP transceiver or SFP) connectors used in host ports are further described
in the subsections below. Also see "MSA 2040 SAN" (page 11) for more information concerning use of
these host ports.
IMPORTANT: Controller modules are not shipped with pre-installed SFPs. Within your product kit, locate
the qualified SFP options, and install them into the host ports. See "Install an SFP transceiver" (page 95).
IMPORTANT: Use the set host-port-mode CLI command to set the host interface protocol for
MSA 2040 SAN host ports using qualified SFP options. MSA 2040 SAN models ship with host ports
configured for FC. When connecting host ports to iSCSI hosts, you must use the CLI (not the SMU) to
specify which ports will use iSCSI. It is best to do this before inserting the iSCSI SFPs into the host ports.
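A minimal sketch of this step, assuming all four host ports will carry iSCSI (the protocol keywords shown
are assumptions; confirm the exact values in the set host-port-mode topic of the CLI Reference Guide):
# set host-port-mode iSCSI
To run FC on ports 1 and 2 and iSCSI on ports 3 and 4, a combined mode such as FC-and-iSCSI would be
specified instead.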
NOTE: MSA 2040 SAN controller enclosures support the optionally-licensed Remote Snap replication
feature. When using the web-browser interface, replication sets must be created, viewed, and managed
using the SMU v2. Replication sets can also be created and viewed using CLI commands.
Fibre Channel protocol
The MSA 2040 SAN controller enclosures support one or two controller modules using the Fibre Channel
interface protocol for host connection. Each controller module provides four host ports designed for use
with an FC SFP supporting data rates up to 16 Gbit/s. When configured with FC SFPs, MSA 2040 SAN
controller enclosures can also be cabled to support the optionally-licensed Remote Snap replication feature
via the FC ports.
The MSA 2040 SAN controller supports Fibre Channel Arbitrated Loop (public or private) or point-to-point
topologies. Loop protocol can be used in a physical loop or in a direct connection between two devices.
Point-to-point protocol is used to connect to a fabric switch. See the set host-parameters command
within the HP MSA 2040 CLI Reference Guide for command syntax and details about connection mode
parameter settings relative to supported link speeds.
Fibre Channel ports are used in either of two capacities:
• To connect two storage systems through a Fibre Channel switch for use of Remote Snap replication.
• For attachment to FC hosts directly, or through a switch used for the FC traffic.
The first usage option requires valid licensing for the Remote Snap replication feature, whereas the second
option requires that the host computer supports FC and optionally, multipath I/O.
TIP: Use the SMU Configuration Wizard to set FC port speed. Within the SMU Reference Guide, see
“Using the Configuration Wizard” and scroll to FC port options. Use the CLI command set
host-parameters to set FC port options, and use the show ports command to view information
about host ports.
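For reference, a hedged sketch of the CLI alternative; the parameter names and values shown are
assumptions about typical usage, so confirm the exact syntax in the set host-parameters topic of the
CLI Reference Guide:
# set host-parameters speed 16g ports 1,2
# show ports
The show ports output lists each host port together with its configured protocol, speed, and link status.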
10GbE iSCSI protocol
The MSA 2040 SAN controller enclosures support one or two controller modules using the Internet SCSI
interface protocol for host connection. Each controller module provides four host ports designed for use
with a 10GbE iSCSI SFP supporting data rates up to 10 Gbit/s, using either one-way or mutual CHAP
(Challenge-Handshake Authentication Protocol).
TIP: See the “Configuring CHAP” topic in the SMU Reference Guide. Also see the important statement
about CHAP preceding the “Using the Replication Setup Wizard” procedure within that guide.
TIP: Use the SMU Configuration Wizard to set iSCSI port options. Within the SMU Reference Guide, see
“Using the Configuration Wizard” and scroll to iSCSI port options. Use the CLI command set
host-parameters to set iSCSI port options, and use the show ports command to view information
about host ports.
The 10GbE iSCSI ports are used in either of two capacities:
• To connect two storage systems through a switch for use of Remote Snap replication.
• For attachment to 10GbE iSCSI hosts directly, or through a switch used for the 10GbE iSCSI traffic.
The first usage option requires valid licensing for the Remote Snap replication feature, whereas the second
option requires that the host computer supports Ethernet, iSCSI, and optionally, multipath I/O.
1 Gb iSCSI protocol
The MSA 2040 SAN controller enclosures support one or two controller modules using the Internet SCSI
interface protocol for host port connection. Each controller module provides four iSCSI host ports
configured with an RJ-45 SFP supporting data rates up to 1 Gbit/s, using either one-way or mutual CHAP.
TIP: See the “Configuring CHAP” topic in the SMU Reference Guide. Also see the admonition about
CHAP preceding the “Using the Replication Setup Wizard” procedure within that guide.
TIP: Use the SMU Configuration Wizard to set iSCSI port options. Within the SMU Reference Guide, see
“Using the Configuration Wizard” and scroll to iSCSI port options. Use the CLI command set
host-parameters to set iSCSI port options, and use the show ports command to view information
about host ports.
The 1 Gb iSCSI ports are used in either of two capacities:
• To connect two storage systems through a switch for use of Remote Snap replication.
• For attachment to 1 Gb iSCSI hosts directly, or through a switch used for the 1 Gb iSCSI traffic.
The first usage option requires valid licensing for the Remote Snap replication feature, whereas the second
option requires that the host computer supports Ethernet, iSCSI, and optionally, multipath I/O.
MSA 2040 SAS
MSA 2040 SAS controller enclosures support one or two controller modules using the Serial Attached SCSI
(Small Computer System Interface) interface protocol for host connection. Each controller module provides
four HD mini-SAS host ports supporting data rates up to 12 Gbit/s. HD mini-SAS host ports connect to hosts
or switches; they are not used for replication.
Connecting direct attach configurations
The MSA 2040 controller enclosures support up to eight direct-connect server connections, four per
controller module. Connect appropriate cables from the server HBAs to the controller host ports as
described below, and shown in the following illustrations.
To connect the MSA 2040 SAN controller to a server or switch—using FC SFPs in controller ports—select
Fibre Channel cables supporting 4/8/16 Gb data rates that are compatible with the host port SFP
connector (see the QuickSpecs). Such cables are also used for connecting a local storage system to a
remote storage system via a switch, to facilitate use of the optional Remote Snap replication feature.
To connect the MSA 2040 SAN controller to a server or switch—using 10GbE iSCSI SFPs in controller
ports—select the appropriate qualified 10GbE SFP option (see the QuickSpecs). Such cables are also used
for connecting a local storage system to a remote storage system via a switch, to facilitate use of the
optional Remote Snap replication feature.
To connect the MSA 2040 SAN controller to a server or switch—using the 1 Gb SFPs in controller
ports—select the appropriate qualified RJ-45 SFP option (see the QuickSpecs). Such cables are also used
for connecting a local storage system to a remote storage system via a switch, to facilitate use of the
optional Remote Snap replication feature.
To connect the MSA 2040 SAS controller to a server or switch—using the SFF-8644 dual HD mini-SAS
ports—select the appropriate qualified HD mini-SAS cable option (see the QuickSpecs). A qualified
SFF-8644 to SFF-8644 cable option is used for connecting to a 12 Gbit/s enabled host; whereas a
qualified SFF-8644 to SFF-8088 cable option is used for connecting to a 6 Gbit/s host.
NOTE: The MSA 2040 SAN diagrams that follow use a single representation for each cabling example,
because the port locations and labeling are identical for each of the three interchangeable SFPs supported
by the system.
Within each cabling connection category, the MSA 2040 SAS model is shown beneath the MSA 2040
SAN model.
Single-controller configurations
One server/one HBA/single path
Figure 19 Connecting hosts: direct attach—one server/one HBA/single path
Dual-controller configurations
One server/one HBA/dual path
Figure 20 Connecting hosts: direct attach—one server/one HBA/dual path
Two servers/one HBA per server/dual path
Figure 21 Connecting hosts: direct attach—two servers/one HBA per server/dual path
Four servers/one HBA per server/dual path
Figure 22 Connecting hosts: direct attach—four servers/one HBA per server/dual path
Connecting switch attach configurations
Dual controller configuration
Two servers/two switches
Figure 23 Connecting hosts: switch attach—two servers/two switches
Four servers/multiple switches/SAN fabric
Figure 24 Connecting hosts: switch attach—four servers/multiple switches/SAN fabric
Connecting remote management hosts
The management host directly manages systems out-of-band over an Ethernet network.
1. Connect an RJ-45 Ethernet cable to the network management port on each MSA 2040 controller.
2. Connect the other end of each Ethernet cable to a network that your management host can access
(preferably on the same subnet).
NOTE: Connections to this device must be made with shielded cables—grounded at both ends—with
metallic RFI/EMI connector hoods, in order to maintain compliance with NEBS and FCC Rules and
Regulations.
Connecting two storage systems to replicate volumes
Remote Snap replication is a licensed disaster-recovery feature that performs asynchronous (batch)
replication of block-level data from a volume on a local (primary) storage system to a volume that can be
on the same system or a second, independent system. The second system can be located at the same site
as the first system, or at a different site.
The two associated standard volumes form a replication set, and only the primary volume (source of data)
can be mapped for access by a server. Both systems must be licensed to use Remote Snap, and must be
connected through switches to the same fabric or network (i.e., no direct attach). The server accessing the
replication set need only be connected to the primary system. If the primary system goes offline, a
connected server can access the replicated data from the secondary system.
Replication configuration possibilities are many, and can be cabled—in switch attach fashion—to support
MSA 2040 SAN systems on the same network, or on different networks (MSA 2040 SAS systems do not
support replication). As you consider the physical connections of your system—specifically connections for
replication—keep several important points in mind:
• Ensure that controllers have connectivity between systems, whether local or remote.
• Assign specific ports for replication whenever possible. By specifically assigning ports available for
replication, you free the controller from scanning and assigning the ports at the time replication is
performed.
• For remote replication, ensure that all ports assigned for replication are able to communicate
appropriately with the remote replication system (see verify remote-link in the CLI Reference Guide for
more information).
• Allow two ports to perform replication. This permits the system to balance the load across those ports as
I/O demands rise and fall. On dual-controller enclosures, if some of the volumes replicated are owned
by controller A and others are owned by controller B, then allow at least one port for replication on
each controller module—and possibly more than one port per controller module—depending on
replication traffic load.
• Be sure of the desired link type before creating the replication set, because you cannot change the
replication link type after creating the replication set.
• For the sake of system security, do not unnecessarily expose the controller module network port to an
external network connection.
Conceptual cabling examples are provided for cabling on the same network and for cabling across
physically separate networks. Both single-controller and dual-controller MSA 2040 SAN environments
support replication.
IMPORTANT: Remote Snap must be licensed on all systems configured for replication, and the controller
module firmware version must be compatible on all systems licensed for replication.
NOTE: Systems must be correctly cabled before performing replication. See the following documents for
more information about using Remote Snap to perform replication tasks:
• HP Remote Snap technical white paper
• HP MSA 2040 Best Practices
• HP MSA 2040 SMU Reference Guide
• HP MSA 2040 CLI Reference Guide
• HP MSA Event Descriptions Reference Guide
• HP MSA 2040 Cable Configuration Guide
To access user documents, see the MSA 2040 manuals site:
http://www.hp.com/support/msa2040/manuals.
To access a technical white paper about Remote Snap replication software, navigate to the link shown:
http://h20195.www2.hp.com/V2/GetDocument.aspx?docname=4AA1-0977ENW&cc=us&lc=en.
Cabling for replication
This section shows example replication configurations for MSA 2040 SAN controller enclosures. The
following illustrations provide conceptual examples of cabling to support Remote Snap replication. Blue
cables show I/O traffic and green cables show replication traffic.
NOTE: A simplified version of the MSA 2040 SAN controller enclosure rear panel is used in cabling
illustrations to portray either the FC or iSCSI host interface protocol. The rear panel layouts for the three
configurations are identical, only the external connectors used in the host interface ports differ.
Once the MSA 2040 systems are physically cabled, see the SMU Reference Guide or online help for
information about configuring, provisioning, and using the optional Remote Snap feature.
NOTE: See the HP MSA 2040 SMU Reference Guide for more information about using Remote Snap to
perform replication tasks. The SMU Replication Setup Wizard guides you through replication setup.
Host ports and replication
MSA 2040 SAN controller modules can use qualified SFP options of the same type, or they can use a
combination of qualified SFP options supporting different interface protocols. If you use a combination of
different protocols, then host ports 1 and 2 are set to FC (either both 16 Gbit/s or both 8 Gbit/s), and host
ports 3 and 4 must be set to iSCSI (either both 10GbE or both 1 Gb). Each host port can perform I/O or
replication. In combination environments, one interface—for example FC—might be used for I/O, and the
other interface type—10GbE or 1 Gb iSCSI—might be used for replication.
Single-controller configuration
One server/single network/two switches
The diagram below shows the rear panel of two MSA 2040 SAN controller enclosures with both I/O and
replication occurring on the same network. Each enclosure is equipped with a single controller module.
The controller modules can use qualified SFP options of the same type, or they can use a combination of
qualified SFP options supporting different interface protocols.
Figure 25 Connecting two storage systems for Remote Snap: one server/two switches/one location
Host ports used for replication must be connected to at least one switch. For optimal protection, use two
switches, with one replication port from each controller connected to the first switch, and the other
replication port from each controller connected to the second switch. Using two switches in tandem avoids
the potential single point of failure inherent to using a single switch.
Dual-controller configuration
Each of the following diagrams shows the rear panel of two MSA 2040 SAN controller enclosures
equipped with dual-controller modules. The controller modules can use qualified SFP options of the same
type, or they can use a combination of qualified SFP options supporting different interface protocols.
Multiple servers/single network
The diagram below shows the rear panel of two MSA 2040 SAN controller enclosures with both I/O and
replication occurring on the same physical network.
Figure 26 Connecting two storage systems for Remote Snap: multiple servers/one switch/one location
The diagram below shows host interface connections and replication, with I/O and replication occurring
on different networks. For optimal protection, use two switches. Connect two ports from each controller
module to the first switch to facilitate I/O traffic, and connect two ports from each controller module to the
second switch to facilitate replication. Using two switches in tandem avoids the potential single point of
failure inherent to using a single switch; however, if one switch fails, either I/O or replication will fail,
depending on which switch fails.
Figure 27 Connecting two storage systems for Remote Snap: multiple servers/switches/one location
Virtual Local Area Network (VLAN) and zoning can be employed to provide separate networks for iSCSI
and FC, respectively. Whether using a single switch or multiple switches for a particular interface, you can
create a VLAN or zone for I/O and a VLAN or zone for replication to isolate I/O traffic from replication
traffic. Since each switch would include both VLANs or zones, the configuration would function as multiple
networks.
Multiple servers/different networks/multiple switches
The diagram below shows the rear panel of two MSA 2040 SAN controller enclosures with both I/O and
replication occurring on different networks.
Figure 28 Connecting two storage systems for Remote Snap: multiple servers/switches/two locations
The diagram below also shows the rear-panel of two MSA 2040 SAN controller enclosures with both I/O
and replication occurring on different networks. This diagram represents two branch offices cabled to
enable disaster recovery and backup. In case of failure at either the local site or the remote site, you can
fail over the application to the available site.
Figure 29 Connecting two storage systems for Remote Snap: multiple servers/SAN fabric/two locations
Although not shown in the preceding cabling examples, you can cable replication-enabled MSA 2040
SAN systems and compatible P2000 G3 systems—via switch attach—for performing replication tasks.
The key in Figure 29 identifies the "A" and "B" file servers and application servers at the two peer sites.
Data restore modes shown include replicating back over the WAN and replicating via physical media
transfer; failover modes include VMware and Hyper-V failover to servers.
Updating firmware
After installing the hardware and powering on the storage system components for the first time, verify that
the controller modules, expansion modules, and disk drives are using the current firmware release.
• If using the SMU v3, in the System topic, select Action > Update Firmware.
The Update Firmware panel opens. The Update Controller Modules tab shows versions of firmware
components currently installed in each controller.
NOTE: The SMU v3 does not provide a check-box for enabling or disabling Partner Firmware Update
for the partner controller. To enable or disable the setting, use the set advanced-settings
command, and set the partner-firmware-upgrade parameter (a brief CLI example follows this
list). See the CLI Reference Guide for more information about command parameter syntax.
• If using the SMU v2, right-click the system in the Configuration View panel, and select Tools Update >
Firmware.
The Update Firmware panel displays the currently installed firmware versions, and enables you to
update them.
Optionally, you can update firmware using FTP (File Transfer Protocol) as described in the MSA 2040 SMU
Reference Guide.
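For reference, a minimal CLI sketch of the Partner Firmware Update setting mentioned in the note above.
The enabled/disabled values are assumptions; confirm them in the set advanced-settings topic of the
CLI Reference Guide:
# set advanced-settings partner-firmware-upgrade enabled
# show advanced-settings
The show advanced-settings command can be used to verify the current setting.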
IMPORTANT: See the “About firmware update” and “Updating firmware” topics within the MSA 2040
SMU Reference Guide before performing a firmware update.
NOTE: To locate and download the latest software and firmware updates for your product, go to
http://www.hp.com/support.
5 Connecting to the controller CLI port
Device description
The MSA 2040 controllers feature a command-line interface port used to cable directly to the controller
and initially set IP addresses, or to perform other configuration tasks. This port employs a mini-USB Type B
form factor and requires a cable, supplied with the controller, plus additional driver support so that a
server or other computer running a Linux or Windows operating system can recognize the controller
enclosure as a connected device. Without this support, the computer might not recognize that a new
device is connected, or might not be able to communicate with it. For Linux computers, no new driver files
are needed, but a Linux configuration file must be created or modified.
For Windows computers, the Windows USB device driver must be downloaded from a CD or HP website,
and installed on the computer that will be cabled directly to the controller command-line interface port.
NOTE: Directly cabling to the CLI port is an out-of-band connection because it communicates outside the
data paths used to transfer information from a computer or network to the controller enclosure.
Preparing a Linux computer before cabling to the CLI port
Although Linux operating systems do not require installation of a device driver, certain parameters must be
provided during driver loading to enable recognition of the MSA 2040 controller enclosures. To load the
Linux device driver with the correct parameters, the following command is required:
modprobe usbserial vendor=0x210c product=0xa4a7 use_acm=1
Optionally, the information can be incorporated into the /etc/modules.conf file.
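As a minimal sketch of that optional step (assuming a distribution that still reads /etc/modules.conf; newer
distributions place equivalent entries in a file under /etc/modprobe.d/), add a line such as:
options usbserial vendor=0x210c product=0xa4a7 use_acm=1
With this entry in place, the parameters are applied automatically whenever the usbserial module loads.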
Downloading a device driver for Windows computers
A Windows USB device driver download is provided for communicating directly with the controller
command-line interface port using a USB cable to connect the controller enclosure and the computer.
NOTE: Access the download from your HP MSA support page at http://www.hp.com/support.
The USB device driver is also available from the Software Support and Documentation CD that shipped
with your product.
Obtaining IP values
One method of obtaining IP values for your system is to use a network management utility to discover “HP
MSA Storage” devices on the local LAN through SNMP. Alternative methods for obtaining IP values for
your system are described in the following subsections.
Setting network port IP addresses using DHCP
In DHCP mode, network port IP address, subnet mask, and gateway values are obtained from a DHCP
server if one is available. If a DHCP server is unavailable, current addressing is unchanged.
1. Look in the DHCP server’s pool of leased addresses for two IP addresses assigned to “HP MSA
Storage.”
2. Use a ping broadcast to try to identify the device through the ARP table of the host (an example follows
these steps).
If you do not have a DHCP server, you will need to ask your system administrator to allocate two IP
addresses, and set them using the command-line interface during initial configuration (described
below).
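The following is a hedged example of step 2 from a Linux management host; the broadcast address shown
is a placeholder for your management subnet's broadcast address, and command options vary by
operating system:
ping -b 10.0.0.255
arp -a
The ping broadcast prompts devices on the subnet to respond so that the host's ARP table is populated;
arp -a then lists the cached IP-to-MAC address entries, among which the two controller management ports
should appear.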
NOTE: For more information, see “Using the Configuration Wizard” and scroll to the network ports topic
within the HP MSA 2040 SMU Reference Guide.
Setting network port IP addresses using the CLI port and cable
You can set network port IP addresses manually using the command-line interface port and cable. If you
have not done so already, you need to enable your system for using the command-line interface port [also
see "Using the CLI port and cable—known issues on Windows" (page 49)].
NOTE: For Linux systems, see "Preparing a Linux computer before cabling to the CLI port" (page 45). For
Windows systems see "Downloading a device driver for Windows computers" (page 45).
Network ports on controller module A and controller module B are configured with the following
factory-default IP settings:
• Management Port IP Address: 10.0.0.2 (controller A), 10.0.0.3 (controller B)
• IP Subnet Mask: 255.255.255.0
• Gateway IP Address: 10.0.0.1
If the default IP addresses are not compatible with your network, you must set an IP address for each
network port using the command-line interface embedded in each controller module. The command-line
interface enables you to access the system using the USB (universal serial bus) communication interface
and terminal emulation software. The USB cable and CLI port support USB version 2.0.
Use the CLI commands described in the steps below to set the IP address for the network port on each
controller module. Once new IP addresses are set, you can change them as needed using the SMU. Be
sure to change the IP address via the SMU before changing the network configuration.
NOTE: Changing IP settings can cause management hosts to lose access to the storage system.
1. From your network administrator, obtain an IP address, subnet mask, and gateway address for
controller A, and another for controller B.
Record these IP addresses so that you can specify them whenever you manage the controllers using the
SMU or the CLI.
2. Use the provided USB cable to connect controller A to a USB port on a host computer. The mini-USB
5-pin male connector plugs into the CLI port as shown in Figure 30 (a generic controller module is shown).
Figure 30 Connecting a USB cable to the CLI port
3. Enable the CLI port for subsequent communication:
• Linux customers should enter the command syntax provided in "Preparing a Linux computer before
cabling to the CLI port" (page 45).
• Windows customers should locate the downloaded device driver described in "Downloading a
device driver for Windows computers" (page 45), and follow the instructions provided for proper
installation.
4. Start and configure a terminal emulator, such as HyperTerminal or VT-100, using the display settings in
Table 2 and the connection settings in Table 3 (also, see the note following this procedure).
Table 2 Terminal emulator display settings

Parameter                 Value
Terminal emulation mode   VT-100 or ANSI (for color support)
Font                      Terminal
Translations              None
Columns                   80

Table 3 Terminal emulator connection settings

Parameter      Value
Connector      COM3 (for example)1,2
Baud rate      115,200
Data bits      8
Parity         None
Stop bits      1
Flow control   None

1 Your server or laptop configuration determines which COM port is used for Disk Array USB Port.
2 Verify the appropriate COM port for use with the CLI.

5. In the terminal emulator, connect to controller A.
6. Press Enter to display the CLI prompt (#).
The CLI displays the system version, MC version, and login prompt:
a. At the login prompt, enter the default user manage.
b. Enter the default password !manage.
If the default user or password—or both—have been changed for security reasons, enter the secure
login credentials instead of the defaults shown above.
NOTE: The following CLI commands enable you to set the management mode to v3 or v2:
• Use set protocols to change the default management mode.
• Use set cli-parameters to change the current management mode for the CLI session.
The system defaults to v3 for new customers and v2 for existing users (see the CLI Reference Guide
for more information).
7. At the prompt, type the following command to set the values you obtained in step 1 for each network
port, first for controller A and then for controller B:
set network-parameters ip address netmask netmask gateway gateway controller a|b
where:
• address is the IP address of the controller
• netmask is the subnet mask
• gateway is the IP address of the subnet router
• a|b specifies the controller whose network parameters you are setting
For example:
# set network-parameters ip 192.168.0.10 netmask 255.255.255.0 gateway
192.168.0.1 controller a
# set network-parameters ip 192.168.0.11 netmask 255.255.255.0 gateway
192.168.0.1 controller b
8. Type the following command to verify the new IP addresses:
show network-parameters
Network parameters, including the IP address, subnet mask, and gateway address are displayed for
each controller.
9. Use the ping command to verify network connectivity.
For example:
# ping 192.168.0.1 (gateway)
Info: Pinging 192.168.0.1 with 4 packets.
Success: Command completed successfully. - The remote computer responded with 4
packets.
10.In the host computer's command window, type the following command to verify connectivity, first for
controller A and then for controller B:
ping controller-IP-address
If you cannot access your system for at least three minutes after changing the IP address, your network
might require you to restart the Management Controller(s) using the CLI. When you restart a
Management Controller, communication with it is temporarily lost until it successfully restarts.
Type the following command to restart the management controller on both controllers:
restart mc both
11. When you are done using the CLI, exit the emulator.
12. Retain the new IP addresses to access and manage the controllers, using either the SMU or the CLI.
NOTE: Using HyperTerminal with the CLI on a Microsoft Windows host:
On a host computer connected to a controller module’s mini-USB CLI port, incorrect command syntax in a
HyperTerminal session can cause the CLI to hang. To avoid this problem, use correct syntax, use a different
terminal emulator, or connect to the CLI using telnet rather than the mini-USB cable.
Be sure to close the HyperTerminal session before shutting down the controller or restarting its Management
Controller. Otherwise, the host’s CPU cycles may rise unacceptably.
If communication with the CLI is disrupted when using an out-of-band cable connection, communication
can sometimes be restored by disconnecting and reattaching the mini-USB cable as described in step 2 on
page 46.
Using the CLI port and cable—known issues on Windows
When using the CLI port and cable for setting controller IP addresses, be aware of the following known
issues on Microsoft Windows platforms.
Problem
On Windows operating systems, the USB CLI port may encounter issues preventing the terminal emulator
from reconnecting to storage after the Management Controller (MC) restarts or the USB cable is unplugged
and reconnected.
Workaround
Follow these steps when using the mini-USB cable and USB Type B CLI port to communicate out-of-band
between the host and controller module for setting network port IP addresses.
To create a new connection or open an existing connection (HyperTerminal):
1. From the Windows Control Panel, select Device Manager.
2. Connect using the USB COM port and Detect Carrier Loss option.
a. Select Connect To > Connect using: > pick a COM port from the list.
b. Select the Detect Carrier Loss check box.
The Device Manager page should show “Ports (COM & LPT)” with an entry entitled “Disk Array USB
Port (COMn)”—where n is your system’s COM port number.
3. Set network port IP addresses using the CLI (see procedure on page 46).
To restore a hung connection when the MC is restarted (any supported terminal emulator):
1. If the connection hangs, disconnect and quit the terminal emulator program.
a. Using Device Manager, locate the COMn port assigned to the Disk Array Port.
b. Right-click on the hung Disk Array USB Port (COMn), and select Disable.
c. Wait for the port to disable.
2. Right-click on the previously hung—now disabled—Disk Array USB Port (COMn), and select Enable.
3. Start the terminal emulator and connect to the COM port.
4. Set network port IP addresses using the CLI (see procedure on page 46).
6 Basic operation
Verify that you have completed the sequential “Installation Checklist” instructions in Table 1 (page 19).
Once you have successfully completed steps 1 through 8 therein, you can access the management
interface using your web browser to complete the system setup.
Accessing the SMU
Upon completing the hardware installation, you can access the web-based management interface—SMU
(Storage Management Utility)—from the controller module to monitor and manage the storage system.
Invoke your web browser, and enter the IP address of the controller module’s network port in the address
field (obtained during completion of “Installation Checklist” step 8), then press Enter. To Sign In to the
SMU, use the default user name manage and password !manage. If the default user or password—or
both—have been changed for security reasons, enter the secure login credentials instead of the defaults.
This brief Sign In discussion assumes proper web browser setup.
IMPORTANT: For detailed information on accessing and using the SMU, see the “Getting started” section
in the web-posted HP MSA 2040 SMU Reference Guide.
The Getting Started section provides instructions for signing in to the SMU, introduces key concepts,
addresses browser setup, and provides tips for using the main window and the help window.
TIP: After signing in to the SMU, you can use online help as an alternative to consulting the reference guide.
Configuring and provisioning the storage system
Once you have familiarized yourself with the SMU, use it to configure and provision the storage system. If
you are licensed to use the optional Remote Snap feature, you may also need to set up storage systems for
replication. Refer to the following topics within the SMU Reference Guide or online help:
• Configuring the system
• Provisioning the system
• Using Remote Snap to replicate volumes
NOTE: See the “Installing a license” topic within the SMU Reference Guide for instructions about creating
a temporary license or installing a permanent license.
IMPORTANT: If the system is used in a VMware environment, set the system Missing LUN Response option to
use its Illegal Request setting. To do so, see either the configuration topic “Changing the missing LUN response”
in the SMU Reference Guide or the command topic “set-advanced-settings” in the CLI Reference Guide.
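For reference, a hedged CLI sketch of this setting; the parameter and value names shown are assumptions
based on the topics cited above, so confirm them in the set advanced-settings topic of the CLI Reference
Guide:
# set advanced-settings missing-lun-response illegal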