Configuration Planning Guide EonStor v1.1b
Contact Information
Asia Pacific (International Headquarters)
Infortrend Technology, Inc.
8F, No. 102 Chung-Shan Rd., Sec. 3, Chung-Ho City, Taipei Hsien, Taiwan
Tel: +886-2-2226-0126  Fax: +886-2-2226-0020
[email protected]  [email protected]
http://esupport.infortrend.com.tw  http://www.infortrend.com.tw

China
Infortrend Technology, Limited
Room 1210, West Wing, Tower One, Junefield Plaza, No. 6 Xuanwumen Street, Xuanwu District, Beijing, China  Post code: 100052
Tel: +86-10-6310-6168  Fax: +86-10-6310-6188
[email protected]  [email protected]
http://esupport.infortrend.com.tw  http://www.infortrend.com.cn

Japan
Infortrend Japan, Inc.
6F, Okayasu Bldg., 1-7-14 Shibaura, Minato-ku, Tokyo, 105-0023 Japan
Tel: +81-3-5730-6551  Fax: +81-3-5730-6552
[email protected]  [email protected]
http://esupport.infortrend.com.tw  http://www.infortrend.co.jp

Americas
Infortrend Corporation
2200 Zanker Road, Unit D, San Jose, CA 95131, USA
Tel: +1-408-988-5088  Fax: +1-408-988-6288
[email protected]
http://esupport.infortrend.com  http://www.infortrend.com
Europe (EMEA)
Infortrend Europe Limited
1 Cherrywood, Stag Oak Lane, Chineham Business Park, Basingstoke, Hampshire RG24 8WF, UK
Tel: +44-1256-707-700  Fax: +44-1256-707-889
[email protected]  [email protected]
http://esupport.infortrend-europe.com/  http://www.infortrend.com

Germany
Infortrend Deutschland GmbH
Werner-Eckert-Str. 8, 81829 Munich, Germany
Tel: +49 (0)89 45 15 18 7 - 0  Fax: +49 (0)89 45 15 18 7 - 65
[email protected]  [email protected]
http://www.infortrend.com/germany
Copyright 2008
This Edition First Published 2008
All rights reserved. No part of this publication may be reproduced, transmitted, transcribed, stored in a retrieval system, or translated into any language or computer language, in any form or by any means, electronic, mechanical, magnetic, optical, chemical, manual or otherwise, without the prior written consent of Infortrend Technology, Inc.
Disclaimer
Infortrend Technology makes no representations or warranties with respect to the contents hereof and specifically disclaims any implied warranties of merchantability or fitness for any particular purpose. Furthermore, Infortrend Technology reserves the right to revise this publication and to make changes from time to time in the content hereof without obligation to notify any person
of such revisions or changes. Product specifications are also subject to change without notice.
Trademarks
Infortrend, Infortrend logo, EonStor and SANWatch are all registered trademarks of Infortrend Technology, Inc. Other names prefixed with IFT and ES are trademarks of Infortrend Technology, Inc. All other names, brands, products or services are trademarks or registered trademarks of their respective owners.
Table of Contents
Contact Information
Copyright 2008
This Edition First Published 2008
Disclaimer
Trademarks
Table of Contents
Organization of this Guide
Revision History
Related Documentation
Organization of this Guide
Chapter 1. Host interface and storage configuration basics.
Chapter 2. RAID levels.
Chapter 3. Sample RAID configuration procedure.
Appendix 1. Tunable firmware parameters and firmware limitations.
Appendix 2. Using hot spares.
Revision History
Rev. 1.0: Initial release.
Rev. 1.1: Removed JBOD from the RAID level introduction; NRAID provides similar functionality. Added definitions for the Active and Passive data paths in a scenario involving redundant controllers, redundant paths, and the EonPath multi-pathing driver.
Rev. 1.1a: Dynamic switch of LD ownership in the event of external link failure is now supported by firmware release 3.64h.
Rev. 1.1b: Replaced the sample drawings for the Dynamic LD Assignment feature.
Related Documentation
Firmware Operation Manual
SANWatch User's Manual
EonPath User's Manual
Embedded RAIDWatch User's Manual
Installation and Hardware Reference Manual
Quick Installation Guide
Rackmount Rail Installation Guide (for later models, rackmounting details are described in the Quick Installation Guide)
System Troubleshooting Guide
LCD Keypad Navigation Map
These documents can be found on the product utility CD included with your system package and are continuously updated as technologies progress and specifications change.
Chapter
1
Host Interface and Storage Configuration Basics
1-1. Host Interface Types:
The EonStor series storage systems are equipped with prevalent types of host link interfaces including: 1. Fibre Channel, 2. Serial Attached SCSI (SAS), 3. Internet SCSI (iSCSI). Parallel SCSI is gradually being replaced by SAS and is not included in the following discussion.
SAN: Storage Area Network. Refers to configurations that include storage systems connected through Fibre Channel data links. SAN configurations often include interconnect hardware such as fabric switches. A Fibre Channel SAN can span an entire enterprise or beyond, and enables connections to an almost limitless number of application servers in a storage network.
IP SAN: Often considered a lower-cost alternative to a Fibre Channel SAN. Refers to configurations in which iSCSI storage attaches to an existing Ethernet network. iSCSI storage reduces implementation cost by exchanging SCSI commands over the TCP/IP infrastructure.
Host ports:
1. SAS links for DAS: There are two different kinds of SAS ports, SFF-8088 and SFF-8470; both are multi-lane wide ports.
DAS Host Port Example: EonStor B12S
Host Link Cables: SAS cable with SFF-8088 connector SAS cable with SFF-8470 connector
One 120cm host link cable (with SFF-8088 or SFF-8470 connectors) is shipped with the EonStor DAS series. A 50cm version is also available. Other SAS link cables are separately purchased.
2. FC links for SAN:
SAN Host Port Example: EonStor B12F
Fibre Channel host ports are SFP sockets that receive separately purchased Fibre Channel transceivers. The transceiver converts electrical signals into optical signals and transmits data over fiber optic links.
Fibre Channel transceiver (optical)
Fiber optic cable (LC-to-LC)
3. Ethernet links for IP SAN: IP SAN Host Port Example: EonStor S16E
Host link cables: The Ethernet cables are user-supplied. Use Cat5e or better cables for cabling iSCSI storage to an IP SAN.
Cabling and configuring a storage system powered by redundant controllers can be tricky, because attention must be paid to preparing fault-tolerant paths as a precaution against device failure. For a mission-critical application, downtime can be very costly. Shown below are sample topologies to help you design your own configurations; there are more connection samples in the EonStor series hardware manual. The key elements in each topology are briefly described.
Legends
HBA: Host bus adapter.
LD: Logical drive; a logical group of 6, 8, or another number of disk drives.
AID: e.g., A112; a host ID managed by controller A.
BID: e.g., B113; a host ID managed by controller B.
CH0: Host channel 0.
CH1: Host channel 1.
RCC: The communication paths between controllers.
FC switch: Fibre Channel switch that provides intermediate connectivity to form a storage area network. FC switches also provide access control features such as zoning.
LUN Mapping: Host LUN mapping is represented by the encircled numbers placed either beside the LD or on the data paths.
Controller: The RAID controllers within the storage system. Controllers are identified as controller A or controller B.
NOTE: 1. The samples below use Fibre Channel connectivity. 2. The default host IDs vary across EonStor models:
FC: 112 and 113
SAS: 0, 1 (single controller); 6, 7 (dual-controller)
iSCSI: 0, 1 (single controller); 6, 7 (dual-controller)
Optimal system performance depends on careful planning that takes various component factors into account.
HDD Speed: Today's HDDs can deliver a throughput of between 70MB/s and 100MB/s, and roughly 150 IOPS. You can use the performance data published by the disk vendor as a basis for estimating an optimal deployment.
LD: Logical drives provide combined performance by grouping multiple hard drives. For a logical drive configured as RAID3, 5, or 6, parity or spare drives do not contribute to RAID performance.
LD Size (stripe width): Combine a reasonable number of hard disks into a logical drive. A logical drive consisting of too many members takes a very long time to rebuild. A combination of 6 or 8 members is often optimal. RAID0 provides the best performance but no fault tolerance.
LD Performance: With the measures above, we can arrive at a rough LD performance figure by subtracting roughly 15 to 20 percent from the combined member performance, because a certain amount of system resource is consumed generating and distributing parity data. Taking a RAID5 LD of 8 members as an example: (8 - 1) x 70MB/s, less the parity handling effort, is approximately 420MB/s.
This LD performance can roughly fill a 4Gbps Fibre Channel host link.
Multi-pathing Driver: With the EonPath multi-pathing driver, traffic can be balanced across multiple host links by presenting a logical drive on all of them.
1-4-2. System Overall Performance:
You can fully utilize the powerful engine in the EonStor series through careful configuration. A combination of 32 HDDs in a RAID enclosure and a JBOD can theoretically make the best use of a 16-bay redundant-controller system:
There are 4 LDs, each of 8 members; 2 LDs reside in the RAID enclosure and 2 in the JBOD.
Each LD delivers roughly 420MB/s (see the previous description).
Each RAID controller manages 2 LDs (LD assignment).
There are 4 host channels (2 per controller).
The 4 LDs deliver a total of approximately 1,600MB/s, which is slightly lower than the approximate system capability.
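The arithmetic above can be captured in a small planning helper. The sketch below is a rough estimate only, using the per-drive figure cited in this section and an assumed parity-handling overhead of about 15 to 20 percent; it is not a performance guarantee.

```python
# Rough throughput estimate based on the planning figures in this section.
# All numbers are planning assumptions, not measured or guaranteed results.

def ld_throughput(members, parity_drives, per_drive_mbs=70.0, overhead=0.15):
    """Estimate the sequential throughput (MB/s) of one logical drive.

    members       -- number of disk drives in the LD
    parity_drives -- drives' worth of capacity used for parity (RAID5: 1, RAID6: 2)
    per_drive_mbs -- assumed per-drive streaming rate (this guide cites 70-100 MB/s)
    overhead      -- assumed fraction lost to generating/distributing parity data
    """
    return (members - parity_drives) * per_drive_mbs * (1.0 - overhead)

per_ld = ld_throughput(members=8, parity_drives=1)   # RAID5 LD of 8 members
print(f"Per LD : ~{per_ld:.0f} MB/s")                # roughly the 420 MB/s cited above
print(f"4 LDs  : ~{4 * per_ld:.0f} MB/s")            # roughly the 1,600 MB/s aggregate
```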
Because your application servers may not always generate I/O that fully stresses the arrays, more disk drives can be attached. In a storage configuration, logical drives, host LUN mapping, and other settings can be re-arranged if the nature of the host applications and data changes over time.
Other Considerations:
For high-speed I/O channels, use host bus adapters on at least a PCI-X or x8-lane PCI Express bus. Using outdated HBAs on a narrow bus can hinder host-storage performance.
For a higher level of fault tolerance, if you connect 4 host links from redundant RAID controllers, use two dual-ported HBAs to make the connections instead of linking all 4 ports to a single quad-ported HBA.
Perform throughput testing on the whole deployment before starting your applications. Understand and fine-tune your I/O.
Create logical drives to meet your needs for performance, fault tolerance, or both.
Some minor details, such as HBA BIOS settings and queue depth configurations, can be important but are easily overlooked.
1-4-3. Single-controller storage: Preparing a single-controller system is comparatively simple. Elements in this drawing are: LD: Logical drives are configured by grouping physical drives. IDs: Infortrend firmware comes with 1 host ID on each channel; other IDs are manually created. ID Mapping: Logical drives are mapped to IDs on both host channels. Mapping a logical drive to IDs on different channels provides access through 2 data paths.
1-4-4. Redundant-controller storage in a switched fabric: Preparing a redundant-controller system requires both AID and BID. Resource distribution is also determined by Logical Drive Assignment. If a logical drive is assigned to controller A, then controller A manages the I/Os to that logical drive. Elements in this drawing are: LD: Logical drives are configured by grouping physical drives. LD assignment: Each logical drive is either assigned to controller A or to controller B. ID Mapping: Logical drives are mapped to IDs on all host channels to leverage all host port bandwidth.
Infortrend firmware comes with 1 host ID on each channel. You need to manually create more IDs. More IDs can be associated with each LD to provide more active paths.
Data Paths: Data paths are routed from different RAID controllers, between FC switches, and to different servers. This way, a server can still access data when a cabling failure occurs. Multi-pathing: The EonPath software is necessary on the servers.
NOTE:
1. Multiple IDs on a Fibre Channel host channel are not allowed if the channel is configured in point-to-point mode. The maximum number of LUNs is:
Point-to-point: 4 (host channels) x 1 (ID per channel) x 32 (LUNs per ID) = 128
FC-AL: 4 (host channels) x 8 (IDs per channel) x 32 (LUNs per ID) = 1024
You can seldom use the maximum number, and having too many LUNs can cause a performance drag.
2. It is recommended to set your storage and switch ports to loop mode (FC-AL). In some circumstances involving cabling/controller failures, a server may not regain access to storage through a switch port configured in fabric mode (point-to-point).
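The LUN arithmetic in the note above can be reproduced with a one-line helper; the channel and ID counts below are the example values from this note.

```python
# Maximum addressable LUN count, following the note above.
def max_luns(host_channels, ids_per_channel, luns_per_id=32):
    return host_channels * ids_per_channel * luns_per_id

print("Point-to-point:", max_luns(4, 1))   # 4 x 1 x 32 = 128
print("FC-AL         :", max_luns(4, 8))   # 4 x 8 x 32 = 1024
```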
1-4-5. Redundant-controller storage for dedicated performance: Some storage applications may not require a high level of fault tolerance, e.g., AV post-production editing. Elements in this drawing are: LD: Logical drives are configured by grouping physical drives. LD assignment: Each logical drive is either assigned to controller A or to controller B. ID Mapping: Logical drives are mapped to IDs on all host channels to leverage all host port bandwidth.
Infortrend firmware comes with 1 host ID on each channel. You need to manually create more IDs. More IDs can be associated with each LD to provide more active paths.
Data Paths: Data paths are directly routed to an application server. A special firmware is required to disable the RCC communications between controllers, conserving the most resources for I/O service. Multi-pathing: The EonPath software is necessary on the servers.
NOTE:
The sample topologies in this document do not cover the cases of using the onboard hub (onboard FC bypass) such as those applied in the ASIC266 models. The onboard hub turns host ports of partner RAID controllers into a host loop.
1-4-6. Redundant-controller, high availability, for clustered servers: Provides shared storage for high-availability clustered servers. Elements in this drawing are: LD: Logical drives are configured by grouping physical drives. LD assignment: Each logical drive is either assigned to controller A or to controller B. ID Mapping: Logical drives are mapped to IDs on all host channels to leverage all host port bandwidth. The IDs in green circles are standby IDs. The standbys provide alternate access in the event that the controller holding the original ownership fails.
Infortrend firmware comes with 1 host ID on each channel. You need to manually create more IDs. More IDs can be associated with each LD to provide more active paths.
Data Paths: Data paths are directly routed to clustered servers so that both servers can access the LD. Multi-pathing: The EonPath software is necessary on the servers.
1-4-7. One controller fails in a redundant-controller storage system: Elements in this drawing are: Controller failure: Controller B fails. All AIDs and BIDs are taken over by controller A, the surviving controller. Disk Access: LD1 is accessed through the alternate data paths on the backplane. The failover process takes only a few seconds and is transparent to users.
1-4-8. Cable link failure: Before Dynamic LD Assignment (introduced with FW 3.64J), a cabling failure could cause degraded performance in the scenario diagrammed below. A cabling failure occurs, e.g., at an HBA. If a data route is disconnected, I/Os are redirected through the RCC links between partner controllers. Because this is a cabling failure, controller A still holds ownership of LD0. Redirecting I/Os through the alternate data paths and RCC links consumes considerable resources, so performance is compromised even though both controllers are still working normally.
1-4-9. Dynamic Switch of LD Ownership in a redundant-controller storage: Dynamic LD Assignment can dramatically improve system performance in the same cabling-failure scenario. Since firmware revision 3.64J, LD ownership can be temporarily shifted to the partner controller to avoid the overhead of redirecting I/Os through the RCC links. The LD0 ownership is temporarily handed to controller B. As a result, the Dynamic Assignment approach can produce a 50% performance gain over the traditional approach of routing through the RCC links.
NOTE: Dynamic LD Assignment works with multi-pathing drivers capable of TPGS (Target Port Group Support), so that path preference can be restored once the broken host links are restored.
1-4-10. The Active and Passive path mechanism in a redundant-controller storage: A data path's Active/Passive status is determined by logical drive ownership. If a logical drive (LD0) is assigned to controller A, the data paths to controller A are considered the Active (optimal) paths for access to LD0. I/Os are distributed through the Active paths.
The path status is negotiated between the firmware and the EonPath driver on the host side. In the event of an Active path failure or a controller failure, I/Os will be directed through the Passive paths.
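The ownership-driven path behavior described above can be modeled in a few lines. The sketch below is illustrative only; the data structures and function names are assumptions, not part of the firmware or the EonPath driver.

```python
# Illustrative model of Active/Passive path status: paths terminating on the
# controller that owns the LD are Active (optimal); the others are Passive.
# Names and structures are hypothetical, for explanation only.

def classify_paths(ld_owner, paths):
    """paths: list of (path_name, controller) tuples; ld_owner: 'A' or 'B'."""
    return {name: ("Active" if ctrl == ld_owner else "Passive")
            for name, ctrl in paths}

paths_to_ld0 = [("host-link-1", "A"), ("host-link-2", "A"),
                ("host-link-3", "B"), ("host-link-4", "B")]

print(classify_paths("A", paths_to_ld0))   # links through controller A are Active
print(classify_paths("B", paths_to_ld0))   # after failover to controller B, states flip
```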
Chapter
2
RAID Levels
Redundant Arrays of Independent Disks, or RAID, offers the following advantages: availability, capacity, and performance. Choosing the right RAID level and drive failure management can increase capacity and performance, subsequently increasing availability. Infortrend's external RAID controllers and subsystems provide complete RAID functionality and enhanced drive failure management.
A RAID storage system delivers the following advantages:
Capacity: Provides disk spanning by weaving multiple disk drives into one single volume.
Performance: Increases disk access speed by breaking data into several blocks and reading/writing to several drives in parallel. With RAID, storage speed increases as more drives are added, as long as the host channel bandwidth allows.
Fault Tolerance: Provides fault tolerance by mirroring or distributing parity across member drives.
NOTE:
Logical volumes, such as RAID50, can provide a higher level of fault tolerance than RAID5. However, the use of logical volumes is not always necessary: logical volumes add load on the system hardware and may not be optimal for most applications.
Sample Applications
RAID0: RAID0 can deliver the best sequential performance, but be reminded that it provides no protection for your data. RAID0 is ideal for applications needing a temporary data pool for high-speed access.
RAID1 (0+1): RAID1 is useful for a small group of drives requiring high availability and fast write access, although it is expensive in terms of usable drive capacity.
RAID3: RAID3 works well with single-task applications featuring large transfers, such as video/audio post-production editing, medical imaging, or scientific research requiring purpose-oriented performance.
RAID5: RAID5 is the most widely used level and is ideal for a media, legal, or financial database repository with lower write requests. RAID5 adapts well to multi-task applications with various I/O sizes. A RAID5 with an adequate stripe size is also applicable to large I/O transfers.
RAID6: RAID6 provides a high level of data availability and the benefits of RAID5, with the minor trade-off of slightly lower write performance. RAID6 addresses the weakness of cost-effective SATA drives, where magnetic defects on another member drive can cause problems if a drive fails at the same time.
NRAID
Capacity: N    Redundancy: No
NRAID stands for Non-RAID. The capacity of all member drives is combined into one logical drive (no block striping); in other words, the capacity of the logical drive is the total capacity of the physical member drives. NRAID does not provide data redundancy. Some vendors provide a self-defined RAID level, JBOD, as a way to concatenate disk drives into a volume; NRAID can be made of one or multiple disk drives in a way very similar to the use of JBOD.
RAID0
Capacity: N    Redundancy: No
RAID0 provides the highest performance but no redundancy. Data in the logical drive is striped (distributed) across the physical members.
RAID1
Capacity: N/2    Redundancy: Yes
RAID1 mirrors the data stored on one hard drive to another. By Infortrend's definition, RAID1 can only be performed with two hard drives. If there are more than two hard drives, RAID (0+1) will be automatically applied.
RAID (0+1)
Capacity: N/2    Redundancy: Yes
RAID (0+1) combines RAID0 and RAID1: mirroring and striping. RAID (0+1) allows multiple drive failures because of the full redundancy of mirrored pairs: multiple members can fail as long as no two of them belong to the same mirrored pair. If more than two hard drives are included in a RAID1, RAID (0+1) will be automatically applied.
IMPORTANT!
RAID (0+1) will not appear in the list of RAID levels supported by the controller. If you wish to perform RAID1, the system firmware will determine whether to perform RAID1 or RAID (0+1). This will depend on the number of disk drives selected to compose a logical drive.
RAID3
Capacity: N-1    Redundancy: Yes
RAID3 performs block striping with dedicated parity: one member drive is dedicated to storing the parity data. When a member drive fails, the controller can recover or regenerate the lost data of the failed drive by comparing and re-calculating the data on the remaining members.
RAID5
Capacity: N-1    Redundancy: Yes
RAID5 is similar to RAID3, but the parity data is not stored on a dedicated hard drive; parity information is interspersed across all members of the logical drive. In the event of a drive failure, the controller can recover or regenerate the lost data of the failed drive by comparing and re-calculating the data on the remaining members.
RAID6
Minimum disks required: 4    Capacity: N-2    Redundancy: Yes
NOTE: A RAID6 array can withstand simultaneous failures of two disk drives, or one drive failure plus bad blocks on another member drive.
RAID5 has been popular because it provides combined performance from its member drives and reasonable protection against a single disk failure. However, as storage systems grow larger and need to serve a wide variety of applications, RAID5 protection can be insufficient: in the event of a single drive failure, the occurrence of bad blocks on another member drive can render the affected data stripes unusable. RAID6 improves on RAID5 and provides a significantly higher redundancy level through its ability to withstand two simultaneous drive failures.
RAID6 is similar to RAID5, but two parity blocks are available within each data stripe across the member drives. Each RAID6 array uses the equivalent of two (2) member drives for storing parity data. The RAID6 algorithm computes two separate sets of parity data and distributes them to different member drives when writing to disks; a RAID6 array therefore requires the capacity of two disk drives for storing parity data. Each disk drive contains the same number of data blocks, and parity information is consequently interspersed across the array following preset algorithms. A RAID6 array can tolerate the failure of two disk drives or, in the degraded condition, one drive failure plus bad blocks on another member. In the event of disk drive failure, the controller can recover or regenerate the lost data of the failed drive(s) without interruption to normal I/O.
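The Capacity column used throughout this chapter (N, N/2, N-1, N-2) translates into a simple planning helper, sketched below; the drive sizes and levels shown are examples only.

```python
# Usable capacity per the Capacity column in this chapter (N, N/2, N-1, N-2).
# drive_size_gb is the capacity actually allocated from each member
# (see Maximum Drive Capacity in Chapter 3).

CAPACITY_RULES = {
    "NRAID": lambda n: n,
    "RAID0": lambda n: n,
    "RAID1": lambda n: n / 2,
    "RAID(0+1)": lambda n: n / 2,
    "RAID3": lambda n: n - 1,
    "RAID5": lambda n: n - 1,
    "RAID6": lambda n: n - 2,
}

def usable_capacity(level, members, drive_size_gb):
    return CAPACITY_RULES[level](members) * drive_size_gb

print(usable_capacity("RAID5", 8, 500))   # 7 x 500 = 3500 GB
print(usable_capacity("RAID6", 8, 500))   # 6 x 500 = 3000 GB
```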
Chapter
3
Sample RAID Configuration Procedure
1. Use a notebook and sketch the planned application for future reference.
2. Use Worksheets to keep a hard record of how your storage is configured. An example is shown below:
Application
File system
RAID level of LUN
LUN ID
LUN capacity
Server details (OS)
Host links info. (HBA, switch, etc.)
You can expand the worksheet to include more details such as the disk drive channel on which the disks reside, JBOD enclosure ID, whether the LUNs are shared, and shared by which servers, etc.
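If you prefer to keep the worksheet in electronic form, one possible record layout is sketched below. The field names follow the example worksheet above; all values are placeholders.

```python
# Hypothetical electronic version of the configuration worksheet.
# Field names follow the example worksheet above; values are placeholders.
lun_record = {
    "application": "Mail server database",
    "file_system": "NTFS",
    "raid_level_of_lun": "RAID5",
    "lun_id": 0,
    "lun_capacity_gb": 3500,
    "server_details_os": "Windows Server 2003",
    "host_links_info": "2 x 4Gbps FC via dual-ported HBA and FC switch",
}
print(lun_record)
```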
3. Drive Location:
Tray Numbering:
The same disk tray layout applies to all of Infortrend's storage enclosures: trays are numbered from left to right and then from top to bottom. It is advised that you select members for a logical drive following the tray numbering rule, to avoid confusing yourself when using the LCD keypad or the text-based firmware utility.
For example, a typical single enclosure configuration can look like this:
Disk drives in slots 1 to 8 are included in LD0, Logical Drive #0. Disk drives in slots 9 to 15 are included in LD1, Logical Drive #1. Slot 16 is configured as a Global Spare, which will participate in the rebuild of any logical drives.
A firmware utility screen showing physical drive information looks like this. Following the drive numbering sequence helps avoid configuration errors.
Step 1. Use the included serial cable to connect the COM1 serial ports. COM1 is always located on the RAID controllers.
Step 2. If your system is powered by a single RAID controller, connect the single end-to-end cable. If your system is powered by redundant RAID controllers, use the Y-cable. If you prefer a telnet console, connect Ethernet cables to the controllers' 10/100BaseT Ethernet ports.
Step 3. If using the serial port connection for local management, attach a null modem to the DB9 end of the serial cable.
Step 2.
Step 3.
The next screen requires you to select a serial port on your PC.
Step 4.
Select the appropriate baud rate and data/stop bit values (identical to those set for the COM1 port on your RAID subsystem). Click OK, and you should then be able to establish a management console. The firmware defaults are:
Baud rate: 38400
Data bit: 8
Parity: none
Stop bit: 1
Flow control: Hardware
Step 5.
The initial screen of the text-based utility should display. Use the following keys to operate the utility:
Arrow keys: To move around the menu options
[Enter]: To enter a sub-menu or to execute a selected option
[Esc]: To cancel an option or return to the previous menu
[Ctrl]+[L]: To refresh the screen information
Step 6.
Use the cursor keys to select a display mode. Press Enter to enter the main menu.
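If you prefer a scripted serial console to a terminal emulator such as HyperTerminal, the sketch below opens the management PC's serial port with the firmware defaults listed in Step 4, using the third-party pyserial package. The port name ("COM1" on the PC side) is an assumption; adjust it for your system.

```python
# Sketch: open the RAID subsystem console with the firmware defaults
# (38400 baud, 8 data bits, no parity, 1 stop bit, hardware flow control)
# using the third-party pyserial package (pip install pyserial).
import serial

ser = serial.Serial(
    port="COM1",                      # serial port on the management PC (assumption)
    baudrate=38400,
    bytesize=serial.EIGHTBITS,
    parity=serial.PARITY_NONE,
    stopbits=serial.STOPBITS_ONE,
    rtscts=True,                      # hardware flow control
    timeout=2,
)
ser.write(b"\x0c")                    # Ctrl+L refreshes the firmware utility screen
print(ser.read(512).decode("ascii", errors="replace"))
ser.close()
```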
2. Telnet via Ethernet
Step 1. Use an Ethernet cable with RJ-45 connectors to connect to the Ethernet port on the controller module.
Step 2. Connect the other end of the Ethernet cable to your local area network. An IP address should be acquired for the subsystem's Ethernet port. The subsystem firmware also supports automatic client configuration such as DHCP.
Step 3. Consult your network administrator for an IP address to be assigned to the subsystem's Ethernet port.
Step 4. Use the LCD keypad or RS-232 console to select "View and Edit Configuration Parameters" from the Main Menu on the terminal screen. Select "Communication Parameters" -> "Internet Protocol (TCP/IP)" -> press ENTER on the chip hardware address -> and then select "Set IP Address."
NOTE:
The default IP setting is DHCP client. However, if a DHCP server cannot be found within several seconds, a default IP address of 10.10.1.1 will be loaded. This feature is available in the EonStor ASIC400 models.
Step 5. Provide the IP address, NetMask, and Gateway values accordingly.
Step 6. PING the IP address from your management computer to make sure the link is valid.
Step 7. Open a command prompt window and key in "telnet xxx.xxx.xx.xxx" (the controller IP address) to access the embedded firmware utility.
Step 8.
Enter the preset password for accessing the storage system. If there is no preset password, press Enter to proceed.
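Before opening the telnet session, you may want to confirm that the subsystem answers on the telnet port. The sketch below performs a minimal TCP check with Python's standard socket module; the IP address shown is only the factory default mentioned in the note above and should be replaced with the address assigned to your subsystem.

```python
# Minimal reachability check of the telnet console (TCP port 23) before
# opening a telnet session. Replace the IP address with the one assigned
# to your subsystem; 10.10.1.1 is only the factory default noted above.
import socket

def telnet_port_open(ip, port=23, timeout=3):
    try:
        with socket.create_connection((ip, port), timeout=timeout):
            return True
    except OSError:
        return False

controller_ip = "10.10.1.1"
print("Telnet console reachable:", telnet_port_open(controller_ip))
```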
NOTE:
A management console using SANWatch or the web-based Embedded RAIDWatch is outside the scope of this document. Please refer to their respective user manuals for details.
3. Secure Link over SSH
The firmware supports remote management over a network connection with the security of SSH (Secure Shell) protection. SSH is widely used for its ability to provide strong authentication and secure communications over insecure channels. SSH secure access is also available as an option in the connection window of the SANWatch management software. SSH is more readily supported by Linux- or Unix-based systems; support for SSH on Microsoft Windows platforms can be limited.
To make an SSH link from Windows, you can use free SSH tools such as PuTTY. If such a tool is used, it may be necessary to configure the display options, e.g., the character set translation on received data and the font type, in order for the terminal screen to be displayed correctly. The appearance settings vary among SSH tools.
Appearance menu:
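As an alternative to an interactive SSH tool, the hedged sketch below opens an SSH shell to the controller with the third-party paramiko package. The host address, user name, and password are placeholders rather than documented values; supply the password set on your subsystem, if any, and the VT100 terminal type is an assumption in line with typical terminal settings for the firmware utility.

```python
# Sketch: open an SSH shell to the controller from a script using the
# third-party paramiko package instead of an interactive tool.
# Host, user name, and password below are placeholders, not documented values.
import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("192.168.1.100", username="admin", password="", timeout=10)

channel = client.invoke_shell(term="vt100")   # assumed terminal type
channel.settimeout(5)
channel.send(b"\x0c")                         # Ctrl+L refreshes the screen
print(channel.recv(4096).decode("ascii", errors="replace"))
client.close()
```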
Step 1. Use the arrow keys to scroll down and make sure all installed hard drives are present. The list can be a long one if you attach expansion JBODs. HDDs in a JBOD are identified by the number in the JBOD column.
Step 2.
Use the ESC key to return to the Main Menu. Now you can go to the View and Edit Logical Drives menu to begin RAID configuration.
Step 3.
Select an index number by pressing Enter on it; configuration usually starts from LG0. Confirm your selection by moving the highlighted area to Yes and pressing Enter.
Step 4.
Step 5.
Select members to be included in the logical drive by moving the highlighted color bar and pressing Enter on each drive. A selected member will be highlighted and its index number shown in the index column.
The above screen shows that 8 members have been selected. The number of members is determined by the enclosure and also by the performance concerns mentioned earlier in this document. If you have a 24-bay enclosure, you might create two 12-member LDs or three 8-member LDs. With a 12-bay enclosure, you can compromise with two 6-member LDs.
Step 6. Press the ESC key when you have selected all members. An LD parameters window will appear.
Step 6-1. The first option, Maximum Drive Capacity, is useful if you suspect your drive members may have slightly different block numbers, which determine the actual drive capacity you can allocate from each drive. Setting the Maximum Drive Capacity slightly lower works around the issue that one of the members may actually be slightly smaller. Chances are that some blocks in some drives were marked as defective by the drive manufacturers before shipping, and hence the usable number of blocks is reduced. For Infortrend's system firmware, all members in a logical drive must be of the same capacity and speed.
You can also specify only part of the capacity, for example half of each member's size. The unused capacity can later be utilized as a secondary RAID partition using the RAID expansion function.
Step 6-2. This is where you specify a Local (Dedicated) spare drive. For details, please refer to Appendix 2. A Dedicated spare only joins the rebuild of this logical drive.
Step 6-3. If you are configuring LDs for a redundant-controller system, you can equally assign LDs to both controllers so that the computing power of the partner controllers can be fully utilized. For example, if you have 4 LDs, you can assign 2 LDs to controller A and another 2 to controller B.
Step 6-4. The Reserved Space option is view-only; skip this option. The space is automatically segregated for keeping logical drive configuration data.
Step 6-5. Write-back caching can significantly enhance LD performance. Select Write-through only if you do not have the protection of a battery backup. The Default option lets the LD's caching policy be automatically adjusted to the system-level caching policy, which is dynamically disabled in critical events such as component failures or a thermal alarm. The system-level option is found in View and Edit Configuration Parameters -> Caching Parameters.
Step 6-6. The Online Initialization Mode allows you to continue with the rest of the system setup steps without having to wait for the logical drive to be fully initialized. Initializing an LD terabytes in size can take hours.
Step 6-7. The default stripe size (128KB) is applicable to most applications. The stripe size can be adjusted in situations where the I/O characteristics are predictable and simple. For example, logical drives in a RAID system serving an AV stream editing application have a dedicated purpose. In such an environment, you can match the size of host I/O transfers to the LD stripe size so that 1 or 2 host I/Os can be efficiently served within a parallel write.
Step 7.
Press the ESC key once you have set all configurable details. A confirmation message box will appear. Check the details before moving to the Yes option. Press Enter on Yes to begin the creation process.
Step 8.
A succession of event messages will appear. Press the ESC key several times to dismiss them if no error events occurred.
Step 9.
Press ESC to hide this progress indicator. The progress bar will run in the background. If the online mode was selected, you can continue with the rest of the procedure, such as host LUN mapping.
Step 10.
You should return to the View and Edit Logical Drives screen. Press Enter on the LD you just created, and select Logical Drive Name. Enter a name for ease of identification, such as ExchangeServer.
NOTE:
You may divide a logical drive or logical volume into partitions of the desired capacity, or use the entire capacity as a single volume.
1. It is not a requirement to partition any logical configuration. Partitioning helps when multiple servers or applications need to share the disk space and you do not have measures such as file locking to prevent access contention.
2. Because the number of logical drives is limited, partitioning is an easy way to divide logical drives into volumes of the sizes you prefer.
3. Do not create partitions on a logical drive that already contains data; partitioning will destroy the data.
Step 11.
Select another entry in the LD list to repeat the process to create more logical drives using the methods described above.
Step 12.
Create more host IDs in the View and Edit Channels menu.
Step 12-1. Press Enter to select a host channel. Step 12-2. Press Enter on View and edit SCSI ID.
Step 12-5. Select the Slot A or Slot B controller. Slot A and Slot B determine the ownership of logical drives: a logical drive associated with a Slot A ID will be managed by the Slot A controller (controller A); one associated with a Slot B ID, by the Slot B controller.
Step 12-7. Confirm the Add action by selecting Yes; to continue adding IDs, select No at the following prompt. Repeat the process to create as many AIDs or BIDs as planned for your configuration.
Step 14.
Reset the controller after you have created all the AIDs and BIDs planned for your configuration.
Step 15.
A reset may take several minutes. Enter the View and Edit Host LUNs menu.
Step 16.
Press Enter on a host ID. It is now necessary to refer to the topology plan you made previously. The example below makes for a dedicated DAS topology. The LUN mapping process associates LDs with host channel IDs, and in this way LDs are presented through different host links. The topology here shows only a basic, direct-attached configuration; mapping multiple volumes in a SAN environment can be more complicated.
Step 16-1. Select a host channel ID. Note whether it is a Slot A or a Slot B ID.
Step 16-2. Select a LUN number under this ID.
Step 16-3. Press Enter on seeing Map Host LUN.
Step 16-4. Select the volume type you are mapping to this host ID: Logical Drive or Logical Volume.
Step 16-5. Select a logical drive. Note the LG column: A0 indicates the first LD, LD0, is assigned to controller A. The A0 LD is managed by controller A.
Step 16-6. Select a RAID partition within the LD. In this case, there is only one partition. Press Enter to proceed.
Step 16-7. Confirm your LUN mapping. It is recommended to check the details against your application plan and worksheet.
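For record keeping, the mapping choices made in Step 16 can be noted alongside the worksheet from earlier in this chapter. The sketch below is one hypothetical way to capture a single mapping; the field names and values are illustrative only.

```python
# Hypothetical record of one mapping made in Step 16, for cross-checking
# against the topology plan and worksheet. Values are examples only.
lun_mapping = {
    "host_channel": 0,        # CH0
    "host_id": "A112",        # a Slot A ID: the LD is managed by controller A
    "lun_number": 0,
    "logical_drive": "LD0",
    "partition": 0,           # only one partition in this example
}

print(f"CH{lun_mapping['host_channel']} ID {lun_mapping['host_id']} "
      f"LUN {lun_mapping['lun_number']} -> {lun_mapping['logical_drive']} "
      f"partition {lun_mapping['partition']}")
```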
Step 17.
Repeat the mapping process until you have presented all your LDs properly on the host buses according to your application plan. You should then see the volumes on your application server (using Windows Server 2003 as an example). 2 LDs on 4 data paths will appear as 4 devices in the Disk drives list of the Computer Management utility.
Step 18.
After installing the EonPath multi-pathing driver, the same LD appearing on 2 data paths will become a single Multi-Path Disk Device. Installing EonPath requires you to reboot the server. For details, please refer to the EonPath User's Manual.
NOTE:
Make sure the firmware on your subsystem is EonPath compatible. Some earlier firmware revisions, e.g., 3.42, may not work with EonPath.
TIPS:
1. For answers to difficulties you might encounter during the initial configuration process, refer to the Support -> FAQ section of Infortrend's website.
2. For specific, hardware-related details, such as the onboard hub or jumper settings, please refer to the Installation and Hardware Reference Manual included with your system package.
Appendix
1
Tunable Parameters
Fine-tune the subsystem and array parameters for your host applications. Although the factory defaults provide optimized operation, you may refer to the table below to facilitate tuning of your array. Some of the performance and fault-tolerance settings may also be changed later during the preparation of your disk array. Use this table as a checklist and make sure each item is set to an appropriate value.
In the table below, parameters marked (1) should be configured at the initial stage of system configuration, parameters marked (2) can be changed later, and unmarked parameters are non-critical. For each user-defined parameter, the default value and the alternate settings are listed.
Fault Management:
(1) Automatic Logical Drive Rebuild - Spare Drive. Default: Enabled when a spare drive is available. Alternate settings: RAID 1 + Local Spare, RAID 3 + Local Spare, RAID 5 + Local Spare, RAID 6 + Local Spare, Global Spare, Enclosure Spare (recommended in a multi-enclosure configuration).
S.M.A.R.T. Default: Disabled. Alternate settings: Detect Only, Perpetual Clone, Clone + Replace.
Verification on Write. Default: Disabled. Alternate settings: On LD Initialization, On LD Rebuild, On Normal Drive Writes.
Other settings in this group include a cloning function for failing drives (default: Manual function; options: Replace After Clone, Perpetual Clone), a priority setting (default: Low; a higher priority requires more system resources; options: Low, Normal, Improved, High), a second priority-style setting (default: Normal; options: Low, Normal, Improved, High), and several timer settings that default to Disabled (ranges: Continuous to 10 minutes; 5 to 60 seconds; 0.5 to 30 seconds, the last of which is not necessary in models using serial drive buses such as SAS or Fibre).
Channel Mode. Alternate settings: Host, Drive, RCCOM, Drive + RCCOM (RCC options are not configurable in the ASIC400 models).
Host and Drive Channel IDs. Default: preset on most models. Alternate settings: preset ID ranges.
Controller Unique Identifier. Default: Auto. Alternate settings: a hex number from 0 to FFFFF (FW 3.25 and above).
(2) Data Rate. Alternate settings: depends on problem solving.
(1) Date and Time. Default: N/A.
(1) Time Zone. Default: +8 hrs.
Optimization:
(1) Write-back Cache. Default: Enabled. Alternate settings: Enabled, Disabled.
(1) LD Stripe Size. Default: related to the controller's general setting and the application's I/O characteristics. Alternate settings: 32KB to 1024KB.
(2) Adaptive Write Policy. Default: Disabled. Alternate settings: Enabled, Disabled.
(2) LD Write Policy. Alternate settings: W/B or W/T; LD-specific or dependent on the system's general setting.
Host- and Drive-side Parameters:
(1) Data Transfer Rate. Default: preset. Alternate settings: Host side: Asynchronous to 4GHz; Drive side: Asynchronous to 3GHz.
(1) Maximum Number of Tags Reserved for each Host-LUN Connection. Default: 32.
(1) Maximum Queued I/O Count. Default: 32. Alternate settings: 1 to 1024.
NOTE: LUNs per ID x tags reserved = flag A; maximum number of concurrent host-LUN connections = flag B. If A > B, Max = A; otherwise, Max = B.
Tags per Host-LUN Connection. Default: 32.
Wide Transfer. Default: preset.
Drive I/O Timeout. Default: 7.
Drive Spindown Idle Delay Period. Default: Disabled.
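The note above can be expressed as a short calculation. The sketch below is a minimal rendering of that rule; the example values are assumptions for illustration, not recommended settings.

```python
# The tag-reservation rule from the note above:
#   flag A = LUNs per ID x tags reserved per host-LUN connection
#   flag B = maximum number of concurrent host-LUN connections
#   effective maximum = A if A > B, else B
def effective_max(luns_per_id, tags_reserved, max_concurrent_connections):
    a = luns_per_id * tags_reserved
    b = max_concurrent_connections
    return a if a > b else b

# Example values chosen for illustration only.
print(effective_max(luns_per_id=8, tags_reserved=32,
                    max_concurrent_connections=128))   # A = 256 > B = 128 -> 256
```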
Fibre Channel options: Fibre Channel Dual Loop, Host ID/WWN Name List, RCC through Fibre Channel.
(1) Temperature and voltage monitoring thresholds. Defaults: CPU temperature 0~90°C; board temperature 0~80°C; 3.3V: 2.9~3.6V; 5V: 4.5~5.5V; 12V: 10.8~13.2V.
Write-back is the default caching mode.
NOTE: A maximum of 128 members in a logical drive is a theoretical number. Rebuilding or scanning such a logical drive takes a long time.
Appendix
2
Protection by Hot Spares
Infortrend's firmware provides flexibility with three different kinds of hot spare drives:
Local (Dedicated) Spare
Enclosure Spare
Global Spare
When a drive fails in a RAID1, 3, 5, or 6 logical drive, a hot spare automatically proceeds with an online rebuild. This appendix shows how the three types function and introduces the related settings.
The mechanism above shows how the controller's embedded firmware determines whether to use a Local, Enclosure, or Global Spare to rebuild a logical drive. One important point about rebuilding a logical drive is that users often forget to configure another hot spare after replacing a failed drive. Shown below is a standard procedure:
The Global Spare joins LD0 and automatically starts the rebuild. Note that the Global Spare becomes a member of LD0.
The failed drive is replaced and configured as a Global spare. Doing so protects the array from another drive failure.
Every disk drive that is not included in a logical drive will be automatically configured as a Global Spare.
Having members across different enclosures may not adversely affect logical drive operation; however, it is easy to forget the locations of member drives, and the chance of making mistakes increases. For example, you may replace the wrong drive and destroy a logical drive when that logical drive is already in a degraded mode (having one failed member).
Global Spare Drive: A Global Spare Drive is a general hot spare that participates in the rebuild of all logical drives, even those in different enclosures. When Global Spares are applied, make sure the Global Spare has a disk capacity equal to or larger than that of the members of the array.
Spare Drive Limitation: Spare drives can only rebuild a logical drive whose members are of an equal or smaller capacity. Therefore, it is safer to tune down the Maximum Drive Capacity when creating logical drives. The Maximum Drive Capacity is the maximum capacity used from each member drive to compose a logical group. Sometimes disk drives labeled with the same capacity actually come with different numbers of logical blocks; with different block numbers, a slightly smaller spare may not be able to rebuild a logical drive composed of larger members.
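The selection behavior described in this appendix can be summarized in a short sketch. The code below is an illustrative model only (the data structures and function are hypothetical, not firmware internals), showing the Local, then Enclosure, then Global order together with the capacity restriction.

```python
# Illustrative sketch of choosing a rebuild candidate: Local (dedicated) spares
# are tried first, then Enclosure spares, then Global spares, and a spare must
# be at least as large as the capacity used per member drive.
# Data structures and the function are hypothetical, not firmware internals.

def pick_spare(spares, failed_ld, failed_enclosure, member_capacity_gb):
    for kind in ("local", "enclosure", "global"):
        for spare in spares:
            if spare["kind"] != kind or spare["capacity_gb"] < member_capacity_gb:
                continue
            if kind == "local" and spare["ld"] != failed_ld:
                continue
            if kind == "enclosure" and spare["enclosure"] != failed_enclosure:
                continue
            return spare
    return None   # no usable spare: the rebuild waits for a replacement drive

spares = [
    {"kind": "global",    "ld": None,  "enclosure": None, "capacity_gb": 500},
    {"kind": "enclosure", "ld": None,  "enclosure": 1,    "capacity_gb": 500},
    {"kind": "local",     "ld": "LD0", "enclosure": 0,    "capacity_gb": 500},
]
print(pick_spare(spares, failed_ld="LD0", failed_enclosure=0, member_capacity_gb=465))
```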