Linux with xSeries and FAStT
Bertrand Dufrasne
Jonathan Wright
ibm.com/redbooks
International Technical Support Organization
September 2003
SG24-7026-00
Note: Before using this information and the product it supports, read the information in “Notices” on
page xi.
Figures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vii
Tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix
Notices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi
Trademarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xii
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiii
The team that wrote this redbook. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiii
Become a published author . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiv
Comments welcome. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiv
Related publications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
IBM Redbooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
Online resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
How to get IBM Redbooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
Help from IBM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
Figures
3-1 The concept of xSeries: Learning the best from all IBM eServer product lines . . . . . 24
3-2 IBM BladeCenter chassis and blade . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
3-3 A generic HA cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
3-4 A generic High Performance Cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
3-5 IBM eServer Cluster 1350 with 39 cluster nodes. . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
4-1 Evolution of the FAStT Storage Servers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
4-2 IBM TotalStorage FAStT200 Storage Server. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
4-3 IBM TotalStorage FAStT500 Storage Server. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
4-4 IBM TotalStorage FAStT600 Storage Server. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
4-5 IBM TotalStorage FAStT700 Storage Server. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
4-6 IBM TotalStorage FAStT900 Storage Server. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
4-7 Simple example of Storage on a SAN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
4-8 FlashCopy read and write schema . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
4-9 Remote Volume Mirroring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
5-1 Typical Storage Manager Subsystem Management window . . . . . . . . . . . . . . . . . . . 52
5-2 Typical view from the IBM FAStT Management Suite . . . . . . . . . . . . . . . . . . . . . . . . 54
5-3 Password prompt for shell . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
5-4 netCfgShow screen . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
5-5 Automatic Discovery screen . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
5-6 Storage Manager Enterprise Management window. . . . . . . . . . . . . . . . . . . . . . . . . . 62
5-7 Add Device screen. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
5-8 Subsystem Management window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
5-9 Firmware Download Location screen. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
5-10 Firmware update screen . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
5-11 NVSRAM Download Location screen . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
5-12 NVSRAM Update screen . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
5-13 Default Host Type screen . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
5-14 Logical Drive Wizard screen . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
5-15 Create array screen . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
5-16 Logical Drive Parameters screen. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
5-17 Advanced Logical Drive Parameters screen . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
5-18 Create a New Logical Drive screen . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
5-19 Arrays and Logical Drives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
5-20 Mappings Startup Help screen. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
5-21 Host Group added . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
5-22 Hosts added. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
5-23 Define Host Port screen. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
5-24 Host Ports added . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
5-25 Storage Partitioning Wizard screen . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
5-26 Select Host Group or Host screen . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
5-27 Select Logical Drives/LUNs screen . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
5-28 Logical drives added . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
5-29 Installation Splash screen . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
5-30 FAStT_MSJ Introduction screen . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
5-31 Product Features screen . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
5-32 Building a ramdisk image. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
5-33 Example of qlremote running . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
5-34 FAStT_MSJ screen . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
This information was developed for products and services offered in the U.S.A.
IBM may not offer the products, services, or features discussed in this document in other countries. Consult
your local IBM representative for information on the products and services currently available in your area. Any
reference to an IBM product, program, or service is not intended to state or imply that only that IBM product,
program, or service may be used. Any functionally equivalent product, program, or service that does not
infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to
evaluate and verify the operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter described in this document. The
furnishing of this document does not give you any license to these patents. You can send license inquiries, in
writing, to:
IBM Director of Licensing, IBM Corporation, North Castle Drive Armonk, NY 10504-1785 U.S.A.
The following paragraph does not apply to the United Kingdom or any other country where such provisions are
inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS
PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED,
INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT,
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of
express or implied warranties in certain transactions, therefore, this statement may not apply to you.
This information could include technical inaccuracies or typographical errors. Changes are periodically made
to the information herein; these changes will be incorporated in new editions of the publication. IBM may make
improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time
without notice.
Any references in this information to non-IBM Web sites are provided for convenience only and do not in any
manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the
materials for this IBM product and use of those Web sites is at your own risk.
IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring
any obligation to you.
Information concerning non-IBM products was obtained from the suppliers of those products, their published
announcements or other publicly available sources. IBM has not tested those products and cannot confirm the
accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the
capabilities of non-IBM products should be addressed to the suppliers of those products.
This information contains examples of data and reports used in daily business operations. To illustrate them
as completely as possible, the examples include the names of individuals, companies, brands, and products.
All of these names are fictitious and any similarity to the names and addresses used by an actual business
enterprise is entirely coincidental.
COPYRIGHT LICENSE:
This information contains sample application programs in source language, which illustrate programming
techniques on various operating platforms. You may copy, modify, and distribute these sample programs in
any form without payment to IBM, for the purposes of developing, using, marketing or distributing application
programs conforming to the application programming interface for the operating platform for which the sample
programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore,
cannot guarantee or imply reliability, serviceability, or function of these programs. You may copy, modify, and
distribute these sample programs in any form without payment to IBM for the purposes of developing, using,
marketing, or distributing application programs conforming to IBM's application programming interfaces.
ActionMedia, LANDesk, MMX, Pentium and ProShare are trademarks of Intel Corporation in the United
States, other countries, or both.
Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft Corporation in the
United States, other countries, or both.
Java and all Java-based trademarks and logos are trademarks or registered trademarks of Sun Microsystems,
Inc. in the United States, other countries, or both.
C-bus is a trademark of Corollary, Inc. in the United States, other countries, or both.
UNIX is a registered trademark of The Open Group in the United States and other countries.
SET, SET Secure Electronic Transaction, and the SET Logo are trademarks owned by SET Secure Electronic
Transaction LLC.
Other company, product, and service names may be trademarks or service marks of others.
IBM® was committed to Linux long before it became trendy. This early recognition of Linux's
potential explains why today IBM offers the widest range of platforms and products for
deploying solutions on Linux, along with support and services.
IBM TotalStorage® products are known for their high quality and reliability and work well with
Linux. As part of a well-designed, Linux-based e-business infrastructure, they can help you cut
costs, consolidate infrastructure, and position you for the new on demand world.
This IBM Redbook presents high-level information on Linux in conjunction with IBM eServer
and TotalStorage products, giving proof points that these products can be deployed together
to provide enterprise-class solutions. In particular, this book looks at Linux with the
xSeries® servers and IBM TotalStorage FAStT disk products.
This redbook is intended as a starting point and reference for IBM representatives, Business
Partners, and clients who are planning Linux-based solutions with IBM xSeries servers and
FAStT storage products.
Most of the information contained in this book is a compilation of material from the Linux
Handbook, SG24-7000, and Implementing Linux with IBM Disk Storage, SG24-6261-01. We
encourage the reader to refer to these redbooks for more complete information or
implementation details.
Bertrand Dufrasne is a Certified Consulting I/T Specialist and Project Leader for Disk
Storage Systems at the International Technical Support Organization, San Jose Center. He
has worked at IBM for 21 years in many IT areas. Before joining the ITSO he worked for IBM
Global Services in the US as an IT Architect. He holds a degree in Electrical Engineering.
Cristina Zabeu
IBM Linux Storage Solutions Market Leader
Mary T. Morris
IBM WW Linux Sales Leader
Nick Harris, Ralph Cooley, Cameron Hunt, Randy Kuseke, Dan Lacine, Tomomi Takada, Bob
Waite, Dirk Webbeler, and Alexander Zaretsky, original authors of the Linux Handbook,
SG24-7000-00.
Ronald Annuss, James Goodwin, Paul McWatt, and Arwed Tschoeke, original authors of the
redbook Implementing Linux with IBM Disk Storage, SG24-6261-01.
Your efforts will help increase product acceptance and customer satisfaction. As a bonus,
you'll develop a network of contacts in IBM development labs, and increase your productivity
and marketability.
Find out more about the residency program, browse the residency index, and apply online at:
ibm.com/redbooks/residencies.html
Comments welcome
Your comments are important to us!
We want our Redbooks™ to be as helpful as possible. Send us your comments about this or
other Redbooks in one of the following ways:
Use the online Contact us review redbook form found at:
ibm.com/redbooks
Send your comments in an Internet note to:
[email protected]
Mail your comments to:
IBM Corporation, International Technical Support Organization
Dept. QXXE Building 80-E2
650 Harry Road
San Jose, California 95120-6099
Since the Linux source code is freely available, several companies have developed different
distributions of Linux. A distribution is a complete system. The key component is the Linux
kernel. Other utilities, services, and various applications can be included as well, depending
on the distribution and the intended use. There is no standard distribution. Each distribution
that is available has unique advantages.
IBM believes this investment will benefit its customers as they continue to exploit Linux for
their IT infrastructures and e-business.
This chapter provides a brief introduction to Linux, with a historical perspective on its origins
and relationship to open source and the GNU General Public License. The chapter continues
with a high-level description of the Linux operating system and its components, where Linux
fits in the IT world, and Linux distributions.
At the time, AT&T, for various legal reasons, was permitting free academic access to the
source code of UNIX while charging over $20,000 (in 1976 dollars!) for commercial or
government access. AT&T later halted publication of the source code in university texts
because it revealed proprietary Bell Labs code. The era of collaborative programming had arrived.
In the US, the academic variant of UNIX of interest became the Berkeley Software
Distribution (BSD),2 where virtual memory and networking were added. These advancements
permitted large collaborative projects with contributors scattered throughout the world.
Lawsuits eventually ensued among AT&T, the Regents of the University of California, and
other parties over access to and distribution of the OS source code. Such constraints on
intellectual property rights to code provided strong motivation for one researcher from the
Artificial Intelligence Laboratories at the Massachusetts Institute of Technology to write an
operating system that was both portable, and also would be licensed in a manner that would
prevent its eventual constraint by intellectual property claims.
The new OS was to be named GNU, a recursive acronym for “Gnu’s Not UNIX.” This work
would be “copylefted” instead of copyrighted, licensed under the GNU General Public License
(GPL), which stipulates that all programs run under or derived from GNU must have their
source code published freely, relinquishing rights of control while retaining rights of
ownership. This was the birth of free (as in freedom) software, in contrast to software in the
public domain. By this time, vendors such as Sun, Hewlett-Packard, and IBM had proprietary
commercial offerings derived from licensed AT&T UNIX, which were gaining popularity with
corporate customers. The nascent GNU development effort began by making tools such as
editors, compilers, and file utilities available in source form that could be compiled and
executed on any platform, standardizing and improving upon those offered by commercial
vendors. Around 1990, programmers had contributed a nearly complete operating
environment to GNU, with the exception of a kernel. The GNU kernel was to be based on a
micro kernel architecture for improved portability.
In the meantime, a small scale UNIX-like skeleton of an operating system3 called Minix was
published in a text to be used as a teaching tool. It is here that Linus Torvalds enters the
story. He decided to write a UNIX-like OS with improved functionality over that of Minix to run
on readily available personal computers. He and colleague Lars Wirzenius published their
source code under the GPL on the Internet for public comment, and Linux was born.
Linux was a kernel without utilities, GNU was an operating environment lacking a finished
kernel, and unencumbered non-kernel BSD pieces were available to complete the picture. In
short order the components were combined with installation and maintenance tools and
1 Ken Thompson, Dennis Ritchie, and J.F. Ossanna
2 Twenty Years of Berkeley UNIX: From AT&T-Owned to Freely Redistributable, Marshall Kirk McKusick, in Open
Sources: Voices from the Open Source Revolution, O'Reilly, 1999, ISBN 1-56592-582-3
3 Dr. Andrew S. Tannenbaum, Vrije Universiteit, Amsterdam, The Netherlands
The term “open source” began to replace the term “free software” as the commercial adoption
of GNU/Linux grew. There is in fact a difference, upon which hinges the fate of commercial
ventures in this arena.
As described on the GNU Web page, the General Public License promotes free code. It also
provides protection for the developer and prevents a user from altering the code and then
asserting proprietorship over it. This does not mean the code cannot be sold. According to the
GNU Web site, “free software” allows a user to run, copy, distribute, study, change, and
improve the software. It must also be available for commercial use.
IBM has recognized the efficacy of this community and sees the benefit of the rapid and
innovative development of robust and stable code to provide the enabling layer for e-business
applications. As a result of the evolutionary development of Linux, pieces of the code are
located on various Web sites. Without some integration, it is difficult to install and upgrade the
product, keep track of module dependencies and acquire drivers for the hardware.
Distributions provide coherent bundles of software in one location that furnish the capabilities
needed for a usable server or desktop. Generally, the products of different distributors have a
lot in common, but there may be particular features or functions that you require that are not
available from a given distributor.
In some solutions, typically with clusters, the Linux server does not need the traditional PC
hardware BIOS. The hardware is directly controlled by Linux. This provides phenomenal
boot-up times (three seconds is the current record).
Linux has the same multiuser and multitasking capabilities as large UNIX servers. It provides
the same level of system administration that you find on standard UNIX systems. Users can
run several programs concurrently. You can create user accounts for different users and
define their access rights to the files and system components. Installation of new devices, and
network connection and control are also provided as standard in the Linux operating system.
As a development environment, Linux has a powerful set of development tools for creating
industrial-strength applications. The development toolset includes the GNU C Compiler.
The Linux structure provides the ability for programmers to access the hardware of the
computer and the networks to which it is connected. This is achieved by the provision of a
hardware abstraction layer where programs can take advantage of hardware features
through a standard applications programming interface (API).
Linux programs can be portable to other versions of UNIX systems. Linux can use ANSI C,
combined with one of several portable graphical user interface (GUI) toolkits. These
programs can be written for both UNIX systems and Windows servers.
Web serving
The combination of Linux and Apache offers an attractive package for customers. It provides
a low cost, flexible solution for Web servers, with over 30% of the world’s Web sites running
this combination. The demand is now moving toward high capacity Web sites, which users
can interact with and that support high transaction rates.
Router
Linux is capable of advanced routing using inexpensive commodity hardware. Also, some
router vendors have chosen Linux to be their embedded operating system.
In this book we highlight only two Linux distributors. IBM does not favor any specific Linux
distribution; instead, IBM pushes for the standardization of Linux in general.
Red Hat offers support and maintenance services for their enterprise distributions.
SuSE is available in business versions. These include SuSE Linux Enterprise Server (SLES);
servers for zSeries, xSeries, pSeries, and iSeries; and versions that support Alpha,
PowerPC®, and Intel platforms.
SLES is a package distribution of UnitedLinux (see 1.6.3, “UnitedLinux”) intended for server
applications.
1.6.3 UnitedLinux
UnitedLinux is a consortium formed in May 2002, established to combine the development
efforts of several distributors.
The vision is simple: partners combine the very best server operating system technology from
leading distributors into a robust, single code based system. In January 2003 IBM joined
UnitedLinux as a technology partner.
The following UnitedLinux consortium members offer identical UnitedLinux distributions:
SuSE Linux Enterprise Server 8 (SLES 8)
Turbolinux Enterprise Server 8 (TLES 8)
Conectiva Linux Enterprise Edition Powered by UnitedLinux
All three companies provide distributions for Intel (xSeries), iSeries, pSeries, and zSeries.
Turbolinux and Conectiva resell the SuSE distribution. Turbolinux markets primarily in the
Asia Pacific, while Conectiva markets primarily in Latin America. Turbolinux and Conectiva
will provide bug fixes to their customers within 30 days of their release by SuSE. SuSE is the
lead UnitedLinux developer.
Key elements of the UnitedLinux distribution include POSIX standard asynchronous I/O
(AIO); raw I/O enhancements that provide high-bandwidth, low-overhead SCSI disk I/O; and
direct I/O, which moves data directly between the user-space buffer and the device
performing the I/O, avoiding expensive copy operations and bypassing the operating
system's page cache.
The resources portrayed here provide a good, consolidated starting point for more detailed
information. We recommend that you read and learn about these resources to help you stay
current with IBM's Linux commitments and new services, which continue to grow rapidly.
The information in this chapter was correct at the time this book was written. However, some
of the information may change due to normal variations in the life cycle of each product. This
is true for products other than Linux as well. This means that IBM may not support a specific
product even though it may appear to be supported in this chapter. Therefore, we encourage
you to investigate topics such as end-of-service dates, product withdrawals, and other
support restrictions while gathering information from this chapter.
Important: The services portrayed here are for IBM customers only, and may not be
available in some specific countries or regions. Contact your local IBM Global Services
(IGS) representative for details.
Linux support across IBM is growing across the entire product line, with support for
TotalStorage disk and tape. IBM also supports middleware solutions such as DB2®,
WebSphere, Lotus® Domino™, MQSeries®, Tivoli® Storage Manager, and more, and
continues to work closely with many leading Linux distributors.
Centers allow customers and key ISVs to transition their applications to Linux. IBM supports
application development on all IBM servers.
The LTC has a team of more than 250 people, spread across many centers around the
world. These centers are located in Adelaide, Austin, Bangalore,
Beaverton, Beijing, Bethesda, Boeblingen, Boston, Boulder, Cambridge, Canberra, Chicago,
Endicott, Hawthorne, Hursley, Kirkland, Mt. Laurel, New York City, Portland, Poughkeepsie,
Raleigh, Rochester, San Francisco, Somers, Urbana-Champaign, Yamato, and Yorktown.
JFS is a log-based, byte-level file system, which was developed for transaction-oriented, high
performance, high-throughput server environments, and is key to running intranet and other
high-performance e-business file servers. JFS is a scalable and robust file system and
provides fast file system restarts in the event of a system crash.
JFS is currently shipping with many Linux distributions including Red Hat and SuSE Linux.
EVMS
The LTC is also working on volume management technology in the form of the Enterprise
Volume Management System (EVMS), an effort aimed at providing more volume
management capability and interoperability in the kernel.
For more in-depth information regarding Linux application solutions, refer to Linux Handbook -
A Guide to IBM Linux Solutions and Resources, SG24-7000-00.
IBM Global Services translates these advanced technologies into business value for
customers and helps in making information technology (IT) easy to acquire and manage. To
learn more about IBM Global Services, see:
http://www.ibm.com/services/
IBM Global Services is the largest part of IBM with over 140,000 employees in 164 countries
or regions. It is widely recognized as the largest service company in the world.
Recent analyst studies have shown that support is among the most important concerns
related to Linux implementations, for both solution and vendor selection. IBM offers
world-class support for Linux as a standard offering, with several options, including the ability
to customize support according to the skills and experience of the customer's current IT staff.
Support Line provides consistent, cross-platform Linux support for IBM platforms
and Intel/AMD OEM hardware. With this service, customers receive telephone access and
electronic access to IBM services specialists.
IBM has the critical mass to deliver support teams in multiple worldwide locations. It has used
some of its best talent in multiple locations to create the Change team. IBM can draw on the
skills of over 200 key members of the Linux Technology Center found in more than 20
locations worldwide.
IBM is committed to providing the same level of support normally associated with enterprise
computing environments as Linux continues to move into those key business and industry
areas. IBM’s Linux service offerings are designed to help Linux customers achieve better
flexibility and cost-to-benefit rates.
IBM Global Services offers one of the industry’s most comprehensive portfolios of Linux
consultative and support offerings, from planning and design, to implementation and technical
support. IGS also offers a full portfolio of Linux Services and has been doing so since
February 2001. Over 300 IBM consultants skilled in Linux are available worldwide to help
customers design, build, and enhance their Linux solutions.
For information about IBM’s services and support for Linux, refer to the following IBM Linux
Services and support Web sites at:
http://www.ibm.com/services/its/us
2.3.1 WebSphere
Many customers need WebSphere as a front end to their legacy applications. Customers
need to access legacy applications (for example, CICS®) through a browser interface to
provide users (internal or external) with Web-enabled access to multiple existing applications.
They also want solutions that do not require a long development cycle and that support an
open-standards compliant integrated infrastructure. Customers want to leverage existing
transactional applications, database assets, and existing investment in hardware platforms
that have a superior scalability characteristic on which to run their new application.
Every customer with an IBM mainframe capable of having the Integrated Facility for Linux (IFL)
presents a good fit for WebSphere for Linux. IFL-capable systems are the 9672 G Series (G5
and G6), Multiprise® 3000, zSeries 800, and zSeries 900 servers.
IBM provides a variety of services for WebSphere that go from migration to specific training in
all WebSphere products. For WebSphere Software Services, see:
http://www.ibm.com/software/ad/vaws-services/websphere.html
For more information about WebSphere training and technical enablement, see:
http://www.software.ibm.com/wsdd/education/enablement/
IBM also provides support for the wide range of WebSphere products made to fit the Linux
environment. Table 2-1 lists references where you can find support for WebSphere products
that run on Linux.
2.3.3 Tivoli
Customers’ growing use of Linux systems within their Tivoli-managed environments has
extended IBM’s commitment to scale Linux services and support to Tivoli products.
IBM has announced Tivoli Linux enablement since the summer of 2002 for its security
software and Web management products. For more information, see:
http://www.ibm.com/software/tivoli/solutions/linux/
You can access services for Linux supported Tivoli products, which include consulting,
training, and certification on the Tivoli services Web site at:
http://www.ibm.com/software/tivoli/services/
Tivoli’s customer support is quite extensive and complete. Some services are available only
for registered users. You can find Tivoli Customer Support on the Web at:
Table 2-2 IBM Tivoli Security Management product matrix for Linux
Tivoli Access Manager for e-business: Tivoli Access Manager for SuSE SLES7 for zSeries
Table 2-3 IBM Tivoli Storage Management product matrix for Linux
Tivoli Storage Manager:
- TSM 5.1.5 Server for Linux on the x86 platform
- TSM 5.1.5 Clients for Red Hat 7.1 and 7.2 on x86
- TSM 5.1.5 Clients for SuSE 7.1, 7.2, and 7.3 on x86
Table 2-4 IBM Tivoli Configuration Manager product matrix for Linux
IBM Tivoli Configuration Manager:
- TCM Server for SuSE 7.2 on x86
- TCM Endpoint on Red Hat 7.1, 7.2 and SuSE 7.2 on x86
IBM Tivoli Workload Scheduler: TWS V8.1 Job Scheduler Console on Red Hat 7.1 for x86*
The details for each product are provided in the following tables. Table 2-6 lists product details
for Tivoli Monitoring.
Table 2-6 IBM Tivoli Performance and Availability product matrix for Linux
IBM Tivoli Monitoring:
- IBM Tivoli Monitoring Server, Gateway, and Endpoint for Red Hat 7.0 and 7.1 on Intel
- IBM Tivoli Monitoring Server, Gateway, and Endpoint for Turbo Linux 6.1 and 6.5 on Intel
- IBM Tivoli Monitoring Server, Gateway, and Endpoint for SuSE Linux 7.1 and 7.2 on Intel
Table 2-7 outlines the support availability for each Tivoli Enterprise Console component on
Linux.
- Red Hat Linux for Intel 7.1 and 7.2: Event Server, Gateway, Endpoint, UI Server, Event Console, Adapters
- SuSE Linux for Intel 7.0 and 7.1: Event Server, Gateway, Endpoint, UI Server, Event Console, Adapters
- Turbo Linux for Intel 7.0: Event Server, Gateway, Endpoint, UI Server, Event Console, Adapters
Tivoli NetView: Tivoli NetView 7.1.2 for Red Hat or SuSE 7.1 on Intel
Table 2-9 IBM Tivoli Service Level Advisor for Linux product matrix
Tivoli Service Level Advisor:
- Service Level Advisor SLM component for Linux Red Hat or SuSE 7.1 on Intel
- Service Level Advisor Reports Server for Linux Red Hat or SuSE 7.1 on Intel
2.3.4 Lotus
To support and enhance your Notes® and Domino environment, Lotus offers a full range of
professional services, including consulting, education, and customer support. See IBM’s
Lotus Software site for more information:
http://www.lotus.com/lotus/products.nsf/fa_prohomepage
Table 2-10 outlines the Lotus Family services and support matrix for Linux.
The IBM Linux Support Line provides operational support services and a premier remote
technical support service. For more information, see:
http://www-1.ibm.com/services/its/us/supportline.html
IBM provides technical support for the major distributions of the Linux operating system
running on the xSeries, zSeries, iSeries, and pSeries platforms, as well as for some non-IBM
applications that operate in a Linux environment. IBM helps answer how-to questions,
performs problem source determination, and provides mechanisms for a solution. In addition,
by leveraging partnerships with the key distributors of the Linux operating system, IBM
provides defect-level support for the Linux operating system. Remote assistance is available
through toll-free telephone or electronic access depending on the country or region.
IBM provides services for all currently supported xSeries (including BladeCenter™), zSeries,
iSeries, and pSeries platforms, with varying degrees of scope and complexity, through its
Linux Portal. For more information, see:
http://www.ibm.com/linux/
As Linux continues to grow, so will IBM’s commitment to enhance current services or provide
new ones.
With IBM Managed Hosting - Linux virtual services, the customer can tap into managed
server capacity without the up-front expense of buying the physical hardware. Instead of the
physical Web, database, and application servers that businesses currently rely on, virtual
servers on the zSeries running Linux can be leveraged. This means that availability and
reliability gain a boost while IT infrastructure is greatly simplified.
If you are interested in finding out more about this service, refer to:
http://www.ibm.com/services/e-business/hosting/mgdhosting/linux.html
Table 2-11 lists various resources for platform-based information about services that are
available for the zSeries server.
Table 2-12 lists various references for platform-based information about services that are
available for the iSeries server.
IBM provides hardware and software support service for the new pSeries 630 6C4 and 630
6E4 Linux-ready express configurations that are now available.
IBM will provide more services for Linux for pSeries in response to growing interest from
customers to use Linux as their native operating system in this platform.
For more information, see the IBM Linux for pSeries Web site at:
http://www.ibm.com/servers/eserver/pseries/linux/index.html
Table 2-13 lists various references that provide platform-based information about services
that are available for the xSeries server.
Table 2-14 IBM Web site matrix for each Linux platform
zSeries http://www.ibm.com/servers/eserver/zseries/os/linux/index.html
iSeries http://www.ibm.com/servers/eserver/iseries/linux/index.html
xSeries http://www.ibm.com/servers/eserver/xseries/linux/index.html
pSeries http://www.ibm.com/servers/eserver/pseries/linux/index.html
Figure 3-1 The concept of xSeries: Learning the best from all IBM eServer product lines
While the xSeries server delivers advanced technology to customers at a low price, it also
supports many industry-standard applications and services. In the autumn of 2001, IBM
announced the Enterprise X-Architecture. This technology raises the bar on scalability,
flexibility, availability, and performance through dynamic arrangement of server resources.
IBM delivers Enterprise X-Architecture through core logic developed by IBM and realized in
the IBM XA chip sets for the IA-32 and IA-64 processors.
Operating system support for xSeries changes frequently. This information is available at the
IBM ServerProven® site on the Web at:
http://www.pc.ibm.com/us/compat/nos/matrix.shtml
You can find the drivers and documents for xSeries on the Web at:
http://www.ibm.com/pc/support/site.wss
Today, IBM continues to build on the X-Architecture blueprint with Enterprise X-Architecture
technologies. These technologies yield revolutionary advances in the input/output (I/O),
memory, and performance of xSeries servers. This peerless new server design creates a
flexible “pay as you grow” approach to buying high-end 32-bit and 64-bit xSeries servers. The
results are systems that can be scaled quickly, easily, and inexpensively.
New tools make systems management easier than ever before. With self-diagnosing and
self-healing technologies, such as Active PCI-X and third-generation Chipkill memory,
systems are designed to stay up and running continuously. The xSeries server provides high
availability and exceptional performance for systems that need to be scaled quickly, easily,
and inexpensively. Table 3-2 shows the Linux support that is available for Enterprise
X-Architecture.
Both Red Hat Enterprise Linux and SuSE Linux Enterprise Server provide enterprise class
features that enable Linux-based solutions to be deployed across the widest range of
enterprise IT environments.
On the xSeries pages, you will find the latest drivers and firmware releases, as well as
documentation about installation for specific models.
Clusters of computers must be somewhat self-aware, that is, the work being done on a
specific node often must be coordinated with the work being done on other nodes. This can
result in complex connectivity configurations and sophisticated inter-process communications
between the nodes of a cluster. In addition, the sharing of data between the nodes of a cluster
through a common file system is almost always a requirement. There are many other
complexities that are introduced by clusters, such as the operational considerations of
dealing with a potentially large number of computers as a single resource.
http://www.pc.ibm.com/ww/eserver/xseries/clustering/info.html
It should be noted that the boundaries between these cluster types are somewhat indistinct,
and often an actual cluster may have properties or provide the function of more than one of
these cluster types.
High availability
High-availability clusters are typically built with the intention of providing a fail-safe
environment through redundancy, that is, provide a computing environment where the failure
of one or more components (hardware, software, or networking) does not significantly affect
the availability of the application or applications being used. In the simplest case, two
computers may be configured identically with access to shared storage. During normal
operation, the application environment executes on one system, while the other system
simply stands by, ready to take over running the application in the case of a failure. When a
failure occurs, the standby system takes over.
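The active/standby behavior described above can be sketched in a few lines; the node names and the class below are purely illustrative, not part of any clustering product:

```python
# Minimal sketch of an active/standby failover pair sharing storage.
# One node runs the application; the other waits to take over.

class FailoverPair:
    def __init__(self, active="node1", standby="node2"):
        self.active = active      # currently runs the application environment
        self.standby = standby    # identically configured, shares the storage

    def heartbeat_lost(self):
        """The standby detects the active node's failure and takes over."""
        self.active, self.standby = self.standby, self.active
        return self.active

pair = FailoverPair()
new_active = pair.heartbeat_lost()   # node1 fails; node2 takes over
```

Real high-availability products add fencing and heartbeat networks on top of this basic swap, but the essential state transition is the same.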
1. T.L. Sterling, J. Salmon, and D.J. Becker. How to Build a Beowulf: A Guide to the Implementation and Application of PC Clusters. MIT Press, Cambridge, MA, 1999.
The goal is to provide the image of a single system by managing, operating, and coordinating
a large number of discrete computers.
Horizontal scaling
Horizontal scaling clusters are used to provide a single interface to a set of resources that can
arbitrarily grow (or shrink) in size over time. The most common example of this is a Web
server farm. In this example, a single interface (URL) is provided, but incoming requests are
distributed across the servers in the farm.
Of course, this kind of cluster also provides significant redundancy. If one server out of a
large farm fails, it will likely be transparent to the users. Therefore, this model also has many
of the attributes of a high-availability cluster. Likewise, because of the work being shared
among many nodes, it also is a form of high-performance computing.
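The request distribution and transparent redundancy described above can be sketched as follows; the server names and the dispatcher class are illustrative, not an actual load-balancer API:

```python
# Sketch of a Web server farm behind a single interface: requests are
# dispatched round-robin, and failed nodes are skipped transparently.

class WebFarm:
    def __init__(self, servers):
        self.servers = list(servers)
        self.healthy = set(servers)   # nodes currently able to serve
        self.next = 0

    def dispatch(self):
        """Pick the next healthy server round-robin, routing around failures."""
        for _ in range(len(self.servers)):
            server = self.servers[self.next % len(self.servers)]
            self.next += 1
            if server in self.healthy:
                return server
        raise RuntimeError("no healthy servers in the farm")

farm = WebFarm(["web1", "web2", "web3"])
first = farm.dispatch()           # "web1"
farm.healthy.discard("web2")      # one node in the farm fails...
second = farm.dispatch()          # ...users are routed around it: "web3"
```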
IBM provides installation support for the Cluster 1350. If an even higher level of support is
desired, the optional Support Line for Linux Clusters is staffed by experts who understand the
entire cluster environment, not just individual components. And IBM can provide project
management support to coordinate all aspects of delivery and installation, including hardware
and software setup services.
Cluster 1350 nodes significantly reduce the number of cables needed in each system, helping
to speed upgrades while lowering costs. In addition, an integrated systems management
processor enables CSM to remotely manage the system nodes for enhanced server
productivity. System administrators specify which events to monitor and what actions to take
in the event of memory, processor, hard drive, fan, or power issues.
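The event/action model that administrators configure can be sketched as follows; the event names and actions below are hypothetical examples, not CSM syntax:

```python
# Sketch of event monitoring: administrators register which events to
# watch and what action each one triggers on a node.

actions_taken = []
monitors = {
    "fan_failure": lambda node: actions_taken.append(f"page on-call: fan on {node}"),
    "disk_predictive_failure": lambda node: actions_taken.append(f"open ticket: disk on {node}"),
}

def handle_event(event, node):
    """Run the administrator-defined action for a monitored event, if any."""
    action = monitors.get(event)
    if action:
        action(node)
        return True
    return False   # events nobody registered for are ignored

handled = handle_event("fan_failure", "node07")
ignored = handle_event("cpu_temp", "node07")
```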
Standard configurations of the Cluster 1350 include a management node, up to 512 cluster
nodes, and up to 32 optional storage nodes that provide shared file storage. Larger
non-standard configurations are available by special bid. Each Cluster 1350 includes a
management Ethernet VLAN for secure internode communications, a cluster Ethernet VLAN
for application internode communication, and a terminal server network. The cluster comes
standard with one 10/100 Mbps Ethernet switch for the management VLAN, and a choice of
the 10/100 Mbps Ethernet switch, Gigabit Ethernet switch, or Myrinet-2000 switch for the
cluster VLAN.
3.5 Consolidation
There are four types of workload consolidation (also referred to as server consolidation).
Each type offers significant benefits in the following areas:
Reduced administrative costs because of central management
Better management of system proliferation and more consistent architecture
Management of purchasing to achieve volume purchasing discounts
Consistent process for security, operating system levels, and updates
This chapter continues with an overview of the IBM TotalStorage FAStT Storage Server
family of products, including a description of some of the Storage Manager premium features.
The entire product family is discussed in the redbook The IBM TotalStorage Solutions
Handbook, SG24-5250. Further details of IBM’s TotalStorage products, software and
solutions, including Linux support, can be found at the following Web sites:
http://www.storage.ibm.com
http://www.storage.ibm.com/linux
IBM continues to support Linux across all its server platforms and leads the industry in
storage networking based on open, industry standards for heterogeneous platforms.
IBM is the leader in delivering world-class disk and tape systems, storage management
software, services, and integrated solutions, and incorporates the following:
Highly scalable storage - Enables administrators to manage growth and quickly respond
to changing business needs, with ease of deployment and configuration
High availability and fault-tolerant storage - Provides continuous and reliable access to
data using technologies like RAID and clustering
Improved data management - Helps administrators better control the security and growth
of their data
Increased storage utilization - Allocates storage through a centrally managed pool of
storage
Reduced administrative costs - Manage additional storage without having to add staff, and
with IBM storage management tools, the ability to perform quicker problem resolution
Platform independence - Enables sharing of data and possibility of simplification of
backup procedures
Centrally managed - Enables administrators to quickly respond to changing business
needs, with ease of deployment and configuration
IBM storage on Linux provides pre-tested combinations of disk (high-end and mid-range),
tape and network storage and the major Linux distributions, and a clear strategy of deploying
Linux in storage.
IBM storage on Linux also provides alignment with IBM servers and software to offer the
most appropriate combinations to serve a wide variety of customer needs, all exploiting the
benefits of open source and heterogeneous environments made possible by Linux.
Built on open standards, demonstrating IBM's intention to support Linux on all its products - Allows IBM to bring a full array of products to the Linux environment, giving you variety of choice in selecting the most appropriate storage for your needs while limiting investment in proprietary infrastructures.
Pre-tested, documented, and supported Linux storage configurations (disk, tape, and networked storage) - Expedites implementation of IT infrastructures with superior reliability and support, taking the guesswork out of choosing storage for your IT needs.
Wide variety of choices of Linux distributions and storage attachments - Protects investment in IT infrastructure and provides freedom of choice for your preferred Linux distribution.
IBM TotalStorage’s major advantage is its ability to offer complete storage solutions in a Linux
environment. IBM’s statement of support is also more robust than that of the competition,
encompassing our IBM eServer software and storage products. Linux is part of IBM storage’s
core development, and Linux support for our advanced functions is growing.
For some of the limitations of Linux and the work-arounds, refer to Appendix B, “FAStT
Management Suite Java” on page 75.
The IBM TotalStorage Proven program builds on IBM’s already extensive interoperability
efforts to develop and deliver products and solutions that work together with third party
products. Under the Storage Proven program, IBM will continue its work with hardware
vendors, ISVs, and solution developers to test their products on IBM’s extensive line of
storage products.
You can find an updated list of companies that have tested their products with IBM storage in
a Linux environment at:
http://www.ibm.com/totalstorage/proven/solutions.htm
Any model can be attached to any xSeries server running specific versions of Red Hat or
SuSE Linux.
(Figure: positioning of the FAStT family by redundancy and availability, with the FAStT 900 and its dual 2 Gb controllers at the high end.)
Note: IBM TotalStorage FAStT 500 Storage Server has been withdrawn from marketing
and is only included as a comparison.
Product highlights
Provides an affordable RAID storage solution for workgroup and departmental servers
Fully integrates Fibre Channel technology, from host attachment to disk drives
Helps ensure high availability by using redundant, hot-swappable components
With the High Availability model, supports transparent failover with dual-active RAID
controllers
Provides a wide range of data protection options with RAID levels 0, 1, 3, 5, and 10
Note: The IBM TotalStorage FAStT500 Storage Server has been withdrawn from
marketing.
Product highlights
Data protection with dual redundant components, multiple RAID levels, LUN masking, and
enhanced management options
Storage consolidation for SAN, NAS, and direct-attach environments
The FAStT600 Turbo is a mid-level storage server that can scale to over 16 TB, facilitating
storage consolidation for medium-sized customers. It uses the latest in storage networking
technology to provide an end-to-end 2 Gbps Fibre Channel solution. As part of the IBM FAStT
family, the Model 600 with Turbo uses the same common storage management software and
high-performance hardware design, providing customers with enterprise-like capabilities
found in high-end models, but at a much lower cost.
The EXP700 Expansion Unit supports four new, slim-profile, dual-loop, hot-swappable disk
drive modules:
The four new 2 Gb Fibre Channel disk modules can also be used with the FAStT EXP500
Storage Expansion Unit (3560-1RU/1RX) and with the FAStT200 storage server (3542)
operating at 1 Gb.
A new PCI-X/133 MHz, 2 Gbps Fibre Channel Host Bus Adapter is being introduced for
attaching FAStT storage servers to IBM eServer xSeries and other Intel-based servers. The
IBM TotalStorage FAStT FC2-133 Host Bus Adapter is a 64-bit, low-profile adapter that
supports auto-sensing for 1 Gbps or 2 Gbps operation on point-to-point, Fibre Channel
Arbitrated Loop (FC-AL-2), and switched fabric topologies with Fibre Channel SCSI and IP
protocols.
Note: Premium Features are a function of the IBM TotalStorage FAStT Storage Server
firmware, and are usable from Linux attached hosts in the same manner as they would
be from a host that is running any of the supported host operating systems.
Intel processor-based host systems connected to the FAStT Storage Server usually run an
operating system with limited storage handling capabilities. Most of these operating systems
can only treat the storage as if it was locally attached to the host system. Two or more
individual host systems or clusters cannot access the same storage space, at least not
without disastrous results, unless third-party file sharing software is used. This is in conflict
with the idea of SAN, where the storage is supposed to be globally accessible to many host
systems.
Without storage partitioning, the logical drives configured on a FAStT Storage Server can
only be accessed by a single host system or by a single cluster. This can lead to inefficient
use of storage server hardware.
A host group is a collection of hosts that are allowed to access the same logical drives, for
example a cluster of two systems.
A host port is the FC port of the Host Bus Adapter in the host system. The host port is
identified by its world-wide name (WWN). A single host can contain more than one host port.
If you attach the servers in a redundant way (highly recommended), each server needs two
Host Bus Adapters. That is, it needs two host ports within the same host system.
The FAStT Storage Server only communicates through the use of the WWN. The storage
subsystem is not aware of which Host Bus Adapters are in the same server or in servers that
have a certain relationship, such as a cluster. The host groups, the hosts, and their host ports
actually reflect a logical view of the physical connections of your SAN as well as the logical
connection between servers, such as clusters.
With the logical setup defined previously, mappings are specific assignments of logical drives
to particular host groups or hosts.
The storage partition is the combination of all these components. It ensures proper access to
the different logical drives even if there are several hosts or clusters connected.
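As an illustration, the relationships above (host ports grouped into hosts, hosts into host groups, and logical drives mapped to groups) can be modeled in a few lines; the WWNs and names are invented for the example:

```python
# Sketch of storage partitioning objects: the controller sees only WWNs,
# so access to a logical drive is resolved from port -> host -> group -> mapping.

host_groups = {
    "cluster1": {                                  # hosts allowed the same drives
        "serverA": ["10:00:00:00:c9:2b:00:01",     # two HBAs per host, attached
                    "10:00:00:00:c9:2b:00:02"],    # redundantly
        "serverB": ["10:00:00:00:c9:2b:00:03",
                    "10:00:00:00:c9:2b:00:04"],
    },
}
mappings = {"cluster1": ["logical_drive_0", "logical_drive_1"]}

def drives_for_wwn(wwn):
    """Resolve a host port's WWN to the logical drives its group is mapped to."""
    for group, hosts in host_groups.items():
        for ports in hosts.values():
            if wwn in ports:
                return mappings.get(group, [])
    return []   # unknown WWN: no access to any mapped drive

accessible = drives_for_wwn("10:00:00:00:c9:2b:00:03")   # serverB's first HBA
```

Note how the model mirrors the text: the subsystem never knows which HBAs share a server; the grouping is a purely logical view that the administrator maintains.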
FAStT Storage Manager enables the FAStT Storage Servers to support up to 64 storage
partitions. For the maximum number of storage partitions for each FAStT model, see
Figure 4-2.
FAStT 200: 16
FAStT 700: 64
FAStT 900: 64
Note: On some FAStT models, the number of partitions also depends on the feature
licenses that have been purchased.
You can also create several FlashCopies of a base logical drive and write data to the FlashCopy
logical drives in order to perform testing and analysis. Before upgrading your database
management system, for example, you can use FlashCopy logical drives to test different
configurations. Then you can use the performance data provided by the storage management
software to help you decide how to configure your live database system.
When you take a FlashCopy, the controller suspends I/O to the base logical drive for a few
seconds while it creates a physical logical drive called the FlashCopy repository logical drive
to store FlashCopy metadata and copy-on-write data. When the controller is finished creating
the FlashCopy repository logical drive, I/O write requests to the base logical drive can
continue. However, before a data block on the base logical drive is modified, a copy-on-write
occurs, copying the contents of blocks that are to be modified into the FlashCopy repository
logical drive for safekeeping. Since the FlashCopy repository logical drive stores copies of the
original data in those data blocks, further changes to those data blocks write directly to the
base logical drive without another copy-on-write. And, since the only data blocks that are
physically stored in the FlashCopy repository logical drive are those that have changed since
the time of the FlashCopy, the FlashCopy technology uses less disk space than a full physical
copy.
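The copy-on-write mechanism just described can be sketched as follows; the class and method names are illustrative, not Storage Manager interfaces:

```python
# Sketch of FlashCopy copy-on-write: the repository stores only the
# original contents of blocks changed after the snapshot was taken.

class FlashCopy:
    def __init__(self, base):
        self.base = base            # base logical drive: block -> data
        self.repository = {}        # copy-on-write data for changed blocks

    def write_base(self, block, data):
        """First write to a block since the FlashCopy preserves the original."""
        if block not in self.repository:
            self.repository[block] = self.base[block]   # copy-on-write
        self.base[block] = data     # later writes go straight to the base drive

    def read_snapshot(self, block):
        """Check the map: read from the repository if the block has changed."""
        return self.repository.get(block, self.base[block])

base = {0: "a", 1: "b", 2: "c"}
snap = FlashCopy(base)
snap.write_base(1, "B")             # copy-on-write preserves "b"
snap.write_base(1, "BB")            # second write: no further copy needed
point_in_time = snap.read_snapshot(1)   # still "b"
```

Because only changed blocks land in the repository, the snapshot consumes far less space than a full physical copy, exactly as the text describes.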
When you create a FlashCopy logical drive, you specify where to create the FlashCopy
repository logical drive, its capacity, and other parameters. You can disable the FlashCopy
when you are finished with it, for example after a backup completes. Then, you can re-create
the FlashCopy the next time you do a backup and reuse the same FlashCopy repository
logical drive. Using the Disable FlashCopy and Re-create FlashCopy pull-down menu options
provides a shortcut to creating a new FlashCopy logical drive of a particular base logical drive
because you do not need to create a new FlashCopy repository logical drive.
(Figure: FlashCopy I/O behavior. Reads directed at the base logical drive go straight to the base drive. Reads directed at the FlashCopy logical drive check the map and are satisfied from the base drive, or from the repository if the data has been changed. The repository holds the copy-on-write data, overwritten data, and free space.)
You can also delete a FlashCopy logical drive, which also deletes the associated FlashCopy
repository logical drive. The storage management software provides a warning message
when your FlashCopy repository logical drive nears a user-specified threshold (a percentage
of its full capacity, the default is 50%). When this condition occurs, you can use the storage
management software to expand the capacity of your FlashCopy repository logical drive from
free capacity on the array. If you are out of free capacity on the array, you can even add
unconfigured capacity to the array in order to expand the FlashCopy repository logical drive.
Note: FlashCopy is a premium feature. Contact your IBM reseller or IBM marketing
representative for more information.
(Figure: Remote Mirror configuration, with two storage arrays connected through a Fibre Channel SAN fabric.)
The mirroring is synchronous. The write must be completed to both volumes before the I/O is
complete.
A minimum of two arrays is required. One storage system can have primary volumes being
mirrored to other arrays, and hold secondary volumes from multiple arrays.
The maximum number of storage subsystems that can participate in a remote mirror
configuration is two. The two storage subsystems are called the primary and secondary
storage subsystems, or the local and remote storage subsystems. These names are used
interchangeably to describe remote mirror setups or concepts; they do not necessarily
indicate the physical location of the storage subsystems or the role a storage subsystem has
in a remote mirror relationship.
Note: Remote Mirror is a premium feature. Contact your IBM reseller or IBM marketing
representative for more information.
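A minimal sketch of the synchronous write semantics described above, with plain dictionaries standing in for the primary and secondary volumes (this is a conceptual model, not a Remote Mirror API):

```python
# Synchronous mirroring: the host's write is acknowledged only after it
# has completed on both the primary and the secondary volume.

def mirrored_write(primary, secondary, block, data):
    """Write to both volumes; the I/O completes only when both succeed."""
    primary[block] = data
    secondary[block] = data      # synchronous: wait for the remote copy too
    return primary[block] == secondary[block]   # then acknowledge the host

local, remote = {}, {}
acknowledged = mirrored_write(local, remote, 0, "payload")
```

The cost of this scheme is that every write incurs the latency of the remote link, which is why synchronous mirroring is usually limited to relatively short distances.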
This feature can be used to copy data from arrays that use smaller capacity drives to arrays
that use larger capacity drives, to back up data, or to restore FlashCopy logical drive data to
the base logical drive. This premium feature includes a Create Copy Wizard to assist in
creating a logical drive copy.
Migration
As your storage requirements for a logical drive change, the VolumeCopy premium feature
can be used to copy data to a logical drive in an array that utilizes larger capacity disk drives
within the same storage subsystem. This provides an opportunity to move data to larger
drives (for example, 73 GB to 146 GB), change to drives with a higher data transfer rate (for
example, 1 Gb/s to 2 Gb/s), or to change to drives using new technologies for higher
performance.
Backing up data
The VolumeCopy premium feature allows you to create a backup of a logical drive by copying
data from one logical drive to another logical drive in the same storage subsystem. The target
logical drive can be used as a backup for the source logical drive, for system testing, or to
back up to another device, such as a tape drive.
Prior to implementation, all firmware and BIOS levels must be verified and updated if
necessary. To get the latest information and downloads visit the following site:
http://www.pc.ibm.com/support/
On the xSeries pages, you will find the latest drivers and firmware releases as well as
documentation about installation for specific models.
Note: For step-by-step instructions on how to install Linux and configure an xSeries
system for attachment to FAStT, refer to Chapter 5 of the IBM Redbook: Implementing
Linux with IBM Disk Storage, SG24-6261-01.
Red Hat and SuSE also maintain lists of hardware tested and supported with their respective
distributions of Linux, at:
http://hardware.redhat.com/hcl
http://www.suse.com/us/business/certifications/certified_hardware
For attaching to external storage, you will need a Host Bus Adapter. You can find the list of
supported adapters by going to:
http://www.storage.ibm.com
These are described in Chapter 5, “IBM FAStT Management software” on page 51.
Note: Linux supports a maximum of 128 logical drives with 32 LUNs per storage partition.
When using two adapters, it is necessary to hide one path to the LUN from Linux. This is
necessary because, like most operating systems, Linux does not support multipathing by
itself. At boot time, the adapters negotiate the preferred path, which will be the only visible
one. However, with the management tool, you can configure the paths explicitly instead of
accepting the results of FC protocol negotiation and driver design. Additionally, you can
choose which LUNs are visible to the OS. Static load balancing is also included, which
distributes access to the individual LUNs across several paths.
This is the function of the IBM FAStT Management Suite Java (FAStT MSJ) Diagnostic and
Configuration Utility. FAStT MSJ is an application designed for monitoring and configuring a
SAN environment, specifically one built on IBM Fibre Channel components.
Together with HBA components, storage devices, and host systems, this application helps
complete a Storage Area Network.
FAStT MSJ is a network-capable (client/server) application that can connect to and configure
remote Windows NT, Linux, or Novell NetWare systems. The networking capability of the
application allows for centralized management and configuration of the entire SAN.
With the application, you can use the following four types of operations to configure devices in
the system:
Disable (un-configure) a device on a Host Bus Adapter
When a device is set as un-configured by the utility, it is not recognized by the HBA and is
inaccessible to that HBA on that system.
Enable a device
To add a device and make it accessible to the HBA on the system
Designate a path as preferred or alternate
When a LUN is accessible from more than one adapter in the same system, one path can
be assigned as the preferred path and the other one as the alternate path. If for some
reason the preferred path fails, the system switches to the alternate path to assure that the
transfer of data is not interrupted.
Replace a device that has been removed with a device that has been inserted
In a hot-plug environment, the IBM driver does not automatically purge a device that has
been physically removed. Similarly, it does not delete a device that is no longer accessible
because of errors or failure. Internally, the driver keeps the device in its database and
marks it as invisible.
FAStT MSJ provides the function to delete the removed device data from the driver’s
database and to assign the inserted device to the same slot as the one that it replaces.
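The preferred/alternate path scheme, including the static load balancing mentioned earlier, can be sketched as follows; the adapter names and the class are illustrative, not FAStT MSJ interfaces:

```python
# Sketch of preferred/alternate paths: each LUN gets a preferred path and
# an alternate. If the preferred path fails, I/O switches to the alternate
# so data transfer is not interrupted. Static load balancing alternates
# the preferred adapter from LUN to LUN.

class MultipathLun:
    def __init__(self, lun, preferred, alternate):
        self.lun = lun
        self.preferred, self.alternate = preferred, alternate
        self.failed = set()        # adapters currently failed

    def active_path(self):
        """Route I/O over the preferred path, falling back to the alternate."""
        if self.preferred not in self.failed:
            return self.preferred
        if self.alternate not in self.failed:
            return self.alternate
        raise RuntimeError(f"no path available to LUN {self.lun}")

# Static load balancing: alternate the preferred adapter per LUN.
luns = [MultipathLun(i, *(("hba0", "hba1") if i % 2 == 0 else ("hba1", "hba0")))
        for i in range(4)]
paths = [lun.active_path() for lun in luns]   # access spread across both HBAs
luns[0].failed.add("hba0")                    # preferred path fails for LUN 0...
failover = luns[0].active_path()              # ...I/O switches to the alternate
```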
The IBM FAStT Host Bus Adapters (2200 and above) support full-duplex mode, which
enables loopback capability.
Figure 5-2 Typical view from the IBM FAStT Management Suite
It provides an intuitive interface and powerful point-and-click functionality. Through its
graphical display of the complete SAN, administrators of all levels can manage their networks
with ease.
With SANavigator you can monitor the health of your SAN and identify potential problems that
may impact the performance and the availability of your SAN.
Along with the available problem determination maps (PD maps), SANavigator greatly
facilitates problem isolation and repair of your SAN.
5.1.4 FAStT Service Alert, a support option for FAStT Storage Servers
FAStT Service Alert is available to all current and new FAStT Storage Server customers.
FAStT Service Alert (hereafter called Service Alert) enables the IBM TotalStorage FAStT
Storage Manager to monitor system health and automatically notify the IBM Support Center
when problems occur. Service Alert sends an e-mail to an IBM call management center that
identifies your system and location details such as your phone number. The IBM support
center analyzes the contents of the e-mail alert and contacts you to begin problem
determination. The service is available worldwide.
In-band
The Storage Manager client communicates with an agent process over either a physical or
loopback network connection. The agent process runs on one or more of the systems
connected through Fibre Channel to the FAStT storage subsystem. The agent process
communicates with the storage controllers over the Fibre Channel by means of a special
interface in the controller known as an Access LUN. This method is also known as indirect
control because of the intermediary agent process. No connection is required to the network
ports on the FAStT controllers.
Out-of-band
The Storage Manager client communicates over physical network connections to the FAStT
controllers. The Access LUN is not used. This method is also known as direct control since no
additional agent process is required, and no control communication takes place over the Fibre
Channel. The controllers' default addresses are 192.168.128.101 and 192.168.128.102.
Linux communicates with the FAStT out-of-band, so if these IP addresses do not fit your
network, they will need to be changed.
Use a terminal program such as minicom under Linux or HyperTerminal under Microsoft
Windows.
If using minicom, press Ctrl-A F to send a break. If using HyperTerminal, press Ctrl-Break
every 5 seconds until the ASCII characters become human readable.
The shell prompts you for a password (Figure 5-3). The default is infiniti.
1 Supported systems include Linux, AIX, Windows, Solaris, Novell NetWare, and HP-UX.
Type netCfgSet to change the settings. Change only My IP Address and Subnet Mask. Press
Enter to bypass the other settings.
Important: Do not change any other settings unless advised to do so by IBM Level 2
Technical Support. Any other alterations will place the FAStT in an unsupported
configuration.
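A session on the controller shell might look something like the following. This is a hypothetical transcript: prompts and field labels vary with the controller firmware level, and the new IP address shown is an example only. Press Enter at any field you do not want to change.

```
-> netCfgSet
  My Host Name       : FAStT_A          <Enter>
  My IP Address      : 192.168.128.101  10.1.1.101
  Subnet Mask        : 255.255.255.0    255.255.0.0
  Gateway IP Address : 0.0.0.0          <Enter>
->
```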
For this discussion we assume that the environment is purely Linux, and that out-of-band
control will be used to manage the FAStT.
In our implementation, the FAStT Storage Manager client resides on a Linux management
station that can communicate with the FAStT controllers through the network. Such a
management station may also be a host that is attached to the FAStT storage through Fibre
Channel, but this is not required. To be clear, in the Linux environment, there is no Storage
Manager agent available. So even if the Storage Manager client is installed on a host that is
attached through Fibre Channel to the storage, the management still takes place out-of-band
and requires network connectivity to the FAStT controllers. Both the client and the runtime
environment are provided on the FAStT CD, but the latest versions can be obtained from the
following URL (as can the latest firmware for the FAStT controllers and expansion modules;
see Appendix A.3, “Updating the FAStT firmware” on page 63 for detailed instructions on
updating firmware):
http://www.storage.ibm.com
Read and accept the terms and conditions, and you now see a selection of both software and
documentation. Please obtain and read the documentation entitled “IBM TotalStorage FAStT
Storage Manager Version X.Y Installation and Support Guide for Linux” (where X.Y
corresponds to your version number). Use your browser’s “back” button and then select
LINUX operating system; this action takes you to the download page for the code and the
README file. Please look at the README file before installing, as it contains the latest
information for the code release.
The package is rather large (over 30 MB) and requires another 70 MB to install, including the
Java Runtime Environment; make sure you allow enough room for unpacking the code,
storing the rpm files, and the installed code. Also, please note that the Storage Manager
client GUI requires that the X Window System be running on the management workstation.
You need to have root privilege to install the client. Site administrators may make their own
decisions regarding access control to the Storage Manager client. The FAStT itself can be
protected by password from changes to the storage configuration independently of access to
the Storage Manager client.
First, uninstall any previous version of Storage Manager before installing the new version. To
check for prior installations, query the packages that are installed and select only those
containing the letters “SM” with the following command3:
rpm --query --all | grep SM
Use the output from this command to remove any prior version of the Storage Manager
software by package name. For example:
rpm --uninstall SMruntime-XX.YY.ZZ-1
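The query and removal steps can be combined into a small script. This is only a sketch: the package names and version numbers below are made up for illustration (they stand in for real `rpm --query --all` output), and the script prints the removal commands rather than executing them, so they can be reviewed first.

```shell
#!/bin/sh
# Simulated output of "rpm --query --all"; on a real system, pipe the
# actual rpm output instead of this sample list.
all_packages='SMruntime-08.30.01-1
SMclient-08.30.02-1
kernel-2.4.9-e.12'

# Select the Storage Manager packages and print the matching removal
# commands (echo rather than execute, for safety).
printf '%s\n' "$all_packages" | grep SM | while read -r pkg; do
    echo "rpm --uninstall $pkg"
done
```

Once you are satisfied the list is correct, the `echo` can be dropped to perform the actual removal.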
When you have obtained the latest version of Storage Manager, extract the contents of the
gzipped tar file:
tar zxvf *.tgz
The required rpm files for installation are extracted into ./Linux/SMXclientcode where
X=Storage Manager major version.
The Storage Manager software is installed in /opt/IBM_FAStT, with the client and command
line interface, SMcli, installed in the /opt/IBM_FAStT/client directory.
2 Navigation details may change from time to time; these instructions were correct as of April 2003.
3 We have used the long option names for clarity. Experienced users will know of shorthand versions of the options.
Please see the rpm documentation (man rpm) for more information.
4 You may need to add the directory to your path. Otherwise, type the full path to SMclient.
Select Edit -> Add Device from the menu. On the next screen, enter a Hostname or IP
Address for each controller blade in the FAStT (Figure 5-7).
Enter a Hostname or IP Address you set on the FAStT and click Add. Repeat the process for
the second controller blade.
Highlight Storage Subsystem <unnamed> and select Tools -> Manage Device.
If there was no configuration previously on the storage, you should see a Subsystem
Management screen similar to Figure 5-8.
This extracts the firmware and NVSRAM files for all of the current models of FAStT
controllers to the following folders:
./CONTROLLER CODE/FIRMWARE/<controller model>
./CONTROLLER CODE/NVSRAM/<controller model>
To update the controller firmware select Storage Subsystem -> Download -> Firmware.
The screen shown in Figure 5-9 displays. Enter the location of the update file and click OK
(Figure 5-9).
You now see a screen similar to Figure 5-10. This graphic persists until the firmware is
transferred to the controllers and updated, which may take a few minutes.
Next, update the NVSRAM. Select Storage Subsystem -> Download -> NVSRAM. Enter
the location of the NVSRAM update file and click Add (Figure 5-11).
Confirm both controllers are at the same level by selecting View -> System Profile.
Now that the controllers have been updated, we can use the Storage Manager to prepare the
storage for attachment to the host systems.
Highlight the Unconfigured Capacity and select Logical Drive -> Create from the menu.
Select Linux from the Default Host Type screen and click OK (Figure 5-13).
The Create Logical Drive Wizard starts. The wizard takes you through the stages of creating
arrays and logical drives. The first step is to select from the following options:
Free capacity on existing arrays
Unconfigured capacity (create new array)
As we have no existing arrays at this stage, we select Unconfigured capacity and click Next
(Figure 5-14).
Next, select a RAID level, and the number of drives to be included in the array. You also get
the following choices:
Automatic - select from list of provided capacities/drives
Manual - select your own drives to obtain capacity
In most cases we recommend that you select Automatic, because the FAStT allocation
heuristic does a reasonable job of distributing I/O traffic across the available resources.
However, manual selection may be preferable if there are only a few physical disks to
allocate, or if other special circumstances warrant it. For this example, select Automatic and
click Next (Figure 5-15).
You can then specify advanced logical drive parameters. You can set the drive I/O
characteristics, preferred controller ownership, and drive-to-LUN mapping.
Make any changes to suit your environment. Logical drive-to-LUN mapping should be left at
Map later with Storage Partitioning. Click Finish (Figure 5-17).
Note: Carefully consider any changes, as they could dramatically affect the performance of
your storage.
The next screen (Figure 5-18) asks Would you like to create another logical drive? and
gives you the following options:
Same array
Different array
In our case, we select Different array to create a RAID5 array and then Same array to create
four logical drives within it (Figure 5-19).
The systems involved in storage partitioning should be up and running in order to get the Host
Port Identifiers. You should not use the Default Group for security reasons, but you should
not delete this group.
Select the Mappings View tab in the Subsystem Management window. This displays the
Mappings Startup Help message. Read the text and close the window (Figure 5-20).
Create a group for your storage partitioning. Select Mappings -> Define -> Host Group.
Enter a name and click Add (Figure 5-21).
Highlight the new Host Group and select Mappings -> Define -> Host. Enter a host name
and click Add. Repeat this for each system that is to be part of this group. We are setting
up a High Availability cluster so we have two systems in the group (Figure 5-22).
Highlight the first Host in the new group. Select Mappings -> Define -> Host Port. Select the
Host Port Identifier (collected earlier in the Fibre HBA BIOS setup), which matches the first
card in this system. Select Linux as the Host Type (Figure 5-23).
Tip: If the Host Port Identifier drop-down is empty or missing the correct entries, close
Storage Manager and restart the relevant system.
Repeat this process for each Fibre HBA in the system. Repeat the process again for each
system in the group (Figure 5-24).
Highlight the new Host Group, select Mappings -> Define -> Storage Partitioning. The
storage partitioning wizard will appear. Read the text and click Next (Figure 5-25).
Select a Host Group or single Host for this partition. Click Next (Figure 5-26).
Select the logical drives to include in the partition and assign LUN IDs. These IDs must start
at zero and continue sequentially; you cannot skip a number. If you later remove a LUN, you
must reassign the LUN IDs so that the sequence remains unbroken (Figure 5-27).
Highlight the logical drives to be included and click Add (Figure 5-28). Once you have added
all of the required logical drives, click Finish.
The Access LUN (31) will probably be listed under Defined Mapping in your new Host Group.
This LUN is used by some other operating systems. Delete this LUN as it may cause
problems with multipathing under Linux. Highlight LUN 31, press Delete, and click Yes to confirm.
As a final step, reboot your system. If you want your volumes to be mounted automatically,
enter the appropriate parameters in /etc/fstab.
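For example, a line in /etc/fstab to mount one of the FAStT logical drives at boot might look like the following. The device name, mount point, and file system type here are illustrative only; your SCSI device naming depends on your configuration.

```
/dev/sdc1    /mnt/fastt_lun0    ext3    defaults    1 2
```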
For more about Storage Area Networks, please consult any of several IBM Redbooks on the
topic.
FAStT MSJ has a GUI component. It also has an agent that runs on the host containing the
HBAs. The agent, qlremote, is quite small but the GUI is rather large, so allow sufficient
space (say 50 MB) for unpacking and installing the product.
The FAStT MSJ version is closely tied with the HBA version. Ensure that you have the correct
version. Extract the gzipped tar file:
tar zxvf *.tgz
You should then see the Introduction screen (Figure 5-30). Read the text and click Next and
continue through the License screen and the Readme screen.
The next screen (Figure 5-31) allows you to choose which features of the product you wish to
install. You have four choices:
GUI and Linux Agent - this installs both on one system
GUI - console for remote administration
Linux Agent - for systems to be managed remotely
Custom - to customize the components to be installed
The default selection installs the GUI and Linux Agent. Click Next.
Next, you are prompted for a location in which to install the software. The default folder is
/opt/IBM_FAStT_MSJ. Click Next to accept the default, or choose a different location
according to your preference. Software installation will take a few moments.
To have the drivers at boot time, they may be compiled into the kernel, or the boot loader
(GRUB) can load an initial ramdisk which contains all necessary drivers (see Figure 5-32).
You can see that mkinitrd scans modules.conf for the required modules, including all options
for the particular modules. As you can see in this case, the drivers for the network, the
ServeRAID™, the Fibre Channel, and other modules are included. In addition, a long options
list generated by FAStT MSJ is submitted as well (we show how to generate it in B.1.3, “Use
FAStT MSJ to configure Fibre Channel paths” on page 79). These options contain all the
necessary path and LUN information to set up the FAStT multipath driver during boot.
This leads to the following results:
Every time a change to your configuration occurs, you have to generate a new initial
ramdisk and reboot (or unload and reload the Fibre driver).
It is quite useful to keep an additional bootable ramdisk, with no options for the FAStT HBA,
in reserve. After booting this ramdisk, the driver detects the attached devices and sets the
preferred paths, unconstrained by any stale options. Then, with MSJ, you can generate a
new, error-free option string.
Note: Any change in the configuration of your SAN causes you to reconfigure your
paths and create a new initial ramdisk.
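As a concrete illustration of the mechanism described above, an /etc/modules.conf on such a host might contain entries along the following lines. This is a hypothetical excerpt: the aliases and, in particular, the option string are examples only, and the real option string is generated for you by FAStT MSJ.

```
alias eth0 e1000
alias scsi_hostadapter ips
alias scsi_hostadapter1 qla2300
# Option string generated by FAStT MSJ (shortened here); it encodes the
# path and LUN configuration applied when the driver loads
options qla2300 ql2xopts=scsi-qla0-adapter-port=210000e08b...
```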
The Linux agent is qlremote. This has to be running on the host before you open the GUI
client software. Open a terminal window and run qlremote (Figure 5-33).
If the MSJ software is installed on the same host, leave this to run and open another terminal
window. Otherwise, open a terminal window on the management workstation. In either case,
type the following to run FAStT MSJ client:
/opt/IBM_FAStT_MSJ/FAStT
When the FAStT MSJ is launched, a screen similar to Figure 5-34 should appear.
Click Connect to connect to the agent you wish to administer. Enter the IP address of the
system where the qlremote agent is running and click Connect. You should see a screen
similar to Figure 5-35 showing you storage and LUNs.
The management system’s /etc/hosts file or the site DNS services should be up to date as
this program will try to resolve a hostname to the IP address. You will get errors if this is not
correct.
To configure your environment, highlight either the host machine or an adapter and click
Configure. You will receive an error message (Figure 5-36). This is because there is no
configuration set up yet. Click OK and continue.
For the easiest way to configure the ports, click Device -> AutoConfigure. You will receive a
message asking Also configure LUNs? After clicking Yes, the preferred paths are shown in
white, while the hidden paths are shown in black. Hidden means that these paths are not
visible to the operating system. Because the multipath driver is handling all I/O, they can be
used as redundant paths without interfering with the operating system’s view.
Before balancing the LUNs, the configuration will look something like Figure 5-38. Select
Configure, highlight the Node Name, select Device -> Configure LUNs. You will see the
preferred paths highlighted in blue and the alternate paths in yellow. The preferred paths are
also marked with a green circle, because they match the settings that the driver applied at
boot time (and hence these paths are “seen” by Linux).
Click Save, enter the password and click OK. A screen will pop-up (Figure 5-39) informing
you that the configuration has been saved and you must reboot for the changes to take effect.
Of course, this reboot requirement refers to the system containing the HBAs.
The current status of the paths will look something like Figure 5-40; the red circles indicate
the new preferred paths. These are marked red because the paths cannot be changed at runtime.
Once you have rebooted, you can launch the FAStT MSJ again, and view how the paths have
been configured. (Figure 5-41).
Exit from FAStT MSJ by clicking File -> Exit. Stop qlremote on the host system by pressing
Ctrl-C in the terminal window where it is running.
If you now view /etc/modules.conf you will notice that FAStT MSJ has added a large options
string. This string is used by the Fibre HBA drivers (Figure 5-42).
As changes have been made to modules.conf we need to rebuild the ramdisk. First run
depmod -a to update the modules.dep file.
The -f option allows you to build a ramdisk with a name which already exists, that is, to overwrite it. In
our case, we used the following command with Red Hat Advanced Server (your naming
convention may vary from this example):
mkinitrd -f /boot/initrd-storage-2.4.9-e.12summit.img 2.4.9-e.12summit
If you used a new name for the ramdisk, update your GRUB configuration (typically
/boot/grub/grub.conf) and reboot the system.
First, you install the required management software. Install (at least) qlremote from the FAStT
MSJ package on the host. After you have configured your storage, launch the qlremote agent
and access it with the MSJ (either locally or from a management station). After you have set
up your paths, MSJ instructs qlremote to write an option string to the modules.conf, which is
used to configure the driver module during system boot.
Note: Run qlremote only while you are using FAStT MSJ. Once you are done and the
option string has been written to modules.conf, stop qlremote immediately!
You can verify your setup by unloading the module with rmmod qla2x00 and reloading it with
modprobe qla2x00. During this load, the module will use the option string carrying the path
information.
SuSE notes
Before you build the new ramdisk, you have to make the following change to the file
/etc/sysconfig/kernel: add qla2200 or qla2300 to the string behind INITRD_MODULES,
depending on your HBA designation:
INITRD_MODULES=”aic7xxx reiserfs qla2300”
The script /sbin/mk_initrd uses this string as input. By running mk_initrd a new version of
the initial ramdisk is built in the /boot directory. Even though this is convenient, you might
want to build a new ramdisk instead of replacing the existing one. Have a look at the
mk_initrd script; it offers quite a few useful (and sometimes colorful) options.
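The change to INITRD_MODULES can be made with a one-line sed substitution. The sketch below operates on a sample line rather than the real file; on a SuSE system you would apply the same substitution to /etc/sysconfig/kernel (after backing it up) and then run /sbin/mk_initrd.

```shell
#!/bin/sh
# Append qla2300 to the INITRD_MODULES string (sample line shown;
# substitute qla2200 to match your HBA designation).
line='INITRD_MODULES="aic7xxx reiserfs"'

# Replace the closing quote with " qla2300" followed by the quote.
updated=$(printf '%s\n' "$line" | sed 's/"$/ qla2300"/')
echo "$updated"
```

This prints `INITRD_MODULES="aic7xxx reiserfs qla2300"`, matching the form shown above.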
Note: Every change in the configuration of your SAN means that you must reconfigure
your paths and rebuild the initial ramdisk!
For more about Storage Area Networks, please consult any of several IBM Redbooks on the
topic.1
As a final step, reboot your system. If you want your volumes to be mounted automatically,
enter the appropriate parameters in /etc/fstab.
Important: Keep in mind that failure of a volume prior to or during reboot may cause a
change in the SCSI device numbering sequence.
1 Introduction to Storage Area Networks, SG24-5470-01, and Designing and Optimizing an IBM Storage Area
Network, SG24-6419-00, are just two examples.
Table 5-3 Limitations that apply with IBM TotalStorage Manager

Limitation: Vixel Rapport 2000 Fibre Channel hubs using controllers with firmware version
04.01.02 cause problems, including damage to data, system instability, and disrupted loop
activity.
Workaround: Do not use Vixel Rapport 2000 Fibre Channel hubs when your system is
operating in a controller or an I/O path fault-tolerant environment.

Limitation: After removing all drives from a storage subsystem, when you are prompted for a
password at software startup or when you perform protected operations, all passwords you
enter fail.
Workaround: Add one of the drives to the storage subsystem and attempt the operation
again.

Limitation: A standard non-network configuration is not supported when the Linux host does
not have the TCP/IP software installed.
Workaround: Install the TCP/IP software on the Linux host computer and assign the host a
static IP address.

Limitation: Multipath failover will work only if the storage controllers are in active/active
mode.
Workaround: When you configure the storage subsystem, change both controllers to active
status.

Limitation: The controller firmware does not recognize or communicate with a single
controller until slot A is populated. This restriction does not apply to storage subsystems that
were originally configured with two controllers.
Workaround: When you configure a new storage subsystem with a single controller, you
must place the controller in slot A.

Limitation: Your windows and online help display a brownish hash pattern when you run in
256 color mode.
Workaround: Run the Storage Manager 8.3 application in a higher display mode.

Limitation: You might not see the maximum number of drives during Automatic Configuration
if you are using drives of different capacities.
Workaround: Use Manual Configuration to select individual drives and select the maximum
number of drives allowed.
XpandOnDemand
XpandOnDemand is part of the X-Architecture. The power of a 16-way server is now available
with the xSeries 440 to take charge of advanced enterprise applications and drive a higher
level of performance on this flagship server. Powered by Enterprise X-Architecture™
technology, these 4U rack-optimized, industry-standard servers support up to 16-way
processing by interconnecting two xSeries 440 chassis as a single 8U configuration. This
makes it one of the most rack-dense 16-way servers in the world.
XpandOnDemand offers scalability in the way you buy and grow. This revolutionizes data
center servers with a modular, pay-as-you-grow building block design. This design offers low
entry price points and upgradability to powerful 16-way SMP and remote I/O.
XpandOnDemand allows you to purchase only the performance and I/O capacity that you
need, when you need it, without having to buy costly upfront infrastructure.
System partitioning
System partitioning is another of the many mainframe capabilities that Enterprise
X-Architecture technology brings to Intel architecture servers. Among the benefits of system
partitioning are hardware consolidation; software migration and coexistence; version control;
development, testing and maintenance; workload isolation and independent backup; and
recovery on a partition basis.
PCI-X
PCI-X provides a new generation of capabilities for the PCI bus, including more efficient data
transfers, more adapters per bus segment, and faster bus speeds for server systems. PCI-X
enhances the PCI standard by doubling the throughput capability and providing new
adapter-performance options while maintaining compatibility with PCI adapters. PCI-X allows
all current 66MHz PCI adapters (either 32-bit or 64-bit) to operate normally on the PCI-X bus.
PCI-X adapters take advantage of the new 100MHz and 133MHz bus speeds, which allow a
single 64-bit adapter to move as much as 1 GB of data per second. (The next PCI-X
specification (2.0) will support bus speeds of up to 266MHz.) Additionally, PCI-X supports
twice as many 66MHz/64-bit adapters in a single bus as PCI.
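The 1 GB per second figure quoted above follows directly from the bus width and clock rate:

```
64 bits x 133 MHz = 8 bytes x 133 x 10^6 transfers/s ~= 1064 MB/s ~= 1 GB/s
```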
Remote I/O
Using IBM Enterprise X-Architecture remote I/O (RIO) enclosure support, it is possible to add
dozens of PCI/PCI-X adapter slots through external I/O expansion boxes to a single server3,
providing incredible I/O scalability. This is yet another example of what we mean by “pay as
you grow” scalability. Buy only what you need, when you need it. It allows IBM to continue to
shrink server cabinets while increasing I/O scalability through external expansion units.
Active Memory
IBM is delivering a number of memory technology breakthroughs through Active Memory that
increase capacity, performance, and reliability:
Large memory capacity - There are any number of reasons to buy additional servers. You
may have run out of room to add processors, or you may need more adapter slots
(something that is solved through Enterprise X-Architecture remote I/O), or you may need
more memory than your server can hold. While some servers are constrained by the
number of memory sockets they can hold, others are limited by the maximum amount of
memory that can be addressed by the chipset the server is using. Most servers are limited
to 16 GB of RAM or less for these reasons.
The Enterprise X-Architecture design smashes that barrier by enabling the use of as much
as 256 GB of RAM in a 64-bit Itanium-based server (64 GB in a 32-bit Xeon MP server).
This is enough to hold most databases entirely in memory, with potentially huge gains in
performance over databases that are accessed primarily from disk. The concern
preventing most users from considering running their databases entirely from memory is
the fear of a memory failure causing a system crash, with an attendant loss of data. The
possibility of such a failure is the reason for the major enhancements in memory reliability
and availability provided by the Enterprise X-Architecture solution.
High-speed memory access - Today, the fastest Intel processor-based servers have a
front-side bus (FSB) speed of 133 MHz (many servers still use a 100 MHz FSB). This
determines how fast the processor can access main memory and external cache memory.
By contrast, Enterprise X-Architecture technology enables servers to implement the 400
MHz FSB of the Intel Xeon (Foster) and second-generation Itanium (McKinley)
processors. This means reads and writes of memory by the processor will be at triple
today’s servers’ best speed, due to higher bandwidth and lower latency (waiting for the
memory to be ready for the next read/write). In addition, Enterprise X-Architecture design
also supports the use of double data rate (DDR) main memory, for even higher
performance.
Memory ProteXion - Memory ProteXion helps protect you from unplanned outages due to
memory errors far more effectively than standard ECC technology, even while using
standard ECC DIMMs. It works somewhat like hot-spare disk sectors in the Windows
NTFS file system, where if the operating system detects bad sectors on disk, it will write
the data to spare sectors set aside for that purpose. Think of Memory ProteXion as providing
hot-spare bits. The error correction is handled by the memory controller, so there is no
operating system involvement and no performance penalty.
Active Diagnostics
Expandability, performance, and economy are all important features in a server, but equally
important is your ability to prevent or minimize server downtime. A technology new to
industry-standard servers that helps in that regard is IBM Active Diagnostics. Although not
directly enabled by the XA-32/XA-64 chipsets, Active Diagnostics is another Enterprise
X-Architecture feature that will be incorporated in many of the servers that use the chipsets.
If you are looking for all of these abilities in an industry-standard server today, they are
available from IBM. These features deliver application flexibility, innovative technology, and
new tools for managing e-business. They bring to industry-standard servers the kinds of
capabilities that were formerly available only to users of mainframes and other high-end
systems. Combined with existing X-Architecture technologies, these innovations result in
unprecedented “economies of scalability,” unmatched flexibility, and new levels of server
availability and performance for Intel processor-based servers.
On Intel-Architecture (IA) servers, Linux has evolved quickly. Compared with other operating
systems, new drivers and functions have been incorporated one after another, thanks to the
large development community surrounding open source software. Since distributions with
kernel 2.4 appeared, Linux for Intel-Architecture servers has been accepted as an operating
system with the function and stability to be fully usable as an enterprise operating system.
For more information on Enterprise X-Architecture, see:
http://www.pc.ibm.com/us/eserver/xseries/xarchitecture/enterprise/index.html
The publications listed in this section are considered particularly suitable for a more detailed
discussion of the topics covered in this redbook.
IBM Redbooks
For information on ordering these publications, see “How to get IBM Redbooks” on page 91.
Note that some of the documents referenced here may be available in softcopy only.
Linux Handbook A Guide to IBM Linux Solutions and Resources, SG24-7000
Implementing Linux with IBM Disk Storage, SG24-6261-01
The IBM TotalStorage Solutions Handbook, SG24-5250
Introduction to Storage Area Networks, SG24-5470
Fibre Array Storage Technology - A FAStT Introduction, SG24-6246
IBM TotalStorage FAStT700 and Copy Services, SG24-6808
Online resources
These Web sites and URLs are also relevant as further information sources:
IBM Linux Web site
http://www.ibm.com/linux/
The Linux Technology Center
http://www.ibm.com/linux/ltc
The IBM TotalStorage Web site
http://www.storage.ibm.com/
The IBM TotalStorage SAN fabric Web site
http://www.storage.ibm.com/ibmsan/products/sanfabric.html
The IBM eServer Web site
http://www.ibm.com/eserver
A
access LUN 58
Active Memory 25
Active PCI 24–25
Active PCI-X 88
application consolidation 33
application server 27
array 66, 69
Automatic Discovery screen 62

B
base logical drive 46
BIND (Berkeley Internet Name Daemon) 5
blade server 26
BladeCenter 26
requirements 50
BM ServerProven 24
boot loader
GRUB 78
Bourne Again Shell (Bash) 3
BSD (Berkeley Systems Distribution) 2

C
C2T Interconnect 24
Chipkill 24–25, 89
cluster 13, 26–27, 36, 44
Beowulf 28
Cluster 1350 30
High availability (HA) 27
High performance computing (HPC) 27–29
Horizontal scaling (HS) 27, 29
Linux 30
Cluster Systems Management (CSM) 31
Connection requirements 50
consolidating applications 33
consolidating data 32
consolidation 26, 31
copy-on-write 46

D
data consolidation 32
data management
services for Linux 15
DB2
for Linux Migration Services 15
support for Linux 15
device driver 26, 49–50, 78
DHCP 5
distribution 1, 3, 10, 13
domain name server (DNS) 5
driver (see device driver)

E
ESS 10
Ethernet VLAN 31
EXP500 42
EXP700 43, 86
Expansion Unit
EXP500 42
Expansion unit
EXP700 43

F
FAStT 10
controller 58, 60
firmware 60, 63
host 71, 73
host port 71
Management Suite for Java 60
Management Suite Java 53, 75
MSJ 50, 53
NVSRAM 63
Premium Features 44
Service Alert 55
Storage Manager 52
agent 60
client 60
Storage Partitioning 65, 68
Storage partitioning 70, 72
FAStT200 38, 43, 86
FAStT500 39, 43
FAStT600 40
FAStT600 Turbo 40
FAStT700 41, 43
FAStT900 42
file and print serving 5
file structure 4
firewall 5
FlashCopy 46

G
GNU 1–3
GNU C Compiler (gcc) 4
GPL (General Public License) 1–2
grid computing 26
GRUB 78

H
Hardware requirements 49
HBA 50, 76, 81
High Availability 38
high availability 4
host 45
host group 45
host port 45
T
TCSH shell 3
Technology Center 10
Technology Partner 6
Tivoli 15–16, 18
Linux supported Tivoli products 15
U
UnitedLinux 6
UNIX 2–3
V
VMware 32
VolumeCopy 48–49
W
Web serving 5
WebSphere
product services matrix for Linux 14
services for Linux 14
workload consolidation 31
workload management solution 32
WWN 45
X
X-Architecture 24, 87
XpandOnDemand 87
xSeries
requirements 50
xSeries Linux information matrix 20
xSeries services for Linux 20
Z
zSeries
zSeries Linux information matrix 19
zSeries services for Linux 19
Linux with xSeries and FAStT: Essentials
Back cover

Enterprise-class solutions with Linux and IBM TotalStorage: Focus on xSeries and FAStT
FAStT Management Software overview

IBM TotalStorage products are known for their high quality and reliability and work well with
Linux. As part of a well designed Linux based e-business infrastructure, they can help you cut
costs, consolidate infrastructure, and position you for the new on-demand world.

This IBM Redbook presents high-level information on Linux in conjunction with IBM and
TotalStorage products, giving proof points that these products can be deployed all together to
provide enterprise-class solutions. In particular this book looks at Linux with the xSeries
servers and IBM TotalStorage FAStT disk products.

This redbook is intended as a starting point and reference for IBM representatives, Business
Partners, or clients who are planning Linux based solutions with IBM xSeries servers and
FAStT storage products.

Most of the information contained in this book is a compilation of the material from the Linux
Handbook, SG24-7000-00, and Implementing Linux with IBM Disk Storage, SG24-6262-01.
We encourage the reader to refer to these IBM Redbooks for more complete information or
implementation details.

INTERNATIONAL TECHNICAL SUPPORT ORGANIZATION
BUILDING TECHNICAL INFORMATION BASED ON PRACTICAL EXPERIENCE

IBM Redbooks are developed by the IBM International Technical Support Organization.
Experts from IBM, Customers and Partners from around the world create timely technical
information based on realistic scenarios. Specific recommendations are provided to help you
implement IT solutions more effectively in your environment.