Operations Management Capabilities Model: Sun Microsystems, Inc
http://www.sun.com/blueprints
Sun Microsystems, Inc.
4150 Network Circle
Santa Clara, CA 95054 U.S.A.
650 960-1300
Part No. 819-1693-10
Revision 1.0, 1/14/05
Edition: February 2005
Copyright 2005 Sun Microsystems, Inc. 4150 Network Circle, Santa Clara, California 95054 U.S.A. All rights reserved.
Sun Microsystems, Inc. has intellectual property rights relating to technology embodied in the product that is described in this document.
In particular, and without limitation, these intellectual property rights may include one or more of the U.S. patents listed at
http://www.sun.com/patents and one or more additional patents or pending patent applications in the U.S. and in other countries.
This product or document is protected by copyright and distributed under licenses restricting its use, copying, distribution, and decompilation.
No part of this product or document may be reproduced in any form by any means without prior written authorization of Sun and its licensors,
if any. Third-party software, including font technology, is copyrighted and licensed from Sun suppliers.
Parts of the product may be derived from Berkeley BSD systems, licensed from the University of California. UNIX is a registered trademark
in the U.S. and other countries, exclusively licensed through X/Open Company, Ltd.
Sun, Sun Microsystems, the Sun logo, Java, Sun BluePrints, SunSolve, SunSolve Online, docs.sun.com, JumpStart, N1, and Solaris are
trademarks or registered trademarks of Sun Microsystems, Inc. in the U.S. and other countries. All SPARC trademarks are used under license
and are trademarks or registered trademarks of SPARC International, Inc. in the U.S. and other countries. Products bearing SPARC trademarks
are based upon an architecture developed by Sun Microsystems, Inc.
The OPEN LOOK and Sun™ Graphical User Interface was developed by Sun Microsystems, Inc. for its users and licensees. Sun acknowledges
the pioneering efforts of Xerox in researching and developing the concept of visual or graphical user interfaces for the computer industry.
Sun holds a non-exclusive license from Xerox to the Xerox Graphical User Interface, which license also covers Sun’s licensees who implement
OPEN LOOK GUIs and otherwise comply with Sun’s written license agreements.
U.S. Government Rights—Commercial use. Government users are subject to the Sun Microsystems, Inc. standard license agreement and
applicable provisions of the FAR and its supplements.
DOCUMENTATION IS PROVIDED “AS IS” AND ALL EXPRESS OR IMPLIED CONDITIONS, REPRESENTATIONS AND WARRANTIES,
INCLUDING ANY IMPLIED WARRANTY OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE OR NON-
INFRINGEMENT, ARE DISCLAIMED, EXCEPT TO THE EXTENT THAT SUCH DISCLAIMERS ARE HELD TO BE LEGALLY INVALID.
Contents
Part 1—Introduction
Chapter 1—Executive Summary
Chapter 2—Introduction
About This Document
Concept of Operational Capability
Operations Management Capabilities Model
Other Industry Standards and Models
Gartner IT Process Maturity Model
Vrije Universiteit IT Service Capability Maturity Model
Why A New Model?
What This Document Contains
Practices of the Sun ITMF People Aspect
Organizing
Resourcing
Skills Development
Workforce Management
Knowledge Management
Chapter 5—Sun IT Management Framework—Process
Overview of the Sun ITMF Process Aspect
Diagram of the Sun ITMF Process Aspect
Processes and Process Categories
IT Services
Processes of the Sun ITMF Process Aspect
Create IT Services
Implement IT Services
Deliver IT Services
Improve IT Services
Control IT Services
Protect IT Services
Chapter 6—Sun IT Management Framework—Tools
Overview of the Sun ITMF Tools Aspect
Manual and Automated Processes
Tools and Tool Categories
Diagram of the Sun ITMF Tools Aspect
Tools Framework
Components of the Tools Framework
Tools Framework Touch Points
Tools of the Sun ITMF Tools Aspect
Instrumentation Types
Element and Resource Management Applications
Event and Information Management Applications
Service Level Management Applications
Workflow and Portal Systems
Service Level Management
Availability Management
Implement IT Services
Release Management
Deliver IT Services
Capacity Management
Incident Management
Capabilities Profile
Service Desk
Improve IT Services
Problem Management
Continuous Process Improvement
Control
IT Financial Management
Configuration Management
Change Management
Protect IT Services
IT Service Continuity Management
Security Management
Summary of the Process Capabilities Profile
Chapter 10—OMCM Specification—Tools
Specification of Management Tools Architecture
Implementation of Functional Components
Element and Resource Managers
Event and Information Managers
Service Level Managers
Process Workflow Managers
Management Portals
Degree of Visibility
Integration of Components
Process Automation
Effectiveness of the Implementation
Summary of the Tools Capabilities Profile
Part 4—Conclusion
Chapter 11—Application of the OMCM
Assessment and Scoring
Vendor Application
Chapter 12—Resources for More Information
Part 1—Introduction
Part 1 of this document provides an executive summary, introduces the concept
of operational capability and the capabilities model described in this document, and
compares that model with other industry standards and models. Part 1 contains the following
chapters:
■ Chapter 1, “Executive Summary”
■ Chapter 2, “Introduction”
Chapter 1—Executive Summary
Today’s IT organizations are under pressure to meet or improve IT service levels for
critical business functions despite shrinking budgets and amidst organizational,
process efficiency, and automation challenges. In an effort to provide consistent and
predictable levels of service to their respective organizations, IT departments have
invested heavily in technology resources (people, processes, and tools) to manage
the extended data center. Despite this investment, many firms are still not able to
effectively manage the IT environment and meet the service level requirements for
users of the organization's IT products and services.
Various industry standards, such as the IT Infrastructure Library (ITIL) and the
Control Objectives for Information and related Technology (COBIT) standard, have
gained wide acceptance as comprehensive methodologies for improving the
effectiveness of IT management. What these methodologies lack, however, are
specific metrics that can be used to measure and assess—in an objective and
consistent manner—the effectiveness of IT management in an organization.
■ provides the basis of assessment for the purpose of determining where best to
invest in IT resources in support of key business needs
The OMCM is based on the Sun IT Management Framework (Sun ITMF), which defines
the three different aspects—people, processes, and tools—of an organization’s IT
management infrastructure. The people aspect represents the skills, training,
management, and discipline required to effectively and efficiently execute the
processes and run the tools to support the IT lifecycle and automate the IT
management processes. The process aspect represents the actual IT management
processes used to support the IT service life cycle. The tools aspect represents the
actual technology used to facilitate and automate the execution of the various IT
management processes.
Introduction
Note – Throughout this document, terms are defined within the context of the
OMCM and the Sun IT Management Framework (Sun ITMF). Such definitions
represent Sun’s interpretations of industry standard terminology. We provide
definitions to ensure that terms are used consistently and to set the proper
expectations for what is—and what is not—specified in this document.
Despite this heavy investment, many firms are not able to effectively manage the IT
environment. Surveys and estimates from a variety of sources indicate that few
organizations actually deliver (defined as putting a solution into production) an
enterprise management project. Even fewer succeed (defined as having the solution
meet or exceed expectations) with the effort. Past surveys from Gartner2 reveal that
enterprise event console implementations have a completion rate of 40% and a
success rate of 20%. Although the Gartner research is dated, there is little to indicate
that this has changed significantly in the past five years.
Finally, there are always ongoing efforts to address the people who manage the IT
environment, with one of the latest trends being the shifting of IT development and
support activities to lower cost locations overseas. When this is done, there should
be a standard against which the overseas IT delivery team can be measured.
All of these efforts on the part of IT organizations are focused on providing value to
the organization. However, the desired result is not necessarily an operational
monitoring environment or a robust change management process. As the old adage
says, a person who buys a shovel does not really want a shovel—they want a hole in
the ground. The same holds true for organizations that make investments in
enterprise management technology, process implementation, or staffing. What is
actually being purchased is the ability to meet the service level requirements for both
internal and external users of the organization's IT products and services. The
people, process, and tools are simply a means to acquire this capability.
1. Worldwide Enterprise System Management Software Forecast and Analyst Summary 2002-2006, IDC Bulletin 27402,
2002.
2. Effectively Managing Event Console Implementations, Gartner Group Document COM-03-6600, May 1998.
The following figure provides a visual representation of the interaction among the
process, people, and tools components that results in an organization's operational
capability.
Because the components of operational capability include not only tools but also
people and process, it is very difficult to acquire operational capability through a
revolutionary or big bang approach. Implementation of a robust IT management
infrastructure is as much an exercise in organizational change as it is a technology
implementation.
3. The American Heritage Dictionary (Third Edition), Dell Publishing, New York, NY 1994.
In order to be useful, this model needs to define, at some level, the components of
operational capability, their relationships, and the evolutionary path that is followed
to acquire and integrate them into the organization. The model needs to be
improvement focused with clearly defined requirements for each step. The model
should also be specified with sufficient detail to support practical application by IT
professionals. However, too much detail can make the model inflexible and
communicate a level of accuracy that might not exist. Therefore, a certain level of
ambiguity is present by design to allow the model to be applied in a wide variety of
roles and situations.
We name this model the Operations Management Capabilities Model (or OMCM) to
reinforce two ideas:
■ This model predominantly deals with the operational infrastructure. We reference
the business and application delivery environments as necessary to set context or to
identify requirements and process interfaces. As will be described in the chapters
on the management framework, we draw a clear line between the three areas.
■ This model makes the distinction between capability and maturity. In some cases,
both terms are used interchangeably when discussing the evolution of IT
operations. However, we believe there is no particular advantage in having a
mature environment. Over the years, many mature organizational processes have
been found to be inefficient or downright dysfunctional. We believe that the term
capability better defines the reason for investment.
Other maturity models were also reviewed, including the Model specified in the
COBIT Management Guidelines4 and other work by the Software Engineering
Institute, such as the Software Capability Maturity Model (CMM).
[Table: Gartner IT Process Maturity Model (Level, IT Process Maturity Description, Estimated Distribution of Organizations)]
TABLE 2-2 IT Service Capability Maturity Model (F. Niessink and H. van Vliet 1999)
[Table columns: Level, IT Process Maturity Description]
6. The Vrije Universiteit IT Service Capability Maturity Model, Technical Report IR 463 Release I.2-1.0, Frank
Niessink and Hans van Vliet, December 1999.
The IT Service Capability Maturity Model defines two key characteristics that we
found to be useful when specifying the OMCM:
■ The model is strictly ordered. This is explicitly stated by Niessink and van Vliet and
implied by Gartner. A strictly ordered model means that it is not possible to
obtain a given level of maturity without first meeting the requirements of the
previous levels. This reinforces the idea that organizations evolve their level of
maturity over time instead of buying it all in one step.
■ The model is minimal. Both models only state what is required to reach a given
level. They do not restrict what might be done in addition to the minimum. The
model also only describes what the requirements are—it does not specify how
they are met.
Chapter 3—Sun IT Management Framework
The OMCM needs to be specified within the context of a framework that allows us
to quantify the components of operational capability. We call this framework the Sun
IT Management Framework (or Sun ITMF).
This chapter provides some context for the Sun IT Management Framework within
the extended IT environment. It contains the following sections:
■ Definitions of Key Terms
■ Introduction to the Sun E-Stack
■ Business Framework
■ Execution Framework
■ Sun IT Management Framework
The following three chapters provide additional detail on the Sun ITMF.
■ Chapter 4, “Sun IT Management Framework—People”
■ Chapter 5, “Sun IT Management Framework —Process”
■ Chapter 6, “Sun IT Management Framework—Tools”
Definitions of Key Terms
Before describing the Sun IT Management Framework, we need to provide Sun’s
definitions for key terms—system, framework, architecture, and design—that are
frequently ill defined and overused within the IT industry. Our working definitions
apply within the context of this document—we do not make the claim that our
definitions are the only correct ones.
Systems
Frameworks, architectures, and designs all provide a description of a system. In this
document, a system is defined as a group of elements that work cooperatively to provide
specific results by performing specific tasks.
[Table: definitions of the terms framework, architecture, and design]
The OMCM is specified within the context of the Sun IT Management Framework
(Sun ITMF). We believe that the Sun ITMF conforms to the definition of framework
as given above. Of course, actual system implementation requires additional effort.
Defining an architecture or design of a management system is outside the scope of
this document.
The purpose of the E-Stack construct is to visually describe all of the components and
interactions that must be addressed when an organization delivers IT-based
solutions to internal or external customers. Within any organization, the discipline of
IT management is separate from, yet integrated with, the architecture that is
managed. The process of developing an architecture is a complex, high-level set of
tasks that considers the inputs, outputs, and dependencies of an IT service on the
existing IT environment, along with the definition and mapping of requirements to
technology. The E-Stack helps organize these considerations to ensure that they are
addressed during the course of developing a solutions architecture.
The components of the E-Stack involve three separate but mutually dependent
architectural disciplines—the Business Framework, Execution Framework, and Sun
IT Management Framework, as shown in the previous figure. The rest of this chapter
describes these frameworks further.
The Business Framework provides the basis of the organization requirements for the
Execution Framework and the Sun IT Management Framework.
Dimensions of the Execution Framework
This section describes the three dimensions of the Execution Framework—functional
layers, service tiers, and systemic qualities.
Functional Layers
The functional layers of the Execution Framework describe the various technology
components that make up an application, the system it runs on, and the supporting environment,
including:
■ Business logic that captures the business process being implemented.
■ Software container and services that this logic uses to execute its function.
■ Supporting operating systems, hardware, and other components that provide
computing and data storage.
■ Network that connects the various distributed systems and enables
communication.
■ Facilities (power, heat, light, etc.) that provide the appropriate environment for all
of the physical components of the architecture.
Service Tiers
The service tiers of the Execution Framework describe the logical partitioning of
functions within a distributed application. References to “n-tier” applications are, in
effect, describing this aspect of the Execution Framework.
Systemic Qualities
Systemic qualities capture the various non-functional (or operational) requirements
that must be considered during the architectural process. These considerations do
not impact how an application will work but rather how well it will work. Their
position as the third aspect of the IT Execution Framework means that these
requirements are considerations at each intersection of a service tier and functional
layers.
Special attention should be paid to the visibility component (systemic quality) of the
Execution Framework. The degree to which this quality is considered and
implemented will determine the amount of control over the Execution Framework
that is provided to the external entities represented by the tools aspect of the
management framework.
People Aspect
The people aspect of the Sun ITMF represents the organizational component of the IT
environment. This includes IT operations staff, help desk organizations, operations
and administrative groups, IT management, and any other internal IT stakeholders.
The framework depicts a first level set of activities that are applied when managing
IT staff. These activities are designed to cover a range of organizational management
functions, such as designing the organization, obtaining resources, and managing
resources on a day-to-day basis. This aspect also includes the concept of creating,
capturing, and reusing organizational knowledge.
Process Aspect
The process aspect of the Sun ITMF represents the actual IT management processes that
are needed to support the IT service life cycle. It describes processes for creating,
deploying, and managing IT services.
Tools Aspect
The tools aspect of the Sun ITMF describes the technology used to facilitate and
automate the execution of the various IT management processes. This framework is
a functional categorization under which a variety of product approaches may be
inserted.
Chapter 4—Sun IT Management Framework—People
This chapter describes the people aspect of the Sun IT Management Framework. It
includes the following sections:
■ Overview of the Sun ITMF People Aspect
■ Practices of the Sun ITMF People Aspect
■ Organizing
■ Resourcing
■ Skills Development
■ Workforce Management
■ Knowledge Management
The people aspect of the Sun ITMF is a process oriented improvement model. The IT
organization can be matured through the institutionalization of different workforce
management processes. The more integrated into the organization these processes
become, the more effective and efficient the organization will be.
Definitions
This document makes frequent references to the notion of the competency of the
organization. The term competency, along with other associated CMM terms, has a very
specific meaning within the CMM context. To clarify the use of CMM terminology
in the context of the Sun ITMF, this section provides relevant definitions from the
CMM reference.
Competency
Competency is an underlying characteristic of an individual that is causally related to
effective/superior performance, as determined by measurable, objective criteria, in a
job or situation. A correlation exists between an individual’s competency and the
effectiveness in performing their job.
7. People Capability Maturity Model P-CMM Version 2.0, Bill Curtis, William E. Hefley, Sally A. Miller, Software
Engineering Institute, July 2001.
Workforce Competency
Workforce competency is a cluster of knowledge, skills, and process expertise that an
individual should develop to perform a particular type of work in the organization.
A workforce competency can be stated at a very abstract level, such as a need for a
workforce competency in software engineering, financial accounting, or technical
writing. Workforce competencies can also be decomposed into more granular
abilities, such as competencies in designing avionics software, testing switching
system software, managing accounts receivable, preparing consolidated corporate
financial statements, or writing user manuals and training materials for reservation
systems.
Organizing
Organizing is the practice category that encompasses the activities related to the
design of the organization's structure. These activities would include such practices
as identifying organizational groups, developing specific roles and responsibilities
for each group, and describing the interfaces between groups.
Communication / Coordination
Communication / coordination practices focus on the establishment and maintenance of
information sharing within the organization. These practices include the development
of individual communication skills and the establishment of formal processes to
ensure timely and effective two-way communication.
Workforce Planning
Workforce planning practices focus on aligning the IT organization with the goals and
objectives of the larger organization. This practice includes identifying the current
and future competency needs of the organization based on expected activities, and
then planning the steps necessary to acquire this capability when needed.
Participatory Culture
Participatory culture practices focus on ensuring that decision making is performed in
a structured manner and then executed at the appropriate levels of the
organization. The lines of communication established by the communication /
coordination practice are used to ensure that work groups are informed about their
performance and its impact on the overall performance of the company. The decision
making process is designed to provide a balance of speed and effectiveness. Decision
making is delegated to the levels in the organization that are best able to evaluate
and implement the decisions.
Empowered Workgroups
Empowered workgroup practices involve workgroups that have responsibility and
authority to determine how to most effectively conduct their operations. This
practice is focused on the decentralization of planning, decision making, and
operations to each workgroup. The workgroup is held accountable for its
performance as measured against specific objectives.
Competency Integration
Competency integration practices refer to the integration of different workforce
competencies to improve the efficiency of activities that have dependencies across
areas of competency. The intent of this practice is to institutionalize the use of
competency centers as building blocks to complete tasks requiring a
multidisciplinary approach.
Resourcing
Resourcing is the practice category that encompasses the activities necessary to
acquire the individuals necessary to meet the goals of the organization. This practice
includes such activities as identifying required skill sets, determining how many of
each type is required, developing a timeline for acquiring them, and identifying
sources to fill the requirements.
Staffing
Staffing practices involve matching work to individuals. This practice includes
processes to recruit, select, and transition individuals into specific roles.
Competency Analysis
Competency analysis practices focus on analyzing the activities of the organization
and developing the complete inventory of competencies needed to support them.
This inventory includes the individual skills required as well as the identification of
processes and knowledge necessary to meet the workforce requirements of the
organization.
Skills Development
Skills development is a practice category that encompasses activities taken to help
individuals acquire the knowledge and practical abilities necessary to perform their
current job or to prepare them for future assignments. Skills development includes
the following CMM practices.
■ Training and Development
■ Career Development
■ Competency Development
■ Mentoring
Career Development
Career development practices assist individuals with meeting their career goals and
objectives. This practice involves defining development paths that identify the
requirements for advancement, and communicating these development paths to the
organization. Periodic reviews of career aspirations and opportunities are performed
with individuals to ensure that they understand the options available within the
organization. The goal is to ensure that individuals see the organization as a place
where they can develop and realize individual career goals.
Competency Development
Competency development practices focus on continuously improving the ability of the
workforce to execute the required competency based processes.
Workforce Management
Workforce management is a practice category that encompasses the activities
performed to control and support individuals as they perform their tasks. Workforce
management includes the management of individual performance and
compensation, as well as the activities necessary to provide the workforce with the
infrastructure required to perform their job functions.
Work Environment
Work environment practices involve ensuring that the physical working environment
is conducive to individuals performing their job functions in an effective and
efficient manner. This practice includes the development of processes to evaluate
and maintain the physical environment (work space), supporting technology
(computers, phones, etc.), procedures to minimize distractions, and so on.
Knowledge Management
Knowledge management is a practice category that encompasses such activities as the
capture, documentation, maintenance, and dissemination of organizational learning.
Knowledge management enables the creation and maintenance of competency-based
practices. Through the execution of knowledge management, organizations are able
to take successful solutions and institutionalize them for reuse. It is through this set
of practices that organizations distribute effective processes and make them
repeatable.
Competency-Based Practices
Competency-based practices are focused on the development of workforce
competencies. These practices are used to align the staffing, compensation and other
resourcing practices with the competency development goals of the organization.
Chapter 5—Sun IT Management Framework—Process
This chapter describes the process aspect of the Sun IT Management Framework.
It includes the following sections:
■ Overview of the Sun ITMF Process Aspect
■ Processes of the Sun ITMF Process Aspect
■ Create IT Services
■ Implement IT Services
■ Deliver IT Services
■ Improve IT Services
■ Control IT Services
■ Protect IT Services
Overview of the Sun ITMF Process Aspect
The process aspect of the Sun ITMF represents the actual IT management processes
that are needed to support the IT service life cycle. The process aspect describes
processes for creating, deploying, and managing IT services.
When creating services, organizations need people who can think out of the box
with a business focus. Implementing IT services means a focus on meeting
schedules and resource challenges. The deliver IT services process category brings
the focus to consistent service quality. The improve IT services process category
requires a focus on understanding how to measure and how to facilitate process
improvement. Both the control and protect process categories have aspects that
require a look beyond the IT environment into other areas, such as finance,
security, and business continuity.
Sun ITMF process categories are designed to allow for the easy mapping of different
IT standards. In this document, we include process standards as they are defined in
the Information Technology Infrastructure Library (ITIL) to provide us with the
details to determine the degree of implementation of a process. The categorization of
these processes is based on a center-of-gravity approach: most activities, as defined
by ITIL, revolve around the target category, but certain aspects can, and most likely
will, also appear in other categories. For more information about ITIL, see
www.itil.co.uk/.
IT Services
In this document, an IT service is defined as the end-to-end provision of the people,
process, and technology necessary to deliver a specific organization requirement. It
includes all of the activities and components required to deliver a service, through
IT, to the end user.
Create IT Services
The create IT services process category describes all processes related to the creation
of new services, including identifying, quantifying, architecting, and designing IT
services. It involves:
■ Determining what services are needed and desired for the IT customers.
■ Defining the relationship between IT customers and the IT service provider,
including the definition of Service Level Agreements (SLAs).
■ Addressing the processes that ensure the completeness of the IT service portfolio
and the alignment of the IT Services with each other.
SLAs provide the basis for managing the relationship between the provider and the
IT customer. An SLA is a written agreement between the IT service provider and the
IT service customer(s). It defines the key service targets and responsibilities of both
parties. The emphasis must be on agreement—SLAs should not be used for coercing
one side or the other. A true partnership should be developed between the IT service provider and the IT customer.
[Figure: service level management activities (catalogue, draft, negotiate, review UCs and OLAs, agree and execute, monitor, control, report, and SLM process review)]
The service level management process includes the following main categories of
activities (note that this is a circular process):
■ plan and design
■ implement and execute
■ control and feedback
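To make the idea of key service targets concrete, the following minimal sketch (in Python, with a hypothetical service name and target values that are not taken from the original text) shows one way SLA targets could be recorded so that measured results can be checked against them during the monitor and report activities:

# Hypothetical SLA record; the service name and target values are illustrative only.
sla = {
    "service": "order-entry",
    "provider": "IT service provider",
    "customer": "Sales Operations",
    "targets": {
        "availability_percent": 99.5,      # minimum, measured monthly
        "incident_response_minutes": 30,   # maximum time to begin work on a priority-1 incident
    },
    "reporting_period": "monthly",
}

def breached_targets(measured, agreed):
    """Return the target names whose measured values breach the agreement."""
    failed = []
    if measured["availability_percent"] < agreed["targets"]["availability_percent"]:
        failed.append("availability_percent")
    if measured["incident_response_minutes"] > agreed["targets"]["incident_response_minutes"]:
        failed.append("incident_response_minutes")
    return failed

# Monitor and report: compare one month of measurements against the agreed targets.
print(breached_targets({"availability_percent": 99.1, "incident_response_minutes": 25}, sla))
# ['availability_percent']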
Availability Management
The availability management process involves managing key components of the
predictability and availability of IT services. Availability requirements heavily
influence service architecture design. Availability management is the process that
assures the ability of an IT service or component to perform its required function at
a stated instant or over a stated period of time.
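As a simple illustration of how availability is usually quantified (a sketch added here, not part of the original text), availability over a period can be computed from the agreed service time and the recorded downtime:

def availability_percent(agreed_service_minutes, downtime_minutes):
    """Availability = (agreed service time - downtime) / agreed service time, as a percentage."""
    return 100.0 * (agreed_service_minutes - downtime_minutes) / agreed_service_minutes

# A service agreed to run 24x7 over a 30-day month, with 90 minutes of recorded downtime:
print(round(availability_percent(30 * 24 * 60, 90), 3))   # 99.792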
The following figure shows the inputs and outputs of the availability management
process, as well as the importance of being driven by business requirements.
Availability depends on other components of the service. The following figure shows
what these relationships are and how they are managed.
[Figure: availability relationships among the SLAs, IT services, the service provider, and the underlying IT systems]
This figure also shows the terminology and type of contracts used to define the
relationships. Additional details about the structure of these agreements are outside
the scope of this document.
Release Management
The release management process involves managing releases, which are collections of authorized changes to an IT
service. A release typically consists of a number of problem fixes and enhancements
to the service, the new or changed software required, and any new or changed
hardware needed to implement the approved changes. The following table describes
the most common categories of releases:
Major software Normally contain large areas of new functionality, some of which
releases and may make intervening fixes to problems redundant. A major
hardware upgrades upgrade or release usually supersedes all preceding minor
upgrades, releases, and emergency fixes.
Minor software Normally contain small enhancements and fixes, some of which
releases and may have already been issued as emergency fixes. A minor
hardware upgrades upgrade or release usually supersedes all preceding emergency
fixes.
Emergency software Normally contain the corrections to a small number of known
and hardware fixes problems.
Release management works closely with the change management and configuration
management processes to ensure that the shared CMDB (configuration management
database) is kept up-to-date following changes implemented by new releases, and that
the content of those releases is stored in the DSL (definitive software library). Hardware
specifications, assembly instructions, and network configurations are also stored in the
DSL/CMDB.
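As a minimal, hypothetical sketch of the kind of record a CMDB might hold (the attribute names are illustrative, not a prescribed schema), a configuration item can be represented together with its relationships and the releases applied to it:

from dataclasses import dataclass, field

@dataclass
class ConfigurationItem:
    """A simplified CMDB record: identity, status, relationships, and applied releases."""
    ci_id: str
    ci_type: str                                       # for example "server" or "application"
    status: str = "in service"
    related_cis: list = field(default_factory=list)    # IDs of dependent or supporting CIs
    releases: list = field(default_factory=list)       # release IDs applied to this CI

# Record a new release against a CI once change management has approved it;
# the contents of the release itself would be held in the DSL.
web_server = ConfigurationItem(ci_id="srv-0042", ci_type="server",
                               related_cis=["app-order-entry"])
web_server.releases.append("REL-2005-02")
print(web_server)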
The following figure shows the main release activities and their close ties to
configuration management.
Capacity Management
The capacity management process involves ensuring that the capacity of the IT
infrastructure matches the evolving demands of the organization in the most cost-
effective and timely manner. The process encompasses:
■ Monitoring performance and throughput for IT services and the supporting
infrastructure components.
■ Tuning system components to make the most efficient use of existing resources.
■ Understanding the demands currently being made for IT resources and producing
forecasts for future requirements.
■ Influencing the demand for resources, perhaps in conjunction with financial
management.
■ Producing a capacity plan that enables the IT service provider to deliver services
of the quality defined in the SLAs.
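To make the forecasting activity listed above concrete, the following minimal sketch (illustrative only; real capacity planning uses richer workload models) fits a linear trend to historical utilization samples and estimates when a planning threshold would be reached:

def linear_trend(samples):
    """Least-squares slope and intercept for (period, utilization) samples."""
    n = len(samples)
    sx = sum(x for x, _ in samples)
    sy = sum(y for _, y in samples)
    sxx = sum(x * x for x, _ in samples)
    sxy = sum(x * y for x, y in samples)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return slope, intercept

def periods_until(threshold, samples):
    """Estimate how many periods until utilization reaches the threshold."""
    slope, intercept = linear_trend(samples)
    if slope <= 0:
        return None                     # no growth trend on this data
    return (threshold - intercept) / slope

# Monthly CPU utilization (percent) for the last six months:
history = [(1, 52), (2, 55), (3, 59), (4, 61), (5, 66), (6, 68)]
print(round(periods_until(80, history), 1))   # about 9.5, so roughly month 9 or 10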
The following figure illustrates the key inputs and outputs of capacity management.
[Table: levels of capacity management (Level, Description)]
The capacity planning process should take into account that all three levels influence
each other.
Incident Management
The incident management process involves activities associated with service
disruptions. The primary goal of the incident management process is to restore
normal service operation as quickly as possible, minimizing the adverse impact on
business operations and ensuring that the best possible levels of service quality and
availability are maintained. Normal service operation means that services are operating within the limits defined in the SLAs.
The following figure shows the key activities of incident management and its
relationships with other process components, including configuration management,
problem management, and change management.
[Figure: incident management activities (inputs from service request procedures, the service desk, operations, networking procedures, and monitoring agents/probes; investigate and diagnose; resolve and recover; closure; ownership, monitoring, tracking, and communication; supported by the problem/error database and the CMDB)]
Service Desk
The service desk process involves a central point of contact for handling customer,
user, and related issues to meet customer and business objectives. This function is
known under several possible names (or their variants), including:
■ Service Desk
The service desk extends the range of services and offers a more global-focused
approach, allowing business processes to be integrated into the service management
infrastructure. It handles incidents, problems, and questions. The service desk also
provides an interface for other activities, such as customer change requests,
maintenance contracts, software licenses, service level management,
configuration management, availability management, financial management for IT
services, and IT service continuity management.
The service desk is customer-facing and its main objectives are to drive and improve
service to—and on behalf of—the organization. At an operational level, its objective
is to provide a single point of contact that dispenses advice, guidance, and the rapid
restoration of normal services to its customers and users.
The roles and responsibilities of the service desk are dependent on the nature of the
organization's business and of the support infrastructure in place. For most
organizations, a primary role is the recording and management of all incidents that
affect the operational service delivered.
As a single point of contact, it is important that the service desk, at a minimum, provide
the customer with a status update on service availability and any request being
managed by the service team, including the incident number for use in future
communication. Status update information can include:
■ Likely request completion time
■ When an equipment move or installation is scheduled
■ When a new release is planned
■ Status on service enhancements
■ Where to get further information on a subject
■ Whether the computer systems are available at a given time
The key benefit of having a service desk lies in the communication it provides
between service customers and the support teams—keeping customers informed
while they are being helped. The service desk communicates the status,
while the support organization focuses on doing the work to fix the problem or
otherwise fulfill requests.
Improve IT Services
The improve IT services process category addresses all activities surrounding the
measurement and optimization of IT service activities with the goal of continuously
improving service levels.
ITIL has included many of these components in each process, but problem
management is the focal point for root cause analysis and the prevention of issues.
Sun has developed Sun℠ Sigma to formalize a methodology to facilitate process
improvement—in general and specifically in the IT environment. In combination,
problem management and continuous process improvement (Sun Sigma) create a
solid foundation to facilitate continuous service level improvement.
Problem Management
The problem management process involves:
■ minimizing the adverse impact on the organization of incidents and problems that
are caused by errors within the IT infrastructure
■ preventing the recurrence of incidents related to these errors
In order to achieve this goal, problem management seeks to get to the root cause of
incidents and then initiate actions to improve or correct the situation.
The problem management process has both reactive and proactive aspects.
■ Reactive problem management is concerned with solving problems in response to
one or more incidents.
■ Proactive problem management is concerned with identifying and solving problems
and known errors before incidents occur in the first place.
Sun Sigma is the core methodology that Sun is using to achieve industry-leading
availability and quality. Sun Sigma drives key processes with data about critical
customer requirements. Sigma is the term used in statistical analysis for variation from the mean (the standard deviation).
Sun Sigma refers to a methodology commonly known as Six Sigma (see
http://www.isixsigma.com/). The objective of Sun Sigma is to completely satisfy customer
requirements profitably. We call it Sun Sigma because not all customers will require
all of the processes to yield products or services at 6 sigma (that is, 3.4 defects per
million opportunities, or DPMO). The real challenge is to more thoroughly
understand customer requirements and plan the sigma levels of the products,
services, and processes accordingly.
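As a small worked example (added here for illustration, with hypothetical counts), defects per million opportunities is calculated from the observed defects, the number of units, and the defect opportunities per unit:

def dpmo(defects, units, opportunities_per_unit):
    """Defects per million opportunities."""
    return defects * 1000000.0 / (units * opportunities_per_unit)

# Hypothetical counts: 7 failed changes out of 500, each change having 4 defect opportunities.
print(dpmo(7, 500, 4))   # 3500.0 DPMO, well short of the 3.4 DPMO associated with 6 sigma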
Control IT Services
The control IT services process category involves ensuring that IT services are
delivered within the constraints identified by the governing body and includes the
processes that facilitate the governing activities. Examples of governing functions
are: financial controls, audit, alignment with organizational objectives, and so on.
The control process category includes the following ITIL based processes:
■ IT Financial Management
■ Configuration Management
■ Change Management
IT Financial Management
The IT financial management process involves controlling the monetary aspects of the
organization. It supports the organization in planning and executing its business
objectives and requires consistent application throughout the organization to achieve
maximum efficiency and minimum conflict.
[Table: IT financial management processes (Process, Description)]
[Figure: IT financial management (business requirements, IT operational plan including budgets, cost analysis, and charges)]
In this diagram, it is assumed that charging for IT services might be desirable. This
could be considered by large IT service providers, such as Internet Service Providers
(ISPs), but the process of charging becomes less effective for smaller organizations.
IT must be able to justify its cost in relation to the business objectives at any time.
IT financial management sets out to provide that capability.
[Figure: the CMDB shared by the incident management, problem management, change management, and release management processes]
Change Management
The change management process involves ensuring that standardized methods and
procedures are used for efficient and prompt handling of all changes, with the goal
of minimizing the impact of change-related incidents upon service quality and,
consequently, improving the day-to-day service delivery of the IT organization.
Note that change management processes need to have high visibility and open
channels of communication in order to promote smooth transitions while changes
are occurring.
Change management is responsible for managing its interfaces with other business
and IT functions. The following figure shows a sample process model of change
management. This is just one example—the way in which an organization decides to
implement the change management process is, to a large extent, driven by the
available resources (time, priorities, people, and budget).
[Figure: sample change management process model (urgent and standard change paths, accept/reject decision points, and the backout process)]
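As an illustrative sketch only (the states and transitions below are a common pattern, not the specific model shown in the figure), a change record can be driven through a small state machine that enforces the accept/reject decision and the backout path:

# Allowed transitions for a change record; illustrative, not a prescribed state model.
TRANSITIONS = {
    "submitted":   {"accepted", "rejected"},
    "accepted":    {"scheduled"},
    "scheduled":   {"implemented"},
    "implemented": {"reviewed", "backed_out"},
    "backed_out":  {"submitted"},      # a failed change can be reworked and resubmitted
    "reviewed":    set(),              # closed after post-implementation review
    "rejected":    set(),
}

class ChangeRecord:
    def __init__(self, change_id):
        self.change_id = change_id
        self.state = "submitted"
        self.history = ["submitted"]

    def move_to(self, new_state):
        if new_state not in TRANSITIONS[self.state]:
            raise ValueError(f"{self.change_id}: cannot go from {self.state} to {new_state}")
        self.state = new_state
        self.history.append(new_state)

rfc = ChangeRecord("RFC-1024")
for step in ("accepted", "scheduled", "implemented", "reviewed"):
    rfc.move_to(step)
print(rfc.history)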
Protect IT Services
The protect IT services process category involves ensuring that IT services are still
available under extraordinary conditions, such as catastrophic failures, security
breaches, unexpected heavy loads, and so on. This area has become increasingly
important as organizations depend more and more on IT services. Therefore,
implementing IT service protection at the right levels is crucial to an organization’s
strength and survival.
The following figure shows the Business Continuity Lifecycle. Note that disaster
recovery procedures are only a part of this process.
[Figure: Business Continuity Lifecycle (initiate BCM, risk assessment, BC strategy, recovery plan development, develop procedures and initial tests, training, test, and assurance)]
Security Management
The security management process, as defined by ITIL, is the process of managing a
defined level of security for information and IT services, including the reaction to
security incidents. Security management is more comprehensive than physical
security and password disciplines. It includes other core aspects, such as data
integrity (financial aspects), confidentiality (intelligence agencies/defense), and
availability (health care).
In this document, information security incidents are defined as events that can cause
damage to the confidentiality, integrity, or availability of information or information
processing. These incidents materialize as accidents or deliberate acts.
[Figure: the sequence of security measures (prevention/reduction, detection, repression, correction, and evaluation)]
Detection is the ability to notice the fact that a security incident is in progress or has
occurred. Once detected, repressive measures can then be taken to counteract the
attempt, such as virus detection software with quarantine options, or account lock-
outs after numerous failed login attempts.
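As a minimal sketch of one such repressive measure (illustrative only; the threshold is an assumption), an account lockout can be implemented as a counter of consecutive failed login attempts:

# Illustrative account lockout: lock an account after a fixed number of consecutive failures.
MAX_FAILURES = 5
failed_attempts = {}   # account name -> consecutive failed login count
locked_accounts = set()

def record_login(account, success):
    if account in locked_accounts:
        return                                # already locked; incident handling takes over
    if success:
        failed_attempts[account] = 0          # a successful login resets the counter
        return
    failed_attempts[account] = failed_attempts.get(account, 0) + 1
    if failed_attempts[account] >= MAX_FAILURES:
        locked_accounts.add(account)          # repressive measure: lock the account

for _ in range(5):
    record_login("jsmith", success=False)
print("jsmith" in locked_accounts)   # True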
If damage occurs, corrective procedures are activated. For example, virus scanning
software may offer options to repair infected files, or a corrupted database may be
restored (or rebuilt). A key function of security management is the evaluation of these
measures and the subsequent suggestion of changes (if needed) to the preventive,
reductive, repressive, and corrective measures.
The following figure shows how the security management process is a continuous
improvement process driven by Service Level Agreements as they are defined in
each ITIL process.
[Figure: security management as a continuous improvement cycle driven by SLA reporting (Plan: SLA, OLA, underpinning contracts, policy; Control: organize, framework, allocate responsibilities; Implement: awareness, classification, access rights, incident handling; Evaluate: audits, assessments, incidents; Maintain: learn, improve)]
Chapter 6—Sun IT Management Framework—Tools
This chapter describes the tools aspect of the Sun IT Management Framework.
It contains the following sections:
■ Overview of the Sun ITMF Tools Aspect
■ Tools of the Sun ITMF Tools Aspect
■ Instrumentation Types
■ Element and Resource Management Applications
■ Event and Information Management Applications
■ Service Level Management Applications
■ Workflow and Portal Systems
Overview of the Sun ITMF Tools Aspect
The tools aspect of the Sun ITMF describes the technology used to facilitate and
automate the execution of the various IT management processes.
However, practical considerations make a purely manual approach less than optimal and—in
some cases—impossible to implement.
Poor Scalability
The manual inspection approach is limited by some maximum number of servers
per operator. The time required to complete a manual review of all servers within
the operator's area of responsibility is a function of the number of servers. Even if
the operator only reviews system logs, the time between successive reviews of the
same server grows as servers are added.
Human Error
The manual inspection approach involves human error. Humans are not capable of
performing any task with a 100 percent accuracy rate all of the time, especially when
repetitive and relatively mundane tasks, such as reviewing system logs, are
involved. An error condition that is caught one time by the operator might be
missed at other times. Therefore, this solution cannot guarantee consistency of
performance and integrity in the resulting information.
Automation addresses the scalability problem because each new server is provided a
copy of the script. The operator now just deals with the exception conditions.
Automation addresses the consistency issue because a well-written program
performs the same with each invocation. Once a script is capable of identifying an
error condition, it will always identify the condition.
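The following minimal sketch (illustrative; the log path and error patterns are assumptions, not taken from the original text) shows the kind of script being described: it scans a system log for error patterns and reports only the exception conditions to the operator:

import re
import sys

# Patterns that indicate an exception condition; these are assumptions for illustration.
ERROR_PATTERNS = [re.compile(p) for p in (r"\bpanic\b", r"\berror\b", r"\bfailed\b")]

def scan_log(path):
    """Return the log lines that match any error pattern."""
    exceptions = []
    with open(path, errors="replace") as log:
        for line in log:
            if any(p.search(line.lower()) for p in ERROR_PATTERNS):
                exceptions.append(line.rstrip())
    return exceptions

if __name__ == "__main__":
    # Example invocation: python scan_log.py /var/adm/messages
    for line in scan_log(sys.argv[1]):
        print(line)        # only exception conditions reach the operator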
Tools Framework
This figure shows what we refer to in this document and within the Sun ITMF as the
tools framework—a tiered (layered) combination of management applications
integrated as appropriate to support an associated set of processes. In addition to the
different layers of the framework, certain components (process and workflow
systems, and management portal components) provide functionality that spans
across the layers and exposes them to the external environment.
Note that the categorization of tools in this fashion does not mean that a specific
product cannot fill more than one role or have components that work at different
levels. For example, the Sun™ Management Center product spans multiple layers,
providing basic monitoring functionality for the Solaris environment (Element and
Resource Management layer).
Instrumentation Layer
The instrumentation layer of the tools framework consists of all management elements
that allow the various management tools to gain access to the system resources that
they manage. In the context of the Execution Framework, instrumentation is
generally implemented where managed resources reside. Tools in this layer are most
tightly coupled with the components of the managed environment and are most
directly impacted by changes to the managed environment. For example, for
different operating systems (such as Solaris, Linux, and Windows), different versions
of the same vendor's instrumentation are required.
The definitions in the following table help clarify these kinds of distinctions.
Management Portal
The management portal is a collection of applications that provides external entities
with access to selected portions of the tools framework. Examples include a web
interface for reviewing SLM reports, web or other types of user interfaces for the
various tools, or an application used by end users to submit requests for service. It
should also be possible—even desirable—to use this portal to expose management
information and facilities to people outside of the IT organization.
The following figure shows a detailed view of the Sun ITMF tools aspect.
Instrumentation Types
The instrumentation types tool category includes:
■ Agents
■ Probes
■ Ad Hoc Solutions
Agents
Agents are software entities within the execution framework that communicate with
management applications in the management framework using a defined protocol
and naming scheme for managed objects. Examples include a SNMP agent that ships
with a router, or a proprietary agent, such as the one that is part of the BMC
PATROL solution.
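As a hedged illustration of communicating with such an agent, the following sketch shells out to the standard net-snmp snmpget command to read a single managed object (sysUpTime). The host name and community string are assumptions for the example, and net-snmp is assumed to be installed.

import subprocess

def snmp_get(host, community, oid):
    """Return the raw snmpget output for one managed object, or None on error."""
    cmd = ["snmpget", "-v2c", "-c", community, host, oid]
    result = subprocess.run(cmd, capture_output=True, text=True)
    return result.stdout.strip() if result.returncode == 0 else None

# Example: read the agent's uptime (SNMPv2-MIB::sysUpTime.0).
print(snmp_get("router.example.com", "public", "1.3.6.1.2.1.1.3.0"))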
Probes
Probes are special-purpose management components (hardware and software) that
operate in the execution environment to perform specific management functions on
behalf of management applications. Probes are stand-alone devices, while agents are
generally installed on a component with another purpose. Examples include a
network device that provides SNMP Remote Monitoring (RMON) functionality, or a
special purpose computer that generates synthetic transactions for service level
testing.
Ad Hoc Solutions
Ad hoc solutions refer to scripts and executables that operate autonomously on
components within the execution framework. These components generally do not
communicate with, or act on behalf of, a management application.
The following figure shows the tools that reside in the element and resource
management layer.
Monitoring Tools
Monitoring tools sample the values of specific managed objects and compare these
values to a pre-defined threshold. In most cases, threshold violations result in the
generation of some type of notification (alarm). This monitoring includes both
simple activities (such as testing for network connectivity or CPU utilization), as
well as more complex actions (such as scanning a system log for a predefined
pattern). Examples of monitoring tools include Sun Management Center for the
Solaris environment, BMC PATROL for application layer entities, and Aprisma
Spectrum for the IP network layer.
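The core of any monitoring tool can be sketched as a sample-compare-alarm loop. The sampling function, threshold, and interval below are illustrative assumptions, not the behavior of any particular product.

import time

CPU_THRESHOLD = 90.0        # percent utilization that triggers an alarm (illustrative)
SAMPLE_INTERVAL = 60        # seconds between samples (illustrative)

def sample_cpu_utilization():
    """Placeholder sampler; a real tool would read this through its instrumentation."""
    return 42.0

def monitor(samples=3):
    for _ in range(samples):
        value = sample_cpu_utilization()
        if value > CPU_THRESHOLD:
            print(f"ALARM: CPU utilization {value:.1f}% exceeds {CPU_THRESHOLD}%")
        time.sleep(SAMPLE_INTERVAL)

if __name__ == "__main__":
    monitor()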
Measurement Tools
Because the applications that process data reside in different parts of the tools
framework, it is necessary to collect this data and then make it available for use by
the applications. Measurement tools collect data from the managed environment and
then facilitate its movement to other tools within the tools framework. Measuring
differs from monitoring because the results of sampling activity are maintained.
Managing and moving the sometimes large amounts of data may require the use of
different instrumentation technology from what is being used to perform monitoring
activities. SNMPv1 is a useful protocol for monitoring solutions. However, the
nature of the protocol makes it inefficient for bulk data transfer. SNMP managed
devices might require a different mechanism to move performance data.
Measurement tools are not limited to the collection of performance data. They can
also capture configuration and asset information, such as hardware information,
installed software, patch levels, and so on.
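Where a monitoring tool discards each sample after the threshold test, a measurement tool retains it for later use by other applications. A minimal sketch, assuming a local CSV file as the collection point and an illustrative metric name:

import csv
import time

DATA_FILE = "perfdata.csv"      # illustrative collection point

def record_sample(metric_name, value, path=DATA_FILE):
    """Append one timestamped sample to the collection file."""
    with open(path, "a", newline="") as out:
        csv.writer(out).writerow([int(time.time()), metric_name, value])

# Example: record a CPU utilization sample for later transfer to other tools.
record_sample("cpu_utilization_percent", 42.0)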
Backup Tools
Backup tools provide a mechanism to copy, archive and, if necessary, recover
enterprise data. Backup systems can also manage backup media (tape management).
An example of a backup system is the Sun StorEdge™ Enterprise Backup Software.
Diagnostic Tools
Diagnostic tools are applications that facilitate data collection and test execution in
order to identify the root cause of an error condition. An example diagnostic system
is the Sun™ Management Center Hardware Diagnostic Suite.
Security Tools
Security tools work to prevent or detect unauthorized use of IT resources. Examples
include applications that monitor for intrusions, that assess the vulnerabilities of
different systems, and that perform digital forensics and data recovery activities.
Examples of security systems include tools such as Tripwire, COPS, and SATAN.
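As a simplified illustration of the detection idea behind a tool such as Tripwire, the following hypothetical sketch hashes a set of files and compares the results against a stored baseline. The monitored paths and baseline location are assumptions for the example.

import hashlib
import json
import os

MONITORED = ["/etc/passwd", "/etc/shadow"]      # illustrative file list
BASELINE_FILE = "baseline.json"                 # illustrative baseline location

def digest(path):
    """Return the SHA-256 digest of one file."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def save_baseline():
    with open(BASELINE_FILE, "w") as out:
        json.dump({p: digest(p) for p in MONITORED if os.path.exists(p)}, out)

def check_baseline():
    with open(BASELINE_FILE) as f:
        baseline = json.load(f)
    for path, expected in baseline.items():
        current = digest(path) if os.path.exists(path) else None
        if current != expected:
            print(f"INTEGRITY ALERT: {path} has changed or is missing")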
Distribution Tools
Distribution tools provide the mechanisms needed to transfer and install software,
such as OS images, patches, or application software, within the execution
framework. Examples of distribution tools include Sun Jumpstart™, Sun N1™ Grid,
and Computer Associates’ Unicenter Software Delivery.
Event and Information Management Applications
The following figure shows the tools that reside at the event and information
management layer of the tools framework.
Mediation Tools
Mediation tools bridge the gap between lower layer data collection mechanisms
(measurement tools) and external systems used for charge back or billing. Mediation
tools provide a means of taking performance data from a wide variety of sources
and providing the preprocessing necessary to allow the application of rating and
discount parameters by a billing system. Functions performed by the mediation tools
include the following (a minimal sketch follows the list):
■ Collection of detailed performance data from lower level management tools.
■ Processing of collected data to check for syntactical correctness and format as
necessary.
■ Summarizing the data at a level of detail necessary for the application of billing
policy.
■ Providing the resulting information (Call Detail Report or CDR) to the billing
systems.
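A minimal sketch of that pipeline, assuming raw usage records arrive as (customer, service, units) tuples and that rating and discounting are applied later by the billing system; the record format and field names are illustrative assumptions.

from collections import defaultdict

def mediate(raw_records):
    """raw_records: iterable of (customer, service, units) tuples."""
    totals = defaultdict(float)
    for record in raw_records:
        if len(record) != 3:
            continue                      # syntactical check: drop malformed records
        customer, service, units = record
        try:
            units = float(units)          # format the measurement field
        except (TypeError, ValueError):
            continue
        totals[(customer, service)] += units   # summarize to billing granularity
    # Emit summarized records for the billing system to rate and discount.
    return [{"customer": c, "service": s, "units": u} for (c, s), u in totals.items()]

print(mediate([("acme", "web-hosting", "12.5"), ("acme", "web-hosting", 7),
               ("acme", "storage", "bad-data")]))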
The people and process aspects of service delivery are covered in the people and
process aspects, respectively, of the Sun ITMF. In the tools aspect, it is the operation
of the hardware and software components that is managed at this level. The
following figure shows the components at the service level management layer.
Business and IT think differently about the technology being provided in the
management of services. For example, a user might focus on the perceived
responsiveness and reliability of a web application, while the IT staff might focus
on the performance and availability of the individual components that support it.
8. A Dictionary of IT Service Management Terms, Acronyms and Abbreviations, The IT Service Management Forum,
December 2001.
In both approaches, the goal is to evaluate the entire service chain so that the current
state of the service can be inferred with a high degree of accuracy. Service level
monitoring approaches that ignore critical components of the service should be
avoided. Note that the two approaches are not mutually exclusive—they can be used
together to provide service level monitoring. Services are deployed in support of
business processes, and true service level management solutions will include a
mechanism to assess business impact.
Transaction Generators
Transaction generator tools introduce workloads on a specific service and evaluate the
level of response received. The workload is designed to mimic the activities of a
service consumer. This testing enables the IT organization to track service
performance and to evaluate the service from the perspective of the end user. These
tools enable IT organizations to implement a user-centric approach to service level
testing and compliance monitoring. Examples of this technology include Sun
Management Center Service Availability Manager, Micromuse Internet Service
Monitor, Proxima Centauri, and Mercury Interactive LoadRunner.
9. Service Level Management for Enterprise Networks, Lundy Lewis, Artech House, Boston, 1999.
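A minimal sketch of a transaction generator, assuming the service under test is reachable over HTTP at a hypothetical URL and that response time is the measure of the level of response.

import time
import urllib.request

SERVICE_URL = "http://service.example.com/"    # hypothetical service endpoint
MAX_LATENCY = 2.0                        # seconds considered acceptable (illustrative)

def run_transaction(url=SERVICE_URL):
    """Issue one user-like request and report whether it met the latency target."""
    start = time.time()
    try:
        with urllib.request.urlopen(url, timeout=10) as response:
            response.read()
            ok = 200 <= response.status < 300
    except OSError:
        ok = False
    elapsed = time.time() - start
    return ok and elapsed <= MAX_LATENCY, elapsed

passed, latency = run_transaction()
print(f"synthetic transaction {'passed' if passed else 'failed'} in {latency:.2f}s")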
Workflow and Portal Systems
These tools are used to facilitate process automation and access to management
information. The following figure shows how these types of applications fit into the
tools framework.
Examples of service desk and process automation technology include the Remedy
Action Request System and the Network Associates Magic Help Desk. Process
integration examples include the Collaxa Business Process Execution Language
(BPEL) server and the Intalio N3 Business Process Management System.
Management Portals
Management portals are collections of applications that provide external entities with
access to selected portions of the management framework, such as a web interface
for reviewing SLM reports, web or other types of user interfaces for the various
tools, or an application used by end users to submit requests for service.
The management portal is a loosely defined concept within the Sun ITMF. In its
simplest form, it could be a collection of product specific web interfaces that provide
access to the various tools within the framework. In a more complex form, it could
be an integrated portal that consolidates access to the tools and their management
information across the entire framework.
OMCM Specification—Overview
This chapter provides an introduction to the components of the OMCM. It has the
following sections:
■ OMCM Levels and Profiles
■ Structure of the OMCM
Reaching OMCM Level 3 requires that the organization have an epiphany regarding
the nature of, and solutions for, its IT operational problems. The realization is that
ensuring the delivery of IT services requires a holistic approach that addresses all of
the components of operational capability.
Traceability from the business metrics to the IT metrics allows decisions and process
improvement in one area to be based on information from the other. For example,
direct traceability from revenue to a service availability number could be used to
evaluate IT expenditures that would increase service availability. An example of the
reverse would be the analyzing of data on how customers navigate a web site to
determine the effectiveness of different marketing messages.
Innovation in service delivery is now possible. For example, classes of service based
on more complex pricing models may be used. OMCM Level 5 is reached through
the application of continuous improvement methodologies such as Sigma.
The definition of each degree depends upon the component being discussed.
The characteristics of a functional IT operational process will be described differently
from the characteristics of a functional monitoring infrastructure. However, this
scoring mechanism allows us to use consistent terminology for all three portions of
the Sun ITMF, and it simplifies application of the model to real situations.
Once we describe the degree of implementation, we map the various degrees to the
OMCM levels. This mapping allows us to create a capabilities profile that describes
the degree of implementation for every component at a given OMCM level. This
profile is then used to determine an organization's OMCM level.
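The mechanics of that determination can be illustrated with a small sketch: the achieved level is the highest OMCM level whose required degree of implementation is met for every component. The component names, degree ordering, and profile values below are purely illustrative assumptions, not the published OMCM profiles.

# Illustrative sketch of using a capabilities profile to determine an OMCM level.
DEGREES = ["ad hoc", "emerging", "functional", "effective", "optimized"]
RANK = {d: i for i, d in enumerate(DEGREES)}

# Required degree of implementation per component at each OMCM level (illustrative).
PROFILE = {
    2: {"incident management": "emerging",   "process improvement": "ad hoc"},
    3: {"incident management": "functional", "process improvement": "emerging"},
    4: {"incident management": "effective",  "process improvement": "functional"},
}

def omcm_level(assessment):
    """Return the highest OMCM level whose required profile is fully met."""
    achieved = 1
    for level in sorted(PROFILE):
        required = PROFILE[level]
        if all(RANK[assessment.get(c, "ad hoc")] >= RANK[d] for c, d in required.items()):
            achieved = level
        else:
            break
    return achieved

print(omcm_level({"incident management": "functional",
                  "process improvement": "emerging"}))    # prints 3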
We realize that this approach is more complex than some other models that simply
describe each component as being at one of x maturity levels. We use this alternative
approach because we feel that various portions of the management infrastructure
develop at different times. Capturing the concept that components evolve at
different points in an organization's development requires a mechanism to
distinguish between individual component evolution and the capability level of the
organization.
For example, incident management and Sigma-based process improvement are
implemented to different degrees at OMCM Level 3. Incident management is at a
functional level, while the Sigma process is emerging. In some cases, the degree of
implementation for a given component may not change from level to level.
Component Heading
Name of the component.
Description
A brief description of the area under consideration. Most areas are described in detail
in the Sun ITMF chapters of this document.
Critical to Quality
Critical to Quality defines a list of key items that should be considered by
organizations as they evaluate the component, such as:
■ key enablers for success
■ specific items that must be in place in order to properly address the component
■ recommendations based on experience in implementing the component in
question
This list of examples, though certainly not comprehensive, shows how the scope of
these items can range from tactical to strategic.
Criteria
Criteria provides defining characteristics for each level of implementation, as shown
in the following table. Criteria is used to characterize the degree of implementation
for a given component.
Degree of
Implementation Criteria
Metrics
Metrics are predictive or descriptive measures that describe the component's level
of implementation. These values are provided as a means to help benchmark an
organization as it works on improving operational capability. The lists of metrics
are by no means exhaustive or definitive. However, in all cases, metrics represent
measurable quantities. Multiple data types are used, including:
■ numeric values (such as the number of service requests per day)
■ Boolean values that represent the presence or absence of a specific item or practice
(such as the existence of a management architecture document for the enterprise)
Capabilities Profile
The following table details the degree of implementation required at each OMCM
level.
At the end of each chapter, a summary table showing the capabilities profile for all of
the components is provided.
OMCM Specification—People
The people aspect of the OMCM identifies the key people-oriented activities that an
organization must understand and use to determine its current maturity level
against measurable criteria. This chapter describes the practice areas that can be
assessed for maturity, as well as the criteria to use to help move the organization to
the next desired OMCM level. This model can be used as the framework for
developing the organization using industry-accepted best practices.
This chapter provides the details for determining the degree of implementation for
each of the organization or people components of the management framework.
It includes the following sections:
■ Organizing
■ Resourcing
■ Skills Development
■ Workforce Management
■ Knowledge Management
■ Summary of the People Capabilities Profile
Organizing
The organizing practice category aligns the work being done with the goals of the
organization. Any work that does not support the organization's goals can be
identified and eliminated. The roles and responsibilities needed to achieve these
goals are identified, and the overall IT organization is structured around them. The
organizational structure is defined down to the individual workgroups, and all
workgroup interfaces are identified.
Communication / Coordination
Description
Establishes a culture for openly sharing information across organizational levels and
among dependent workgroups.
Critical to Quality
■ Information is shared effectively across the entire organization.
■ Individuals and workgroups coordinate their activities to effectively accomplish
their objectives.
■ Communication and coordination practices are institutionalized to ensure that
they are performed as defined by organizational practices.
Criteria
Use the following criteria to determine the degree of implementation.
Degree of
Implementation Criteria
Metrics
■ Roles and team structure are defined and documented.
■ Individual responsibilities are identified, as are task gaps and overlaps across
workgroups within the IT organization.
Capabilities Profile
The following degrees of implementation are expected at each capability level of the
OMCM.
TABLE 8-2 Communication / Coordination—Capabilities Profile
Workgroup Development
Description
Defines common workgroup methods and procedures used to perform standard
activities.
Critical to Quality
■ Workgroups are established to optimize the performance of interdependent work.
■ Workgroup staffing activities focus on the assignment, development, and future
deployment of the workforce competencies.
■ Workgroup development practices are institutionalized to ensure that they are
performed as defined by organizational processes.
Criteria
Use the following criteria to determine the degree of implementation.
Degree of
Implementation Criteria
Metrics
■ Organizational objectives and goals are known and understood.
■ Work activities are mapped to organizational objectives and goals.
■ Work activities are categorized as core, support, or boundary.
■ The required skills needed to perform assignments are documented.
■ Role-based learning paths are defined.
■ Individual and team skill gaps are identified, and training plans are in place.
■ Collaboration tools and methodologies are in place and used between
workgroups.
Capabilities Profile
The following degrees of implementation are expected at each capability level of the
OMCM.
Workforce Planning
Description
Ties the organization's workforce activities directly to its business strategy and
objectives.
Critical to Quality
■ Measurable objectives for capability for each of the workforce competencies are
defined.
■ The organization plans for the workgroup competencies needed to perform their
current and future business activities.
■ Workforce planning practices are institutionalized to ensure that they are
performed as defined by the organizational practices.
Criteria
Use the following criteria to determine the degree of implementation.
Degree of
Implementation Criteria
Capabilities Profile
The following degrees of implementation are expected at each capability level of the
OMCM.
Participatory Culture
Description
A participatory culture begins with providing individuals an understanding of the
organization's goals and how their participation contributes to achieving them.
Critical to Quality
■ Information about the business activities and results is communicated throughout
the organization.
■ Decisions are delegated to the appropriate level.
■ Participatory culture practices are institutionalized to ensure that they are
performed as defined by organizational processes.
Criteria
Use the following criteria to determine the degree of implementation.
Degree of
Implementation Criteria
Metrics
■ Roles and team structure are defined and documented.
■ Individual responsibilities, as well as task gaps and overlaps across workgroups,
are identified within the IT organization.
Capabilities Profile
The following degrees of implementation are expected at each capability level of the
OMCM.
Empowered Workgroups
Description
Empowering workgroups involves preparing individuals to work independently
within the constraints of the organizational goals and objectives.
Critical to Quality
■ Empowered workgroups are delegated responsibility and authority over their
work processes.
■ Empowered workgroup practices are institutionalized to ensure that they are
performed as defined by organizational processes.
Criteria
Use the following criteria to determine the degree of implementation.
Degree of
Implementation Criteria
Ad Hoc • No formal processes are in place that allow for the delegation of
responsibility and authority over workgroup processes.
Emerging • The organization begins the development and performance of
empowered workgroups.
• Empowered workgroups are formed with a statement of their charter
and authority for completion.
Functional • Empowered workgroups are delegated the responsibility and authority
to determine the methods they will use to accomplish committed work.
• The organization’s workforce practices are tailored for use with
empowered workgroups.
Effective • Responsibility and authority for performing selected workforce
activities is delegated to empowered workgroups.
• Empowered workgroups perform the workforce activities delegated to
them.
Optimized • Empowered workgroups participate in managing their performance.
Metrics
■ Job responsibilities and criteria for job qualification and performance are defined.
■ Job requirements and expectations for job performance are documented and are
aligned with the organization's objectives.
■ Job descriptions are in place.
Capabilities Profile
The following degrees of implementation are expected at each capability level of the
OMCM.
Competency Integration
Description
Creates efficiencies for each workgroup by integrating all of the processes of the
individual workgroups.
Critical to Quality
■ The competency-based processes of the various workgroups are integrated to
improve overall organizational efficiency.
■ Competency integration practices are institutionalized to ensure that they are
performed as defined in the organizational processes.
Criteria
Use the following criteria to determine the degree of implementation.
Degree of
Implementation Criteria
Metrics
■ Skill profiles related to customer specific work, goals and IT environment are
developed and in place.
■ Areas where skill levels do not meet requirements and expectations both for the
individual and team are identified.
■ Training plans are in place, and defined education and training events are
implemented.
Capabilities Profile
The following degrees of implementation are expected at each capability level of the
OMCM.
TABLE 8-12 Competency Integration—Capabilities Profile
Organizational Performance Alignment
Description
Aligns the performance results of individuals and workgroups with the stated goals
and objectives of the organization and the business.
Critical to Quality
■ Alignment among individuals, workgroups, and the organization is continuously
improved.
■ Measurable objectives are defined for the individual, workgroup, and
organization.
■ Organizational performance alignment practices are institutionalized to ensure
that they are performed as designed by organizational processes.
Criteria
Use the following criteria to determine the degree of implementation.
Degree of
Implementation Criteria
Metrics
■ Job responsibilities and criteria for job qualification and performance are defined.
■ Job requirements and expectations for job performance are documented and are
aligned with the organization's objectives.
■ Job descriptions, staffing level estimates, and team structure are defined and in
place.
Capabilities Profile
The following degrees of implementation are expected at each capability level of the
OMCM.
Resourcing
The resourcing practice category focuses on hiring or selecting the right people with
the right skills to achieve an organization’s goals and objectives. It also focuses on
retaining top talent within the organization. Replacement costs alone can be
substantial. More importantly, hiring candidates who do not possess the right skills
can impede organizational progress and productivity. Selection tools and good
practices are created to help managers make better decisions about who is the most
qualified candidate for the job. Retention programs address the source of turnover
problems and target interventions to avoid loss of critical IT personnel.
Staffing
Description
Establishes a formal process in which committed work is matched to existing
workgroups, and qualified individuals are recruited, hired, and placed into
assignments.
Critical to Quality
■ Staffing decisions and work assignments are based on an assessment of work
qualifications and other valid criteria.
■ Individuals transition into and out of positions in an orderly way.
■ Staffing practices are institutionalized to ensure that they are performed as
managed processes.
Criteria
Use the following criteria to determine the degree of implementation.
Degree of
Implementation Criteria
Metrics
■ Job responsibilities and criteria for job qualification and performance are defined.
■ Job requirements and expectations for job performance are documented and are
aligned with the organization's objectives.
■ Selection tools that measure key job qualifications—performance indicators/
aptitude tests, knowledge tests, and structured interview guides—are developed
and administered.
Capabilities Profile
The following degrees of implementation are expected at each capability level of the
OMCM.
TABLE 8-16 Staffing—Capabilities Profile
Competency Analysis
Description
Identifies the workgroup competencies needed to perform the business activities
that the workgroup services. Such competencies are required to fulfill the needs of
the business. For example, a workgroup might need to provide tools, either through
buying or developing them. Workgroup competency descriptions are periodically
reviewed to ensure that they still support the business activities.
Critical to Quality
■ The workforce competencies required to perform the organization's business
activities are defined and updated as necessary.
■ The organization tracks its competencies in each of the workgroups.
■ Competency analysis practices are institutionalized to ensure that they are
performed as defined by the organizational practices.
Criteria
Use the following criteria to determine the degree of implementation.
Degree of
Implementation Criteria
Metrics
■ Work activities are mapped to organizational objectives and goals and are
categorized as core, support, or boundary.
■ Job responsibilities and criteria for job qualification and performance are defined.
■ Skill profiles are developed and in place.
■ Areas where skill levels do not meet requirements and expectations both for the
individual and team are identified.
■ Skills development and employee performance results are managed, tracked,
measured and periodically reviewed.
Capabilities Profile
The following degrees of implementation are expected at each capability level of the
OMCM.
TABLE 8-18 Competency Analysis—Capabilities Profile
Organizational Capability Management
Description
The level of skills, knowledge, and process abilities available within the workgroups
to perform the committed work. The analysis is performed at the individual level to
determine the total skills, knowledge, and process abilities available within the
workgroups.
Critical to Quality
■ The impact of workgroup practices and activities on the capabilities of
competency-based processes (in critical workgroup competencies) is evaluated
and measured.
■ Organizational capability management practices are institutionalized to ensure
that they are performed as defined by organizational practices.
Criteria
Use the following criteria to determine the degree of implementation.
Degree of
Implementation Criteria
Metrics
■ Job responsibilities and criteria for job qualification and performance are defined.
■ Job requirements and expectations for job performance are documented and are
aligned with the organization's objectives.
■ Selection tools that measure key job qualifications—performance indicators/
aptitude tests, knowledge tests, and structured interview guides—are developed
and administered.
■ Skills development and employee performance results are managed, tracked, and
measured.
Capabilities Profile
The following degrees of implementation are expected at each capability level of the
OMCM.
TABLE 8-20 Organizational Capability Management—Capabilities Profile
Continuous Capability Improvement
Description
The purpose of continuous capability improvement is to provide a foundation for
individuals and workgroups to continuously improve their capability for performing
competency based processes.
Critical to Quality
■ The organization establishes and maintains mechanisms for supporting
continuous improvement of its competency-based processes.
■ The capabilities of competency-based processes are continuously improved.
■ Continuous capability improvement practices are institutionalized to ensure that
they are performed as defined by organizational processes.
Criteria
Use the following criteria to determine the degree of implementation.
Degree of
Implementation Criteria
Metrics
■ Skills development and employee performance results (paper or system) are
managed, tracked, measured, and periodically reviewed.
Capabilities Profile
The following degrees of implementation are expected at each capability level of the
OMCM.
TABLE 8-22 Continuous Capability Improvement—Capabilities Profile
Skills Development
The skills development practice category includes:
■ performing skills analysis at the individual and team levels
■ identifying skill gaps based upon the defined role(s) for which the individual and
team is responsible
■ providing learning events (such as training, mentoring, and coaching) to fill
identified skill gaps
■ certifying skills and knowledge
Using the results from the skills analysis, skill development plans and learning paths
are developed for the individual and the team to help them gain the skills necessary
to perform their roles. The learning path allows the individual to obtain the skills
necessary for projected future roles. Skills analyses are repeated periodically to help
ensure that the required skills have been obtained.
Training and Development
Description
The purpose of training and development is to close the gaps between an
individual's current skills and those necessary to perform their assignments.
Training plans are developed that prioritize the critical skills needed to perform the
assignment. The results of the training plan are tracked in the workgroup's training
plan.
Critical to Quality
■ Individuals receive timely training that is needed to perform their assignments in
accordance with the workgroup's training plan.
■ Individuals capable of performing their assignments pursue development
opportunities that support their development objectives.
■ Training and development practices are institutionalized and performed as a
managed process.
Criteria
Use the following criteria to determine the degree of implementation.
Degree of
Implementation Criteria
Capabilities Profile
The following degrees of implementation are expected at each capability level of the
OMCM.
Career Development
Description
Career development is used to enable the individual to see the organization as a
vehicle for achieving their career goals. It ensures that the individual develops
workforce competencies that will allow them to achieve their career goals.
Critical to Quality
■ The organization provides career opportunities to encourage growth in their
workgroup competencies.
■ Individuals pursue career opportunities that increase the value of their
knowledge, skills, and process abilities to the organization.
■ Career development practices are institutionalized to ensure that they are
performed as a managed process.
Degree of
Implementation Criteria
Metrics
■ Succession plan is in place.
■ Customer-specific skill profiles are developed.
■ Existing skill levels are analyzed.
■ Individual development plans are created.
■ Targeted training is implemented.
Capabilities Profile
The following degrees of implementation are expected at each capability level of the
OMCM.
Competency Development
Description
The purpose of competency development is for the organization to constantly
enhance the ability of the workgroups to deliver on the assigned business objectives.
Critical to Quality
■ Individuals develop their knowledge, skills, and process abilities in the
workgroup competencies.
■ Workgroups use their collective skills to develop the skills of other members of the
workgroup.
■ Skills development practices are institutionalized to ensure that they are
performed as defined by the organizational processes.
Criteria
Use the following criteria to determine the degree of implementation.
Degree of
Implementation Criteria
Metrics
■ Customer-specific skill profiles are developed.
■ Existing skill levels are analyzed.
■ Individual and team development plans are created.
■ Targeted training is implemented.
■ Internally driven mentoring and coaching programs are established.
Capabilities Profile
The following degrees of implementation are expected at each capability level of the
OMCM.
Mentoring
Description
Mentoring transfers the knowledge and expertise of more experienced individuals or
individuals with scarce skills to other members of the workgroup.
Critical to Quality
■ Mentoring programs are established and maintained to accomplish defined
objectives.
■ Mentors provide training and/or guidance to individuals and workgroups.
■ Mentoring practices are institutionalized to ensure that they are delivered
according to organizational processes.
Criteria
Use the following criteria to determine the degree of implementation.
Degree of
Implementation Criteria
Capabilities Profile
The following degrees of implementation are expected at each capability level of the
OMCM.
Workforce Management
The workforce management practice category involves managing the day to day
administration of employees, including compensation, work environment,
communication, training plans, providing feedback through performance reviews,
and other activities targeting employee needs.
Work Environment
Description
The purpose is to establish and maintain the physical working environment and to
provide the resources needed for individuals to perform their tasks effectively and
without unnecessary distractions.
Criteria
Use the following criteria to determine the degree of implementation.
Degree of
Implementation Criteria
Metrics
■ Work space and ergonomic assessments are completed with corrective actions.
Capabilities Profile
The following degrees of implementation are expected at each capability level of the
OMCM.
Staff Performance Management
Description
Objectives are established at the individual level and are based on the workgroup's
objectives needed to achieve their committed work. Periodic reviews are conducted
with the individual to assess achievement and continuing relevance of the objectives.
Critical to Quality
■ Workgroup and individual objectives are documented to ensure that business
objectives are accomplished.
■ Periodic reviews of objectives are conducted.
■ Performance problems are managed.
■ Reward and recognition occurs.
■ Staff performance management practices are institutionalized to ensure that they
are performed as managed processes.
Criteria
Use the following criteria to determine the degree of implementation.
Degree of
Implementation Criteria
Metrics
■ Job responsibilities and criteria for job qualification and performance are defined
and documented.
■ Job requirements and expectations for job performance are documented and are
aligned with the organization's objectives.
■ A performance management program is in place.
Capabilities Profile
The following degrees of implementation are expected at each capability level of the
OMCM.
Compensation
Description
The purpose of compensation is to provide individuals with remuneration and
benefits commensurate with their contribution and value to the organization.
Critical to Quality
■ Compensation strategies and activities are defined, executed, and communicated.
■ Compensation is equitable relative to the skills, knowledge, and contribution to
the organization.
■ Compensation practices are institutionalized to ensure that they are performed as
managed processes.
Criteria
Use the following criteria to determine the degree of implementation.
Degree of
Implementation Criteria
Metrics
■ A compensation program, which includes base salary, variable pay, equity
offerings and benefits, is in place.
Capabilities Profile
The following degrees of implementation are expected at each capability level of the
OMCM.
Quantitative Performance Management
Description
A quantitative performance management strategy is developed to identify, measure,
and analyze the competency-based processes that contribute to the achievement of
workgroup objectives.
Critical to Quality
■ Measurable performance objectives are established for the competency-based
processes that most effectively contribute to achieving workgroup objectives.
■ Metrics exist to manage competency-based processes.
■ Quantitative performance management practices are institutionalized to ensure
that they are performed as defined by organization processes.
Criteria
Use the following criteria to determine the degree of implementation.
Degree of
Implementation Criteria
Capabilities Profile
The following degrees of implementation are expected at each capability level of the
OMCM.
Competency-Based Practices
Description
The purpose of competency-based practices is to ensure that all practices are based
on developing the competencies of the workgroups.
Critical to Quality
■ Workgroup practices are focused on increasing the organization's capability in its
workgroup competencies.
■ Compensation strategies and reward and recognition practices are designed to
encourage the development and application of the organization's workgroup
competencies.
■ Competency-based practices are institutionalized to ensure that they are
performed according to organizational processes.
Criteria
Use the following criteria to determine the degree of implementation.
Degree of
Implementation Criteria
Metrics
■ A performance management program is in place.
■ Processes/tools are in place to assist managers and individual contributors in
documenting the outcomes of performance mapping, development planning
discussions, and skills development activities.
■ Selection tools that measure key job qualifications—performance indicators/
aptitude tests, knowledge tests, and structured interview guides—are developed
and administered.
■ A compensation program, which includes base salary, variable pay, equity
offerings, and benefits, is in place.
Competency-Based Assets
Description
Competency-based assets capture the knowledge, experience, and artifacts
developed in performing competency-based processes within an organization.
Critical to Quality
■ The knowledge, experience, and artifacts resulting from performing competency-
based processes are developed into competency-based assets.
■ Competency-based assets are deployed and used.
■ Competency-based asset practices are institutionalized to ensure that they are
performed as defined by organizational processes.
Criteria
Use the following criteria to determine the degree of implementation.
Degree of
Implementation Criteria
Metrics
■ A standard documentation set has been defined.
■ Documents are accessible via a repository, such as a web site, a collaboration tool
or a content management system.
■ A documentation set is managed, reviewed, and updated periodically.
Capabilities Profile
The following degrees of implementation are expected at each capability level of the
OMCM.
Continuous Workforce Innovation
Description
Continuous workforce innovation involves establishing processes for proposing
improvements in workgroup activities, identifying needs for new practices and
technologies, and implementing the most beneficial ones across the organization.
Critical to Quality
■ The organization establishes and maintains mechanisms for continuous
improvement of its workgroup practices and technologies.
■ Innovative or improved workgroup practices and technologies are identified and
evaluated.
■ Innovative or improved workgroup practices and technologies are deployed in an
orderly manner.
■ Continuous workforce innovation practices are institutionalized to ensure that
they are performed as defined by organizational processes.
Criteria
Use the following criteria to determine the degree of implementation.
Degree of
Implementation Criteria
Metrics
■ A process improvement plan is defined and in place.
■ A community of practice is established for the purpose of exchanging, creating,
and extending best practices and tools.
Capabilities Profile
The following degrees of implementation are expected at each capability level of the
OMCM.
OMCM Specification—Process
The process aspect of the OMCM represents the actual IT management processes that
are needed to support the IT service life cycle. The process aspect describes
processes for creating, deploying, and managing IT services. This chapter explains
how to determine a certain level of OMCM operation capability for the process
aspect.
This chapter provides the details for determining the degree of implementation for
each of the process components of the management framework. It includes the
following sections:
■ Overview
■ Create IT Services
■ Implement IT Services
■ Deliver IT Services
■ Improve IT Services
■ Control
■ Protect IT Services
Overview
This section provides an overview of process criteria and defines key terms.
Process Maturity Criteria
Unlike the tools and people aspects of the OMCM, the criteria used to determine the
degree of implementation for a process are very similar across processes. Therefore,
the following table describes these criteria and, to be consistent with the format for
this document, each section will refer back to this table.
(3) Functional
• Definitions are complete but not 100% effective. They have been communicated and are understood by the IT organization.
• Roles are now well defined, with little overlap and no major gaps. The roles are also reflected in the organization, but authority is still informal or self-assigned.
• Guidelines: well defined, published, and communicated. Policies: defined, published, and communicated. Procedures: not all are completely defined; not necessarily up to date and not completely tested.
• Tools are effectively leveraged and they improve process quality.
• External links have been identified. Proper handoff procedures exist. The processes are loosely coupled with external systems.
(4) Effective
• Definitions are complete and propagated into the organization. They are well understood by all involved.
• Roles are well defined, with no overlap or gaps. They match the organization, and some areas also have formal authority to complete their responsibilities.
• Guidelines: well defined, published, and communicated. Policies: defined, published, and communicated. Procedures: all required exist and they are well maintained and tested.
• Tools are effectively leveraged and they improve process quality.
• External links have been identified. Proper handoff procedures exist. The processes are loosely coupled with external systems.
(5) Optimized
• Definitions are complete and propagated into the organization. They are well understood by all involved.
• Roles are well defined, with no overlap or gaps. They match the organization, and all areas also have formal authority to complete their responsibilities.
• Guidelines: well defined, published, and communicated. Policies: defined, published, and communicated. Procedures: all required exist and they are well maintained and tested.
• Tools are effectively leveraged and they improve process quality and efficiency.
• External links are well defined. Proper handoff procedures exist and are efficient. The processes are now tightly coupled with external systems.
Definitions
The following definitions are used to describe the documentation aspects of the
process maturity criteria:
Term Definition
Note that the degree of specificity increases as you move from guideline to policy to
procedure.
Create IT Services
The create IT services process category describes all processes related to the creation
of new services, including identifying, quantifying, architecting, and designing IT
services. It involves:
■ Determining what services are needed and desired for the IT customers.
■ Defining the relationship between IT customers and the IT service provider,
including the definition of Service Level Agreements (SLAs).
■ Addressing the processes that ensure the completeness of the IT service portfolio
and the alignment of the IT Services with each other.
■ All activities necessary to identify, quantify, architect, and design IT services.
To determine the level of capability, the key question is: Does IT deliver services
according to the SLAs, and do these SLAs reflect the business requirements?
This process category includes the following process areas, which are assessed to
determine the level of operational capability:
■ Service Level Management
Description
The service level management process involves:
■ planning, coordinating, drafting, agreeing, monitoring, and reporting on SLAs
■ the on-going review of service achievements to ensure that the required and cost-
justifiable service quality is maintained and gradually improved.
SLAs provide the basis for managing the relationship between the provider and the
IT customer. An SLA is a written agreement between the IT service provider and the
IT service customer(s). It defines the key service targets and responsibilities of both
parties.
The existence of SLAs is a sign of higher levels of OMCM. One cannot successfully
implement this process unless certain leading processes (such as problem and
change management) and skills and tools are in place.
Critical to Quality
Consider the following factors when determining the level of implementation:
■ Is service level management (SLM) well defined?
■ How well is the service level management process implemented?
■ Are operational level agreements in place with other internal suppliers (support
groups)?
■ Are underpinning contracts (UC) in place with external suppliers?
■ Is service level reporting in place?
■ Are tools in place to support SLM?
■ Is a service catalogue utilized?
■ Are there effective relationships with other IT service management disciplines?
■ Are service management review meetings held?
■ Do you have a service improvement process?
■ Are SLM KPIs/quality measures used?
■ Are service level management responsibilities well defined?
These questions are part of Sun's more comprehensive ITIL assessment, and
therefore Sun methodologies and tools are available to support answering of these
questions. To remain focused on the purpose of this document, some of the ITIL
assessment details have been omitted.
Metrics
The following KPIs and metrics can be used to judge the effectiveness and efficiency
of the SLM process:
■ What number or percentage of services are covered by SLAs?
■ Are underpinning contracts and OLAs in place for all SLAs and for what
percentage?
■ Are SLAs being monitored and are regular reports being produced?
■ Are review meetings being held on time and correctly minuted?
■ Is there documentary evidence that issues raised at reviews are being followed up
and resolved (for example, via an SIP)?
■ Are SLAs, OLAs, and underpinning contracts current, and what percentage are in
need of review and update?
■ What number or percentage of service targets are being met, and what is the
number and severity of service breaches?
■ Are service breaches being followed up effectively?
■ Are service level achievements improving?
■ Are customer perception statistics improving?
■ Are IT costs decreasing for services with stable (acceptable but not improving)
service level achievements?
Capabilities Profile
The following degrees of implementation are expected at each capability level of the
OMCM.
Description
Availability management is the process that manages key components of the
predictability and availability of IT services, assuring the ability of an IT service or
component to perform its required function at a stated instant or over a stated
period of time.
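A commonly used availability calculation expresses availability as the percentage of agreed service time during which the service was actually available. The formula choice and the figures below are illustrative assumptions rather than a definition taken from this document.

# Common availability calculation: percentage of agreed service time during which
# the service was actually available.  Figures are illustrative.
def availability_percent(agreed_service_hours, downtime_hours):
    if agreed_service_hours <= 0:
        raise ValueError("agreed service time must be positive")
    return 100.0 * (agreed_service_hours - downtime_hours) / agreed_service_hours

# Example: 720 agreed hours in a month with 4 hours of downtime.
print(f"{availability_percent(720, 4):.2f}%")   # 99.44%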
Critical to Quality
Consider the following factors when determining the level of implementation:
■ How well is availability management defined?
■ How well is the availability management process defined and executed?
■ How well is the cost of availability understood?
■ To what level is availability planning undertaken?
■ How well is the availability improvement process defined?
■ To what level is measurement and reporting implemented?
■ How well are methods and techniques employed within the availability
management process?
■ To what levels are tools used to support availability management?
■ How effective are relationships with other IT service management disciplines?
■ To what extent are availability management KPIs/quality measures used?
■ Are the availability manager's responsibilities well defined?
Criteria
For details, see TABLE 9-1, "Process Maturity Criteria for LOI Determination," on
page 138.
Metrics
The following KPIs and metrics can be used to judge the effectiveness and efficiency
of the availability process:
Capabilities Profile
The following degrees of implementation are expected at each capability level of the
OMCM.
Implement IT Services
The implement IT services process category encompasses the effort to properly roll
out a new or updated IT service once it has been created. This process category
includes the following process area, which is used to assess the level of operational
capability: Release Management.
Release Management
Description
The release management process manages releases, where a release is a collection of
authorized changes to an IT service. A release typically consists of a number of
problem fixes and enhancements to the service, the new or changed software
required, and any new or changed hardware needed to implement the approved
changes.
Release management works closely with the change management and configuration
management processes to ensure that the shared CMDB is kept up-to-date following
changes implemented by new releases, and that the content of those releases is
stored in the DSL. Hardware specifications, assembly instructions, and network
configurations are also stored in the DSL/CMDB.
Critical to Quality
Consider the following factors when determining the level of implementation:
■ How well is release management defined?
■ How well is the release policy defined?
■ How well is the DSL defined?
■ Is release planning well structured?
■ How well are releases designed, built and configured?
■ Is the acceptance process well defined?
■ How well is release rollout planning undertaken?
Criteria
For details, see TABLE 9-1, "Process Maturity Criteria for LOI Determination," on
page 138.
Metrics
The following KPIs and metrics can be used to judge the effectiveness and efficiency
of the release management process:
■ Releases are built and implemented on schedule and within budgeted resources.
■ Very low (preferably no) incidence of releases needing to be backed out due to
unacceptable errors.
■ Low incidence of build failures.
■ Secure and accurate management of the DSL with no evidence of software that
has not passed quality checks.
■ Compliance with all legal restrictions relating to bought-in software.
■ Accurate distribution of releases to all remote sites.
■ The on time implementation of releases at all sites.
■ No evidence of unauthorized reversion to previous versions at any site.
■ No evidence of use of unauthorized software at any site.
■ No evidence of unnecessary licence fee payments or wasted maintenance effort.
■ No evidence of wasteful duplication in release building.
■ Accurate and timely recording of all build, distribution, and implementation
activities within the CMDB.
■ The planned composition of releases matches the actual composition.
■ The number of problems in the live environment that can be attributed to new
releases.
■ The number of major and minor releases per reporting period.
Capabilities Profile
The following degrees of implementation are expected at each capability level of the
OMCM.
Deliver IT Services
The deliver IT services process category is the most visible part of the IT
organization's activities. This category addresses all activities that assure the proper
delivery and ongoing operation of the IT services, including efforts to assure
predictable, consistent service delivery. This is often referred to as IT operations or
data center operations.
This process category includes the following process areas, which are assessed to
determine the level of operational capability:
■ Capacity Management
■ Incident Management
■ Service Desk
Capacity Management
Description
The capacity management process is responsible for ensuring that the capacity of the
IT infrastructure matches the evolving demands of the organization in the most cost-
effective and timely manner. The process encompasses:
■ The monitoring of performance and throughput of IT services and the supporting
infrastructure components
■ Undertaking tuning activities to make the most efficient use of existing resources
■ Understanding the demands currently being made for IT resources and producing
forecasts for future requirements (a minimal forecasting sketch follows this list)
■ Influencing the demand for resources, perhaps in conjunction with financial
management.
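A minimal sketch of the forecasting element mentioned in the list above, assuming monthly utilization samples and a simple linear trend; real capacity planning would combine richer models with business forecasts, and the data and ceiling are illustrative assumptions.

# Fit a linear trend to monthly utilization samples and estimate when a
# capacity ceiling is reached.  Data and the 85% ceiling are illustrative.
def linear_fit(ys):
    """Least-squares slope and intercept for y values sampled at x = 0, 1, 2, ..."""
    n = len(ys)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
            sum((x - mean_x) ** 2 for x in xs)
    return slope, mean_y - slope * mean_x

utilization = [52.0, 55.5, 58.0, 61.5, 64.0, 67.5]   # percent, one sample per month
slope, intercept = linear_fit(utilization)
months_to_ceiling = (85.0 - intercept) / slope        # months until 85% utilization
print(f"trend: {slope:.1f}%/month; ~{months_to_ceiling:.1f} months to 85% utilization")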
Critical to Quality
Consider the following factors when determining the level of implementation:
■ Is capacity management well defined?
■ Is the “resource capacity management” sub-process well defined?
■ Is the “service capacity management” sub-process well defined?
■ Is the “business capacity management” sub-process well defined?
■ How well is capacity data managed?
■ How well is demand management defined?
■ How well are modelling activities undertaken?
■ How well is the capacity plan defined?
■ Is capacity management reporting implemented well?
■ To what degree are tools used to support capacity management?
■ Are there effective relationships with other IT service management disciplines?
■ Are capacity management KPIs/quality measures used?
■ Are capacity management responsibilities well defined?
These questions are part of Sun's more comprehensive ITIL assessment, and
therefore Sun methodologies and tools are available to support answering of these
questions. To remain focused on the purpose of this document, some of the ITIL
assessment details have been omitted.
Criteria
For details, see TABLE 9-1, "Process Maturity Criteria for LOI Determination," on
page 138.
Metrics
The following KPIs and metrics can be used to judge the effectiveness and efficiency
of the capacity management process:
Capabilities Profile
The following degrees of implementation are expected at each capability level of the
OMCM.
Incident Management
Description
The incident management process addresses the activities associated with service
disruption events. The primary goal of the incident management process is to restore
normal service operation as quickly as possible and minimize the adverse impact on
business operations, thus ensuring that the best possible levels of service quality and
availability are maintained. A normal service operation is defined as service
operation within the SLA limits.
These questions are part of Sun's more comprehensive ITIL assessment, and
therefore Sun methodologies and tools are available to support answering of these
questions. To remain focused on the purpose of this document, some of the ITIL
assessment details have been omitted.
Criteria
For details, see TABLE 9-1, "Process Maturity Criteria for LOI Determination," on
page 138.
Metrics
Measurable targets for objective metrics should be set for the effectiveness of the
incident management process. Consider including the following KPIs and metrics (a
sketch of computing a few of them follows the list):
■ Total number of incidents.
■ Mean elapsed time to achieve incident resolution or circumvention, broken down
by impact code.
■ Percentage of incidents handled within the agreed response time (incident
response-time targets may be specified in SLAs, for example, by impact code).
■ Average cost per incident.
■ Percentage of incidents closed by the service desk without reference to other
levels of support.
■ Incidents processed per service desk workstation.
■ Number and percentage of incidents resolved remotely, without the need for a
visit.
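A minimal sketch of computing a few of the KPIs listed above from a set of incident records; the record fields, targets, and sample data are illustrative assumptions.

# Compute a few incident-management KPIs from simple incident records.
incidents = [
    # (impact_code, resolution_minutes, met_response_target, closed_by_service_desk)
    ("high", 45, True, False),
    ("low", 20, True, True),
    ("low", 95, False, True),
]

total = len(incidents)
mean_resolution = sum(i[1] for i in incidents) / total
pct_within_target = 100.0 * sum(1 for i in incidents if i[2]) / total
pct_closed_by_desk = 100.0 * sum(1 for i in incidents if i[3]) / total

print(f"total incidents: {total}")
print(f"mean time to resolution: {mean_resolution:.1f} minutes")
print(f"handled within agreed response time: {pct_within_target:.0f}%")
print(f"closed by the service desk without escalation: {pct_closed_by_desk:.0f}%")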
Service Desk
Description
The service desk process involves a central point of contact for handling customer,
user, and related issues to meet customer and business objectives. This function is
known under several possible names (or their variants), including:
■ service desk
■ help desk
■ call centre
■ customer hot line
The service desk extends the range of services and offers a more globally focused
approach, allowing business processes to be integrated into the service management
infrastructure. It handles incidents, problems, and questions. The service desk also
provides an interface for other activities, such as customer change requests,
maintenance contracts, software licenses, service level management, configuration
management, availability management, financial management for IT services, and
IT service continuity management.
The service desk is customer-facing and its main objectives are to drive and improve
service to—and on behalf of—the organization. At an operational level, its objective
is to provide a single point of contact that dispenses advice, guidance, and the rapid
restoration of normal services to its customers and users.
Critical to Quality
Consider the following factors when determining the level of implementation:
These questions are part of Sun's more comprehensive ITIL assessment, and
therefore Sun methodologies and tools are available to support answering of these
questions. To remain focused on the purpose of this document, some of the ITIL
assessment details have been omitted.
Criteria
For details, see TABLE 9-1, "Process Maturity Criteria for LOI Determination," on
page 138.
Metrics
Measurable targets for objective metrics should be set to measure service desk
effectiveness. Consider the following KPIs and metrics:
■ Percentage of incidents closed by first level support, second level support, and so
on.
■ Telephony based KPIs (for example, average wait time, maximum wait time,
average call duration, or call abandon rate).
■ Number of management reports delivered on time.
■ Number of customer satisfaction surveys completed on time.
■ All customer complaints followed up and actioned.
■ Accurate and timely breakdown and workload analyses of the incident lifecycle,
produced by support group, third party, and so on.
■ Quantity of customer training needs identified.
Improve IT Services
The improve IT services process category addresses all activities surrounding the
measurement and optimization of IT service activities with the goal of continuously
improving service levels.
ITIL has included many of these components in each process, but problem
management is the focal point for root cause analysis and the prevention of issues.
Sun has developed SunSM Sigma to formalize a methodology to facilitate process
improvement—in general and specifically in the IT environment. In combination,
they create a solid foundation to facilitate continuous service level improvement.
This process category includes the following process areas, which are assessed to
determine the level of operational capability:
■ Problem Management
■ Continuous Process Improvement
Problem Management
Description
The problem management process involves:
■ minimizing the adverse impact of incidents and problems on the organization
that are caused by errors within the IT infrastructure
■ preventing the recurrence of incidents related to these errors
The problem management process has both reactive and proactive aspects.
■ Reactive problem management is concerned with solving problems in response to
one or more incidents.
■ Proactive problem management is concerned with identifying and solving problems
and known errors before incidents occur in the first place (a small illustration
follows this list).
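The illustration below is a hedged sketch of the proactive side only: configuration items with repeated incidents are flagged as problem candidates before a larger failure occurs. The incident data and the threshold of three recurrences are invented for the example.

# Illustrative proactive problem identification: flag configuration items
# (CIs) with repeated incidents as problem candidates. The data and the
# threshold of three incidents are assumptions for this sketch only.
from collections import Counter

incident_cis = ["db01", "web03", "db01", "db01", "web03", "app02", "db01"]
THRESHOLD = 3

recurrences = Counter(incident_cis)
for ci, count in recurrences.items():
    if count >= THRESHOLD:
        print("Raise problem record for %s (%d related incidents)" % (ci, count))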
Critical to Quality
Consider the following factors when determining the level of implementation:
■ How well is problem management defined?
■ Are problem control activities well defined?
■ Are error control activities well defined?
■ How well is proactive problem management executed?
■ Are problem metrics defined well?
■ To what degree are tools used to support problem management?
■ How effective are relationships with other IT service management disciplines?
■ To what extent are problem management KPIs/quality measures used?
■ Are the problem manager's responsibilities well defined?
These questions are part of Sun's more comprehensive ITIL assessment, and
therefore Sun methodologies and tools are available to support answering of these
questions. To remain focused on the purpose of this document, some of the ITIL
assessment details have been omitted.
Criteria
For details, see TABLE 9-1, "Process Maturity Criteria for LOI Determination," on
page 138.
Metrics
Measurable targets for objective metrics should be set to measure the effectiveness of
the problem management process. Consider including the following KPIs and
metrics:
■ The number of requests for change (RFCs) raised and the impact of those RFCs on
the availability and reliability of the services covered.
■ The amount of time worked on investigations and diagnoses per organizational
unit or supplier, split by problem types.
Capabilities Profile
The following degrees of implementation are expected at each capability level of the
OMCM.
Continuous Process Improvement
Description
Although ITIL understands the need for continuous process improvement, it has not
defined a separate discipline to address this important aspect. The Sun ITMF uses
the processes as defined by Sun internally. However, any Sun Sigma-based approach
should provide sufficient rigor and commitment to sufficiently address this area.
Sun Sigma is the core methodology that Sun is using to achieve industry-leading
availability and quality. Sun Sigma drives key processes with data about critical
customer requirements. Sigma is the term used in statistical analysis for variation
from perfection. Sun attains a common measurement of quality for any type of
process by using data to define and control process, and then measuring defects
across a project (or across the organization).
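As a hedged worked example of the defect-counting idea (not the Sun Sigma methodology itself), defects per million opportunities (DPMO) and the corresponding sigma level, using the conventional 1.5-sigma shift, can be computed as follows; the counts are invented.

# Worked example: defects per million opportunities (DPMO) and the
# corresponding sigma level (with the conventional 1.5-sigma shift).
# The defect and opportunity counts below are invented for illustration.
from statistics import NormalDist

defects = 35
units = 1000
opportunities_per_unit = 4             # assumed defect opportunities per unit

dpmo = 1000000.0 * defects / (units * opportunities_per_unit)
sigma_level = NormalDist().inv_cdf(1 - dpmo / 1000000.0) + 1.5

print("DPMO: %.0f" % dpmo)             # 8750 defects per million opportunities
print("Approximate sigma level: %.2f" % sigma_level)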
Sun Sigma refers to a methodology commonly known as Six Sigma (see http://
www.isixsigma.com/). The objective of Sun Sigma is to completely satisfy customer
requirements profitably. We call it Sun Sigma because not all customers will require
Critical to Quality
Consider the following factors when determining the level of implementation:
■ Is the Sigma process well defined?
■ Are there standard Six-Sigma processes defined?
■ Is continuous process improvement embedded in daily operations?
■ Is there high level management support for Sigma initiatives?
■ Are the progress reports published regularly?
■ Are there effective relationships with other IT service management disciplines?
■ Are the Sigma process leader's responsibilities well defined?
Criteria
For details, see TABLE 9-1, "Process Maturity Criteria for LOI Determination," on
page 138.
Metrics
Below are some examples of KPIs that can be used to measure the level of
implementation of the Sun Sigma processes.
■ Duration of process improvement projects.
■ ROSS (return on Sun Sigma) numbers.
■ Abandoned processes.
■ Customer feedback on perceived improvements.
■ Have the objectives of the project been achieved?
■ In a timely manner?
■ Stakeholder involvement.
Capabilities Profile
The following degrees of implementation are expected at each capability level of the
OMCM.
Control
The control process category involves ensuring that IT services are delivered within
the constraints identified by the governing body and includes the processes that
facilitate the governing activities. Examples of governing functions are: financial
controls, audit, alignment with business objectives, and so on.
This process category includes the following process areas, which are assessed to
determine the level of operational capability:
■ IT Financial Management
■ Configuration Management
■ Change Management
IT Financial Management
Description
The IT financial management process involves controlling the monetary aspects of the
organization. It supports the organization in planning and executing its business
objectives and requires consistent application throughout the organization to achieve
maximum efficiency and minimum conflict.
Critical to Quality
Consider the following factors when determining the level of implementation:
■ Is “financial management for IT services” well defined?
■ How well is budgeting implemented?
■ How well is the IT accounting system developed?
■ How well is the IT charging system developed?
■ How well is the ongoing operation of financial management for IT services
managed?
■ Are tools in place to support IT financial management?
■ How effective are relationships with other IT service management disciplines?
■ Are financial management for IT services KPIs/quality measures used?
■ Are IT financial management responsibilities well defined?
These questions are part of Sun's more comprehensive ITIL assessment, and
therefore Sun methodologies and tools are available to support answering of these
questions. To remain focused on the purpose of this document, some of the ITIL
assessment details have been omitted.
Criteria
For details, see TABLE 9-1, "Process Maturity Criteria for LOI Determination," on
page 138.
Capabilities Profile
The following degrees of implementation are expected at each capability level of the
OMCM.
Configuration Management
Description
The configuration management process provides a logical model of the infrastructure
or a service by identifying, controlling, maintaining, and verifying the versions of
configuration items (CIs) in the organization.
The goals of configuration management include the following:
■ Provide accurate information on configurations and their documentation to
support all the other service management processes.
■ Provide a sound basis for incident management, problem management, change
management, and release management.
■ Verify the configuration records against the infrastructure and correct any
exceptions.
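To make the verification point above concrete, the following hedged sketch compares recorded CI attributes with values discovered from the infrastructure and reports exceptions; the CI names, attributes, and discovery data are assumptions.

# Illustrative CMDB verification: compare recorded configuration items (CIs)
# against discovered values and report exceptions. All names and attribute
# values are invented for this sketch.

cmdb = {
    "web01": {"os": "Solaris 10", "memory_gb": 8},
    "db01":  {"os": "Solaris 9",  "memory_gb": 16},
}
discovered = {
    "web01": {"os": "Solaris 10", "memory_gb": 8},
    "db01":  {"os": "Solaris 10", "memory_gb": 16},  # changed outside change control
    "app02": {"os": "Solaris 10", "memory_gb": 4},   # not recorded in the CMDB
}

for ci, actual in discovered.items():
    recorded = cmdb.get(ci)
    if recorded is None:
        print("Exception: %s found in the infrastructure but not in the CMDB" % ci)
        continue
    for attr, value in actual.items():
        if recorded.get(attr) != value:
            print("Exception: %s.%s recorded as %r, discovered %r"
                  % (ci, attr, recorded.get(attr), value))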
Critical to Quality
Consider the following factors when determining the level of implementation:
■ How well is configuration management defined?
■ How well is the configuration management plan defined?
■ How well are configuration items (CIs) identified?
■ How well are configuration items (CIs) controlled?
■ Is configuration status accounting well defined?
■ How well is configuration verification and audit undertaken?
■ How well is the configuration management database (CMDB) defined and
utilized?
■ To what degree are tools used to support configuration management?
■ How effective are relationships with other IT service management disciplines?
■ To what extent are configuration management KPIs/quality measures used?
■ Are the configuration manager's responsibilities well defined?
These questions are part of Sun's more comprehensive ITIL assessment, and
therefore Sun methodologies and tools are available to support answering of these
questions. To remain focused on the purpose of this document, some of the ITIL
assessment details have been omitted.
Criteria
For details, see TABLE 9-1, "Process Maturity Criteria for LOI Determination," on
page 138.
Metrics
Measurable targets for objective metrics should be set for the effectiveness of the
configuration management process. Consider using the following KPIs and metrics:
■ Occasions when the configuration is not as authorized.
■ Incidents and problems that can be traced back to wrongly made changes.
■ RFCs that were not completed successfully due to poor impact assessment,
incorrect data in the CMDB, or poor version control.
■ The cycle time to approve and implement changes.
■ Licences that have been wasted or not put into use at a particular location.
Capabilities Profile
The following degrees of implementation are expected at each capability level of the
OMCM.
Change Management
Description
The change management process involves ensuring that standardized methods and
procedures are used for efficient and prompt handling of all changes, with the goal
of minimizing the impact of change-related incidents upon service quality and,
consequently, improving the day-to-day service delivery of the IT organization.
Note that change management processes need to have high visibility and open
channels of communication in order to promote smooth transitions while changes
are occurring.
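Purely as a hedged sketch of what standardized methods and procedures can look like in practice, the fragment below walks a request for change (RFC) through a fixed set of states and refuses any transition that skips the Change Advisory Board (CAB) approval step; the state names and transitions are assumptions, not a prescribed model.

# Sketch of a simple RFC lifecycle with an enforced CAB approval step.
# The state names and allowed transitions are assumptions for illustration.

ALLOWED = {
    "logged":           ["assessed", "rejected"],
    "assessed":         ["cab_approved", "rejected"],
    "cab_approved":     ["built_and_tested"],
    "built_and_tested": ["implemented"],
    "implemented":      ["reviewed"],
}

def advance(rfc, new_state):
    if new_state not in ALLOWED.get(rfc["state"], []):
        raise ValueError("RFC %s: illegal transition %s -> %s"
                         % (rfc["id"], rfc["state"], new_state))
    rfc["state"] = new_state

rfc = {"id": "RFC-1042", "state": "logged"}
for step in ["assessed", "cab_approved", "built_and_tested", "implemented", "reviewed"]:
    advance(rfc, step)
    print(rfc["id"], "->", rfc["state"])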
Change management is responsible for managing its interfaces with other business
and IT functions. The following figure shows a sample process model of change
management. This is just one example—the way in which an organization decides to
implement the change management process is, to a large extent, driven by the
available resources (time, priorities, people, and budget).
Critical to Quality
Consider the following factors when determining the level of implementation:
■ How well is change management defined?
■ How well are requests for change (RFCs) utilized?
■ How well is the concept of a Change Advisory Board (CAB) applied?
■ Is a Forward Schedule of Changes (FSC) well implemented?
■ How well defined is the outsourced change management process?
■ How well are changes categorized and prioritized?
■ How well is impact and resource assessment conducted?
■ How well are changes built, tested, and implemented?
■ Are urgent changes clearly defined?
■ How well are change reviews undertaken?
■ Are change metrics defined well?
■ To what degree are tools used to support change management?
■ How effective are relationships with other IT service management disciplines?
■ To what extent are change management KPIs/quality measures used?
■ Are the change manager's responsibilities well defined?
These questions are part of Sun's more comprehensive ITIL assessment, and
therefore Sun methodologies and tools are available to support answering of these
questions. To remain focused on the purpose of this document, some of the ITIL
assessment details have been omitted.
Criteria
For details, see TABLE 9-1, "Process Maturity Criteria for LOI Determination," on
page 138.
Metrics
Consider using the following KPIs and metrics to judge the effectiveness and
efficiency of the change management process:
■ A reduction of adverse impacts on service quality resulting from poor change
management.
■ A reduction in the number of incidents traced back to changes implemented.
■ A decrease in the number of changes backed out.
Capabilities Profile
The following degrees of implementation are expected at each capability level of the
OMCM.
Protect IT Services
The protect IT services process category involves ensuring that IT services remain
available under extraordinary conditions, such as catastrophic failures, security
breaches, unexpected heavy loads, and so on. This area has become increasingly
important as organizations depend more and more on IT services. Therefore,
implementing IT service protection at the right levels is crucial to an organization’s
strength and survival.
This process category includes the following process areas, which are assessed to
determine the level of operational capability:
■ IT Service Continuity Management
■ Security Management
IT Service Continuity Management
Description
The IT service continuity management (ITSCM) process supports the overall business
continuity management process by ensuring that the required IT technical and
services facilities (including computer systems, networks, applications,
telecommunications, technical support, and service desk) can be recovered within
required and agreed upon business time constraints.
Critical to Quality
Consider the following factors when determining the level of implementation:
■ How well is the business continuity management strategy defined?
■ How well has the ITSCM scope been defined?
■ How well has the BCM plan been defined?
■ How well have the requirements analysis and strategy been defined?
■ How well has the BCM plan been implemented?
■ How well is the ITSCM process managed operationally?
■ How well are the invocation process and guidance defined?
■ Is there a clear management structure for BCM?
■ How well is the IT service continuity management recovery plan defined?
■ How effective are relationships with other IT service management disciplines?
■ Are ITSCM KPIs/quality measures used?
■ Are the ITSCM manager's responsibilities well defined?
These questions are part of Sun's more comprehensive ITIL assessment, and
therefore Sun methodologies and tools are available to support answering of these
questions. To remain focused on the purpose of this document, some of the ITIL
assessment details have been omitted.
Metrics
The following KPIs and metrics can be used to judge the effectiveness and efficiency
of the ITSCM process:
■ Are scheduled tests executed on time?
■ Are audits and reviews carried out regularly?
■ Are the results of the test schedule published?
■ Is time taken to recover services within plan?
■ How many people are passing through education and awareness programs?
■ Are review meetings being held on time and accurately minuted?
Capabilities Profile
The following degrees of implementation are expected at each capability level of the
OMCM.
Security Management
Description
The security management process, as defined by ITIL, is the process of managing a
defined level of security for information and IT services, including the reaction to
security incidents. Security management is more comprehensive than physical
security and password disciplines. It includes other core aspects, such as data
integrity (financial aspects), confidentiality (intelligence agencies/defense), and
availability (health care).
In this document, information security incidents are defined as events that can cause
damage to confidentiality, integrity, or the availability of information or information
processing. These incidents materialize as accidents or deliberate acts.
Critical to Quality
Consider the following factors when determining the level of implementation:
■ How well is the security management strategy defined?
■ How well has the security management scope been defined?
■ How well has the security management plan been defined?
■ How well do the SLAs include security management requirements?
■ How well has the security management plan been implemented?
■ How well is the security management process managed?
■ How well are the responsibilities of security management defined?
■ Is there a centralized role for security management?
■ How well are prevention, reduction, detection, repression, correction, and
evaluation measures implemented?
■ How effective are relationships with other IT service management disciplines?
■ Are security management KPIs/quality measures used?
These questions are part of Sun's more comprehensive ITIL assessment, and
therefore Sun methodologies and tools are available to support answering of these
questions. To remain focused on the purpose of this document, some of the ITIL
assessment details have been omitted.
Criteria
For details, see TABLE 9-1, "Process Maturity Criteria for LOI Determination," on
page 138.
Capabilities Profile
The following degrees of implementation are expected at each capability level of the
OMCM.
TABLE 9-17 Process Capabilities Profile Summary of the OMCM Process Aspect
OMCM Specification—Tools
The tools aspect of the OMCM addresses the technology used to facilitate the
management of the execution environment, including:
■ technologies to interact with, and control, the execution environment
■ technologies to control and monitor the processes used to manage the
environment
■ technologies that provide access to the information and capabilities of the
management infrastructure for a diverse set of stakeholders
This chapter describes how to determine the degree of implementation for each of
the tools components of the OMCM. It includes the following sections:
■ Specification of Management Tools Architecture
■ Implementation of Functional Components
■ Degree of Visibility
■ Integration of Components
■ Process Automation
■ Effectiveness of the Implementation
■ Summary of the Tools Capabilities Profile
Specification of Management Tools Architecture
Description
The management tools infrastructure is an IT system that is little different from other
IT applications. As organizations first start to deploy management tools, their efforts
tend to be stovepiped and tightly focused. The design concepts are simple and
generally communicated to a small group of stakeholders. The need for a formal
blueprint for construction is minimal. As the tools infrastructure matures, the
implementations become more complex, involving multiple organizations and
products that must function in a cooperative fashion. This drives the need for a plan
that captures and documents these complexities and communicates them to a larger
audience for acceptance and use. This plan is what we call the management tools
architecture.
Critical to Quality
■ The management tools architecture should exhibit the characteristics of any
properly developed technical architecture, involving the separation of functions,
well defined interfaces, and formal documentation.
■ The architecture will evolve over time. A process should be in place to
periodically refresh the architecture to account for changes in technology,
requirements, or the organization.
■ The architecture should be specified in a product neutral fashion. The addition or
removal of a specific vendor's technology should not invalidate the basic
architecture.
Criteria
Use the following criteria to determine the degree of implementation.
Ad Hoc
• The organization does not have any concept of an enterprise-wide management
architecture.
Emerging
• A tools-based architecture is focused on the deployment of individual silos.
Functional
• A tools-based architecture specifies the components and integration of the
complete management architecture.
• The architecture serves as a guideline, and deviations exist.
Effective
• The management architecture provides a basis for organization standards.
• Tools acquisition and deployment is controlled by the management architecture.
• Deviations from the architecture are minimal and closely managed.
Optimized
• A holistic management architecture includes people and process considerations.
• Traceability exists between the three components of the management architecture.
Metrics
■ Existence of an architecture document or document set.
■ Elapsed time since the architecture and associated documentation were last
reviewed and updated.
Capabilities Profile
The following degrees of implementation are expected at each capability level of the
OMCM.
Implementation of Functional Components
Description
In Chapter 6, “Sun IT Management Framework—Tools,” we introduced a
generalized framework for enterprise management technology. This tools framework
provides a product-neutral approach for categorizing the various roles to be played
by each component of the overall management tools architecture. Specific
commercial or locally developed products may be used to fill one or more of the
functional categories, depending on the needs of the organization.
In keeping with the idea that management capability develops over time, we realize
that the individual functional components are deployed at different times in the
organization's evolution. Part of this is driven by the nature of the various
components and their interaction. For example, implementing an event management
console generally requires the existence of lower level monitoring components to
generate enough events to make the effort worthwhile.
Criteria
Use the following criteria to determine the degree of implementation for each of the
components of the management tools model. Metrics and capabilities profiles for
each layer are provided in the following sections. When appropriate, additional
criteria for specific areas are provided.
Metrics
■ Percentage of problems identified by the management infrastructure.
■ Existence and amount of available performance data (in number of days of
history) for components of the execution environment (hardware, storage,
network, applications, etc.).
Capabilities Profile
The following degrees of implementation are expected at each capability level of the
OMCM.
The capabilities profile table maps each component to the five OMCM levels:
Level 1 (Crises Control), Level 2 (IT Component Management), Level 3 (IT
Operations Management), Level 4 (IT Service Management), and Level 5 (Business
Value Management).
Metrics
■ Total number of events processed per day.
■ Total number of actionable events (requiring action by the operations staff) per
day.
■ Number and frequency of periodic reports generated for IT management review.
■ Number of applications for which a predictive performance analysis model exists.
■ Number of hardware field-replaceable unit (FRU) line items maintained by the
organization.
■ Total number of notifications (pages, emails, and so on) generated per day.
Capabilities Profile
The following degrees of implementation are expected at each capability level of the
OMCM.
Metrics
■ Percentage of SLAs for which a transaction test set has been defined and
implemented.
■ Level of granularity for business impact analysis of failures (service/application,
user group, business unit, business process, and so on).
■ Average time to notify impacted customers/users of a service level issue.
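As an illustration of a transaction test, a synthetic probe might time one representative request and compare it with an assumed SLA threshold; the URL and the two-second threshold below are placeholders, not real service parameters.

# Illustrative synthetic transaction probe: time one representative request
# and compare the result with an assumed SLA threshold. The URL and the
# two-second threshold are placeholders.
import time
import urllib.request

SLA_SECONDS = 2.0
URL = "http://example.com/"            # placeholder endpoint

start = time.time()
try:
    with urllib.request.urlopen(URL, timeout=10) as response:
        response.read()
    elapsed = time.time() - start
    status = "within SLA" if elapsed <= SLA_SECONDS else "SLA breach"
    print("Transaction took %.2f s (%s)" % (elapsed, status))
except Exception as exc:
    print("Transaction failed: %s (counts against availability)" % exc)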
Criteria
Use the following criteria, along with previously-described criteria, to determine the
degree of implementation for the process and workflow management subsystem.
Metrics
■ Number of service requests initiated daily/weekly/monthly.
■ Number of service requests closed daily/weekly/monthly.
■ Average service time for all classes of service requests.
■ Percentage of service requests initiated automatically by the management tools
infrastructure.
■ Percentage of service requests processed by each level of the support structure,
including the percentage of requests met via customer self service mechanisms as
well as the various tiers of the organization’s support organization.
Management Portals
Management portals are collections of applications that provide external entities with
access to selected portions of the management framework. Examples include a Web
interface for reviewing SLM reports and Web or other user interfaces through which
end users submit requests for service. It should also be possible, and even desirable,
to use this portal to expose management information and facilities to people outside
of the IT organization.
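A minimal sketch of the portal idea follows; the report content is invented, there is no authentication, and a real portal would sit behind the organization's standard web and security infrastructure.

# Minimal sketch of a management portal endpoint that exposes an SLM summary
# to users outside the IT organization. The report text is invented and no
# authentication is shown; this is an illustration only.
from http.server import BaseHTTPRequestHandler, HTTPServer

SLM_SUMMARY = "Service: email | Availability this month: 99.82% | Open incidents: 3\n"

class PortalHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/slm":
            body = SLM_SUMMARY.encode("utf-8")
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), PortalHandler).serve_forever()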
Criteria
Use the following criteria, along with previously-described criteria, to determine the
degree of implementation for the management portal.
Metrics
Not specified in this draft.
Capabilities Profile
The following degrees of implementation are expected at each capability level of the
OMCM.
Degree of Visibility
Description
To be effective, the management tools infrastructure must be capable of obtaining
information about the environment being managed. In areas that are not visible to
the management tools infrastructure, information and notification of critical
conditions must be obtained in other ways. Generally, these alternate methods are
manual and reactive. The more information that is available concerning the state of
the managed environment, the more effective the management infrastructure will be.
Instrumentation components may take the role of sensor (obtaining data from the
environment), effector (manipulating the environment), or both.
Criteria
Use the following criteria to determine the degree of implementation.
Ad Hoc
• Only a minimal subset of the hardware and network layers is visible.
Emerging
• Critical network and hardware/storage components are visible.
Functional
• A majority of the network and hardware/storage layers of the managed
environment is visible.
• Key application infrastructure components are visible.
Effective
• The lower three layers of the execution architecture are visible.
• A majority of the application infrastructure is visible.
• Key applications are visible.
Optimized
• The management infrastructure has complete visibility into the managed
environment.
Metrics
■ Average number of agents per managed system.
■ Percentage of problems detected by the management infrastructure.
Capabilities Profile
The following degrees of implementation are expected at each capability level of the
OMCM.
Integration of Components
Description
Because most, if not all, enterprise management infrastructures are built using
multiple components from different vendors, these components must be integrated
somehow. Integration facilitates operations across management silos and is
necessary for organizations engaged in managing to service level agreements that
span multiple systems, networks, databases, and so on.
Among the approaches available for integrating the different parts of a management
system, two categories are generally used.
Critical to Quality
■ The management tools architecture must specify the required integration points
between tools and the methods used to realize the integration.
■ In keeping with the idea that the replacement of specific parts of the management
tools environment should not invalidate the architecture, components should be
loosely coupled so that significant dependencies do not exist between them.
Example: Replacement of the event management console should not require major
changes in the underlying monitoring systems.
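The loose-coupling point can be sketched as a small adapter interface: monitoring components emit events in a neutral form, and only the adapter knows about the particular console product, so replacing the console means replacing only the adapter. The class and field names are assumptions.

# Sketch of loosely coupled event integration: monitors produce neutral
# events, and a thin adapter maps them to whichever event console is in use.
# All names are invented for illustration.

class Event:
    def __init__(self, source, severity, message):
        self.source = source
        self.severity = severity
        self.message = message

class ConsoleAdapter:
    """Interface that every console-specific adapter implements."""
    def forward(self, event):
        raise NotImplementedError

class LoggingConsoleAdapter(ConsoleAdapter):
    """Stand-in for a vendor console; simply prints the event."""
    def forward(self, event):
        print("[%s] %s: %s" % (event.severity.upper(), event.source, event.message))

def publish(event, adapter):
    adapter.forward(event)               # monitors depend only on the interface

publish(Event("db01", "critical", "file system /export 95% full"),
        LoggingConsoleAdapter())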
Criteria
Use the following criteria to determine the degree of implementation.
Metrics
Not specified in this draft.
Capabilities Profile
The following degrees of implementation are expected at each capability level of the
OMCM.
Process Automation
Description
The primary purpose for implementing a management tools infrastructure is to
automate activities that would otherwise have to be performed manually. A simple
example is the activity of reviewing a system log for error conditions. This can be
done on a periodic basis by a systems administrator who opens the log file in a text
editor and searches for specific types of error messages. Implementation of a
monitoring application that performs pattern matching and alerts when necessary
relieves the administrator of the requirement to look at the file on a periodic basis.
More complex examples include automated recovery actions for specified failure
scenarios, customer self-service capabilities (knowledge base; adds, moves, and
changes), and dynamic service provisioning.
Process automation refers to the degree to which processes, policies, and procedures
are embedded into, controlled by, and executed by the management tools
infrastructure.
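A minimal sketch of the log-scanning example described above might look like the following; the log file path, the error patterns, and the use of a print statement in place of a real notification mechanism are assumptions.

# Minimal sketch of automated log review: scan a log file for error patterns
# and raise an alert for each match. The path, patterns, and the print()
# stand-in for paging or email are assumptions.
import re

LOG_FILE = "/var/adm/messages"           # placeholder path
PATTERNS = [re.compile(p) for p in (r"panic", r"I/O error", r"out of memory")]

def scan(path):
    alerts = []
    try:
        with open(path, errors="replace") as log:
            for lineno, line in enumerate(log, start=1):
                if any(p.search(line) for p in PATTERNS):
                    alerts.append((lineno, line.rstrip()))
    except OSError as exc:
        print("Could not read %s: %s" % (path, exc))
    return alerts

for lineno, line in scan(LOG_FILE):
    print("ALERT (line %d): %s" % (lineno, line))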
Critical to Quality
■ Automation of a process requires well-understood and well-documented
procedures.
■ A process must be in place to maintain the procedures and associated technical
implementation.
■ KPIs for each process are identified and tracked so that performance of the
automated process is understood.
Criteria
Use the following criteria to determine the degree of implementation.
Metrics
■ Percentage of user service requests satisfied through self service mechanisms.
■ Number of IT processes implemented and controlled by a process workflow
management tool.
■ Availability of metrics that quantify the process execution.
■ Cycle time (request to completion) for critical activities, such as user adds and
service provisioning.
Capabilities Profile
The following degrees of implementation are expected at each capability level of the
OMCM.
Effectiveness of the Implementation
Description
Like any other business investment, organizations need to justify the expenditures
towards improving operational capability by realizing some value for the
investment. For the purpose of the OMCM, we define investment in a tools
infrastructure as organizational effort needed to implement and maintain the infrastructure.
This effort is measured by the expenditure of capital, human effort, and other
organizational resources.
Critical to Quality
■ The organization should have a budget for the implementation and maintenance
of the management tools infrastructure.
■ Metrics to quantify and measure IT operational efficiency, and contributions to
profitability, should be identified and tracked. Investments in the management
infrastructure should be justified by an expected improvement in one or more of
the metrics.
■ Efforts to avoid capital expenditures by developing local solutions should be
discouraged by the organization. Locally developed tools eventually cause
maintenance and extensibility problems as the organization matures.
Criteria
Use the following criteria to determine the degree of implementation.
Metrics
■ Annual budget for enterprise management tools.
■ Number of full-time employees (FTEs) for IT operations.
Capabilities Profile
The following degrees of implementation are expected at each capability level of the
OMCM.
Part 4—Conclusion
This part of the document provides a conclusion and additional resources. It
includes the following chapters:
■ Chapter 11, “Application of the OMCM”
■ Chapter 12, “Resources for More Information”
■ Chapter 13, “About the Authors”
Application of the OMCM
For vendors like Sun, this model provides a means to segment the customer market
and target products and services based on the current operational capability of a
customer. It is our contention that the needs of a customer at OMCM Level 2 are
significantly different from the needs of an OMCM Level 4 customer.
Assessment and Scoring
The most obvious application of the OMCM is its use as an assessment vehicle to
help organizations benchmark their level of operational capability as a first step in
an improvement effort. Part 2, “Sun IT Management Framework,” of this document
was written to facilitate a lightweight assessment. More detailed assessments are
possible using the wealth of ITIL, CMM, and other available collateral to drill down
into the details of the specific areas. Sun Microsystems has a number of vehicles to
assist in this effort, such as ITIL assessments and the SunReady Availability
Assessment. The goal is to develop a snapshot of the organization that provides
input into the remediation plan.
This document does not specify how to arrive at a score for a given organization.
There is a natural desire for people to want a grade as a measure of how well they
are doing. Although it is possible to provide such a grade using the OMCM, we have
deliberately left this as an exercise for the implementer, for two reasons:
■ We believe that the focus of OMCM application should be on the state of
individual components, not on the overall score. The fact that an organization
decides it is at OMCM Level 2 is less useful than the identification of the people,
process, and tools areas that are not implemented to the degree necessary to meet
the desired OMCM level. Targeted diagnosis helps focus investment for the
organization.
■ We believe any implementation of the OMCM must take into account the political
realities of the organization. By not specifying a scoring approach, we give
implementers the flexibility to tailor the message so that it is most effective for
driving the desired organizational behavior.
The assessment results can be summarized in a table that lists, for each process and
sub-process, the profile achieved at OMCM Levels 1 through 5.
Two potential approaches may be taken to determine the OMCM level from these
results.
■ One approach is an extension of the strict ordering concept introduced in
Chapter 2, “Introduction.” Strict ordering means that the organization cannot
achieve a given level unless it has satisfied the requirements of that level and all
lower levels.
Regardless of the approach taken, and the final grade, the real value in the above
assessment is the identification of capacity, problem, and configuration management
areas requiring attention.
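For illustration, the strict-ordering approach can be expressed as follows: the overall level is the highest level whose requirements, and those of all lower levels, are met by every assessed component. A hedged sketch with invented component results follows.

# Sketch of strict-ordering OMCM scoring: the overall level is the highest
# level met by every assessed component. The component results are invented.

assessed = {"capacity mgmt": 3, "problem mgmt": 2, "configuration mgmt": 4}

def strict_ordering_level(results, max_level=5):
    level = 0
    for candidate in range(1, max_level + 1):
        if all(score >= candidate for score in results.values()):
            level = candidate
        else:
            break
    return level

overall = strict_ordering_level(assessed)
print("Overall OMCM level (strict ordering):", overall)
weakest = [name for name, score in assessed.items() if score == overall]
print("Areas holding the organization at this level:", sorted(weakest))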
Vendor Application
As we stated above, service providers, ISVs, and other vendors can use the OMCM
to segment their customers and categorize their offerings. The table below shows an
example segmentation along with suggested focus areas for offerings. The goal is to
tailor the solution to the customer capability level.
For example, it has been our experience that organizations at levels one and two will
need help in stabilizing the environment before they are able to support longer term
efforts to improve capability. For these organizations, appropriate services could
provide additional on site expertise (staff augmentation or management services)
and tools with rapid implementation times that provide basic monitoring.
Resources for More Information
Management Tools Vendors
Vendor URL
Aprisma http://www.aprisma.com
Micromuse http://www.micromuse.com
Teamquest http://www.teamquest.com/
About the Authors
This chapter provides more information about the authors of this document.
Edward Wustenhoff
Edward Wustenhoff is currently an IT Management Strategist in the Global
Datacenter Practice of Sun Client Solutions. He has more than 15 years’ experience in
networked computer systems and data center management. During the past nine
years at Sun, he has been involved with Sun's technologies and Datacenter Best
Practices. In one of his previous roles at Sun, he managed the Enterprise
Management Practice, where he advised Sun’s customers about Datacenter
Management best practices, tools selection, and deployment strategies.
Michael J. Moore
Michael J. Moore is currently a Solution Architect in the Global Data Center Practice
of Sun Client Solutions. Mr. Moore has over 20 years of experience in the design and
operations of IT systems and networks. During this time, his focus has been on
technologies and best practices for systems and network management. Recent work
has included development of an IT management meta-framework for use in the
design of enterprise management solutions, and the development of an operations
capabilities maturity model for IT organizations. Mr. Moore's experience also
includes the development and implementation of enterprise management solutions
using a wide range of products; and over 10 years of IT operations experience in
both the military and commercial sectors. He is a Certified Micromuse Consultant
and a Certified Aprisma Spectrum Engineer.
Dale H. Avery
Dale Avery is currently a Practice Development Manager for Sun Educational
Services. Mr. Avery has over 25 years’ experience in the technology sector, including
application and system software design and development. He has also managed the
development and deployment of IT applications. Mr. Avery specializes in the areas
of efficiency, use of best practices, and continuous improvement. He has managed
organizations that provided development support to software and hardware
application vendors. He has been responsible for training and developing people in
organizations. Most recently, he participated in the development of the People
Aspect in Sun's Operational Maturity Capability Model. Mr. Avery holds a BS degree
in Information Technology and is certified in Prince2.