Hype Cycle for Application Development, 2007
ID Number: G00147982
A shift to process and service orientation is altering staffing, tools and methods of
software development. In parallel, governance, planning, control and quality assurance
techniques are being refined and strengthened to drive more predictability and meet the
challenges of global sourcing.
© 2007 Gartner, Inc. and/or its Affiliates. All Rights Reserved. Reproduction and distribution of this publication in any form
without prior written permission is forbidden. The information contained herein has been obtained from sources believed to
be reliable. Gartner disclaims all warranties as to the accuracy, completeness or adequacy of such information. Although
Gartner's research may discuss legal issues related to the information technology business, Gartner does not provide legal
advice or services and its research should not be construed or used as such. Gartner shall have no liability for errors,
omissions or inadequacies in the information contained herein or for interpretations thereof. The opinions expressed herein
are subject to change without notice.
TABLE OF CONTENTS
Analysis ............................................................................................................................................. 4
What You Need to Know ...................................................................................................... 4
The Hype Cycle .................................................................................................................... 4
The Priority Matrix ................................................................................................................ 4
On the Rise........................................................................................................................... 6
Data Service Architectures ...................................................................................... 6
Metadata Ontology Management ............................................................................ 7
Information-Centric Infrastructure............................................................................ 8
SDLC Security Methodologies ................................................................................ 9
SOA Testing .......................................................................................................... 10
Collaborative Tools for the Software Development Life Cycle .............................. 10
Enterprise Information Management ..................................................................... 11
Application Quality Dashboards ............................................................................ 12
Event-Driven Architecture...................................................................................... 13
Metadata Repositories........................................................................................... 14
RIA Platforms ........................................................................................................ 16
At the Peak ......................................................................................................................... 17
Application Testing Services ................................................................................. 17
SOA Governance Technologies............................................................................ 19
Globally Sourced Testing ...................................................................................... 21
Model-Driven Architectures ................................................................................... 22
Scriptless Testing .................................................................................................. 23
Architected, Model-Driven SODA.......................................................................... 24
Enterprise Architecture Tools ................................................................................ 25
Application Security Testing .................................................................................. 26
Sliding Into the Trough ....................................................................................................... 27
Project and Portfolio Management ........................................................................ 27
Business Application Package Testing ................................................................. 28
Agile Development Methodology........................................................................... 29
Unit Testing ........................................................................................................... 30
ARAD SODA ......................................................................................................... 31
SOA ....................................................................................................................... 32
Climbing the Slope ............................................................................................................. 33
Enterprise Software Change and Configuration Management.............................. 33
Enterprise Portals .................................................................................................. 33
Microsoft .NET Application Platform...................................................................... 34
OOA&D Methodologies ......................................................................................... 36
Linux as a Mission-Critical DBMS Platform........................................................... 37
Performance Testing ............................................................................................. 38
Open-Source Development Tools ......................................................................... 38
Business Process Analysis.................................................................................... 39
Entering the Plateau ........................................................................................................... 40
Automated Testing ................................................................................................ 40
Java Platform, Enterprise Edition .......................................................................... 41
Appendices ......................................................................................................................... 43
Hype Cycle Phases, Benefit Ratings and Maturity Levels .................................... 45
Recommended Reading.................................................................................................................. 46
LIST OF TABLES
Table 1. Hype Cycle Phases ........................................................................................................... 45
Table 2. Benefit Ratings .................................................................................................................. 45
Table 3. Maturity Levels .................................................................................................................. 46
LIST OF FIGURES
Figure 1. Hype Cycle for Application Development, 2007................................................................. 4
Figure 2. Priority Matrix for Application Development, 2007 ......................................................... 5
Figure 3. Hype Cycle for Application Development, 2006............................................................... 43
ANALYSIS
Figure 1. Hype Cycle for Application Development, 2007 (as of June 2007)
discipline and effective planning techniques that lead to predictable results. Although individually incremental, the convergence of changes in the governance, planning and control techniques is transformative when taken across all topic areas.
Service-oriented architecture (SOA) leads to service-oriented development and requires
substantial changes in staffing, tooling and practice throughout development organizations.
Business process management (BPM) techniques move companies in the same direction.
Ultimately, the distinctions between service-oriented development of applications (SODA) and
BPM will narrow, as both marginalize the distinctions between development time and runtime
systems and processes.
Figure 2. Priority Matrix for Application Development, 2007
(Priority matrix cross-tabulating benefit [transformational, high, moderate, low] against years to mainstream adoption [less than 2 years, 2 to 5 years, 5 to 10 years]; as of June 2007.)
On the Rise
Data Service Architectures
Analysis By: Mark Beyer
Definition: Data services consist of processing routines that provide direct data manipulation
pertaining to the delivery, transformation and the logical and semantic reconciliation of data.
Unlike point-to-point data integration solutions, data services de-couple data storage, security and
mode of delivery from each other, as well as from individual applications, to deliver them as
independently designed and deployed functionality that can be connected via a registry or
composite processing framework. Data services can be used in a networked fashion that is
orchestrated through a composite processing model or designed separately, then reused in
various, larger-grained processes.
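As a minimal, hypothetical sketch of the decoupling described above (the interface, registry and class names are invented for illustration and do not refer to any vendor product), a data service can expose a logical contract that consumers locate through a registry, with storage, security and delivery mode handled behind it:

```java
// Illustrative sketch only: the contract, registry and record types below are invented
// for this example and do not refer to any vendor product.
import java.util.List;

/** Logical data-service contract exposed to consumers; no DBMS, schema or protocol details leak through. */
interface CustomerDataService {
    CustomerRecord findByTaxId(String taxId);           // semantic lookup, not a SQL query
    List<CustomerRecord> findByRegion(String region);   // delivery and transformation happen inside the service
}

/** Canonical, semantically reconciled representation returned to every consumer. */
final class CustomerRecord {
    final String taxId;
    final String legalName;
    final String region;
    CustomerRecord(String taxId, String legalName, String region) {
        this.taxId = taxId; this.legalName = legalName; this.region = region;
    }
}

/** Hypothetical registry used to locate independently deployed services at runtime. */
interface ServiceRegistry {
    <T> T lookup(Class<T> contract);
}

/** A consumer binds to the logical contract via the registry, never to a database connection. */
class OrderOnboarding {
    private final CustomerDataService customers;
    OrderOnboarding(ServiceRegistry registry) {
        this.customers = registry.lookup(CustomerDataService.class);
    }
    boolean isKnownCustomer(String taxId) {
        return customers.findByTaxId(taxId) != null;
    }
}
```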
Position and Adoption Speed Justification: Data services are, by their nature, a new style of
data access strategy that replaces the data management, access and storage duties currently
deployed in an application-specific manner. Data services architecture is merely a sub-class or
category of SOA that does not form a new architecture, but brings emphasis to the varying
services that exist within SOA. Most of the large vendors have announced road maps and plans
to pursue some variant of the data service approach, but this is an evolutionary architectural style
that does not warrant "rip and replace" at this time and will coexist with current application design
techniques. Disillusionment will occur as organizations realize the granularity required to deploy
this type of architecture, especially relative to the differences between handling data via a
business operational process vs. data handling via industry delivery concepts.
User Advice: Users should focus on delivering a semantic layer that portrays the use of data and
information in the organization and, at the same time, begin developing a logical business model.
The logical and semantic model should be mapped to the physical repositories throughout the organization, creating a physical-to-logical-model reconciliation. In 2006, this technology class
was specifically focused on information in the former "structured" data class only. In 2007, initial
advances in using model-to-model (M2M) language communication via metadata operators are
blended into this technology. The M2M introduction caused a temporary retrograde in the
technology position and at the same time will accelerate its movement along the cycle. Existing
data integration vendors (extraction, transformation and loading [ETL], enterprise information integration [EII] and enterprise application integration) have begun to pursue common metadata
repositories used as a core library to deploy all data delivery modes but have not built machine
intelligence into optimization strategies. Organizations should eschew vendor development
platforms that deny or refute the requirement for interoperability.
Business Impact: Data services are not an excuse for each organization to write its own, unique
database management system (DBMS), as most DBMSs both store data and provide ready
access. Data services can sever the tight links between application interface development and
the more infrastructure-style decisions of database platforms, operating systems (OSs) and
hardware. Specifically, the metadata interpretation between business process models, semantic
usage models and logical/physical data models will enhance the overall adaptiveness of IT
solutions. This will enable the portability of applications to lower-cost repository environments when appropriate, and will create a direct correlation between the cost of information management and the value of the information delivered, by delivering semantically consistent data and information to any available presentation format. This is opposed to the current scenario, in which monolithic application designs can drive infrastructure costs up because of their dependence on specific platform or DBMS capabilities.
Benefit Rating: Transformational
Metadata Ontology Management
Business Impact: Tighter integration between business process change and IT systems change. Business units and users will be better able to relay their concerns regarding the use of information assets throughout the organization, and business analysts will be better able to assess the risks and benefits that accrue to the business from the maintenance and security of information assets.
Benefit Rating: Moderate
Market Penetration: Less than 1% of target audience
Maturity: Embryonic
Information-Centric Infrastructure
Analysis By: David Newman
Definition: Information-centric infrastructure (ICI) is a technology framework that enables
information producers and information consumers to organize, share and exchange any content
(structured and unstructured data, for example), anytime, anywhere. It is the technology building
block within an organization's enterprise information management (EIM) program. Since different
systems use different formats and different standards to share and exchange different types of
information, the technologies that make up an ICI ensure that common processes applied to
common content will produce similar results.
Position and Adoption Speed Justification: The vision for an ICI will be adopted by
organizations seeking to bring a greater balance to their integration activities to address cost and
complexity issues associated with silo-based, application-centric development. One of the
reasons organizations cannot respond as quickly as market conditions dictate is that much of the information has been isolated within applications, each fulfilling its own unique (process-driven) requirements. As demands for access to information sources increase, organizations will use an ICI as their technical foundation to facilitate the convergence of different types of content required by industry "ecosystems" and trade exchanges. This will help resolve issues around information glut, and will improve application integration capabilities during migration toward SOAs.
User Advice: Recognize that different project teams use different applications, formats and
standards to exchange information. Look for common ways to normalize and extract meaning
from all types of content so that it can be exchanged across the organization. Use existing system
analysis and designs as starting points to develop common models, which can then be shared by
different processing components and system entities. Use existing methods of content-centric
processing to identify gaps that need to be filled to support ICI requirements. For instance,
determine the usefulness of the Federal Enterprise Architecture Framework Data Reference
Model (version 2.0) to your industry, regardless of whether you are a commercial or government organization. Exploit the use of emerging standards (such as XML) for data and metadata interchange, and create a common-components library of metadata objects based on corporate standards, thereby promoting wide-scale reuse.
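A minimal sketch of such a common-components approach, assuming a JAXB 2.0 runtime is available; the MetadataDescriptor class and its fields are invented examples rather than part of any cited standard or product:

```java
// Minimal sketch, assuming a JAXB 2.0 runtime; the descriptor and its fields are
// hypothetical examples of a reusable, corporate-standard metadata component.
import javax.xml.bind.JAXBContext;
import javax.xml.bind.Marshaller;
import javax.xml.bind.annotation.XmlElement;
import javax.xml.bind.annotation.XmlRootElement;

@XmlRootElement(name = "metadataDescriptor")
public class MetadataDescriptor {
    @XmlElement public String assetName;      // e.g. "CustomerMaster"
    @XmlElement public String steward;        // accountable business owner
    @XmlElement public String classification; // corporate taxonomy term

    public static void main(String[] args) throws Exception {
        MetadataDescriptor d = new MetadataDescriptor();
        d.assetName = "CustomerMaster";
        d.steward = "Finance Data Office";
        d.classification = "master-data";

        // Serialize to XML so different teams can exchange the same descriptor format.
        Marshaller m = JAXBContext.newInstance(MetadataDescriptor.class).createMarshaller();
        m.setProperty(Marshaller.JAXB_FORMATTED_OUTPUT, Boolean.TRUE);
        m.marshal(d, System.out);
    }
}
```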
Business Impact: An ICI brings balance to many application-driven environments because it
"normalizes" the chaos caused by having different and diverse standards, formats and protocols.
It extracts meaning and delivers context so that each content instance can be shared and
exchanged to support a variety of business process needs by identifying, abstracting and
rationalizing commonalities across content; applying semantics for information exchange and
interoperability; and implementing metadata management for discovery, reuse and repurpose.
Organizations failing to invest in building out an ICI by 2015 will experience a 30% increase in
overhead costs to manage their IT operations. An ICI will make far greater use of emerging
technologies than most companies are used to. It is the inevitable outcome of decoupling
application logic from data management requirements (as seen in SOA).
Benefit Rating: Transformational
Market Penetration: Less than 1% of target audience
Maturity: Emerging
Recommended Reading: "Key Issues for Information-Centric Infrastructures, 2007"
"Gartner Defines the Information-Centric Infrastructure"
"Information-Centric Infrastructure: Application Integration Via Content"
"Predicts 2007: Information Infrastructure Emerges"
SOA Testing
Analysis By: Thomas Murphy
Definition: SOA testing tools are designed to assess service-oriented applications. Tools verify
XML, perform load and stress testing of services, and promote the early, continuous testing of
services as they are developed. These products have to deal with changing standards and should
support the interfaces, formats, protocols and the variety of implementations available. Although
similar to traditional functional and load testing tools, these products do not rely on a user
interface for definition and should deal with issues such as long-running and parallel processes.
As these tools mature, they should link with service governance tools, such as security and registry management tools, to leverage the data they produce.
Position and Adoption Speed Justification: SOA testing tools are new in the market and tend
to be from relatively new companies, with improving support from the historic testing leaders.
Web services definition and standards are evolving, prompting tool manufacturers to catch up.
User Advice: If you have invested in building out Web services, then you should have a solid unit
testing approach. Investigate these tools primarily to ensure load capacity for your services, to
discover failure behaviors and to speed the development of new services. Testing for services
should make use of an existing foundation of tests written for the underlying implementation code.
Tests should be factored to enable testing of specifically affected systems when changes are
made, rather than testing the entire system. This includes the ability to unit test individual
elements, as well as specific orchestrations across services.
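A hedged illustration of this advice: a plain JUnit test that exercises the implementation code behind a service, independent of any user interface. PriceQuoteService and its pricing rules are hypothetical:

```java
// Hedged example: a plain JUnit test exercising the implementation code behind a
// service, independent of any user interface. The service and its rules are hypothetical.
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.fail;
import org.junit.Test;

public class PriceQuoteServiceTest {

    /** Hypothetical service implementation under test. */
    static class PriceQuoteService {
        double quote(String sku, int quantity) {
            if (quantity <= 0) throw new IllegalArgumentException("quantity must be positive");
            double unit = "WIDGET-1".equals(sku) ? 9.50 : 12.00;
            return unit * quantity * (quantity >= 100 ? 0.9 : 1.0); // volume discount
        }
    }

    @Test
    public void appliesVolumeDiscountAtOneHundredUnits() {
        assertEquals(855.0, new PriceQuoteService().quote("WIDGET-1", 100), 0.001);
    }

    @Test
    public void rejectsNonPositiveQuantities() {
        try {
            new PriceQuoteService().quote("WIDGET-1", 0);
            fail("expected IllegalArgumentException");
        } catch (IllegalArgumentException expected) {
            // failure behavior is part of the service contract and should be verified, too
        }
    }
}
```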
Business Impact: Web services must be stable and reliable for applications to be built on top of
them. They need a solid testing focus or the services will become liabilities to application stability.
Because services offer a way to transform the business, these testing tools will be critical to the
strategic success of businesses.
Benefit Rating: Transformational
Market Penetration: One percent to 5% of target audience
Maturity: Emerging
Sample Vendors: HP; iTKO; IBM; Mindreef; Parasoft; Solstice Software; SOASTA
Enterprise Information Management
User Advice: End-user clients should resist vendor claims that their products "do" EIM. EIM is
not a technology market. Clients should connect certain technologies and projects (such as
master data management, metadata management, information life cycle management, content
management and data integration) as part of an EIM program.
Secure senior-level commitment for EIM as a way to overcome information barriers, exploit
information as a strategic resource, and fuel the drive toward enterprise agility. Use pressures for
improving IT flexibility, adaptability, productivity and transparency as part of the EIM business-case justification. Grow the EIM program incrementally. Pursue foundational EIM activities such
as master data management and metadata management. Address operational activities, such as
defining the EIM strategy, creating a charter and aligning resources to the program.
Operationalize EIM with a defined budget and resource plan. Establish an ICI to share and
exchange all types of content. Implement governance processes such as stewardship and data
quality initiatives. Set performance metrics (such as reducing the number of point-to-point
interfaces or conflicting data sources) to demonstrate value.
Business Impact: EIM is foundational to complex business processes and strategic initiatives.
By organizing related information management technologies into a common ICI, an EIM program
can reduce transaction costs across companies and improve the consistency, quality and
governance of information assets. EIM supports transparency objectives in compliance and legal
discovery. It breaks down information silos by facilitating the decoupling of data from applications, a key aspect of successful SOAs. It establishes a single version of the truth for master data
assets. EIM institutes information governance processes to ensure all information assets adhere
to quality, security and accessibility standards. Key components of EIM (for example, master data
management, global data synchronization, semantic reconciliation, metadata management, data
integration and content management) have been observed across multiple industries (such as
banking, investment services, consumer goods, retail and life sciences).
Benefit Rating: High
Market Penetration: One percent to 5% of target audience
Maturity: Emerging
Recommended Reading: "Business Drivers and Issues in Enterprise Information Management"
"Mastering Master Data Management"
"From IM to EIM: An Adoption Model"
"Data Integration Is Key to Successful Service-Oriented Architecture Implementations"
"Gartner Study on EIM Highlights Early Adopter Trends and Issues"
"Gartner Definition Clarifies the Role of Enterprise Information Management"
"Key Issues for Enterprise Information Management, 2007"
Event-Driven Architecture
Analysis By: Roy Schulte; Yefim Natis
Definition: Event-driven architecture (EDA) is a subset of the more general topic of event
processing. EDA is an architectural style in which some of the elements of the application
execute in response to the arrival of event objects. An element decides whether to act and how to
act based on the incoming event objects. In EDA, the event objects are delivered in messages
that do not specify any method name (such messages are called event notifications). The event
source does not tell the event receiver what operation to perform. An event is something that
happens (or does not happen, but was expected or thought possible). Examples include a stock
trade, customer order, address change, and a shipment arriving or failing to arrive (under
specified conditions). An event may be documented in software by creating an event object
(sometimes called plain "event," which then is a second meaning for the term). An event (object)
represents or records a happening ("ordinary") event. Examples of event objects include a
message from a financial data feed (a stock tick), an XML document containing an order or a
database row. In casual discussion, programmers often call the message that conveys an event
object an "event."
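A minimal sketch of the pattern just described, with invented class names: the event source publishes an event object that carries no method name, and each subscriber decides independently whether and how to act:

```java
// Minimal sketch of EDA as described above; names are illustrative, not from any product.
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

final class OrderPlaced {                        // the event object: a record of something that happened
    final String orderId;
    final double amount;
    OrderPlaced(String orderId, double amount) { this.orderId = orderId; this.amount = amount; }
}

interface OrderPlacedListener {                  // receivers register interest; the source never invokes an operation by name
    void onEvent(OrderPlaced event);
}

final class EventChannel {                       // stand-in for a messaging middleware topic
    private final List<OrderPlacedListener> listeners = new CopyOnWriteArrayList<>();
    void subscribe(OrderPlacedListener l) { listeners.add(l); }
    void publish(OrderPlaced e) { listeners.forEach(l -> l.onEvent(e)); }
}

public class EdaSketch {
    public static void main(String[] args) {
        EventChannel channel = new EventChannel();
        channel.subscribe(e -> System.out.println("Billing reacts to order " + e.orderId));
        channel.subscribe(e -> { if (e.amount > 10_000) System.out.println("Fraud check on " + e.orderId); });

        // The source only announces the fact; it does not know who reacts or what they do.
        channel.publish(new OrderPlaced("PO-123", 12_500.00));
    }
}
```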
Position and Adoption Speed Justification: Computer systems have used event processing in
many different ways for decades. Event processing is moving through the Hype Cycle now
because its concepts are being applied more broadly and on a higher level. Business events,
such as purchase orders, address changes, payments, credit card transactions or Web "clicks"
are being used as a focus in application design. This contrasts with past treatments of events, in which business applications addressed events more indirectly, and event modeling was considered to
be secondary to data modeling, object modeling and process modeling. Businesses have always
been real-time, event-driven systems, but now more aspects of their application systems are also
real-time systems. EDA concepts are also used on a technical level to make application servers
and other software more-efficient and scalable. The spread of other types of SOA (conventional,
request/reply SOA) is also helping to pave the way for EDA because some of the concepts,
middleware tools and organizational strategies are the same.
User Advice: In an era of accelerating business processes, pervasive computing and exploding
data volumes, companies must master event processing if they are to thrive. Companies should
use event processing in two ways: to engineer more-flexible application software through the use
of message-driven processing, and to gain better insight into current business conditions through
complex-event processing (CEP). Architects can use available methodologies and tools to build
good EDA applications, but must consciously impose an explicit focus on events because
standard methodologies and tools do not yet make events first-class citizens in the development
process. Companies should implement EDA as part of their SOA strategy because many of the
same middleware tools and organizational techniques (such as using an SOA center of
excellence [COE] for EDA and for other kinds of SOA) apply. Companies should not implement
request/reply SOA now and wait for one or two years to implement EDA SOA because a
request/reply-only SOA strategy will not be able to support some business requirements well.
Business Impact: EDA is relevant in every industry. Large companies experience literally trillions
of ordinary business events every day, although only a minority of these are represented as event
objects, and only a tiny minority of those event objects are fully exploited for their maximum
information value. The number and size of event streams are growing as the cost of computing
and networking continues to drop. Companies now generate data on events that were never
reported in the past. The CEP type of business EDA was first used in financial trading, energy
trading, supply chain management, fraud detection, homeland security, telecommunications,
customer contact centers, logistics and sensor networks, such as those based on radio frequency
identification (RFID). Event processing is a key enabler in business activity monitoring (BAM),
which makes business operations more visible to end users.
Benefit Rating: Transformational
Market Penetration: Five percent to 20% of target audience
Maturity: Adolescent
Sample Vendors: Actimize; Agent Logic; Agentis Software; Aleri; Avaya; Axeda; BEA Systems;
coral8; Cordys; Event Zero; Exegy; firstRain; IBM; jNetX; Kabira; Kx Systems; open cloud;
Oracle; Progress Software/Apama; Red Hat (Mobicents); Rhysome; SAP; SeeWhy; StreamBase
Systems; Sun; Sybase; Syndera; Synthean; Systar SA; Tibco Software; Truviso; Vayusphere;
Vhayu; Vitria Technology; WareLite
Metadata Repositories
Analysis By: Michael Blechar; Jess Thompson
Definition: Metadata is an abstracted level of information about the characteristics of an artifact,
such as its name, location, perceived importance, quality or value to the organization, and
relationship to other artifacts. Technologies called "metadata repositories" are used to document,
manage and perform analysis (such as change impact analysis and gap analysis) on metadata in
the form of artifacts representing assets that the enterprise wants to manage. Repositories cover
a wide spectrum of metadata/artifacts, such as those related to business processes, components,
data/information, frameworks, hardware, organizational structure, services and software in
support of focus areas like application development, data architecture, data warehousing and
enterprise architecture (EA).
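As an illustration of the kind of analysis such repositories perform, the sketch below (all artifact names and the in-memory structure are invented) answers a change-impact query by walking reverse dependencies between artifacts:

```java
// Illustrative sketch of a change-impact query over a graph of artifacts and
// "depends on" relationships; the structure and names are invented for this example.
import java.util.*;

public class ImpactAnalysisSketch {
    // artifact -> artifacts that depend on it (reverse dependency edges)
    private final Map<String, Set<String>> dependents = new HashMap<>();

    void relate(String artifact, String dependsOn) {
        dependents.computeIfAbsent(dependsOn, k -> new HashSet<>()).add(artifact);
    }

    /** Breadth-first walk of the reverse dependencies yields the change-impact set. */
    Set<String> impactOf(String changedArtifact) {
        Set<String> impacted = new LinkedHashSet<>();
        Deque<String> queue = new ArrayDeque<>(List.of(changedArtifact));
        while (!queue.isEmpty()) {
            for (String dep : dependents.getOrDefault(queue.poll(), Set.of())) {
                if (impacted.add(dep)) queue.add(dep);
            }
        }
        return impacted;
    }

    public static void main(String[] args) {
        ImpactAnalysisSketch repo = new ImpactAnalysisSketch();
        repo.relate("CustomerService", "CustomerTable");   // the service depends on the table
        repo.relate("OrderProcess", "CustomerService");    // the process depends on the service
        // Changing the table impacts the service and, transitively, the process.
        System.out.println(repo.impactOf("CustomerTable")); // [CustomerService, OrderProcess]
    }
}
```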
Position and Adoption Speed Justification: Most organizations that have tried to implement a
single enterprise metadata repository have failed to meet the expected return on investment.
Community-based repositories supporting business process modeling and analysis, SOA and
data integration have shown benefits in improved quality and productivity through an improved
understanding of the artifacts, impact queries and the reuse of assets such as services and
components. For the near future, there will be no proved, viable solution that federates multiple
metadata repositories (or federates repositories with other technologies that contain metadata,
like service registries holding runtime metadata artifacts) sufficiently to satisfy the needs of
organizations.
Mainstream IT organizations will find that the most pragmatic approach to metadata management
and reporting is to have multiple, community-based repositories, which have some degree of
federation and synchronization. Although it is possible to create federated queries across multiple
repositories, many organizations still may want to consolidate and aggregate selected metadata
information from disparate sources into a "metadata warehouse" for ease of reporting and for ad
hoc query purposes. Leading metadata repository vendors are well-positioned to meet this need,
but competitors will emerge, including large, independent software vendors (ISVs), which will look
to provide these capabilities in their tool suites. Large vendors, such as IBM, Oracle and SAP, are
adding repositories or are improving their repository support for design-time and runtime
platforms to enhance metadata management support for their development and deployment
environment. As a result, Gartner expects to see a broader degree of acceptance by customers,
along with a consolidation in this market during the next few years. We position metadata
repositories as being two to five years from plateau, because most Global 1000 companies have
purchased metadata repositories and are not yet aggressively seeking replacements, and
because most new buyers are less-sophisticated IT organizations looking to large ISVs to
improve their federation capabilities before committing to the new tools. As a result, most
repository purchases will be tactical in nature based on the needs of specific communities, such
as data warehousing and SOAs.
User Advice: Owing to the diversification and consolidation of metadata management solutions,
the enterprise uber-repository market no longer exists. Consider acquiring a metadata repository, or extending your use of one, as part of moving to SOAs or implementing BPM, data architecture, data warehousing and EA initiatives. Most organizations will be best-served by living
with metadata in multiple tools or by using different repositories based on communities of interest,
with some limited bridging or synchronization to promote the reuse and leveraging of knowledge
and effort. Organizations that need to approximate the capabilities of an enterprise metadata
repository are still best-served by solutions from leading repository vendors.
Business Impact: Metadata repository technology can be applied to aspects of business,
enterprise, information and technical architectures, including the portfolio management and
cataloging of software services and components; business models; data-warehousing ETL rules;
business intelligence transformations and queries; data architecture; electronic data interchange;
and outsourcing engagements.
Benefit Rating: High
Market Penetration: Five percent to 20% of target audience
Maturity: Adolescent
Sample Vendors: Allen Systems Group; BEA Systems; LogicLibrary
RIA Platforms
Analysis By: Ray Valdes
Definition: Rich Internet Application (RIA) platforms enable organizations and software vendors
to build applications that provide a richer, more-responsive user experience compared with older-generation, "plain browser" Web platforms. RIA platforms and technologies span a range of approaches that, from a runtime perspective, fall into three basic categories: browser-only, enhanced browser and outside-the-browser.
The browser-only approach is known as Ajax, which leverages the capabilities that are already
built into every modern browser (for example, Firefox, Internet Explorer, Opera and Safari), such
as the JavaScript language engine and the Document Object Model support. The Ajax approach
is supported by vendors, such as Backbase, Jackbe and Tibco, and by open-source toolkits, such
as Dojo and Kabuki. The enhanced-browser approach begins with a browser and extends it with
a plug-in or other browser-specific machine-executable component (unlike the JavaScript-centric
Ajax approach, which is mostly browser-independent). Examples of this approach are Adobe
Flash (further enhanced by Adobe Flex server-side technology), Google Gears, Microsoft
Silverlight and the Curl RIA platform from Curl.
The outside-the-browser approach means adding some large-footprint system software to the
client operating environment, such as the Java Virtual Machine (JVM) runtime, the Microsoft .NET
language environment or the Adobe Integrated Runtime (AIR) software stack. On top of this stack
can be additional layers that add capabilities for client-side data persistence, automatic
provisioning and versioning of platforms and applications, and migration of server-side
component models. Examples of this approach include Adobe AIR, IBM Lotus Expeditor,
Microsoft Windows Presentation Foundation and Sun JavaFX.
Position and Adoption Speed Justification: Major system vendors, such as IBM and Microsoft,
have been talking about a "rich client" or "smart client" alternative to plain browser-based user
interfaces since the early part of this decade. The concept and road map were driven largely by vendors' agendas for maintaining a system software footprint on a user's devices (desktop PCs,
laptops and PDAs) that was more than a basic browser, which was perceived to be commodity
technology. However, in 2005, the use of Ajax (a "basic browser" technology) appeared on the
scene and enjoyed explosive growth, blind-siding vendors' road maps based on heavyweight
technologies (for example, Microsoft WinForms with ClickOnce technology). In 2007, there have
been high-profile new initiatives such as Adobe AIR, Microsoft Silverlight, IBM Lotus Expeditor
and Sun JavaFX that indicate a renewed effort on the part of vendors to go beyond the basic
browser.
User Advice: To gain real value from RIA technology, invest in an enhanced development
process based on empirically proven usability design principles and on continuous improvement
before investing in any user interface technology.
Business Impact: A user experience that is perceptibly better than other offerings in a product
category can provide sustainable, competitive advantage. Consider the flagship examples of the
RIA/Ajax genre, such as Google's Gmail, Maps and Calendar applications, which achieved high
visibility and strong adoption despite entering late into a mature and stable product category.
However, competitive advantage is not a guaranteed result of RIA technology deployment, and
depends on innovations in usability (independent of technology) and on server-side architectures
that complement client-side user interface technology. Many organizations do not have the
process maturity to deliver a consumer-grade user experience and will need to acquire talent or
consulting resources to achieve positive business impact.
Benefit Rating: High
Market Penetration: Five percent to 20% of target audience
Maturity: Emerging
At the Peak
Application Testing Services
Analysis By: Frances Karamouzis; Allie Young; Lorrie Scardino
Definition: Application testing services include all types of validation, verification and testing
services for quality control and assurance within the application development life cycle to deliver
software that is developed according to defined specifications and will operate in a production
environment. Testing services, which have always been an integral part of the application
development life cycle, are now increasingly carved out as a separate competency area, often
supported by a distinct development methodology. Testing services may be performed manually
or with automation tools, and carried out by internal IT resources or by external service providers (ESPs).
The scope of application testing services includes various functions that go by different names,
such as unit testing (which is done by the application developers), integration testing, system
testing, functional testing, regression testing, performance/stress testing, usability testing and
security testing. Application testing applies to custom application development or packaged
applications, as well as single applications or many applications. When externally sourced,
application testing services may be purchased as staff augmentation, discrete project work and
longer-term outsourcing engagements.
Position and Adoption Speed Justification: In the past three years, more attention and focus
have been placed on testing services. Several business factors have accelerated this focus:
• The cost of software defects is better understood today than it has been historically, because organizations are getting better at baselining costs and performance/service levels as part of a larger sourcing strategy.
• Accelerated release cycles for business applications are a reality, with more applications directly touching the customer and application availability directly tied to revenue performance. The cost of software defects is more visible in many industries.
• When organizations look for additional ways to cut costs, especially after already outsourcing, testing and QA emerge as good candidate services.
• The rise in the use of external providers for application development, especially Indian offshore providers, has raised awareness of the need for improved processes and methodologies.
These factors have converged to accelerate the hype associated with application testing services.
On the demand side, organizations have great expectations when they decide to externally
source testing services but often have not considered the implications of doing so, or the way in
which they should structure the contract and relationship. IT decision makers generally do not
engage the right number and level of developers in the planning process, and keep business
users at the periphery. This leads to integration problems and conflicts among resources, made
more significant because an external source is assessing the quality of others' work products,
which often includes the products of other external sources.
On the supply side, the opportunity to leverage low-cost labor by using offshore resources for off-hours testing (relative to the client's workday) is especially appealing to pure-play offshore providers. They have invested in expanding their testing services to offer them as stand-alone services to an existing client base. Niche providers also emerged in offshore locations as
testing specialists. When demand reached critical mass, traditional providers started to see the
opportunity and began to make investments to compete with the offshore providers. Thus, there
has been a rapid proliferation of providers that claim to have application testing expertise.
Providers will accept work, especially from an existing client, in virtually any way the client wants
to scope and pay for it. This opportunistic approach perpetuates an environment that lacks
standards for scope of work, service levels, price, contractual terms and other attributes, a situation consistent with an immature and hyped service offering.
User Advice: If isolating testing functions makes sense as part of your sourcing strategy, then
ensure that you have a well-defined scope, clear performance requirements, measurable success
criteria and engagement with all the application and user groups that will be integral to the testing
process. As a discrete function, the organization must have the resources, methodology and
practices in place to provide output to the testing provider, and then receive input when the
function is completed. Many organizations can operate in this type of environment, while many
others prefer broader accountability, such as what exists at the application level.
When evaluating providers, ensure that you give proper weighting to the level of maturity,
automation and process standardization that the provider has achieved in testing services when
offered and delivered as stand-alone services. Consider providers with dedicated business units
for testing with consistent revenue growth for that business area. If the business unit is relatively
new, then require the provider to demonstrate its commitment to this market. Check references
carefully and match your specific requirements to similar engagements.
View testing as part of the application development life cycle, even if it is externally sourced as a
discrete function. Ensure alignment between the application development methodology and the
testing methodology. Build knowledge transfer into the outsourcing action plan. The selected
provider will need to learn your methodology, and you will need to learn its. Organizations that
want to leverage a provider's intellectual property must pay special attention to knowledge
transfer and training during the transition process.
Application testing services may be purchased in various ways, and organizations need to be
clear about their objectives and the value proposition of each option. Staff augmentation is used
to address resource constraints. Organizations are responsible for directing the resources and
ensuring the outcomes. Discrete project work is typically used in two scenarios: for a specific
application development effort that requires independent testing or as a consulting-led project to
evaluate the efficacy of changing the way application testing is performed. These consulting-led
projects are often described as pilot programs and will often lead to long-term outsourcing
contracts. Finally, testing services purchased through an outsourcing contract signal the
organization's commitment to leverage the market's expertise and assign delivery responsibility to
an external source.
Organizations considering various sourcing options are likely to encounter an aggressive sales approach that seeks to broaden the scope of application services beyond the organization's intent. In many
cases, a broader scope of work might provide benefits from leveraging the provider's process
maturity to build quality into the software as opposed to simply testing the quality of the software.
Although this is a worthwhile aspiration, organizations must ensure that they are prepared to
invest in broader quality programs before engaging in relationships of this nature.
Business Impact: The major business impacts of application testing services include:
• Cost savings in the discrete application development life cycle and in the longer-term, ongoing cost of maintaining the application
• Decreased time to implement new applications or functionality
Many organizations do not know how much they are spending on application testing and software
QA, nor do they understand the true cost of inadequate testing processes. Furthermore, most do
not have discretionary budgets to develop world-class testing services. The lack of testing and
QA standards and consistency often leads to business disruption, which can be costly. However,
most organizations do not use a process that links testing failures to business disruption on a cost
basis. Application testing is a case where the use of an external provider can be effective but
sometimes difficult to clearly demonstrate.
Benefit Rating: Moderate
Market Penetration: One percent to 5% of target audience
Maturity: Emerging
Sample Vendors: AppLabs Technologies; Aztecsoft; Cognizant; EDS; Hexaware; IBM; Infogain;
Infosys Technologies; Keane; Satyam Computer Services; Tata Consultancy Services; Thinksoft;
Wipro Technologies
SOA Governance Technologies
SOA registries and repositories help manage metadata related to SOA artifacts (for example,
services, policies, processes and profiles) and have recently evolved to include the creation and
documentation of the relationships (that is, configurations and dependencies) between various
metadata and artifacts.
SOA QA and validation technologies validate the individual SOA artifacts, and determine the
relationships to each other within the context of an SOA deployment. For example, these
technologies will test and validate a composite service that executes specific processes, while
having specific policies enforced on it.
Monitoring is present throughout the individual technical domains and enables companies to
study an SOA and its environment and provide deeper, real-time business intelligence and
analytics applications. It also helps them check that the various governance processes are actually followed. Business activity monitoring (BAM; see "MarketScope for Business Activity Monitoring Platforms, 3Q06") plays a key role in the evolution and agility of an SOA and is the foundation for future complex event processing scenarios across the SOA life cycle (a cycle of developing, testing, deploying, monitoring, analyzing and refining).
Adapters, interfaces, application program interfaces and interoperability standards enable all the
technical domains to communicate and share information, as well as enable the governance suite
to be integrated with existing infrastructure applications, such as business applications,
integration middleware or OSs, for optimal policy definition and execution.
Position and Adoption Speed Justification: SOA governance technologies, specifically the
service registry, and SOA policy enforcement (service management and service security) have
been hyped by vendors and end users; many end users are deploying these technologies without
credible SOA governance organizational processes and strategies. As a result, service registries
and policy enforcement tools are often underused today (only for cataloging and XML security).
With more vendors entering into OEM agreements and partnerships with best-of-breed vendors,
these technologies will reach the Peak of Inflated Expectations within 12 months. However,
because most SOA deployments will likely fail without proper governance, companies will
eventually move to better leverage SOA governance technologies to provide visibility,
manageability, monitoring, security and QA.
User Advice: Regardless of the overhyping of SOA governance, companies deploying SOAs
need to first develop a strategy and process for SOA governance that encompass technologies
and organizations. Deploying a service registry for reuse and developing some policies around
the development of services is a good start, but companies should plan on using that registry for
SOA life cycle management and for visibility into various SOA artifacts.
Business Impact: Any company or division deploying an SOA will be impacted by SOA
governance. Entities providing software as a service, integration as a service, business-to-business services or hosting applications should take advantage of SOA governance
technologies to enhance their offerings, better manage their SOA artifacts and obtain competitive
differentiation.
Benefit Rating: High
Market Penetration: Five percent to 20% of target audience
Maturity: Adolescent
Sample Vendors: Actional; AmberPoint; BEA; HP-Mercury; iTKO; IBM; Layer 7 Technologies;
LogicLibrary; Oracle; Reactivity; Software AG/webMethods; SOA Software; Tibco Software;
Vordel; WebLayers
Globally Sourced Testing
the software, as opposed to testing quality in, as well as to move toward automated testing processes and environments.
Benefit Rating: High
Market Penetration: Five percent to 20% of target audience
Maturity: Adolescent
Sample Vendors: AppLabs Technologies; Cognizant Technology Solutions; IBM Global
Technology Services; Infogain; ReadyTestGo; Tata Consultancy Services; Wipro Technologies
Model-Driven Architectures
Analysis By: David Norton; David Cearley; David McCoy
Definition: The term "Model Driven Architecture" is a registered trademark of the Object
Management Group (OMG). It describes OMG's proposed approach to separating business-level
functionality from the technical nuances of its implementation (see www.omg.org/mda). The
premise behind OMG's Model Driven Architecture and the broader family of model-driven
approaches (MDAs) is to enable business-level functionality to be modeled by standards, such as
Unified Modeling Language (UML) in OMG's case; allow the models to exist independently of
platform-induced constraints and requirements; and then instantiate those models into specific
runtime implementations, based on the target platform of choice.
"Model-driven," as in "model-driven software engineering," is a commonly (if sometimes
generically) used prefix that denotes concepts in which an initial model creation period precedes
and guides subsequent efforts, including model-driven application development, such as SODA;
model-driven engineering; and model-driven processes, such as BPM. "Model-driven" has
become a "catchall" phrase for an entire genre of approaches.
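Expressed in plain Java rather than UML, and purely as an illustration of the separation of concerns MDA aims for (the service names are hypothetical and no OMG tool chain is implied), a platform-independent contract is kept free of platform detail, while tools would generate the platform-specific instantiations:

```java
// Illustrative only: the separation MDA aims for, sketched in plain Java rather than UML.
// The names are hypothetical; a generator, not shown, would emit platform-specific bindings.

/** Platform-independent model element: a pure business contract, free of transport or middleware detail. */
interface LoanApprovalService {
    boolean approve(String applicantId, double amount);
}

/** One possible platform-specific instantiation (here a trivial in-process one);
 *  an MDA/AMD tool would generate equivalents for, say, a Java EE or .NET target. */
class InProcessLoanApprovalService implements LoanApprovalService {
    public boolean approve(String applicantId, double amount) {
        // Business rule captured at the model level: auto-approve small amounts only.
        return amount <= 25_000;
    }
}
```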
Position and Adoption Speed Justification: Core supporting standards, such as UML
(referenced by OMG's Model Driven Architecture), are well-established; however, comprehensive
MDAs as a whole are less mature than their constituent supporting standards in terms of vendor
support and actual deployment in the application architecture, construction and deployment cycle.
An MDA represents a long-standing goal of software construction that has seen prior incarnations
and waves of Hype Cycle positioning (for example, computer-aided software engineering
technology). The goal remains the same: Create a model of the new system, and then enable the
model to become transformed into the final system as a separate and significantly simplified step.
As always, such grand visions take time to catch on, and they face significant hurdles along the
way. A new wave of model-driven hype is emerging.
User Advice: Technical and enterprise architects should strongly consider the implications of
implementing architectural solutions that are not MDA-compliant. However, all major vendors will
provide adherence, to at least some degree, in their tools, coupled with best-practice extensions
beyond MDA standards. Organizations implementing SOAs should pay close attention to the
MDA standards and consider acquiring tools that automate models and rules. These include
architected rapid application development (ARAD) and architected model-driven (AMD)
technologies and rule engines supporting code-generating and non-code-generating (late
binding) implementations.
AMD is primarily suited to complex projects that require a high degree of reuse of business
services, where you can put significant time into business process analysis (BPA) and design. At
the same time, no competent organization would want to do AMD-only development, because the
additional time and cost of the analysis and design steps would not bring adequate return on
investment or agility for time- and/or budget-constrained application development projects. The
ideal solution is to mix AMD, ARAD and rapid application development (RAD) methods and tools.
Business Impact: MDAs reinforce the focus on business first and technology second. The
concepts focus attention on modeling the business: business rules, business roles, business
interactions and so on. The instantiation of these business models in specific software
applications or components flows from the business model. By reinforcing the business-level
focus and coupling MDAs with SOA concepts, you end up with a system that is inherently more
flexible and adaptable. If OMG's Model Driven Architecture or the myriad related MDAs gain
widespread acceptance, then the impact on software architecture will be substantial. All vertical
domains would benefit from the paradigm.
Benefit Rating: High
Market Penetration: One percent to 5% of target audience
Maturity: Emerging
Sample Vendors: BEA Systems; Borland; Compuware; IBM; Kabira; OMG; Pegasystems;
Telelogic; Unisys
Scriptless Testing
Analysis By: Thomas Murphy
Definition: Scriptless-testing tools are second-generation testing tools that reduce the amount of
manual scripting needed to create tests using data-driven approaches. The goal is to keep the
test project from becoming another development project, and to enable business user testing.
These tools have a broad set of pre-defined objects that can interact with the application being
tested, including error handling and data management. As the tools mature, they'll continue to
shift toward a more model-driven approach.
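A hedged sketch of the data-driven idea these tools build on, using JUnit's parameterized runner: the test logic is written once and rows of data, which a business analyst could maintain externally, drive the cases. DiscountRule is a hypothetical object under test:

```java
// Hedged sketch of data-driven testing: one test body, many data rows.
// DiscountRule and the data values are invented for illustration.
import static org.junit.Assert.assertEquals;
import java.util.Arrays;
import java.util.Collection;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

@RunWith(Parameterized.class)
public class DiscountRuleDataDrivenTest {

    static class DiscountRule {                       // hypothetical application logic
        double apply(double orderTotal) { return orderTotal >= 500 ? orderTotal * 0.95 : orderTotal; }
    }

    @Parameters
    public static Collection<Object[]> rows() {       // a real tool would pull these rows from an external table
        return Arrays.asList(new Object[][] {
            { 100.0, 100.0 },    // below threshold: no discount
            { 500.0, 475.0 },    // at threshold: 5% off
            { 1000.0, 950.0 },   // above threshold: 5% off
        });
    }

    private final double input;
    private final double expected;

    public DiscountRuleDataDrivenTest(double input, double expected) {
        this.input = input;
        this.expected = expected;
    }

    @Test
    public void appliesDiscountPerDataRow() {
        assertEquals(expected, new DiscountRule().apply(input), 0.001);
    }
}
```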
Position and Adoption Speed Justification: Although these tools reduce the amount of code to
be written, they don't remove the need for skilled testers. Scriptless testing makes it easier for
business analysts to be involved in testing efforts, but the analysts must still be paired with quality
engineers to drive testing effectiveness. This is especially important with packaged applications.
The emergence and changing nature of SOA and the tools supporting it will extend the time
needed for this market to mature, and additional areas (such as data management) will suppress
the expected results. Tools and users will reach the Slope of Enlightenment during the next two
years and take another three to five years to reach the Plateau of Productivity.
The promise of being "script-free" has existed for several years; however, although improvements
have been made, it's unlikely that all scripts can be removed for all applications. Expect the
greatest benefits to come from domain-limited tools. Tools will also gain capabilities as model-oriented approaches appear, but these will require skills and model management to be effective.
User Advice: Evaluate tools that reduce the cost of testing. In addition, recognize that these tools
aren't meaningfully well-integrated with leading application life cycle management suites, which
reduces a team's ability to coordinate effectively. Although these tools will reduce the need for
scripting, well-designed tests still require skill, and business users typically don't have the right
skills and mind-set for this.
Business Impact: Scripting-centric tools are labor-intensive not only for the initial creation, but
also for maintenance. Scriptless testing will reduce overall testing costs and enable better
coverage, which should lead to improved defect detection earlier in the development cycle (thus
Architected, Model-Driven SODA
for a subsection of the application portfolio, so they should be coupled with ARAD and other rapid
development tools as part of an application development tool suite.
Benefit Rating: High
Market Penetration: One percent to 5% of target audience
Maturity: Emerging
Sample Vendors: CA; Compuware; IBM; Mia-Software; Oracle; Telelogic; Wyde
Enterprise Architecture Tools
in your industry and any related tool capabilities, such as support for an industry-specific
architecture framework; and the vendor's understanding of EA.
Business Impact: Business strategists, planners and analysts can derive considerable benefit
from an EA tool, because it helps them to better understand the complex system of IT resources
and its support of the business. Crucially, this visibility helps to better align IT with the business
strategy, as well as providing other benefits, such as improved disaster recovery planning.
Benefit Rating: High
Market Penetration: Five percent to 20% of target audience
Maturity: Adolescent
Sample Vendors: ASG; Casewise; IDS Scheer; Mega International; Proforma; Sybase;
Telelogic; Troux Technologies
Recommended Reading: "Telelogic's System Architect for Enterprise Architecture"
"Follow These Best Practices to Optimize Architecture Tool Benefits"
"Troux: Innovative Enterprise Architecture Tools"
"Cool Vendors in Enterprise Architecture, 2007"
Business Impact: Enterprises adopting application security testing technologies and processes
will benefit from risk and cost reductions, because these technologies and processes provide
early detection and correction of vulnerabilities before applications move into production and
become open to attack.
Benefit Rating: High
Market Penetration: Five percent to 20% of target audience
Maturity: Adolescent
Sample Vendors: Acunetix; Cenzic; Coverity; Fortify Software; Klocwork; Ounce Labs; SPI
Dynamics; Veracode; Watchfire
Recommended Reading: "MarketScope for Web Application Security Vulnerability Scanners,
2006"
"Market Definition and Vendor Selection Criteria for Source Code Security Testing Tools"
User Advice: IT project organizations should pursue a PPM initiative that includes three parts:
The PPM system evaluation is an important part of a PPM initiative, but it is not the only part, and
the change management required for a successful PPM initiative should not be ignored. PPM
systems are not "off the shelf" systems. Initial implementations should be controlled and narrow,
reflecting the commonly low PPM process maturity levels among IT departments.
Business Impact: Visibility into work demand, committed work and resources (time, people and
money) used and available is the first step toward cost savings and a more-effective response to
rapid business change in IT departments. PPM systems help organizations lower investment risk,
reduce operational costs, increase work execution efficiency and quality, and more effectively
align IT output with business strategy.
Benefit Rating: High
Market Penetration: One percent to 5% of target audience
Maturity: Early mainstream
Sample Vendors: CA; Compuware; HP; IBM; Planview; Primavera Systems
are accurate. The amount and type of testing that an organization will require is driven by the
complexity of the implementation: multiple geographies, workflow customization, integration with
other applications and regulatory compliance.
Business Impact: Performance tuning can save considerable amounts on deployment hardware
(as much as 50%). Effective functional testing early in development will reduce the cost of repairs
and rollout failures. Building automation tests for key workflows will enable faster rollout of
maintenance releases and add-on functionality while reducing the risk of application failure.
Benefit Rating: High
Market Penetration: Five percent to 20% of target audience
Maturity: Early mainstream
Sample Vendors: Arsin; Gamma Technologies; HP; Newmerix; SAP; Sucid; Worksoft
User Advice: Adopt agile approaches judiciously, on internal projects with sophisticated teams
that are not averse to process discipline. "Coding cowboys" are not the developers who will
capture lessons learned from agile development projects and help build agile capability in the
organization. Agile projects need smart, disciplined developers who understand patterns. Agile proponents state that they value working software "over" process and documentation, whereas less-sophisticated developers value it to the exclusion of ("instead of") process. A key driver of agility is the increasing number of tools that enable agile practices and provide the "comfort" that a heavier process would otherwise provide. That comfort comes from real information and real data, not just a big, static document. Examples are code review tools, unit testing tools, continuous integration
tools, metrics tools and project management tools that help automate and drive information into
repositories to support management views of project status.
Business Impact: In some Type A development organizations, agile approaches on some
projects are already delivering the benefits of fast, accurate delivery of priority application
requirements. Although rarely the dominant approach, even the lessons that aggressive adopters
(Type A) learned from agile development have a positive impact on other methodological
approaches in the development organization. Tight business collaboration (on-site customer) is a
key success factor with agility, but it's also the most broken principle ("You can have my domain
expert for three weeks, no more"). The successful adoption of agility will require more business
involvement in the development process than most organizations have committed or experienced
in older methods.
Benefit Rating: Moderate
Market Penetration: One percent to 5% of target audience
Maturity: Early mainstream
Sample Vendors: Borland; IBM; Jaczone AB; Rally Software Development; Thought Equity
Motion; ThoughtWorks; VersionOne
Recommended Reading: "Borland Runs With the Continuous Integration Gauntlet"
"Pairing Agility with Quality: Gartner's 10 Principles of NeoRAD"
"Agile Requirements Definition and Management Will Benefit Application Development"
"Agile Development: Fact or Fiction"
Unit Testing
Analysis By: Thomas Murphy
Definition: Unit-testing tools are designed to test application functionality at the "unit" of
modularity. In object-oriented programming, this is generally at the class level; in SOA, it is a
service. Unit testing is advocated in agile development methods and is put to use by some
organizations as part of test-driven development. These tools provide frameworks for
automatically generating unit tests, managing and running suites of tests, and reporting results.
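For illustration, a minimal JUnit-style test fixture is sketched below; it assumes JUnit 4 on the classpath, and the OrderCalculator class is a hypothetical unit under test.

```java
// A minimal JUnit 4 unit test; the "unit" here is a single class with no
// external dependencies, nested only to keep the sketch self-contained.
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class OrderCalculatorTest {

    // Hypothetical unit under test.
    static class OrderCalculator {
        private final double taxRate;
        OrderCalculator(double taxRate) { this.taxRate = taxRate; }
        double total(double subtotal) { return subtotal * (1.0 + taxRate); }
    }

    @Test
    public void totalIncludesTax() {
        assertEquals(110.0, new OrderCalculator(0.10).total(100.0), 0.001);
    }

    @Test
    public void zeroSubtotalYieldsZeroTotal() {
        assertEquals(0.0, new OrderCalculator(0.10).total(0.0), 0.001);
    }
}
```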
Position and Adoption Speed Justification: Tools in this space began in the open-source
market more than five years ago and are commonly used. Environments now provide support for
unit tests as part of check-in criteria and have the capability to discover needed tests and
boundary conditions. In addition, tools are appearing on top of basic unit testing frameworks to
support the creation of user interface functional tests. Driving testing early into the life cycle and
running the test fixtures frequently can result in dramatic quality improvements and greatly
reduced costs by detecting defects early.
Although unit testing is relatively well-known and adoption of the core frameworks is mainstream,
the use of unit testing is expanding as new solutions are layered on top of current frameworks,
pushing unit testing to do more (for example, functional testing). In addition, tools that automate
the generation of unit tests promise big results but suffer in performance. Because most
organizations use a test-second rather than test-driven approach, they do not gain all the
promised benefits of unit testing.
User Advice: Drive the use of unit testing in your organization, and train developers on the
effective use of these tools to help in design discovery and refactoring. Use these tools to ensure
that application code presents contract-oriented interfaces, which will enable smoother transitions
toward SOAs. Although open source is cost-efficient, the automation provided in integrated development environments and by commercial products improves productivity and reduces the amount of code that must be created.
Business Impact: Unit testing can reduce the amount of time needed to functionally test an
application, and can also provide a solid baseline for functional and load testing. The appropriate
use of the technology can drive out defects earlier in the development process, thus reducing the
cost to fix defects.
Benefit Rating: Moderate
Market Penetration: Twenty percent to 50% of target audience
Maturity: Early mainstream
Sample Vendors: Agitar Software; Eclipse; Instantiations; Microsoft; Open Source Applications
Foundation; Parasoft; United Binary
ARAD SODA
Analysis By: David Norton; Mark Driver
Definition: In the ARAD process for SODA, architectural and design patterns are modified by an organization's technical architects to adhere to the company's architectural standards. All architecture-related code in applications is then generated from those patterns in a compliant manner.
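As a toy illustration of the generation idea only, the sketch below emits architecture-compliant service classes from an architect-maintained pattern template; the template, class names and pattern are hypothetical and far simpler than what ARAD tools derive from full design models.

```java
// Toy ARAD illustration: architecture-compliant code is generated from a pattern
// template rather than hand-written. Template and entity names are hypothetical.
public class ServicePatternGenerator {
    private static final String TEMPLATE =
        "public class %sService {\n" +
        "    // Generated to comply with the corporate service-layer pattern:\n" +
        "    // all data access goes through %sRepository, never direct SQL.\n" +
        "    private final %sRepository repository = new %sRepository();\n" +
        "    public %s findById(long id) { return repository.load(id); }\n" +
        "}\n";

    public static String generate(String entity) {
        return String.format(TEMPLATE, entity, entity, entity, entity, entity);
    }

    public static void main(String[] args) {
        // Architects adjust the template once; every generated class stays compliant.
        System.out.println(generate("Customer"));
        System.out.println(generate("Invoice"));
    }
}
```

Because developers consume generated skeletons rather than hand-coding the architectural plumbing, compliance with the corporate pattern becomes a by-product of the tooling.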
Position and Adoption Speed Justification: Architectural standards continue to evolve.
Technologies that can use these standards to generate Java 2 Platform, Enterprise Edition
(J2EE) and .NET code are moving toward mainstream use. As mainstream users of RAD tools
experience the steep learning curves and lost productivity associated with building more-complex
service-oriented applications, they will look to ARAD tools for relief.
User Advice: To reduce the learning curve and increase the productivity of J2EE or .NET
developers, or to ease the transition of traditional client/server developers using COBOL, C or
4GLs, enterprises should consider providing these developers with ARAD methods and tools.
Business Impact: Design tools, coupled with code generators, are used to ensure compliance
with business and technical models and architectures, while providing productivity and quality
improvements. Gartner studies indicate that productivity gains using ARAD methods and tools
are typically 30% to 40% compared with traditional client/server methods and tools. Application
and development managers should look for a return within 12 months of ARAD implementation.
Benefit Rating: High
Market Penetration: Five percent to 20% of target audience
Maturity: Adolescent
Sample Vendors: CA; Codagen Technologies; Compuware; IBM; Interactive Objects; Mia-Software; Microsoft; ObjectVenture; Wyde
SOA
Analysis By: Roy Schulte; Yefim Natis
Definition: SOA is a style of application architecture. An application is an SOA application if it is
modular; the modules are distributable; software developers have written or generated interface
metadata that specifies an explicit contract so that another developer can find and use the
service; the interface is separate from the implementation (code and data) of the service provider;
and the services are shareable; that is, designed and deployed in a manner that enables them
to be invoked successively by disparate consumers. Unlike some other types of distributed
computing, services in SOA can be shared across applications running on disparate platforms
and are inherently easier to integrate with software from other development teams.
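To make the contract/implementation separation concrete, here is a minimal Java sketch using the JAX-WS annotations available in Java SE 6 and later; the CustomerLookup contract and its implementation are hypothetical, and real SOA services layer versioning, policy and governance on top of this.

```java
// CustomerLookup.java: the explicit, shareable contract. The interface (and the
// WSDL metadata generated from it) is what other developers find and bind to.
import javax.jws.WebService;

@WebService
public interface CustomerLookup {
    String findCustomerName(String customerId);
}

// CustomerLookupService.java: the implementation, kept separate from the contract
// so it can change or be redeployed without breaking consumers.
import javax.jws.WebService;

@WebService(endpointInterface = "CustomerLookup")
public class CustomerLookupService implements CustomerLookup {
    public String findCustomerName(String customerId) {
        // A real service would query a data store; hardcoded for illustration.
        return "ACME-" + customerId;
    }
}
```

Consumers bind only to the published interface (or to the WSDL generated from it), which is what makes the service shareable across applications and platforms.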
Position and Adoption Speed Justification: The use of SOA is accelerating in response to
escalating business requirements, the emergence of Web and Web services standards (such as
WSDL and SOAP) and the improving availability of SOA-capable development tools and
applications. Competition, globalization and technology advances are driving companies to
change their products, business processes and prices more frequently than they did before the
mid-1990s. The growing use of BPM and BAM is also causing companies to use more SOA
because BPM and BAM are more-effective and easier to develop when using SOA. Vendors of
middleware, development tools and packaged applications have committed to moving to SOA,
and their product lines are well into the transition. User companies are moving more slowly, on
average, and they are experiencing varying degrees of difficulty in ramping up their use of SOA.
These difficulties hinder, but will not prevent, the spread of SOA throughout the application
portfolios of large companies. The growing, if limited, practical experience with SOA has
demonstrated the real costs and benefits of the transition to SOA. SOA skepticism is gradually
giving way to a realistic anticipation of costs and benefits. Development and management best
practices for SOA are still not fully mature, but companies are largely satisfied with their
experience with it.
User Advice: Use SOA when designing new business applications, particularly those whose life
spans are expected to be more than three years and that will undergo continuous refinement,
maintenance or enlargement. SOA is especially well-suited for building composite applications.
When buying packaged applications, rate those that implement SOA more highly than those that
do not. Also, use SOA in application integration scenarios that involve composite applications that
tie new logic to purchased packages, legacy applications or services offered by other business
units. However, do not discard non-SOA applications in favor of SOA applications just on the
basis of architecture. Discard non-SOA applications only if there are compelling business reasons
why the non-SOA application has become unsatisfactory. Continue to use non-SOA architecture
styles for some new, tactical applications of limited size and complexity, and for minor changes to
installed non-SOA applications. Recognize that there are multiple patterns within SOA (such as
multichannel applications, composite applications, multistep process flows and event-driven
SOA), and each of these has its own best practices for design, deployment and management.
Business Impact: SOA is a durable change in application architecture, like the relational data
model and the graphical user interface. The main benefit of SOA is that it reduces the effort and
time needed to change application systems to support changes in the business. The
implementation of the first SOA application in a business domain will generally be as difficult as,
or more difficult than, building the same application using non-SOA designs. However,
subsequent applications and changes to the initial SOA application are easier, faster and less
expensive because they leverage the SOA infrastructure and previously built services. SOA is an
essential ingredient in strategies that seek to enhance the agility of a company. SOA also
reduces the cost of application integration, especially after enough applications have been
converted or modernized to support an SOA model.
Benefit Rating: Transformational
Market Penetration: Twenty percent to 50% of target audience
Maturity: Early mainstream
Recommended Reading: "Five Principles of SOA in Business and IT"
"SOA: Where Do I Start?"
"Applied SOA: Transforming Fundamental Principles Into Best Practices"
Enterprise Portals
Analysis By: David Gootzit
Definition: A portal is Web software infrastructure that provides access to and interaction with
relevant information assets (such as information/content, applications and business processes),
knowledge assets and human assets by select targeted audiences, delivered in a highly
personalized manner. Enterprise portals may face different audiences, including employees
(business-to-employee, or B2E), customers (business-to-consumer) or business partners (business-to-business). B2E portals are the most relevant type of enterprise portal to the high-performance workplace, but portals serving other audiences also play important roles.
Position and Adoption Speed Justification: Portals continue to be one of the most highly
sought interfaces across Fortune 2000 enterprises. They're fundamental technical components of
the high-performance workplace, whether as a replacement for a first-generation intranet, the
cornerstone of knowledge management initiatives, or a B2E portal that serves as the primary way
for employees to access and interact with back-end systems and repositories.
User Advice: Enterprise portal use has reached mainstream enterprises. B2E portal
deployments are no longer limited to early adopters and technologically aggressive enterprises.
Organizations are evaluating horizontal portal technology to augment their customer and partner-facing Web presences. The personalized delivery of and interaction with relevant applications,
content and business processes can yield many benefits at the enterprise level, primarily focused
on reducing process cycle times and improving the quality of process execution.
Many enterprises that have deployed portals find themselves facing multiple, siloed deployments
using different portal frameworks. These enterprises should investigate appropriate portal
containment and rationalization policies. Enterprise portals are incorporating RIA technologies,
primarily in the form of Ajax, to improve the quality of the user experience they deliver.
Business Impact: The benefits of enterprise portals include controlling the "infoflood," providing
single sign-on, enhancing customer support and enabling tighter alignment with partners. The
benefits of internally facing portals include cost avoidance via employee self-service, but the most
compelling business impact can be improved business agility, velocity and throughput. Externally
facing portals can lead to increased revenue and profitability. Enterprise portals' biggest business
impact is in reducing cycle times and improving the quality of process execution.
Benefit Rating: Transformational
Market Penetration: Twenty percent to 50% of target audience
Maturity: Early mainstream
Sample Vendors: BEA Systems; BroadVision; Fujitsu; IBM; Microsoft; Oracle; SAP; Sun; Tibco
Software; Vignette
Recommended Reading: "Magic Quadrant for Horizontal Portal Products, 2006"
"Hype Cycle for Portal Ecosystems, 2006"
the .NET framework (which includes ASP.NET), Internet Information Server, enterprise services
(such as COM+), Microsoft Message Queuing, BizTalk Server, Office
SharePoint and more. The release of Windows Server 2008 (formerly known as Longhorn) will
likely make some changes and additions to this list, including, most notably, Windows
Communication Foundation. It is already clear that the .NET Framework v.3 adds significant
enrichment to the previous versions of MSAP in the areas of productivity, support of SOA,
innovations in support of modern user experience and multiprotocol integration.
MSAP competes against Java Platform, Enterprise Edition (Java EE) and other enterprise
platform architectures in high-end enterprise projects. MSAP also competes against the PHP
platform, Ruby on Rails, ColdFusion and open-source Java frameworks, such as Spring and
Struts, in lower-end productivity-oriented enterprise projects.
Position and Adoption Speed Justification: The technical quality and quality of service of .NET
are suitable for the majority of enterprise projects. However, Microsoft's business strategy for
mission-critical projects, including support, account management, long-term continuity in the
relationship and its software offerings, lags behind leading enterprise vendors' strategies.
Microsoft does not have a good reputation for seeing its customers through long-term software
architecture endeavors. Its exclusive commitment to Windows as the only OS platform reduces
the appeal of MSAP in large-scale enterprise settings. The impending replacement of Windows
Server 2003 in the 2008 time frame and its anticipated fundamental platform changes continue to
remind high-end enterprise platform users that MSAP has not yet reached dependable maturity
levels, despite its increasing technical strengths.
However, in lower-end enterprise settings, where MSAP dominates with its productivity, ubiquity
of skills and relatively low cost, MSAP remains a strong and growing option for the majority of
projects. Yet, MSAP's market share has not been increasing dramatically lately because of the
growing number of Java and other high-productivity alternatives, including open source.
MSAP is a technically strong platform option. It is used widely for small and midsize software
projects and, on occasion, for very large enterprise projects. Because of business strategy
shortcomings, MSAP is not considered for high-end projects as much as it could technically
handle. MSAP's market share is stable and will likely remain at present levels until the next
generation of MSAP is available (and proven) with the release of Windows Server 2008 and until
Microsoft's business strategy in high-end enterprise settings better reflects the requirements of
that environment.
User Advice: If you choose to use MSAP, then you have to use the Windows OS platform. If that
is acceptable, then consider MSAP for small and midsize software projects without limitations.
For large-scale projects, recognize that the technical quality of the MSAP technology is likely
sufficient for the requirements of the project. However, to achieve the "whole product" experience,
you will need to rely on third-party system integrators.
Microsoft excels at providing strong developer productivity at the cost of long-term flexibility.
Consider MSAP as one element in a larger IT strategy (for example, one that also includes Java EE) or as the principal strategic platform when a Microsoft-centric strategy is acceptable or preferable.
Business Impact: MSAP frees IT organizations from lock-in to a single programming language (such as Java or PHP), but it locks the project into the Windows OS and only the hardware
options that are available for Windows. MSAP can be a lower-cost option for enterprise platform
technologies. However, in larger settings, this assumption is not always true and must be verified
with real numbers. The history of significant discontinuities as Microsoft releases major new versions of its MSAP technologies has also increased the costs of long-term use of MSAP. Above
all, however, users who are happy to use Windows infrastructure for all their development needs
find MSAP and its development environment easy to learn, easy to use and easy to staff, all of which are major factors in reducing the costs and improving the time-to-market of software projects.
Benefit Rating: High
Market Penetration: More than 50% of target audience
Maturity: Early mainstream
Sample Vendors: Microsoft
Recommended Reading: "Magic Quadrant for Enterprise Application Servers, 2Q06"
OOA&D Methodologies
Analysis By: Michael Blechar; David Norton
Definition: Object-oriented analysis and design (OOA&D) methodologies are used by IT
professionals to convert business processes and requirements into software solutions in the form
of objects, components and services.
Position and Adoption Speed Justification: Virtually every major software vendor is advising
its customers to move to OOA&D methods, in conjunction with moving to service-oriented
development targeting J2EE and .NET platforms. Most organizations are adhering to this advice
by converting to OOA&D methods and tools to document requirements and perform logical
design as a front end to Java and C# code construction. The UML, owned by the OMG
consortium, is the de facto standard for OOA&D.
Domain-Specific Language (DSL) methods are emerging as a natural extension of and
complement to UML. DSL is helping to remove the semantic gap between problem and solution
domains by using tailor-made syntax and semantics. However, DSL-based modeling is largely
seen as a Microsoft initiative, part of its Software Factories approach. This has split the
modeling community, and as a result, a period of instability will delay the broad adoption of DSL.
Microsoft believes you shouldn't bother with UML for automated model-based development, but
instead use purpose-built DSL right from the start. Microsoft argues that UML should be used for
documentation and DSL should be used for development.
User Advice: Implement OOA&D methods and tools in conjunction with the development of
service-oriented applications. As a best practice, use BPA/modeling methods and tools at a
conceptual/planning and analysis level to define business processes, workflows and events; then
bridge those models to OOA&D tools to continue IT modeling for the processes to be
implemented in software solutions.
DSLs should be considered as a complementary extension to UML. Most organizations will find
that they need both techniques to tackle the range and complexity of projects, and to balance
modeling return on investment against productivity and quality.
Business Impact: Improved specification languages close the gap among business
requirements, designs and executables.
Benefit Rating: Moderate
Market Penetration: Twenty percent to 50% of target audience
Maturity: Early mainstream
Sample Vendors: Borland; Embarcadero Technologies; IBM; Microsoft; Sparx Systems; Sybase;
Telelogic
Performance Testing
Analysis By: Thomas Murphy
Definition: Performance testing tools are used to simulate production loads to ensure that the
system being tested will perform adequately. This includes stress testing to ensure application
stability in adverse conditions. Leading tools are integrated into test management solutions and
support data retrieval from a wide variety of sources. Although the concept and tools are
generally mature, the shift toward Web 2.0 and SOA will continue to create challenges during the
next two years.
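Conceptually, these tools do what the stripped-down sketch below does, but at far larger scale and with scripting, parameterization, ramp-up profiles and reporting built in; the target URL, user count and request count are hypothetical.

```java
// A minimal sketch of simulating concurrent load against a service and
// recording response times. URL and load parameters are hypothetical.
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

public class SimpleLoadTest {
    public static void main(String[] args) throws InterruptedException {
        final int virtualUsers = 25;
        final int requestsPerUser = 40;
        final AtomicLong totalMillis = new AtomicLong();
        final AtomicLong completed = new AtomicLong();

        ExecutorService pool = Executors.newFixedThreadPool(virtualUsers);
        for (int u = 0; u < virtualUsers; u++) {
            pool.submit(() -> {
                for (int i = 0; i < requestsPerUser; i++) {
                    long start = System.currentTimeMillis();
                    try {
                        HttpURLConnection conn = (HttpURLConnection)
                            new URL("http://localhost:8080/app/checkout").openConnection();
                        conn.getResponseCode();           // issue the request
                        conn.disconnect();
                        totalMillis.addAndGet(System.currentTimeMillis() - start);
                        completed.incrementAndGet();
                    } catch (Exception e) {
                        // A real tool would count and report failures separately.
                    }
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.MINUTES);
        System.out.println("Completed: " + completed.get()
            + ", average response ms: " + (totalMillis.get() / Math.max(1, completed.get())));
    }
}
```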
Position and Adoption Speed Justification: This technology is well-understood and provided
by all major players. Outsourced performance testing has also been used successfully. However,
organizations are still struggling to achieve accurate results. SOA presents a greater potential for less-linear processes, making performance testing more complex than a simple end-to-end script.
User Advice: Define best practices to follow when using these tools, and investigate
virtualization and simulation tools to improve the completeness of tests. Use production insight to
understand true workloads and test in an environment that matches production as closely as
possible.
Business Impact: Proper load testing will ensure that you don't overspend on hardware and software licenses during a deployment, and that application failures are minimized.
Benefit Rating: Moderate
Market Penetration: Twenty percent to 50% of target audience
Maturity: Early mainstream
Sample Vendors: AdventNet; Borland; Compuware; Empirix; HP; IBM; Microsoft; RadView
Software; Worksoft
alternative OSS IDE, NetBeans (www.netbeans.org), is gaining momentum as well, proving that more than one open-source solution can prosper in a market.
Even as some tools mature, others begin to emerge within mainstream efforts. Open-source
testing, modeling and configuration tools have begun to put serious pressure on closed-source
incumbent market leaders and show promise to further expand market influence in the coming
years.
User Advice: Open-source application development tools can provide a compelling balance
among cost, performance and features. However, most (including Eclipse) come with higher
levels of self-support effort than closed-source alternatives. Consider direct adoption of open-source tools when they fit a strong need, but also consider tools that are based on open-source technologies. Finally, consider the need for external service and support for all application development tools when appropriate; just because open source offers the option of self-support does not mean that every application development organization should look toward this strategy exclusively.
Business Impact: As in other markets, open-source development tools will commoditize market
dynamics. These tools will provide a universally accessible level of technology that is available to
developers and technology providers. Vendors that find a way to coexist (even synergize) with
open source will create the strongest long-term market presence. Vendors that choose to
compete directly against these open-source tools will find it increasingly difficult to maintain
market positions.
Benefit Rating: Moderate
Market Penetration: Twenty percent to 50% of target audience
Maturity: Early mainstream
Sample Vendors: ActiveState; Apache Software Foundation; CollabNet; Eclipse Foundation;
Free Software Foundation; Interface21; NetBeans.Org; Zend
Recommended Reading: "Managing Open-Source Service and Support"
"Learn the Basic Principles of Open-Source Software"
User Advice: Use BPA tools to support EA and BPM initiatives, and as a starting point for
architected, model-driven application development and integration projects. BPA tools bring a
powerful, visual comprehension capability to a broad audience, so use them to improve process
understanding, the clarity and quality of requirements, and the credibility of business process
improvement projects.
Business Impact: Understanding complex business processes is a significant challenge.
Although there's a whole discipline around BPM, the assistance of a tool is essential to model
processes to realize cost and time savings. IT organizations may have to contribute substantial
modeling and operational support to enable BPA capabilities for the business.
Benefit Rating: Moderate
Market Penetration: Five percent to 20% of target audience
Maturity: Early mainstream
Sample Vendors: Casewise; iGrafx; IBM; IDS Scheer; Mega; Microsoft; Proforma; Telelogic
Recommended Reading: "Magic Quadrant for Business Process Analysis Tools, 2006"
"Consider Eight Functionality Selection Criteria When Choosing BPA Tools"
"Focusing on a Business Process Analysis Tool Acquisition"
Java EE:
• Is too complex for many software projects (although Java EE 5 is an improvement) and supports only one programming language (Java).
• In its complete rendition, offers too much functionality for many business applications.
• Provides basic support for EDA via MDBs and JMS programming models (see the sketch after this list).
• Is designed well for distributed homogeneous applications, but lacks completeness for full support of heterogeneous service-oriented applications.
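The EDA support noted in the list rests on message-driven beans listening to JMS destinations. A minimal sketch follows; the queue name and handling logic are hypothetical, and activation property names can vary slightly by container.

```java
// A minimal EJB 3 message-driven bean (MDB) reacting to JMS business events.
import javax.ejb.ActivationConfigProperty;
import javax.ejb.MessageDriven;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.TextMessage;

@MessageDriven(activationConfig = {
    @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Queue"),
    @ActivationConfigProperty(propertyName = "destination", propertyValue = "jms/OrderEvents")
})
public class OrderEventListener implements MessageListener {
    public void onMessage(Message message) {
        try {
            if (message instanceof TextMessage) {
                // React to the business event carried in the message.
                System.out.println("Order event received: " + ((TextMessage) message).getText());
            }
        } catch (Exception e) {
            // A real listener would log the failure and rely on container redelivery.
        }
    }
}
```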
Position and Adoption Speed Justification: The Java EE programming model dominates high-end enterprise software projects. Thousands of mainstream enterprises use Java EE as the
platform for their mission-critical applications. Leading application ISVs use Java EE for their new
software development. The platform implementations range from high-cost, high-end versions to
low-cost, mass-market versions. Closed-source and open-source options compete for new
projects.
The Java EE specification changes have become gradual, incremental and infrequent.
Discontinuous innovation is rare and typically addresses isolated specification shortcomings. The
leading products are proven, dependable and interchangeable for most applications. The use
practices and the development methodologies are well-established, and the skilled resources are
broadly available and steadily increasing. The product is clearly near its Plateau of Productivity,
although not quite there yet. The new Java EE 5 has introduced a mild disruption to the installed
base. Introduction of EJB v.3 is technically incremental, but, in effect, discontinuous. It is a major
re-architecture of the Java EE component model and, although the previous versions are required
to be supported by compliant implementations, no new development should invest in anything but
the more-efficient and easier-to-use EJB v.3, which, in turn, requires new skills and design
practices.
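To show why the EJB v.3 model is easier to use yet demands new design habits, here is a minimal sketch of an EJB 3 stateless session bean; the Pricing interface and its pricing logic are hypothetical.

```java
// Pricing.java: the business interface is an ordinary Java interface marked @Remote (or @Local).
import javax.ejb.Remote;

@Remote
public interface Pricing {
    double quote(String productId, int quantity);
}

// PricingBean.java: the bean is a plain Java class plus one annotation; the home
// interfaces and mandatory XML deployment descriptors of earlier EJB versions are gone.
import javax.ejb.Stateless;

@Stateless
public class PricingBean implements Pricing {
    public double quote(String productId, int quantity) {
        // A real bean would apply business rules or call other services.
        return quantity * 9.99;
    }
}
```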
User Advice: Most mainstream business applications do not need the full power of a high-end
Java EE platform. Considering the high degree of compatibility between implementations of Java
EE, use the proven low-cost offerings for less-demanding parts of the application, and invest in
the high-end alternative platforms only for select high-demand parts of the application
environment.
• Consider the open-source option as a viable alternative to the more-established, closed-source implementations.
• Expect price decreases in the low end of Java EE and price increases in the less-standard, high-end and extended Java EE arena.
Appendices
Figure 3. Hype Cycle for Application Development, 2006
[The figure plots scriptless testing, BPM suites, offshore outsourced testing, unit testing, basic Web services, metadata repositories, collaborative tools for the software development life cycle, application quality dashboards, performance testing, business process management, automated testing, the .NET managed code platform, business process analysis, SOA testing and SDLC security methodologies along the Hype Cycle curve (visibility versus time), across the Technology Trigger, Peak of Inflated Expectations, Trough of Disillusionment, Slope of Enlightenment and Plateau of Productivity phases, as of December 2006. Legend (years to mainstream adoption): less than 2 years; 2 to 5 years; 5 to 10 years; obsolete before plateau.]
[The appendix tables define the Hype Cycle phases (Technology Trigger, Peak of Inflated Expectations, Trough of Disillusionment, Slope of Enlightenment, Plateau of Productivity) and the benefit ratings (Transformational, High, Moderate, Low).]
[A further table describes the maturity levels, with status and products/vendors for each: Embryonic (in labs; no products); Emerging (commercialization by vendors, pilots and deployments by industry leaders; first-generation products, high price, much customization); Adolescent (second generation, less customization); Early mainstream (proven technology; vendors, technology and adoption rapidly evolving; third generation, more out of box, methodologies); Mature mainstream (robust technology; not much evolution in vendors or technology); Legacy; Obsolete (rarely used).]
RECOMMENDED READING
"Understanding Gartner's Hype Cycles, 2007"
REGIONAL HEADQUARTERS
Corporate Headquarters
56 Top Gallant Road
Stamford, CT 06902-7700
U.S.A.
+1 203 964 0096
European Headquarters
Tamesis
The Glanty
Egham
Surrey, TW20 9AW
UNITED KINGDOM
+44 1784 431611
Asia/Pacific Headquarters
Gartner Australasia Pty. Ltd.
Level 9, 141 Walker Street
North Sydney
New South Wales 2060
AUSTRALIA
+61 2 9459 4600
Japan Headquarters
Gartner Japan Ltd.
Aobadai Hills, 6F
7-7, Aobadai, 4-chome
Meguro-ku, Tokyo 153-0042
JAPAN
+81 3 3481 3670
Latin America Headquarters
Gartner do Brazil
Av. das Nações Unidas, 12551
9º andar, World Trade Center
04578-903 São Paulo SP
BRAZIL
+55 11 3443 1509