Developing Enterprise Applications
Preface
Acknowledgments
The Technical Validation Group for Developing Enterprise Applications—An
Impurist's View
Tell Us What You Think!
Foreword
5. Distribution Considerations
Data Marshalling
Remote Activation
Structured Data-Passing Techniques
Microsoft Transaction Server
Summary
13. Interoperability
Interoperability Defined
Interoperability Through Data Movement
Interoperability Through Data Sharing
Summary
I wish to thank the staff at Macmillan USA for their support and assistance in putting
this book together. Special thanks to Michelle Newcomb for helping me work around
my other schedules, to Bryan Morgan, Jay Aguilar, Christy Parrish, and Tonya
Simpson for applying their superb editing skills to my manuscripts, and to Tracy
Dunkelberger for giving me the opportunity to write this book. I would also like to
thank Fawcette Technical Publications for giving me my original avenues of writing,
with special thanks to Lee Thé for bringing this sort of content into the forefront of
technical media.
I also wish to thank the current and former management at Compaq Computer
Corporation for allowing me to work on the applications that led to the formation of
the techniques and architecture presented in this book. Thanks go to Marshall Bard
for allowing me to build the applications that we could not buy and supporting these
efforts every step of the way. Special thanks to George Bumgardner for constantly
being a champion (to both management and our customer base) for the applications
we were building. Finally, I would like to thank all the end users who validated the
value of our efforts. Without their constant feedback and push for ongoing added
value, these topics would not have come about.
I want to pay particular thanks to Bill Erzal of MSHOW in Austin, Texas. Bill has been
my partner in development of applications at Compaq for the last several years and
has been the original implementer of many of these techniques on a large scale. I
thank him for being candid with me when I have presented a bad architectural
decision, for biting his lip and pushing on when he was unsure of a design, and for
saying "this looks good" when he knew one was right. In addition, many of the user
interface design techniques covered in the book have been lifted directly from his
work, which has been the result of an aggregation of experiences from his broad
career in application development. I thank him for allowing me to include them with
this book.
The Technical Validation Group for Developing Enterprise Applications—An Impurist's View
Tell Us What You Think!
As the reader of this book, you are our most important critic and commentator. We
value your opinion and want to know what we're doing right, what we could do
better, what areas you'd like to see us publish in, and any other words of wisdom
you're willing to pass our way.
As an associate publisher for Que, I welcome your comments. You can fax, email, or
write me directly to let me know what you did or didn't like about this book—as well
as what we can do to make our books stronger.
Please note that I cannot help you with technical problems related to the topic of this
book, and that due to the high volume of mail I receive, I might not be able to reply
to every message.
When you write, please be sure to include this book's title and author as well as your
name and phone or fax number. I will carefully review your comments and share
them with the author and editors who worked on the book.
Fax: 317-581-4666
Email: [email protected]
Mail: Associate Publisher, Que, 201 West 103rd Street, Indianapolis, IN 46290 USA
Foreword
With the advent of the Internet and the new user interface paradigm
and application delivery model it provides, we must rethink the
traditional development model. Many Internet models are patched
onto existing applications, sometimes effectively, but many times not.
The architecture in this book makes the Internet an integral part of
the application. At the same time, it does not force an Internet basis where one is unwarranted.
The code samples in this book have been developed using Microsoft
Visual Basic 6, Enterprise Edition, Service Pack 3. The SQL Server schemas were developed in version 6.5 but should also work on versions 6.x and 7.0.
The source code listings that appear in the book are also available at
http://www.mcp.com.
Part I: An Overview of Tools and Technologies
You might be wondering about the reason for the word Impurist in the
title of this book. Before delving into an introductory definition, I
would like to state that the application development industry is
undergoing a significant transformation in how it develops
applications. New tools and technologies are emerging at a rapid pace.
With this emergence come experts who profess techniques for employing these tools and technologies to their best use in solving the problems at hand. The tool vendors themselves provide guidance on how to use their products to solve a broad range of problems. It
follows, then, that these techniques begin to coalesce within the
industry and form the conventional wisdom on how we must use these
tools and technologies. On the other end of the spectrum, we have the
theoretical view on how to perform application development, and we
chastise the tool vendors for not following a pure approach.
Somewhere in between are the upper-level managers screaming the
"on time, under budget" mantra, hoping to keep the development
team focused on the tasks at hand. The effect of this mantra is that
conventional wisdom typically takes on a "quick and dirty" component
that runs counter to what we would otherwise do as we develop
applications.
Enterprise Development
With these concepts in mind, we can start defining what the term
enterprise development embodies. At its most succinct level,
enterprise development means the capability to support multiple sites,
geographies, organizations, and users with their informational needs.
This support comes by way of focused applications embedded within
the business processes needed to run core activities of an
organization. The number of users supported by such an application
can range into the hundreds or even thousands. If one then considers the capabilities afforded by the Internet and Dial-Up Networking, enterprise development also means the capability to support the mobility of the user base.
This would not only indicate a high level of application availability and
accessibility to the users, but also ease of administration and
maintenance for the developers and support teams over such diverse
connection modes.
Because users with differing roles exist across the user base, no single
user typically exercises the entire breadth of functionality provided by
an enterprise application. The application is multifaceted, although
that can mean different things to different people. There can be many
human interfaces, both of the input and output variety. There are
information generators as well as consumers. In most cases, the
number of consumers far outweighs the number of generators
because it is this dispersal of information and knowledge that drives
such applications.
Thus, we have a series of attributes that help define what an
enterprise application really entails. To summarize, an enterprise
application has the following features:
Commercial Frameworks
Some commercial frameworks extend beyond the infrastructure side and actually
begin to layer on some of the business process functionality. Examples include
IBM's San Francisco Project, which attempts to define a core set of frameworks
across several business domains. For some time, Oracle has provided business
frameworks for accounting, manufacturing, and other popular problem domains.
Application Servers
Custom Frameworks
With the emergence of enterprise development tools and components, it is not too
difficult to develop a framework suited to a specific business process or organization.
Microsoft has provided a suite of server products, development tools, and
distribution technologies to enable the development of a custom framework for
enterprise applications. The official moniker for this is the Microsoft Distributed
interNet Applications (Microsoft DNA) architecture. Although DNA is Microsoft's
attempt to fully define the tools, technologies, and implementation details needed
to build such applications, it is not itself a framework.
Microsoft DNA
Whether you are a devout promoter, a casual user, or merely an observer, Microsoft
is a major player in the enterprise development market. No other set of tools and
technologies enables you to have a dynamic, database-driven Web site up and
running in a short amount of time. No other set of tools and technologies enables
you to build a robust, multi-tier application in a short amount of time. No other
company provides the set of online support and technical information that Microsoft
does. Although Microsoft has provided the tools, guidelines, and sample
applications, this does not mean it is the definitive source on how to build our
applications. It is merely a component of the conventional wisdom mix that we
mentioned earlier.
Although Microsoft can give us the tools and a basic model to follow through DNA,
they have to do so in a way that is applicable to their entire customer base,
which means a lowest common denominator approach. In many cases, their efforts
at simplification work adversely to your application's requirements, potentially
reducing performance to a much lower level than the underlying technology is
capable of delivering. Microsoft's prime directive is to provide the horizontal
infrastructure for application development, whereas your job as an enterprise
application developer is to use these tools to provide the vertical applications that
your business needs. That they help you a bit by defining DNA is a bonus. We should
not hold Microsoft to blame for this approach because they do provide a viable
solution to cover a wide range of applications. It is only as we peel back the layers
that we can see room for improvement.
When evaluating a framework, consider where its effort savings lie. If the savings reside completely in the 80% of an application that is template functionality, the framework probably does not offer significant value. If, on the other hand, it covers the 20% that is value-added functionality, it is probably worth a look. The former
category is indicative of horizontal frameworks, whereas the latter is where
vertical-based frameworks reside. We should note that good vertical frameworks
typically implement up to 60% of an application's code, as part of the framework.
We will take the approach of building our own framework for the purpose of this
book. The framework topics presented in the rest of this book use several
fundamental patterns that have emerged over the course of successful enterprise
application development. These patterns are, in turn, implemented using a specific
set of development tools and deployment technologies loosely based on Microsoft
DNA. It is important to note that the framework topics presented in this book are not
simply a rehash of DNA. There are many critical areas where we diverge from the
Microsoft model for various reasons. Although the remainder of this book is devoted
to presenting various framework design and implementation topics, it does not
necessarily represent all the implementation options. Please be sure to use the
topical information as a guideline to foster the appropriate design for your situation.
This book targets those readers interested in learning about the concepts of building
a distributed enterprise framework using industry-standard tools and technologies.
Specifically, this book covers the use of Visual Basic 6.0 Enterprise Edition,
Transaction Server 2.0, Internet Information Server 4.0, and SQL Server 6.5 as the
core components of an enterprise framework. It will also present pragmatic
examples in the form of sample applications and accompanying source code, to
further strengthen the topics of discussion.
Target Audience
This book targets the software architect, developer, and manager who wants to
understand both the capabilities and limitations of the Microsoft tools and
technologies available to them within the realm of enterprise applications. Readers
of this book should also want to understand how such tools and technologies could
be used to provide business-critical functionality in the form of world-class
applications to their organizational customer base. Readers of this book need to
have an intermediate to advanced understanding of the tools and technologies
outlined in the following sections. These readers will learn how to take their existing
skills in these areas and apply them to building enterprise applications.
Windows NT Networking
SQL Server
Although there are several server options here, SQL Server 6.x or 7.0 will meet
these requirements handily. In addition, SQL Server offers a graphical user
interface in the form of the SQL Enterprise Manager, eliminating the need to use a
query console window to perform administrative and developmental tasks. SQL
Server also exposes the underpinnings of the Enterprise Manager in the form of SQL-DMO (SQL Distributed Management Objects). This programming interface can be invaluable when it comes to automating complex administrative tasks on the server.
This might include activities such as setting up a new server or simply running a
weekly re-index and recompile of the views and stored procedures that need to
follow a certain processing order.
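As a brief illustration, the following Visual Basic fragment is a minimal sketch of the kind of automation SQL-DMO makes possible. It assumes a project reference to the SQLDMO type library, and the server name and credentials are placeholders rather than values from the book's sample code.

Public Sub ListDatabases()
    Dim srv As SQLDMO.SQLServer
    Dim db As SQLDMO.Database

    Set srv = New SQLDMO.SQLServer
    srv.Connect "MYSERVER", "sa", ""       ' connect to the target server

    For Each db In srv.Databases           ' walk the databases on the server
        Debug.Print db.Name
    Next

    srv.DisConnect
End Sub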
COM/DCOM
We will cover COM (Component Object Model) and DCOM (Distributed COM) in
sufficient detail in Chapter 3. Still, we need some overview here before we can
proceed with the remaining tools and technologies that build upon COM.
The COM architecture is the foundation for Microsoft's OLE (Object Linking and
Embedding) and ActiveX technologies. COM is both a formal specification and a
binary implementation. Technically, any platform can implement COM, not just
Win32. The reason that it is so ubiquitous on the Win32 platform is that Microsoft
has provided the reference (and hence the standard) implementation of the
specification. On the Win32 platform specifically, COM relies on Microsoft's Dynamic
Link Library (DLL) mechanism. The DLL architecture allows for a high level of
runtime modularity (as opposed to source-code level), allowing binary modules to
load in and out of a process address space at runtime. Our framework relies heavily on this dynamic nature of COM to support long-term flexibility over the life of the application.
Any programming language that can access the Win32 COM API and implement a
virtual function table can generate a COM class. Visual Basic, which we will discuss
shortly, is such a language, allowing a developer to build these types of classes
while simultaneously hiding the gory implementation details.
The development of the user interface is one of the critical areas of overall
application development. It does not matter how elegant or robust your architecture
is underneath if it is difficult for the user because of a poorly designed interface.
After the development team clearly understands the business process flow for a
particular area of the application, it must be able to easily transform that into the
user interface. As such, the developer needs a capable development tool at his or
her disposal.
Visual Basic 6.0 (VB6) is just such a tool, but its capabilities extend far beyond form
design. One particularly nice feature of VB6 is that it enables the developer to build
custom ActiveX controls that encapsulate core business process flows into a
component that can run in a variety of locations. VB6 also enables the developer to
create ActiveX Dynamic Link Libraries (DLLs) that are also usable in a variety of
locations. Turning things around, VB6 is not only able to create these ActiveX
components, but also able to host many of those created by other development
tools.
VB development extends beyond simply the user interface and client machine,
allowing us to develop modules that run on a server as part of a distributed
application. We will discuss distribution in much more detail in Chapter 5.
Concerning the ease of development, VB6 has all sorts of goodies within the
Integrated Development Environment (IDE). These features include IntelliSense,
which can help the developer finish a variable reference with just the first few letters
being typed, or show the calling convention for a native or user-defined function or
method. VB6 also has a feature known as the Class Builder Utility, a simple class
modeler and code generator that can save significant time in generating
well-formed class modules. The IDE also performs automatic code correction, color-codes keywords and comment blocks, and supports block indenting. Although these
features might seem minor, developers will spend the majority of their time during
the coding phase within the IDE; therefore, every little improvement in productivity
adds up over the life of the project.
The preferred browser in this architecture is Internet Explorer 4/5 (IE4/5), based
on its DHTML and ActiveX control hosting capabilities. In many corporate settings,
IE4/5 has been adopted as the standard browser for a multitude of reasons.
The architecture we will present in Part II uses browser interfaces to support the
basic reporting needs, or output side of the application. Using standard HTTP form
processing techniques, the browser will work in conjunction with the IIS server,
using ASP to support simple data management. VB-based client applications, or
browser-hosted ActiveX controls, implement complex data management that is too
difficult to implement using the HTTP form approach.
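As a minimal sketch of this HTTP form approach, the following ASP page both renders a form and processes its post. The field, DSN, and page names are placeholders rather than code from the sample application, and real code would validate the submitted value before building the SQL statement.

<%@ Language=VBScript %>
<%
  If Request.Form("AccountName") <> "" Then
      Dim cn
      Set cn = Server.CreateObject("ADODB.Connection")
      cn.Open "DSN=Enterprise"
      cn.Execute "INSERT INTO Account (AccountName) VALUES ('" & _
                 Request.Form("AccountName") & "')"
      cn.Close
  End If
%>
<HTML>
  <BODY>
    <FORM METHOD="POST" ACTION="account.asp">
      Account name: <INPUT TYPE="TEXT" NAME="AccountName">
      <INPUT TYPE="SUBMIT" VALUE="Save">
    </FORM>
  </BODY>
</HTML>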
Microsoft Transaction Server (MTS) provides several functions that might not be
apparent from its name. First, it is a DCOM surrogate, improving the management
and administration of these components on a server. Second, it is a transaction
coordinator, assisting in performing disparate database transactions as a group and
rolling them back as a group if any part fails. Third, MTS is a resource-pooling
manager, allowing multiple logical objects to run in the context of a pool of physical
ones. It also provides database connection pooling for the DCOM libraries to
minimize the performance issues associated with login and connection.
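To give a feel for the transaction coordination role, here is a minimal sketch of a method in an MTS-hosted ActiveX DLL that votes on the transaction outcome. It assumes a reference to the Microsoft Transaction Server Type Library, and the method and parameter names are illustrative.

Public Sub PostLedgerEntry(ByVal AccountID As Long, ByVal Amount As Currency)
    Dim ctx As ObjectContext
    Set ctx = GetObjectContext()

    On Error GoTo ErrHandler

    ' ... perform the database work for this entry here ...

    ctx.SetComplete             ' vote to commit the enclosing transaction
    Exit Sub

ErrHandler:
    ctx.SetAbort                ' vote to roll the transaction back
    Err.Raise Err.Number, , Err.Description
End Sub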
Internet Information Server 4.0/5.0
We choose Internet Information Server (IIS) as our Web server for several reasons.
First, it is the foundation for Active Server Pages (ASP), a VBScript-based
environment for the dynamic generation of browser-agnostic HTML pages. In
addition, IIS and MTS integrate tightly when the two are running on the same
physical machine, bypassing some of the normal activation processes to improve
overall performance.
We use Visual InterDev as our primary ASP development tool. It has a powerful IDE
much like Visual Basic, allowing us to develop our ASP pages more rapidly than we
could in a conventional text editor (which up until release 6.0 was the primary path).
In addition, Visual InterDev provides debug facilities that we can use to step
through some server-side pages during generation or through the completed page
on the client side, which might also have some embedded scripting code.
OLEDB/ADO
The Extensible Markup Language (XML) is currently one of the hottest topics in the
enterprise application community. Similar to HTML, XML is a textual format for
representing structured information. The difference between HTML and XML is that
the former represents format and the latter represents data.
What is powerful about MSXML is its COM basis that gives it the capability to run
within Visual Basic, ASP, and IE. Even more powerful is that data formatted as XML
in a Windows-based COM environment is readable by a UNIX-based Java XML
reader in another environment.
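A minimal sketch of driving the MSXML component from Visual Basic follows. It assumes a project reference to the Microsoft XML type library, and the element names are illustrative.

Public Sub ShowBalance()
    Dim doc As MSXML.DOMDocument
    Set doc = New MSXML.DOMDocument

    doc.async = False
    doc.loadXML "<Account><Number>1001</Number><Balance>250.00</Balance></Account>"

    ' Pull a single value back out of the parsed document
    Debug.Print doc.documentElement.selectSingleNode("Balance").Text
End Sub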
CDONTS
The final technology that we will use is that of CDONTS, or Collaborative Data
Objects for NT Server. CDONTS provides many features, but the one of interest to
us is its SMTP (Simple Mail Transfer Protocol) capability that bypasses MAPI (Mail
API). The reason that this is important is that MAPI requires the use of a mail service,
such as Exchange, that adds additional overhead in administration and performance.
Although there is a similar CDO (non-NT server) version, it lacks this SMTP-based
messaging engine that we need. Fortunately, we can run CDONTS on our NT
Workstation development machine. In production mode, we can use CDONTS with
both IIS and MTS to provide server-side mail processing for collaboration and
notification activities.
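A minimal sketch of server-side notification through CDONTS follows, written as it might appear in an ASP page or MTS component. The addresses, subject, and body text are placeholders.

Dim objMail
Set objMail = Server.CreateObject("CDONTS.NewMail")

objMail.From = "app@yourcompany.com"
objMail.To = "user@yourcompany.com"
objMail.Subject = "Weekly task summary"
objMail.Body = "The weekly task run completed successfully."
objMail.Send                      ' hands the message to the local SMTP service

Set objMail = Nothing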
The remainder of Part I of this book first presents a quick overview of elements that
will be used throughout the rest of the book. This overview is purposefully just
that—an overview. The goal is to provide a quick familiarization of what we are
using and why we are using it. Many books are available that go into in-depth
coverage of these topics. This overview will then be followed by some fundamental
design topics concerning object orientation, components, databases, distribution,
and interface-based development.
Although the reader does not technically need to be a master of each of these areas
to understand the framework topics in this book, he or she will need to be
comfortable with each of the technologies. Along the way, hints and warnings
provide helpful implementation techniques that have come about after many long
hours and late nights of scouring through Microsoft documentation to find the
solution to some particular quirk.
Part II discusses actual enterprise components built upon the concepts outlined in
Part I. This book presents each framework component by first discussing the
architectural reasoning behind the component and the types of trade-off decisions
that were made during its development. The book then presents the component
design in detail accompanied by the full source code required for its proper
implementation.
Chapter Layout
Chapter 9, "A Two-Part, Distributed Business Object," discusses the splitting of the
traditional business object into several parts that run on multiple tiers.
Chapter 10, "Adding an ActiveX Control to the Framework," discusses the
development of the user interface using ActiveX control technology, allowing
front-end deployment in a variety of hosts.
Chapter 11, "A Distributed Reporting Engine," discusses how to leverage ASP as
your primary reporting engine. It is followed by a discussion on how to implement
more complex reporting through an MTS-based reporting component.
Chapter 12, "Taking the Enterprise Application to the Net," discusses how to make
your application functionality available to a larger client base through the corporate
intranet.
Chapter 13, "Interoperability," discusses how to set up links to other systems, both
internal and external to the corporation. It presents several models to deal with the
most common needs that arise in the corporate setting.
Chapter 14, "A Task Management Component," presents the issues surrounding
task automation, message queuing, and cross-system collaboration.
Chapter 15, "Concluding Remarks," presents several topics that have been left
uncovered. These include security and scalability.
Chapter 2. Layers and Tiers
Layers
Modern applications partition the system into at least three distinct logical layers of
code known as user, business, and data. The Microsoft DNA architecture names
these layers as presentation, application, and data, respectively. A fourth layer,
named system, provides access to the services provided by the network and
platform operating systems. This system layer should not be confused with
Microsoft's workflow layer because the two are different in nature. For the purposes
of the framework presented in this book, Microsoft's view of workflow logic is
embedded in the user and business layers as part of the distribution mechanism.
This partitioning of functionality across layers not only allows the distribution of
processing across multiple machines, but also creates a high level of modularity and
maintainability in the code base. Figure 2.1 shows an overview of these layers and
the interactions between them.
User services provide the presentational and navigational aspects of the application.
The user services layer is the part of the system the user sees and interacts with
regularly. In most cases, the user considers the user interface to be the application
because they are unaware that any other parts of the system exist. We can define
a user interface within the context of an application screen that contains complex
interactive controls. These might include tables, drop-down lists, tree views, list
views, button bars, tab strips, and so on. Similarly, we can define a page with simple
form elements rendered in a Web browser as a simple user interface. In addition, we
can also define a user interface in terms of a Web page that hosts an ActiveX control
or Java applet.
Although the user interface is what the end user sees, the business layer is what
defines the application in terms of what it does from an information management
perspective. It is logical to assume that all data input and output comes from the
user layer; however, this is not the case. It is convenient to first define the business
layer in these terms, but it will become clear in the development of the framework
that inputs and outputs can consist of interfaces to other applications as well
as to end users. The modularity of the layered approach drives the ability to support
both types of interfaces with a common business layer.
We often refer to the business layer as the heart of the system, and for good reason.
Besides being the location where we implement all business logic, it is also the
center point of a multilayer system. On one side of this layer stack, it interfaces with
the user layer, providing the information needed to populate the interface and the
validation logic needed to ensure proper data entry by the user. On the other side of
the layer stack, it interfaces with the data layer that in turn interacts with the data
storage and retrieval subsystems. The business layer can also communicate with
other business layers either within or external to the application.
With respect to the user interface, the business layer provides both inputs and
outputs. On the input side, the business layer handles the validation logic needed to
ensure appropriate information entry by the user. If we take an example from an
accounting application, a simple field-level validation might be necessary to ensure
that the posting date on a ledger transaction constitutes a valid date. Complex logic,
on the other hand, validates across an information set, but we still handle this on the
business layer. An example taken from the same accounting application might be to
make sure a check's posting date occurs before its clearing date. The business layer
also defines the output aspects of the system. This might be in the form of the
content that makes up human-readable reports or in data feeds to other systems.
This could go beyond just a simple dump from a database system, where a standard
query against a server provides the data, to a system that performs transformation
of the data from one or more data storage systems.
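To make the accounting example concrete, here is a minimal sketch of how a business-layer class might carry both the field-level and the cross-field rules. The function and parameter names are illustrative, not taken from the book's sample code.

Public Function ValidateCheck(ByVal PostingDate As Variant, _
                              ByVal ClearingDate As Variant) As Boolean
    ValidateCheck = False

    ' Field-level rule: the posting date must be a valid date
    If Not IsDate(PostingDate) Then Exit Function

    ' Cross-field rule: a check must post before it clears
    If IsDate(ClearingDate) Then
        If CDate(PostingDate) > CDate(ClearingDate) Then Exit Function
    End If

    ValidateCheck = True
End Function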
When we start the definition process for a new application, we must focus on how to
meet both the high-level business needs and the needs of the end user. Although
common sense might seem to indicate a user-layer focus, we should really look to the business layer to drive our design efforts because it models the real world, which the users understand best. As we will see in the next chapter, we can model the real world using
object-oriented techniques, creating business-layer objects that drive the
application. By using this approach, we can avoid an initial focus on the user and
data layers that can sidetrack our efforts. Instead, we will implement a robust
framework that will allow these outer layers to become a natural extension of our
inner business layer.
The data services layer performs all interactions with the data storage device, most
often a Relational Database Management System (RDBMS) server. This layer is
responsible for providing the rudimentary CRUD (Create, Retrieve, Update, and
Delete) functionality on behalf of the system. It can also enforce business-entity
relationship rules as part of its administrative duty. Typically, it not only involves the
database server itself, but also the underlying data access methodology, such as
Active Data Objects (ADO), and the formal database language, such as Structured
Query Language (SQL).
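The following fragment is a minimal sketch of the Retrieve portion of CRUD as a data-layer function using ADO against SQL Server. It assumes a project reference to the Microsoft ActiveX Data Objects library, and the connection string, table, and column names are placeholders.

Public Function RetrieveAccount(ByVal AccountID As Long) As ADODB.Recordset
    Dim cn As ADODB.Connection
    Dim rs As ADODB.Recordset

    Set cn = New ADODB.Connection
    cn.Open "Provider=SQLOLEDB;Data Source=MYSERVER;" & _
            "Initial Catalog=Enterprise;User ID=sa;Password="

    Set rs = New ADODB.Recordset
    rs.CursorLocation = adUseClient          ' client-side cursor so we can disconnect
    rs.Open "SELECT * FROM Account WHERE AccountID = " & AccountID, _
            cn, adOpenStatic, adLockBatchOptimistic

    Set rs.ActiveConnection = Nothing        ' hand back a disconnected recordset
    cn.Close

    Set RetrieveAccount = rs
End Function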
From an interaction standpoint, the data layer should deal directly only with the business layer. If we look around, we will see many systems deployed wherein the
developer has directly coupled the user and data layers, effectively eliminating the
business layer. Data-bound controls follow just this approach. Although this is a
viable solution, it is inflexible in terms of extensions to the business processes
because it does not implement them to begin with. If we do not implement a solid
business process within our application, we have effectively created a dumb, fragile,
and data-centric solution to a business problem.
TIP
Do not use data-bound controls in enterprise applications. They offer no flexibility for extension, provide minimal capability for business process implementation, and represent a poor design.
Although often considered synonymous, tiers differ from layers in that they
represent the physical hardware employed by the system. It is the number of such
pieces of hardware that give a particular deployment strategy its tiering
nomenclature. Common sense says that increasing the number of pieces of
hardware has the effect of distributing the processing load, thereby increasing
application performance. Although this is the design intent of a tiered architecture,
simply adding hardware into the application does not necessarily improve the
overall application. We must be careful to add hardware in an appropriate manner
so that we achieve the desired effect.
Single-Tiered Architecture
2-Tiered Architecture
3-Tiered Architecture
N-Tiered Architecture
An N-tiered architecture starts with a 3-tiered approach but allows the addition of
new business or data layers running on additional hardware. This might be typical of
applications that interface with other applications, but can simply be an application
with multiple business, data, or user tiers. Figure 2.5 shows a realistic, albeit
contrived, complex N-tiered architecture. Figure 2.6 shows a similar, complex
N-tiered architecture specifically using our selected tools and technologies.
Figure 2.5. A complex, generic N-tiered architecture.
Figure 2.6. The N-tiered architecture using our selected tools and technologies.
From Figure 2.5, we can see that for each tier, there can be a system layer to
support the primary layer implemented by the tier. It is important to note that a
middle tier might only be home to a system layer. This arises when we implement
the functionality needed to drive administration tasks, workflow routing, and
collaboration activities that take place as part of the application's daily chores.
NOTE
The cost and complexity of building either 3- or N-tier applications can be much
higher than that for a standard 2-tier model. This is especially true when going
through the first development project using such a model because the learning
curve is steep and the number of decision-making points is high. With these issues
in mind, you should plan to use such architectures only in applications with large
user bases, such as those found in medium to large corporate environments. If you
do decide to tackle a 3- or N-tier model in a smaller-scale application, start with
some of the framework components presented in Part II of this book. This will help
make the transition easier, whether the goal is a proof of concept or simply a plan
for the future.
As we have shown, layers and tiers are different; yet they relate to each other in
that we have to decide where to physically put our functionality. Depending on how
we perform this mapping, we can define the level of client-side functionality
required by the application. This is important when it comes to the hardware cost
goals of the application, which the development team often has little or no control
over.
Thick Client
Thin Client
When a thin client approach is used, only the user layer resides on the client
machine. The business and data layers reside elsewhere, leading us to a 3- or
N-tiered model. In this case, we need a machine with only minimal capabilities. In
this approach, we are limited to a user interface with little complexity because a
simple Web browser constitutes the application. Because of the lowered capabilities,
we use thin clients primarily for data consumption or only light-duty data
generation.
Typically in a thin client approach, we are providing a pure layer to tier mapping.
The user layer maps completely to the client, the business layer maps to a middle
tier (such as MTS), and the data layer maps to a back-end database server. Because
of this mapping approach, all user input must cross from the client to the middle tier
for simple activities, such as data validation, a process known as server
round-tripping. In input-intensive applications, this can be frustrating for the end
user because there is a performance penalty.
Plump Client
A plump client is somewhere in between the thin and thick varieties. Here we use a
3- or N-tiered model as well. In this mode, the user layer and a portion of the
business layer reside on the client side. The remainder of the business layer resides
on a middle tier. This solution represents a best-of-both-worlds scenario in which
we can isolate the business process logic on a middle tier server, yet still enable a
complex user interface. In this mode, we need a client machine that is somewhere
in between the requirements of the thin and thick client modes as well. Although we
can use a Web browser in this mode as well, it usually hosts a complex user layer
object, such as an ActiveX control or a Java applet. Because of the balance afforded
by a plump client, we use it primarily for heavy-duty data generation activities.
In a plump client mode, we modify the pure mapping described in the thin client
approach by making the lines between the tiers a bit fuzzier. In this mode, the client
tier has the user layer and a user-centric portion of the business layer. The middle
tier has the business layer and a data-centric portion of the data layer. The data tier
has the data layer and a business-centric portion of the business layer. While our
tongue is untwisting after reading that series of sentences, we should look at Figure
2.7.
Beyond mapping layers to tiers, we also need to consider how the two relate to the
development team. Although it is important to have experts in each of the user,
business, and data layer categories, it is also important to maintain a breadth of
knowledge across the team. Any developer should be able to go into any layer of the
application and perform work on the code base. The reason for this is that the
layered approach means a certain level of cooperation is required between these
functional areas. As such, it is important for one layer to provide the functionality
needed by its attached layer, meaning, for example, that the user layer expert must
understand the requirements of the business layer expert, and vice-versa. Such a
full understanding of all layers by all developers will make the overall development
and maintenance process more robust.
Summary
Object Orientation
Object orientation is not an entirely new concept, but it is becoming more prevalent
in the underpinnings of modern applications. It has just been within the last ten
years or so that object-orientation migrated from academia and experimentation to
a true, commercial-grade development methodology. Since then,
non–object-oriented development has moved into the minority position.
NOTE
What is most striking about object-orientation is that it follows the true sense of the
business world. In this world, anything that a business deals with, whether it is a
widget that a company produces or a financial account that a bank maintains on
behalf of a client, is definable in software terms through a class model. This class
model defines the information pertinent to the business entity, along with the logic
that operates on that information. Additionally, a class definition can contain
references to one or more external classes through association or ownership
relationships. In the case of a financial account, informational elements might
include the account number, the names of the account owners, the current balance,
the type of account, and so on. We call these items properties (also known as
attributes) of the class. Similarly, the class can define a function to add a new
transaction to the account or modify/delete an existing transaction. We call these
items methods (also known as operations) of the class. What differentiates a class
from an object is that a class is a definition, whereas an object is an instance of that
definition.
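Expressed in Visual Basic, a minimal sketch of such an account class might look like the following class module. The member names simply follow the description above and are illustrative.

Option Explicit

' Properties of the account
Public AccountNumber As String
Public OwnerName As String
Public AccountType As String
Public CurrentBalance As Currency

' Methods that operate on the account's transactions
Public Sub AddTransaction(ByVal TransactionDate As Date, ByVal Amount As Currency)
    ' add the transaction and update CurrentBalance
End Sub

Public Sub DeleteTransaction(ByVal TransactionID As Long)
    ' remove the transaction and update CurrentBalance
End Sub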
We can also graphically represent our objects using a class diagram. There are
many different views on how to represent these diagrams, but the most pervasive
forms are the Yourdon/Coad and the Rumbaugh methods, named after the
individuals who developed them. Many drawing programs have templates
predefined for these models, whereas many modeling tools can support some or all
of the more popular styles. You can also create your own object modeling technique
using simple lines and boxes. We have chosen to use the Rumbaugh model in this
book because of the popularity of the Unified Modeling Language (UML), of which it
is a component. It also happens to be the model used by the Microsoft Visual
Modeler that is bundled with Visual Studio 6.0 Enterprise Edition. Figure 3.1 shows
an example of a graphical depiction for a financial account class.
Figure 3.1. The CAccount class using the UML graphical
model.
TIP
As you can see, we modeled our real-world Account business entity in terms of
properties and methods. We call this modeling process abstraction, which forms the
basis for object orientation. With this in mind, we can further our discussion of other
features of object-orientation.
Encapsulation
What should be apparent from Figure 3.1 is that we have bundled everything about
the class into a nice, neat package. We formally define everything that the outside
world needs to know about this class in terms of these properties and methods. We
call the public properties and methods of a class its interface, which represents the
concept of encapsulation. In the real-world account example, a customer does not
necessarily need to know how the current balance is calculated based on
transactions that are added, modified, or deleted. They just need to know their
current balance. Similarly, users of the account class do not need to know how the
class calculates the current balance either—just that the class properly handles it
when the transaction processing methods are called. Thus, we can say that
encapsulation has the effect of information hiding and the definition of narrow
interfaces into the class. This concept is critical to the development of robust,
maintainable applications.
A class might implement internal methods and properties but choose not to expose
them to the outside world through its interface. Because of this, we are free to
change the internal workings of these private items without affecting how the
outside world uses our class through its public interface. Figure 3.2 shows how a
public method calls a private method to perform a calculation that updates the value
of a public property.
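In Visual Basic terms, the arrangement in Figure 3.2 might be sketched as follows; the member names are illustrative.

Private mCurrentBalance As Currency

Public Property Get CurrentBalance() As Currency
    CurrentBalance = mCurrentBalance
End Property

Public Sub AddTransaction(ByVal Amount As Currency)
    RecalculateBalance Amount          ' public method delegates to the private one
End Sub

Private Sub RecalculateBalance(ByVal Amount As Currency)
    mCurrentBalance = mCurrentBalance + Amount
End Sub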
Suppose, for the sake of argument, we were to expose the internal function (also
known as a private method) that calculates current balances. We would do this by
defining it to be public versus private. An application using this class, for whatever
reason, might deem it acceptable to call this internal method directly and does so in
a multitude of places. Now suppose that we must change the calling convention of
this method by adding a new parameter to the parameter list, such that we have to
modify every piece of software that references this internal method. Assume also
that the public transaction methods would not have had to change, only the
formerly private method. We have effectively forced ourselves into a potentially
large code rewrite, debug, test, and deployment cycle that we could have otherwise
handled simply within the object's private methods while leaving the public interface
intact. We will see, in the COM model discussion to follow, that we can easily modify
only the class and redeploy it across the user base with a minimum of effort. In the
corporate world, this translates into time and money.
Because the term interface might be a difficult concept to grasp at first, it might be
easier to think of as an electrical socket. In the 220-volt parts of the world, there are
three-pronged sockets with one of the prongs oriented 90 degrees out from the
other two. In the 110-volt parts of the world, there are two- and three-pronged
plugs with a different geometry such that you cannot plug a 110-volt oriented plug
into a 220-volt socket and vice-versa. Imagine if the 110-volt world suddenly began
using 220-volt–style plugs and sockets (assuming voltage will not change). We
would have to replace the plug on every electrical device along with all the wall
sockets. It would be a huge mess. The same goes for properties and methods. After
we define the interfaces of a class and write applications against them, making
changes becomes difficult and costly.
TIP
Encapsulation also has the effect of protecting the integrity of objects, which are
instantiated using the class definition. We have already touched on this when we
stated that a class is responsible for its own inner workings. Outsiders cannot
meddle in its internal affairs. Similarly, property definitions can be implemented
such that the class rejects invalid property states during the setting process. For
example, a date-based property could reject a date literal, such as "June 31, 1997,"
because it does not constitute a date on any calendar. Again, because the validation
logic is contained within the class definition itself, modifying it to meet changing
business needs occurs in a single place rather than throughout the application base.
This aspect of encapsulation is important, especially for enterprise applications,
when we discuss the implementation of validation logic in Chapter 9, "A Two-Part,
Distributed Business Object." It further adds to our ability to develop robust,
maintainable, and extensible applications.
NOTE
YieldToMaturity: Calculates the interest rate that equates the present value of the coupon payments over the life of the bond to its value today. Used in the secondary bond market.

BondPrice: Calculates the bond price as the sum of the present values of all the payments for the bond.

CurrentYield: Calculates the current yield as an approximation of the yield to maturity using a simplified formula. Note: Available only on CouponBond types.

DiscountYield: Calculates the discount yield based on the percentage gain on the face value of a bond and the remaining days to maturity.
Each method uses one or more of the public property values to perform the
calculation. Some methods require additional information in the form of their parameter lists, as can be seen in Figure 3.3. As you might guess, the BondType
property helps each method determine how to perform the calculation. A sample
Visual Basic implementation of the BondPrice method might be as follows in Listing
3.1.
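Listing 3.1 itself is not reproduced in this text, but the following sketch gives the flavor of such an implementation. The property names (BondType, FaceValue, CouponPayment, MarketRate, YearsToMaturity) and the named type values are assumptions rather than the book's actual code.

' Public data of the class (a simple sketch; real code would use Property procedures)
Public BondType As Integer
Public FaceValue As Double
Public CouponPayment As Double
Public MarketRate As Double
Public YearsToMaturity As Long

' Named values for BondType (constants in a class module must be Private)
Private Const btCouponBond = 1
Private Const btDiscountBond = 2
Private Const btConsolBond = 3

Public Function BondPrice() As Double
    Dim t As Long
    Dim price As Double

    Select Case BondType
        Case btCouponBond
            ' Present value of each coupon payment plus the discounted face value
            For t = 1 To YearsToMaturity
                price = price + CouponPayment / (1 + MarketRate) ^ t
            Next t
            price = price + FaceValue / (1 + MarketRate) ^ YearsToMaturity

        Case btDiscountBond
            ' A pure discount bond pays only its face value at maturity
            price = FaceValue / (1 + MarketRate) ^ YearsToMaturity

        Case btConsolBond
            ' A consol pays its coupon forever, so its price is a simple perpetuity
            price = CouponPayment / MarketRate
    End Select

    BondPrice = price
End Function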
As you can see, each value of the BondType property requires a different use of the
properties to perform the correct calculation. The application using the class is not
concerned with how the method performs the calculation, but only with the result.
Now suppose that you need to modify the calculation algorithm for the BondPrice
method. Because of encapsulation, you only need to modify the contents of the
BondPrice method and nothing more. Better yet, because you have not changed
the calling convention, the applications using the CBond class are none the wiser
that a change occurred.
Polymorphism
Suppose you are developing classes that must interact with a relational database.
For each of these classes, there can be a standard set of methods to retrieve
property values for an object instance from a database. We call this process of
storing and retrieving property values object persistence, a topic we will discuss in
detail in Chapter 5, "Distribution Considerations." We can illustrate an abstract
definition of a couple of methods as follows:
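The exact signatures shown here are assumptions based on the method names used later in this section; a sketch might be:

Public Function RetrieveProperties(ByVal ObjectID As Long) As Variant
    ' Fetch the property values for the given ID from the database
End Function

Public Sub SetStateFromVariant(ByVal State As Variant)
    ' Populate the object's properties from the retrieved values
End Sub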
For each class that is to follow this behavior, it must not only define, but also provide
the implementation for these two methods. Suppose you have three such
classes—CClassOne, CClassTwo, and CClassThree. An application that creates
and loads an object might implement polymorphic code in the following manner
(see Listing 3.2).
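Listing 3.2 itself is not reproduced in this text; the following is a sketch of the sort of late-bound loading routine it describes, with the class names taken from the text and the helper name purely illustrative.

Public Function LoadObject(ByVal ClassName As String, ByVal ObjectID As Long) As Object
    Dim obj As Object                    ' generic object variable: late binding

    Select Case ClassName
        Case "CClassOne"
            Set obj = New CClassOne
        Case "CClassTwo"
            Set obj = New CClassTwo
        Case "CClassThree"
            Set obj = New CClassThree
    End Select

    ' Each class is assumed to implement these two methods
    obj.SetStateFromVariant obj.RetrieveProperties(ObjectID)

    Set LoadObject = obj
End Function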
In the preceding code example, we use a technique known as late binding, wherein
Visual Basic performs type checking at runtime rather than at compile time. In this
mode, we can declare a generic object (a variable type intrinsic to Visual Basic) to
represent the instantiated object based on any of the three class definitions. We
must assume that each of these classes defines and implements the
RetrieveProperties and SetStateFromVariant methods as mandated by our
polymorphic requirements. If the classes deviate from these conventions, a runtime
error will occur. If the classes meet these requirements, we can simplify the coding
of the object retrieval process into a single function call on the application. This not
only leads to code that is easier to maintain over the life of the application, but also
makes extending the application to support new class types much simpler.
The late binding technique of Visual Basic presents us with some concerns. Because
late binding performs type checking at runtime, some errors might escape early
testing or even propagate into the production application. Furthermore, late binding
has a performance penalty because Visual Basic must go through a process known
as runtime discovery with each object reference to determine the actual methods
and properties available on the object. This said, we should scrutinize the use of late binding in the application and choose alternative approaches wherever possible. We will discuss several ways to circumvent these issues when we
discuss the framework components in Part II of the book.
Inheritance
Looking again at our CBond class, we notice that there is a BondType property to
force certain alternative behaviors by the calculation methods. We can modify our
CBond class into a single IBond base class and three subclasses called CCouponBond,
CDiscountBond, and CConsolBond. We use IBond here (for Interface Bond)
instead of CBond to coincide with Microsoft's terminology for interface
implementation. Graphically, we represent this as shown in Figure 3.4.
Figure 3.4. An inheritance diagram for the IBond base
class.
If we revisit our bond calculation functions in the context of inheritance, they might
look something like Listing 3.3. Disregard the IBond_ syntax for now because it is a
concept that we gain a thorough understanding of in our work in Part II of this book.
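Listing 3.3 is not reproduced in this text, but a sketch of the interface-based arrangement might look like the following; the property names carry over the earlier assumptions, and the members are assumed to be populated before the calculation runs.

' Interface class module: IBond
Public Function BondPrice() As Double
End Function

' Class module: CCouponBond
Implements IBond

Private mFaceValue As Double
Private mCouponPayment As Double
Private mMarketRate As Double
Private mYearsToMaturity As Long

Private Function IBond_BondPrice() As Double
    Dim t As Long
    Dim price As Double
    ' A coupon bond sums the present values of its payments plus its face value
    For t = 1 To mYearsToMaturity
        price = price + mCouponPayment / (1 + mMarketRate) ^ t
    Next t
    IBond_BondPrice = price + mFaceValue / (1 + mMarketRate) ^ mYearsToMaturity
End Function

' Application code: early bound through the interface
Private Sub Command1_Click()
    Dim bond As IBond
    Set bond = New CCouponBond
    Debug.Print bond.BondPrice
End Sub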
Although the application portion of this example might look somewhat similar to the
polymorphic mechanism from before, there is an important distinction. Because we
have defined these subclasses in the context of a base class IBond, we have forced
the interface implementation of the base class. This, in turn, allows Visual Basic to
perform early binding and therefore type checking at compile time. In contrast to
late binding, this leads to better application performance, stability, and
extensibility.
TIP
Any class definition that contains a Type property is a candidate for
inheritance-based implementation.
Critics have chastised Microsoft for not implementing inheritance properly in Visual
Basic in that it does not support a subclass descending from more than one base
class, a concept known as multiple inheritance. Although this criticism is technically true, in reality, multiple-inheritance scenarios arise so infrequently that they are not worth the extra complexity that Microsoft would have had to add to Visual Basic to support them.
Many critics would further argue that Visual Basic and COM, through their interface
implementation technique, do not even support single inheritance properly and that
the notion of the capability to subclass in this environment is ludicrous. Without
taking a side in this debate, we can sufficiently state that interface implementation
gives you some of the features afforded by single-inheritance, whether or not you
want to formally define them in this manner. The particular side of the debate you
might fall into is immaterial for the purposes of our framework development in Part
II of this book.
Association Relationships
After we have defined the basics of classes with simple property types, we can
expand our view to show that classes can have associative relationships with other
classes. For example, a class might reference another class in a one-to-one manner,
or a class might reference a group of other classes in a one-to-many fashion.
One-to-One Relationships
One-To-Many Relationships
One-to-many relationships and the collection classes that implement them are
synonymous with the master-detail relationships found across many applications.
We will be using these collection classes frequently throughout our framework
architecture. We will cover collections in detail in Chapter 7, "The ClassManager
Library."
Component-Based Development
As we alluded to in our discussion on layers and tiers in Chapter 2, "Layers and Tiers," a component-based development (CBD) approach has some distinct advantages during the development process.
Chief among these is the ability to develop and test components in isolation before
integrated testing.
Object-based CBD allows the packaging of class definitions into a deployable entity.
Under the Microsoft Component Object Model (COM) architecture, these packages
are special Dynamic Link Libraries (DLLs), a dynamic runtime technology that has
been available since the earliest days of Microsoft Windows. Microsoft renamed
these COM-style DLLs to ActiveX to indicate that there is a difference. An application
gains access to classes in an ActiveX DLL by loading the library containing the class
definitions into memory, followed by registration of the classes by the COM engine.
Applications can then instantiate objects based on these classes using the COM
engine.
The traditional DLL (non-ActiveX) meets the definition for CBD, but it is procedurally
based (that is, non–object-based). ActiveX DLLs also meet this definition, being
object-based in nature. Because an object-based approach is already rooted in the
reusability of functionality, the ActiveX DLL implementation of CBD is widely
considered the most powerful and flexible technology when working solely on the
Win32 platform.
Although COM is both a component and object engine, it differs from other CBD
technologies in that it represents binary reusability of components versus
source-code level reusability. Because of its binary basis, we can write COM libraries
in any language on the Win32 platform that adheres to the COM specification and its
related API. The basic requirement to support the COM API is the capacity of a
language to implement an array of function pointers that follow a C-style calling
syntax.
The COM engine uses this array as a jumping point into the public methods and
properties defined on the object. Visual Basic is one of many languages with this
capability.
COM actually has two modes of operation: local and remote invocation. The
distinction between these two will become important as we discuss distribution in Chapter 5, "Distribution Considerations."
With in-process servers, an application can reference an object, its methods, and its
properties using memory pointers as it shares a memory space with the component.
Figure 3.8 depicts the local, in-process invocation.
Figure 3.8. The local, in-process invocation mode of
COM.
In the out-of-process server mode, all data must be serialized (that is, made
suitable for transport), sent over the interprocess boundary, and then deserialized.
We call this serialization process marshalling, a topic that we will cover in detail in Chapter 5. Additionally, the out-of-process mode must set up a "proxy" structure on
the application (or client) side, and a "stub" structure on the component (or server)
side. Figure 3.9 depicts the local, out-of-process mode.
Figure 3.9. The local, out-of-process invocation mode of COM.
The reason for this proxy/stub setup is to allow the client and server sides of the
boundary to maintain their generic COM programming view, without having to be
concerned about the details of crossing a process boundary. In this mode, neither
side is aware that a process boundary is in place. The client thinks that it is invoking
a local, in-process server. The server thinks that we have called it in an in-process
manner. The in-process mode of COM is fast and efficient, whereas the
out-of-process mode adds extra steps and overhead to accomplish the same tasks.
TIP
COM-Definable Entities
A COM library not only enables us to define classes in terms of properties and
methods, but also to define enumerations, events, and interfaces used in
inheritance relationships. We already have talked about properties, methods, and
interfaces, so let us complete the definition by talking about enumerations and
events.
Enumerations are nothing more than a list of named integral values, no different
from global constants. What differentiates them is that they become a part of the
COM component. In essence, the COM component predefines the constants needed
by the application in the form of these enumerations. By bundling them with the
classes that rely on them and giving them human-readable names, we can ensure a
certain level of robustness and ease of code development throughout the overall
application.
TIP
Use public enumerations in place of constants when they tie intrinsically to the
operation of a class. This will keep you from having to redefine the constants for
each application that uses the class, because they become part of the COM
component itself. Where goes the class, so go its enumerations.
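A minimal sketch of a public enumeration defined alongside the class that relies on it follows; the names echo the bond example used earlier and are illustrative. Where the earlier sketch used private constants, a public enumeration makes the named values part of the component's type library.

Public Enum BondTypes
    btCouponBond = 1
    btDiscountBond = 2
    btConsolBond = 3
End Enum

' A property typed with the enumeration hands callers the named values directly
Public BondType As BondTypes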
Events defined for a class are formal messages sent from an object instance to its
application. The application can implement an event handler to respond to these
messages in whatever manner deemed necessary.
NOTE
Visual Basic and COM define events as part of a class, alongside properties and
methods. One might assume then that we can define events on an interface,
thereby making them available to classes implementing the interface. Although this
is a reasonable assumption and a desirable feature, Visual Basic and COM do not
support this. As such, do not plan to use events in conjunction with interface
implementation.
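A minimal sketch of declaring, raising, and handling a class-level event follows; the event, member, and control names are illustrative.

' In the class module (CAccount, for example)
Private mCurrentBalance As Currency

Public Event BalanceChanged(ByVal NewBalance As Currency)

Public Sub AddTransaction(ByVal Amount As Currency)
    mCurrentBalance = mCurrentBalance + Amount
    RaiseEvent BalanceChanged(mCurrentBalance)    ' notify the application
End Sub

' In the application (a form, for example)
Private WithEvents mAccount As CAccount

Private Sub Form_Load()
    Set mAccount = New CAccount
End Sub

Private Sub mAccount_BalanceChanged(ByVal NewBalance As Currency)
    lblBalance.Caption = Format(NewBalance, "Currency")
End Sub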
Component Coupling
With the flexibility to place COM classes into components and then have these
components reference each other, it can become easy to create an environment of
high coupling. Coupling occurs when we create a reference from a COM class in one
component to the interface of a COM class in another component. Because
components are different physical entities, this has the effect of hooking the two
components together relative to distribution. Wherever we distribute a component
that references other components, we also must distribute all the referenced
components, all their referenced components, and so on. One reason for coupling is
that we might not properly group functionality into common components.
Functionality that represents a single subpart of the overall business application
might be a good candidate for a single component. Alternatively, functionality that
represents similar design patterns might belong in a single component.
TIP
It is important during the analysis and design phases to group components based on
similar functionality. Although we invariably need to create system-level classes for
use by other classes, we should try to minimize the creation of a chain of component
references. These chains lead to administration and maintenance issues after the
application is in production.
Another issue that leads to coupling is the tendency to over-modularize the application by placing small snippets of functionality into separate components. Beyond the coupling aspects,
each ActiveX DLL has a certain amount of overhead to load and retain in memory.
Placing functionality in ten components when two would suffice adds unnecessary
performance overhead and complexity to your application.
From a performance perspective, we can look at the time necessary to initialize the
two scenarios. There are two initialization times to look at: the first is the time
required to initialize the component, and the second is the time required to initialize
the object. Remembering that a component in the COM world is a specialized DLL,
we can infer that some initialization time is associated with the DLL. When Visual
Basic must load an ActiveX DLL, it must go through a process of "learning" what
objects are defined in the component in terms of properties, methods, and events.
In the two scenarios, the 10-DLL case will have five times the load time of the 2-DLL
case, assuming negligible differences in the aggregate learning time of the objects
within the components.
From a complexity perspective, the more components created means more work on
the development team. One of the problematic issues with any object-oriented or
interface implementation project is that of recompilation and distribution when
something changes, especially in the early development phases of the application.
For example, if the definition of a core class referenced throughout the project
changes, it is much easier to recompile the two components versus the ten. As you
might already know from multitiered development in the DCOM environment,
propagating such seemingly simple changes across tiers can be very difficult. Thus,
appropriate minimization of the number of components up front is desirable.
We are not trying to say that you should place all your functionality into one
component—this leads to its own set of problems. The moral of the story is that one
should not force modularity purely for the sake of doing so. You should find an
appropriate balance that can come only from experience in developing these sorts
of systems. The framework presented in Part II is a good starting point for
understanding where these lines of balance should be drawn.
Figure 3.11 shows tight coupling, whereas Figure 3.12 shows its bridged counterpart.
Figure 3.11. A graphical representation of tight coupling.
Figure 3.12. A graphical representation of bridged coupling.
In Figure 3.11, it should be clear that components A and B must travel together
wherever they go. An application that only needs component A must bring along
component B as well. An application that uses component A might go through test,
debug, and redistribution whenever component B changes, even though it makes no use of component B.
Summary
Although the COM model is good for defining and implementing classes in the form
of binary reusable components, it offers nothing in the form of persistence or the
long-term storage of object state. By state, we mean the values of the properties at
any given moment in time. Perhaps this is something that Microsoft will address in
a future release of the COM standard, but until then, a common solution to this
problem is to store and retrieve data using a relational database management
system (RDBMS).
One of the greatest challenges faced when developing any application that interacts
with an RDBMS is how to provide a mapping between the database, the business
objects, and the user interface. There are several different theories on how to
accomplish this, but the prevalent models involve taking a data-centric, a
user-centric, or a business-centric view.
The data-centric view defines the database structure independently of any other
considerations. Following this model, we can sacrifice functionality in our business
and data layers, severely impeding our ability to cleanly implement the application.
The data-centric view sometimes presents itself simply because of the organization
of the development team. On many teams, there is a database expert focused on
data integrity, normalization, and performance. This person might care about
nothing else. Many database design decisions come about strictly because of what
the database expert perceives to be best for the application. In some cases, this
works adversely to the rest of the development team from an implementation and
flexibility standpoint. For example, the database designer might want to have all
database access take the form of stored procedures, disallowing any direct
manipulation by dynamic SQL calls generated by the application. The reasoning
behind this, in the database expert's mind, is to protect the integrity of the database
from developers who do not necessarily understand the database structure. It might
also come about simply because of territorial infringement issues. Using this model,
we must code specific data access procedures on each business object because the
calling convention will be different depending on the properties defined. It is
extremely difficult to define a generic database layer using this approach or using a
polymorphic method on the class.
From our examples in the last chapter, let us define how we can implement a
RetrieveProperties method on CBond using a stored procedure approach (see
Listing 4.1).
'From CBond
Public Sub RetrieveProperties(ByVal ObjectId As Long, _
                              ByRef FaceValue As Currency, _
                              ByRef CouponRate As Single, _
                              ByRef BondTerm As Integer, _
                              ByRef BondType As EnumBondType, _
                              ByRef Name As String)
    Dim cmd As ADODB.Command

    Set cmd = New ADODB.Command
    cmd.CommandText = "sp_RetrieveBond"
    cmd.CommandType = adCmdStoredProc

    ' Input parameter identifying the bond to retrieve
    Call cmd.Parameters.Append(cmd.CreateParameter("ObjectId", _
        adInteger, adParamInput, , ObjectId))

    ' Output parameters that will receive the property values
    Call cmd.Parameters.Append(cmd.CreateParameter("FaceValue", _
        adCurrency, adParamOutput))
    Call cmd.Parameters.Append(cmd.CreateParameter("CouponRate", _
        adSingle, adParamOutput))
    Call cmd.Parameters.Append(cmd.CreateParameter("BondTerm", _
        adInteger, adParamOutput))
    Call cmd.Parameters.Append(cmd.CreateParameter("BondType", _
        adInteger, adParamOutput))
    ' Variable-length types require an explicit size
    Call cmd.Parameters.Append(cmd.CreateParameter("Name", _
        adVarChar, adParamOutput, 50))

    Set cmd.ActiveConnection = cnn ' global connection for COM lib
    Call cmd.Execute

    ' Move the stored procedure's output values into the ByRef arguments
    FaceValue = cmd.Parameters("FaceValue").Value
    CouponRate = cmd.Parameters("CouponRate").Value
    BondTerm = cmd.Parameters("BondTerm").Value
    BondType = cmd.Parameters("BondType").Value
    Name = cmd.Parameters("Name").Value

    Set cmd = Nothing
End Sub
In terms of extensibility, suppose that we need to add a new field to the database to
support a new property on a business object. The stored procedures driving this
business object will need updating along with the business object code. Because we
will be changing the RetrieveProperties method, we will be changing an interface
on the class, which means that we will need to modify, recompile, and redeploy the
applications using this class to make this change.
The user-centric view defines the database by how we present the information to
the user. This is probably the worst approach to use in defining a database and is
akin to the issues with data-bound controls. Most likely, these sorts of interfaces are
simple master/detail type screens, with little to no data normalization on the
information making up the detail portion.
Because object-orientation enables us to model the real world, and the business
layer is the realization of that model, we should be able to follow a business-centric
view during database design. This is precisely what we have done because it is
simple when we have a good object model. In so doing, we guarantee that the
database structure closely follows the business object structure.
Table Orientation
With our wonderful object-orientation and RDBMS worlds at our disposal, a problem
arises when it comes to marrying the two together. We call this the impedance
mismatch problem, where we have to programmatically map objects into our
database structure. Tables are row- and column-based; classes are object- and
property-based.
Our mapping process is actually simple. We create a table for every class and define
columns of the appropriate data type for each property. Thus, a class maps to a
table and properties map to columns, with a table row representing an object
instance. In the case of an inheritance relationship, we map all subclasses of a base
class to a single table, with a ClassType field to indicate the particular subclass. In
this mode, we must ensure that there are columns defined to represent all
properties across the subclasses. Although this might create "empty" column
conditions on some rows, it is a much more efficient approach. Our data layer will
know which columns are safe to ignore during our insert and update processing.
We handle object relationships with primary/foreign key pairs. In our CAccount and
CPerson association example, we would have tables Table_Account and
Table_Person defined. Following this object relationship, Table_Account would
have a column (foreign key) known as Person_Id to reference the Id column
(primary key) of Table_Person. In this mode, we reference the associated object
from the object that makes the association. We sometimes refer to this as
downward referencing.
For example, suppose you had developed a system that used a 10-digit part number
string as its primary key on a table. Now suppose that through mergers and
acquisitions this part number changes to a 15-digit part number loosely based on
the formats from the combined companies. To accommodate this change, you not
only have to update your primary table with the new numbers, but also update
every table that references the primary table with this key. This level of work also
includes the expansion of the affected fields and the synchronization of the values in
all tables, a task that can grow to be quite complex.
Another benefit of the approach of using a single Id field as the primary key is that
of overall database size. On SQL Server, an integer field requires four bytes of
storage space. In the preceding example, the 10-digit part number required 10
bytes of space, and the expanded version required 15 bytes. Let us assume from the
preceding example that the primary table has 10,000 records. Let us also assume
that an additional 50,000 records among 10 other tables reference this primary
table. In the 10-digit scenario, the key values alone would consume 585KB of space
in the database, whereas the 15-digit version would jump to 879KB. In the
Id-based approach, the keys require only 234KB of space. These numbers might
seem small given the relatively low cost of storage space, but it should be easy to
extrapolate this 73% reduction in key storage space across a much larger data set.
OID Generation
With the need of OIDs in mind, we must be able to generate unique OID values in an
efficient fashion. Some developers prefer to create a single table with a single row
that does nothing other than track the last OID value used. In this mode, our OID
values are unique across a database when they only need to be unique within a table.
This has the effect of under-utilizing the key storage capacity of the long integer
field by disbursing its values across all tables. To solve this problem, some
developers have modified the previous approach by creating a last used row for
each table. Although this does solve the under-utilization problem, it forces a
database read followed by an update (to increment the key value) for each row
inserted elsewhere in the database. This is in conjunction with the overhead
associated with the data row access in the target table.
A third approach to OID generation is to have an insert trigger on the table calculate
the next Id value and perform an update with the appropriate value. For
performance and uniqueness reasons, this technique relies on there being a unique
clustered index on the Id column. Such an index has the property that the Id value
is unique across all rows and that the RDBMS physically orders the rows according
to their logical sort order based on the index. Database administrators normally
apply these types of indexes to the primary key, with the intent of improving search
times on the most commonly used index. Just prior to our row insert, we perform an
SQL query to get the maximum current Id value, increment it by one, and use the
result as our new OID. There are some issues with this approach. The most
problematic is that, to ensure concurrency, a lock must be placed on the table from
the time the SQL statement to generate the Id is executed until the update has
completed. For high transaction situations, this can create significant deadlock
issues that can force one or more client operations to fail at the expense of others.
In our model, we are relying on the underlying capabilities of the Identity column
type, also known as an AutoNumber field in Access. The Identity type is a special
column that is based on the integer type, but one in which SQL Server automatically
increments with each row insertion. Until version 2.1 of ADO, there was no reliable
way to retrieve this value from the server so it could be used to programmatically
formulate the necessary relationships to other tables in the database. With the 2.1
release, we are able to retrieve these values as long as triggers do not insert
additional rows into other tables with Identity columns. A complete discussion of
this issue can be found on Microsoft's KnowledgeBase in an article titled "Identity
and Auto-Increment Fields in ADO 2.1 and Beyond."
NOTE
It is important to note that for the sample code accompanying the text to work on
the provided Access database, the Microsoft Jet OLE DB Provider 4.0 must be used
in conjunction with the Microsoft Jet 4.0 version database. Both are installed by
Microsoft Access 2000.
The primary issue with this approach is that currently it is guaranteed to work only
with SQL Server and Jet 4.0 databases. The insert trigger issue might also present
a problem if the development team cannot move the functionality implemented by
these triggers to the Application tier.
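As an illustration, a sketch of the pattern might look like the following, where cnn is an open ADO connection, the table and column names are illustrative, and the exact cursor settings that make the value visible depend on the provider, as noted above:
Dim rs As ADODB.Recordset
Dim lNewId As Long

Set rs = New ADODB.Recordset
rs.Open "SELECT * FROM Table_Person WHERE 1 = 0", cnn, _
        adOpenKeyset, adLockOptimistic, adCmdText

rs.AddNew
rs!LastName = "Smith"
rs!FirstName = "Joe"
rs.Update

' With ADO 2.1 and a supported provider, the server-assigned Identity
' value is available on the current row after the Update
lNewId = rs!Id

rs.Close
Set rs = Nothing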
Referential Integrity
Most, if not all, RDBMS systems have some mechanism for defining referential
integrity (RI). When we speak of RI, we mean that the database server makes sure
that we do not cause invalid primary/foreign key pair references in the database.
For example, in the Table_Portfolio example, when we delete a row in that table, we should also delete every row in Table_Bonds that references it. There are several ways to
accomplish this. Most RDBMS servers have declarative RI, where we formally define
the primary/foreign key pairs and the server takes care of RI natively. Although this
is efficient, on many servers, the names of the columns must be unique across the
entire database, meaning we cannot implement a standard naming convention
across all the tables as discussed in the previous section.
An issue arises with this approach when we might want to nullify a foreign key
column when its parent row is deleted, versus simply deleting the row with the
foreign key. In the CSerializedAutomobile and CSerializedEngine example
from Chapter 3, "Objects, Components, and COM," we might not want to delete the
engine when we delete the automobile. By nullifying the foreign key, we simply
indicate that no automobile owns the engine.
Another issue arises in that we might want to perform more than just RI during a
delete process, such as inactivating an account if we delete all its transactions or
providing complex validation logic. In these cases, we will be using database
triggers to perform this work. A database trigger is a programming hook provided
by the vendor that allows us to write code for the insert, update, and delete events
of a given database row. Part of this logic could be to abort the transaction if
something is not valid.
TIP
For maximum flexibility and maintainability, and because of the issues with declarative RI, we should consolidate our RDBMS-side logic into triggers.
Data Localization
What is important about global data is that we should try to maintain it at the master
server level. Although it is possible to enable bidirectional replication, it is extremely
painful to keep global data synchronized if we are generating global data at the local
level. It is also difficult to ensure that there are not any OID collisions. Because we
are generating OID values based on the Id field of a table in a site-based server, we
might have to go to a multi-key approach where we include a Site_Id column on
every table.
Locking
With an RDBMS system, we are concerned with data locking. At one level, we want
to ensure that two users are not trying to update the same row simultaneously.
Fortunately, the RDBMS takes care of this for us in conjunction with our lock settings
controlled through the data access library (ADO). In SQL Server 6.5 and later,
locking occurs at the page level, which means not only the row being altered is
locked, but also every row on the same data page as the locked row. This can cause
some issues in high-volume situations. We will provide workarounds to this problem
in Chapter 10, "Adding an ActiveX Control to the Framework."
When we instantiate an object, we retrieve the state information from the RDBMS.
Only during this retrieval process is the row locked because we return our database
connection to the pool when we are finished. After this happens, there are no
safeguards to prevent another user from instantiating another editable copy of the
object. Because of this, we must provide an object-locking mechanism. We will
discuss such details in Chapter 10.
Performance Tuning
For example, assume that one of the conditions in the WHERE clause produces a
working result set of 10,000 rows. If the optimizer incorrectly picks an inefficient
index because of stale statistics, it might spend a significant amount of time
retrieving these rows, although it thinks it is being efficient. Worse, the optimizer
might have forgone an initial working result set that would have produced only five
rows because of bad statistics.
Although this is a simple concept to grasp, what is difficult about it is how SQL
Server can determine that one condition will produce the five-row result set while
the other will produce the 10,000-row result set. SQL Server will not know how
many rows a given condition will generate until it actually performs the query; by
that time, it is too late. Instead, SQL Server tries to use index statistics as an
approximation of result set size and row-selection efficiency. To do this, it first
makes a list of which indexes it can use based on the columns in the indexes and the
columns in the WHERE clause and join portions of the query. For each possible index,
it looks at a statistic known as average row hits to estimate how many rows will
need examining to find a specific row using this index. A unique clustered index on
the primary key of the table will have this value set to 2, whereas other indexes on
the table might be in the thousands. SQL Server will also express this value as a
percentage of the total rows in the table that it must examine to select a row. It will
also provide a subjective, textual rating.
For example, in the unique clustered index case, the percentage is 0.00% for very
good selectivity, while another index might have a percentage of 5% and a rating of
very poor selectivity. You can access this information for an index by clicking the
Distribution button in the Manage Indexes dialog box in the SQL Enterprise
Manager.
NOTE
Indexes with selectivity percentages greater than 5% should be considered for removal
from the RDBMS because they add little value but have some maintenance
overhead.
Using the efficiencies of all available indexes combined with the relative table sizes,
SQL Server proceeds to pick the order in which it will filter and join to arrive at the
result set. There is little you can do other than to provide SQL Server with good
indexes, fresh statistics, and occasional optimizer hints when it comes to
performance tuning. Because of this, database tuning can seem like an art more
than a science. Following several essential steps can provide a method to the
madness:
1. Verify that you have indexes on all columns that are participating as part of
a table join operation. Normally, these should be the primary and foreign
keys of each table, with one index for the primary key columns and another
for the foreign key columns.
2. Verify that you have indexes on one or more columns that are participating
in the WHERE clause. These are sometimes known as covering indexes.
3. Rebuild the indexes used by the queries in question to have them placed on
sequential pages in the database. This will also update the statistics on the
index.
4. Verify the results using an SQL query window within Enterprise Manager with
the Show Query Plan option turned on. You might still need to override the
SQL Server query plan optimizer using optimizer hints.
After you have gotten through the initial tuning phase of your RDBMS, you still must
periodically repeat the last few steps to maintain optimal efficiencies. In the long run,
you might need to re-tweak certain queries by adding or modifying indexes and
repeating the steps. Many production SQL Server implementations use the Task
Manager to schedule weekly or monthly re-indexing operations during off-peak load
times. You can accomplish this by selecting the Execute as Task button when
rebuilding an index from the Manage Indexes dialog box within the Enterprise Manager.
Summary
This chapter covered the basics of an RDBMS system. Specifically, we talked about
simple database design techniques and methods for mapping objects to tables for
the purpose of persistence. We have also talked about some of the issues related to
generating an OID value and returning that value to the client application so we can
maintain the proper table references in the database. We also touched on secondary
issues such as data replication, locking, and performance tuning to address the
scalability and accessibility issues associated with enterprise applications.
In the next chapter, we will begin discussing the issues involved with creating a
distributed application. We will focus our efforts on how we can place objects on
different machines and make them communicate through Distributed COM (DCOM)
technologies. We will also explore some of the tradeoff decisions that must be made
relative to moving data efficiently between machines.
Chapter 5. Distribution Considerations
Data Marshalling
Remote Activation
The calling convention for this mode might look something like Listing 5.1.
Example 5.1. Instantiating a Remote DCOM Object to Pull Properties
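In sketch form, the client-side code might look something like the following, where the PersonServer.CPerson ProgID, the Load method signature, and the text-box names are illustrative assumptions; the comments mark the lines discussed next:
Private Sub LoadPerson(ByVal Id As Long)
    Dim oPerson As Object

    ' Line 100: create the object on the remote server "MTS-HOU05"
    Set oPerson = CreateObject("PersonServer.CPerson", "MTS-HOU05")

    ' Line 105: populate the object's state from the data store;
    ' DCOM marshals the Id parameter through the proxy/stub layer
    Call oPerson.Load(Id)

    ' Line 110 and beyond: each property access is another round trip
    ' through the proxy/stub layer to the server
    txtLastName.Text = oPerson.LastName
    txtFirstName.Text = oPerson.FirstName
    txtOfficePhone.Text = oPerson.OfficePhone
End Sub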
On line 100, the object is created on the remote server "MTS-HOU05" and the
resulting object reference is sent back to the client and set to the Person object
reference. At this point, DCOM has created the proxy and stub. On line 105, we call
the Load method of the Person object to populate the state from the data store.
DCOM must marshal the Id parameter during this call. By line 110, our Person
object is instantiated and its state has been set from the data store. We begin
moving the data from the object into our UI elements for presentation to the user.
Each of the property accesses result in a trip through the proxy/stub layer to the
server, because that is where the object is physically living. DCOM must also call the
marshaller into action for each of these property accesses.
An equivalent subroutine to save the object back to the data store might be as
shown in Listing 5.2.
Example 5.2. Instantiating a Remote DCOM Object to
Push Properties
Again, each property access requires the same proxy/stub layer traversal and
passes through the marshaller.
Although this simple example might seem trivial, we only need to imagine an
application with five to ten objects per UI form and a user base of several hundred
to see the implications of this approach. There will be many server round trips
through the proxy/stub layer to perform relatively simple tasks. One common way
to solve some of the round-tripping overhead is to bundle all the individual property
accesses into batch methods.
The same LoadPerson subroutine when re-written with a batch call might look
something like Listing 5.3.
Properties as a UDT
Properties as a UDT
Again, by using a UDT in a single call, we are making judicious use of network and
system resources.
As might have become apparent by now, one of the primary issues to solve when
implementing distributed objects is how to optimally communicate object state
information between the tiers. We have already discussed using a UDT as a
mechanism to pass a structured data packet that represents the state of all
properties. By doing this, we can accommodate the setting or getting of all
properties with a single call across the DCOM boundary. The next sections expand
on this technique with several alternatives that are commonly used to solve this
problem.
Disconnected Recordsets
The LoadPerson subroutine written with a recordset passing convention would look
like Listing 5.5.
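In sketch form, the server-side method builds and disconnects the recordset, and the client simply reads it. The method name, SQL text, and control names here are illustrative, and cnn is the component's connection as in the earlier listing:
' Server side, inside the MTS component
Public Function GetState(ByVal Id As Long) As ADODB.Recordset
    Dim rs As ADODB.Recordset

    Set rs = New ADODB.Recordset
    rs.CursorLocation = adUseClient
    rs.Open "SELECT * FROM Table_Person WHERE Id = " & Id, cnn, _
            adOpenStatic, adLockBatchOptimistic, adCmdText
    Set rs.ActiveConnection = Nothing   ' disconnect before marshalling
    Set GetState = rs
End Function

' Client side
Private Sub LoadPerson(ByVal Id As Long)
    Dim oPerson As Object
    Dim rs As ADODB.Recordset

    Set oPerson = CreateObject("PersonServer.CPerson", "MTS-HOU05")
    Set rs = oPerson.GetState(Id)
    txtLastName.Text = rs!LastName
    txtFirstName.Text = rs!FirstName
End Sub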
In the case of collections of objects, we can return the information for the multiple
objects with the single call. The need for this might arise quite frequently when we
talk about the detail side of a master/detail relationship. In this case, the return
parameter would still be the recordset, but it would have a row for each object
instance. The client-side object is responsible for iterating through each row.
NOTE
For result sets above 10,000 records, ADO recordsets are the most efficient method
for sending information across a DCOM boundary. In such cases, you should
consider redesigning an application that needs to send so many records across a
DCOM boundary.
Another potential issue is that the client side not only must have the ADO library
installed (or its lighter-weight ADOR sibling), but its version must be compatible
with the version running on the server. Because this is a technology just entering
widespread use, expect Microsoft to make revisions over time and include such
revisions in their full range of products. Confounding this issue is that the names
Microsoft uses for the primary DLLs to support ADO and ADOR are the same,
regardless of the version. For example, the ADO library is found in a DLL called
MSADO15.DLL whether it is version 1.5, 2.0, or 2.1; the same is true for
MSADOR15.DLL. Although the libraries are backward compatible with each other,
you might have ADO or ADOR upgraded on your client machine as part of some
other installation process without it becoming evident to you. If you start using
some of the newer properties, you might experience difficulty when deploying to an
MTS machine with older libraries installed. Worse, it can take you several days to
determine the source of the problem because the filenames for the libraries are the
same across versions.
As of the writing of this book, Microsoft has gone through three revisions (1.5, 2.0,
and 2.1) of ADO, whereas 2.5 is currently in beta. In addition, because ADO might
actually interface with ODBC to get to the database server, it too will need installing
and administering on the client side.
TIP
Do not use ADO on the client unless you are prepared to maintain it and potentially
distribute and maintain ODBC across the user base.
Property Bags
Microsoft developed the PropertyBag object to support the saving of design time
settings for ActiveX controls created in Visual Basic. Although we can extrapolate
their use to support structured information communication, they are still just a
collection of name/value pairs. In one sense, however, we can think of a
PropertyBag as a portable collection with one important caveat. The PropertyBag
has a Contents property that converts the name/value pairs into an intermediate
byte array that then converts directly to a string representation. On the receiving
end of the DCOM boundary, another PropertyBag object can use this string to
re-create the byte array and subsequently set its Contents property, effectively
re-creating the information.
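In sketch form, the two sides might look like the following, where the method names and the mLastName and mFirstName private members are illustrative:
' Server side: flatten the object's private members into a single string
Public Function GetStateAsString() As String
    Dim pb As PropertyBag

    Set pb = New PropertyBag
    pb.WriteProperty "LastName", mLastName
    pb.WriteProperty "FirstName", mFirstName
    GetStateAsString = pb.Contents        ' byte array coerced to a string
End Function

' Client side: rebuild a PropertyBag from the marshalled string
Private Sub LoadFromState(ByVal State As String)
    Dim pb As PropertyBag
    Dim abState() As Byte

    abState = State                       ' string back to a byte array
    Set pb = New PropertyBag
    pb.Contents = abState
    txtLastName.Text = pb.ReadProperty("LastName", "")
    txtFirstName.Text = pb.ReadProperty("FirstName", "")
End Sub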
Properties as a PropertyBag
Although the marshalling aspect of the string generated by the Contents property is
of minimal concern, creating a PropertyBag is more expensive than other options in
terms of speed and information bloat. If we assume that an ADO recordset is the
original source of most information, we will have to traverse the entire recordset
programmatically in VB to move the data into the PropertyBag.
Properties as a PropertyBag
User-Defined Types
User-Defined Types (UDTs) are simple in concept in that they follow the structural
definition common to many procedural languages. In Visual Basic, we define a UDT
using a Type…End Type block in the declaration section of a code module.
A sample UDT definition corresponding to the CPerson class might look something
like Listing 5.9.
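A sketch of such a definition, using the CPerson attributes shown in the XML example later in this chapter (the type name and data types are assumptions), might be:
' In the declaration section of a code module
Public Type PersonType
    Id As Long
    LastName As String
    FirstName As String
    MiddleInitial As String
    EmployeeNumber As String
    OfficePhone As String
    OfficeFax As String
    Pager As String
    RoomNumber As String
    DepartmentId As Long
    UserName As String
    DomainName As String
End Type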
To reiterate here, the LoadPerson subroutine with a UDT passing convention would
look like Listing 5.10.
Example 5.10. An Optimized DCOM Call That Pulls
Properties as a UDT
a UDT
With all the benefits of UDTs, it might be difficult to understand why any other
approach might be necessary. At issue is the only major drawback to a UDT—it
cannot be supported by VBScript. At first glance, this might seem insignificant until
we remember that the basis for Active Server Pages is VBScript. With more
application functionality moving to the IIS server, this becomes a crippling
limitation.
Variant Arrays
Variant arrays are the most flexible and the simplest form of data transfer across a
DCOM boundary. Although this approach does require the development of some indexing structures to handle the arrays effectively, such development is relatively minor when viewed against the long-term benefits.
The LoadPerson subroutine written with a variant passing convention would look
like Listing 5.12.
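In sketch form, the client side might look something like this, where the SetStateToVariant signature, the PersonServer.CPerson ProgID, and the control names are illustrative assumptions; the server returns the field names and data through the two ByRef arrays:
Private Sub LoadPerson(ByVal Id As Long)
    Dim oPerson As Object
    Dim vFields As Variant              ' ordered list of field names
    Dim vData As Variant                ' two-dimensional array: (field, row)
    Dim dictFields As Object
    Dim i As Integer

    Set oPerson = CreateObject("PersonServer.CPerson", "MTS-HOU05")

    ' A single round trip returns both the metadata and the data
    Call oPerson.SetStateToVariant(Id, vFields, vData)

    ' Build a Dictionary keyed on field name so we can index into vData
    Set dictFields = CreateObject("Scripting.Dictionary")
    For i = LBound(vFields) To UBound(vFields)
        dictFields.Add vFields(i), i
    Next i

    ' vData is always two-dimensional, even for a single row
    txtLastName.Text = vData(dictFields("LastName"), 0)
    txtFirstName.Text = vData(dictFields("FirstName"), 0)
End Sub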
In the preceding example, we are receiving two return parameters from the
SetStateToVariant method: vFields and vData. The former is a variant array of
string values representing the field names. The ordinal position of the values in this
array corresponds to the same ordinal positions in the vData array, which is the
actual data being returned. So that we can more easily manage the data array, we
create a Dictionary object keyed on the field name so that we can index into it. ASP
again drives an implementation decision to use the Dictionary object instead of a
standard VBA Collection object, which VBScript does not support. Regardless of
whether we are returning data for single or multiple rows, vData will always be a
two-dimensional array, hence the second index dimension on lines 140–195. This
directly relates to the use of the GetRows functionality on the ADO recordset to
generate the variant array.
The variant array approach is simple and fast. It also represents the utmost in
flexibility because neither the server nor the client requires UDT definitions. As in
the case of the other options discussed so far, we might need to handle multiple
records. The variant array approach handles this naturally because it is a
two-dimensional array with the first dimension representing the field and the
second indicating the row. The metadata needed to describe the data is simply an
ordered list of string values that apply to the entire data set.
If we consider that most data originates as a database query, Microsoft must realize
something here because they provide a highly optimized method in the form of the
GetRows method. Although the method must be performing a memory copy, the
internal structure of the recordset must be similar to that of the variant array that it
generates. We can make this inference from the fact that even for large recordsets,
the GetRows method returns quickly. The auto marshaller then processes this
resulting array quickly for passage across the DCOM boundary. This approach not only carries minimal performance cost and overhead, but also represents the most flexible solution, supporting both the typed VB language and the variant-based VBScript within ASP.
XML
Although we will cover XML (eXtensible Markup Language) in detail in Chapter 13,
"Interoperability," it is important to note that although it is usable as a
cross-process communication mechanism, it is the one with the highest cost.
Because of this, we relegate it to boundaries that cross platforms or applications
rather than simple cross-process communication within a platform. In these cases,
the boundary might cross over the Internet, something that DCOM does not handle
cleanly.
XML is simply a textual stream of data, similar in style to the HTML pages that your
browser pulls down from the Internet and renders on-the-fly to present to you.
What differentiates XML from HTML is that XML represents data, whereas HTML
represents content and format. Because XML is capable of representing complex
object hierarchies within the confines of a textual stream, it is easy to see how we
can employ it as a communication vehicle.
A simple XML stream corresponding to the CPerson class might look something like
Listing 5.14.
Example 5.14. A Simple XML Stream
<?xml version="1.0"?>
<!DOCTYPE Person [
<!ELEMENT Person EMPTY>
<!ATTLIST Person
Id PCDATA #REQUIRED
LastName PCDATA #REQUIRED
FirstName PCDATA #REQUIRED
MiddleInitial PCDATA #REQUIRED
EmployeeNumber PCDATA #REQUIRED
OfficePhone PCDATA #REQUIRED
OfficeFax PCDATA #REQUIRED
Pager PCDATA #REQUIRED
RoomNumber PCDATA #REQUIRED
DepartmentId PCDATA #REQUIRED
UserName PCDATA #REQUIRED
DomainName PCDATA #REQUIRED
>
]>
<Person Id="1234"
LastName="Smith"
FirstName="Joe"
MiddleInitial="M"
EmployeeNumber="5678"
OfficePhone="(212) 555-5555"
OfficeFax="(212) 555-5556"
Pager="(212) 555-5557"
RoomNumber="13256"
DepartmentId="52"
UserName="JMSmith"
DomainName="XYZCORP"
/>
The LoadPerson subroutine rewritten using an XML strategy and the Microsoft XML
parser would look like Listing 5.15.
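A sketch of that client-side call follows; the GetStateAsXML method name and the control names are assumptions, and the parser is created through its ProgID so no project reference is required:
Private Sub LoadPerson(ByVal Id As Long)
    Dim oPerson As Object
    Dim xmlDoc As Object
    Dim xmlPerson As Object
    Dim sXML As String

    Set oPerson = CreateObject("PersonServer.CPerson", "MTS-HOU05")

    ' One call across the DCOM boundary returns the entire state as text
    sXML = oPerson.GetStateAsXML(Id)

    ' Parse the stream on the client with the Microsoft XML parser
    Set xmlDoc = CreateObject("Microsoft.XMLDOM")
    xmlDoc.async = False
    If xmlDoc.loadXML(sXML) Then
        Set xmlPerson = xmlDoc.documentElement
        txtLastName.Text = xmlPerson.getAttribute("LastName")
        txtFirstName.Text = xmlPerson.getAttribute("FirstName")
    End If
End Sub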
Properties as XML
Properties as XML
The LoadPerson subroutine written using an XML strategy and the ADO recordset's
capability to load an XML stream would look like Listing 5.17.
If the file-based approach is used, then both the client and server sides of the DCOM
boundary must deal with temporary file management issues in addition to the extra
overhead of file access. If the Stream object is used instead, then everything
happens in memory, which is both more efficient and faster. Nonetheless, the same
issues associated with using an ADO recordset on the client concern us here as well.
As programming-unfriendly as it can be, it is much easier to install and administer
the MSXML parser on the client than is ADO.
Thus, we are concerned with the remainder of the micro-level timing parameters
that make up the total time. These micro-level elements include the following:
• The time to package the data, if any, into a form suitable for transfer
(premarshalling).
• The time to marshal/transfer/de-marshal the data.
• The time to move the data into client-side elements.
Methodology
The best test environment is that of your own corporate infrastructure, including
clients, servers, and the underlying network connecting them. One critical factor is
to perform the testing first under light network loads. It is common sense that a
corporate network is most heavily loaded in the morning, after lunch, and just
before closing time because people sift through their emails at these times of day.
After you have developed your test bed during the evening hours and weekends,
you can validate your findings during the peak times to make sure the relative
timings are still valid.
To test in your environment, create a collection of n simple objects of the same class
within the context of an MTS component. Each object should consist of various
randomly generated data types, such as strings, integers, floating points, and dates.
Create a disconnected recordset from the collection, followed by a variant array
created from the recordset (using the GetRows function). From a client-side
component, repeatedly request the data set to be sent to the client under several
scenarios. The exact same data set should be sent with each test run. Average the
total time for each scenario and divide by the number of requests to determine the
average time.
Under many environments up to about 10,000 records, you might find that
scenarios 1 and 3 are the fastest and on par with each other. Scenario 4 is the next
fastest, but about 100 times slower than 1 and 3. Scenario 2 is the worst performer, about 500 times slower than 1 and 3.
We have spent a significant amount of time in the last several chapters talking about
DCOM, remote activation, and distribution considerations. Underlying all this is the
use of MTS in the server side of these discussions. Although MTS is not a
requirement for implementing DCOM, it makes things significantly easier. Among the reasons we use MTS are its DCOM hosting capability and its sophisticated object and database connection pooling. It also makes the DCOM
administrative process much easier.
Using MTS
One of the most important things to remember is that the development team must
be using Windows NT Workstation or Server as its development platform. The
reason for this is that MTS runs only on these platforms, so for many debugging
purposes, this will simplify things. We will call this the local MTS when we refer to
debugging activities. If we are using an MTS instance on another machine—whether
we are talking about debug or production modes—we refer to it as the remote MTS.
TIP
NOTE
Walking to the snack machine does not constitute an acceptable form of exercise.
How you structure the directories and component packages within MTS is important.
If you do not already have a standard structure within your organization, consider
employing the ones presented here.
MTS Packages
In MTS, DCOM components run within the context of a package. A package is a unit
of management for MTS relative to security, lifetime, and so on. Each package can
contain one or more components, whether they belong to one or multiple
applications. Although it is possible to place all your DCOM components into a single
package on MTS, it is easier to manage the development and maintenance aspects
of the application base if you group components under some logical mechanism.
This package is the unit of distribution for the components of your distributed
application. Fixing a class in one of the components in the package means a
redistribution of the entire package.
You may create a package that groups the components driving one of the subparts
of the application. You might alternatively decide to group based on a similar set of
functionality that the components provide. The reason that such grouping is
important is that after a developer begins working on a single component within a
package, other components within the package are not available to other
developers.
TIP
It is prudent to align your development team and MTS package layout, or vice-versa,
as much as possible. After the application begins coming together, you might have
one developer waiting on another to complete his or her work if their components
are co-mingled in the same package.
Summary
This chapter has addressed the issues associated with communication between
distributed objects. Several widely used techniques can be used to pass object state
information between tiers: user-defined types, ADO disconnected recordsets,
PropertyBags, variant arrays, and XML. Each technique has its own advantages and
disadvantages, although our framework will follow the variant array approach in
future chapters.
The next chapter covers the development fundamentals and design goals for
enterprise applications. It lays the final groundwork for our work in Part II,
"Implementation of an Enterprise Framework."
Chapter 6. Development Fundamentals and
Design Goals of an Enterprise Application
Although a rich set of development tools and technologies are at our disposal, they
sit before us with minimal structure. We are free to do with them what we please.
Although this level of flexibility is important, we must decide on a standard approach
to implementation when we begin using these tools. The importance of
standardization spans both small and large development teams. Standardization
creates consistent implementation techniques, nomenclatures, and methodologies
that become the underlying fabric and texture of your application. Standardization
also forces a best-practice implementation that, in turn, promotes the fundamental
stability of the application. If one development team member reviews a piece of
work by another team member, it should make some reasonable level of sense or it
should provide the information for another developer to understand it relatively
quickly. Similarly, when you look at the code six to twelve months from now in a
maintenance mode, you should be able to re-acclimate yourself to it quickly.
In this chapter, I will outline some of the fundamental design and implementation
decisions that we must make, regardless of which part of the application is under
construction. In the process of outlining this, I will provide some sample techniques
or argue for one approach over another. This chapter covers Visual Basic 6.0,
Microsoft Transaction Server (MTS) 2.0, Internet Information Server (IIS) 4.0, and
Structured Query Language (SQL) Server.
Visual Basic
We will begin by taking a look at some of the capabilities of the Visual Basic
programming language. A thorough understanding of these concepts will allow you
to utilize the language to its full extent.
Option Explicit
Visual Basic has the capability to force or ignore compile-time type checking. We
can only assume that Microsoft chose to allow this for flexibility purposes, although
it has such significant consequences that perhaps Microsoft should consider
eliminating this option in future releases, or at least making Option Explicit the default. It
is important to note before proceeding that this topic differs slightly from the
discussions on runtime versus compile-time type checking in Chapter 3, "Objects,
Components, and COM." In the current chapter, the reference to type checking is
relative to variable declarations versus the object binding methods discussed before.
Unless it is told otherwise, Visual Basic will implicitly dimension variables upon first
use. If Visual Basic does this, it has no other option but to dimension the variables
as variant data types. As previously discussed, the use of these data types reduces
application performance because Visual Basic must perform extra steps when
assigning values to, and accessing the values from, variables of the variant type.
It just so happens that this implicit declaration of variables is the default mode for
Visual Basic. To switch this behavior, an Option Explicit statement is required at
the beginning of the declaration section of every module. In this mode, Visual Basic
will generate a compile-time error if it encounters a variable in the source code that
has not been declared in the current scope.
There are other important reasons to use the Option Explicit mode and not allow
Visual Basic to implicitly declare each variable as variant. When assigning a value to
a variant type variable, Visual Basic must make some assumptions as to the intrinsic
underlying type of the variable. If the value being assigned is the result of a function
of a known type, Visual Basic's job is relatively easy. For example, the statement
ThisDate = Now() tells Visual Basic that the underlying type of ThisDate, which is
implicitly a variant if it has not been declared in the current scope, is a date because
that is the type returned by the Now function. It is important to understand that a
variant data type has both data and a data-type descriptor. Within the first few
bytes of the storage allocated for the variant variable is information defining this
type information. The VbVarType enumeration defined under Visual Basic for
Applications (VBA) provides the list of these types. If the VarType function were
performed on ThisDate, it would return vbDate.
If Visual Basic cannot determine the underlying data type, it must make some
assumptions that might not correlate with the assumptions you would make. For
example, consider the following function:
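Something along these lines illustrates the problem; the exact statements are illustrative, and none of the variables are explicitly declared:
Public Sub DoSomething(A, B)
    C = A + B       ' C is implicitly declared as a Variant
    D = C * 2       ' fails at run time if C did not end up numeric
End Sub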
Again, the preceding example will compile without issue. If the parameters A and B are always numeric, we have no issue. The assignment of D will fail, however, if either parameter is of a non-numeric string type. This problem arises when the user of the DoSomething routine is unaware of what is happening within it. Although this is a trivial example given for exposition, the manifestations of these
issues can become complex in real-world situations.
In essence, by following an implicit data type approach, you are allowing both Visual
Basic and your development team to make possibly incompatible assumptions
throughout your code base. Although you will catch many of these issues during the
development and debug stages, your team will spend non–value-added time
tracking them down and fixing them. Worse still, your team might not catch all
these issues and they can escape into production, where the cost to fix them can
affect you in terms of additional time (which is measurable) and lowered customer
satisfaction (which is immeasurable). Remember that being penny-wise might
result in being dollar-foolish here. Although many would argue that not setting
Option Explicit is acceptable development practice for small-scale applications, it
is inappropriate when building robust enterprise applications. The following is an
example of its implementation:
Option Explicit
Private mName As String
Private mAddress As String
Enumerations
Component Object Model (COM) defines enumerations as their own first-class entity,
making them shareable across all the classes defined within the COM component
and visible to users of the component. Visual Basic, however, does not have a mechanism to define an enumeration as a standalone, component-level entity. To do so would mean that a new
type of code module would have to be developed to support them. If enumerations
are placed in a standard code module (bas module), they become visible to the
classes defined in the component but invisible to anything externally. To solve this,
the developer must place the enumeration definitions within any public class
module defined in the component. This technique has the effect of making the
enumeration visible both internally and externally to the component. Although the
choice of which class module within the component is used to define the
enumeration does not matter, a good practice is to place it in one of the classes that
will be using it. In essence, one of the class modules is acting as a gracious host for
the enumeration definition, so it makes sense that the class that needs it should be
the one that defines it. Although this arrangement is somewhat awkward, Microsoft has taken this
approach to enable COM development within Visual Basic. If you look at the bigger
picture, this quirky enumeration implementation is a relatively minor issue.
Enumerations can be used in place of global constants that are used by more than
one component. In the CBond example in Chapter 4, "The Relational Database
Management System," we defined a BondType field with possible values of
CouponBond, DiscountBond, and ConsolBond. A code sample for these definitions
using constants would be as follows:
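One plausible form, with the values chosen purely for illustration, is:
' Defined in the component and repeated in every client application
Public Const CouponBond As Integer = 1
Public Const DiscountBond As Integer = 2
Public Const ConsolBond As Integer = 3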
What should be apparent is that these types of constants must be defined in both
the component itself and the application that uses the component. Furthermore, the
definitions in both places must be synchronized as changes are made to the CBond
class.
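The enumeration equivalent, defined once inside one of the component's public class modules (the member values here are again illustrative), might be:
Public Enum EnumBondType
    CouponBond = 1
    DiscountBond = 2
    ConsolBond = 3
End Enum
Because EnumBondType is part of the component's type library, the editor's IntelliSense lists its members wherever a variable or parameter of that type is expected.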
This not only saves the time to remember or look up the particular constant name,
but also the time required typing it into the editor. This might seem like trivial
savings, but over the course of many hours of code development, it can actually
produce some significant savings.
After a component is compiled with the Binary Compatibility option, any changes to class interfaces or
enumerations force the developer to break compatibility, which means generation
of a new GUID and a forced recompilation of each component that references the
changed component. Each of these components referencing the original component
must also break compatibility in the process, generating more new GUID values.
This occurs whether the change in the original component would have had any
impact on the current component's functionality. This process repeats until all
components in the referencing chain are recompiled. In a highly layered
environment, this can be very frustrating. After an application is placed into a
production mode, changing an enumeration in a component running on an MTS
server can force a recompilation of all components such that the application must be
redistributed all the way back to the client. This runs counter to one of the main
goals of a distributed architecture: being able to make simple changes on the
application tier without affecting the client.
NOTE
You should seriously consider whether to use enumerations on the application and
data tiers or whether a set of constants would be more appropriate. Only when you
are 99.99% sure that an enumeration on these tiers would not change over the
lifetime of the application should you consider using one.
Naming Conventions
As is evident in the biblical story of the Tower of Babel, things are much more
efficient when we are using a common language. We will extrapolate this here and
apply it to the importance of developing standardized naming conventions for
various parts of your code.
Variables
It is easy to clearly understand the data type associated with a variable if you are
within the declaration section of a code module, Function, Sub, or Property block.
However, you quickly lose focus of that if that section is no longer physically visible
on the screen within the editor. One method the industry has adopted, sometimes
referred to as Hungarian notation, is to prefix the variable name with something to
indicate its data type. Examples include an i to designate integer types, an l for long,
an s for string, a b for boolean, an o for object, a c for class, an sng for single, a dt
for date, and so on. Similarly, we also want to use suffixes that have some sort of
embedded meaning reflecting their use. Examples include LastName, FirstName,
HomePhoneNumber, Balance, and so on. By combining these prefixes and suffixes,
we can derive useful variable names. For example, sLastName tells us that that the
variable is a string used to store a value representing a last name.
Function naming might not seem like something with which we should concern
ourselves. Again, we would argue that standardization is vital to making it easier for
developers to be able to grasp what an area of code is trying to accomplish with
minimal effort. It is important to understand that most functions and subroutines do
something. More precisely, some type of action is performed. That said, each
function and subroutine should contain a verb fragment in its name, such as Delete,
Create, Make, Run, Do, Get, and so on. Likewise, there should be a receiver of
the action, such as Report, Query, and so on. If there is a series of functions or
subroutines that provide similar functionality, their names should provide some
indication of the difference. For example, rather than having two names like
SetStateOne and SetStateTwo, we would prefer to name them
SetStateFromVariant and SetStateFromXML.
Many developers over the years have chosen to abbreviate or shorten functional
names to the point where they are cryptic. A quick glance at the functions defined
within the Windows Application Programming Interface (API) will provide you with
some great examples. The reasoning behind this is that as names become more
descriptive, their length increases, making it more time-consuming to fully type
them out in the editor. This is especially true in a procedural-based language. This
same problem does not exist in the Visual Basic editor for long method and property
names because the IntelliSense feature will help complete the code with minimal
keystrokes.
Files
As you add files to your project, Visual Basic attempts to name each one for you,
depending upon its intended use. Classes would be named Class1.cls,
Class2.cls, Class3.cls, and so on if you allowed Visual Basic to handle it. Forms
and basic modules will follow an identical pattern. The framework presented in Part
II will be following the approach shown in Table 6.1.
Table 6.1. File/Source Naming Conventions
Commenting Conventions
Any general-purpose programming course will stress the need for comments.
Although comments are vital to good programming, these courses tend to go
overboard. Most courses insist that you place a nice block of comments at the
beginning of each function or subroutine to explain the inputs and outputs. However,
if proper naming conventions were followed, the need for many of the comments is
diminished. In one sense, the code should document itself as much as possible
through these conventions. It is painful to follow code that has more comments than
code.
Although it would be wonderful if such a minimalist approach were sufficient for all
code, there still exists a need to ensure that code written today can still be
understood six months from now when maintenance or enhancement phases are
started. Some of the areas that need particular attention are the areas in which
business logic is being implemented. In many cases, this is a step-based process, so
it makes sense to make a comment like the following:
' Step 1 - Check that start date is less than end date
… code
' Step 2 - Get a list of transactions between start and end dates
… code
' Step 3 - etc.
Whatever the approach, make sure that it is followed consistently by all developers.
Do not make it so burdensome that your team begins skipping proper commenting
during late-hour coding sessions.
Property Lets and Gets
In the COM API, properties are implemented as special types of functions known in
the object-orientation world as mutator and accessor functions. The former name
implies a change in the state of the object—in this case, the property to which a new
value is assigned. In the latter case, the state of the object is returned, or accessed.
In Visual Basic, these special functions take the form of Property Let and Property
Get statements. For properties that are object references, the Let statement is
replaced with a Set statement. The Get statement returns the value of the property,
whereas the Let/Set statement assigns a value to the property. For example, an
OpenDate property might be implemented as in the following:
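A minimal sketch, with the private member following the member-variable convention discussed later in this section:
Private mOpenDate As Date

Public Property Get OpenDate() As Date
    OpenDate = mOpenDate
End Property

Public Property Let OpenDate(ByVal NewValue As Date)
    ' Validation logic can be added here later without breaking compatibility
    mOpenDate = NewValue
End Property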
Visual Basic does not require explicit programming of the Get and Let/Set functions
because declaring public variables in the declaration section of the class module will
have the same effect. The reason that you should formally program property Get
and Let/Set statements is so there is a place for validation logic. Whether this logic
is implemented today is irrelevant because you are protecting against the need for
future change by putting the framework in place today. The use of Get and Let/Set
statements also imparts standardization throughout the code base, an important
feature in multi-developer environments. The maintenance teams will thank you as
well because they will not have to break compatibility to add functionality under a
Get or Let/Set statement in the future. As discussed in the enumeration section,
breaking compatibility necessitates the recompilation of all the code that uses that
component, which might lead to redistribution.
The use of a private variable to store the state of a non-derived property—one that
is not calculated by its accessor function but is retrieved from a static variable—is
common among object-oriented languages. In many cases, normal Hungarian
notation requirements are relaxed by prefixing the variable with the letter m to
designate member. This approach loses visibility to the underlying data type. This is
a common naming convention used throughout Visual Basic code development, and
it is the default mechanism used in the code generated by the Visual Modeler, which
is discussed later in this chapter in the section titled "Modeling Tools." Some
developers do not like the loss of data type visibility by the convention, so an
indication of the underlying variable type can be added back in. For example, the
private variable mOpenDate for the OpenDate property can be named mdtOpenDate.
This is a matter of preference. Again, just be sure to standardize across your
development team.
As mentioned earlier, the accessor function can be implemented in a mode that does
not simply reference a private variable, but instead derives its value from other
information and functionality available to the statement. Examples include using a
case statement to select among several values or using a logic set traversed with
If…Then…Else blocks. Another example of a derived property is one that calculates
its final result, such as a property named TotalCost that is the sum of several other
properties defined on the class.
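As a sketch, assuming hypothetical LaborCost and MaterialCost properties on the same class and a Currency return type, such a derived property needs only a Property Get:

Public Property Get TotalCost() As Currency
    ' Derived on the fly from other properties; no backing variable is needed
    TotalCost = LaborCost + MaterialCost
End Property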
Registry-Based Configuration
As we develop our solutions, there inevitably are times when our applications need
some form of configuration information. A configured approach is preferred over a
"hard-coded" one as a means to ensure flexibility. This configuration information
might be the name of the MTS server used by the application, publication path
names to Web servers whose content is generated by the application, application
login names, or simply general-purpose information needed by the application.
The Win32 system has a Registry that is just the place to store this information. In
most cases, the standard Visual Basic functions GetSetting and SaveSetting can
be used to perform this Registry access. These functions place Registry keys in a
specific, Visual Basic area of the Registry. In some cases, an application might be
integrating with other applications and will need access to the full Registry.
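For example, a sketch of saving and reading the MTS server name might look like the following (the application, section, key, and server names are hypothetical):

' Store the setting once, typically during installation or configuration
SaveSetting "EnterpriseApp", "Servers", "MTSServer", "MTSPROD01"

' Read it back at startup, supplying a default if the key is absent
Dim sMTSServer As String
sMTSServer = GetSetting("EnterpriseApp", "Servers", "MTSServer", "")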
Collection Classes
Collections are some of the most fundamental classes in the framework presented in
Part II. Everywhere there is a one-to-many relationship in the model there will be a
collection class in the code. Visual Basic already provides a Collection class, but
the framework creates its own collection, employing the Visual Basic version to do
most of the dirty work. The reason for this is that, as a developer, I might want to
add more business-specific functionality onto a collection class than is available on
the Visual Basic version. For example, I might have a CAccount class that contains
a CTransactionItems collection of CTransactionItem objects. Aside from the
standard Add, Item, Remove, and Count methods and properties available on the
Visual Basic collection, we might want to add a method called CalculateBalance.
This method will loop through the collection, adding debits and credits to the
account along the way to produce a result.
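A minimal sketch of such a collection class follows (the Item and Remove members are omitted for brevity, and the Amount property and its sign convention on CTransactionItem are assumptions):

' CTransactionItems: wraps a Visual Basic Collection and adds business behavior
Option Explicit

Private mCol As Collection

Public Sub Add(Item As CTransactionItem)
    mCol.Add Item
End Sub

Public Property Get Count() As Long
    Count = mCol.Count
End Property

Public Function CalculateBalance() As Currency
    Dim Item As CTransactionItem
    Dim Balance As Currency
    For Each Item In mCol
        ' Credits are assumed to be positive amounts and debits negative
        Balance = Balance + Item.Amount
    Next
    CalculateBalance = Balance
End Function

Private Sub Class_Initialize()
    Set mCol = New Collection
End Sub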
It is important to get into the habit of defining all collection classes in this manner,
even if you do not plan to extend the standard collection with business functionality.
Although it might not seem necessary today, a week or a month from now you might
realize that you do and it will be much more difficult to put in. It is relatively trivial
to set up a collection class in this manner, especially when the code generation tools
discussed later in the "Modeling Tools" section are used.
An example might be when an application has a basic file import process that
supports a multitude of file formats. Some customers might need one set of
importers, while others might need a completely different set. Rather than place all
importers in the same component, they can be separated out into logical groups and
implemented in several components. Adding support for new importers can require
creation of a new component or modification of an existing component. If you bind
these components to the client application in a configurable manner, then the
application does not have to be recompiled and redistributed with each release of a
new importer. Instead, a new or existing component is distributed and changes are
made to the configuration information. In essence, the application can be
configured in an a la carte fashion using this technique.
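One way to achieve this binding, sketched below, is to store the ProgID of each installed importer in configuration and create the component late bound at runtime (the Registry keys and ProgID shown are hypothetical):

Dim sProgID As String
Dim objImporter As Object

' The ProgID of the importer is read from configuration rather than hard-coded
sProgID = GetSetting("EnterpriseApp", "Importers", "CommaDelimited", "")
If Len(sProgID) > 0 Then
    Set objImporter = CreateObject(sProgID)   ' for example, "ImportLib.CCsvImporter"
    ' objImporter.ImportFile sFileName        ' importer interface assumed
End If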
Modeling Tools
If you begin to explore all the extras that come with Visual Basic Enterprise Edition,
you will find two modeling tools: One is the Class Builder Utility and the other is the
Visual Modeler. Both enable you to formally define classes and class hierarchies with
subsequent code generation. The idea is that using either of these tools reduces
much of the basic coding of class properties and methods and enforces a certain
standard coding style implicitly with what it generates.
The Class Builder Utility is the simpler tool, but there are several issues and
limitations with it. The Class Builder Utility enables you to define new classes in
terms of properties, methods, and events using a simple dialog. After the definitions
are made, the utility creates the necessary class modules and generates the
skeleton code to support the properties and methods just defined. To access this
utility, you must first add it using the Add-In Manager in Visual Basic. Figure 6.2
shows the Class Builder Utility being used to edit properties on a class, while Figure
6.3 shows it being used to edit methods.
Figure 6.2. The Class Builder Utility—Property Editor.
The second issue is that the Class Builder Utility does not enable you to override the
Add method on the collection classes that it generates, using the long calling
convention that we spoke of earlier. This can lead to broken compatibility issues
when making changes to the underlying class that we are collecting.
The third issue is that the Class Builder Utility does not enable you to make a
collection containing another collection, a design requirement that can occasionally
surface within the application.
The fourth issue is that the Class Builder Utility does not generate any code with the
Option Explicit statement, so you will have to go back and add this information
yourself.
The fifth issue is that the Class Builder Utility does not support the definition or
implementation of interfaces within your design. As discussed earlier, we should be
taking advantage of the features of object-orientation to make our application more
robust and skewed toward the expectations of enterprise-level users.
Overall, the Class Builder Utility is inferior to the Visual Modeler that Microsoft has
also bundled with Visual Basic. It is perfectly legitimate to ask why Microsoft has
chosen to bundle two similar utilities. The answer is that the Visual Modeler only
comes with the Enterprise Edition of Visual Basic, because it is really the product of
another company (Rational Software) to which Microsoft must pay royalties. The
Class Builder Utility, on the other hand, ships with lesser editions of Visual Basic as
a simple productivity utility in those editions.
Visual Modeler
The Visual Modeler is a much more sophisticated and powerful tool that we should
use for any large-scale application development. The functionality of this tool
extends far beyond the simple class-building mechanism found in the Class Builder
Utility. It represents a complete modeling tool that enables you to plan your
application across a three-tiered deployment model using the standardized UML
notation. It is highly flexible in how it generates its code, allowing the user to set
many of the generation options. It also allows for reverse engineering, whereby you
can make changes in the source code and have the model easily updated. It also
exhibits none of the issues outlined in the Class Builder Utility case. To access the
Visual Modeler, you must first add the Visual Modeler Menus add-in using the Add-In
Manager in Visual Basic. Figure 6.4 shows the Visual Modeler in action, while Figure
6.5 shows it being used to edit properties on a class and Figure 6.6 shows it being
used to edit methods.
The Visual Modeler not only has the capability to generate source code from the
model information, it also has the capability to reverse-engineer the model from the
code. This latter feature is important when changes are made in the code in terms
of properties and methods that must be annotated back into the model. This is
crucial when multiple developers are working on the same component but only one
copy of the model exists. During standard code check-in processes, a single
individual can be responsible for updating the model to reflect the most recent
changes.
Another important feature is that the Visual Modeler is fully aware of COM interface
implementation, and can even generate code to support this concept, if modeled
appropriately.
Because of the rich feature set and the fact that the framework presented in Part II,
"Implementation of an Enterprise Framework," will be using interface
implementation, the Visual Modeler will be used exclusively in the course of
development activities throughout the remainder of the book.
SQL Server
Setting up an RDBMS such as SQL Server presents the development team and
database administrator (DBA) with several decision points. Although many of the
administrative tasks are not necessarily crucial to the operation of a given
framework, some database design decisions must be made to coincide with the
application architecture being implemented.
Logins
The configuration of SQL Server offers many options related to setting up user
logins and mapping security rights to users. SQL Server provides both standard and
integrated security models. In the former model, user logins are created on the
server as in most other RDBMSs. In the latter model, users are implicitly logged in
using their standard Windows NT login. These NT logins must then be mapped to
SQL Server user groups, which then define the various levels of access to the
underlying entities on the server. Although this might be acceptable for a small user
base, this process of mapping NT to SQL Server users can become administratively
burdensome for a large user base. In the framework in Part II, a decision has been
made to provide a common application login to the server and to administer user
rights programmatically. Although this adds a bit more development complexity
throughout the application, it offers more flexibility and moves security off the data
tier and into a service tier. It is important to note that the database is still protected
from malicious individuals through this common login, as long as login names and
passwords are safely hidden.
Views
In the framework presented in Part II, views will be defined that join the underlying
tables in the manners needed by the application data objects. Although this join
logic can be provided as part of the ad hoc SQL that is being issued to the database
by the application, a performance hit is associated with this technique. When views
are created in SQL Server, the SQL is parsed into an efficient format known as a
normalized query tree. This information is stored in the database in a system table
known as sysprocedures. Upon the first access of the view after SQL Server has
started, this query tree is placed into an in-memory procedure cache for quicker
performance. Using this tree, SQL Server must only generate a query plan based on
the current index statistics to access the information. In the ad hoc approach, SQL
Server must first compile the SQL into the normalized query tree before generating
the query plan. After SQL Server has satisfied the ad hoc request, it discards the
query tree because it has no basis for knowing which queries might be used again in
the near future. Management of such a cache can degrade performance more than
improve it in highly loaded situations. Because these ad hoc query trees cannot be
cached, there is a high likelihood of degraded performance over the view approach.
Indexes will also be added to the fields that are designated to be part of the name
uniqueness pattern. An example of such a pattern may be when an application
needs to guarantee that there are not two rows with the same values in the
FirstName, LastName, MiddleInitial, and SocialSecurityNumber fields.
Although a unique index can be implemented to force the RDBMS to generate name
uniqueness violations, the resulting error messages returned from the server will
not be sufficient to inform the user of the problem. In this case, the application will
receive a "Unique index xyz had been violated" message from the server, which is
non-informative to the user and will most likely generate a hotline call. Instead, a
better choice is not to make this a unique index but instead handle the name
uniqueness pattern in the INSERT and UPDATE triggers where an explicit and much
more descriptive error message can be generated. Here, an error can be raised that
reads "The First Name, Last Name, Middle Initial, and Social Security Number must
be unique," which tells the user exactly what the issue is without the need for a
hotline call. This is one of the deviations from an academically pure n-tier model, in
that this represents a portion of the business logic that resides on the RDBMS. It is
important to note that not all tables will need this name uniqueness pattern;
therefore, this type of index will not need implementation on all tables.
Stored Procedures
Triggers
Binary Fields
It is important to note that the framework presented in this book does not support
the use of binary large object (BLOB) or text fields. SQL Server includes these data
types as a means to store large amounts of binary or textual data. Because most of
the aggregate and query functionality becomes limited on these data types, there is
little impetus for having them in an RDBMS to begin with. For these types of fields,
in most cases, it is much more efficient to place them on a file server and to simply
store in the database a path to their location. This is the recommended approach
followed by the framework presented in Part II.
IIS
IIS has been chosen as the framework Web server for the reasons outlined in
Chapter 1, "An Introduction to the Enterprise." Visual InterDev has been chosen as
our tool for editing Active Server Pages (ASP). With the ASP application model, we
have several options as to how we might structure our application, which we will
discuss here.
Global Configurations
For the same reasons as those outlined in the previous Registry-based configuration
discussion, application variables within the global.asa file will be used to control
such configuration settings on the IIS machine. Some sample settings might be MTS
server names, administrator mailto: addresses, and so on.
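A sketch of what this might look like in global.asa follows (the variable names and values are hypothetical):

<SCRIPT LANGUAGE="VBScript" RUNAT="Server">
Sub Application_OnStart
    ' Values set here are available to every ASP page via the Application object
    Application("MTSServerName") = "MTSPROD01"
    Application("AdminMailTo") = "mailto:[email protected]"
End Sub
</SCRIPT>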
Stylesheets
Although not an IIS-specific feature, stylesheets are used extensively to control the
look and feel of the Web site portion of the framework discussed in Part II. This
allows for easy modifications to the formatting aspects of the application over time,
which can include font formats as well as colors. In cases where an MTS object is
generating a complex HTML stream directly, most of the formatting tasks can be
driven by the stylesheet. This enables minor format changes to be made without
having to recompile the object.
Include Files
If you dig through the IIS documentation, you might find it difficult to learn anything
about the notion of server-side include files. The framework in Part II will be using
include files to help modularize the Web site portion of the application. For example,
the script code to check the user's login status is in one include file. The script code
to generate the header and footer parts of each page is also implemented as include
files. If the header or footer needs changing, it can be made in just those places
versus the potential hundreds of pages that would otherwise be affected.
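As a sketch, pulling the shared header into a page takes a single directive (the file name here is hypothetical):

<!-- #include virtual="/includes/header.asp" -->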
The framework discussed in Part II will have its own IIS-specific service-layer
component that will be used across multiple ASP pages. One set of functionality will
be to provide the user login and verification services that must be handled. Several
utility functions will also be implemented to extract, from object state, the
information needed to generate the ASP page.
ASP will be used as a simple scripting tool to glue MTS components together in the
form of a cohesive application. In the framework, IIS is used as a surrogate for the
user interface layer in the form of the HTML pages sent back to the client browser.
Business-layer activities will not be performed on the IIS server, but instead will be
relegated to the business-layer components in MTS. Stated another way, no direct
business-layer logic will be embedded with ASP script. Instead, ASP will call the
appropriate functionality found within a business-layer object running within MTS.
This notion is difficult to grasp and is one of our major divergences from a traditional
viewpoint. Although ASP can directly access databases through ADO, it does so in
a scripting context that is inefficient. It is important to remember that everything is
a Variant data type in this environment, that the ASP script must be processed with
every access, and that it runs in an interpreted, rather than compiled, form. MTS
offers not only resource pooling, but also the capability to run components in a
compiled binary format. Even if the functionality to be delivered is only to the
intranet portion of the application, it is more prudent to place it in a business-layer
component under MTS. Resorting to MTS is a minor issue because the infrastructure
to do so is already in place since other parts of the application are already using it.
Indeed, Microsoft must have recognized these issues, making the integration
between IIS and MTS highly efficient when the two are running on the same physical
server.
Microsoft Transaction Server
As we have mentioned many times over, MTS forms the core of the application
framework discussed in Part II. Although there are many ways in which to configure
MTS and install components, some practices enable efficient development, debug,
and deployment activities.
Directories
In MTS, you will need a place to put the ActiveX DLL files that will be loaded as
DCOM processes. You might also have a series of ActiveX DLL files to support these
DCOM libraries, but are themselves in-process COM servers. When moving
component packages, you will need a location to which you can export the
necessary files for both the clients and servers.
The INPROC directory is where service layer components reside on the server. These
are the components required by the MTS components, but they are not MTS
components themselves. You will need a mechanism to register these components
on the server using a program, such as REGSVR32.EXE or some other remote
registration utility. At some point, when your application reaches a production
phase, you can build an installer to install and register these components more
efficiently.
The DCOM directory is where the MTS objects reside on the server. You should copy
your ActiveX DLL files to this location, and then import them into a package on the
MTS server. This process will be discussed in more detail in Chapter 9.
The EXPORTS directory is where you export the packages so that you can move them
to other MTS servers. This process will also generate the client-side installers
needed by the application. Again, this topic will be discussed in more detail
in Chapter 9.
Debugging
For those issues that are difficult to find in debug mode on a development machine,
a developer can take advantage of the NT event log to write out debug or exception
information. The ERL variable becomes very important when debugging MTS
components in this mode. This little-known variable tracks the last line number
encountered before an exception occurred. By writing this information out to the
event log along with the error information, the location of errors can be more easily
pinpointed in the source. An important thing to note is that the Visual Basic
functionality used to write to the event log works only when the component is
running in compiled mode, so do not expect to see events being logged while you
are stepping through the code.
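For example, an error handler in a compiled MTS component might record the failing routine and line number like this (the procedure name is hypothetical, and Erl returns a nonzero value only if the source has line numbers):

Public Sub Post()
    On Error GoTo ErrorTrap
    ' ... component logic ...
    Exit Sub
ErrorTrap:
    ' Written to the NT Application event log only when running compiled
    App.LogEvent "CInvoice:Post - Error " & Err.Number & " (" & _
        Err.Description & ") at line " & Erl, vbLogEventTypeError
End Sub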
One important thing to remember about the event log is that when it fills up, MTS
stops for all components. With this in mind, the event log should not be used to write
out volumes of data such as the value of a variable within a loop that repeats 100
times. The event viewer application is available under the Administrative Tools
section of the Start menu. Be sure to switch the log view mode from System to
Application when looking for information logged from the application.
Figure 6.7 shows the Event Viewer and an event written to the event log from within
Visual Basic.
Figure 6.7. The Event Viewer and a VB logged event.
Design Goals
As we work our way through the framework beginning with the next chapter, we
must have some basic design goals to drive our efforts. Our overarching goal is to
follow an n-tier, distributed approach. Figure 6.8 shows an overview of where Part
II will head with this architecture.
Figure 6.8. Our guidepost of where we are headed.
User Interface
We want to offer our users a simple Web browser interface where it is appropriate.
Many of our users will need only simple data retrieval services, so this allows us to
project our application to the widest possible audience. Still, we must also preserve
our ability to provide a rich user interface for the more complex, entry-intensive
tasks. These users will be fewer in number, but they will be responsible for the vast
majority of the information going into the system. We do not want to penalize them
by unnecessarily forcing them to use a Web browser for input purposes. The issue
with a Web browser interface as a data entry mechanism is that we want to
provide user input validation as soon as possible, as well as a high level of
responsiveness from our application. These are two things we cannot easily achieve
using a browser and client-side scripting code. If we must use the browser as the
user-interface delivery vehicle, then we want the ability to use ActiveX controls as
needed. If we are smart in our design, we should be able to use the same ActiveX
controls in both the Visual Basic client and Web browser.
For the rich client, we want to preserve the user interface metaphors that users
have already become accustomed to from using Windows (95, 98, NT4), such as the
Explorer, Finder, tabbed dialogs, and so on.
Business Logic
We want to keep our business logic in one place so that it is easier to maintain over
time. We want the same business objects supporting our Visual Basic client as our
Web browser. We do not want client-side business logic muddled up in our Web
pages. We do not want business logic muddled up in our ASP pages.
Database Server
We want to preserve the ability to switch out RDBMS vendors at any point in time;
therefore, we must minimize the use of any one server-vendor's proprietary
functionality.
Summary
This chapter has provided an overview of the design goals and development
fundamentals that will be followed from this point forward. It has done so with a
very broad brush, first covering the development technologies (Visual Basic, SQL
Server, IIS, and MTS). For each of these technologies, you learned a series of best
practices and common pitfalls so that it will be clearer going forward why a
particular design or implementation decision is being made. This was followed by a
discussion of specific design goals for the application as a whole, broken down into
the User, Business, and Data layers of the system. A discussion of modeling tools,
specifically comparing the Class Builder Utility to the Visual Modeler, was also
provided.
Next comes the long-awaited implementation of the framework that we have
spent so much time building up to. Chapter 7, "The ClassManager Library,"
introduces the concept of metadata-driven class definitions and provides the initial
building block for the application framework.
Part II: Implementation of an Enterprise
Framework
Chapter 7. The ClassManager Library
With the completion of Part I and its overview material, we can now turn our
attention to the presentation and development of the framework for which you
bought this book. This presentation starts with one of the core components of the
business layer—the ClassManager Library. This ActiveX DLL library is primarily
responsible for managing the metadata necessary to map class definitions to
database tables.
Recall from the section titled "Mapping Tables and Objects" in Chapter 4, "The
Relational Database Management System," that there is a need in an object-based
application to persist state information to the database. A technique was discussed
that mapped classes to tables and properties to the columns in those tables. The
ClassManager Library presented in this chapter provides the necessary objects to
implement this mapping and class definition process.
In addition to defining the mapping between objects and tables, the ClassManager
library enables developers to define arbitrary attributes at the property level. These
attributes can be used to track any form of additional metadata needed by the
application, such as validation rule parameters, XML tag names and so on.
Examples of both types of additional metadata will be shown; the XML tag name
information is particularly important for topics discussed in Chapter 13,
"Interoperability."
Design Theory
The underlying design goal of the ClassManager library is to provide the definition
mechanism necessary to drive both the CRUD (Create, Retrieve, Update, and Delete)
capabilities and the simple property-level validation required by the business and
data layers. The overarching design goal is to provide a generic solution that can
easily be modified through metadata changes at the business layer and schema
changes on the RDBMS when support for new properties is needed. To do this with
a minimal level of effort, we will place this library on the application tier running on
MTS. This particular library is not itself an MTS object, but provides a service to the
business objects running on MTS.
The first requirement is to create one class to support the definition of a database
column, and another to support the definition of an object property. For the former,
we will create a class called CColumnDef, while for the latter, we will create one
called CPropertyDef. To augment the CPropertyDef class, we will create a
CAttribute class to allow us to add other important metadata to our property
definitions. The second requirement is to provide a mechanism to link a column to a
property. After these base classes have been established, a class known as
CClassDef is defined to pull everything together and provide the core functionality
of the library. As discussed in Chapter 4, "The Relational Database Management
System," we perform a one-to-one mapping of a class to a database table. In the
case of class inheritance, all subclasses are mapped to the same table and use a
ClassType field within the definition to designate the specific implementation.
The CColumnDef class is simple, containing only properties. See Figure 7.1 for the
Unified Modeling Language (UML) representation.
Figure 7.1. The CColumnDef class in the UML graphical model.
Properties
The Name property is used to provide the name of the column within the RDBMS
system. The CanRead property indicates whether the column can be read from the
database, whereas the CanWrite property determines whether the column can be
written to. The CanRead property is used in conjunction with the ReadLocation
property on the CClassDef to generate the SQL column list for data retrieval
purposes. Similarly, the CanWrite property is used in conjunction with the
WriteLocation property on CClassDef to generate the SQL column list for data
updates. The CClassDef class is discussed in more detail in the "CClassDef" section
later in this chapter.
We must explicitly provide both a CanRead and a CanWrite indicator for a given
column, rather than a single indicator, because there are times when we might
want to read without writing, or vice versa. If we are storing a foreign key reference
to another table, we must be able to read columns from the referenced tables within
the context of a view, but we will not want to write those same columns back out.
Only the column with the foreign key reference can be written to in this case.
We also define a ColumnType property to help us during the SQL generation process
in our data layer. Sometimes, the system cannot explicitly determine an underlying
data type in order for the appropriate SQL grammar to be generated to support a
given database request. For example, a property might be defined as a string type,
but the underlying column in the database, for whatever reason, is an integer. In
this case, when building an SQL WHERE clause using this property, a VarType
performed on the property would infer a string, causing the SQL generator logic to
place quotes around it in the SQL statement. The RDBMS would generate an error
because the column is an integer. Thus, for robustness, we provide a mechanism to
explicitly define a particular column type.
Building this CColumnDef class using the Visual Modeler is rather straightforward.
Start the Visual Modeler from the Start menu of Windows (95/98/NT) under the
Programs, Visual Studio 6.0 Enterprise Tools, Microsoft Visual Modeler submenus.
When Visual Modeler starts, expand the Logical View node, followed by the Business
Services node in the tree view. Right-click the Business Services node followed by
New, followed by Class, as shown in Figure 7.2.
Figure 7.2. Defining a new class in the Visual
Modeler.
When you tell Visual Modeler to create a new class, a new child node is added to the
Business Services node, a UML graphical symbol for a class is placed into the
right-hand view under the Business Services column, and the newly added node is
placed into edit mode so the class name can be entered. Figure 7.3 shows the Visual
Modeler after the new class has been named CColumnDef.
Figure 7.3. The CColumnDef class created within Visual
Modeler.
Public mode indicates that the property will be visible both internal and external to
the component; protected mode means that it will be visible to all classes within the
component but not visible external to the component; private mode means it will be
visible within the class itself but not visible elsewhere; and implementation mode is
similar in meaning to private mode. The Visual Modeler can be used to generate
C++ code, and the Rational Rose product on which it is based can generate Java
as well; both are true object-oriented languages with multilevel inheritance. In
these cases, the protected and private modes take on expanded meanings because
visibility is now concerned with the subclassing. This explains why the
implementation and private modes are similar for Visual Basic.
Turning back to the Visual Modeler, the NewProperty property is renamed Name.
Double-clicking the new Name property node launches the Property Specifications
dialog. The Type field is set to String, and the Export Control selection is set to
Public. There is also a Documentation field in which you can enter text that
describes the property. If this is done, the information will be placed above the
property implementation in the generated code as commented text. At this point,
this information does not make it into the COM property help field that is displayed
by the object browser. The end result of these edits appears in Figure 7.4.
As you continue to add the properties to complete the CColumnDef class, you might
begin thinking that this is too tedious a process and that it just might be easier to
manually type the code. If this is the case, there is a faster way to enter these
properties than what was just described. Double-click the CColumnDef node to
launch the Class Specifications dialog box. Click the Properties tab to show a list of
all the currently defined properties. Right-click this list to bring up a menu with an
Insert option. Select this option to insert a new property into the list in an edit mode.
After you enter the name, if you slowly double-click the icon next to the property
name, a graphical list box of all the visibility modes appears, as shown in Figure 7.5.
If you do the same in the Type column, a list of available data types appears as well,
as shown in Figure 7.6.
Figure 7.5. Changing the visibility of a property in the Class Specifications dialog.
To add the ColumnType property, follow the same procedure as for the other
properties. Because the Visual Modeler has no way to define an enumeration for
generation (they can only be reverse engineered from an ActiveX DLL), you will
have to manually enter the name of the enumeration in the Type field. After the
code is generated, the enumeration must be manually entered into the source.
To generate code for this class, several other pieces of information must be defined.
The first is a component to contain this class. To do this, right-click the Component
View folder, select New, and then select Component. Enter ClassManager for the
component name. Double-click the ClassManager node to launch the Component
Specification dialog. From this dialog, select ActiveX for the Stereotype field. This
tells the Visual Modeler to generate an ActiveX DLL for the component. The
Language field should be set to Visual Basic. The last item before generation is to
assign the class to this newly created component. The easiest way to accomplish
this is to drag the CColumnDef node and drop it onto the ClassManager node. From
this point, code generation can occur.
Right-click the CColumnDef node and select GenerateCode to launch the Code
Generation Wizard. Step through this wizard until the Preview Classes step appears,
as indicated in the title bar of the dialog. Select the CColumnDef class in the list and
click the Preview button. The wizard switches into Class Options mode, as shown in
Figure 7.7. From this wizard, set the Instancing Mode to MultiUse. In the Collection
Class field, enter the name CColumnDefs. Anything other than the word Collection
in this field will tell the Visual Modeler to generate a collection class for this class.
Click the Next button in the wizard to go to the Property Options step. Select the
CanRead property in the list, and then check the Generate Variable, Property Get,
and Property Let options. This tells the Visual Modeler to generate a private variable
named mCanRead, followed by the Property Let and Property Get statements.
This activity is summarized in the text box at the bottom of the screen. Repeat this
for every property in the list. For the ColumnType property that is defined as
EnumColumnType, the Visual Modeler only allows for the property Set and Get
options. After generation, this Set will have to be changed to a Let in the source
code. The results of this step are shown in Figure 7.8.
Click the Next button in the wizard to go to the Role Options. Skip over this for now.
Click the Next button again to go to the Methods Options step. Because no methods
are defined on this class, the list is empty. Click the Finish button to return to the
Preview Classes step of the wizard. If multiple classes were being generated, you
would preview each class in the manner just described. Click the Next button to get
to the General Options step. Deselect the Include Debug Code and Include Err.Raise
in All Generated Methods options. Click the Finish button, and the wizard first
prompts for a model name and then launches Visual Basic. The result of this
generation effort is shown in Figure 7.9.
Figure 7.9. The code generated in Visual Basic by the
Visual Modeler.
Notice that a Form1 is generated by the Visual Modeler. This is actually a by-product
of the automation steps in Visual Basic. When you return to the Visual Modeler, the
wizard is on the Delete Classes step with this Form1 selected in the Keep These
Classes list. Click it to move it to the Delete These Classes list. Click OK to delete it
from the project and display a summary report of the Visual Modeler's activities.
To add the enumeration for the ColumnType property, go to the Visual Basic class
module for the CColumnDef class and manually enter the enumeration as shown in
the following code fragment (only the ctString and ctNumber members used later in this chapter are shown):
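' Additional members can be added as other column types are needed
Public Enum EnumColumnType
    ctString
    ctNumber
End Enum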
Listing 7.1 provides the code to implement the CColumnDef class. The comments
generated by the Visual Modeler have been omitted for the sake of brevity.
Example 7.1. The CColumnDef Class
Option Base 0
Option Explicit
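The remainder of the class follows the pattern the Visual Modeler generates for each property: a private member variable plus matching Property Get and Property Let procedures. As a sketch, for the Name property that pattern looks like this:

Private mName As String

Public Property Get Name() As String
    Name = mName
End Property

Public Property Let Name(ByVal NewValue As String)
    mName = NewValue
End Property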
Listing 7.2 shows the code generated by the Visual Modeler for the CColumnDefs
class, again with comments omitted.
We should point out several things about how the Visual Modeler generates
collection classes. The first is that it generates a NewEnum property that has a bizarre
bit of code in the form of the following statement:
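Set NewEnum = mCol.[_NewEnum]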
This syntax enables users of this collection class to use a special COM iteration
construct to iterate through the elements in a collection. For example, consider the
following code fragment:
For i = 1 To ColumnDefs.Count
Set ColumnDef = ColumnDefs.Item(i)
' …
Next i
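With the NewEnum property in place, the same loop can also be written with the For Each construct:

Dim ColumnDef As CColumnDef
For Each ColumnDef In ColumnDefs
    ' …
Next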
The second item to notice is that the Visual Modeler has declared a private variable
mCol of type Collection to use as the underlying storage mechanism. In this case,
however, it does not instantiate the variable until the Class_Initialize event, and
it does not destroy it until the Class_Terminate event. This generation mode can
be overridden in the Visual Modeler based on the preference of the development
team. One school of thought says that the code size will be smaller using this
technique because Visual Basic will not allocate space for the mCol variable at
compile time, but rather at runtime. Conversely, the object will take longer to
instantiate because it must allocate memory for this variable at runtime during
startup. The preference of this book is to use the default mode of Visual Modeler.
Before we can define our CPropertyDef class, we must first define a simple
CAttributeItem class and its associated CAttributeItems collection class.
CAttributeItem has a simple Name and Value property. These attributes will be
used to allow extra information needed by the application to be added to the
property definition information. This approach provides for a significant amount of
flexibility over time because a developer can just add another property to the
CAttributeItems collection without forcing any changes to the interface of a class.
The Visual Modeler can once again be used to generate the CAttributeItem class
and its associated CAttributeItems collection class. Listing 7.3 shows the code for
the CAttributeItem class.
Although there is nothing overly exciting in Listing 7.3, one area in particular deserves
closer investigation. Looking at the code generated by the Visual Modeler for the
Property Get statement for the Value property shows that it is implemented
slightly differently than what has been seen in the past. Because we have declared
the property as a variant type, it can contain an object reference and therefore
needs the Set construct in these cases. The IsObject function enables Visual Basic
to check whether the variant contains an object reference so that the code can react
accordingly.
Again, we now need to use the Visual Modeler to generate a collection class for
CAttributeItem. The complete code listing will not be shown because it differs only
slightly from the code generated in the CColumnDefs case. However, several
changes have been made to the Add method, as shown in the following code
fragment:
ThisAttribute.Name = Name
ThisAttribute.Value = Value
The CPropertyDef class, like its CColumnDef cousin, is composed only of simple
properties. Figure 7.10 shows the UML representation for this class.
Figure 7.10. The CPropertyDef class in the UML graphical
model.
Properties
Here, the Name property is used to identify the name that will be used to refer to this
property throughout the business and user layers. Although the Name property here
can exactly match the Name property on its mapped CColumnDef object, it does not
have to do so. The only other property is AttributeItems, which as discussed
previously, is used as a freeform mechanism to store additional information related
to a property. We can use this information throughout the business layer, and we
can pass it to the user layer if necessary. The flexibility exists to add whatever
information at a property level is needed by the application. Some examples of
standard items that could drive simple property validation include
PropertyType, ListId, MinimumValue, MaximumValue, DefaultValue, and
Decimals. In this framework, a standard XMLAttributeName property for XML
generation is defined, a topic covered in Chapter 13. Once again, the Visual Modeler
is used to define both a CPropertyDef class and its associated CPropertyDefs
collection class. Listing 7.4 provides the code to implement the CPropertyDef class.
Looking at the code, you will see that we have done a few things differently than
before. First, only a Property Get statement has been created for the Attributes
property. The corresponding Property Set statement has been omitted because
this subordinate object is being managed directly by the CPropertyDef class, so
there is no reason for any external code to set its value to something other than the
internal, private mAttributes variable. Doing so would potentially wreak havoc on
the application; therefore, access to it is protected under the principle of
encapsulation and data hiding that was talked about in Chapter 3, "Objects,
Components, and COM." In addition, you will note that the contained objects are
instantiated in the Class_Initialize event as was done for collections earlier in
the chapter. The same reasoning applies here.
Because the CPropertyDefs collection class is not changed from the code
generated by the Visual Modeler, the listing is omitted here.
The CClassDef Class
Now that the CColumnDef and CPropertyDef classes and their supporting collection
classes have been created, it is time to generate the CClassDef class, which is
responsible for pulling everything together to drive the metadata model. Figure
7.11 shows the UML representation for this class.
Figure 7.11. The CClassDef class in the UML graphical model.
Properties
To provide the class-to-RDBMS mapping, both the name of the table that we will be
using to save object state information and the name of the view that will be used to
retrieve object state information must be known to the application. The mapping
technique was discussed in detail in Chapter 4. To meet these needs, the properties
WriteLocation and ReadLocation are defined.
After the names of the table and views have been defined, the columns that act as
the primary keys on the table must be defined. Recall from Chapter 4 that these
keys also serve as the Object Identifier (OID) values for an object instance. This
framework can support two-column keys, or OIDs; so, the properties IdColumnName
and SubIdColumnName are defined. The framework assumes that an empty value for
SubIdColumnName indicates that only a single key is used. The response of the
framework when IdColumnName is empty is not defined.
If the particular class that is being defined by an instance of the CClassDef class is
the child in a parent-child–style relationship, the columns that represent the foreign
keys to the table containing the parent object instances must be defined as well. The
properties ParentIdColumnName and ParentSubIdColumnName are defined for just
this purpose. The data layer, discussed in Chapter 8, "The DataManager Library,"
will use this information during its SQL generation process for retrieval statements.
Similarly, for a parent-child–style relationship, there can be many child objects as in
the case of a one-to-many or master-detail relationship. In these cases, the
developer might need to order the items in a particular way, so an
OrderByColumnName property is defined. If more than one column is required, a
comma-separated list of column names on which to sort can be provided. These
columns do not necessarily have to appear in the ColumnDefs property that we will
discuss shortly.
Finally, the CClassDef class contains a property of type PropertyDefs and another
of type ColumnDefs. Because these two sets of definitions are built
programmatically at runtime, these two properties store the column and property
definition information for use by the business and data layers. In addition to these
two properties, two other properties are implemented to help map between
ColumnDef objects and PropertyDef objects. They are called PropertyToColumn
and ColumnToProperty, both of which are implemented as simple Visual Basic
Collection classes. The keying mechanism of the collection will be used to help
provide this mapping.
Once again, the Visual Modeler can be used to implement both the CClassDef class
and CClassDefs collection class. Be sure to use the same model that has been used
throughout this chapter so that there is visibility to the PropertyDefs and
ColumnDefs collection classes.
Listing 7.5 provides the code to implement the properties of the CClassDef class.
Methods
Four methods are defined on the CClassDef class to implement creation of the
metadata model at runtime. The first of these is AppendMapping, a method that is
responsible for creating ColumnDef and PropertyDef instances, adding them to the
necessary collections, and providing the mapping between the two. Listing 7.6
provides the code listing for this method.
Example 7.6. The AppendMapping Method of the CClassDef Class
Exit Sub
ErrorTrap:
'1. Details to EventLog
Call WriteNTLogEvent("CClassDef:AppendMapping", Err.Number, _
    Err.Description, Err.Source)
'2. Generic to client - passed back on error stack
Err.Raise Err.Number, "CClassDef:AppendMapping", _
    Err.Description & " [" & Erl & "]"
End Sub
In an effort to minimize the mapping creation process in the business layer, only the
minimal information needed to create a column and property, and subsequently
generate a mapping, is passed into the method. This information is all that is needed
to drive the basic architecture. If additional information is needed by your
implementation of this framework, then the AppendMapping method can be
modified, although the recommended approach is to utilize the Attributes
property on the PropertyDef class. The reasoning behind this is so that flexibility
going forward is preserved by not having to modify the AppendMapping method.
The AppendMapping method is self-explanatory up until line 145, where the actual
mappings are created. It is here that the keying feature of a Collection is used to
provide the bidirectional mappings. For the private mColumnToProperty collection,
the PropertyDef object is added, keyed on the column name. For the private
mPropertyToColumn collection, the opposite is performed and the ColumnDef object
is added, keyed on the property name. Rather than provide direct access to these
underlying collections, two methods to expose this mapping facility in a cleaner
fashion are implemented. These methods are PropertyToColumnDef and
ColumnToPropertyDef. The code for these two methods is provided in Listing 7.7.
Example 7.7. The PropertyToColumnDef and ColumnToPropertyDef Methods of the CClassDef Class
Finally, the MakeDTDSnippet method that will be used in the XML DTD generation
facility of the framework is implemented. Although a detailed discussion of this
functionality will be deferred until Chapter 13, I'll make a few comments. The code
is provided in Listing 7.8.
Example 7.8. The MakeDTDSnippet Method of the
CClassDef Class
Now that we have completely defined our class manager component, it is time to
put it to work. Figure 7.12 shows the completed class hierarchy for the
ClassManager library.
Figure 7.12. The ClassManager library in the UML
graphical model.
Suppose that we want to define the persistence information for the example using
bonds discussed in Chapter 3. Table 7.1 provides the property and column
information from that example.
Recalling this CBond example from Chapter 3, a class inheritance structure has been
defined as shown in Figure 7.13.
Figure 7.13. The bond inheritance structure.
To implement the CBond object structure, a new ActiveX DLL called BondLibrary is
created in Visual Basic. Class modules for IBond, CDiscountBond, CConsolBond,
and CCouponBond are added, and a reference to the ClassManager DLL is set.
Example 7.9. The IBond Initialization Code
Option Explicit
' declarations section
Private mClassDef As CClassDef
' code section
Private Sub Class_Initialize()
Set mClassDef = New CClassDef
With mClassDef
.ReadLocation = "dbo.fdb.Table_Bond"
.WriteLocation = "dbo.fdb.Table_Bond"
.IdColumnName = "Id"
.KeyColumnName = "Name"
.TypeColumnName = "Bond_Type"
.AppendMapping "Id", "Id", True, False, ctNumber, "OID"
.AppendMapping "Name", "Name", True, True, ctString, "Name"
.AppendMapping "FaceValue", "Face_Value", True, False, ctNumber,
"FaceValue"
.AppendMapping "CouponRate", "Coupon_Rate", True, False,
ctNumber, "CouponRate"
.AppendMapping "BondTerm", "Bond_Term", True, False, ctNumber,
"BondTerm"
.AppendMapping "BondType", "Bond_Type", True, False, ctNumber,
"BondType"
End With
End Sub
Example 7.10. The CDiscountBond Initialization Code Relative to CClassDef
Option Explicit
' declarations section
Private mIBondObject As IBond
' code section
Private Sub Class_Initialize()
Set mIBondObject = New IBond
mIBondObject.ClassDef.TypeId = 1
End Sub
Example 7.11. The CCouponBond Initialization Code
Relative to CClassDef
Option Explicit
' declarations section
Private mIBondObject As IBond
' code section
Private Sub Class_Initialize()
Set mIBondObject = New IBond
mIBondObject.ClassDef.TypeId = 2
End Sub
Example 7.12. The CConsolBond Initialization Code Relative to CClassDef
Option Explicit
' declarations section
Private mIBondObject As IBond
' code section
Private Sub Class_Initialize()
Set mIBondObject = New IBond
mIBondObject.ClassDef.TypeId = 3
End Sub
The previous set of code listings shows the initialization process that provides the
complete population of a ClassDef object for a given subclass. For example, looking
at Listing 7.12, you can see that when a CConsolBond object is instantiated, the first
statement in its Class_Initialize event instantiates an IBond object, which
transfers control to the IBond object initialization routine. This routine proceeds to
populate the vast majority of the ClassDef object. After returning to the
initialization routine of CConsolBond, the only property left to set is the TypeId
associated with the subclass.
Summary
This chapter has developed the first major component of the framework, the
ClassManager. This component is responsible for managing the metadata that
describes class definitions and the object-to-table mappings needed for object
persistence. In development of this component, the Visual Modeler was used
extensively to generate both the base classes and their collection class
counterparts.
In the next chapter, attention turns to defining the second core component, the
DataManager. This component will be used to interact with the database on behalf
of the application. It will use information found in the ColumnDefs collection, defined
in the CClassDef class, as one of its primary tools for generating the appropriate
SQL needed to accomplish the tasks required by the application.
Chapter 8. The DataManager Library
Now that we have defined and implemented the ClassManager components, the
capability exists to create class definitions programmatically through metadata.
This component also provides the infrastructure to define the mappings of classes to
tables and properties to columns within an RDBMS. Now, we need a mechanism to
interact with the database itself. This mechanism, aptly called DataManager, is also
an ActiveX DLL residing in the data services layer and is enlisted by the business
layer. Its design is such that it is the sole interface into the database by the
application. The business services layer is the only one, by design, that can enlist it
into action because the user services layer does not have visibility to it. Although
this library physically runs on the MTS machine, it does not run under an MTS
package. Instead, the business layer running under an MTS package calls this
library into action directly as an in-process COM component.
Design Theory
The goal in creating the DataManager component is to provide a library that can
handle all interactions with a Relational Database Management System (RDBMS) on
behalf of the application. The majority of these requests are in the form of basic
CRUD (Create, Retrieve, Update, and Delete) processing that makes up a significant
portion of any application. Create processing involves implementing the logic to
create a new row in the database, copy the object state information into it, and
generate a unique Object Identifier (OID) for the row and object. Retrieve
processing involves formulating the necessary SQL SELECT statement to retrieve
the desired information. Update processing involves implementing the logic to
retrieve a row from the database for a given OID, copying the object state
information into it, and telling the RDBMS to commit the changes back to the row.
Delete processing involves formulating the necessary SQL DELETE statement to
delete a specific row from the database based on a given OID.
For the Retrieve and Delete portions of CRUD, an SQL composer is implemented. An
SQL composer is nothing more than a generator that can take minimal information
and create a valid SQL statement from it. The information used by the composer
logic is taken directly from the metadata in a ClassDef object. Pieces of the
composer logic used by the retrieve and delete methods are also used to
assist in the create and update portions of CRUD. Abstracting this composition logic
in the DataManager component in such a manner allows the application to
automatically adapt to database changes. For example, as new column names are
added to support new properties, existing functionality in the DataManager
component is not broken. Because all database access is driven through the
metadata in a ClassDef object, the DataManager component never must be
redeveloped to support changes in the object hierarchy or database schema.
Although this approach is very flexible, the dynamic SQL generation implemented
by the composer logic does have compilation overhead that repeats with every
database transaction. As discussed in Chapter 6, "Development Fundamentals and
Design Goals of an Enterprise Application," SQL Server views are precompiled and
cached in a manner similar to stored procedures; thus, much of the overhead
associated with the compilation process does not exist on retrievals from views.
Assuming that the highest percentage of database activity on many applications is
in retrievals and those retrievals are from views, the penalty from dynamic SQL
generation might be negligible. On high-volume objects though, this might not be
acceptable. On some database servers (although not on SQL Server 6.x), the
system caches dynamic SQL statements so that it does not have to recompile. A
significant amount of such dynamic SQL can overflow the cache and degrade overall
database performance. In either case—high-volume objects or caching of
dynamically generated SQL statements—a stored-procedure approach might be
necessary.
Implementation
Component-Level Functions
First, several core functions are defined within the context of a basic code module
that is used by all classes within the component. The first function is a generic
RaiseError function (see Listing 8.1), whose purpose is to wrap outbound errors
with information to indicate that the source was within this component—an
approach that will be adopted with many of the server-side components to be
implemented in future chapters.
Example 8.1. A Core RaiseError Function Defined
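A minimal sketch of one such function, consistent with how it is called later in this chapter, follows:

Public Sub RaiseError(ByVal ErrNumber As Long, ByVal ErrSource As String, _
    ByVal ErrDescription As String)
    ' Re-raise the error so that callers see this component as the source
    Err.Raise ErrNumber, ErrSource, ErrDescription
End Sub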
The second is a function (see Listing 8.2) to write error messages to the NT event
log, called aptly WriteNTLogEvent. This is important for libraries running on a
remote server, as discussed in the "Debugging" section in Chapter 6.
sMsg = "Error " & ErrNumber & " (" & ErrDescription & "), sourced by "
& _
ErrSource & " was reported in " & ProcName
App.StartLogging "", vbLogToNT
App.LogEvent sMsg, vbLogEventTypeWarning ' will only write in compiled
mode
Debug.Print sMsg ' will only write in run-time mode
End Sub
As can be seen from the code in Listing 8.2, two messages are actually written. One
message is to the NT event log, which can occur only when the component is
running in non-debug mode. The other message is to the debug window, which can
only occur when the component is running in debug mode.
Figure 8.1. The CStringList class in the UML graphical model.
Methods
CStringList
Option Explicit
Private mCol As Collection
The Add method has been designed to accept multiple string values through a
ParamArray parameter named StringItems. The method iterates through the
individual strings in this StringItems array, adding them one at a time to the
internal collection. A calling convention to this method might look like the following:
StringList.Add("Id","Name","Address1","Address2")
This design technique allows for a dynamically sized parameter list, making it easier
to build the string list from the calling code.
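A sketch of the Add method might look like the following:

Public Sub Add(ParamArray StringItems() As Variant)
    Dim Item As Variant
    ' Append each string passed on the call to the internal collection
    For Each Item In StringItems
        mCol.Add CStr(Item)
    Next
End Sub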
The ExtractClause is implemented to help quickly turn the list of strings stored in
the internal collection into a delimited version of itself. This is needed by the
composer logic to create the select, from, and where predicates needed for the
SQL statements. Continuing with the preceding example, a call to the
ExtractClause method is simply
StringList.ExtractClause(",")
This call would produce the string "Id , Name , Address1 , Address2" as its result.
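As an illustration, the two methods might be implemented along the following lines. This is a sketch only; it assumes each string is keyed on itself in the internal collection so that duplicate adds are silently ignored, a behavior relied on later when key columns are added unconditionally.
Option Explicit
Private mCol As Collection

Public Sub Add(ParamArray StringItems() As Variant)
    Dim i As Integer
    On Error Resume Next    ' a duplicate key simply fails the Add and is ignored
    For i = LBound(StringItems) To UBound(StringItems)
        mCol.Add CStr(StringItems(i)), CStr(StringItems(i))
    Next i
End Sub

Public Function ExtractClause(Delimiter As String) As String
    Dim vItem As Variant
    Dim sClause As String
    For Each vItem In mCol
        If Len(sClause) > 0 Then sClause = sClause & " " & Delimiter & " "
        sClause = sClause & vItem
    Next
    ExtractClause = sClause
End Function

Public Property Get Count() As Long
    Count = mCol.Count
End Property

Public Sub Clear()
    Set mCol = New Collection
End Sub

Private Sub Class_Initialize()
    Set mCol = New Collection
End Sub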
The CQueryParms Class
With the capability to create lists of strings in tidy CStringList objects, attention
turns to defining the parameters necessary to form an SQL query to support CRUD
processing. To generate a retrieve or delete statement, the table name (or possibly
a view name) as well as the row specification criteria must be known. Furthermore,
for the select statement, the list of columns and, optionally, an order by list must
be known. A CQueryParms class is defined to accommodate these requirements.
Figure 8.2 shows a UML representation of the CQueryParms class.
Figure 8.2. The CQueryParms class in the UML graphical model.
Properties
The CQueryParms class has a simple TableName property, along with three other
properties that are instances of the CStringList class. These properties are
ColumnList, WhereList, and OrderList. If a list of where conditions is used, a
mechanism to tell the composer logic how to concatenate them must be
defined; therefore, a WhereOperator property is defined for this purpose.
NOTE
This framework does not support complex concatenation of where clauses in the
CRUD processing because it occurs relatively infrequently and because
implementation of such support would be extremely difficult. Anything that requires
this level of complexity is usually outside the capabilities of basic CRUD, and instead
within the realm of the business logic domain. For these types of queries, a
secondary pathway on CDataManager is provided that can accept ad hoc SQL.
Methods
Because CQueryParms is primarily a data container, its only method is Clear, which
simply calls the Clear method of its ColumnList, WhereList, and OrderList
properties.
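As a sketch, the class might be little more than the following; public members are shown here for brevity, and the default value assigned to WhereOperator is an assumption.
Option Explicit

Public TableName As String
Public WhereOperator As String
Public ColumnList As CStringList
Public WhereList As CStringList
Public OrderList As CStringList

Private Sub Class_Initialize()
    WhereOperator = "AND"            ' default concatenation operator (assumed)
    Set ColumnList = New CStringList
    Set WhereList = New CStringList
    Set OrderList = New CStringList
End Sub

Public Sub Clear()
    TableName = ""
    ColumnList.Clear
    WhereList.Clear
    OrderList.Clear
End Sub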
With these two base helper classes (CStringList and CQueryParms) defined, we
can turn our attention to the implementation of the CDataManager class itself.
Figure 8.3 shows a UML representation of CDataManager.
Figure 8.3. The CDataManager class in the UML graphical
model.
Properties
Methods
Because the underlying data repository is an RDBMS, and because ActiveX Data
Objects (ADO) is used to access it, we need to define methods that enable the class
to connect to, and disconnect from, the database. These methods are called
DoConnect and DoDisconnect, respectively, and they are shown in Listing 8.6. It is
assumed that the business layer provides some form of direction on how to connect
through a ConnectString parameter that follows ADO syntactical requirements.
Example 8.6. The DoConnect and DoDisconnect Methods on CDataManager
Public Function DoDisconnect() As Boolean
    On Error GoTo DoDisconnectErr
    Call cnn.Close
    DoDisconnect = True
    Exit Function
DoDisconnectErr:
    Call RaiseError(Err.Number, _
                    "CDataManager:DoDisconnect Method", _
                    Err.Description)
    DoDisconnect = False
End Function
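Only DoDisconnect appears above; a companion DoConnect might look roughly like the following sketch, which assumes a module-level ADODB.Connection variable named cnn (the name used by DoDisconnect).
Public Function DoConnect(ConnectString As String) As Boolean
    On Error GoTo DoConnectErr
    ' Open the connection using whatever ADO connect string the business
    ' layer hands us.
    Set cnn = New ADODB.Connection
    cnn.ConnectionString = ConnectString
    cnn.Open
    DoConnect = True
    Exit Function
DoConnectErr:
    Call RaiseError(Err.Number, _
                    "CDataManager:DoConnect Method", _
                    Err.Description)
    DoConnect = False
End Function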
After a connection has been established, the capability to interact in CRUD fashion
with the database exists using one of four methods. The first two methods, GetData
and DeleteData, implement the retrieve and delete functionality, respectively. The
second two methods, GetInsertableRS and GetUpdatableRS, are helpers to the
business layer to implement the create and update functionality, respectively. The
GetData, DeleteData, and GetUpdatableRS methods each take a CQueryParms
object as an argument to provide the necessary information for the composer logic
within the methods. The logic within GetInsertableRS needs only a table name,
so it does not require the CQueryParms object. A fifth method, ExecuteSQL, is
implemented to accept ad hoc SQL statements for execution. This SQL statement
can be the name of a stored procedure that does not have any OUT arguments
defined. If the need to support such a stored procedure exists, a new method will
have to be added to the CDataManager class.
The GetData method returns a Recordset object that can contain zero or more
records. The GetData code is shown in Listing 8.7.
Set rs = ExecuteSQL(SQL)
Exit Function
GetDataErr:
If Erl >= 125 Then
'1. Details to EventLog
Call WriteNTLogEvent("CDataManager:GetData", _
Err.Number, _
Err.Description & " <<CMD: " & SQL & ">>", _
Err.Source & " [" & Erl & "]")
'2. Generic to client
Err.Raise Err.Number, "CDataManager:GetData", _
Err.Description & " <<CMD: " & SQL & ">>" & " [" & Erl & "]"
Else
'1. Details to EventLog
Call WriteNTLogEvent("CDataManager:GetData", _
Err.Number, _
Err.Description, _
Err.Source & " [" & Erl & "]")
'2. Generic to client
Err.Raise Err.Number, "CDataManager:GetData", _
Err.Description & " [" & Erl & "]"
End If
End Function
The GetData method starts by checking to make sure that the TableName property
has been set, and then proceeds to expand the CStringList properties of the
CQueryParms object. After these expanded strings are built, checks are made to
ensure that there are FROM and WHERE clauses. If any violations of these conditions
are found, errors are raised and the method is exited. The order by list is optional,
so no checks for this property are made.
After all the necessary information has been expanded and validated, the method
proceeds to form an SQL statement from the pieces. A DISTINCT keyword is placed
in the statement to ensure that multiple identical rows are not returned, a condition
that can happen if malformed views are in use. Although this offers some protection,
it also limits the processing of Binary Large Object (BLOB) columns that cannot
support this keyword. If your application requires BLOB support, you must
implement specific functionality in addition to the framework presented.
After the SQL statement is ready, it is simply passed off to the ExecuteSQL method
that will be discussed at the end of this section. To check for the existence of records,
the If Not(rs.EOF Or rs.BOF) syntax is used. Although a RecordCount property
is available on the Recordset object, it is not always correctly populated after a call,
so the previous convention must be used for robustness.
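In sketch form, that check might look like the following; returning Nothing for an empty result is an assumption.
If Not (rs.EOF Or rs.BOF) Then
    Set GetData = rs
Else
    Set GetData = Nothing
End If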
Several other items to note relate to error handling. From the code, you can see that
an error enumeration is used with a resource file providing the error messages. The
purpose of this is to make it easier to modify the error messages without
recompiling the code, as well as reducing the overall compiled code size. This also
allows for multi-language support if so required by your application. The Visual
Basic Resource Editor add-in can be used for this purpose. Figure 8.4 shows the Edit
String Tables dialog that is used to build the resource file.
The other item to note is that the error-handling routine has been designed to
operate differently based on the line number at which the error occurred. For line
numbers greater than 125, the SQL statement has already been generated. Thus, it might be helpful to
see the potentially offending SQL statement in the error stream for debugging
purposes. Otherwise, the absence of an SQL statement in the error stream indicates
that the error occurred prior to SQL formation.
The DeleteData method works in a fashion similar to GetData, and is able to delete
one or more rows from the database. This method expects that the TableName and
WhereList properties on the CQueryParms argument object have been populated.
All other properties are ignored. This method proceeds to extract the WHERE clause
and ensure that it has information, similar to the checking performed in the GetData
method. In this case, the existence of WHERE clause information is vital, or else the
resulting SQL statement will delete all rows in the table—a lesson learned the hard
way. Again, the necessary SQL DELETE statement is generated and passed off to the
ExecuteSQL method. The DeleteData code is shown in Listing 8.8.
Example 8.8. The DeleteData Method on CDataManager
ExitFunction:
Exit Function
ErrorTrap:
If Erl >= 125 Then
'1. Details to EventLog
Call WriteNTLogEvent("CDataManager:DeleteData", _
Err.Number, _
Err.Description & " <<CMD: " & SQL & ">>", _
Err.Source & " [" & Erl & "]")
'2. Generic to client
Err.Raise Err.Number, "CDataManager:DeleteData", _
Err.Description & " <<CMD: " & SQL & ">>" & " [" & Erl & "]"
Else
'1. Details to EventLog
Call WriteNTLogEvent("CDataManager:DeleteData", _
Err.Number, Err.Description, _
Err.Source & " [" & Erl & "]")
Now that the two simpler components of CRUD have been implemented, attention
turns to the more complex Create and Update portions of the acronym. Although the
dynamic SQL generation process can be followed as in the previous two methods,
there are issues with this approach. Specifically, there are concerns with how to
handle embedded quotes in the SQL data. Rather than dealing with this issue in
INSERT and UPDATE statements, it is easier to work with Recordset objects.
CDataManagers
SQL = "select * from " & TableName & " where Id = 0"
'should populate with an empty row, but all column definitions
Set rs = ExecuteSQL(SQL)
Set GetInsertableRS = rs
Set rs = Nothing
Exit Function
ErrorTrap:
Call RaiseError(Err.Number, "CDataManager:UpdateData Method",
Err.Description)
End Function
This method forms a simple SQL SELECT statement of the form "SELECT * FROM
TableName WHERE Id=0". This has the effect of creating an empty Recordset object
that has all the columns of the underlying table. This empty Recordset object is
passed back to the business layer to receive the object state information. The
business layer calls the Update method on the Recordset object, retrieves the
auto-generated OID field generated by SQL Server, and sets the value in the object.
CDataManager
ErrorTrap:
Call RaiseError(Err.Number, _
"CDataManager:GetUpdatableRS Method", _
Err.Description)
End Function
The final data access method of the CDataManager class is the ExecuteSQL method
that is used by each of the other four CRUD components. This method is also
exposed to the outside world for use directly by the business layer if something
beyond normal CRUD processing must be accomplished. As stated several times in
this chapter already, an example might include the one-shot execution of a stored
procedure. As discussed in Chapter 13, "Interoperability," these types of needs
arise when integrating to legacy systems that do not follow the framework outlined
here. The code for the ExecuteSQL method is shown in Listing 8.11.
Example 8.11. The ExecuteSQL Method on
CDataManager
If IsMissing(CursorMode) Then
CursorMode = adOpenKeyset
End If
If IsMissing(LockMode) Then
LockMode = adLockOptimistic
End If
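The rest of the method is not shown above; a minimal sketch of how the statement might then be executed, assuming the module-level connection cnn opened by DoConnect, follows.
    Dim rs As ADODB.Recordset
    Set rs = New ADODB.Recordset
    ' Open a recordset over the ad hoc statement using the requested
    ' cursor and lock modes (or the defaults supplied above).
    Call rs.Open(SQL, cnn, CursorMode, LockMode)
    Set ExecuteSQL = rs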
With all the effort that has gone into creating the CDataManager class, you might
dread having to manually create the collection class associated with this
CDataManager class. Even though the Visual Modeler was not used to create the
CDataManager class, it can still be used to generate a collection class. To do this, the
DataManager component must be reverse engineered. Follow these steps to do so:
5. Leaving the default items selected, click the Next button to arrive at the
Assignment of Project Items step. Drag each project item onto the Data
Services logical package to make the appropriate layer assignment.
6. Click once again on the Next button to bring up a summary of the activities
that the Visual Modeler is about to perform, along with an estimated time to
complete.
7. Click the Finish button to start the reverse engineering process. Upon
completion, a summary step appears. When you close the wizard, the newly
reverse-engineered classes appear under the Data Services folder of the tree
view on the left side of the Visual Modeler screen.
8. Right-click the CDataManager class, and then select the Generate Code
menu item to launch the Code Generation Wizard discussed in Chapter 7,
"The ClassManager Library."
9. On the Class Options step of the preview process, give the collection class the
name CDataManagers. The only other changes in this process are to
deselect any of the Generate Variable options in the wizard to prevent
regeneration of existing properties.
10. When generation is complete, be sure to move all the members in the Delete
These Members list to the Keep These Members List during the Delete
Members in Class step of the wizard.
Although this might seem a bit cumbersome, it is much faster to generate collection
classes in this manner when you are familiar with the Visual Modeler.
Summary
The next chapter introduces and implements the multipart business object
paradigm. It uses both the DataManager and ClassManager components as its
foundation, and also incorporates the distribution topics covered in Chapter 5,
"Distribution Considerations." The next chapter also implements the first
component that is run under the control of Microsoft Transaction Server.
Chapter 9. A Two-Part, Distributed Business
Object
We have spent a significant amount of time getting ready for this chapter. The
multi-part business object defined here represents impurity at its finest, not only in
how we define our business layer, but also in how we make it work across the client
and application tiers of the system. Before we delve into the subject matter, be
prepared to become slightly upset when we split our "pure" business object into two
"impure" components. Also be prepared for further distress when we remove many
of the business-specific methods on our classes and move them onto an application-
specific surrogate class. Our reasoning for breaking with traditional object-oriented
design theory has to do with our goal of maximum reuse and performance in a
distributed object world. Hopefully, we will make our decision factors clear as we
work our way through this chapter.
Design Theory
If we analyze the drawbacks of a pure layer-to-tier mapping, the most obvious issue
is that of round trip calls that must be made between the user layer that receives the
user input and the business layer that validates the input. Well-designed
applications should be capable of providing validation to the user as soon as possible.
If the entire business layer resides on a middle tier, then even simple property
validation becomes programmatically tedious. To accomplish this, the client must
move one or more of the object properties into the chosen transport structure,
make a validation request with this information over the DCOM boundary, wait for a
response, and handle the results accordingly. This technique represents a
significant amount of effort and bandwidth to find out that the user entered an
invalid date of "June 31, 1999." This is even more frustrating if the client is sitting in
Singapore and the server is sitting in Texas over a WAN connection. Thus, it would
be advantageous to move some of this simple functionality to the client tier without
having to move the entire business layer with it.
Thus, we base our design goals for a multi-part business object upon our desire to
have as much business functionality in a centrally controlled location as possible.
These design goals include the following:
• Provide fast response time to user validation over a potentially slow network
connection.
• Make our business layer functionality available to the widest range of
consumers, whether they connect by a Visual Basic (VB) client or an Active
Server Pages (ASP)-driven Web interface.
• Give the capability to add support for new business objects in as
straightforward a manner as possible.
• Build our client as thin as possible without sacrificing an efficient interface in
the process.
How do we make such a split in the business object? The most obvious solution is to
move the property level validation over to the client tier while leaving the core
business logic and data layer interfaces on the application tier. In fact, this is exactly
what this framework does. Although this approach does not necessarily represent a
new concept, we take it a bit further with our architecture. If we simply make the
split and nothing more, we create two halves of a business object—one that lives on
the application tier and one that lives on the client tier. This approach can lead to
some duplication of similar functionality across the two halves of the business object.
To avoid such duplication, we define our business class on the client tier and
implement basic property-level validation. On the application tier, we implement a
single business application class that can serve the CRUD (Create, Retrieve, Update,
and Delete) requirements of all business classes implemented on the client, in
essence creating a pseudo object-request broker. To do this, we use metadata
defined using our ClassManager component developed in the previous chapter. We
use this same application tier component as a surrogate to implement
application-level business logic. Thus, we have created a user-centric component
(the object that resides on the client) and a business-centric component (the
application-tier component).
From this point forward, we use a modified version of the Northwind database that
ships with Visual Basic as the example for our architecture. Figure 9.1 shows the
object model for this database.
Figure 9.1. The Northwind object model.
The modifications we have made to the database include normalization and the
implementation of our database development standards to support our architecture.
We have created a ListItem object to provide a lookup mechanism for simple data
normalization purposes. We have also made objects out of the city, region, and
country entities. The reasons for doing this are for normalization and a desire to
preserve future flexibility. At some point in the future, we might want to extend our
application to track information specific to a city, region, or country. An example
might be a country's telephone dialing prefix. By having an object instead of a
simple text field that stores the country name, we can simply add the
DialingPrefix property to the definition for the country class.
Because we have already spent a lot of effort in building helper classes in previous
chapters, our server-side component of the business layer does not need any
additional direct helper classes of its own. We define the data-centric form of the
multi-part business object first in terms of an interface definition IAppServer. This
interface implements the majority of the functionality necessary to implement
CRUD processing using our ClassManager and DataManager libraries. By
subsequently implementing this interface on a class called CNWServer, we gain
access to that functionality.
COM purity would have us define an enumeration to identify our class types that we
are implementing on the server side. The real world tells us that binary compatibility
issues down the road will have us rolling the dice too many times with each series of
recompilations of the server, so we stick to using constants. Although we still force
a recompile whenever we add a new class type to our server, VB is not going to see
any changes in the enumeration that would otherwise break compatibility. Breaking
compatibility across the DCOM boundary forces us to redeploy server- and
client-side components. Another benefit of the constant approach is that it enables
us to build the IAppServer component for use by many applications, where an
enumeration would force us to reimplement the CRUD functionality in a
cut-and-paste fashion.
An MTS Primer
Before going into the details of our IAppServer and CNWServer classes, we must
spend a few paragraphs talking about Microsoft Transaction Server (MTS) and the
features we are interested in for our framework. Although we can easily drop any
ActiveX DLL into MTS, we cannot take advantage of its transactional features and
object pooling mechanisms unless we program specifically for them.
The ObjectContext class is defined in the Microsoft Transaction Server Type Library
(mtxas.dll). As the name implies, the ObjectContext is an object that accesses
the current object's context. Context, in this case, provides information about the
current object's execution environment within MTS. This includes information about
our parent and, if used, the transaction in which we are running. A transaction is a
grouping mechanism that allows a single object or disparate set of objects to
interact with a database in a manner such that all interactions must complete
successfully or every interaction is rolled back.
If we want to create any further MTS objects from within our existing MTS object
that can access our transaction, we must use the CreateInstance method of the
ObjectContext class to do so.
For ODBC version 3.x, pooling is controlled at the database driver level through a
CPTimeout registry setting. Values greater than zero tell the ODBC Driver Manager
to keep an unused connection in the pool for the specified number of seconds.
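For example, for the SQL Server driver the setting typically lives under the driver's key, as shown below; the exact path can vary by driver, so treat this as an illustration only.
HKEY_LOCAL_MACHINE\SOFTWARE\ODBC\ODBCINST.INI\SQL Server
    CPTimeout = "60"    (REG_SZ; pool idle connections for 60 seconds)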
IAppServer/CNWServer
We will build out our application side classes, IAppServer and CNWServer, in a
parallel fashion. In some cases, we will implement methods on IAppServer and
provide hooks into it by simply calling into an IAppServer object instantiated on
CNWServer. In other cases, the methods on IAppServer are simply abstract in
nature and will require full implementation by our CNWServer class with no calls into
IAppServer.
Getting Started
To start our development of our IAppServer and CNWServer classes, we must first
create two new ActiveX DLL projects within Visual Basic. The first project will be
called AppServer, and the second will be called NWServer. We define our
IAppServer class within our AppServer project. Likewise, we define in our
NWServer the CNWServer class that implements IAppServer. Both the IAppServer
and NWServer components will be hosted in MTS, so our normal programming
model for interface implementation will change somewhat as we go through our
development. To start with, for our CNWServer object to create a reference to an
IAppServer object, it must create that object at runtime (using the object context's
CreateInstance method rather than New), and it must do so within
the Activate event of the ObjectControl rather than the Class_Initialize event.
The following code shows this new initialization mechanism:
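A minimal sketch of that initialization on CNWServer follows; the ProgID "AppServer.IAppServer" and the module-level variable names are assumptions.
Implements ObjectControl

Private ObjCtx As ObjectContext
Private mIAppServer As IAppServer

Private Sub ObjectControl_Activate()
    ' Establish our MTS context and create the IAppServer object inside
    ' our transaction rather than in Class_Initialize.
    Set ObjCtx = GetObjectContext()
    Set mIAppServer = ObjCtx.CreateInstance("AppServer.IAppServer")
End Sub

Private Sub ObjectControl_Deactivate()
    Set mIAppServer = Nothing
    Set ObjCtx = Nothing
End Sub

Private Function ObjectControl_CanBePooled() As Boolean
    ObjectControl_CanBePooled = False
End Function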
Note that we are using the CreateInstance method of the ObjectContext object to
create our IAppServer object. This is because we want to enlist IAppServer in our
transaction.
Our next step is to define the set of classes supported by the CNWServer component.
We first do this by adding a basic code module with class-type constants to our
NWServer project. These constants will be used as indexes into our class definitions.
They also will form the common language with the CNWClient class. If we are using
SourceSafe, we can share this file between both the client and application tiers;
otherwise, we must create matching copies of the file. The following listing shows
the constant definitions for the Northwind application:
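The constant names below are taken from their use later in the chapter; the numeric values shown are illustrative only, because any unique set will do as long as both tiers compile against the same file.
Option Explicit

Public Const CT_CATEGORY As Integer = 1
Public Const CT_CITY As Integer = 2
Public Const CT_COUNTRY As Integer = 3
Public Const CT_CUSTOMER As Integer = 4
Public Const CT_EMPLOYEE As Integer = 5
Public Const CT_LIST_ITEM As Integer = 6
Public Const CT_ORDER As Integer = 7
Public Const CT_ORDER_DETAIL As Integer = 8
Public Const CT_PRODUCT As Integer = 9
Public Const CT_REGION As Integer = 10
Public Const CT_SHIPPER As Integer = 11
Public Const CT_SUPPLIER As Integer = 12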
Example 9.1. The InitServer Method on CNWServer
If mIAppServer.DataManagers.Count = 0 Then
Init:
IAppServer_InitServer = True
Exit Function
NoInit:
IAppServer_InitServer = False
'1. Details to EventLog
Call WriteNTLogEvent("NWServer:InitServer", Err.Number, _
Err.Description, Err.Source)
'2. Generic to client - passed back on error stack
Err.Raise Err.Number, "NWServer:InitServer", _
Err.Description & " [" & Erl & "]"
ObjCtx.SetAbort
End Function
Although the code in Listing 9.1 looks relatively simple, there are several very
important elements to it. First, notice the If ObjCtx Is Nothing Then statement.
We must perform this check here because the ObjCtx might be invalid at this point.
As we will see later in this chapter, some of the methods on IAppServer and
CNWServer call other internal methods to perform database updates or inserts.
When those methods complete, the SetComplete method must be called to indicate
to MTS that the transaction can be committed. Calling SetComplete invalidates our
object context, so we must reestablish it here.
Also notice that if we enter into the error-handling region of the code at the bottom,
we call the SetAbort method of the object context. The reason for this call is so we
can signal to MTS that something went awry and we cannot participate in the
transaction. We call it last because it immediately passes control to the Deactivate
method on the object control, and our error-handling activities would not complete
otherwise.
Also notice that we are retrieving the connection strings for the database from the
registry. These connection strings correspond to the ADO ConnectString property
on the Connection object. At this point, we have created a DSN to the NWND2.MDB
database; therefore, the ConnectString parameter is set to "Provider=MSDASQL;
DSN=NORTHWIND". For a DSN-less version, we can have a connection string that
looks something like
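the following, where the driver name and file path are shown only as an example:
"Provider=MSDASQL;Driver={Microsoft Access Driver (*.mdb)};DBQ=C:\Data\NWND2.MDB"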
on CNWServer
Call IAppServer_InitServer
If Not mIAppServer.ClassDefs.Exists(CStr(ClassId)) Then
  Select Case ClassId
    Case CT_CATEGORY
      Set ClassDef = New CClassDef
      With ClassDef
        .DatabaseName = "NWIND"
        .ReadLocation = "Table_Category"
        .WriteLocation = "Table_Category"
        .IdColumnName = "Id"
        .OrderByColumnName = "Name"
      End With
    Case CT_CITY
      Set ClassDef = New CClassDef
      With ClassDef
        .DatabaseName = "NWIND"
        .ReadLocation = "Table_City"
        .WriteLocation = "Table_City"
        .IdColumnName = "Id"
        .ParentIdColumnName = "Region_Id"
        .OrderByColumnName = "Name"
      End With
    Case CT_COUNTRY
      Set ClassDef = New CClassDef
      With ClassDef
        .DatabaseName = "NWIND"
        .ReadLocation = "Table_Country"
        .WriteLocation = "Table_Country"
        .IdColumnName = "Id"
        .OrderByColumnName = "Name"
      End With
    Case CT_CUSTOMER
      Set ClassDef = New CClassDef
      With ClassDef
        .DatabaseName = "NWIND"
        .ReadLocation = "View_Customer"
        .WriteLocation = "Table_Customer"
        .IdColumnName = "Id"
        .OrderByColumnName = "Company_Name"
      End With
    Case CT_EMPLOYEE
      Set ClassDef = New CClassDef
      With ClassDef
        .DatabaseName = "NWIND"
        .ReadLocation = "View_Employee"
        .WriteLocation = "Table_Employee"
        .IdColumnName = "Id"
        .OrderByColumnName = "Last_Name, First_Name"
      End With
    Case CT_LIST_ITEM
      Set ClassDef = New CClassDef
      With ClassDef
        .DatabaseName = "NWIND"
        .ReadLocation = "Table_List"
        .WriteLocation = "Table_List"
        .IdColumnName = "Id"
        .ParentIdColumnName = "List_Id"
        .OrderByColumnName = "Sort"
      End With
    Case CT_ORDER
      Set ClassDef = New CClassDef
      With ClassDef
        .DatabaseName = "NWIND"
        .ReadLocation = "View_Order"
        .WriteLocation = "Table_Order"
        .IdColumnName = "Id"
      End With
    Case CT_ORDER_DETAIL
      Set ClassDef = New CClassDef
      With ClassDef
        .DatabaseName = "NWIND"
        .ReadLocation = "View_Order_Detail"
        .WriteLocation = "Table_Order_Detail"
        .IdColumnName = "Id"
        .ParentIdColumnName = "Order_Id"
        .OrderByColumnName = "Id"
      End With
    Case CT_PRODUCT
      Set ClassDef = New CClassDef
      With ClassDef
        .DatabaseName = "NWIND"
        .ReadLocation = "View_Product"
        .WriteLocation = "Table_Product"
        .IdColumnName = "Id"
        .OrderByColumnName = "Name"
      End With
    Case CT_REGION
      Set ClassDef = New CClassDef
      With ClassDef
        .DatabaseName = "NWIND"
        .ReadLocation = "Table_Region"
        .WriteLocation = "Table_Region"
        .IdColumnName = "Id"
        .ParentIdColumnName = "Country_Id"
        .OrderByColumnName = "Name"
      End With
    Case CT_SHIPPER
      Set ClassDef = New CClassDef
      With ClassDef
        .DatabaseName = "NWIND"
        .ReadLocation = "Table_Shipper"
        .WriteLocation = "Table_Shipper"
        .IdColumnName = "Id"
        .OrderByColumnName = "Company_Name"
      End With
    Case CT_SUPPLIER
      Set ClassDef = New CClassDef
      With ClassDef
        .DatabaseName = "NWIND"
        .ReadLocation = "View_Supplier"
        .WriteLocation = "Table_Supplier"
        .IdColumnName = "Id"
        .OrderByColumnName = "Company_Name"
      End With
IAppServer
Exit Function
ErrorTrap:
'1. Details to EventLog
Call WriteNTLogEvent("IAppServer:GetPropertyNames", _
Err.Number, Err.Description, Err.Source)
'2. Generic to client - passed back on error stack
Err.Raise Err.Number, "IAppServer:GetPropertyNames", _
Err.Description & " [" & Erl & "]"
End Function
This method simply iterates through the PropertyDefs collection of the class
definition for the requested class type. Remember that we have already defined this
information using the GetClassDef method of CNWServer. Our implementation of
this method on CNWServer is as follows:
We now turn our attention to hooking into our CRUD processing routines, which we
so cleverly built into our CDataManager library. We start with data retrieval by
defining two public methods, GetObjectData and GetObjectListData. The
GetObjectData method requires that we pass in a class type, an ObjectId, and an
ObjectSubId. It returns a list of property names and the actual object data. We
declare these two return parameters as variants because of the need to support ASP,
whose underlying VBScript engine supports only this data type. GetObjectData
proceeds to build a CQueryParms object, moving the associated ReadLocation
property of the CClassDef object into the TableName property of the CQueryParms
object. Similarly, we iterate through the ColumnDefs collection of the CClassDef
object to build the ColumnList property of the CQueryParms object.
Next, the WhereList is built using the ObjectId and ObjectSubId values passed in,
combined with the IdColumnName and SubIdColumnName fields of the CClassDef
object. After the CQueryParms object is complete, we call the GetData method of a
CDataManager object to retrieve the data from the database. If data is returned,
then the fields collection of the resultset is iterated with a call to the
ColumnToPropertyDef method of the class definition to generate the
PropertyNames array that is sent back. Finally, we make a call to the GetRows
method of the recordset to generate the Data return parameter. The code for the
GetObjectData method is provided in Listing 9.4.
IAppServer
ErrorTrap:
'1. Details to EventLog
Call WriteNTLogEvent("IAppServer:GetObjectData", Err.Number, _
Err.Description, Err.Source)
'2. Generic to client - passed back on error stack
Err.Raise Err.Number, "IAppServer:GetObjectData", _
Err.Description & " [" & Erl & "]"
End Sub
As you can see, the overall method is straightforward because we are relying
heavily on our DataManager component to perform the bulk of the data access for us.
From our CNWServer component, the implementation of this method looks like
Listing 9.5.
Implemented on CNWServer
IAppServer
QueryParms.TableName = ClassDef.ReadLocation
For Each ColumnDef In ClassDef.ColumnDefs
If ColumnDef.CanRead Then
QueryParms.ColumnList.Add ColumnDef.Name
End If
Next
Set rs = DataManager.GetData(QueryParms)
If Not rs Is Nothing Then
ReDim PropertyNames(0 To QueryParms.ColumnList.Count - 1)
i = 0
For Each rsField In rs.Fields
PropertyNames(i) = _
ClassDef.ColumnToPropertyDef(rsField.Name).Name
i = i + 1
Next
vData = rs.GetRows
Else
vData = vbEmpty
End If
Data = vData
Exit Sub
ErrorTrap:
'1. Details to EventLog
Call WriteNTLogEvent("IAppServer:GetObjectListData", Err.Number, _
Err.Description, Err.Source)
'2. Generic to client - passed back on error stack
Err.Raise Err.Number, "IAppServer:GetObjectListData", _
Err.Description & " [" & Erl & "]"
End Sub
Again, you should be able to see that this method is straightforward, with the
CDataManager object performing the bulk of the work. Again, our CNWServer
component hooks into this component in a straightforward fashion as shown in
Listing 9.7.
Implemented on CNWServer
Now that we can retrieve individual objects or lists of objects, we turn our attention
to the deletion of objects. To delete an object or list of objects from the system, we
define DeleteObject and DeleteObjectList methods on IAppServer. As you
might surmise, DeleteObject deletes a single object, whereas DeleteObjectList
deletes a list of objects based on a master-detail or parent-child relationship.
IAppServer
Note that we have introduced the use of the object context in this method with the
SetComplete and SetAbort calls. The reason for this is that we are altering the
state of the database with this call, so it should operate within a transaction. Our
previous methods have been simple retrievals that do not require transactional
processing.
on CNWServer
If you handle your referential integrity on the RDBMS, then nothing else must be
done here. If, on the other hand, you want the business layer to manage this
functionality, you can modify this DeleteObject method to do just this using a Case
statement. Such a modification might look like the code shown in Listing 9.10.
Implemented on CNWServer
Call IAppServer_GetClassDef(ClassId)
Select Case ClassId
Case CT_CATEGORY
Call IAppServer_GetClassDef(CT_PRODUCT)
If Not mIAppServer.IsReferenced(ObjectId, ObjectSubId, _
CT_PRODUCT, "CategoryId", "") Then
Call mIAppServer.DeleteObject(CT_CATEGORY, ObjectId, _
ObjectSubId, Errors)
End If
Case CT_CITY
Call IAppServer_GetClassDef(CT_ORDER)
If mIAppServer.IsReferenced(ObjectId, ObjectSubId, _
CT_ORDER, "ShipToCityId", "") Then
GoTo NoDelete
End If
Call IAppServer_GetClassDef(CT_CUSTOMER)
If mIAppServer.IsReferenced(ObjectId, ObjectSubId, _
CT_CUSTOMER, "CityId", "") Then
GoTo NoDelete
End If
Call IAppServer_GetClassDef(CT_EMPLOYEE)
If mIAppServer.IsReferenced(ObjectId, ObjectSubId, _
CT_EMPLOYEE, "CityId", "") Then
GoTo NoDelete
End If
Call IAppServer_GetClassDef(CT_SUPPLIER)
If mIAppServer.IsReferenced(ObjectId, ObjectSubId, _
CT_SUPPLIER, "CityId", "") Then
GoTo NoDelete
End If
Call IAppServer_GetClassDef(CT_REGION)
If mIAppServer.IsReferenced(ObjectId, ObjectSubId, _
CT_REGION, "CityId", "") Then
GoTo NoDelete
End If
Call mIAppServer.DeleteObject(CT_CITY, ObjectId, ObjectSubId, _
Errors)
Case CT_REGION
Call IAppServer_GetClassDef(CT_CITY)
If Not mIAppServer.IsReferenced(ObjectId, ObjectSubId, _
CT_CITY, "RegionId", "") Then
Call mIAppServer.DeleteObjectListData(CT_REGION, ObjectId, _
ObjectSubId, Errors)
End If
Case CT_CUSTOMER
Call IAppServer_GetClassDef(CT_ORDER)
If Not mIAppServer.IsReferenced(ObjectId, ObjectSubId, _
CT_ORDER, "CustomerId", "") Then
Call mIAppServer.DeleteObject(CT_CUSTOMER, ObjectId, _
ObjectSubId, Errors)
End If
Case CT_EMPLOYEE
Call IAppServer_GetClassDef(CT_ORDER)
If Not mIAppServer.IsReferenced(ObjectId, ObjectSubId, _
CT_ORDER, "EmployeeId", "") Then
Call mIAppServer.DeleteObject(CT_EMPLOYEE, ObjectId, _
ObjectSubId, Errors)
End If
Case CT_ORDER
Call IAppServer_GetClassDef(CT_ORDER_DETAIL)
Call mIAppServer.DeleteObjectListData(CT_ORDER_DETAIL, ObjectId, _
ObjectSubId, Errors)
Call mIAppServer.DeleteObject(ClassId, ObjectId, _
ObjectSubId, Errors)
Case CT_PRODUCT
Call IAppServer_GetClassDef(CT_ORDER)
If Not mIAppServer.IsReferenced(ObjectId, ObjectSubId, _
CT_ORDER, "ProductId", "") Then
Call mIAppServer.DeleteObject(CT_PRODUCT, ObjectId, _
ObjectSubId, Errors)
End If
Case CT_SHIPPER
Call IAppServer_GetClassDef(CT_ORDER)
If Not mIAppServer.IsReferenced(ObjectId, ObjectSubId, _
CT_ORDER, "ShipperId", "") Then
Call mIAppServer.DeleteObject(CT_SHIPPER, ObjectId, _
ObjectSubId, Errors)
End If
Case CT_SUPPLIER
Call IAppServer_GetClassDef(CT_PRODUCT)
If Not mIAppServer.IsReferenced(ObjectId, ObjectSubId, _
CT_PRODUCT, "SupplierId", "") Then
Call mIAppServer.DeleteObject(CT_SUPPLIER, ObjectId, _
ObjectSubId, Errors)
End If
Case Else
Call IAppServer_GetClassDef(ClassId)
Call mIAppServer.DeleteObject(ClassId, ObjectId, _
ObjectSubId, Errors)
End Select
ObjCtx.SetComplete
Exit Sub
NoDelete:
ErrorTrap:
ObjCtx.SetAbort
End Sub
IAppServer
QueryParms.ColumnList.Add "Count(*)"
If ObjectId > 0 And TargetPropertyName <> "" Then
QueryParms.WhereList.Add _
ClassDef.PropertyToColumnDef(TargetPropertyName).Name & "=" & _
ObjectId
End If
If ObjectSubId > 0 And TargetSubPropertyName <> "" Then
QueryParms.WhereList.Add _
ClassDef.PropertyToColumnDef(TargetSubPropertyName).Name & _
"=" & ObjectSubId
End If
Set rs = DataManager.GetData(QueryParms)
If Not rs Is Nothing Then
rs.MoveFirst
IsReferenced = rs.Fields.Item(0).Value > 0
Else
IsReferenced = True ' better safe than sorry
End If
ObjCtx.SetComplete
Exit Function
ErrorTrap:
'1. Details to EventLog
Call WriteNTLogEvent("IAppServer:IsReferenced", Err.Number, _
Err.Description, Err.Source)
'2. Generic to client - passed back on error stack
Err.Raise Err.Number, "IAppServer:IsReferenced", _
Err.Description & " [" & Erl & "]"
ObjCtx.SetAbort
End Function
We implement the IsReferenced method similar to our other CRUD methods in that
we build a CQueryParms object, populate our WhereList, and make a call to
GetData. However, the major difference here is that our ColumnList contains a
"count(*)" clause versus a standard column list. We retrieve this value to
determine whether any records exist that reference a given ObjectId and
ObjectSubId. Note that we have added the SetAbort and SetComplete calls on our
object context for the IsReferenced method. The reason for this is that if we have
an issue determining whether an object is referenced, we do not want a delete being
performed on the database.
IAppServer
ErrorTrap:
'1. Details to EventLog
Call WriteNTLogEvent("IAppServer:DeleteObjectList", Err.Number, _
Err.Description, Err.Source)
'2. Generic to client - passed back on error stack
Err.Raise Err.Number, "IAppServer:DeleteObjectList", _
Err.Description & " [" & Erl & "]"
ObjCtx.SetAbort
End Sub
Again, we simply verify that we have defined the CClassDef object for the given
ClassId. We then call the DeleteObjectList method on our mIAppServer object.
With retrievals and deletes out of the way, we turn our attention to inserts and
updates. As before, we have the capability to handle single objects or collections of
objects, the latter being for a master-detail or parent-child–style relationship. Our
InsertObjectData function looks similar in calling convention to our
GetObjectData method, except that now we are receiving a variant array of object
state information. The first step of the InsertObjectData function is to call the
GetInsertableRS method of the CDataManager object. We then use the
PropertyNames array to loop through the Data variant array, moving values into the
associated recordset fields. We use our PropertyDefToColumn mapping here to
assist us in this process. We also check to ensure that we do not overwrite fields
with the CanWrite property set to False. If we have validation functionality in place,
we would perform that checking here as well.
After all the data has been moved into the updateable recordset, we have a choice
on how we generate our primary key value. One option is to retrieve a value for the
primary key before the insert, while another is to allow the RDBMS to generate the
key. In the first case, we can create a method on our CDataManager object to do this,
or we can implement it on our IAppServer. This method can be called something
like GetNextKey with a parameter of ClassId or TableName. How it is implemented
will depend on how you choose to define your keys. In the case of the RDBMS
generating the key, an AutoNumber type column (in the case of Microsoft Access) or
an Identity type column (in the case of SQL Server) is used that will automatically
generate the next integer sequence. For our purposes, we will be allowing the
RDBMS to generate our keys, but you can change this to suit your needs.
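If you do generate keys yourself, a GetNextKey method on CDataManager (where ExecuteSQL is directly available) might be sketched as follows; the Table_Keys table and its columns are purely hypothetical, and a production version would need to run inside the caller's transaction.
Public Function GetNextKey(TableName As String) As Long
    Dim rs As ADODB.Recordset
    ' Bump the counter first, then read it back; Table_Keys holds one row
    ' per table with a NextKey column (hypothetical schema).
    Call ExecuteSQL("update Table_Keys set NextKey = NextKey + 1 " & _
                    "where TableName = '" & TableName & "'")
    Set rs = ExecuteSQL("select NextKey from Table_Keys " & _
                        "where TableName = '" & TableName & "'")
    If Not (rs.EOF Or rs.BOF) Then GetNextKey = rs.Fields(0).Value
End Function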
IAppServer
rs.Update
' the following code only works for certain combinations of
' drivers and database engines (see MS KnowledgeBase)
' note that if there are triggers that fire and insert additional records
' with Identity/Autonumber columns, this number retrieved below
' will be wrong.
If ClassDef.IdColumnName <> "" Then
ObjectId = rs.Fields(ClassDef.IdColumnName)
End If
ObjCtx.SetComplete
Exit Sub
ErrorTrap:
'1. Details to EventLog
Call WriteNTLogEvent("IAppServer:InsertObjectData", _
Err.Number, Err.Description, Err.Source)
'2. Generic to client - passed back on error stack
Err.Raise Err.Number, "IAppServer:InsertObjectData", _
Err.Description & " [" & Erl & "]"
ObjCtx.SetAbort
End Sub
From our CNWServer class, the implementation of this method looks like Listing
9.14.
Implemented on CNWServer
IAppServer
Set rs = DataManager.GetInsertableRS(ClassDef.WriteLocation)
For i = LBound(Data, 2) To UBound(Data, 2)
rs.AddNew
For j = LBound(PropertyNames) To UBound(PropertyNames)
pName = PropertyNames(j)
With ClassDef
If .ColumnDefs.Item(.PropertyToColumnDef(pName).Name).CanWrite Then
Set rsField = rs.Fields(.PropertyToColumnDef(pName).Name)
If rsField.Type = adLongVarBinary Or _
rsField.Type = adLongVarChar Then
' chunk operations required
Else
If IsEmpty(Data(j, i)) Then
rsField.Value = vbEmpty
Else
rsField.Value = Data(j, i)
End If
End If
End If
End With
Next j
Next i
Call rs.UpdateBatch
ObjCtx.SetComplete
Exit Sub
ErrorTrap:
'1. Details to EventLog
Call WriteNTLogEvent("IAppServer:InsertObjectListData", Err.Number,
_
Err.Description, Err.Source)
'2. Generic to client - passed back on error stack
Err.Raise Err.Number, "IAppServer:InsertObjectListData", _
Err.Description & " [" & Erl & "]"
ObjCtx.SetAbort
End Sub
From our CNWServer class, the implementation of this method looks like Listing
9.16.
Implemented on CNWServer
Our last component of CRUD is that of update. Here, we only provide a mechanism
to update a single object. As in the insert case, this method calls the
GetUpdatableRS method of the appropriate CDataManager object. Again, we form
a CQueryParms object to help us make the appropriate call by first setting the
TableName property from the ReadLocation of the CClassDef object. We loop
through the ColumnDefs property of the CClassDef object, adding the columns
whose CanWrite property is set to True to the ColumnList property of the
CQueryParms object. We also add both the IdColumnName and SubIdColumnName to
the ColumnList to ensure that OLE DB has the necessary keys for the update that
is to follow. If we do not do this, OLE DB is not able to perform the update.
Remember that the Add method of our CStringList, which forms our ColumnList,
is designed to ignore duplicates, so we are safe in adding these two columns without
first checking to see if they have already been added.
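The setup described in this paragraph might look roughly like the following sketch; the variable names follow the other listings, and the handling of SubIdColumnName is an assumption.
QueryParms.TableName = ClassDef.ReadLocation
For Each ColumnDef In ClassDef.ColumnDefs
    If ColumnDef.CanWrite Then
        QueryParms.ColumnList.Add ColumnDef.Name
    End If
Next
' Make sure OLE DB has the key columns it needs to locate the row;
' CStringList.Add ignores duplicates, so adding them unconditionally is safe.
QueryParms.ColumnList.Add ClassDef.IdColumnName
QueryParms.WhereList.Add ClassDef.IdColumnName & "=" & ObjectId
If ClassDef.SubIdColumnName <> "" Then
    QueryParms.ColumnList.Add ClassDef.SubIdColumnName
    QueryParms.WhereList.Add ClassDef.SubIdColumnName & "=" & ObjectSubId
End If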
IAppServer
Set rs = DataManager.GetUpdatableRS(QueryParms)
For i = LBound(PropertyNames) To UBound(PropertyNames)
pName = PropertyNames(i)
With ClassDef
If .ColumnDefs.Item(.PropertyToColumnDef(pName).Name).CanWrite Then
Set rsField = rs.Fields(.PropertyToColumnDef(pName).Name)
If rsField.Type = adLongVarBinary Or rsField.Type = adLongVarChar Then
' requires chunk operations
Else
If IsEmpty(Data(i, 0)) Then
rsField.Value = vbEmpty
Else
rsField.Value = Data(i, 0)
End If
End If
End If
End With
Next i
rs.Update
ObjCtx.SetComplete
Exit Sub
ErrorTrap:
'1. Details to EventLog
Call WriteNTLogEvent("IAppServer:UpdateObjectData", _
Err.Number, Err.Description, Err.Source)
'2. Generic to client - passed back on error stack
Err.Raise Err.Number, "IAppServer:UpdateObjectData", _
Err.Description & " [" & Erl & "]"
ObjCtx.SetAbort
End Sub
From our CNWServer class, the implementation of this method looks like Listing
9.18.
Implemented on CNWServer
Finally, we want to give our system the added flexibility of querying for individual
objects or lists of objects. This becomes important as we build our ASP-based
reporting engine in Chapter 11, "A Distributed Reporting Engine." To implement this,
we define a method QueryObjectListData, which looks similar to a standard
GetObjectListData call, except we have replaced the ParentId and ParentSubId
parameters with a Criteria array and a Sort array.
IAppServer
QueryParms.TableName = ClassDef.ReadLocation
For Each ColumnDef In ClassDef.ColumnDefs
If ColumnDef.CanRead Then
QueryParms.ColumnList.Add ColumnDef.Name
End If
Next
If IsArray(Criteria) Then
' Criteria(i)(0) = PropertyName
' Criteria(i)(1) = Operator
' Criteria(i)(2) = Value
' (loop through the Criteria array here, adding each condition to
' QueryParms.WhereList; the loop body is not shown in this excerpt)
End If
If IsArray(Sort) Then
For i = LBound(Sort) To UBound(Sort)
QueryParms.OrderList.Add _
ClassDef.PropertyToColumnDef(CStr(Sort(i))).Name
Next i
End If
Set rs = DataManager.GetData(QueryParms)
Data = vData
Exit Sub
ErrorTrap:
'1. Details to EventLog
Call WriteNTLogEvent("IAppServer:QueryObjectListData", Err.Number,
_
Err.Description, Err.Source)
'2. Generic to client - passed back on error stack
Err.Raise Err.Number, "IAppServer:QueryObjectListData", _
Err.Description & " [" & Erl & "]"
End Sub
From our CNWServer class, the implementation of this method is provided in Listing
9.20.
Implemented on CNWServer
With our server-side IAppServer and CNWServer classes in place, we can move
from the application tier to the user tier and build the client-side mates. In a similar
fashion to the server side, we define an interface called IAppClient. Unlike
IAppServer though, our implementing class CNWClient is responsible for
implementing all methods defined by IAppClient. Our NWClient ActiveX DLL that
contains CNWClient is responsible for defining a class for each class type of the
library that is to be exposed to the client application. This definition takes the form
of Visual Basic class modules, which define the same properties spelled out in the
CClassDef on the server side. Our first order of business is to define an InitClient
method that connects to the DCOM object using the passed-in server name. We
always override our InitClient method with code similar to that shown for the
CNWClient implementation in Listing 9.21.
CNWClient
Now that we can connect to the application server over DCOM, we turn our attention to defining two
more interfaces in the same ActiveX library as IAppClient. These two interfaces
are IAppObject and IAppCollection, both of which we use to implement our final
objects and collection of objects, respectively. Our IAppCollection contains a
collection of IAppObjects. On our IAppObject, we define the methods of
SetStateToVariant and SetStateFromVariant that we must override in our
implementations. These methods are responsible for converting between native
objects and variant Data arrays. We also define an IsValid method to help us check
for validity across properties. Finally, we define properties Id and SubId used during
our CRUD processing that we will be implementing. The interface definition for
IAppObject appears in Listing 9.22.
Option Explicit
Private mId As Long
Private mSubId As Long
Private mClassId As Integer
Private mIsLoaded As Boolean
Private mIsDirty As Boolean
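Beyond these members, the interface exposes the properties and overridable methods described above; a sketch of their shape follows. The optional RowIndex parameter is inferred from the discussion later in this section, and similar property procedures would exist for SubId, ClassId, IsLoaded, and IsDirty.
Public Property Get Id() As Long
    Id = mId
End Property

Public Property Let Id(NewValue As Long)
    mId = NewValue
End Property

Public Sub SetStateFromVariant(PropertyNames As Collection, Data As Variant, _
                               Optional RowIndex As Long = 0)
    ' override this method
End Sub

Public Sub SetStateToVariant(PropertyNames As Collection, Data As Variant)
    ' override this method
End Sub

Public Function IsValid() As Boolean
    ' override this method
End Function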
Definition
Option Explicit
Private mCol As Collection
Private mClassId As Integer
Private mIsLoaded As Boolean
Private mIsDirty As Boolean
Public Sub SetStateFromVariant(PropertyNames As Collection, Data As
Variant)
' override this method
End Sub
Now that we can connect to the application server and we have our IAppObject and
IAppCollection interfaces defined, we turn our attention to the data retrieval
methods of LoadObject and LoadCollection. As you might guess, we will be
calling the corresponding GetObjectData and GetObjectListData methods on our
CNWServer object.
on CNWClient
SkipLoadObject:
Exit Function
ErrorTrap:
Err.Raise ERR_CANNOT_LOAD + vbObjectError, "CNWClient:LoadObject", _
LoadResString(ERR_CANNOT_LOAD) & "[" & Err.Description & "]"
End Function
As can be seen from the previous code sample, we must dimension a variable of each supported class type.
We then perform a Select Case statement on the ClassId, creating the specific
object instance of the requested class. We then set the instance of the generic
AppObject variable to our specific instance of an object that has implemented the
IAppObject interface. From there, we fall through to the GetObjectData method of
our CAppServer variable. If the method returns a non-empty Data variable, we call
the generic SetStateFromVariant method of our AppObject to move the
information from the variant data array into the property values of the specific
object. We then return our AppObject to the calling routine. The reason for the use
of AppObject is to prevent the late binding that can slow performance. Using this
approach can make our code base more modular.
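In sketch form, the dimension-and-select portion of LoadObject might look like the following; class names other than COrder and COrderDetailItem, which appear later in this chapter, are assumptions.
Dim AppObject As IAppObject
Dim Order As COrder
Dim CustomerItem As CCustomerItem   ' assumed class name

Select Case ClassId
    Case CT_ORDER
        Set Order = New COrder
        Set AppObject = Order
    Case CT_CUSTOMER
        Set CustomerItem = New CCustomerItem
        Set AppObject = CustomerItem
    ' ...one Case per supported class type...
    Case Else
        GoTo SkipLoadObject
End Select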
IAppObject
Option Explicit
Implements IAppObject
As you can see from the code, we have defined all our properties using Let and Get
statements. If we choose, this technique allows us to provide the client with instant
feedback about data validation. We also define an IsValid method on IAppObject,
which performs validation across properties. If we look at the
SetStateFromVariant method, we see that we have received a PropertyNames
collection. This collection is a list of integers keyed on property names. The numeric
values correspond to column positions in the Data array for a given property. We
also receive an optional RowIndex parameter in case this Data array is the result of
a multirow resultset.
We have also defined a simple helper function called GetValue to help us trap null
values and convert them to a standard set of empty values. The simple code for this
appears in Listing 9.26.
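A minimal version of such a helper might look like the following sketch; a fuller implementation could return type-appropriate defaults based on VarType.
Private Function GetValue(Value As Variant) As Variant
    ' Trap database nulls so the property procedures never receive Null.
    If IsNull(Value) Then
        GetValue = Empty
    Else
        GetValue = Value
    End If
End Function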
Implemented on CNWClient
Call mIAppClient.AppServer.GetObjectListData(ClassId, _
ParentId, _
ParentSubId, _
PropertyNames, _
Data, _
Errors)
If IsArray(Data) Then
Call AppCollection.SetStateFromVariant(MakePropertyIndex(PropertyNames), _
Data)
End If
Set IAppClient_LoadCollection = AppCollection
SkipLoadCollection:
Exit Function
ErrorTrap:
Err.Raise ERR_CANNOT_LOAD + vbObjectError, "CNWClient:LoadCollection",
_
LoadResString(ERR_CANNOT_LOAD) & "[" & Err.Description & "]"
End Function
Option Explicit
Implements IAppCollection
Dim mIAppCollection As IAppCollection
Private Sub Class_Initialize()
Set mIAppCollection = New IAppCollection
End Sub
End Function
Private Sub IAppCollection_Remove(vntIndexKey As Variant)
Call mIAppCollection.Remove(vntIndexKey)
End Sub
The Add, Count, Item, and NewEnum methods tap directly into the
mIAppCollection variable for functionality. Similarly, the IsLoaded, IsDirty, and
ClassId properties are inherited from our mIAppCollection variable. The only
methods that we override are the SetStateFromVariant and IsValid methods. In
the SetStateFromVariant method, we loop through the Data array a row at a time.
For each row, we instantiate our specific COrderDetailItem, set a generic
IAppObject reference to it, and call the SetStateFromVariant method on the
generic object reference. After the state has been set, we add the IAppObject
reference onto the collection. We proceed for all rows of the Data array.
Implemented on COrderDetailItem
We now define the delete portion of CRUD on the client side. Here, we define
DeleteObject and DeleteCollection methods. Because of its simplicity, we can
implement the DeleteObject functionality in our IAppClient class and call into it
from our CNWClient implementation. Within the IAppClient implementation of the
DeleteObject method, we pass in our desired ClassId, Id, and SubId values. We
then pass this information off to the DeleteObject method of our IAppServer
object. The code for the DeleteObject method appears in Listing 9.30.
Implemented on CNWClient
As you can see, this method implementation is straightforward. The call into this
method from CNWClient appears in Listing 9.31.
CNWClient
CNWClient
Our attention now turns to the data insertion activity. We define an InsertObject
method that takes ClassId and AppObject parameters with the latter being a
return value. Again, we must dimension a variable of every supported class type.
Using a Select Case statement, we instantiate our specific object reference and set
it to the generic AppObject. We fall through to a block of code that creates the
necessary property index for a subsequent call to the SetStateToVariant method
of our generic AppObject. We then call the InsertObjectData method on our
AppServer object to perform the insert. We expect the method to return ObjectId
and ObjectSubId parameters, which we set to our Id and SubId properties of our
AppObject. The code for the InsertObject method on CNWClient appears in
Listing 9.33.
Implemented on CNWClient
PropertyNames = mIAppClient.AppServer.GetPropertyNames(ClassId)
Set PropertyIndex = MakePropertyIndex(PropertyNames)
ReDim Data(1 To PropertyIndex.Count, 0)
Call AppObject.SetStateToVariant(PropertyIndex, Data)
Call mIAppClient.AppServer.InsertObjectData(ClassId, PropertyNames, _
Data, Errors, _
ObjectId, ObjectSubId)
AppObject.Id = ObjectId
AppObject.SubId = ObjectSubId
SkipInsertObject:
Exit Sub
ErrorTrap:
Err.Raise ERR_CANNOT_INSERT + vbObjectError, _
"CNWClient:InsertObject", _
LoadResString(ERR_CANNOT_INSERT) & "[" & Err.Description & "]"
End Sub
Implemented on CNWClient
PropertyNames = mIAppClient.AppServer.GetPropertyNames(ClassId)
Set PropertyIndex = MakePropertyIndex(PropertyNames)
ReDim Data(1 To PropertyIndex.Count, 1 To AppCollection.Count)
Call AppCollection.SetStateToVariant(PropertyIndex, Data)
Call mIAppClient.AppServer.InsertObjectListData(ClassId, _
ParentId, _
ParentSubId, _
PropertyNames, _
Data, _
Errors)
SkipInsertCollection:
Exit Sub
ErrorTrap:
Err.Raise ERR_CANNOT_INSERT + vbObjectError, _
"CNWClient:InsertCollection", _
LoadResString(ERR_CANNOT_INSERT) & "[" & Err.Description & "]"
End Sub
Implemented on CNWClient
ObjectSubId = 0
Select Case ClassId
Case CT_ORDER
Set Order = AppObject
Case CT_CATEGORY
Set CategoryItem = AppObject
Case CT_CITY
Set CityItem = AppObject
Case CT_COUNTRY
Set CountryItem = AppObject
Case CT_REGION
Set RegionItem = AppObject
Case CT_CUSTOMER
Set CustomerItem = AppObject
Case CT_EMPLOYEE
Set EmployeeItem = AppObject
Case CT_PRODUCT
Set ProductItem = AppObject
Case CT_SHIPPER
Set ShipperItem = AppObject
Case CT_SUPPLIER
Set SupplierItem = AppObject
Case Else
GoTo SkipUpdateObject
End Select
ObjectId = AppObject.Id
ObjectSubId = AppObject.SubId
PropertyNames = mIAppClient.AppServer.GetPropertyNames(ClassId)
Set PropertyIndex = MakePropertyIndex(PropertyNames)
Call AppObject.SetStateToVariant(PropertyIndex, Data)
Call mIAppClient.AppServer.UpdateObjectData(ClassId, PropertyNames, _
Data, Errors, _
ObjectId, ObjectSubId)
SkipUpdateObject:
Exit Sub
ErrorTrap:
Err.Raise ERR_CANNOT_UPDATE + vbObjectError, _
"CNWClient:UpdateObject", _
LoadResString(ERR_CANNOT_UPDATE) & "[" & Err.Description & "]"
End Sub
Implemented on COrder
We assume that the calling function has already dimensioned our Data array to the
appropriate size. We start by creating a variant array of the same size as the
number of property names. We again use the PropertyNames collection to index
into the appropriate element of the Data array to set the state value.
CNWClient
Call AppCollection.SetStateFromVariant(MakePropertyIndex(PropertyNames), _
Data)
End If
Set IAppClient_LoadQueryCollection = AppCollection
SkipQueryCollection:
Exit Function
ErrorTrap:
Err.Raise ERR_CANNOT_LOAD + vbObjectError, _
"CNWClient:LoadQueryCollection", _
LoadResString(ERR_CANNOT_LOAD) & _
"[" & Err.Description & "]"
End Function
Now that we have completed our AppServer and NWServer components, we must
install them into MTS so that our AppClient and NWClient can access them.
Components within MTS are placed into groups called packages. One package can
host multiple components, but one component can reside within only one package.
A package is an administrative convenience for installing and transferring these
components between MTS machines and for creating client-side installation routines
to access these components.
To start our installation process, we must start the MTS Explorer. From within the
MTS Explorer, open the Packages Installed folder under the My Computer folder, as
shown in Figure 9.4. This assumes that we are running the MTS Explorer on the
same computer that will host our MTS components.
Figure 9.4. Navigating to the Packages folder in MTS.
From the Packages folder, right-click, select New, and then Package. This brings up
the Package Wizard dialog as shown in Figure 9.5.
Figure 9.5. Launching the Package Wizard.
From the Package Wizard, select the Create an Empty Package button. In the Create
Empty Package dialog that appears, type the name of our package, in this case
Northwind Traders. Click on the Next button, which takes us to the Set Package
Identity page of the wizard. Next, select the Interactive User option and click the
Next button. Note that this option can be changed later after the package is installed.
Click the Finish button to complete the process. We now see that our new package
has been added in the MTS Explorer, as shown in Figure 9.6.
Figure 9.6. The newly added Northwind Traders
package.
To add our AppServer and NWServer components to the package, we first must
expand the Northwind Traders package to gain visibility to the Components folder.
This appears in Figure 9.7.
Figure 9.7. Navigating to the Components folder.
If we right-click on the Components folder, and then select New, Component, the
Component Wizard appears as shown in Figure 9.8.
Figure 9.8. Launching the Component Wizard.
From the first page of the Component Wizard, select the Install New Component(s)
option. From the Install Components dialog, click the Add Files button. From there
we browse to our directory with our AppServer component and click on the Open
button. Click on the Add Files button once again and select the NWServer
component. After both files have been selected, our dialog looks like Figure 9.9.
Figure 9.9. Adding components to the package.
We click on the Finish button to add our components to the package. If we take a
look at our MTS Explorer, we will see that the two new components appear under
the Components folder and in the right pane. This is shown in Figure 9.10.
Figure 9.10. Our newly added components.
With our components now running inside MTS, we must make them accessible to
our client machines. The easiest way to do this is to use MTS to create an Export
Package. This package not only creates a client-side installer, it also creates a file
necessary to move a package from one MTS machine to another.
To export the package, we right-click on the Northwind Traders package and select
Export. In the dialog that appears, we enter the path to which we want to export, and click the Export
button. Upon completion of this process, MTS has created a NorthwindTraders.Pak
file in the directory that we specified. It has also placed a copy of AppServer.DLL
and NWServer.DLL into the same directory as the PAK file. Additionally, a
subdirectory named Clients has been created that contains a file named
NorthwindTraders.exe. This executable program is the setup program that sets
the appropriate registry settings on the client machine to enable remote access. If
we were to look at our references to our AppServer and NWServer components
within Visual Basic after running this installer, it would look something like Figure
9.12.
Figure 9.12. Our remote components installed on our
client.
From Figure 9.12, you can see how the file reference to our AppServer component
is now set to C:\Program Files\Remote Applications\{A65CA5FC-BADD-11D3…}.
The client-side installer set up this directory and remapped our AppServer reference
to it via the registry. It also modified the registry to inform the DCOM engine that
this component runs on a remote server.
Each time we install a component into an MTS server, a new GUID is generated for
that component. If we want to move our package to another MTS machine without
generating a new GUID, we must import into the new MTS machine the PAK file we
generated in the previous section. By doing this, our client applications do not need
to be recompiled with the new GUID, but instead simply point to the new MTS
server.
To import a PAK file, we simply right-click on our Packages folder on the target MTS
server and select the New menu item. From the Package Wizard that appears, we
select the Install Pre-Built Package option. On the Select Packages page, we browse
to the PAK file we created and select it. We click the Next button to arrive at the Set
Package Identity page, where we once again choose the Interactive User option. We
click the Next button once again, and enter the target location of where the files
should be installed. We click the Finish button to complete the process.
Summary
We also talked about some of the fundamentals of MTS. At one level, we looked at
the programming model that must be used to take full advantage of its transactional
and object pooling features. We also looked at how to deploy our MTS objects from
both a server- and client-side perspective.
In the next chapter, we will complete the last layer of the system, the user layer. We
will look at building reusable ActiveX controls that interface tightly with our
multipart distributed business object that we built in this chapter.
Chapter 10. Adding an ActiveX Control to the
Framework
User interface design can take on many different forms based on the many different
views on the subject. Indeed, such topics can be the subject matter of a book in
itself. In Part I, "An Overview of Tools and Technologies," we discussed how the
central design issue for an enterprise system is focused first on the business layer
and how the data and user layers are a natural outgrowth of this within our
framework. We also demonstrated the manifestation of the business and data
layers in Chapter 8, "The DataManager Library," and Chapter 9, "A Two-Part,
Distributed Business Object"; so now let us turn our attention to the user layer.
Design Theory
Although we can define our user layer directly using Visual Basic forms, we have
chosen to implement our user interface with ActiveX controls that are subsequently
placed into these forms. The reason for this is that it gives us the added flexibility of
placing these elements into an Internet Explorer (IE)-based browser, enabling us to
provide a rich interface that cannot be provided with simple HTML form elements.
Our design also enables us to transparently place these same controls into any other
environment that enables the use of ActiveX control hosting. The ultimate benefit
derived from this architecture is that we can place our controls in any VBA-enabled
application, giving us powerful integration opportunities.
To start our design, we must define our basic user interface metaphors. The entry
point into an application can vary, but here we follow a simple Microsoft Explorer
approach. Other approaches include a Microsoft Outlook-style interface, a simple
Single Document Interface (SDI), or a Multiple Document Interface (MDI).
We use the Explorer here because it maps easily to an object-oriented framework
and is simpler to build for the sake of exposition. For our individual dialogs, we are
following a simple tabbed dialog approach, again because of the natural mapping to
object orientation.
Implementation
This section discusses the details of building the Explorer- and Tabbed Dialog-style
interfaces necessary for our application framework.
Our Explorer interface is covered first. This interface mechanism is more generically
called an outliner because it is especially well suited for representing an object
hierarchy or set of hierarchies. This representation enables us to build a
navigational component for the user to quickly browse to an area of the system in
which he is particularly interested. It is easy to extend the infrastructure provided
by our outliner to implement an object selection mechanism, as well.
Our Tabbed Dialog interface is covered next. This interface has a more generic name,
often referred to as a property page. It is well suited to represent an object within
our system. Through the browsing mechanism provided by the outliner, we can
choose a particular object that interests us and open it up for viewing and potential
editing.
The initial development of our Explorer interface is easy because Visual Basic
provides a wizard to do most of the dirty work. For our Explorer, we have chosen not
to implement any menus but instead to rely solely on a toolbar. After we have used
Visual Basic's wizard to create an Explorer application, we create a new User Control
project named NWExplorer and copy over the objects and code. Before copying, we
delete the common dialog control that Visual Basic creates because we will not be
using it. In the target User Control project, we must set a component reference to
the Microsoft Windows Common Controls 6.0 control. We then create a Standard
EXE project, called Northwind, and add a form with the name frmNorthWind. We
set a component reference to our newly created NWExplorer component and drop it
onto frmNorthWind. We show the end result of this effort in Figure 10.1, albeit after
we have implemented the NWExplorer control that is to follow.
Many third-party Explorer-style controls are on the market, many of which you
might prefer to use rather than one implemented in these examples. We do not
intend for the code samples that follow to constitute a complete coverage of the
design theory of an Explorer control. Instead, our goal is to discuss how to use our
client components covered in the last chapter to complete the application. As such,
we do not spend any time going over the code that Visual Basic generates to
implement the Explorer. Instead, we focus on the code that we are adding to hook
this Explorer code into our IAppClient, IAppCollection, and IAppObject
components created in Chapter 9, "A Two-Part, Distributed Business Object."
The first item to discuss is another interface, which, in this case, we define to
support our Explorer control. We use this interface, which we call IExplorerItem,
to help us manage the information needed to drive the TreeView and
ListView controls that make up the Explorer. It is convenient that Microsoft defines
the Tag property of a TreeView Node object as a Variant so that we can use this to
hold a reference to an IExplorerItem object associated with the node. We use this
bound reference to help us determine the actions that the NWExplorer control must
take relative to user interaction. As with most of our interface definitions, the
majority of the properties are common and thus implemented by IExplorerItem.
However, there is one property that we must override for our specific
implementation.
To start with, we create a new ActiveX DLL in which to put our IExplorerItem
interface definition. Because we must create several other client-side classes to help
drive these ActiveX controls and the application in general, we call this project
AppCommon. This library constitutes our system layer on the client tier. We will be
adding other classes to this DLL throughout this chapter.
You might notice that many of the names look conspicuously close to our class type
constants, whereas others look a little different. These constants are purely
arbitrary because we tie them to our CT_xxx constants logically in our code. We use
the EIT_INIT, EIT_ROOT, EIT_CRC, EIT_ADMIN, and EIT_ALL constants for control
purposes. We demonstrate their use in code samples that follow. Note that our
Explorer not only provides navigation for the Northwind system but also selection
functions for the various dialogs we will be creating. This is the reason for the
EIT_ALL constant. We can place this control in Explorer mode by setting the
SelectMode property to EIT_ALL, while any other setting constitutes a selection
mode for a particular class.
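The numeric values behind these constants are not important to the design. As a rough
sketch, the enumeration in AppCommon might look like the following; the names match
those used in the Select Case statement below, but the values shown here are arbitrary
placeholders (only EIT_ALL = 999 is taken from the IE hosting example later in this
chapter):
Public Enum ExplorerItemTypes
    EIT_INIT = 0
    EIT_ROOT = 1
    EIT_ADMIN = 2
    EIT_CRC = 3
    EIT_COUNTRY_ROOT = 10
    EIT_COUNTRY = 11
    EIT_COUNTRY_REGION_ROOT = 12
    EIT_REGION = 13
    EIT_COUNTRY_REGION_CITY = 14
    EIT_CITY = 15
    EIT_LISTITEM = 20
    EIT_CATEGORY = 21
    EIT_PRODUCT_ROOT = 22
    EIT_PRODUCT_CATEGORY = 23
    EIT_PRODUCT = 24
    EIT_EMPLOYEE = 30
    EIT_CUSTOMER = 31
    EIT_SUPPLIER = 32
    EIT_SHIPPER = 33
    EIT_ORDER_ROOT = 40
    EIT_ORDER_OPEN = 41
    EIT_ORDER_ALL = 42
    EIT_ALL = 999
End Enum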
Case EIT_ADMIN
.Caption = "Administration"
.ImageIndex = IML16_FOLDER
.ImageIndexExpanded = IML16_FOLDER
.CanAdd = False
.CanDelete = False
.CanUpdate = False
.CanGoUp = True
Case EIT_COUNTRY_ROOT
.Caption = "Countries"
.ImageIndex = IML16_FOLDER
.ImageIndexExpanded = IML16_FOLDER
.CanAdd = True
.CanDelete = True
.CanUpdate = True
.CanGoUp = True
Case EIT_COUNTRY_REGION_ROOT
.Caption = "Regions"
.ImageIndex = IML16_FOLDER
.ImageIndexExpanded = IML16_FOLDER
.CanAdd = True
.CanDelete = True
.CanUpdate = True
.CanGoUp = True
Case EIT_COUNTRY_REGION_CITY
.Caption = "Cities"
.ImageIndex = IML16_FOLDER
.ImageIndexExpanded = IML16_FOLDER
.CanAdd = False
.CanDelete = False
.CanUpdate = False
.CanGoUp = True
Case EIT_COUNTRY
.Caption = "Country"
.ImageIndex = IML16_FOLDER
.ImageIndexExpanded = IML16_FOLDER
.CanAdd = True
.CanDelete = True
.CanUpdate = True
.CanGoUp = True
Case EIT_REGION
.Caption = "Country"
.ImageIndex = IML16_FOLDER
.ImageIndexExpanded = IML16_FOLDER
.CanAdd = True
.CanDelete = True
.CanUpdate = True
.CanGoUp = True
Case EIT_CITY
.Caption = "Country"
.ImageIndex = IML16_FOLDER
.ImageIndexExpanded = IML16_FOLDER
.CanAdd = True
.CanDelete = True
.CanUpdate = True
.CanGoUp = True
Case EIT_LISTITEM
.Caption = "Lists"
.ImageIndex = IML16_FOLDER
.ImageIndexExpanded = IML16_FOLDER
.CanAdd = False
.CanDelete = False
.CanUpdate = False
.CanGoUp = True
Case EIT_CATEGORY
.Caption = "Categories"
.ImageIndex = IML16_FOLDER
.ImageIndexExpanded = IML16_FOLDEROPEN
.CanAdd = True
.CanDelete = True
.CanUpdate = True
.CanGoUp = False
Case EIT_PRODUCT
.Caption = "Products"
.ImageIndex = IML16_FOLDER
.ImageIndexExpanded = IML16_FOLDEROPEN
.CanAdd = True
.CanDelete = True
.CanUpdate = True
.CanGoUp = False
Case EIT_PRODUCT_ROOT
.Caption = "Products"
.ImageIndex = IML16_FOLDER
.ImageIndexExpanded = IML16_FOLDEROPEN
.CanAdd = False
.CanDelete = False
.CanUpdate = False
.CanGoUp = True
Case EIT_PRODUCT_CATEGORY
.Caption = "Products Categories"
.ImageIndex = IML16_FOLDER
.ImageIndexExpanded = IML16_FOLDEROPEN
.CanAdd = True
.CanDelete = True
.CanUpdate = True
.CanGoUp = True
Case EIT_EMPLOYEE
.Caption = "Employees"
.ImageIndex = IML16_FOLDER
.ImageIndexExpanded = IML16_FOLDEROPEN
.CanAdd = True
.CanDelete = True
.CanUpdate = True
.CanGoUp = False
Case EIT_CUSTOMER
.Caption = "Customers"
.ImageIndex = IML16_FOLDER
.ImageIndexExpanded = IML16_FOLDEROPEN
.CanAdd = True
.CanDelete = True
.CanUpdate = True
.CanGoUp = False
Case EIT_ORDER_ROOT
.Caption = "Orders"
.ImageIndex = IML16_FOLDER
.ImageIndexExpanded = IML16_FOLDEROPEN
.CanAdd = False
.CanDelete = False
.CanUpdate = False
.CanGoUp = True
Case EIT_ORDER_OPEN
.Caption = "Open Orders"
.ImageIndex = IML16_FOLDER
.ImageIndexExpanded = IML16_FOLDEROPEN
.CanAdd = True
.CanDelete = True
.CanUpdate = True
.CanGoUp = True
Case EIT_ORDER_ALL
.Caption = "All Orders"
.ImageIndex = IML16_FOLDER
.ImageIndexExpanded = IML16_FOLDEROPEN
.CanAdd = True
.CanDelete = True
.CanUpdate = True
.CanGoUp = True
Case EIT_SUPPLIER
.Caption = "Suppliers"
.ImageIndex = IML16_FOLDER
.ImageIndexExpanded = IML16_FOLDEROPEN
.CanAdd = True
.CanDelete = True
.CanUpdate = True
.CanGoUp = False
Case EIT_SHIPPER
.Caption = "Shippers"
.ImageIndex = IML16_FOLDER
.ImageIndexExpanded = IML16_FOLDEROPEN
.CanAdd = True
.CanDelete = True
.CanUpdate = True
.CanGoUp = False
End Select
.Mode = RHS
End Property
As you can see from the previous code sample, we are simply setting the various
properties based on the type of Explorer item we are creating.
Although the startup process for the Northwind application is not complicated, it
helps to have a flowchart to help us through our discussion. We show this in Figure
10.2.
Figure 10.2. The Northwind Explorer startup process.
The code for our Activate event for frmNorthWind appears in Listing 10.3.
frmNorthWind Form
From the flowchart, we initially follow Path 1, which has us calling the
RegisterControl method of our NWExplorer user control. We format our
CommandLine parameter in a manner similar to an HTML-form post command line.
More specifically, the format is defined as "var1=value1&var2=value2." Using this
method, we can arbitrarily define and communicate parameters that are of interest.
For our example, we pass in Server and SecurityKey parameters. This latter
parameter is used by the security mechanism that is discussed in Chapter 15,
"Concluding Remarks." We use this strange calling approach to simplify the
integration of our ActiveX controls with our IE browser. The code for the
RegisterControl method appears in Listing 10.4.
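As a rough sketch, RegisterControl might parse the command line along these lines; the
Split-based parsing and the module-level Server and SecurityKey variables are
assumptions rather than the exact code from the listing:
Public Sub RegisterControl(ByVal CommandLine As String)
    Dim vPairs As Variant
    Dim vPair As Variant
    Dim i As Long

    ' Break "var1=value1&var2=value2" into its name/value pairs.
    vPairs = Split(CommandLine, "&")
    For i = LBound(vPairs) To UBound(vPairs)
        vPair = Split(vPairs(i), "=")
        If UBound(vPair) >= 1 Then
            Select Case LCase$(vPair(0))
                Case "server"
                    Server = vPair(1)
                Case "securitykey"
                    SecurityKey = vPair(1)
            End Select
        End If
    Next i
    ' RegisterControl then goes on to call InitClient on the AppClient object,
    ' as described next.
End Sub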
After parsing the command line, the RegisterControl method proceeds to call the
InitClient method on the control's AppClient object. This object is initially
instantiated as CNWClient and then mapped to AppClient, which is an instance of
IAppClient. We define both of these variables to be global in scope relative to the
user control and instantiate them on the UserControl_Initialize event, as seen
in Listing 10.6.
The InitClient method attempts to establish the connection to the remote MTS
object running on the server that we identified with our "Server=" portion of the
command line.
With tvTreeView.Nodes
.Clear
'root item.
Set NWExplorerItem = New CNWExplorerItem
Set ExplorerItem = NWExplorerItem
ExplorerItem.Mode = EIT_ROOT
With ExplorerItem
Set oRootNode = tvTreeView.Nodes.Add(, , , .Caption, .ImageIndex, .ImageIndex)
End With
oRootNode.ExpandedImage = ExplorerItem.ImageIndexExpanded
Set oRootNode.Tag = ExplorerItem
'initial settings….
Set tvTreeView.SelectedItem = oRootNode
CurrentNode = oRootNode
Call SetListViewHeader(EIT_INIT)
oRootNode.Expanded = True
ExitSub:
Exit Sub
End Sub
With ExplorerItem
Set oNode = tvTreeView.Nodes.Add(ANode, tvwChild, , .Caption, _
.ImageIndex, .ImageIndexExpanded)
oNode.ExpandedImage = .ImageIndexExpanded
End With
by the User
Screen.MousePointer = vbHourglass
' check for our dummy node…we put it there to get the +
If Not oTreeNode.Child Is Nothing Then
If oTreeNode.Child.Text = "DUMMY" Then
tvTreeView.Nodes.Remove (oTreeNode.Child.Index)
End If
End If
Case EIT_COUNTRY_ROOT
If Not ExplorerItem.Loaded Then
ExplorerItem.Loaded = True
Set CountryItems = AppClient.LoadCollection(CT_COUNTRY, 0, 0)
Set AppCollection = CountryItems
For i = 1 To AppCollection.Count
Set CountryItem = AppCollection.Item(i)
Set NWExplorerItem = New CNWExplorerItem
Set ExplorerItem = NWExplorerItem
ExplorerItem.Mode = EIT_COUNTRY_REGION_ROOT
ExplorerItem.AppObject = CountryItem
Set oNode = tvTreeView.Nodes.Add(oTreeNode, tvwChild, , _
CountryItem.Name, _
IML16_FOLDER, IML16_FOLDEROPEN)
Set ChildNode = tvTreeView.Nodes.Add(oNode, tvwChild, , _
"DUMMY", 0, 0)
Set oNode.Tag = ExplorerItem
Next
End If
Case EIT_COUNTRY_REGION_ROOT
If Not ExplorerItem.Loaded Then
ExplorerItem.Loaded = True
Set AppObject = ExplorerItem.AppObject
Set RegionItems = AppClient.LoadCollection(CT_REGION, _
AppObject.Id, _
AppObject.SubId)
Set AppCollection = RegionItems
For i = 1 To AppCollection.Count
Set RegionItem = AppCollection.Item(i)
Set NWExplorerItem = New CNWExplorerItem
Set ExplorerItem = NWExplorerItem
ExplorerItem.Mode = EIT_COUNTRY_REGION_CITY
ExplorerItem.AppObject = RegionItem
Set oNode = tvTreeView.Nodes.Add(oTreeNode, tvwChild, , _
RegionItem.Name, _
IML16_FOLDER, IML16_FOLDEROPEN)
Set ChildNode = tvTreeView.Nodes.Add(oNode, tvwChild, , _
"DUMMY", 0, 0)
Set oNode.Tag = ExplorerItem
Next
End If
End Select
ExitFunction:
Screen.MousePointer = vbDefault
LoadChildren = ErrorItems.Count = 0
Exit Function
ErrorTrap:
Call HandleError(Err.Number, Err.Source, Err.Description)
Resume Next
End Function
The parameters for this method include the Node object that received the event and
the event type indicated by iEventType. We define three constants that let us know
what type of event generated this method call so that we can handle it appropriately.
We define them as TRE_NODECLICK, TRE_EXPAND, and LVW_DBLCLICK. We first
ensure that the Tag property of the Node object contains a reference to an
IExplorerItem object. If so, we proceed to extract its mode property, which tells us
the type of Explorer item it is. Typically, we add special processing here only if we
have to build the child list dynamically as part of a database request. In this case,
we have two nodes of this type: "Products" and "Cities." We define all other
child nodes statically as part of the LoadRoot method, with the TreeView
automatically handling expansion. After we check for a child expansion, we proceed
to transfer any child nodes over to the ListView, mimicking the functionality of the
Microsoft Windows Explorer. If we are not performing a child expansion, we proceed
to call the LoadDetail method that populates our ListView.
Our LoadDetail method is similar to many of our business layer methods in that we
must dimension variable references for all our potential object collections that we
load into the ListView. The code for the LoadDetail method appears in Listing
10.9.
Example 10.9. The LoadDetail Method on Our
Case EIT_SHIPPER
Set ShipperItems = AppClient.LoadCollection(CT_SHIPPER, 0, 0)
Set AppCollection = ShipperItems
lvListView.Visible = False
RaiseEvent ItemSelectable(False)
CurrentListViewMode = iMode
For i = 1 To AppCollection.Count
Set ShipperItem = AppCollection.Item(i)
With ShipperItem
Set oItem = lvListView.ListItems.Add(, , .CompanyName, _
IML32_ITEM, IML16_ITEM)
oItem.SubItems(1) = .Phone
End With
Set oItem.Tag = ShipperItem
Next i
Case EIT_EMPLOYEE
Set EmployeeProxyItems = _
AppClient.LoadCollection(CT_EMPLOYEE_PROXY, 0, 0)
Set AppCollection = EmployeeProxyItems
lvListView.Visible = False
RaiseEvent ItemSelectable(False)
CurrentListViewMode = iMode
For i = 1 To AppCollection.Count
Set EmployeeProxyItem = AppCollection.Item(i)
With EmployeeProxyItem
Set oItem = lvListView.ListItems.Add(, , .LastName & ", " & _
.FirstName, _
IML32_ITEM, IML16_ITEM)
End With
Set oItem.Tag = EmployeeProxyItem
Next i
Case EIT_CUSTOMER
Set CustomerProxyItems = _
AppClient.LoadCollection(CT_CUSTOMER_PROXY, 0, 0)
Set AppCollection = CustomerProxyItems
lvListView.Visible = False
RaiseEvent ItemSelectable(False)
CurrentListViewMode = iMode
For i = 1 To AppCollection.Count
Set CustomerProxyItem = AppCollection.Item(i)
With CustomerProxyItem
Set oItem = lvListView.ListItems.Add(, , .CompanyName, _
IML32_ITEM, IML16_ITEM)
oItem.SubItems(1) = .CustomerCode
End With
Set oItem.Tag = CustomerProxyItem
Next i
Case EIT_SUPPLIER
Set SupplierProxyItems = _
AppClient.LoadCollection(CT_SUPPLIER_PROXY, 0, 0)
Set AppCollection = SupplierProxyItems
lvListView.Visible = False
RaiseEvent ItemSelectable(False)
CurrentListViewMode = iMode
For i = 1 To AppCollection.Count
Set SupplierProxyItem = AppCollection.Item(i)
Set oItem = lvListView.ListItems.Add(, , _
SupplierProxyItem.CompanyName, _
IML32_ITEM, IML16_ITEM)
Set oItem.Tag = SupplierProxyItem
Next i
Case EIT_COUNTRY_REGION_CITY
Set AppObject = ExplorerItem.AppObject
Set CityItems = AppClient.LoadCollection(CT_CITY, _
AppObject.Id, AppObject.SubId)
Set AppCollection = CityItems
lvListView.Visible = False
RaiseEvent ItemSelectable(False)
CurrentListViewMode = iMode
For i = 1 To AppCollection.Count
Set CityItem = AppCollection.Item(i)
Set oItem = lvListView.ListItems.Add(, , CityItem.Name, _
IML32_ITEM, IML16_ITEM)
Set oItem.Tag = CityItem
Next i
Case EIT_PRODUCT_CATEGORY
CurrentListViewMode = EIT_PRODUCT
Set ExplorerItem = CurrentNode.Tag
Set CategoryItem = ExplorerItem.AppObject
Set AppObject = CategoryItem
vCriteria = Array(Array("CategoryId", "=", AppObject.Id))
vOrder = Array("Name")
Set ProductItems = _
AppClient.LoadQueryCollection(CT_PRODUCT, _
vCriteria, vOrder)
Set AppCollection = ProductItems
lvListView.Visible = False
RaiseEvent ItemSelectable(False)
CurrentListViewMode = iMode
For i = 1 To AppCollection.Count
Set ProductItem = AppCollection.Item(i)
With ProductItem
Set oItem = lvListView.ListItems.Add(, , .Name, _
IML32_ITEM, IML16_ITEM)
oItem.SubItems(1) = .QuantityPerUnit
oItem.SubItems(2) = .UnitPrice
oItem.SubItems(3) = .UnitsInStock
oItem.SubItems(4) = .UnitsOnOrder
oItem.SubItems(5) = IIf(.IsDiscontinued, "Yes", "No")
End With
Set oItem.Tag = ProductItem
Next i
Case EIT_ORDER_ALL
Set OrderProxyItems = _
AppClient.LoadCollection(CT_ORDER_PROXY, _
0, 0)
Set AppCollection = OrderProxyItems
lvListView.Visible = False
RaiseEvent ItemSelectable(False)
CurrentListViewMode = iMode
For i = 1 To AppCollection.Count
Set OrderProxyItem = AppCollection.Item(i)
With OrderProxyItem
Set oItem = lvListView.ListItems.Add(, , .CustomerName, _
IML32_ITEM, IML16_ITEM)
oItem.SubItems(1) = IIf(.OrderDate = vbEmpty, "", .OrderDate)
oItem.SubItems(2) = IIf(.RequiredDate = vbEmpty, "", _
.RequiredDate)
oItem.SubItems(3) = IIf(.ShippedDate = vbEmpty, "", _
.ShippedDate)
oItem.SubItems(4) = .EmployeeLastName & "," & _
.EmployeeFirstName
End With
Set oItem.Tag = OrderProxyItem
Next i
Case EIT_ORDER_OPEN
vCriteria = Array(Array("ShippedDate", "is", "null"))
vOrder = Array("RequiredDate", "CustomerName")
Set OrderProxyItems = _
AppClient.LoadQueryCollection(CT_ORDER_PROXY, _
vCriteria, vOrder)
Set AppCollection = OrderProxyItems
lvListView.Visible = False
RaiseEvent ItemSelectable(False)
CurrentListViewMode = iMode
For i = 1 To AppCollection.Count
Set OrderProxyItem = AppCollection.Item(i)
With OrderProxyItem
Set oItem = lvListView.ListItems.Add(, , .CustomerName, _
IML32_ITEM, IML16_ITEM)
oItem.SubItems(1) = IIf(.OrderDate = vbEmpty, "", _
.OrderDate)
oItem.SubItems(2) = IIf(.RequiredDate = vbEmpty, "", _
.RequiredDate)
oItem.SubItems(3) = IIf(.ShippedDate = vbEmpty, "", _
.ShippedDate)
oItem.SubItems(4) = .EmployeeLastName & "," & _
.EmployeeFirstName
End With
Set oItem.Tag = OrderProxyItem
Next i
End Select
End If
ExitSub:
lvListView.Visible = True
Call SetObjectCount(lvListView.ListItems.Count)
If lvListView.ListItems.Count > 0 Then
Set lvListView.SelectedItem = lvListView.ListItems.Item(1)
RaiseEvent ItemSelectable(True)
End If
Exit Sub
ErrorTrap:
Call HandleError(Err.Number, Err.Source, Err.Description)
Resume Next
End Sub
We start this method by extracting the ExplorerItem associated with the currently
selected Node object in the TreeView. Based on the value of the Mode property of
this ExplorerItem, we run through a Select Case statement to determine our
course of action. As you might notice, most of the actions are simple calls to the
LoadCollection method of the AppClient for a given class type. After we have
loaded the necessary collection, we proceed to iterate through it, moving the
information into the ListView. A convenient CurrentListViewMode property is
responsible for setting up our ListView header columns, based on the type of
collection we are loading. By placing all this ListView initialization code into a single
property, we make it easier to maintain in the future.
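As an illustration of this idea, the property might look something like the following
sketch; the backing variable and the column captions are assumptions based on the
SubItems populated in Listing 10.9:
Private Property Let CurrentListViewMode(ByVal iMode As Integer)
    ' Rebuild the ListView columns to match the collection about to be loaded.
    With lvListView.ColumnHeaders
        .Clear
        Select Case iMode
            Case EIT_SHIPPER
                .Add , , "Company Name"
                .Add , , "Phone"
            Case EIT_CUSTOMER
                .Add , , "Company Name"
                .Add , , "Customer Code"
            Case EIT_PRODUCT
                .Add , , "Product Name"
                .Add , , "Quantity Per Unit"
                .Add , , "Unit Price"
                .Add , , "Units In Stock"
                .Add , , "Units On Order"
                .Add , , "Discontinued"
            ' ...the remaining EIT_xxx cases follow the same pattern...
        End Select
    End With
    miCurrentListViewMode = iMode   ' hypothetical backing variable
End Property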
We deviate a bit from this simple LoadCollection approach for our EIT_PRODUCT_
CATEGORY and EIT_ORDER_OPEN cases in which we use a LoadQueryCollection to
load the collection of products for a given category. We rely on the AppObject
property of the ExplorerItem object to get the CategoryId for the query. We also
use a LoadQueryCollection to help us load the detail for the open orders, where
we check for a null ship date.
One of the other items you might have noticed is that we have defined new
collection classes with the word Proxy in their names. We define these objects as
scaled-down versions of their fully populated siblings. We must define this all the
way back to the NWServer component, creating new class type constants and
modifying the GetClassDef method to support these new classes. We also must
define the necessary classes in NWClient. We take the extra development effort to
define these lighter-weight classes so that we can minimize network traffic and
latency during our browsing process. A user does not need to see every data
element of every object to find what interests him.
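To give a feel for what such a definition involves, the server-side entry for one of
these proxies might look something like the following fragment for the GetClassDef
Select Case on NWServer; the view name, column names, and the CT_ORDER_PROXY value
are assumptions rather than the actual definitions:
Case CT_ORDER_PROXY
    Set ClassDef = New CClassDef
    With ClassDef
        .DatabaseName = "NWIND"
        ' A narrow view exposing only the columns needed while browsing.
        .ReadLocation = "View_Order_Proxy"
        .WriteLocation = ""            ' proxies are read-only
        .IdColumnName = "Order_Id"
        .OrderByColumnName = "Customer_Name"
    End With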
Now that we have all the pieces in place, we must begin responding to user input.
We start by attaching an event handler to our ToolBar control. To accomplish this,
we must first define a set of constants that corresponds to the button indexes within
the ToolBar control. For example:
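One possible set of definitions appears below; the exact index values are assumptions
that depend on how the buttons and separators are laid out on the ToolBar:
Private Const TBR_NEW As Integer = 1
Private Const TBR_DELETE As Integer = 2
Private Const TBR_PROPERTIES As Integer = 3
Private Const TBR_UPONE As Integer = 5        ' 4 is a separator
Private Const TBR_LVLARGE As Integer = 7      ' 6 is a separator
Private Const TBR_LVSMALL As Integer = 8
Private Const TBR_LVLIST As Integer = 9
Private Const TBR_LVDETAILS As Integer = 10
Private Const TBR_HELP As Integer = 12        ' 11 is a separator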
You should notice that these constants are not contiguous because of the separator
buttons that are in use in the ToolBar control.
Next, we create a DoToolEvent function that is nothing more than a Select Case
statement switched on the index value of the button the user clicks. We map the
ButtonClick method of the ToolBar control to this DoToolEvent method (see
Listing 10.10).
Case TBR_DELETE
Call DeleteItem
Case TBR_PROPERTIES
Call EventRaise(emUpdate)
Case TBR_UPONE
CurrentNode = CurrentNode.Parent
Set tvTreeView.SelectedItem = CurrentNode
Call tvTreeView_NodeClick(CurrentNode)
Case TBR_LVLARGE
tbToolBar.Buttons.Item(TBR_LVLARGE).Value = tbrPressed
lvListView.View = lvwIcon
Case TBR_LVSMALL
tbToolBar.Buttons.Item(TBR_LVSMALL).Value = tbrPressed
lvListView.View = lvwSmallIcon
Case TBR_LVLIST
tbToolBar.Buttons.Item(TBR_LVLIST).Value = tbrPressed
lvListView.View = lvwList
Case TBR_LVDETAILS
tbToolBar.Buttons.Item(TBR_LVDETAILS).Value = tbrPressed
lvListView.View = lvwReport
Case TBR_HELP
MsgBox "Add 'Help' button code."
End Select
ExitSub:
If ErrorItems.Count > 0 Then
ErrorItems.Show
End If
Exit Sub
ErrorTrap:
Call HandleError(Err.Number, Err.Source, Err.Description)
Resume Next
End Sub
You should notice that for our Add and Edit functionality we are calling a private
method called EventRaise. We must use an event because we are within a user
control, and this is the only mechanism to communicate outward. We must send this
event out, along with critical information, to the host application whether it is a
Visual Basic form or an IE5 HTML page. The host application is then responsible for
taking the appropriate action. For all other button actions, we are relying on
functionality within this user control. Our EventRaise code appears in Listing 10.11.
Upon entering the method, we attempt to extract an AppObject object from the
ExplorerItem object that we receive via the Tag property of the currently selected
ListItem object of the ListView control. If we are in delete mode for this method,
we prompt the user with a confirmation message. We use a CMessageBox class in
our AppCommon library, which we have defined specifically for this process. For other
modes, we simply raise the ActionRequest event outward for handling. We cover
the host application's response to this event in the section titled "The Tabbed
Dialog," later in this chapter.
Within our host application, we have the following simple code within our
ActionRequest event handler to manage our object addition and update logic (see
Listing 10.12).
Form
.Id = 0
.SubId = 0
End If
.Mode = EditMode
.Server = Server
.SecurityKey = SecurityKey
.Show vbModal
End With
Set frmOrder = Nothing
End Select
End Sub
Note that the frmOrder form contains our NWOrder control that we will be
developing in the "The Tabbed Dialog" section.
tbToolBar.Buttons.Item(TBR_DELETE).Enabled = .CanDelete
mnuObjectDelete.Enabled = .CanDelete
tbToolBar.Buttons.Item(TBR_PROPERTIES).Enabled = .CanUpdate
mnuObjectEdit.Enabled = .CanUpdate
Else
tbToolBar.Buttons.Item(TBR_NEW).Enabled = False
mnuObjectNew.Enabled = False
tbToolBar.Buttons.Item(TBR_DELETE).Enabled = False
mnuObjectDelete.Enabled = False
tbToolBar.Buttons.Item(TBR_PROPERTIES).Enabled = False
End If
tbToolBar.Buttons.Item(TBR_UPONE).Enabled = .CanGoUp
End With
End If
End Sub
Now that we have the control basics down, we present NWExplorer running within
the context of IE5 in Figure 10.3. Note that IE4 is also acceptable for ActiveX control
hosting. It is also possible to host ActiveX controls within Netscape Navigator
running on Windows 95/98/NT if you use a plug-in.
Figure 10.3. The Northwind Explorer control within
IE5.
The HTML code required to embed the control and activate it appears in Listing
10.14. We will be spending much more time in later chapters demonstrating how to
implement controls as part of Web pages. Note that the value for clsid might vary
from that shown in Listing 10.14.
<HTML>
<HEAD>
<META NAME="GENERATOR" Content="Microsoft Visual Studio 6.0">
<TITLE>Northwind Traders</TITLE>
<script LANGUAGE="VBScript">
<!--
Sub Page_Initialize
On Error Resume Next
Call NWExplorer.RegisterControl("server=PTINDALL2&securitykey=")
NWExplorer.SelectMode = 999 ' EIT_ALL
NWExplorer.InitControl
End Sub
-->
</script>
</HEAD>
<BODY ONLOAD="Page_Initialize" rightmargin=0 topmargin=0
leftMargin=0 bottomMargin=0>
<OBJECT classid="clsid:41AC6690-8E70-11D3-813B-00805FF99B76"
id=NWExplorer style="LEFT: 0px; TOP: 0px"
width=100% height=100%>
</OBJECT>
</BODY>
</HTML>
The HTML shown in Listing 10.14 was generated using Microsoft Visual InterDev 6.0.
We demonstrate the use of this tool in Chapter 12, "Taking the Enterprise
Application to the Net."
Although the concept of a tabbed dialog is intrinsically simple, we must give some
thought to the best layout of our elements on the various tabs. Remembering the
statement about the user layer being an outgrowth of the business layer offers us
some guidance here. Suppose we have an object hierarchy like the one shown in
Figure 10.4. Here we have a root object containing several subobjects that are
collections.
Figure 10.4. A sample object hierarchy.
We want to handle this "bundle" of information using the root object Cportfolio;
therefore, we might lay out our tabbed dialog as shown in Figure 10.5. This model
should follow any well-designed business layer.
Figure 10.5. Our sample object hierarchy mapped to a
tabbed dialog.
For a specific implementation example, we develop a tabbed dialog control for the
COrder object and its contained COrderDetailItems collection. We will
demonstrate not only the basics of user interface design but also the integration of
user interface elements with our AppClient.
To start, we create a User Control project and name it NWOrder. We place a ToolBar
control and a tabbed dialog with two tabs onto our layout space. We name the first
tab General, as shown in Figure 10.6, and the other Detail, as shown in Figure 10.7.
Figure 10.6. The NWOrder control's General tab.
Figure 10.7. The NWOrder control's Detail tab.
Our Form_Activate event in our host application for the NWOrder control is identical
to the one we designed for our NWExplorer. Similarly, we implement
RegisterControl and InitControl methods that connect to our AppClient
component and initialize the control, respectively. Our initialization flow appears in
Figure 10.8.
Figure 10.8. The frmOrder form startup process.
The implementation of our InitControl method is quite different in our NWOrder
control than in the NWExplorer control. The code for the NWOrder implementation
appears later in Listing 10.15.
Call SetStatusText("Initializing…")
picGeneral.Visible = False
Screen.MousePointer = vbHourglass
picGeneral.Visible = True
Screen.MousePointer = vbDefault
ExitSub:
If ErrorItems.Count > 0 Then
ErrorItems.Show
RaiseEvent UnloadMe
End If
Call SetStatusText("Ready.")
Exit Sub
ErrorTrap:
Call HandleError(Err.Number, Err.Source, Err.Description)
Resume Next
End Sub
To manage our tab states, we define two form-level property arrays known as
TabClick and TabDirty. We implement these two properties as arrays, with one
element for each tab. We also have a form-level property known as FormDirty. We
initialize all these properties at the start of our InitControl method. We then
proceed to check for whether we are initializing in Update or Insert mode via our
Mode property set by our host application. If the former, we load our global private
Order and OrderDetailItems using our AppClient. If the latter, we simply
instantiate new objects of these types. We then call our ClearControls method for
the first tab, which clears all controls on the tab. Finally, we set the CurrentTab
property to the first tab.
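As a minimal sketch, assuming the General tab is index 0 and the Detail tab is index 1,
these members might be declared as follows (shown here as simple module-level arrays
for brevity):
Private Const TAB_GENERAL As Integer = 0
Private Const TAB_DETAIL As Integer = 1

Private TabClick(TAB_GENERAL To TAB_DETAIL) As Boolean   ' tab has been visited
Private TabDirty(TAB_GENERAL To TAB_DETAIL) As Boolean   ' tab has unsaved edits
Private FormDirty As Boolean
Private bLoading As Boolean
Private iCurrentTab As Integer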
iCurrentTab = iTab
bLoading = True
If TabClick(iTab) Then GoTo ExitProperty
Call SetStatusText("Initializing…")
Screen.MousePointer = vbHourglass
Select Case iTab
Case TAB_GENERAL
Call SetControlsFromObjects(TAB_GENERAL)
Case TAB_DETAIL
' need to load listview here
' or else we get into a nasty loop
picDetailsTab.Visible = False
Call LoadListView
picDetailsTab.Visible = True
Call SetControlsFromObjects(TAB_DETAIL)
End Select
TabClick(iTab) = True
ExitProperty:
Call SetStatusText("Ready…")
Screen.MousePointer = vbDefault
iCurrentTab = iTab
bLoading = False
Call SetStates
Exit Property
ErrorTrap:
Call HandleError(Err.Number, Err.Source, Err.Description)
End Property
We first check to see whether the user has already clicked on this tab, by examining
the TabClick property. If this returns True, we exit out of this property. If not, we
proceed to load the controls. If we are on the TAB_GENERAL tab, we simply call the
SetControlsFromObjects method. If we are on the TAB_DETAIL tab, we must first
load the ListView control with the OrderDetailItems collection before we can call
the SetControlsFromObjects method. The code for our SetControlsFromObjects
method appears in Listing 10.17.
b = bLoading
bLoading = True
Notice that our Detail tab contains a ListView control with a series of controls below
it. The values in these secondary controls correspond to a row in the ListView, with
each column mapping to one of the controls. We have chosen this approach for
demonstration purposes only. In many cases, you might want to use an advanced
grid control, which has embedded ComboBox and CommandButton capabilities.
After we have loaded our control with the necessary object information, we must
begin reacting to user inputs. We use the Validate event on our TextBox controls
to ensure that our application performs appropriate property validation. For
example, our txtFreight TextBox control has the validation code shown in Listing
10.18.
Field-Level Validation
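As a sketch of what such a handler might look like (the message text and the simple
IsNumeric check are assumptions):
Private Sub txtFreight_Validate(Cancel As Boolean)
    ' Reject non-numeric input before focus is allowed to leave the control.
    If Len(Trim$(txtFreight.Text)) > 0 And Not IsNumeric(txtFreight.Text) Then
        Call SetStatusText("Freight must be a numeric value.")
        Cancel = True
    End If
End Sub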
We also use the KeyDown and KeyPress events to track whether a user changes a
value so that we can set our TabDirty and FormDirty properties. For an example,
see Listing 10.19.
Dirty Status
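A sketch of such a handler follows; hooking the KeyPress event of the txtFreight
control is shown only as an example of the pattern:
Private Sub txtFreight_KeyPress(KeyAscii As Integer)
    ' Ignore keystrokes generated while the control is still loading values.
    If bLoading Then Exit Sub
    TabDirty(iCurrentTab) = True
    FormDirty = True
End Sub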
Notice that we have implemented many of our input fields as Label and
CommandButton controls. For these fields, we are relying on the SelectMode of our
NWExplorer control to help. Figure 10.9 shows the selection of the customer for the
order.
Figure 10.9. The Explorer control in selection mode
After the user has made the necessary changes to the order and/or modified
elements in the OrderDetailItems collection, he or she can proceed to save the
changes to the database. For this, we reverse the process of loading the NWOrder
control. The Save method implements this process (see Listing 10.20).
If TabDirty(TAB_GENERAL) Then
If Not SetControlsToObjects(TAB_GENERAL) Then GoTo ExitFunction
Set AppObject = Order
If Not AppObject.IsValid(Errors) Then
Call ErrorItems.MakeFromVariantArray(Errors, vbObjectError, _
"NWOrder", "Save")
ErrorItems.Show
GoTo ExitFunction
End If
End If
If TabDirty(TAB_DETAIL) Then
Set AppCollection = OrderDetailItems
For i = 1 To AppCollection.Count
Set AppObject = AppCollection.Item(i)
If AppObject.Id > 0 Then
If AppObject.IsDirty Then
Call AppClient.UpdateObject(AppObject.ClassId, AppObject)
End If
Else
AppObject.Id = 0
Call AppClient.InsertObject(AppObject.ClassId, AppObject)
End If
Next i
Mode = emUpdate
TabDirty(TAB_DETAIL) = ErrorItems.Count > 0
Call InitControl
End If
RaiseEvent ObjectSave
Call SetStates
ExitFunction:
Save = ErrorItems.Count = 0
Screen.MousePointer = vbDefault
Call SetStatusText("Ready.")
Exit Function
ErrorTrap:
Call HandleError(Err.Number, Err.Source, Err.Description)
End Function
For a given tab, we call the SetControlsToObjects method to move the control
information into the appropriate properties. We then call the IsValid method on
the AppObject or AppCollection objects to make sure that there are no issues
across property values. An example could be that the ship date occurs before the
order date. If validation succeeds, we call the necessary AppClient update or insert
functionality for the AppObject or AppCollection objects, depending on which tab
we are saving. We then clear the dirty flags and refresh the controls.
Summary
We have reached a milestone with the conclusion of this chapter because we have
implemented the complete set of functionality necessary to build a three-tiered
application. Figure 10.10 shows graphically what we have accomplished.
Figure 10.10. Our three-tiered application.
Up to this point, focus for the framework has been on the input, or information
generating, side of the application. When you look at our goal of moving the sample
Northwind application into an n-tiered, distributed framework, you can see that the
work is not complete because several reports defined in the original application are
no longer available after this migration of functionality. This chapter shows how Active Server Pages
(ASPs), coupled with the framework components running on Microsoft Transaction
Server (MTS), can be used to replace most of the standard reporting functions in a
manner that provides access to a much broader audience. For complex reports that
cannot be handled within ASP directly, specialized reporting objects are built and
deployed on MTS.
Design Theory
Implementation
To build out our reporting functionality, we will be using Visual InterDev 6.0. If you
have not ventured far beyond the Visual Basic environment, you will need to install
the FrontPage 98 extensions on your Internet Information Server (IIS)
development machine. You can perform this installation using the NT 4.0 Option
Pack on the IIS machine. Be aware that running the NT 4.0 Option Pack on an NT
Workstation will install Peer Web Services (PWS) instead of IIS. This is fine for our
purposes because PWS and IIS are similar. When I refer to IIS from this point
forward, it includes PWS installations.
Visual InterDev 6.0 tries to be many things, perhaps to the point of causing
confusion. When we try to create a new project, in addition to a Visual InterDev
project, we are given the choices of creating database projects, distribution units,
utility projects, and Visual Studio Analyzer projects. A database project is simply a
database development environment similar to Microsoft Access, with the added
option to debug stored procedures within Microsoft SQL Server. A distribution unit
can be one of several types. One option is a cabinet (CAB) file that is used by the
Microsoft setup engine. A second option is a self-extracting setup that uses one or
more CAB files to build a self-contained installer. The last option is simply a Zip file.
It is difficult to discern the purpose of the last two options. Nonetheless, Visual
InterDev's forte is in its capability to manage and edit Web projects. These Web
projects will be the manifestation of our reporting engine in this chapter. We will
continue with this same project in the next chapter as we create the entire Web site
portal for our application.
Before proceeding with the details of building the reporting engine, it is important to
understand that ASP represents a programming model that runs on an IIS server.
ASP code never crosses over to the browser. Instead, it produces the HTML stream
that is sent to the browser. Because of this, an ASP page that generates
browser-neutral HTML can support both Netscape Navigator and Internet Explorer.
This is no different from server-side Perl or C code that generates HTML to send back
to the browser. No Perl or C code is ever passed back to the browser.
After you have access to an IIS installation, you can create the Web project. The
easiest way to do this is from within Visual InterDev. Start Visual InterDev and
select File, New Project from the main menu. This brings up the New Project dialog
as shown in Figure 11.1.
Figure 11.1. Creating a new Web application in Visual
InterDev.
Enter NorthwindTraders for the project name, and then click the Open button. This
launches the Web Project Wizard. On Step 1 of the wizard, choose or enter the name
of the Web server that will host this application, and select Master mode to have the
Web application automatically updated with changes as they are made. This mode
should be switched to local after a Web application enters production. After you click
the Next button, the wizard attempts to contact the server and to verify that it is
configured appropriately.
On Step 2 of the wizard, select the Create a New Web Application option and accept
the default application name. Click the Next button to arrive at Step 3 of the wizard.
Ensure that <none> is selected so that no navigation bars are applied. Click the
Next button one last time to arrive at Step 4. Once again, ensure that <none> is
selected to make sure that no themes are applied either. Click the Finish button to
tell Visual InterDev to create the project.
Upon completing this process, the Project Explorer shows the newly created project
with several folders underneath it. The _private and _ScriptLibrary folders are
used directly by Visual InterDev. The images folder can be used to place the images
that are used by the Web site. A file titled global.asa also appears. This file is used
globally to declare objects and to define application- and session-level events used
by the Web application. It is discussed in further detail in Chapter 12, "Taking the
Enterprise Application to the Net."
To make the functionality of NWServer available to IIS you must create specific
wrapper functions because VBScript cannot deal with interface implementations as
can Visual Basic. For example, the following code fragment does not work in
VBScript:
Dim AppServer
Dim NWServer
Set NWServer = CreateObject("NWServer.CNWServer")
Set AppServer = NWServer
Call AppServer.InitServer
This code fails on the last line because VBScript considers AppServer to be of type
CNWServer, but it does not have visibility to its IAppServer interface in which the
InitServer method is defined.
To circumvent this issue, wrapper functions are built for each method that must be
exposed to IIS. Listing 11.1 shows the code for each data access method on the
IAppServer interface.
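As an illustration, a wrapper for the query method used by the reports later in this
chapter might look like the following sketch; the parameter names and the exact
signature of the underlying QueryObjectListData method are assumptions:
Public Sub IISQueryObjectListData(ByVal ClassId As Long, _
                                  ByVal Criteria As Variant, _
                                  ByVal OrderBy As Variant, _
                                  ByVal Conjunction As String, _
                                  ByRef PropertyNames As Variant, _
                                  ByRef Data As Variant, _
                                  ByRef Errors As Variant)
    Dim AppServer As IAppServer
    ' From inside CNWServer we can reach our own IAppServer implementation,
    ' something a VBScript caller cannot do.
    Set AppServer = Me
    Call AppServer.QueryObjectListData(ClassId, Criteria, OrderBy, Conjunction, _
                                       PropertyNames, Data, Errors)
End Sub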
As you can see from Listing 11.1, the implementation of these wrapper functions is
trivial in nature.
Before the report generators using ASP within IIS can be realized, a service-layer
component needs to be built. There are two primary reasons for this. The first
reason is to provide a mechanism to implement the functionality that is available in
Visual Basic but not in VBScript. Specifically, the Visual Basic Format function—used
to format dates, currency, and percentages—is not available in VBScript; therefore,
a VBAFormat wrapper function is created. A CFunctions class is created to provide
an anchor point for this and future wrapper functions. This class is defined within an
ActiveX DLL component called AppIISCommon. The simple code for the CFunctions
class is shown in Listing 11.2.
CFunctions Class
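A sketch of the class follows; the handling of an empty format string is an assumption:
Option Explicit

Public Function VBAFormat(ByVal Expression As Variant, _
                          Optional ByVal sFormat As String) As String
    ' Expose the Visual Basic Format function to VBScript callers.
    If Len(sFormat) = 0 Then
        VBAFormat = CStr(Expression)
    Else
        VBAFormat = Format$(Expression, sFormat)
    End If
End Function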
The second reason is to simplify the retrieval of information from the variant arrays
that are returned from MTS. For this, a CDataArray class is also created within
AppIISCommon. The CDataArray class has an Initialize method that accepts
Data and PropertyNames arguments; both arguments are of the array data type.
This method sets an internal private reference to the Data argument and proceeds
to create a property index for the array using a Dictionary object. It does this by
calling a private MakeDictionary method. The Dictionary object is defined in the
Microsoft Scripting Runtime (scrrun.dll), which should be referenced by the
AppIISCommon project. Several derived properties are also defined (MinRow and
MaxRow) to simplify iteration through the data array. Finally, an Item method is
implemented to extract from the array a particular property for a given row. The
code for the CDataArray class is shown in Listing 11.3.
Option Explicit
Private vData As Variant
Private dict As Dictionary
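Based on that description, the remaining members might look something like the
following sketch; the orientation of the array (properties in the first dimension,
rows in the second) is an assumption:
Public Sub Initialize(ByVal Data As Variant, ByVal PropertyNames As Variant)
    vData = Data
    Call MakeDictionary(PropertyNames)
End Sub

Public Property Get MinRow() As Long
    MinRow = LBound(vData, 2)
End Property

Public Property Get MaxRow() As Long
    MaxRow = UBound(vData, 2)
End Property

Public Function Item(ByVal PropertyName As String, ByVal Row As Long) As Variant
    ' Translate the property name into its index through the dictionary.
    Item = vData(dict.Item(PropertyName), Row)
End Function

Private Sub MakeDictionary(ByVal PropertyNames As Variant)
    Dim i As Long
    Set dict = New Dictionary
    For i = LBound(PropertyNames) To UBound(PropertyNames)
        dict.Add CStr(PropertyNames(i)), i
    Next i
End Sub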
With the NWServer component modified to handle calls from IIS and the
development of the service-layer component AppIISCommon complete, the first
report can be built. The first report to build is the Products by Category report from
the original Northwind database. This report provides a simple grouping of products
by category. The original Microsoft Access report shows only the current product list.
To demonstrate the flexibility of ASP as a reporting tool, the sample report will allow
for the display of both current and discontinued products.
The first step of adding a new report is to make sure that the appropriate
information set, in terms of a ClassDef instance, is defined within NWServer. If not,
add the definition in the GetClassDef method, making sure that the appropriate
view in the database has been defined as well. For this report, a new ClassDef is
needed, as shown in Listing 11.4, which presents the code fragment from the Select Case
statement in the GetClassDef method on NWServer. After this simple change is
made, NWServer is recompiled and redeployed to MTS.
Example 11.4. Addition of the ProductByCategory Class to NWServer
Case CT_PRODUCT_BY_CATEGORY
Set ClassDef = New CClassDef
With ClassDef
.DatabaseName = "NWIND"
.ReadLocation = "View_Product_By_Category"
.WriteLocation = ""
.IdColumnName = ""
.OrderByColumnName = "Category_Name, Product_Name"
Set IAppServer_GetClassDef =
mIAppServer.ClassDefs.Item(CStr(ClassId))
End Function
With this new ClassDef in place, attention returns to Visual InterDev to write the
report in ASP and deploy it on a Web site. To create a new ASP page in the
NorthwindTraders project, simply right-click the servername/NorthwindTraders
node in the Project Explorer and select Add, Active Server Page from the pop-up
menu. This launches the Add Item dialog with the ASP page type selected. In the
Name field, enter ProductReports.asp, and then click the Open button to create
the file. Repeat this process to create a ProductReport.asp file as well.
The ProductReports.asp file is used to gather some direction from the user before
proceeding with the generation of the actual report in the ProductReport.asp file.
This technique is used across all types of reports that require initial user input. For
this set of reports, the only information needed from the user is which type of report
to run: All Products, Current Products, or Discontinued Products. The script needed
to implement a simple selector mechanism appears in Listing 11.5, whereas the
resulting HTML page appears in Figure 11.2.
<%@ Language=VBScript%>
<%Response.Expires=5%>
<HTML>
<HEAD>
<META NAME="GENERATOR" Content="Microsoft Visual Studio 6.0">
<TITLE>northwind traders</TITLE>
</HEAD>
<BODY>
<%
Dim vReports(3)
vReports(1)="Current Products"
vReports(2)="All Products"
vReports(3)="Discontinued Products"
%>
<H1>Product Reporting</H1>
<FORM action=ProductReport.asp>
<SELECT id=ReportType name=ReportType>
<%
For i = 1 To UBound(vReports)
If CInt(i) = 1 Then
%>
<option selected value='<%=i%>'><%=vReports(i)%></option>
<%
Else
%>
<option value='<%=i%>'><%=vReports(i)%></option>
<%
End If
Next
%>
</SELECT>
<P>
<INPUT type="submit" value="Run Report">
</FORM>
</BODY>
</HTML>
There is nothing too exciting about the code in Listing 11.5 because it produces a
simple HTML Form page. One item to note is that an array is being used for the
report names rather than a hard-coded mechanism. This makes it easier to add new
report types to a form by simply adding new elements to the array. The real
excitement comes in the ProductReport.asp code because it is what interacts with
MTS to produce the desired report. The code for this page appears in Listing 11.6, with
the resulting HTML page shown in Figure 11.3.
Figure 11.3. The product report rendered in Internet
Explorer.
Product Report
<%@ Language=VBScript%>
<%Response.Expires=5%>
<HTML>
<HEAD>
<META NAME="GENERATOR" Content="Microsoft Visual Studio 6.0">
<TITLE>northwind traders</TITLE>
</HEAD>
<BODY>
<%
Dim Data, PropertyNames
Dim DataArray
Dim Errors, NWServer
Dim ReportType, ReportTypeId
Dim WhereClause, OrderClause
Dim IsDiscontinued
Const CT_PRODUCT_BY_CATEGORY = 201
ReportTypeId = Request.QueryString("ReportType")
Select Case ReportTypeId
Case 1
ReportType = "Current Products"
WhereClause = Array(Array("IsDiscontinued","=","False"))
Case 2
ReportType = "All Products"
WhereClause = ""
Case 3
ReportType = "Discontinued Products"
WhereClause = Array(Array("IsDiscontinued","=","True"))
End Select
OrderClause = Array("CategoryName","ProductName")
Call NWServer.IISQueryObjectListData(CT_PRODUCT_BY_CATEGORY, _
WhereClause, _
OrderClause, _
"OR", _
PropertyNames, _
Data, _
Errors)
If IsArray(PropertyNames) and IsArray(Data) Then
Set DataArray = Server.CreateObject("AppIISCommon.CDataArray")
DataArray.Initialize Data, PropertyNames
If IsArray(Data) Then
%><H1>Product By Category</H1>
<H2><%=ReportType%></H2>
<TABLE WIDTH="100%" CELLPADDING=2 CELLSPACING=0 border=0>
<%
For i = DataArray.MinRow To DataArray.MaxRow
vThisCategory = DataArray.Item("CategoryName",i)
If (vThisCategory <> vLastCategory) Then
%>
<TR><TD colspan=2> </TD></TR>
<TR>
<TD colspan=2>
<B>Category:</B>
<%=vThisCategory%>
</TD>
</TR>
<TR>
<TH align=left>Product Name</TH>
<TH align=left>Units In Stock</TH>
<% If ReportTypeId = 2 Then %>
<TH align=left>Discontinued</TH>
<% End If %>
</TR>
<%
vLastCategory = vThisCategory
End If
%>
<TR>
<TD>
<%=DataArray.Item("ProductName",i)%>
</TD>
<TD>
<%=DataArray.Item("UnitsInStock",i)%>
</TD>
<% If ReportTypeId = 2 Then
If CBool(DataArray.Item("IsDiscontinued",i)) Then
IsDiscontinued = "Yes"
Else
IsDiscontinued = "No"
End If
Response.Write("<TD>" & IsDiscontinued & "</TD>")
End If %>
</TR>
<%
Next
Else
%>
<TR>
<TD>No data found</TD>
</TR>
<%
End if
%>
</TABLE>
<%
End If
%>
</BODY>
</HTML>
Looking at Listing 11.6, the first item to pay attention to is the Select Case
statement at the beginning of the script section. It is here that several variables are
set based on the specific report type requested. This report type is retrieved from
the QueryString collection on the Request object that ASP maintains automatically.
Based on which report type is selected, different WhereClause arrays are created to
pass to the IISQueryObjectListData method on the NWServer component. After
NWServer is created using the CreateObject method on the Server object, the
retrieval method is called. This passes control to MTS to perform the request.
Remember that this is calling the exact same underlying code as that used by the
Visual Basic client-side components developed in Chapters 9, "A Two-Part,
Distributed Business Object," and 10, "Adding an ActiveX Control to the
Framework."
After the request has been fulfilled, a CDataArray object is created and initialized
with the resulting information. From this point, iterating through the array and
formatting the report using a simple HTML table construct is easy. The MinRow and
MaxRow properties help in this iteration process. Additionally, the script chooses
whether to add the Discontinued column based on the report type because it only
makes sense on the All Products version of the report. To handle grouping, a simple
breaking mechanism that compares the current category with the last category is
used. If the values are different, a category header is written to the HTML stream.
Amazingly, this is all that is needed to build a simple ASP-based report using the
MTS infrastructure already created. One of the common statements about Web
reporting is that it just looks ugly. Well, if you leave the formatting of these reports
as it is in this example, then yes they do. Fortunately, HTML can be used to create
appealing pages with only modest effort. As proof, look at the same two reports in
Figure 11.4 and Figure 11.5 with some polish work added to them. The specific
techniques that were used to make the reports look better are discussed in more
detail in Chapter 12.
Figure 11.4. The product report mode selector with a
makeover.
Figure 11.5. The product report with a makeover.
Our last little makeover to the product report also demonstrates one of the greatest
advantages we have in using ASP as our reporting engine. We can change the look
and feel, the structure, or the processing logic at any time in a development
environment, and then push it to the production environment on the server. After it
has been moved, the report is updated. There is no redistribution of anything to the
client.
Some types of reports that are traditionally built in commercial report writers
include not only single-level grouping functionality, as demonstrated in the previous
section, but also multilevel grouping and preprocessing. ASP can easily
accommodate these features as well. To demonstrate, the Employee Sales report
from the original Northwind application will be transformed into ASP next. Again, to
enable this report, several new views are created on the database and a new
ClassDef is added to NWServer. The code fragment for this appears in Listing 11.7.
Again, after this simple change is made, NWServer is recompiled and redeployed to
MTS.
Example 11.7. Addition of the EmployeeSales Class to NWServer
Case CT_EMPLOYEE_SALES
Set ClassDef = New CClassDef
With ClassDef
.DatabaseName = "NWIND"
.ReadLocation = "View_Employee_Sales"
.WriteLocation = ""
.IdColumnName = ""
.OrderByColumnName = "Country, Shipped_Date, Last_Name,
First_Name"
Set IAppServer_GetClassDef =
mIAppServer.ClassDefs.Item(CStr(ClassId))
End Function
To continue the development of these reports, two new ASP files are added to the
project: EmployeeSalesReports and EmployeeSalesReport. For this set of reports,
the user criteria form is more complex than the previous example with the addition
of start date and stop date selection mechanisms. The generation of the SELECT
fields for these two dates is similar to the previous example. The code fragment in
Listing 11.8 shows the initialization information necessary to generate the various
form elements.
<%
Dim vStartMonth, vStartDay, vStartYear
Dim vEndMonth, vEndDay, vEndYear, vEndDate
Dim vMonths(12)
vMonths(1) = "January"
vMonths(2) = "February"
vMonths(3) = "March"
vMonths(4) = "April"
vMonths(5) = "May"
vMonths(6) = "June"
vMonths(7) = "July"
vMonths(8) = "August"
vMonths(9) = "September"
vMonths(10) = "October"
vMonths(11) = "November"
vMonths(12) = "December"
%>
</SELECT>
Note the use of the VBAFormat method of the Functions object to extract the month,
day, and year components of the start and stop dates. This Functions object is
declared in the global.asa file for the project, which has the effect of making the
object reference available to all ASP pages within the application. By defining it in
this manner, this often-used object does not need to be constantly re-created as
users access the site. The following code fragment from the global.asa file makes
this reference:
<OBJECT RUNAT=Server
SCOPE=Application
ID=Functions
PROGID="AppIISCommon.CFunctions">
</OBJECT>
Additionally, the code to generate the SELECT form elements for the start date is
shown. Notice the use of the If CInt(i) = CInt(vStartDay) construct. Because
VBScript is based exclusively on the variant data type, these extra CInt functions
are required to ensure that the comparison is made properly. In some cases,
VBScript does not perform the appropriate comparison unless it is told to do so
explicitly. It is a good idea to develop the habit of writing comparisons this way;
it can save you hours of debugging spent assuming a comparison was made correctly
when VBScript actually did something else.
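The day selector illustrates this pattern well. A minimal sketch of it, assuming the form element is named StartDay, that vStartDate holds the selected start date, and that the "dd" format string is what VBAFormat uses to extract the day, might look like this:
<%
vStartDay = Functions.VBAFormat(vStartDate, "dd")
%>
<SELECT id=StartDay name=StartDay>
<%
For i = 1 To 31
If CInt(i) = CInt(vStartDay) Then
Response.Write("<option selected value='" & i & "'>" & i & "</option>")
Else
Response.Write("<option value='" & i & "'>" & i & "</option>")
End If
Next
%>
</SELECT>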
The resulting page (this time with the makeover applied from the outset) appears in Figure
11.6.
Figure 11.6. The employee sales report criteria
selection screen.
The code in the EmployeeSalesReport that is called interacts with MTS in a manner
similar to the ProductReport page. Again, a Select Case statement is used to set
various report-specific variables that are used by the call to the NWServer object to
retrieve the information in the appropriate sort order. The code fragment that
performs this work is shown in Listing 11.9.
<%
Dim Data, PropertyNames, Errors, NWServer
Dim DataArray
StartDateClause = Array("ShippedDate",">=",StartDate)
StopDateClause = Array("ShippedDate","<=",StopDate)
WhereClause = Array(StartDateClause,StopDateClause)
Call NWServer.IISQueryObjectListData(CT_EMPLOYEE_SALES, _
WhereClause, _
OrderByClause, _
"AND", _
PropertyNames, Data, Errors)
For this report, the difference between the two report modes is simply the sort order,
as indicated by the Select Case statement. Looking at the second case and the
assignment of the OrderByClause variable, notice the keyword DESC that follows
the SalesAmount property definition. This keyword is used by the
QueryObjectListData method of IAppServer to sort in a descending order instead
of the default ascending order. The remainder of the code fragment in Listing 11.9
is identical to that of Listing 11.6.
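The Select Case itself does not appear in the fragment shown here; based on the description, it would take roughly the following shape (the report-mode values and the exact property lists are assumptions):
<%
Select Case CInt(ReportMode)
Case 1  ' chronological within each country and employee
OrderByClause = Array("Country", "ShippedDate", "LastName", "FirstName")
Case 2  ' largest sales first; DESC reverses the default ascending sort
OrderByClause = Array("Country", "SalesAmount DESC", "LastName", "FirstName")
End Select
%>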
Because this report must calculate two aggregate fields based on the two grouping
levels by Country and Employee, the DataArray object must be preprocessed
before the report is actually written. To store these aggregates, Dictionary objects
are used; the use of the Dictionary object is mandated because, unlike Visual
Basic, VBScript does not have a Collection class. This Dictionary object is
actually a more powerful version of a collection because it has a built-in method to
check for key existence coupled with the capability to generate an array of key
values. This preprocessing is shown in Listing 11.10. Once again, we monitor the
values for the country and employee fields to determine when our groups break.
Example 11.10. Preprocessing the DataArray Object
<%
Set diSalesByPerson = Server.CreateObject("Scripting.Dictionary")
Set diSalesByCountry = Server.CreateObject("Scripting.Dictionary")
For i = DataArray.MinRow To DataArray.MaxRow
vThisPerson = DataArray.Item("LastName",i) & "|" & _
DataArray.Item("FirstName",i)
If (vLastPerson <> vThisPerson) Then
Call diSalesByPerson.Add(CStr(vLastPerson),vPersonTotal)
vLastPerson = vThisPerson
vPersonTotal = 0
End If
vThisCountry = DataArray.Item("Country",i)
If (vLastCountry <> vThisCountry) Then
Call diSalesByCountry.Add(CStr(vLastCountry),vCountryTotal)
vCountryTotal = 0
vLastCountry = vThisCountry
End If
vSales = DataArray.Item("SalesAmount",i)
vPersonTotal = vPersonTotal + vSales
vCountryTotal = vCountryTotal + vSales
Next
Call diSalesByPerson.Add(CStr(vLastPerson),vPersonTotal)
Call diSalesByCountry.Add(CStr(vLastCountry),vCountryTotal)
%>
After the preprocessing is complete, a second pass through the DataArray object is
made to format the report. The resulting report is shown in Figure 11.7.
Figure 11.7. The employee sales reporting screen.
This second report example demonstrates that multilevel reports with preprocessed
data can easily be built in ASP. This section and the previous section also
demonstrate the ease at which new ClassDef objects can be added to NWServer to
enable these reports. Although this technique does involve a different development
methodology from a traditional report writer, it broadens the audience of end users
in a manner that these report writers cannot match. This technique also remains
tightly integrated to the application framework we have put into place to this point,
promoting our goal of maximum reuse.
Although several techniques have been demonstrated that implement much of the
basic functionality of standard report writers, there are still times when the
formatting complexity of a report is more than ASP can efficiently handle. In these
cases, a custom report generator can be built and deployed in MTS that writes the
complex HTML stream back to ASP. As an example, several calendar-style reports
are developed.
To begin development, a new ActiveX DLL named AppReports is created. This DLL
is designed to be usable across various applications rather than just our sample
application. As such, it defines several core classes. For a basic calendar,
a CCalDay class and its CCalDays collection class are defined. The CCalDays
collection class has the intelligence necessary to build a basic calendar grid for a
given month and year. It also has the capability to generate an HTML-formatted
table for inclusion into an ASP page. The CCalDay class has a TextRows collection
that enables the report developer to place HTML-formatted information snippets for
a given day of the month. The details of the implementations of these two classes
are not discussed, although their full source code accompanies this book.
The AppReports library also defines two other classes. One is an interface class
called ICalendarReport, and the other is called CReportImplementation. These
two classes are used to make the addition of new reports to an application as
administratively friendly a process as possible. The ICalendarReport interface is
used simply to enable the implementation of multiple calendar-style reports that
have as their only inputs the month and year of the calendar to generate. The
CReportImplementation class is used to map report names to their implementation
class for use by a Visual Basic CreateObject statement. Listing 11.11 shows the
code for ICalendarReport, whereas Listing 11.12 shows
CReportImplementation.
Example 11.11. The ICalendarReport Interface Class
Option Explicit
Private mCalendarMonth As Integer
Private mCalendarYear As Integer
' DoReport signature inferred from the DoCalendarReport call in Listing 11.14
Public Sub DoReport(Stream As Variant, CalendarMonth As Integer, CalendarYear As Integer)
End Sub
Example 11.12. The CReportImplementation Class
Option Explicit
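Listing 11.12 is reproduced only in fragment form here. As the text describes, the class amounts to little more than a name-to-ProgID mapping, so a minimal sketch consistent with its use in Listing 11.13 would be the following (the member names mReportName and mLibraryName are assumptions):
Private mReportName As String
Private mLibraryName As String

Public Property Let ReportName(ByVal Value As String)
    mReportName = Value
End Property
Public Property Get ReportName() As String
    ReportName = mReportName
End Property

Public Property Let LibraryName(ByVal Value As String)
    mLibraryName = Value
End Property
Public Property Get LibraryName() As String
    LibraryName = mLibraryName
End Property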
With the core AppReports component complete, the component to build the reports
can be built. The NWReports component is defined as an ActiveX DLL as well, and it
is designed to run under MTS. First, a special class called CNWCalendarReports is
created to do nothing more than to enumerate the calendar-style reports
implemented by the NWReports component. The code for this CNWCalendarReports
class is shown in Listing 11.13.
Example 11.13. The CNWCalendarReports Class
Option Explicit
Private Index As Integer
Private mCol As Collection
Public Sub AppendType(ReportName As String, LibraryName As String)
Dim ReportImplementation As New CReportImplementation
With ReportImplementation
.ReportName = ReportName
.LibraryName = LibraryName
End With
mCol.Add ReportImplementation, ReportImplementation.ReportName
End Sub
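The remaining members of CNWCalendarReports are not reproduced above. Based on how the class is used in Listing 11.14, a sketch of them might look like the following; the report name and ProgID passed to AppendType are purely illustrative:
Private Sub Class_Initialize()
    Set mCol = New Collection
    ' Register each calendar-style report implemented by NWReports.
    Call AppendType("Order Shipments Calendar", "NWReports.CShipmentsCalendar")
End Sub

Public Property Get Count() As Integer
    Count = mCol.Count
End Property

Public Function Item(Key As Variant) As CReportImplementation
    ' Key can be a numeric index or a report name, as in Listing 11.14.
    Set Item = mCol.Item(Key)
End Function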
One other class, CNWReportServer, is built within the NWReports component. This
class is called into action by IIS to accomplish the generation of the complex HTML
stream for the calendar reports through its DoCalendarReport method. Before this
call, the user must select the desired report, which is provided to the user criteria
page through a CalendarReportNames property on the CNWReportServer class. The code
for the CNWReportServer class appears in Listing 11.14.
Example 11.14. The CNWReportServer Class
Option Explicit
Public Property Get CalendarReportNames() As Variant
Dim vRet As Variant
Dim ReportImplementation As CReportImplementation
Dim NWCalendarReports As New CNWCalendarReports
Dim i As Integer
vRet = Array(1)
ReDim Preserve vRet(1 To NWCalendarReports.Count)
For i = 1 To NWCalendarReports.Count
vRet(i) = NWCalendarReports.Item(i).ReportName
Next i
CalendarReportNames = vRet
End Property
LibraryName = NWCalendarReports.Item(ReportName).LibraryName
Set CalendarReport = CreateObject(LibraryName)
Call CalendarReport.DoReport(vDataStream, _
CInt(CalendarMonth), _
CInt(CalendarYear))
ExitFunction:
DoCalendarReport = vDataStream
Exit Function
ErrorTrap:
'1. Send detailed message to EventLog
Call WriteNTLogEvent("CNWReportServer:DoCalendarReport", _
Err.Number, _
Err.Description, _
Err.Source & " [" & Erl & "]")
vDataStream = "<p>" & "CNWReportServer:DoCalendarReport" & _
Err.Number & " " & Err.Description & " " & _
Err.Source & " [" & Erl & "]" & "</p>"
Turning to Visual InterDev, two new ASP files are added to the Northwind Traders
project: CalendarReports and CalendarReport. To build the list of available
reports for CalendarReports, the NWReportServer object on MTS is called as
shown in Listing 11.15, to produce the page shown in Figure 11.8.
Example 11.15. The CalendarReports ASP
<%
Dim HTMLStream, NWReportServer
Dim i, vMonth, vYear, vReportNames
vMonth = Functions.VBAFormat(Now,"mm")
vYear = Functions.VBAFormat(Now,"yyyy")
%>
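The portion of the script that actually builds the report list is not reproduced above. A sketch of it, using the CalendarReportNames property shown in Listing 11.14 and writing the names into a SELECT element whose name matches what CalendarReport.asp reads from the query string, might be:
<%
Set NWReportServer = Server.CreateObject("NWReports.CNWReportServer")
vReportNames = NWReportServer.CalendarReportNames
%>
<SELECT id=ReportName name=ReportName>
<%
For i = LBound(vReportNames) To UBound(vReportNames)
Response.Write("<option value='" & vReportNames(i) & "'>" & _
vReportNames(i) & "</option>")
Next
%>
</SELECT>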
In the Calendar Reporting page, the script is simple as well, as shown in Listing
11.16, producing the page shown in Figure 11.9.
<%
Set NWReportServer = Server.CreateObject("NWReports.CNWReportServer")
vReportName = Request.QueryString.Item("ReportName")
HTMLStream = NWReportServer.DoCalendarReport(vMonth, vYear, vReportName)
Response.Write(HTMLStream)
%>
Summary
This chapter has provided examples of how to use ASP as a distributed reporting
engine in place of traditional report writers. Several techniques have been
demonstrated to generate both simple- and medium-complexity reports using just
ASP coupled with the existing MTS business objects. Additionally, a technique to
generate complex reports was demonstrated, which used ASP in conjunction with
MTS-hosted reporting objects that subsequently tapped into the business objects.
In the next chapter, I discuss the development of an intranet portal site for the
application. Some specific topics include how style sheets and server-side include
files have been used to produce the nicely formatted pages shown in some of the
examples seen in this chapter. Additionally, the portal concept is discussed as a
means not only to provide reports to end users, but also to provide access to the
underlying information sets managed by the system.
Chapter 12. Taking the Enterprise Application
to the Net
With the industry's broad move toward the Internet in mind, this application framework has been designed from the
outset to easily support an Internet component. Part of this foresight is seen in the
choice of tools and technologies that have driven the implementation to this point.
The DNA underpinnings of this framework have played a dramatic role in this effort,
as was evident during our first foray into Internet Information Server (IIS) in the
previous chapter. In this chapter, much more attention is given to the development
of the Internet portion of the framework, focusing specifically on both intranets and
Internets.
Before getting into the details of intranet and Internet sites, we cover some generic
techniques that are applicable to both domains. As should be clear by this point, two fundamental
principles have driven design and implementation decisions to this point: flexibility
and standardization. Development efforts in the IIS realm are no different. For
maintenance efficiency, it is highly desirable to have the flexibility to make global
application changes at singular locations. It is also desirable to have the
implementation of similar functionality performed in standardized manners. The
topics covered in this section are driven by these two requirements.
Style Sheets
A style sheet is a set of formatting rules, embedded through a special HTML tag, that enables a developer to control how textual
content is rendered by the browser. Specifically, the developer can specify style
classes that can be assigned to specific HTML tag types—for example, <TD>, <H1>,
or <P> tags—or used globally by any tag type. The most often-used style properties
include those for font, text color, background color, and text alignment. Style sheets
also enable one other type of formatting: controlling how hyperlinks are rendered
based on their various states.
As an aside, style sheets are gaining importance in their use beyond the simple
HTML format standardization discussed in this section. In the eXtensible Markup
Language (XML) standard, style sheets are also used to automatically apply
formatting to the data embodied in an XML block within an HTML page. The new
eXtensible HTML (XHTML) standard also makes similar use of the style sheet
mechanism for formatting. We discuss and use the XML standard in much more
detail in the next chapter, although our primary purpose will be as a data transfer
mechanism that does not need style. Nonetheless, it is important to understand the
role that style sheets play today and where their use is headed in the near future.
NOTE
There are many more style properties than will be covered in this section, because
complete coverage of them is beyond the scope of this book. Any good book on
HTML should provide more than adequate information on this topic. The intent of
this section is to introduce the concept of using style sheets to provide a flexible
mechanism for driving Web site consistency.
Style sheets are placed into an HTML document using a <STYLE> block within the
<HEAD> block of the HTML page, as shown in Listing 12.1.
Example 12.1. A Style Sheet Embedded in an HTML Document
<HTML>
<HEAD>
<TITLE>Some Title</TITLE>
<STYLE TYPE="text/css">
<!--
A:active { color: mediumblue; }
A:link {color: mediumblue;}
A:visited {color: mediumblue;}
A:hover {color: red;}
TD.HeaderOne
{
BACKGROUND-COLOR: #009966;
COLOR: #ffff99;
FONT-FAMILY: Arial, Verdana, 'MS Sans Serif';
FONT-SIZE: 10pt;
FONT-WEIGHT: normal
}
TD.HeaderOne-B
{
BACKGROUND-COLOR: #009966;
COLOR: #ffff99;
FONT-FAMILY: Arial, Verdana, 'MS Sans Serif';
FONT-SIZE: 10pt;
FONT-WEIGHT: bold
}
TD.ResultDetailHeader-l
{
COLOR: black;
FONT-FAMILY: Arial, Verdana, 'MS Sans Serif';
FONT-SIZE: 8pt;
FONT-WEIGHT: bold;
TEXT-ALIGN: left
}
TD.ResultData {
FONT-FAMILY: Arial, Verdana, 'MS Sans Serif';
FONT-SIZE: 8pt;
FONT-WEIGHT: normal;
TEXT-ALIGN: left
}
-->
</STYLE>
</HEAD>
<BODY>
</BODY>
</HTML>
In Listing 12.1, the information within the <STYLE> block is surrounded by the <!--
and --> comment tags to prevent older browsers that do not support style
sheets from rendering the style rules as literal text on the page. In the <STYLE
TYPE="text/css"> line, the css refers to the term Cascading Style Sheet, which is
the name of the standard adopted by the World Wide Web Consortium (W3C) in
1996 to define style sheets for HTML. Internet Explorer (IE) 3.0 and Netscape
Navigator 4.0 were the first browsers to adopt subsets of these standards, with later
versions of each adopting more of the standard. The term cascading refers to the
way style classes are merged if they are defined multiple times within an HTML
document.
The formats associated with hyperlinks are formally known as pseudo-classes in the
W3C standard because they are based on tag states instead of tag content. For the
<A> tag given in the example, the four pseudo-classes include active, link,
visited, and hover. The first three are formally defined by the W3C standard,
whereas the last is a Microsoft extension for Internet Explorer. For each of these
pseudo-classes, a color style property is defined using named color values. These
color names are based on extensions to HTML 3.2, which initially defined only 16
colors. Netscape extended these names to several hundred to coincide with the
colors available in the X-Windows system, with subsequent support by Microsoft
Internet Explorer. Colors can also be provided as Red-Green-Blue (RGB) color
triples using either the format #RRGGBB or the statement rgb(RRR,GGG,BBB). In the
former case, the values are given in hexadecimal format, whereas in the latter case,
the values are provided in decimal format. For example, the following are equivalent
color property statements in HTML:
color: silver
color: #C0C0C0
color: rgb(192,192,192)
Looking at the example once again, the color properties for the active, link, and
visited properties are set to mediumblue, whereas the color for the hover
property is set to red. Having the common color scheme of the first three properties
has the effect of preventing the browser from changing the color of the hyperlink
after a user has clicked on the link. The effect of the last property is to have the link
highlighted in red when the mouse pointer is directly over the hyperlink text. By
placing this set of pseudo-class definitions in the style sheet, all the hyperlinks in the
current HTML document follow these effects.
Looking next at the style properties for the various TD-based classes in Listing 12.1,
we can see that font-, text-, and color-level definitions are given. Specifically,
FONT-FAMILY, FONT-SIZE, and FONT-WEIGHT properties are defined for fonts. For
text definitions, a TEXT-ALIGN property is defined. For color definitions, COLOR and
BACKGROUND-COLOR properties are defined.
Look more closely at the FONT-WEIGHT property in the example. For this property, named
values, such as bold and normal, can be used to indicate whether to use boldface.
Alternatively, boldness values can be given in the form of numbers that are
multiples of 100, between 100 (lightest) and 900 (boldest). The keyword bold
corresponds to a value of 700, whereas the value 400 corresponds to the keyword
normal.
The only text-based property defined in the example is TEXT-ALIGN. Values that can
be assigned to this property include left, right, center, or justify. If this
property is not defined, left is assumed. Other text properties that are available
but not shown include TEXT-DECORATION for special effects, such as strikethrough
and blinking; TEXT-INDENT to implement hanging and normal indents on the first
line of a paragraph; and TEXT-TRANSFORM to modify letter capitalization.
Now that a style sheet is defined within the <HEAD> section of an HTML document, it
is a simple matter to make references to the style classes from within the tags used
throughout the remainder of the HTML document. As mentioned before, the style
associated with hyperlinks is automatically enforced throughout the entire
document after the definition is made. For the other classes, they must be explicitly
used. As an example of style use, a fragment of HTML generated by the
EmployeeSalesReport.asp page in the previous chapter appears in Listing 12.2.
Example 12.2. An HTML Fragment Generated Using the Style Sheet
As you can see in the various <TD> tags, a class= statement within the tag indicates
the style to associate with the tag. You should also note that nothing special is done
in the <A> tags to make them use the special pseudo-class effects defined in the
style sheet.
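The pattern is simple enough to illustrate with a short, representative fragment; the data values and the EmployeeDetail.asp link target here are illustrative only, while the class names come from Listing 12.1:
<TR>
<TD class="ResultDetailHeader-l">Last Name</TD>
<TD class="ResultDetailHeader-l">First Name</TD>
<TD class="ResultDetailHeader-l">Sales Amount</TD>
</TR>
<TR>
<TD class="ResultData">Davolio</TD>
<TD class="ResultData">Nancy</TD>
<TD class="ResultData"><A HREF="EmployeeDetail.asp?id=1">$35,000.00</A></TD>
</TR>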
You might be thinking to yourself that although this style sheet mechanism does
offer flexibility, it still requires that each of the HTML documents making up a Web
site has a style sheet in its <HEAD> section. For a Web site with hundreds or
thousands of pages, it would be difficult to make style changes because each
document, or more appropriately, each Active Server Page (ASP) generating these
documents, would have to be modified to support the change. This would indicate
that there is no real flexibility offered by the style sheet approach. This is a valid
assessment, so the HTML specification allows for the linkage of style sheets into an
HTML document. The mechanism for this is as follows:
<head>
<title>northwind traders</title>
<LINK REL=stylesheet TYPE="text/css" HREF="stylesheets/nw01.css">
</head>
With this approach, these same hundreds or thousands of documents can make this
reference to a style sheet so that changes made to it are immediately reflected
throughout the Web site.
Creating a style sheet is easy. Although it can be done directly in a text editor
following the W3C specifications, Visual InterDev provides a simple editor for doing
so. To add a style sheet to an existing project, simply right-click on the project node
within Visual InterDev and select the Add option and then the Style Sheet option.
The Add Item dialog appears with the Style Sheet option selected by default.
Change the name of the style sheet to nw01.css, and click the Open button. This
brings up the style sheet editor with a default BODY class created, as shown in Figure
12.1.
Figure 12.1. A new style sheet added to the Northwind Traders project.
To create a new class within a style sheet, right-click on the Classes folder within
the style sheet editor, and then select Insert Class to bring up the Insert New Class
dialog. To make a tag-specific class, select the Apply To Only the Following Tag
checkbox and select the appropriate tag name from the list. Type the name of the
new class in the Class Name field, and click OK. The new class is added under the
Classes folder, and the properties page for your new class, in which you can set the
various style properties, is brought up on the right side. Figure 12.2 shows the
results of adding the TD.ResultData class after the font properties have been set on
the Font tab.
Figure 12.2. A new style class added to the Northwind Traders style sheet.
Text properties are set on the Layout tab, whereas the background color is set on
the Background tab. Clicking on the Source tab shows the HTML code for the style
sheet with the currently selected style class in bold. Notice that this text is similar to
the format of the original style sheet that was embedded in the <HEAD> section.
Although style sheets can control the look and feel of individual tags in an HTML
document, they cannot provide an overall template for the document. For example,
if you look at many commercial Web sites, you might notice that they have similar
headers or footers across all their pages, or at least throughout various subsections
of the site. One mechanism to accomplish this standardization, while following the
flexibility mantra, is to use server side includes. These files are separate HTML or
ASP code snippets that are pulled into an ASP as a pre-processing step before the
final generation of the HTML stream that is sent back to the client. An example of
such a reference can be seen in the ASP script code from the ProductReports2.asp
file given in the previous chapter. A fragment of this code is provided in Listing 12.3.
Example 12.3. Using Server Side Include Files
<%
For i = 1 To UBound(vReports)
If CInt(i) = 1 Then
Response.Write("<option selected value='" & i & "'>" & _
vReports(i) & "</option>")
Else
Response.Write("<option value='" & i & "'>" & _
vReports(i) & "</option>")
End If
Next
%>
</SELECT>
</TD>
<TD width=30% align="center">
<BR><INPUT type="submit" value="Run Report">
</TD>
</TR>
</TABLE>
</FORM>
<!--#include file="ServerScripts\GetFormFooter.inc"-->
<!--#include file="ServerScripts\GetpageFooter.inc"-->
</BODY>
Four files are included in this simple script. The GetPageHeader.inc file is
responsible for generating the standard header of the page, whereas its
GetPageFooter.inc counterpart generates the standard footer. Similarly,
GetFormHeader.inc and GetFormFooter.inc generate the table structures to give
a consistent look and feel to all forms used throughout the Web site. Figure 12.3
indicates the specific areas that are generated by these include files.
Figure 12.3. The areas of the user criteria screen generated by the include files.
Notice the .inc extension given to the server side include files to indicate that these
are not fully functional ASP scripts but rather ASP fragments. Although this extension
is useful for identifying them as included script files, it makes them more difficult to
edit in Visual InterDev. Because Visual InterDev does not recognize the extension, it
opens them up in a standard text edit mode without the nice, yellow highlights at the
beginning and end of script blocks, which have the <% and %> markers. Nor is it able
to identify tags in black, keywords in red, values in blue, comments in gray, and so
forth. If, on the other hand, you give these files ASP extensions, Visual InterDev is
able to interpret them and give you these visual clues. The choice is yours.
One other area that can add a level of standardization and flexibility is the use of
application variables. Under the IIS model, a Web application is defined based on all
the files within a given directory, or any of its subdirectories, on or mapped by the
IIS Web server. As briefly discussed in the last chapter, the global.asa file is used
as a controlling mechanism for the entire Web application, and it must reside in the
root directory of the Web application. After the Web site is first started (or restarted)
using the Internet Service Manager, the first request for any page within the context
of the Web application causes the Application_OnStart event to fire. Within this
event handler, application variables can be defined using the following syntax:
Application(VariableName) = VariableValue
Any ASP page within the application can then retrieve these variables by using the
reverse syntax as follows:
VariableValue = Application(VariableName)
This simple mechanism enables the application to store global variables for the
entire application, much as constants are stored in traditional programming
environments. Examples of usable information might be the name of specific page
URLs, such as a NoAccessGranted.ASP file or a MailTo:-style URL to redirect mail
to the site administrator. Examples appear in the following code fragment from the
global.asa file:
Sub Application_OnStart
Application("NoAccessURL") = "no_access.asp"
Application("SiteAdministratorMailTo") = "mailto:[email protected]"
End Sub
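Any other page in the application can then read these values back using the retrieval syntax shown earlier; for example (the surrounding page context is illustrative only):
<%
' Build a "contact the administrator" link from the application variable.
Response.Write("<A HREF=""" & Application("SiteAdministratorMailTo") & _
               """>Contact the site administrator</A>")
%>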
With some basic standardization techniques in place, we can now turn our attention to the
development of an intranet site for our application. From surfing the Web and
accessing commercial Web sites, you might have noticed that they typically have a
home page that enables entry into the various navigation points of the system. In
addition, there are typically links to frequently used, functional areas of the system
(such as stock quotes or local weather forecasts) from this main page. Home pages
designed in this format are often referred to as portals or consoles. We follow a
similar design philosophy in designing the intranet site for our framework.
Our goal, for now, is to provide internal access to the various objects and reports of
the application. In Chapter 13, "Interoperability," we will add a few new features to
help us move information out of our application and into other applications using the
portal as a launching point. Portal design can be accomplished in many ways. You
can prove this to yourself by looking at commercial portal sites. For our purposes,
we are going to stay somewhat basic, as shown in Figure 12.4. This page
corresponds to a file Home.asp that we have created for our Northwind application.
Figure 12.4. The Home.asp page for the Northwind Traders application.
Looking at Figure 12.4, you should see four distinct areas. The first two have
headers titled ORDERS and CALENDARS, whereas the other two have headers titled
OBJECTS and TOOLS.
Example 12.4. The Report Selection Form in OrderReports.ASP
<%
…
Dim vMonths(12), vReports(2)
…
vReports(1)="All Orders"
vReports(2)="Open Orders"
…
%>
…
<FORM action=OrderReport.asp id=form1 name=form1>
<TABLE WIDTH="100%" CELLPADDING=2 CELLSPACING=0 border=0>
<TR>
<TD BGCOLOR="#ffffee" WIDTH=30%>
<FONT FACE="Arial,Helvetica,sans-serif" SIZE="-3"
COLOR="#333333">
Report Mode:<BR>
<SELECT id=ReportMode name=ReportMode>
<%
For i = 1 To UBound(vReports)
If CInt(i) = 1 Then
Response.Write("<option selected value='" & i & "'>" & _
vReports(i) & "</option>")
Else
Response.Write("<option value='" & i & "'>" & vReports(i) &
"</option>")
End If
Next
%>
</SELECT>
</FONT>
</TD>
…
Listing 12.4 demonstrates how we've made the code for this user selection form as
flexible as possible for future modification. By placing our report name information
in an array at the top of the script and using the UBound function as we iterate
through the array, we make it easy to modify this template if we need to create new
criteria selectors. The screen generated by our OrderReports.asp file appears in
Figure 12.5. Note that the default dates seen in the screen are set based on a base
date of April 15, 1995. This is done to coincide with the dates in the Northwind
database. In a real application, we would want our base date to be the current date.
Leaving the defaults as is and clicking on the Run Report button produces the report
shown in Figure 12.6.
Figure 12.5. The OrderReports criteria selection screen in Internet Explorer.
Figure 12.6. The OrderReports screen in Internet Explorer.
Note from Figure 12.6 that the columns in the ASP-generated screen are the same
as those in the NWExplorer control. Looking at the code in Listing 12.5 should
convince you that the techniques to retrieve the information in Visual Basic (VB) and
ASP forms are strikingly similar. This is by design.
' From VB
…
Case EIT_ORDER_OPEN
vCriteria = Array(Array("ShippedDate", "is", "null"), _
Array("ShippedDate", "=", "12:00:00 AM"))
vOrder = Array("RequiredDate", "CustomerName")
Set OrderProxyItems = _
AppClient.LoadQueryCollection(CT_ORDER_PROXY, _
vCriteria, _
vOrder, _
"OR")
Set AppCollection = OrderProxyItems
…
' From ASP
<%
…
Const CT_ORDER_PROXY = 104
…
ReportMode = Request.QueryString("ReportMode")
StartDateClause = Array("OrderDate",">=",StartDate)
StopDateClause = Array("OrderDate","<=",StopDate)
OrderByClause = Array("OrderDate","CustomerName")
You might have noticed that the Order ID column, both in these reports and the
ones from the previous chapter, has been hyperlinked to an OrderDetail.asp file.
This file represents the first detail screen that we will create. All other object detail
screens can be created in a similar manner. Because a COrder object has a
collection of COrderDetailItem objects, we design our OrderDetail.asp screen to
have a header section that contains the details for the order, followed by a section
that lists the order line items. This screen appears in Figure 12.7.
Figure 12.7. The OrderDetail screen in Internet Explorer.
What is new with this screen is the [EDIT] hyperlink in the upper-right corner.
Clicking on this link takes us to the OrderDetailControl.asp page, which has our
NWOrderControl embedded in it. This page appears in Figure 12.8, and the script
code appears in Listing 12.6.
Figure 12.8. The OrderDetailControl.asp page with the NWOrderControl embedded.
Example 12.6. The OrderDetailControl.asp Page
<html>
<head>
<meta NAME="GENERATOR" Content="Microsoft Visual Studio 6.0">
<title>Northwind Traders</title>
<LINK REL=stylesheet TYPE="text/css" HREF="stylesheets/nw01.css">
</head>
<%
Id = CLng(Request.QueryString("orderid"))
%>
<script LANGUAGE="VBScript">
<!--
Sub Page_Initialize
On Error Resume Next
NWOrder.RegisterControl("server=alexis&id=<%=Id%>&subid=0&mode=2")
NWOrder.InitControl
End Sub
-->
</script>
<body bgcolor="#FFFFCC"
TOPMARGIN=0
marginwidth=10
marginheight=0
LEFTMARGIN=10
LANGUAGE="VBScript"
ONLOAD="Page_Initialize">
<!--#include file="ServerScripts\GetpageHeader.asp"-->
<TABLE WIDTH="800" border=0 CELLSPACING="0" CELLPADDING="0"
valign="TOP">
<TR>
<TD WIDTH="100%" align="CENTER" valign="TOP" BGCOLOR="#FFFFCC">
<OBJECT classid="clsid:692CDDDA-A494-11D3-BF79-204C4F4F5020"
id=NWOrder
align="center">
</OBJECT>
</TD>
</TR>
</TABLE>
<!--#include file="ServerScripts\GetpageFooter.asp"-->
</body>
</html>
We will demonstrate how an order can be created from the customer's perspective
in the following section, "Building the External Internet Site." There we follow
a pure HTML-based approach, because the embedded control requires DCOM, which
we cannot run over the Internet.
NOTE
As an aside, Microsoft's recent proposal for the Simple Object Access Protocol (SOAP)
promises to offer the capability to provide a rich control-based interface without
having to run over a DCOM layer. This protocol uses standard HTTP (HyperText
Transport Protocol) as its base, which is the same base protocol used by the World
Wide Web for delivery of HTML pages. Using this communication protocol, XML-formatted
requests are used to invoke methods on remote objects. Because this is
a standard submitted to the Internet Engineering Task Force (IETF), it has the
promise of being adopted as a true Internet standard. If that happens, it would
not matter what type of Web server we were running, such as IIS or Apache. Nor
would it matter what type of application server we were running, such as MTS or
WebLogic. Nor would it matter whether our rich controls were based on Win32 or
Java. It will be interesting to watch the development of this standard.
Before completing this section, we still must cover a few more areas. The
upper-right corner of our home page includes a hypertext link to the
CalendarReports.asp page developed in the last chapter. You should notice from
our home page that, under the ORDERS hyperlink, there are additional hyperlinks
named Current Orders Schedule and Open Orders. These links jump directly into
the OrderReport.asp page using default information based on the current date,
bypassing OrderReports.asp's user criteria selection page. The reasoning for this
is that these are the most frequently used reports; therefore, there is no need to go
through the criteria selection page. Similar links can be found under the CALENDARS
section of the page.
The final item to investigate in this section is the OBJECTS area of the portal. This
text is not hyperlinked as the other items looked at so far. Instead, it provides a
listing under it of all the objects available for viewing from the intranet. If we select
the Products/Categories hypertext link, we jump to the Categories.asp page,
which appears in Figure 12.9. Selecting any of the hyperlinks on this page jumps us
to the ProductsByCategory.asp page, as shown in Figure 12.10.
Figure 12.9. The Categories.asp page.
Building the External Internet Site
With our ability to generate an internal intranet site to accompany our application,
we might begin to wonder how we can leverage the external access that the Internet
can provide to enhance our system. Thinking from the perspective of a customer,
we might want to create an order ourselves, or at least check on the status of an
existing order. Enabling customers to create and access their own orders has the
advantage of reducing the sales and support staffing for Northwind Traders, as well
as the advantage of providing access in a 24×7 fashion. Other types of functionality
can be placed on an Internet-based site as well, such as yearly or quarterly order
histories and customer profile management. We focus on the online ordering
process in this section.
The script code for CustomerLogin.asp uses standard FORM elements, although in
this case we are using the POST method so that the password is not placed on the URL
as a query string, where it could be seen or logged by a malicious user. We have
designed this page to enable re-entry in case the login
should fail in the CustomerLogin2.asp page that is called by the form. To enable
re-entry, we simply check for two query string variables named MSG and
CustomerCode. The MSG variable indicates the type of failure, whether it is from an
invalid CustomerCode or an invalid Password. The CustomerCode variable is used in
the case of an invalid Password so that the user does not have to re-enter it. If
either variable is undefined or contains no data then nothing shows on the form.
Listing 12.7 shows the code for the CustomerLogin.asp page.
<%
Msg = Request.QueryString("msg")
CustomerCode = Request.QueryString("CustomerCode")
%>
<FORM action="CustomerLogin2.asp" id=form1 name=form1 method=post>
<TABLE WIDTH="100%" CELLPADDING=2 CELLSPACING=0 border=0 height=100%>
<TR><TD class="ResultData"><%=Msg%></TD></TR>
<TR>
<TD class="FormCaption" width=100% height=100%>
Customer Code:<BR>
<INPUT type=text id="CustomerCode"
name="CustomerCode" value=<%=CustomerCode%>>
</TD> </TR> <TR>
<TD class="FormCaption" width=100% height=100%>
Password:<BR>
<INPUT type=password id="pwd" name="pwd">
</TD>
</TR>
<TR> <TD class="FormCaption" width=100% height=100%
align=center>
<INPUT type="submit" value="Login" id=submit1 name=submit1>
</TD>
</TR>
</TABLE>
</FORM>
Example 12.8. The CustomerLogin2.asp Page
<%
Dim Data, PropertyNames, Errors
Dim DataArray
Const CT_CUSTOMER = 4
CustomerCode = Request.Form("CustomerCode")
Pwd = Request.Form("pwd")
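' (Sketch) The fragment omits the customer lookup that populates Data;
' presumably the page queries the CT_CUSTOMER class type just as the report
' pages do. The "CustomerCode" property name in the criteria is an assumption.
Set NWServer = Server.CreateObject("NWServer.CNWServer")
If Not NWServer.IISInitServer Then
    Response.Write("Could not Initialize the MTS Server<br>")
End If
Call NWServer.IISQueryObjectListData(CT_CUSTOMER, _
                                     Array(Array("CustomerCode", "=", CustomerCode)), _
                                     Array("CustomerCode"), _
                                     "AND", _
                                     PropertyNames, Data, Errors)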
If IsArray(Data) Then
Set DataArray = Server.CreateObject("AppIISCommon.CDataArray")
DataArray.Initialize Data, PropertyNames
If CStr(pwd) = CStr(DataArray.Item("Password",0)) Then
Session("CustomerId") = DataArray.Item("Id",0)
Session("CustomerName") = DataArray.Item("CompanyName",0)
Response.Redirect("CustomerConsole.asp")
Else
Response.Redirect("CustomerLogin.asp?CustomerCode=" & _
CustomerCode & "&Msg=Password is Incorrect")
End If
Else
Response.Redirect("CustomerLogin.asp?Msg=Customer Code Not Found")
End If
%>
After the customer login is passed, we have defined two session variables:
CustomerId and CustomerName. Session variables are similar to application
variables in that they are available to all the pages within the application. The
difference is that session variables are scoped to a single user's visit and are
destroyed after that user disconnects from the site, whereas application variables
are shared by all users and persist until the Web site is restarted from
the IIS Management Console.
use the CustomerName session variable to add a little personalization to the site. The
CustomerConsole.asp page appears in Figure 12.14.
Looking at the CustomerConsole.asp page, you should notice that its layout is
similar to our intranet site. This is simply a matter of convenience on our part so that
we do not have to create and maintain two sets of templates and styles. You might
need to modify your Internet site over time, based on usability studies and so forth;
so be prepared to make changes if necessary. For our example, we have chosen to
place several pieces of functionality on the customer-specific site. The Order Status
hyperlink is a straightforward implementation that is similar to the
OrderDetail.asp page from the intranet section. Likewise, the Order Listings
hyperlink is similar to the OrderReports.asp and OrderReport.asp pages in the
intranet section, except that here they must be filtered for a specific customer. You
can create another set of ASP files to drive this process, or if you cleverly modify the
existing reports, you can use them. The implementation of this set of pages is not
provided here. You might also notice the Change Profile hyperlink available under
the Administrative section. This link would lead to a series of ASP pages that
enable the user to modify properties on the Customer object, such as address,
contact person, telephone numbers, passwords, and so forth. Again, this
implementation is not provided here. Many other types of functionality can be
placed on this CustomerConsole.asp page. Fortunately, our architecture is robust
enough to accept such future enhancements.
The remaining item to be discussed is the Shopping link. As you might guess, this
link should enable the user to peruse the product catalog and create an order. To do
this, we will implement a simple shopping and checkout process that enables the
user to create an Order object and its associated OrderDetailItems collection.
NOTE
The solution presented for this process is simple in its design and implementation. It
is meant to demonstrate the flexibility of our architecture to support creates,
updates, and deletes from the intranet; it is not meant to represent a
ready-to-deploy electronic commerce solution. Our architecture is merely the
starting point for such applications.
If you have spent much time on commercial sites, you probably noticed that the
process of shopping involves searching for a product and then adding it to a
shopping cart. When you are finished shopping, you proceed to a checkout process.
We follow a modified approach here. Figure 12.15 provides a flowchart of our
order-creation process relative to the ASP pages that we will be creating.
Figure 12.15. The flowchart for the shopping process.
Following the shopping link from the CustomerConsole.asp page takes us to the
Shopping1.asp page, shown in Figure 12.16. This page retrieves the session
variable for the CustomerId and performs a query using the
IISQueryObjectListData method for the CT_ORDER class type. To enable this
query, we must first add a field to the database to indicate whether an order is
complete so that it can be submitted to the order fulfillment system—a topic that is
discussed in detail in Chapter 13. This completion flag, along with the OrderDate, is
set automatically during the checkout process that is discussed later in this section.
Thus, we will add a simple Is_Complete field to the database table and view, along
with the appropriate modification to the GetClassDef method on the NWServer
class for CT_ORDER and CT_ORDER_PROXY. It is important to note how simple and
unobtrusive this type of change is. Over time, as you are developing your
application, you will find the need to make similar changes to support expanding
business requirements. One of the underlying goals of this framework has been to
enable such simple changes.
Figure 12.16. The Shopping1.asp page.
We are specifically looking for orders where the IsComplete flag is false. We build
our page using standard HTML FORM methods, adding a New Order option at the end
of the radio button list.
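A sketch of the query this page might issue, following the same IISQueryObjectListData pattern used by the report pages (the CT_ORDER constant value and the property names in the criteria are assumptions), is shown here:
<%
CustomerId = Session("CustomerId")
Set NWServer = Server.CreateObject("NWServer.CNWServer")
If Not NWServer.IISInitServer Then
    Response.Write("Could not Initialize the MTS Server<br>")
End If
WhereClause = Array(Array("CustomerId", "=", CustomerId), _
                    Array("IsComplete", "=", False))
OrderByClause = Array("RequiredDate")
Call NWServer.IISQueryObjectListData(CT_ORDER, _
                                     WhereClause, _
                                     OrderByClause, _
                                     "AND", _
                                     PropertyNames, Data, Errors)
%>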
Clicking the New Order radio button and then clicking the Next button takes us to
the EditOrder.asp page, as shown in Figure 12.17.
Figure 12.17. The EditOrder.asp page.
The EditOrder.asp page is built using similar techniques to the ones used to build
the other pages developed to this point. We use our IISQueryObjectListData to
help us build our Shipper and City combo boxes. Our choice to handle the City entry
this way is for simplicity.
After we have entered our information and made our selections, we click on the Next
button. This submits the form to the UpdateOrder.asp page, which performs the
validation. If the validation fails, the page is redirected back to the EditOrder.asp
page with validation failure messages. The specific validation code appears in
Listing 12.9.
Example 12.9. The Validation Code in the
UpdateOrder.asp Page
CustomerId = Session("CustomerId")
OrderId = Session("OrderId")
ShipperId = Request.Form("ShipperId")
CityId = Request.Form("CityId")
ShipTo = Request.Form("ShipTo")
Address = Request.Form("ShipToAddress")
PostalCode = Request.Form("PostalCode")
ReqDate = Request.Form("ReqDate")
Msg = ""
From Listing 12.9, you can see where we are pulling our form information from the
Form collection of the Request object. We have chosen to use the POST method of
form processing for several reasons. First, if we begin to place the information
necessary to drive the pages on the URL as a query string, then unscrupulous users
might be able to modify the orders of others simply by editing the URL. Using the
POST method keeps this information off the URL and eliminates a potential security
hole. Although this approach is still not foolproof, it is much more robust than
passing everything on a query string.
If we fail validation, we place a message into a session variable and redirect back to
EditOrder.asp. This page is designed to check this session variable and present
the information at the top of the form. We have chosen to use a session variable to
prevent the entry of free-form text as part of a query string. Figure 12.18 shows
EditOrder.asp with a validation error message generated by UpdateOrder.asp.
Figure 12.18. The EditOrder.asp page with validation errors.
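The checks themselves are not reproduced in Listing 12.9. A sketch of the pattern, accumulating messages in Msg and passing them back through a session variable (the ValidationMsg name is an assumption), might be:
<%
If Len(Trim(ShipTo)) = 0 Then Msg = Msg & "Ship To name is required.<br>"
If Not IsDate(ReqDate) Then Msg = Msg & "Required Date must be a valid date.<br>"

If Len(Msg) > 0 Then
    Session("ValidationMsg") = Msg
    Response.Redirect("EditOrder.asp")
End If
%>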
If our update is successful, we insert a new order in the database and redirect to the
Shopping2.asp page, as shown in Figure 12.19.
Figure 12.19. The Shopping2.asp page after successful
order creation.
Example 12.10. The Initialize Method of the CDataArray Class
Call MakeDictionary(PropertyNames)
vData = Data
End Sub
Our insertion logic within the UpdateOrder.asp page is straightforward and appears
in Listing 12.11.
Example 12.11. The Insertion Logic Within UpdateOrder.asp
Data = vbEmpty
PropertyNames = NWServer.IISGetPropertyNames(CT_ORDER)
Set DataO = Server.CreateObject("AppIISCommon.CDataArray")
DataO.Initialize Data, PropertyNames
DataO.Item("ShipperId",0) = ShipperId
DataO.Item("ShipToCityId",0) = CityId
DataO.Item("CustomerId",0) = CustomerId
DataO.Item("EmployeeId",0) = 10
DataO.Item("RequiredDate",0)= ReqDate
DataO.Item("ShipToName",0) = ShipTo
DataO.Item("ShipToAddress",0) = Address
DataO.Item("ShipToPostalCode",0) = PostalCode
DataO.Item("IsComplete",0) = False
Data = DataO.Data
Call NWServer.IISInsertObjectData(CInt(CT_ORDER), _
PropertyNames, _
Data, _
Errors, _
ObjectId, _
ObjectSubId)
Session("OrderId") = ObjectId
On the Shopping2.asp page, we first present the user with a list of product
categories. Selecting a category produces a list of products in that category. This list
is presented in the ProductsByCategory2.asp page, as shown in Figure 12.20.
We present this list within the context of a FORM, with input fields to indicate the
quantity of items desired for purchase. To support the creation of new
OrderDetailItem objects, the user must change the quantity of a catalog item from
zero to something other than zero. Changing a quantity from a non-zero value to
zero causes a deletion to occur, whereas a change from one non-zero number to
another non-zero number performs an update. After changes are made to the
quantities, the ProductByCategory2.asp page is submitted to the
UpdateOrderDetails.asp page that performs the various inserts, updates, and
deletes. Upon completion, it redirects back to the Shopping2.asp page, showing the
changes to the order detail items, as shown in Figure 12.21. The code to perform the
inserts, updates, and deletes appears in Listing 12.12.
CustomerId = Session("CustomerId")
OrderId = Session("OrderId")
MinRow = Request.Form("MinRow")
MaxRow = Request.Form("MaxRow")
Set NWServer = Server.CreateObject("NWServer.CNWServer")
If Not NWServer.IISInitServer Then
Response.Write("Could not Initialize the MTS Server<br>")
End If
PropertyNames = NWServer.IISGetPropertyNames(CT_ORDER_DETAIL)
DataOD.Item("OrderId",0) = OrderId
DataOD.Item("ProductId",0) = vProductId
DataOD.Item("Quantity",0)= vQty
DataOD.Item("Discount",0) = 0
Data = DataOD.Data
Call NWServer.IISInsertObjectData(CInt(CT_ORDER_DETAIL), _
PropertyNames, _
Data, _
Errors, _
ObjectId, _
ObjectSubId)
Call NWServer.IISUpdateObjectData(CInt(CT_ORDER_DETAIL), _
PropertyNames, _
Data, _
Errors, _
CLng(vOrderDetailId), _
0)
End If
Next
Response.Redirect("Shopping2.asp")
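The row loop and the branch that decides among insert, update, and delete are not shown in the fragment above. A sketch of that decision follows; the form field names and the IISDeleteObjectData method are hypothetical and stand in for whatever the page and NWServer actually expose:
<%
For i = CInt(MinRow) To CInt(MaxRow)
    vProductId = Request.Form("ProductId" & i)
    vOrderDetailId = Request.Form("OrderDetailId" & i)
    vOldQty = CInt(Request.Form("OldQty" & i))
    vQty = CInt(Request.Form("Qty" & i))

    If vOldQty = 0 And vQty > 0 Then
        ' New line item: insert (see the IISInsertObjectData call above).
    ElseIf vOldQty > 0 And vQty = 0 Then
        ' Removed line item: delete (hypothetical method name).
        Call NWServer.IISDeleteObjectData(CInt(CT_ORDER_DETAIL), Errors, _
                                          CLng(vOrderDetailId), 0)
    ElseIf vOldQty > 0 And vQty <> vOldQty Then
        ' Changed quantity: update (see the IISUpdateObjectData call above).
    End If
Next
%>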
After we have selected all our products, we click the Next button, which submits the
page to the OrderCheckout.asp page. This page simply performs an update,
setting the IsComplete flag and the OrderDate to the current date. Upon
completion of this update, it redirects back to the Shopping1.asp page.
Summary
In the next chapter, we will look at how our system interacts with others to integrate
itself within the landscape of the enterprise. We will also look at techniques that
involve both the movement of data between systems and the direct, real-time
access of data in foreign systems.
Chapter 13. Interoperability
The topic of interoperability is one that can fill an entire book by itself. Indeed,
Enterprise Application Integration (EAI) books that are available in a variety of
series deal with this topic in detail. Nonetheless, it is important in a book on
enterprise application development to provide a basic level of coverage of this topic
for completeness, because application interoperability is fundamental to the
enterprise. Therefore, the ideas presented in this chapter are meant to discuss
some of the theory as well as the implementation for the interoperability techniques
related to our application framework.
Interoperability Defined
The term interoperability itself can mean several things. At one level, it can simply
mean the movement of data from one application to another via simple file
structures, with a person acting as an intermediary. On the other hand, it can mean
the movement of data via a direct link between the two systems, without user
involvement. This same sharing of data can also be accomplished without the
physical movement of data from one system to another; it can be accomplished
instead through the direct, real-time access of the data in the other system. At
another level, interoperability can also require collaboration, which can include both
sharing data and signaling other systems. In this mode, one application can pass a
set of information to another system, asking it to perform some function. This
second system might perform some additional work and send a signal to yet another
system. At some point, the originator might receive notice of success or failure,
possibly with some data that represents the end product of all the work.
For the sake of exposition in this chapter, let us suppose that Northwind Traders has
an order-fulfillment system that is separate from its order-taking system—a
somewhat plausible example of how such an operation might work. The
order-taking system is the sample application that we have been working on up to
this point. The order-fulfillment system is an in-house legacy system. It is
necessary for our application to send the order information to the fulfillment system
after an order has been created.
Interoperability Through Data Movement
The basis for any form of data movement is a stream. SQL Server uses what is
known as a Table Data Stream (TDS) format when communicating with clients,
Internet Information Server (IIS) uses an HTML stream when sending results back
to a client, and so forth. In our example of moving the orders into the fulfillment
system, our stream carrier becomes a simple file, although its format can take one
of several forms.
Proprietary Formats
Proprietary formats are typically brought about by the capabilities (or restrictions)
of one of the two systems in question. For example, if the legacy-based fulfillment
system has a defined file format for importing the orders, this is said to be a
proprietary format that the order-taking system must support. Alternatively, we
might choose to define a proprietary format within the order-taking system that the
fulfillment system must import. Typically, the former solution is easier to implement
because it is often more dangerous to make modifications to stable applications
than to newer ones just going into production.
To implement a specific export process, we must first define export class types.
These differ from the normal class type definitions implemented so far. The reason
for this is that we might have to combine or modify some of our existing class types
to arrive at the export information needed by the foreign application, or we might
need to export our existing class types in manners not specified in our original
ClassDef definitions. If we must recombine existing class types to meet our export
requirements, we first must create a new class type, for which there is no
corresponding implementation in our NWClient component. Within the
implementation of the CreateExportStream method, a Case statement selects among
the various export class types; each case then calls an appropriate private method
on NWServer, passing it the given export format identifier.
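A rough sketch of that dispatch is shown here; the parameter list is inferred from the client-side call to CreateExportStream shown later in this section, and the signature of the private CreateOrderExportStream method is an assumption:
Public Function CreateExportStream(ByVal ClassType As Long, _
                                   ByVal ExportFormat As Long, _
                                   Stream As Variant, _
                                   Errors As Variant) As Boolean
    Select Case ClassType
        Case CT_ORDER_EXPORT
            CreateExportStream = CreateOrderExportStream(ExportFormat, Stream, Errors)
        ' Additional export class types would be added as new cases here.
    End Select
End Function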
We start our implementation process by defining two new class type constants:
CT_ORDER_EXPORT and CT_ORDER_DETAIL_EXPORT. We also define a new export
format, EF_ORDER_PROPRIETARY. Listing 13.1 shows the implementation of the
CreateExportStream method on NWServer.
Example 13.1. The CreateExportStream Method on NWServer
Let us suppose that the proprietary format for our order information is such that
both the order and the order detail information are included in the same data stream.
Let us also assume that there is an order line followed by multiple detail lines, which
might be followed by other order and detail lines. To accommodate this, the first
character of a line is a line type indicator of either an O or a D, for order and detail,
respectively. The remaining information on a given line depends on this type
indicator, with each field being separated by the pipe (|) character. We also assume
that the fields in both line types are defined implicitly by the proprietary standard
and cannot be changed without programmatic changes by both applications.
Example 13.2. Addition of the Export Classes to GetClassDef on NWServer
Case CT_ORDER_EXPORT
Set ClassDef = New CClassDef
With ClassDef
.DatabaseName = "NWIND"
.ReadLocation = "View_Order_Export"
.WriteLocation = ""
.IdColumnName = "Id"
.OrderByColumnName = "Order_Date, Id"
Case CT_ORDER_DETAIL_EXPORT
Set ClassDef = New CClassDef
With ClassDef
.DatabaseName = "NWIND"
.ReadLocation = "View_Order_Detail_Export"
.WriteLocation = ""
.IdColumnName = "Id"
.ParentIdColumnName = "Order_Id"
.OrderByColumnName = "Id"
Case CT_LAST_ORDER_EXPORT
Set ClassDef = New CClassDef
With ClassDef
.DatabaseName = "NWIND"
.ReadLocation = "Table_Last_Order_Export"
.WriteLocation = "Table_Last_Order_Export"
.IdColumnName = "Id"
.OrderByColumnName = "Id"
Set IAppServer_GetClassDef = _
mIAppServer.ClassDefs.Item(CStr(ClassId))
End Function
With our new ClassDef objects defined in GetClassDef and our new tables and
views created, we can turn our attention to the implementation of the
CreateOrderExportStream method, as shown in Listing 13.3. Although we
currently have only one format type defined, we implement a Select Case
statement to switch among the possible types. In this code, we simply obtain the list
of current exportable orders using our GetObjectListData method for the
CT_ORDER_EXPORT class type. Remember that this list is automatically controlled by
the View_Order_Export view that relies on the LastDate column in the
Table_Last_Order_Export table. We iterate through the returned orders,
requesting the order detail information with a similar call to GetObjectListData,
this time using the CT_ORDER_DETAIL_EXPORT class type and the ID of the current
order. We then write out to our output string the "O" header record, followed by the
"D" detail records. We continue this for all orders.
Example 13.3. The Implementation of CreateOrderExportStream
Call mIAppServer.GetObjectListData(CT_ORDER_EXPORT, 0, 0, _
PropertyNames, DataO, Errors)
If IsArray(DataO) Then
Set cPIO = MakePropertyIndex(PropertyNames)
For i = LBound(DataO, 2) To UBound(DataO, 2)
' get the order detail records
OrderId = DataO(cPIO.Item("OrderId"), i)
DataOD = vbEmpty
Call mIAppServer.GetObjectListData(CT_ORDER_DETAIL_EXPORT, _
OrderId, 0, _
PropertyNames, _
DataOD, Errors)
If IsArray(DataOD) Then
Set cPIOD = MakePropertyIndex(PropertyNames)
Stream = sOut
End Select
Exit Function
ErrorTrap:
'1. Details to EventLog
Call WriteNTLogEvent("CNWServer:CreateOrderExportStream", _
Err.Number, _
Err.Description & " [" & Erl & "]", Err.Source)
'2. Generic to client - passed back on error stack
Err.Raise Err.Number, "CNWServer:CreateOrderExportStream", _
Err.Description & " [" & Erl & "]"
End Function
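The listing above shows how the order and order detail data are retrieved; the code that actually formats the "O" and "D" records is not reproduced. Inside the two loops it would look roughly like the following, where the specific field list is dictated by the proprietary standard and is assumed here:
' Inside the order loop: one pipe-delimited "O" record per order.
Call Append(sOut, "O|" & DataO(cPIO.Item("OrderId"), i) & "|" & _
                  DataO(cPIO.Item("OrderDate"), i) & "|" & _
                  DataO(cPIO.Item("CustomerId"), i) & vbCrLf)

' Inside the nested detail loop: one "D" record per order detail row.
For j = LBound(DataOD, 2) To UBound(DataOD, 2)
    Call Append(sOut, "D|" & DataOD(cPIOD.Item("ProductId"), j) & "|" & _
                      DataOD(cPIOD.Item("Quantity"), j) & "|" & _
                      DataOD(cPIOD.Item("UnitPrice"), j) & vbCrLf)
Next
The client side then requests the completed stream with the call shown next.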
Call AppServer.CreateExportStream(CT_ORDER_EXPORT, _
EF_ORDER_PROPRIETARY, _
Stream, Errors)
After this output stream has been written to a file, it can be read into the fulfillment
system using whatever process is in place that accepts this proprietary input. In
some cases, it might be possible to implement a new custom loader in a language
like Visual Basic or C++, assuming there is an application programming interface
(API) to do so. Figure 13.1 shows a flowchart of how we have implemented this data
movement process so far.
Figure 13.1. Manual data movement using a
file-based process.
Standards-Based Formats
For our application, we have chosen to use the eXtensible Markup Language (XML)
as the foundation for our standards-based format. Not only does this make it easier
to implement the importation side of the data movement interface, it uses a
technology that has gained widespread acceptance. XML parsers are available for
most major platforms and operating systems, and most enterprise applications
should offer some form of XML support in the near future. To implement this type of
output stream, we must define a format identifier and implement the appropriate
code under our CreateOrderExportStream method. We call this new constant
EF_ORDER_XML. This code, as shown in Listing 13.5, leverages the XML-generation
functionality that we have placed in IAppServer.
sTemp = mIAppServer.CreateXMLClass(CT_ORDER_EXPORT, _
vOrderProperties)
Append sOut, sTemp
sTemp = mIAppServer.CreateXMLCollectionClass(CT_ORDER_DETAIL_EXPORT)
Append sOut, sTemp
sTemp = mIAppServer.CreateXMLClass(CT_ORDER_DETAIL_EXPORT, _
vOrderDetailProperties)
Append sOut, sTemp
Append sOut, "]>" & vbCrLf
To build an XML document, we first must define a document type definition (DTD)
section that describes the data contained in the remainder of the section. We must
explicitly create the DTD ourselves because it is so tightly bound to our object model.
If we had built an application to support an existing industry standard
DTD—something that might become more common as XML use increases—then we
would have adapted our object model to conform to the standard DTD at the outset
or we would have to write some additional code to make sure that we can reproduce
the standard DTD from our object model. Listing 13.6 shows the DTD for our export
process.
Example 13.6. The DTD for Our Export Process
You might notice that the keyword #REQUIRED is used for all the attribute default
type settings. Other values could include #IMPLIED or #FIXED. If your DTD requires
these settings, it is a simple matter to add this meta information to the Attributes
collection for the required property in a ClassDef, while also modifying the
appropriate DTD generation functions. The same applies to the CDATA keyword,
which can be replaced with other attribute types, such as ENTITY, ENTITIES, ID,
IDREF, IDREFS, NMTOKEN, NMTOKENS, NOTATION, and Enumerated. We have
chosen the simplest CDATA and #REQUIRED methods as defaults because we are
using XML as a simple data transfer medium, not as a mechanism to enforce
business rules.
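For instance, if business rules did need to be captured in the DTD, an attribute list
might mix these richer types. The element and attribute names below are purely
illustrative and are not part of our Northwind DTD:
<!ELEMENT ORDER EMPTY>
<!ATTLIST ORDER
    OrderId      CDATA                 #REQUIRED
    ShipVia      (Air | Ground | Sea)  "Ground"
    ExportedBy   CDATA                 #FIXED "NWServer"
    Region       CDATA                 #IMPLIED>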
Looking back at the code in Listing 13.5, you should notice that in our XML version,
we follow the same data retrieval logic that we used in our proprietary format case.
The main difference is in how we write out the data. Notice the use of four methods
on the IAppServer class that assist us in formatting the information into XML. They
are CreateXMLCollectionClass, CreateXMLClass, CreateXMLCollection, and
CreateXMLObject. The first two methods correspond to the creation of the DTD,
whereas the second two methods correspond to the actual information being written
out. To create our XML-formatted stream, we must first build the DTD. To accomplish
this, we first write out some preamble information—including the first four lines of
the DTD—to an XML output string to identify the contents as an XML document. We
then call the CreateXMLCollectionClass method for CT_ORDER_EXPORT to write
out the DTD information for the ORDERS collection, followed by a call to
CreateXMLClass to write out the DTD information for the ORDER class. Notice that in
our call to CreateXMLClass, we are passing a variant array called vOrderProperties.
This tells the CreateXMLClass method which properties of the class to write out as
attributes in the ATTLIST section.
Notice that we have also followed the same approach in terms of object hierarchy in
our XML as we have throughout the rest of our application base. Instead of defining
the ORDER_DETAIL_ITEMS collection as a child object of the ORDER object, we have
placed them side-by-side and wrapped them in an EXPORTED_ORDER_ITEM construct.
The reason for this is that our metadata does not understand an object hierarchy,
and thus it cannot generate a DTD to support one.
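To make the resulting shape concrete, a single exported order in this side-by-side
layout would look something like the following; the attribute names and values shown
here are illustrative:
<EXPORTED_ORDER_ITEM>
    <ORDER OrderId="10248" CustomerId="VINET" OrderDate="1996-07-04"/>
    <ORDER_DETAIL_ITEMS>
        <ORDER_DETAIL OrderId="10248" ProductId="11" Quantity="12"/>
        <ORDER_DETAIL OrderId="10248" ProductId="42" Quantity="10"/>
    </ORDER_DETAIL_ITEMS>
</EXPORTED_ORDER_ITEM>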
The CreateXMLCollectionClass Method on IAppServer
If ClassDef.Attributes.Exists("XMLCollectionClassName") Then
XMLCollectionClassName = _
ClassDef.Attributes.Item("XMLCollectionClassName").Value
Else
XMLCollectionClassName = ClassDef.ReadLocation ' assumes table name
End If
ExitFunction:
CreateXMLCollectionClass = sXMLOut
Exit Function
ErrorTrap:
'1. Details to EventLog
Call WriteNTLogEvent("IAppServer:CreateXMLCollectionClass",
Err.Number, _
Err.Description, Err.Source)
'2. Generic to client - passed back on error stack
Err.Raise Err.Number, "IAppServer:CreateXMLCollectionClass", _
Err.Description & " [" & Erl & "]"
End Function
The CreateXMLClass Method on IAppServer
If ClassDef.Attributes.Exists("XMLClassName") Then
XMLClassName = ClassDef.Attributes.Item("XMLClassName").Value
Else
XMLClassName = ClassDef.ReadLocation ' assumes table name
End If
Call Append(sXMLOut, "<!ELEMENT" & vbTab & XMLClassName & " ")
If ClassDef.Attributes.Exists("XMLClassChildren") Then
XMLThingy = ClassDef.Attributes.Item("XMLClassChildren").Value
Else
XMLThingy = "EMPTY"
End If
ErrorTrap:
'1. Details to EventLog
Call WriteNTLogEvent("IAppServer:CreateXMLClass", Err.Number, _
Err.Description, Err.Source)
'2. Generic to client - passed back on error stack
Err.Raise Err.Number, "IAppServer:CreateXMLClass", _
Err.Description & " [" & Erl & "]"
End Function
The CreateXMLObject Method on IAppServer
If ClassDef.Attributes.Exists("XMLClassName") Then
XMLClassName = ClassDef.Attributes.Item("XMLClassName").Value
Else
XMLClassName = ClassDef.ReadLocation ' assumes table name
End If
ExitFunction:
CreateXMLObject = sXMLOut
Exit Function
ErrorTrap:
'1. Details to EventLog
Call WriteNTLogEvent("IAppServer:CreateXMLObject", Err.Number, _
Err.Description & " [" & Erl &
"]",
Err.Source)
'2. Generic to client - passed back on error stack
Err.Raise Err.Number, "IAppServer:CreateXMLObject", &
Err.Description & " [" & Erl & "]"
End Function
If ClassDef.Attributes.Exists("XMLCollectionClassName") Then
XMLCollectionClassName = _
ClassDef.Attributes.Item("XMLCollectionClassName").Value
Else
ExitFunction:
CreateXMLCollection = sXMLOut
Exit Function
ErrorTrap:
'1. Details to EventLog
Call WriteNTLogEvent("IAppServer:CreateXMLCollection", Err.Number,
_
Err.Description & " [" & Erl & "]", Err.Source)
'2. Generic to client - passed back on error stack
Err.Raise Err.Number, "IAppServer:CreateXMLCollection", _
Err.Description & " [" & Erl & "]"
End Function
With our XML process now in place, we can modify the code snippet from Listing
13.4 to generate an XML version of the same information, which appears in
Listing 13.11.
The rest of the standards-based process is the same process shown in Figure 13.1.
The only difference is that now the format of the transfer file is XML versus a
proprietary format. Figure 13.2 shows the XML Notepad (available from Microsoft)
with a sample of our export file loaded.
Figure 13.2. The XML Notepad showing a sample of our export file.
File-Based Interoperability
If our order-fulfillment system has an API that enables us to automate our import
process, we can automate this entire data transfer process. Figure 13.3 shows an
overview of the architecture required to automate this process.
First, we create a shared file directory on our integration server machine to serve as
a common data point. If we want an order drop to occur every four hours starting at
8:00 a.m. and a command file called DROPORDERS.CMD drives it, then we would enter
the following AT commands on our integration server machine:
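(The drive and path for DROPORDERS.CMD below are assumptions; adjust them to wherever
the command file actually lives.) The entries might look like this, with matching
entries for the overnight runs at 00:00 and 04:00:
AT 08:00 /every:M,T,W,Th,F,S,Su "C:\JOBS\DROPORDERS.CMD"
AT 12:00 /every:M,T,W,Th,F,S,Su "C:\JOBS\DROPORDERS.CMD"
AT 16:00 /every:M,T,W,Th,F,S,Su "C:\JOBS\DROPORDERS.CMD"
AT 20:00 /every:M,T,W,Th,F,S,Su "C:\JOBS\DROPORDERS.CMD"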
First, the DROPORDERS.CMD file is designed to retrieve the file from the order-taking
system via a console application. Assuming our Northwind MTS machine is named
MOJO and our integration server is called CARTMAN, then our console application can
be called as follows:
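(This sketch assumes EXPORTORDERS.EXE is the console application and that it accepts
the -s and -p switches parsed in the Sub Main routine shown later in this section.)
EXPORTORDERS -s MOJO -p \\CARTMAN\EXPORTS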
This console application would connect to the MTS machine named MOJO, calling the
CreateExportStream method and saving the resulting information to a file called
ORDERS.XML on the path \\CARTMAN\EXPORTS.
The next line in the DROPORDERS.CMD file would import the file into the fulfillment
system. Assuming an MTS machine of ALEXIS, it might look something like the
following statement:
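(Again assuming the import utility follows the same switch conventions as the export
utility.)
IMPORTORDERS -s ALEXIS -p \\CARTMAN\EXPORTS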
This simple command file and the supporting console applications would be all that
is necessary to automate the data transfer process. In a real-world case, the
console applications would be designed to return errorlevel values back to the
command processor. For example, if the EXPORTORDERS.EXE were to fail, we would
not want to run the IMPORTORDERS.EXE command. In fact, we would likely be
interested in branching off to an alerting mechanism to inform support staff of the
failed export.
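Inside DROPORDERS.CMD, that branching might be expressed as follows; the ALERTSUPPORT
step is a placeholder for whatever notification mechanism is in place:
EXPORTORDERS -s MOJO -p \\CARTMAN\EXPORTS
IF ERRORLEVEL 1 GOTO EXPORTFAILED
IMPORTORDERS -s ALEXIS -p \\CARTMAN\EXPORTS
IF ERRORLEVEL 1 GOTO IMPORTFAILED
GOTO END
:EXPORTFAILED
ALERTSUPPORT "Order export failed"
GOTO END
:IMPORTFAILED
ALERTSUPPORT "Order import failed"
:END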
There is still one issue with this process: there is a "hole" in which a set of orders
could be exported, but the import could fail, leaving those orders with no chance of
making it into the fulfillment system on the next export. The reason is that
the LastExportDate field would have been adjusted in the CreateExportStream
method, which assumes that the downstream import processes will succeed. To
make this process as robust as possible, the CreateExportStream method should
not update the LastExportDate field. Instead, a separate public method on
NWServer named SetLastExportDate should be created. This method could be
called by yet another console application upon successful completion of the
IMPORTORDERS.EXE process. There is still an issue in that if the import fails midway
into the process, no orders from the point of failure forward will be processed.
The most robust approach using the LastExportDate field would be to have the
IMPORTORDERS.EXE process call the SetLastExportDate method after each
successful import. Upon the first failure, the process aborts, writing an application
event to the event log and sending an errorlevel back to the command processor.
Again, this would signal support staff of the issue to be resolved. This process
assumes that the orders are listed in date order.
Option Explicit
Call AppServer.CreateExportStream(CT_ORDER_EXPORT, _
EF_ORDER_XML, _
Stream, _
Errors)
iFileNum = FreeFile
Open FilePath & "\OrderExport.XML" For Output As #iFileNum
Print #iFileNum, Stream
Close #iFileNum
ExportOrders = True
Exit Function
ErrorTrap:
ExportOrders = False
End Function
Sub Main()
Dim sCommand As String
Dim sParms() As String
Dim i As Integer
sCommand = Command
sParms = Split(sCommand, " ")
For i = LBound(sParms) To UBound(sParms)
Select Case UCase(sParms(i))
Case "-S"
MTSServerName = sParms(i + 1)
Case "-P"
FilePath = sParms(i + 1)
End Select
Next i
Messaging-Based Interoperability
A message queue is an enterprise component that has been around since the early
mainframe days. The two largest message queue products are Microsoft Message
Queue (MSMQ) and IBM's MQSeries. The former runs only on NT-based platforms,
whereas the latter runs on NT and most other platforms. There are commercial bridging
products available that can move messages from one product to another, or you can
build your own. For the purposes of our application, we use only MSMQ, although
similar techniques should apply to other message queue products.
One of the benefits of using a message queue is the concept of guaranteed delivery.
If one application places a message on the queue, it remains there until specifically
removed by another application. In our order-information transfer example, the
EXPORTORDERS.EXE console application could place the information into a message
queue rather than to a shared file directory. In this case, the EXPORTORDERS.EXE
would have the responsibility of setting the LastExportDate upon completion,
because it is now guaranteed that the message it has created will remain in the
queue until it is successfully processed. Figure 13.4 shows an architectural overview
of this process.
Figure 13.4. Automated data movement using a message queue.
With these items in place, we can create a public queue using the MSMQ Explorer.
To accomplish this, we right-click on the server name in the MSMQ Explorer and
select New, and then Queue, as shown in Figure 13.5. This launches the Queue
Name dialog seen in Figure 13.6. We then name this queue OrderTransfer and
deselect the Transactional check box. Clicking on the OK button creates the queue.
Figure 13.5. Creating a new queue in the MSMQ Explorer.
Figure 13.6. Naming the new queue in the MSMQ
Explorer.
Public Function QSend(QName As String, MsgTitle As String, MsgBody As Variant)
    Dim qinfo As New MSMQQueueInfo
    Dim q1 As MSMQQueue
    Dim msg As New MSMQMessage
    qinfo.FormatName = QName   ' a DIRECT= name goes in FormatName, not PathName
    Set q1 = qinfo.Open(MQ_SEND_ACCESS, MQ_DENY_NONE)
    msg.Label = MsgTitle
    msg.Body = MsgBody
    msg.Delivery = MQMSG_DELIVERY_RECOVERABLE   ' survive a service restart
    msg.Send q1
    q1.Close
End Function
We are not explicitly trapping for errors in this code because we are assuming our
calling process will want to handle them specifically.
We also modify our ExportOrders function to now send the XML-formatted stream
to the queue instead of the file used in the previous example, as shown in Listing
13.14.
Example 13.14. Modifying the ExportOrders Function to Support MSMQ
QName = "Direct=TCP:128.128.128.126\OrderTransfer"
Call QSend(QName, "ORDER_EXPORT", Stream)
ExportOrders = True
Exit Function
ErrorTrap:
ExportOrders = False
End Function
Although we have hard-coded the queue name here for exposition, we would modify
the calling convention of ExportOrders to implement a -q switch to provide the
queue name. Notice the "Direct=…" format used for the queue name. This format
tells MSMQ to deliver the message even in a potentially disconnected state. If we do
not use this format and the computer is disconnected when we send the message,
an error is raised. After this method has completed successfully, the message is
visible in the MSMQ Explorer under the OrderTransfer queue name, as shown in
Figure 13.7.
Figure 13.7. The newly delivered message in the
queue.
On the import side, we implement a process that retrieves the messages from the
queue. Although we won't provide the full implementation, we do show this retrieval
process. The important item to understand is the difference between peeking and
retrieving messages. Peeking enables you to pull a message from the queue without
removing it from the queue. Retrieving a message removes it. Typically, we want to
peek the message first, attempt to process it, and remove it from the queue if we
are successful. Listing 13.15 shows the code for a queue processing procedure. We
have implemented our reader function in a mode in which it loops through the entire
queue, processes messages of interest, and then exits. An external task scheduler
can fire off our console application periodically to scan the queue in this manner.
If bReceived Then
Set msg = q1.PeekCurrent(ReceiveTimeout:=0)
Else
Set msg = q1.PeekNext(ReceiveTimeout:=0)
End If
Loop
q1.Close
End Sub
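Pulling the excerpt together, a self-contained version of this peek-and-remove pattern
might look like the following sketch; the queue path and the ProcessOrderStream helper
are illustrative rather than part of the framework:
Public Sub ProcessOrderQueue()
    Dim qinfo As New MSMQQueueInfo
    Dim q1 As MSMQQueue
    Dim msg As MSMQMessage
    Dim bReceived As Boolean

    qinfo.PathName = ".\OrderTransfer"          ' illustrative local queue path
    Set q1 = qinfo.Open(MQ_RECEIVE_ACCESS, MQ_DENY_NONE)

    ' peeking positions the implicit cursor without removing the message
    Set msg = q1.PeekCurrent(ReceiveTimeout:=0)
    Do While Not msg Is Nothing
        bReceived = False
        If msg.Label = "ORDER_EXPORT" Then
            If ProcessOrderStream(msg.Body) Then    ' hypothetical import helper
                ' processing succeeded, so remove the message from the queue
                Call q1.ReceiveCurrent(ReceiveTimeout:=0)
                bReceived = True
            End If
        End If
        If bReceived Then
            Set msg = q1.PeekCurrent(ReceiveTimeout:=0)
        Else
            Set msg = q1.PeekNext(ReceiveTimeout:=0)
        End If
    Loop
    q1.Close
End Sub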
With the basic messaging system in place, there are still times when MSMQ cannot
be used. For example, if the order-taking system is perhaps hosted at an Internet
service provider (ISP) or an application service provider but the order fulfillment is
running at the home office, it might be difficult to set up MSMQ if there is not a
dedicated network in place connecting the two. Looking beyond our sample
application, there might be times when data needs to move between applications in
different companies. For example, a material requirements forecast for a
manufacturing company might need to be sent to the material supplier. In these
cases, we need something more than MSMQ alone.
One solution is to use the file-based approach, as we did before, with file transfer
protocol (FTP) paths instead of local network paths. Another is to leverage the email
system already in place and send the information over the Internet. It is easy to
think of MSMQ in terms of an email metaphor. The PathName property of the
MSMQQueue object becomes the To field, the Label property of the MSMQMessage
object becomes the Subject, and the Body property becomes the text of the email.
The MSend Function Using CDONTS
Public Function MSend(ToAddress As String, FromAddress As String, _
                      Subject As String, Body As String)
    ' CDONTS NewMail sends through the local SMTP service; one object per message
    Dim oMail As Object
    Set oMail = CreateObject("CDONTS.NewMail")
    oMail.To = ToAddress
    oMail.From = FromAddress
    oMail.Subject = Subject
    oMail.Body = Body
    oMail.Send
    Set oMail = Nothing
End Function
By replacing our QSend function call in the ExportOrders function with MSend, we
have bypassed MSMQ and gone directly to the Internet. On the receiving end, there
must be a listener routine that checks an email inbox for the target address with the
given subject line. The CDONTS library can be used to pull the message from the
inbox. This is followed by an attempt to process the message, as was done in the
PeekCurrent case in MSMQ. If successful, an acknowledge message can be sent
back using the FromAddress field in the original message; otherwise, an error message
stream can be sent to the same address for diagnostic purposes. Because there isn't
a mechanism to guarantee delivery, the export process must be able to store
messages locally until a successful acknowledgement is received. Because only
SMTP-based mail services are required for this process, it is not dependent on any
one particular vendor of mail systems.
Cryptography
If we start sending data over the Internet as email message bodies, it might be
important to encrypt the body to prevent unwanted eyes from deciphering its
contents. Numerous cryptography solutions are available, including the CryptoAPI
that comes with NT. Unfortunately, this is a C-level API that is both difficult to
understand and proprietary to NT. To solve this problem, we can use a commercial
product, or we can choose to build our own simple encryption/decryption
mechanism, depending on the level of security required.
Without going into significant detail, the code in Listing 13.17 shows a basic
encrypter and decrypter function using a single numeric key. For this process to
work, both the sender and receiver must have an agreed-upon key value. These
algorithms also ensure that the characters that make up the encrypted text remain
within the 7-bit ASCII range (that is, character codes less than 128). It does this by
converting three 8-bit bytes into four 6-bit bytes and vice versa.
Example 13.17. Basic Encryption and Decryption
Algorithms
For i = 1 To Len(S)
Key = Key And 32767
tKey = Key
For j = 1 To 8
tKey = tKey / 2
Next j
sRet = sRet & Chr(Asc(Mid(S, i, 1)) Xor (tKey))
Key = (Asc(Mid(sRet, i, 1)) + Key) * C1 + C2
Next i
End Function
For i = 1 To Len(S)
Key = Key And 32767
tKey = Key
For j = 1 To 8
tKey = tKey / 2
Next j
Direct data access is probably the easiest form of application integration. Using ADO
or ODBC, we can connect our DataManager component to these other systems for
data retrieval purposes. In many cases, we can create a ClassDef object to map
these foreign tables and views into new classes within our system, although they
might not follow our precise design guidelines, as covered in Chapter 8, "The
DataManager Library." In some cases in which stored procedures are used for data
retrieval, a read-only ClassDef can be implemented based on the columns returned.
Data inserts and updates, on the other hand, are much more difficult and might
require direct access to the underlying system. The Attributes collection on a
ClassDef object can be used to hold metadata associated with processing these
types of situations.
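For example, the read-only case can be as simple as pointing an ADO connection at the
foreign database. The connection string, view name, and columns in this sketch are
illustrative, and in practice the connection would be managed through our DataManager
rather than opened inline:
Dim cn As New ADODB.Connection
Dim rs As ADODB.Recordset

' open a connection to the foreign system's database (illustrative settings)
cn.Open "Provider=SQLOLEDB;Data Source=FULFILL1;" & _
        "Initial Catalog=Fulfillment;Integrated Security=SSPI"
Set rs = cn.Execute("SELECT OrderId, ShipStatus FROM vwShipmentStatus")
Do While Not rs.EOF
    Debug.Print rs("OrderId"), rs("ShipStatus")
    rs.MoveNext
Loop
rs.Close
cn.Close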
With a ClassManager and DataManager created, we can bring the foreign data into
our application, as well as provide information back to these same applications. In
worst-case scenarios, we can bypass our ClassManager and DataManager
altogether and place custom methods off our NWServer component. Figure 13.8
shows this form of integration and the various pathways between our application
server components and the databases.
Figure 13.8. Integration through direct database access.
Application Connectors
With an application connector, we can make calls into it to retrieve the information
we need or we can provide data inserts and updates. In many ways, our own
framework is a form of an application connector into our system if used by other
applications. In the case of a DCOM-based application connector, we can interact
with it using variant arrays or XML-formatted data streams as in our own examples.
Figure 13.9 shows this form of application in the context of an Enterprise Resource
Planning (ERP) system.
Figure 13.9. Integration through application connector components.
From Figure 13.9, you should note that we are performing our integration to the
connector component through our NWServer class rather than IAppServer. The
reason for this is that such integration is specific to the particular application being
built using our framework, so it belongs with NWServer.
Summary
We have just gone through a whirlwind tour of application integration using our
framework. We have covered data transfer techniques using proprietary and
XML-based data formats as transfer mediums. We have covered the use of files,
message queues, and emails as transfer conduits. We have also talked briefly of
integration using direct connect techniques, either directly at the database level or
through application connector components. Although this chapter has had a
significant amount of content, it is by no means a definitive source. Other books go
into much more detail on the subject.
In the next chapter, we look at Windows 2000 and how it affects our framework
components. We specifically look at compatibility issues with MTS, MSMQ, and IIS.
We also address some of Windows 2000's new features that can enhance our
application.
Chapter 14. Windows 2000 and COM+
Considerations
If various schedules go according to plan, you should be reading this book in the
months following the release of Windows 2000, the replacement for Windows NT.
Within a few months, your company can begin making the long-anticipated
migration to this latest Microsoft server platform and its COM+ model. At this point,
you might be concerned that everything demonstrated in this book is for naught with
this new technology release, or that an application built on the framework we have
presented will have to be reworked after you make the migration. Fear not; much of
the functionality
we have relied on to this point was released with the NT 4.0 Option Pack.
Component Services
To quote from the MSDN library at Microsoft's Web site at the time of this writing
(with the appropriate disclaimer that it is preliminary and can change):
COM+ can be viewed as a merging of the Microsoft Transaction Server (MTS) and
COM, along with the introduction of a host of new features. If you are currently
using MTS, Windows 2000 makes the change to COM+ completely automatic.
For the most part, your MTS packages are transformed to COM+ applications during
the Windows 2000 setup procedure. Without doing anything beyond the typical
setup, you can now take advantage of all the new COM+ features.
The simplest way to move our existing MTS components from NT/MTS to COM+ is to
export our components to a package file in MTS, and then import it into COM+. By
following this approach, we preserve our GUID for our DCOM objects so that
client-side applications do not have to be recompiled and redeployed. This
technique will most likely be used in migration strategies, in which companies are
moving existing MTS-based applications over to Windows 2000 Advanced Server.
The Transaction Server Explorer has been replaced with the Component Services
snap-in for the Microsoft Management Console (MMC). Figure 14.1 shows how to
navigate to the Component Services snap-in.
Figure 14.1. Navigating to the Component Services snap-in.
Inside the Component Services snap-in, we see that it has a similar layout to the
Transaction Server Explorer. The only differences in the look and feel of the new
snap-in are that several of the old nodes in the tree view are gone and that the
spinning balls have gone from green to gold. In addition, most of the wizards used
to install new packages and components have been polished a bit, but they are
fundamentally the same. To import our MTS-based package, we right-click on the
COM+ Applications node and select New, followed by Application from the pop-up
menu, as shown in Figure 14.2.
Figure 14.2. Launching the COM+ Application Install
Wizard.
The first step of the wizard is simply informational. We click Next on the wizard to
advance to the second step. From there we click on the Install Pre-Built
Application(s) button, as shown in Figure 14.3.
Figure 14.3. Installing a prebuilt application.
This brings up a file selector dialog. We change the Files of Type combo box to MTS
Package Files (*.PAK) and browse to our package file. We click on the Next button
to take us to the Set Application Identity step. We leave the account set to
Interactive User for now, but this should be changed later when the application is
put into production. We click Next once again to take us to the Application
Installation Options step. We click on the Specific Directory radio button and enter
the name of the directory where our new components are to be installed. We select
our directory and click on the Next button one final time to arrive at the last step. We
click on the Finish button and find that our Northwind Traders application has been
created, as shown in Figure 14.4. We also must remember to move our
ClassManager and DataManager components over as well, although they can
simply be registered using the REGSVR32 utility.
Figure 14.4. The Northwind Traders application
added to COM+.
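To register the ClassManager and DataManager components mentioned above, for example,
we would run the following from a command prompt on the server (the DLL names here
stand in for whatever your ClassManager and DataManager projects actually produce):
REGSVR32 NWClassManager.dll
REGSVR32 NWDataManager.dll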
If we are developing a new application that has not yet been deployed, we might
want to directly install our components into Component Services. To do this, we
once again right-click on our COM+ Applications node, followed by the New and
Application submenus in the pop-up menu. We click the Next button on the first step
of the resulting wizard, followed by the Create an empty application button on step
two. We enter Northwind Traders as the application name, and select the Server
Application radio button. We click on the Next button, again leaving the account set
to the Interactive User option. Then we click the Next button, followed by the Finish
button to complete the process.
Figure 14.5. Creating an empty package in COM+.
The preceding steps launch the COM+ Component Install Wizard. Clicking the Next
button on the informational screen takes us to the Import or Install a Component
step. We click on the Install New Component(s) button to launch a file selection
dialog. We browse to our DCOM components, select them, and click on the Open
button. We click on the Next button, followed by the Finish button to complete the
process. Our components are now added, as shown earlier in Figure 14.4.
When components are installed in this manner, they are assigned new GUID values and
are not accessible to our clients until we create new remote application installers.
In Windows 2000 and COM+, these become known as application proxies. To create
an application proxy, we right-click on the Northwind Traders folder, selecting the
Export menu item from the pop-up, as shown in Figure 14.6.
Figure 14.6. Exporting a package from COM+.
The COM+ Application Export Wizard begins with the familiar informational first step.
We click on the Next button to take us to the Application Export Information step.
We select our output path for the export file, naming it Northwind.msi. We select
the Application Proxy radio button in the Export As frame and click the Next button.
A click on the Finish button completes the process. The result is the creation of an
installer Cabinet file, otherwise known as a CAB file, and a Windows Installer
Package file. These two files can be used on the client machine to create the remote
registration entries required to access these components.
NOTE
If your client machine is not Windows 2000, you must download the Windows
Installer from the Microsoft Web site. At the time of this writing, Windows 2000 was
at RC3 level, and the Windows Installer for NT 4.0 would not recognize the
installation files generated by the COM+ export process. Until this issue is resolved,
the easiest way to migrate existing applications to COM+ while keeping the clients
at non-Windows 2000 levels is to perform the package import process from
MTS-based components.
Message Queuing
Another major component of our model tied into the NT Option Pack is Microsoft
Message Queue (MSMQ), which also has undergone some refinement. Although the
client-side programming model is compatible with MSMQ 1.0, MSMQ has undergone
several significant changes. One minor change is the name. Message Queue for
COM+ is now called simply Message Queuing, although some references are made
to it in the context of MSMQ 2.0. From a technical standpoint, MSMQ no longer
needs to coexist with SQL Server because it now uses the Active Directory to store
its topology information.
NOTE
Microsoft claims that there are no compatibility issues using an application written
for an MSMQ 1.0 object model. Our framework components from Chapter 13, which were
written against that object model, should therefore carry forward unchanged.
MSMQ 2.0 has new versions of its COM components that are compatible with the
MSMQ 1.0 components. The programmatic names of these components have
remained the same, enabling you to use the same names you are familiar with (for
example, MSMQQueue and MSMQMessage). However, the identifiers (GUIDs) of the
objects have changed.
Microsoft further provides the information in Table 14.1 to help determine which
version of the library to use if you are programming in a mixed NT 4.0 and Windows
2000 environment.
Table 14.1. Microsoft's Matrix for Mixed NT 4.0 and Windows 2000 MSMQ Programming
For applications that will run on both Windows NT 4.0 and Windows 2000, select the Microsoft Message Queue 1.0 Object Library.
For applications that will run only on Windows 2000, select the Microsoft Message Queue 2.0 Object Library.
The MSMQ 1.0 Explorer has been replaced with a series of MMC snap-ins. To gain
access to the queues themselves, we must go under the Computer Management
snap-in, as shown in Figure 14.7.
Although our application framework ports over to COM+ and Windows 2000
relatively easily, several new features within COM+ can be used to enhance our
framework. They are discussed in the following sections.
Component Load Balancing
One of the most anticipated features of COM+ has been Component Load Balancing.
With this feature, you no longer have to marry your client or IIS server to a specific
component server. In this environment, you can have a series of clients and COM+
servers operating in a parallel fashion, with a directory service dynamically
determining the routing between the two. For example, if a request is made to
instantiate a COM+ object on the Windows 2000 server, rather than "hard-coding"
the server name into the application or the DCOM parameters on the client machine,
the call is routed to one of several servers. With this architecture, the system can
address both failover and scalability concerns.
Unfortunately, based on customer feedback from the Windows 2000 beta releases,
this feature was pulled out of Windows 2000. According to Microsoft at the time of
the writing of this book, Component Load Balancing will be redeployed to the
Microsoft AppCenter Server. However, the timing of this release was not given.
Queued Components
COM+ also introduces a new feature known as queued components that runs atop
MSMQ 2.0. With this new feature, components can be instantiated in an
asynchronous fashion. For example, the client machine normally instantiates a
COM+ object using an application proxy. In a queued component model, the queued
component recorder acts as the application proxy, recording all method calls made
on an object. These recorded calls are packaged into an MSMQ 2.0 message body
and sent to the server where they are unpackaged and replayed.
Queued components are well suited to solve issues with availability, scalability, and
concurrency, but these features come at the price of performance. Specifically,
recording the method, packaging it into a message body, sending the message,
unpacking the message, and replaying the method all add extra time to the process.
If you are not concerned about the performance implications, this process is
acceptable. If performance is an issue, you should investigate other mechanisms,
such as writing your own messaging layer that bypasses the recording and playback
steps.
In-Memory Databases
With COM+, Microsoft had planned to release a feature known as the In-Memory
Database (IMDB) to enable application servers to store critical information in a fast
and easily accessible format. Unfortunately, based on Windows 2000 beta feedback,
this feature was removed after Release Candidate 2 with no indication of when it
might be added back. Microsoft recommends using the Shared Property Manager for
local data caching. This feature, which we have not used in our framework, was
originally released with the NT 4 Option Pack and has been carried forward with the
COM+ release.
Summary
In the next chapter, we wrap up the book by talking about a few items that did not
fit elsewhere in the book. Specifically, we talk about making applications that are
written using this framework scalable, as well as how programmatic security can be
implemented across the application.
Chapter 15. Concluding Remarks
We have made it to the last chapter of the book with several important stones
unturned. It is our goal in this chapter to spend some time with these final topics so
that we can say we are finished with the application framework. Specifically, we
start by finishing the topic of error handling, followed by a discussion of
programmatic security, and concluding with a discussion of scalability.
Error Handling
Up to this point, we have casually addressed the issue of error handling through our
event-logging and error-raising mechanisms. In many of our method calls across
the DCOM boundary, we included a parameter called Errors, meant to contain a
variant array of error information, although to this point we have not done much with
it. We have even
included some functions to add errors to this array and convert this array into a
native ErrorItems collection in our AppCommon component. Although the only
implementation example of these pieces has been to handle validation
requirements, they can also be used to pass back general errors resulting from
various business rules. Be sure to keep this in mind as you are building out your
application using these framework components.
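As a reminder of the pattern, a server-side business rule might append to the Errors
array before returning, along these lines; the AddErrorToList helper, constant, and
variable names are illustrative stand-ins for whichever AppCommon routines and rules
you have adopted:
' sketch only: reject an order that exceeds the customer's credit limit and
' pass the condition back to the client through the Errors variant array
If OrderTotal > CreditLimit Then
    Call AddErrorToList(Errors, ERR_CREDIT_EXCEEDED, _
                        "Order total exceeds the customer credit limit")
    SaveOrder = False
    Exit Function
End If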
Security Mechanisms
Option Explicit
With these constants in place, we can implement an ActiveX DLL component named
NWSecurity to implement the security. We define a class called CSecurityServer
to host our security mechanism.
NOTE
Do not name your security component simply Security.DLL. This conflicts with a
system DLL used by NT.
To implement our pattern, we use a simple matrix, aptly named mSecurityMatrix,
defined as a two-dimensional array, with our first dimension representing the
secured group type and the second representing the secured class type. The value
of the array at a particular position is the access mode, which is the sum of the
various constants. Because we have defined our constants as powers of the base 2,
we can use bitwise comparisons to extract a particular access mode for a given
combination of security group and class type. From the constants defined in Listing
15.1, assuming the value mSecurityMatrix(SGT_CSR, CT_CUSTOMER) is 3, we can
establish whether a customer service representative can delete a customer object
using the following statement:
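(The exact constant values come from Listing 15.1, which is not reproduced here; the
statement below assumes AM_ADD = 1, AM_UPDATE = 2, and AM_DELETE = 4, so a matrix
value of 3 grants add and update but not delete.)
If (mSecurityMatrix(SGT_CSR, CT_CUSTOMER) And AM_DELETE) = AM_DELETE Then
    ' delete access granted
Else
    ' a value of 3 (AM_ADD + AM_UPDATE) falls through to here; delete is denied
End If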
' Merchandisers
Call SetSecurity(SGT_MERCHANDISER, CT_CATEGORY, _
AM_ADD + AM_UPDATE + AM_DELETE)
Call SetSecurity(SGT_MERCHANDISER, CT_PRODUCT, _
AM_ADD + AM_UPDATE + AM_DELETE)
Call SetSecurity(SGT_MERCHANDISER, CT_SUPPLIER, _
AM_ADD + AM_UPDATE + AM_DELETE)
Now that we have our matrix, we must be able to assign a user to one or more
security groups. To do this, we follow a bitwise pattern, as we previously used, and
create a security key for each employee, storing this in the database and adding it
to the CT_EMPLOYEE class type. Unfortunately, because the number of security
groups we implement might exceed the number of bits available in a long integer, we must
use a string to store this key value. To keep this string from becoming too large, we
convert our bits to a hexadecimal string. Because Visual Basic does not have full
binary and hexadecimal string-processing libraries, we must implement some of
these features ourselves. Listing 15.3 shows a simple binary-to-hexadecimal
converter.
szBinString = Len(BinString)
For i = 1 To nNibbles
byValue = 0
Nibble = Mid(BinString, (i - 1) * 4 + 1, 4)
For j = 1 To Len(Nibble)
byValue = byValue + 2 ^ (4 - j) * Val(Mid(Nibble, j, 1))
Next j
HexString = HexString & Hex(byValue)
Next i
BinToHex = HexString
End Function
Without going into significant detail, the BinToHex function takes a string in binary
format, breaks it into 4-bit nibbles, and then converts each nibble into a
hexadecimal value.
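For example, assuming the declarations omitted from the excerpt above size the input
to whole nibbles:
Debug.Print BinToHex("10110010")   ' nibbles 1011 and 0010 produce "B2"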
Example 15.4. Converting a Hexadecimal String to an Array of Bytes
nBytes = Len(HexString) / 2
ReDim Bytes(1 To nBytes)
j = 1
For i = nBytes To 1 Step -1
Bytes(j) = Val("&H" & Mid(HexString, (i - 1) * 2 + 1, 2))
j = j + 1
Next i
End Sub
With these basic functions in place, we can implement two methods on our
CSecurityServer class to enable us to convert our security key to an array of
Boolean values, indicating group inclusion or exclusion. Listing 15.5 shows this
process.
Example 15.5. Creating a Boolean Array from Our
Security Key
End Sub
If Groups(SGT_ACCOUNT_MGRS) Then …
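Because only the tail of Listing 15.5 appears above, the following sketch shows the
idea behind MakeGroupMembershipFromKey; the HexToBytes signature and the array bounds
are assumptions:
Public Sub MakeGroupMembershipFromKey(ByVal SecurityKey As String, _
                                      Groups() As Boolean)
    Dim Bytes() As Byte
    Dim i As Integer

    Call HexToBytes(SecurityKey, Bytes)     ' Listing 15.4 (signature assumed)
    ReDim Groups(1 To UBound(Bytes) * 8)
    For i = 1 To UBound(Groups)
        ' test the bit that carries group i's membership flag
        Groups(i) = (Bytes((i - 1) \ 8 + 1) And 2 ^ ((i - 1) Mod 8)) <> 0
    Next i
End Sub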
We can now implement our final method on the CSecurityServer class, called
simply AccessGranted, as shown in Listing 15.6.
IsGranted = False
For i = LBound(GroupMembershipFlags) To UBound(GroupMembershipFlags)
' check if user is a member of this group
If GroupMembershipFlags(i) Then
' if so, see if this group has the appropriate access mode
If ((mSecurityMatrix(i, SecuredClass) And AccessMode) = AccessMode) Then
IsGranted = True
GoTo ExitFunction
End If
End If
Next i
ExitFunction:
AccessGranted = IsGranted
End Function
Our AccessGranted method takes, as parameters, the SecurityKey from the user
profile, the secured class type, and the access mode to be tested. Using this
information, the method converts the security key to a Boolean array using the
MakeGroupMembershipFromKey method. It then iterates through this array,
checking each group to see whether it grants the access mode desired. If so, the
function exits with a True value. If no group is found with the desired access mode,
the method exits with a False value. The implementation has been done in this
fashion to accommodate overlapping security groups.
Scalability Concerns
Although our framework design inherently maximizes scalability by minimizing
object-state management on the MTS server, the DCOM/MTS model does not
natively handle load balancing. To be sure, MTS has sophisticated pooling
mechanisms so that a few physical object instances support many logical object
instances. In addition, the multiprocessor, multithreaded capability of NT Server
can further expand the workload afforded by a single server to increase
performance. Nonetheless, MTS reaches a saturation point as the number of users
rises. In these cases, mechanisms must be in place to balance MTS server loads
relative to database server loads. If IIS is part of the picture, it must be load
balanced as well.
In this model, each site or organization maintains its own instance of the MTS server,
database, and IIS servers. This is the easiest manner in which to address scalability
concerns because the application needs no additional components to support it. The
client applications direct their DCOM calls to the appropriate MTS server, based on
client-side registry settings. An installer program or configuration utility running on
the client can create these settings. Here, we assume that the single server instance
is sufficient to handle the user load for the site.
If each site maintains its own database server as well, a replication mechanism
must be in place to keep global information synchronized across all database server
instances. SQL Server has integrated replication support to accomplish just this
activity. Figure 15.1 shows the single server set per site model.
Figure 15.1. The single server set per site model.
One drawback to this approach is that it has no failover mechanism. If the server
instance goes offline, it is not easy to redirect the client applications to a different
server because the mappings are stored in the client registries.
In some cases, a single server set instance cannot handle the load generated by a
site or organization. We can further segregate the client applications to access
different server instances, as in the previous case. This model appears in Figure
15.2. Although this is a simplistic solution, it does not guarantee that each server
instance is loaded appropriately. Some servers might be over-used, whereas others
are under-used. Load balancing must occur by modifying client registries. Worse
still, even if you achieve a good balance, there is no guarantee that it can be
maintained as new users are added and others are removed. There is also the same
failover problem that plagues the first model.
Figure 15.2. The multiple server sets per site model.
To circumvent this, we need a server broker. In this model, the client application
might first connect to a DCOM object on a single brokerage server whose only
function is to reply with a target server for the application to use. The method that
this broker object uses to determine load can be simplistic or complicated. One
method is that the brokerage server randomly determines a target server name
from a list of available servers in the pool. Another technique is a round-robin
approach in which the brokerage server iterates through the list of servers, giving out
the next server name in the list with each request. Although these are probably the
two simplest mechanisms, there is still no guarantee for proper server balancing.
As mentioned in the previous chapter, the Windows 2000 Advanced Data Center will
be releasing a form of COM object load balancing. This will be a software-oriented
solution that mirrors the CORBA and Enterprise Java approaches.
Server Clustering
Another solution to the load balancing and failover issue is to use a server cluster. In
this mode, you would employ special software (and sometimes hardware) to make
multiple servers act like one large virtual server. The application software itself does
not have to be cognizant that it is operating on a cluster, because the clustering
mechanisms are bound tightly in the NT Server operating system. Microsoft supplies
a cluster software solution through its Microsoft Cluster Server (MSCS) software,
which allows a clustering of two nodes. The Windows 2000 Data Center version of
MSCS will allow four nodes. Several other clustering solutions are available from
other vendors as well; one is HolonTech's HyperECS product, which is a hardware-
and software-based solution. IBM has added extensions to MSCS for its line of
Netfinity servers to allow for clustering for up to 14 servers.
Typically, the database portion of the system operates in a cluster fashion, while
other parts of the system operate in an IP load balanced fashion. The reason for this
is that the database is the place where concurrency of information is maintained,
which requires more than simple load balancing. Microsoft SQL Server can be
clustered in a two-node fashion on top of MSCS in a fairly straightforward fashion.
Other database vendors, such as Oracle and IBM, provide clustering capabilities
using their own technology.
Hardware-based load balancers are available as well from vendors such as Cisco, F5
Networks, and QuickArrow. These solutions provide load balancing at the IP address
level. This means that anything that can operate purely on an IP address can be load
balanced. The advantage of hardware solutions is their outright speed at
performing the load balancing, compared with the software-oriented solutions
mentioned in the previous chapter. The downside is that hardware solutions can
become rather expensive. You will have to balance price and performance in your
application.
Figure 15.3 shows the final, fully scaled and failover-protected solution. Note that
this model works well because we are not maintaining state on our MTS servers.
Figure 15.3. The fully scalable and failover-protected model.
Summary
This chapter covered two topics: programmatic security and the issues associated
with scalability and failover. With the conclusion of this chapter comes the
conclusion of this book. Although it has been a long journey, it is my hope that, for
some, the topics covered in this book were helpful in a total sense. For the rest, I
hope that it provided insight and useful information, at least in a piecemeal fashion,
that can be incorporated into your enterprise-level projects going on within the
corporate development landscape.