
N-Tier Application Development with Microsoft .NET

Part 1: What is N-Tier Architecture?


Karim Hyatt

Introduction

This is the first in a series of articles exploring the world of n-tier architecture in terms of the
Microsoft .NET platform and its associated framework.
This opening article is meant as an introduction to n-tier architecture: it explains the reasoning
behind developing applications in this way, and how it can be achieved, without going into complex
implementation details. Those will come later. I suppose that by even mentioning n-tier in my
opening sentence I've jumped the gun somewhat, so let me backtrack slightly and explain.
The first question to ask is whether this is just a new fad or fashion. After all, we've been through
several iterations of various architectures, all of which have failed at some level. Well, maybe!
Modern architectural development techniques evolve and are based on our latest failures. This is a
good thing. It shows that we are learning from our mistakes. Sure, we have had a few setbacks (like
the thin client episode), but in general everything exists for a reason and has developed from
decades of hit-and-miss projects.
So how often have you implemented a new system and had a nagging doubt in your mind whether
it would stand the test of time? The main question being: "What if my system is spectacularly
successful? Will it become a victim of its own success, or will I be lucky?"
For the last few years, system architects have been advocating splitting systems into tiers.
Unfortunately, many companies have yet to embrace this, fearing that they will overcomplicate their
systems, increase maintenance costs (after all, if a system is spread over several places, it must be
more expensive to run) and push up salaries because they have to hire more qualified staff.
Let me get something straight. In the first place, you should never initiate a new systems
development project without a good business case. This usually boils down to the fact that the
system you implement will help your company make even more money. If you can't justify it in
those terms, dump the project. With that in mind, I can guarantee that n-tier systems will save you
money in the short to medium term in hardware, software development and software maintenance
costs. In the next few articles I will show you how.

N-Tier Explained

For those who haven't read (or quite understood) the hundreds of other articles on multi-tier
development, here is a quick reminder. It is perhaps useful to go through the various stages that
we, as software developers, have been through in order to give us some perspective on what is
actually possible today.

Single Tier

When automation first hit business, it was in the form of a huge "Mainframe" computer. Here, a
central computer served the whole business community and was accessed via dumb terminals. All
processing took place on a single computer - and therefore in one place. All resources associated
with the computer (tape and disk drives, printers etc.) were attached to this same computer. This is
single tier (or 1-tier) computing. It is simple, efficient, uncomplicated, but terribly expensive to run.
Figure 1 - Single Tier Architecture

Figure 1 shows the physical layout of a single tier environment. All users run their programs from a
single machine. The ease with which deployment, and even development, occurs makes this model
very attractive. However, the cost of the central machine makes this architecture prohibitive for
most companies, especially as system costs and return on investment (ROI) are scrutinized carefully
nowadays.

Dual Tier Environments

In the 1980s, a revolution happened in the world of business computing: the Personal Computer
(nearly unimaginable until then) hit the streets. The PC quickly became standard equipment on
desks around the world. Demand for personal software also grew and, with the advent of
Windows 3.0, this demand became a roar.
In order to provide personal software which ran on personal computers, a model needed to be
found where the enterprise could still share data. This became known as the client/server model.
The client (on the personal computer) would connect to a central computer (the server) and request
data. With limited network bandwidth, this reduced the need for expensive infrastructure, since only
data would be transmitted, not the huge graphics necessary to make a Windows application display.
In Figure 2, we see the very model implemented in most organizations today. This model is also
quite easy to implement. All you need is an RDBMS, such as MS-SQL Server 2000, running on
Windows 2000 Server and a PC running TCP/IP. Your application connects to the database server
and requests data. The server just returns the data requested.
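
To make this concrete, here is a minimal sketch of a 2-tier client in C# using ADO.NET. The server,
database, and table names are placeholders for illustration; the article itself does not prescribe any
particular code.

using System;
using System.Data.SqlClient;

class CustomerList
{
    static void Main()
    {
        // Connect directly to the database server - the essence of the 2-tier model.
        string connectionString =
            "Server=DBSERVER;Database=Northwind;Integrated Security=SSPI;";

        using (SqlConnection connection = new SqlConnection(connectionString))
        {
            connection.Open();

            SqlCommand command = new SqlCommand(
                "SELECT CustomerID, CompanyName FROM Customers", connection);

            using (SqlDataReader reader = command.ExecuteReader())
            {
                // The server just returns the rows requested; all further
                // processing happens on the client.
                while (reader.Read())
                {
                    Console.WriteLine("{0} - {1}",
                        reader["CustomerID"], reader["CompanyName"]);
                }
            }
        }
    }
}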
Figure 2 - Client Server Physical Model

There are, however, several problems with this model:

1. The connections are expensive - they take a long time to establish and require a lot of RAM
on the server. Because connecting is slow, most applications connect when the client
application is launched and disconnect when it is shut down. Of course, if the application
crashes, the connection is left open on the server and resources are lost.
2. One can only connect a limited number of users to a server before SQL Server spends more
time managing connections than processing requests. Even if you are willing to increase
server resources exponentially as your user base grows (get your corporate wallet out),
there still comes a time when your server will choke. This can be solved in part by splitting
the database in two or three and replicating the data, but this is definitely NOT recommended,
as replication conflicts can occur. The more users you connect, the more errors you're likely
to get.
3. This method is also cost-ineffective. Many users only use their connection 2-3% of the time.
The rest of the time it is just sitting there hogging memory resources. This particular
problem is usually resolved artificially by using a TP monitor, such as Tuxedo, which pools
and manages the connections in order to provide client/server for the masses (a simpler
connection-handling pattern is sketched just after this list). TP monitors are quite expensive
and sometimes require their own hardware platform to run efficiently.
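
As a sketch of the kind of remedy such products provide, and assuming nothing beyond plain
ADO.NET, the "open late, close early" pattern below relies on the connection pooling built into the
SqlClient provider: each request borrows a pooled connection and returns it immediately, rather
than holding one open for the lifetime of the client. The connection string and query are
placeholders.

using System.Data;
using System.Data.SqlClient;

public class OrderReader
{
    // Pooling is on by default in the SqlClient provider.
    private const string ConnectionString =
        "Server=DBSERVER;Database=Sales;Integrated Security=SSPI;";

    public int CountOrders(string customerId)
    {
        using (SqlConnection connection = new SqlConnection(ConnectionString))
        {
            SqlCommand command = new SqlCommand(
                "SELECT COUNT(*) FROM Orders WHERE CustomerID = @CustomerID",
                connection);
            command.Parameters.Add("@CustomerID", SqlDbType.NChar, 5).Value = customerId;

            connection.Open();
            // The physical connection goes back to the pool as soon as the
            // using block disposes the SqlConnection.
            return (int)command.ExecuteScalar();
        }
    }
}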

Figure 3 puts the above into context and shows the logical system components - most of which are
on the client.
Figure 3 - Logical View of a 2-Tier Architecture

The Alternatives

With the advent of the Internet, many people jumped to the conclusion that the days of the
mainframe were back. Client/Server had obviously failed, personal computers had failed and, above
all, Windows was on its way out. A host of "thin client" applications were developed, usually by
overzealous IT managers hoping to wrest computing control back from the users. TCO (Total Cost
of Ownership) was the watchword of the day and everyone was consumed by downsizing the client.
Thus 3-tier applications were born. These applications run the traditional client/server model, but
from a web server.

Figure 4 - 3-Tier Thin Client Architecture


The client only displays the user interface and data, but has no part in producing the results. Figure
4 shows the physical representation of such an architecture, whilst Figure 5 gives a logical view.
This architecture presents one advantage over the former: a well-implemented web server can
manage and pool database connections as well as run the applications. The disadvantage is that
the web server is quickly overwhelmed by requests and must either be clustered or upgraded.

Figure 5 - Logical 3-Tier View

Did you also notice that the software model has not significantly changed from the 2-tier model? We
have merely moved the 2-tier client processing onto the web server. Also, thin client user
interfaces, by their very nature, are not as rich as their Windows counterparts. Therefore,
applications developed using this model tend to be inferior to their Windows counterparts.
The key to really making an application scalable, as you may have guessed, is to split up the
processing (in the red boxes) between different physical entities. The more we can split it up, the
more scalable our application will be.
"Isn't this expensive?" I hear you cry? I need a different server for each layer. The more layers, the
more machines we will need to run. Well, this is true if you have thousands of users accessing your
system continuously, but if you don't, you can run several layers on the same machine. Also,
purchasing many lower-spec servers is more cost-effective than one high-spec server.
Let's explore the various layers we can create, starting from the logical model in Figure 5

Overview of an N-Tier System

The Data Layer


The data layer can usually be split into two separate layers. The first will consist of the set of stored
procedures implemented directly within the database. These stored procedures will run on the
server and provide basic data only. Not only are they pre-compiled and pre-optimized, but they can
also be tested separately and, in the case of SQL Server 2000, run within the Query Analyzer to
make sure that there are no unnecessary table scans. Keep them as simple as possible and don't
use cursors or transactions. Cursors are slow because they process rows one by one instead of as a
set. Transactions will be handled by the layer above, as ADO.NET gives us much more control over
these things.
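
As an illustration of handling the transaction in the layer above rather than inside the stored
procedures, here is a minimal ADO.NET sketch; the class, stored procedure, and parameter names
are invented for the example.

using System.Data;
using System.Data.SqlClient;

public class AccountData
{
    // Two simple stored procedures are committed or rolled back together,
    // with the transaction controlled from .NET code rather than from T-SQL.
    public void Transfer(string connectionString, int fromAccount, int toAccount, decimal amount)
    {
        using (SqlConnection connection = new SqlConnection(connectionString))
        {
            connection.Open();
            SqlTransaction transaction = connection.BeginTransaction();
            try
            {
                Execute(connection, transaction, "usp_Account_Debit", fromAccount, amount);
                Execute(connection, transaction, "usp_Account_Credit", toAccount, amount);
                transaction.Commit();
            }
            catch
            {
                transaction.Rollback();
                throw;
            }
        }
    }

    private static void Execute(SqlConnection connection, SqlTransaction transaction,
                                string procedureName, int accountId, decimal amount)
    {
        SqlCommand command = new SqlCommand(procedureName, connection, transaction);
        command.CommandType = CommandType.StoredProcedure;
        command.Parameters.Add("@AccountID", SqlDbType.Int).Value = accountId;
        command.Parameters.Add("@Amount", SqlDbType.Money).Value = amount;
        command.ExecuteNonQuery();
    }
}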

Figure 6 - N-Tier Logical Model

The next layer consists of a set of classes which call and handle the stored procedures. You will
need one class per group of stored procedures, which will handle all Select, Insert, Update, and
Delete operations on the database. Each class should follow OO design rules and be the result of a
single abstraction - in other words, it should handle a single table or set of related tables. These
classes will handle all requests to or from the actual database and provide a shield for your
application data. All requests must pass through this layer, and all concurrency issues can and must
be handled here. In this way you can make sure that data integrity is maintained and that no other
source can modify your data in any way.
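
A minimal sketch of such a data layer class might look like the following; the class, stored
procedure, and parameter names are invented for illustration.

using System.Data;
using System.Data.SqlClient;

// One class per table (or set of related tables), each public method
// wrapping a single stored procedure.
public class CustomerData
{
    private readonly string connectionString;

    public CustomerData(string connectionString)
    {
        this.connectionString = connectionString;
    }

    // Select: wraps a hypothetical usp_Customer_GetById stored procedure.
    public DataTable GetById(string customerId)
    {
        using (SqlConnection connection = new SqlConnection(connectionString))
        using (SqlCommand command = new SqlCommand("usp_Customer_GetById", connection))
        {
            command.CommandType = CommandType.StoredProcedure;
            command.Parameters.Add("@CustomerID", SqlDbType.NChar, 5).Value = customerId;

            DataTable customer = new DataTable("Customer");
            new SqlDataAdapter(command).Fill(customer);
            return customer;
        }
    }

    // Update: wraps a hypothetical usp_Customer_Update stored procedure.
    // Concurrency checks belong here, shielding the rest of the application.
    public void Update(string customerId, string companyName)
    {
        using (SqlConnection connection = new SqlConnection(connectionString))
        using (SqlCommand command = new SqlCommand("usp_Customer_Update", connection))
        {
            command.CommandType = CommandType.StoredProcedure;
            command.Parameters.Add("@CustomerID", SqlDbType.NChar, 5).Value = customerId;
            command.Parameters.Add("@CompanyName", SqlDbType.NVarChar, 40).Value = companyName;

            connection.Open();
            command.ExecuteNonQuery();
        }
    }
}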
If your database changes for any reason, you can easily modify your data layer to handle the
changes without affecting any other layers. This considerably simplifies maintenance.

Business Rule Layer

This layer is implemented in order to encapsulate your business rules. If you have followed best
practices, you will have created a set of documents which describe your business. In the best of
cases, you will have a set of use-cases describing your business in precise detail. From this you will
have been able to create a class association diagram which will help you create your business layer.
Here we find classes which implement your business functionality. They neither access data (except
through the data layer) nor do they bother with the display or presentation of this data to the user.
All we are interested in at this point are the complexities of the business itself. By isolating this
functionality, we are able to concentrate on the guts of our system without the worry of design,
workflow, or database access and related concurrency problems. If the business changes, only the
business layer is affected, again considerably simplifying future maintenance and/or enhancements.
In more complex cases it is entirely possible to have several business layers, each refining the layer
beneath, but that depends on the requirements of your system.
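
As a sketch of what such a class might look like, consider the example below. The rule itself, the
class names, and the data layer method it calls are all invented for the example, not taken from the
article.

// Pure business logic: no data access except through the data layer,
// and no knowledge of the user interface.
public class CreditApproval
{
    private readonly CustomerData customerData;   // data layer class
    private readonly decimal creditLimit;

    public CreditApproval(CustomerData customerData, decimal creditLimit)
    {
        this.customerData = customerData;
        this.creditLimit = creditLimit;
    }

    // Business rule: an order is accepted only if it keeps the customer
    // within the credit limit. GetOutstandingBalance is a hypothetical
    // method exposed by the data layer.
    public bool CanAcceptOrder(string customerId, decimal orderValue)
    {
        decimal outstanding = customerData.GetOutstandingBalance(customerId);
        return outstanding + orderValue <= creditLimit;
    }
}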

Workflow Layer

This is one of the optional layers and deals with data flow to and from your system. It may or may
not interact directly with the user interface, but it always deals with external data sources.
For instance, if you send or receive messages via a message queue, call a web service for extra
information, or exchange information with another system, the code to handle this belongs in this
layer. You may wish to wrap your whole application in XML so that the choice of presentation layer
can be expanded. This would also be handled in the workflow layer.
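
Below is a minimal sketch of a workflow layer class, assuming MSMQ as the external destination and
an invented Order type. It moves data out of the system in XML form but contains no business rules
and no user interface code.

using System.IO;
using System.Messaging;
using System.Xml.Serialization;

// Invented data carrier; public fields so that XmlSerializer can see them.
public class Order
{
    public string CustomerId;
    public decimal Total;
}

public class OrderDispatcher
{
    // Illustrative queue path, not from the article.
    private const string QueuePath = @".\Private$\OutgoingOrders";

    // Serialize the order to XML and hand it to another system via the queue.
    public void Send(Order order)
    {
        XmlSerializer serializer = new XmlSerializer(typeof(Order));
        StringWriter writer = new StringWriter();
        serializer.Serialize(writer, order);

        using (MessageQueue queue = new MessageQueue(QueuePath))
        {
            queue.Send(writer.ToString(), "New order");
        }
    }
}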

Presentation Layer

This layer handles everything to do with the presentation of your system. This does not just include
your Windows or web forms (your user interface), but also all the classes which will help you
present your data.
Ideally, the event method implementations within your form classes will only contain calls to your
presentation layer classes. The web or Windows forms, used for visual representation only, interface
seamlessly with the presentation layer classes, which handle all translation between the business
layer/workflow layer and the forms themselves. This means that any changes on the visual front can
be implemented easily and cheaply.
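
For example, a Windows Forms sketch of this idea might look as follows; CustomerPresenter and its
SaveCustomer method are invented stand-ins for the presentation layer classes described above.

using System;
using System.Windows.Forms;

// Invented presentation layer class: translates between the form and the
// business/workflow layers (details elided).
public class CustomerPresenter
{
    public void SaveCustomer(string companyName)
    {
        // ...validate, then call into the business layer...
    }
}

public class CustomerForm : Form
{
    private readonly CustomerPresenter presenter;
    private readonly TextBox nameBox = new TextBox();
    private readonly Button saveButton = new Button();

    public CustomerForm(CustomerPresenter presenter)
    {
        this.presenter = presenter;

        saveButton.Text = "Save";
        saveButton.Top = nameBox.Bottom + 8;
        saveButton.Click += new EventHandler(SaveButton_Click);

        Controls.Add(nameBox);
        Controls.Add(saveButton);
    }

    // The event method contains nothing but a call into the presentation layer.
    private void SaveButton_Click(object sender, EventArgs e)
    {
        presenter.SaveCustomer(nameBox.Text);
    }
}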

Bottom Line

Can you see the pattern? Each section (or layer) of your application is a standalone entity which can
communicate with the layers above and below it. Each layer is designed independently and protected
from the others by extensible interfaces. All changes can therefore be encapsulated within the layer
and, if not major, will not necessarily affect the layers above and below it.
So how have we managed for so long with the 2-tier client/server model? Well, we haven't really
managed at all. We've shoe-horned applications into architectures instead of architecting solutions
to provide a perfect fit. Why? Because solutions involving any degree of distribution were difficult to
implement cost-effectively - that is, until now.
Although the exact implementation can vary within the .NET Framework (you have the choice of
Web Services, serviced components in Enterprise Services, and .NET Remoting over HTTP or TCP),
the fact remains that we now have all the tools necessary to implement the above. If you are using,
or thinking of using, the .NET platform and framework, you would be well advised to architect in
several tiers.
In the next article I will show you how to do just that.

About the author

Karim Hyatt is an application development architect and consultant, based in Luxembourg, with over
20 years' experience in the business.
He started developing Windows applications with a special release of Windows 2.0 and quickly
moved on to the version 3 SDK. Having been through several iterations of learning new APIs and
frameworks, such as MFC and ATL, he decided to get on board with .NET in the early days of the
beta 1 release.
He now teaches, coaches, and consults for companies in the use of the .NET Platform and Framework
all over Europe. He can be reached for comments at [email protected].
