N - Web Application

This document discusses the implementation of a sample N-tier web application to demonstrate features of ASP.NET 2.0 and SQL Server 2005. It describes the different layers of the N-tier architecture including the presentation layer, business logic layer, data access layer, and database layer. It then discusses implementing stored procedures in the database layer using managed C# code for improved performance compared to T-SQL. The data access layer is generated using TableAdapters and the presentation layer is implemented using ASP.NET web forms.


Introduction

Designing an N-Tier client/server architecture is no less complex than developing a
two-tier architecture; however, the N-Tier architecture produces a far more flexible
and scalable client/server environment. In two-tier architecture, the client and the
server are the only layers, and the client handles both the presentation logic and the
middle-tier logic. An N-Tier architecture separates these concerns into a presentation
layer and three further layers: a business logic layer, a data access logic layer, and a
database layer. The next section discusses each of these layers in detail.

Different Layers of an N-Tier application

In a typical N-Tier environment, the client implements the presentation logic (a thin
client), the business logic and data access logic run on one or more application
servers, and the data resides on one or more database servers. N-Tier architecture is
thus typically defined by the following layers:

• Presentation Layer: This is the front-end component, responsible for
providing portable presentation logic. Because the client is freed of
application-layer tasks, there is no need for powerful client hardware. The
presentation layer consists of standard ASP.NET web forms, ASP pages,
documents, Windows Forms, and so on. This layer works with the results of
the business logic layer and transforms them into something usable and
readable by the end user.
• Business Logic Layer: Allows users to share and control business logic by
isolating it from the other layers of the application. The business layer
functions between the presentation layer and data access logic layers, sending
the client's data requests to the database layer through the data access layer.
• Data Access Logic Layer: Provides access to the database by executing a set
of SQL statements or stored procedures. This is where you will write generic
methods to interface with your data. For example, you will write a method for
creating and opening a SqlConnection object, create a SqlCommand object for
executing a stored procedure, etc. As the name suggests, the data access logic
layer contains no business rules or data manipulation/transformation logic. It
is merely a reusable interface to the database.
• Database Layer: Made up of an RDBMS component, such as SQL Server,
that provides the mechanism to store and retrieve data.
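The call chain implied by these layers can be sketched with stub classes. All class names here are illustrative, and the data access layer is stubbed with an in-memory list rather than a real database:

```csharp
using System;
using System.Collections.Generic;

// Data access layer: only knows how to fetch rows (stubbed in memory here).
public class AuthorDataAccess
{
    public List<string> GetAuthorNames()
    {
        // In a real application this would open a SqlConnection
        // and execute a stored procedure.
        return new List<string> { "Ringer", "Bennet", "Green" };
    }
}

// Business logic layer: applies rules; it sits between the UI and the DAL.
public class AuthorService
{
    private readonly AuthorDataAccess _dal = new AuthorDataAccess();

    public List<string> GetAuthorsSorted()
    {
        List<string> names = _dal.GetAuthorNames();
        names.Sort(); // a (trivial) business rule
        return names;
    }
}

// Presentation layer: talks only to the business layer, never to the DAL.
public static class Program
{
    public static void Main()
    {
        foreach (string name in new AuthorService().GetAuthorsSorted())
            Console.WriteLine(name);
    }
}
```

The point of the sketch is the direction of the dependencies: the presentation code never touches `AuthorDataAccess` directly.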

Now that you have a general understanding of the different layers in an N-Tier
application, let us move on to discuss the implementation of an N-Tier Web application.

Implementation

In this article, I will consider an example web site (displaying authors and their
titles) constructed using N-Tier principles, and use it to demonstrate the new features
of ASP.NET 2.0 and SQL Server 2005. The sample web site is very simple and
straightforward, consisting of only two pages: the first shows the list of authors from
the pubs database, and the second displays the list of titles for a selected author.
Please note that this article is not aimed at providing exhaustive coverage of the
individual features of ASP.NET 2.0; it focuses only on the features of ASP.NET 2.0
and SQL Server 2005 that are essential to building an N-Tier web application.

Architecture of the Example Application

The following screenshot shows the different layers in the example application. It also
highlights the important characteristics of the example application.

Some of the important characteristics of the sample application are as follows:

• The stored procedures in the SQL Server 2005 database are created using C#.
The ability to create stored procedures in managed code enables complex
business logic to be executed close to the database resulting in performance
improvements. The compiled nature of the stored procedure also results in
increased performance.
• The data access layer classes are generated using the new TableAdapter
Configuration Wizard, which enables you to create data access layer classes
without writing a single line of code.
• ASP.NET Web forms in the user interface layer are generated using master
pages, providing a consistent look and feel for the entire application.
• Web forms utilize ObjectDataSource control to directly bind the output of the
middle tier methods to data bound controls such as a GridView control.
• Web forms also take advantage of caching of database contents to increase the
performance and throughput of the web application. This is made possible by
the use of the database cache invalidation mechanism that can automatically
remove specific items from the cache when the data in the database table
changes.
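The database cache invalidation mentioned in the last point is exposed through ASP.NET 2.0's SqlCacheDependency class. A minimal sketch, assuming the pubs database and its authors table have been enabled for cache notifications and registered under the entry name "pubs" in web.config:

```csharp
using System.Data;
using System.Web;
using System.Web.Caching;

public static class AuthorCache
{
    // Cache a DataTable until SQL Server signals that the underlying
    // authors table has changed. The "pubs" entry name is an assumption;
    // it must match the <sqlCacheDependency> registration in web.config.
    public static void CacheAuthors(HttpContext context, DataTable authors)
    {
        SqlCacheDependency dependency = new SqlCacheDependency("pubs", "authors");
        context.Cache.Insert("Authors", authors, dependency);
    }
}
```

When the table changes, the cached entry is evicted automatically and the next request repopulates it.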

Implementation of the Application

I will walk through the implementation one layer at a time, starting with the database
layer.

Database Objects using Managed Code

One of the neat features of SQL Server 2005 is the integration with the .NET CLR.
The integration of CLR with SQL Server extends the capability of SQL Server in
several important ways. This integration enables developers to create database objects
such as stored procedures, user defined functions, and triggers by using modern
object-oriented languages such as VB.NET and C#. In this article, I will demonstrate
how to create stored procedures using C#. Before looking at the code, let us
understand the pros and cons of using a managed language in the database tier to
create server-side objects.

T-SQL vs. Managed Code

Although T-SQL, the existing data access and manipulation language, is well suited
for set-oriented data access operations, it also has limitations. It was designed more
than a decade ago and it is a procedural language rather than an object-oriented
language. The integration of the .NET CLR with SQL Server enables the development
of stored procedures, user-defined functions, triggers, aggregates, and user-defined
types using any of the .NET languages. This is enabled by the fact that the SQL
Server engine hosts the CLR in-process. All managed code that executes in the server
runs within the confines of the CLR. The managed code accesses the database using
ADO.NET in conjunction with the new SQL Server Data Provider. Both Visual Basic
.NET and C# are modern programming languages offering full support for arrays,
structured exception handling, and collections. Developers can leverage CLR
integration to write code that has more complex logic and is more suited for
computation tasks using languages such as Visual Basic .NET and C#. Managed code
is better suited than Transact-SQL for number crunching and complicated execution
logic, and features extensive support for many complex tasks, including string
handling and regular expressions. T-SQL is a better candidate in situations where the
code will mostly perform data access with little or no procedural logic. Even though
the example you are going to see in this article is best written using T-SQL, I will take
the managed code approach and show you how to leverage that feature.
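As an illustration of where managed code shines, the following hypothetical CLR user-defined function validates a string with a regular expression, a task that is painful in pure T-SQL. The function and its name are my own example, not part of the sample application:

```csharp
using System.Data.SqlTypes;
using System.Text.RegularExpressions;
using Microsoft.SqlServer.Server;

public class Validation
{
    // Regular-expression matching is trivial in managed code but awkward
    // in T-SQL. This hypothetical UDF validates US ZIP codes.
    [SqlFunction(IsDeterministic = true)]
    public static SqlBoolean IsValidZip(SqlString value)
    {
        if (value.IsNull)
            return SqlBoolean.False;
        return Regex.IsMatch(value.Value, @"^\d{5}(-\d{4})?$");
    }
}
```

Once deployed, such a function can be called from T-SQL like any other scalar UDF, e.g. in a WHERE clause.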

Creating CLR Based Stored Procedures

For the purposes of this example, create a new SQL Server Project using Visual C# as
the language of choice in Visual Studio 2005. Since you are creating a database
project, you need to associate a data source with the project. At the time of creating
the project, Visual Studio will automatically prompt you to either select an existing
database reference or add a new database reference. Choose pubs as the database.
Once the project is created, select Add Stored Procedure from the Project menu.
In the Add New Item dialog box, enter Authors.cs and click the Add button. After the
class is created, modify the code in the class to look like the following.

using System;
using System.Data;
using System.Data.Sql;
using System.Data.SqlClient;
using System.Data.SqlTypes;
using Microsoft.SqlServer.Server;

public class Authors
{
    [SqlProcedure]
    public static void GetAuthors()
    {
        SqlPipe sp = SqlContext.Pipe;
        using (SqlConnection conn =
            new SqlConnection("context connection=true"))
        {
            conn.Open();
            SqlCommand cmd = new SqlCommand();
            cmd.CommandType = CommandType.Text;
            cmd.Connection = conn;
            cmd.CommandText = "Select DatePart(second, GetDate()) " +
                " As timestamp, * from authors";
            SqlDataReader rdr = cmd.ExecuteReader();
            sp.Send(rdr);
        }
    }

    [SqlProcedure]
    public static void GetTitlesByAuthor(string authorID)
    {
        // Reference the parameter in the SQL text rather than concatenating
        // the value, so that the SqlParameter added below is actually used
        // and SQL injection is avoided.
        string sql = "select T.title, T.price, T.type, " +
            "T.pubdate from authors A" +
            " inner join titleauthor TA on A.au_id = TA.au_id " +
            " inner join titles T on TA.title_id = T.title_id " +
            " where A.au_id = @authorID";
        using (SqlConnection conn =
            new SqlConnection("context connection=true"))
        {
            conn.Open();
            SqlPipe sp = SqlContext.Pipe;
            SqlCommand cmd = new SqlCommand();
            cmd.CommandType = CommandType.Text;
            cmd.Connection = conn;
            cmd.CommandText = sql;
            SqlParameter paramauthorID =
                new SqlParameter("@authorID", SqlDbType.VarChar, 11);
            paramauthorID.Direction = ParameterDirection.Input;
            paramauthorID.Value = authorID;
            cmd.Parameters.Add(paramauthorID);
            SqlDataReader rdr = cmd.ExecuteReader();
            sp.Send(rdr);
        }
    }
}

Let us examine the above lines of code. The above code starts by importing the
required namespaces and then declares a class named Authors. There are two
important classes in the Microsoft.SqlServer.Server namespace that are specific to the
in-proc provider:

• SqlContext: This class encapsulates the extensions required to execute
in-process code in SQL Server 2005. In addition, it provides the transaction
and database connection that are part of the environment in which the routine
executes.
• SqlPipe: This class enables routines to send tabular results and messages to the
client. It is conceptually similar to the Response class found in ASP.NET in
that it can be used to send messages to the callers.

The Authors class contains two static methods named GetAuthors and
GetTitlesByAuthor. As the name suggests, the GetAuthors method simply returns all
the authors from the authors table in the pubs database and the GetTitlesByAuthor
method returns all the titles for a specific author.

Inside the GetAuthors method, you start by getting a reference to the SqlPipe object
by invoking the Pipe property of the SqlContext class.

SqlPipe sp = SqlContext.Pipe;

Then you open the connection to the database using the SqlConnection object. Note
that the connection string passed to the constructor of the SqlConnection object is set
to "context connection=true", meaning that the stored procedure uses the same
connection as the session in which it is executing, rather than opening a new one.

using (SqlConnection conn = new SqlConnection("context connection=true"))

You then open the connection by calling the Open() method.

conn.Open();

Then you create an instance of the SqlCommand object and set its properties
appropriately.

SqlCommand cmd = new SqlCommand();
cmd.CommandType = CommandType.Text;
cmd.Connection = conn;
cmd.CommandText = "Select DatePart(second, GetDate()) " +
    " As timestamp, * from authors";

Finally, you execute the SQL query by calling the ExecuteReader method of the
SqlCommand object.

SqlDataReader rdr = cmd.ExecuteReader();

Using the SqlPipe object, you then return tabular results and messages to the
client. This is accomplished through the Send method of the SqlPipe class.

sp.Send(rdr);

The Send method provides various overloads that are useful in transmitting data
through the pipe to the calling application. The overloads of the Send method are:

• Send(ISqlDataReader) - Sends the tabular results in the form of a
SqlDataReader object.
• Send(ISqlDataRecord) - Sends the results in the form of a SqlDataRecord
object.
• Send(ISqlError) - Sends error information in the form of a SqlError object.
• Send(String) - Sends messages in the form of a string value to the calling
application.

Both the methods in the Authors class utilize one of the Send methods that allows you
to send tabular results to the client application in the form of a SqlDataReader object.
Since the GetTitlesByAuthor method implementation is very similar to the
GetAuthors method, I will not be discussing that method in detail.
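As a further illustration, a procedure can combine these overloads to send a plain message followed by a hand-built row. This sketch uses the SqlDataRecord and SqlMetaData types; note that the exact overload signatures changed between the beta and final releases of SQL Server 2005, and this procedure is my own example, not part of the sample application:

```csharp
using System.Data;
using Microsoft.SqlServer.Server;

public class Messages
{
    [SqlProcedure]
    public static void SendGreeting()
    {
        SqlPipe sp = SqlContext.Pipe;

        // Send(String): a plain informational message, like PRINT in T-SQL.
        sp.Send("Building result...");

        // Send(SqlDataRecord): a single hand-built row with one column.
        SqlMetaData[] columns =
            { new SqlMetaData("greeting", SqlDbType.NVarChar, 50) };
        SqlDataRecord record = new SqlDataRecord(columns);
        record.SetString(0, "Hello from the CLR");
        sp.Send(record);
    }
}
```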

Now that the stored procedures are created, deploying them is simple and
straightforward. First, build the project by selecting Build->Build <ProjectName>
from the menu. This compiles all the classes in the project; any compilation errors
are displayed in the Error List pane. Once the project is built, you can deploy it to
SQL Server by selecting Build->Deploy <ProjectName> from the menu. This not
only registers the assembly in SQL Server but also creates the stored procedures
there. Once the stored procedures are deployed, they can be invoked from the data
access layer, which is the topic of focus in the next section.

Before executing the stored procedures, make sure you run the following SQL script
using SQL Server Management Studio to enable managed code execution in SQL
Server.

EXEC sp_configure 'clr enabled', 1;
RECONFIGURE WITH OVERRIDE;
GO
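For reference, the same registration that the Deploy command performs can be done by hand in T-SQL; the assembly name and path below are illustrative, not taken from the sample project:

```sql
CREATE ASSEMBLY NTierExample
FROM 'C:\Projects\NTierExample\bin\NTierExample.dll'
WITH PERMISSION_SET = SAFE;
GO

CREATE PROCEDURE GetAuthors
AS EXTERNAL NAME NTierExample.Authors.GetAuthors;
GO
```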

Data Access Layer using TableAdapter Configuration Wizard

Traditionally, creating data access layer classes is a manual process: you first create a
class and then add the appropriate methods to it. With Visual Studio 2005, Microsoft
has introduced a new TableAdapter Configuration Wizard that makes creating a data
access logic layer class a breeze. Using this wizard, you can create a data access logic
layer component without writing a single line of code, which greatly increases
developer productivity. Once you create those classes, you consume them exactly the
same way you consume built-in objects. Before looking at an example, let us briefly
review what a TableAdapter is. A TableAdapter connects to a database, executes
queries or stored procedures against it, and fills a DataTable with the data returned.
In addition to filling existing data tables, TableAdapters can return new data tables
filled with data. The TableAdapter Configuration Wizard allows you to create and edit
TableAdapters in strongly typed datasets. The wizard creates TableAdapters based on
SQL statements or existing stored procedures in the database, and through the wizard
you can also create new stored procedures in the database.

This section will discuss the creation of a data access component that will leverage the
stored procedures created in the previous step. To start, create a new ASP.NET web
site named NTierExample in Visual C# by selecting New Web Site from the File
menu as shown below.

To create a data component, begin by right clicking on the web site and selecting Add
New Item from the context menu. In the Add New Item dialog box, select DataSet
from the list of templates. Change the name of the file to Authors.xsd and click Add.

When you click Add, you will be asked whether you want to place the component
inside the App_Code directory. Click OK in the prompt; this brings up the
TableAdapter Configuration Wizard. In the first step of the wizard, you specify the
connection string; in the second step, you are asked whether to save the connection
string in the web.config file. Save it to web.config by checking the
check box.

In the next step, you will be asked to choose a command type. Select the Use
existing stored procedures option as shown below and click Next.

Clicking Next in the above screen brings up the following screen wherein you select
the stored procedure to use.

Click Next in the above dialog box and you will see the Choose Methods to
Generate dialog box wherein you can specify the name of the method that will be
used to invoke the stored procedure selected in the previous step. Specify the name of
the method as GetAuthors as shown below:

Clicking Next in the above screen brings up a final confirmation screen, where you
simply click Finish.
When you click on Finish, Visual Studio will create the required classes for you.
After the classes are created, you need to rename the class to Authors. After making
all the changes, the final output should look as follows.

That's all there is to creating a data access component using the TableAdapter
Configuration Wizard. As you can see, all you have to do is to provide the wizard with
certain information and Visual Studio hides all the complexities of creating the
underlying code for you.
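Once generated, the TableAdapter is consumed like any other class. The sketch below shows a page binding its output to a GridView; the namespace and class names follow the wizard's defaults and may differ in your project:

```csharp
using System;
using System.Data;

public partial class AuthorsList : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // The wizard generates a typed adapter (names assumed here)
        // inside a *TableAdapters namespace in App_Code.
        AuthorsTableAdapters.AuthorsTableAdapter adapter =
            new AuthorsTableAdapters.AuthorsTableAdapter();

        // GetAuthors() wraps the CLR stored procedure created earlier.
        DataTable authors = adapter.GetAuthors();

        // GridView1 is assumed to be declared in the .aspx markup.
        GridView1.DataSource = authors;
        GridView1.DataBind();
    }
}
```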

Now that you have created the data access layer method for the GetAuthors stored
procedure, you need to do the same thing for the GetTitlesByAuthor stored procedure.
To this end, add another TableAdapter to Authors.xsd by selecting
Data->Add->TableAdapter from the menu and follow the wizard steps. Remember to
specify the stored procedure name as GetTitlesByAuthor this time. Note that at the
time of writing, using Visual Studio 2005 Beta 2, I encountered some problems
getting the wizard to work because of bugs. If you run into any problem with the
wizard, simply exit it, select the appropriate TableAdapter in the designer, and select
View->Properties Window from the menu. Through the Properties window, you
should be able to perform all the configuration related to a TableAdapter.

Storing Utility Classes in App_Code Directory

You might remember that when you created the data component, you placed the data
component class inside the App_Code directory, which is a special directory used by
ASP.NET. It is very similar to the bin directory, with one key difference: while the
bin directory is designed for storing pre-compiled assemblies used by your
application, the App_Code directory is designed for storing class files that are
compiled dynamically at run time. This allows you to store classes for business logic
components, data access components, and so on in a single location in your
application, and use them from any page. Because the classes are compiled
dynamically at run time and automatically referenced by the application containing
the App_Code directory, you don't need to build the project before deploying it, nor
do you need to explicitly add a reference to the class. ASP.NET monitors the
App_Code directory and when new components are added, it dynamically compiles
them. This enables you to easily make changes to a component and deploy with a
simple XCOPY or with a drag-and-drop operation. In addition to simplifying the
deployment and referencing of components, the \App_Code directory also greatly
simplifies the creation and accessing of resource files (.resx) used in localization, as
well as automatically generating and compiling proxy classes for WSDL files (.wsdl).

With the introduction of this new directory, you might be wondering when to use this
directory when compared to the bin directory. If you have an assembly that you want
to use in your web site, create a bin subdirectory and then copy the .dll to that
subdirectory. If you are creating reusable components that you want to use only from
your ASP.NET pages, place them under the App_Code directory. Note that any class
you add to the App_Code directory is visible only within that Web site. So if you are
creating a class that needs to be shared across multiple Web sites, you are better off
creating it as part of a class library project and sharing that project among those
Web sites.

Conclusion

In this installment, you have seen the new features of SQL Server 2005 that enable
creating stored procedures in a managed language such as C#. You have also seen
the steps involved in creating data access components using the TableAdapter
Configuration Wizard, which greatly simplifies the process. In Part 2 of this article,
we will see how to consume these data access components from a business logic
layer. We will also look at the use of master pages in creating the user interface,
caching features in ASP.NET 2.0, and more.

This is the first installment of a multi-part article on doing N-Tier development with
Microsoft Visual Studio .NET. Windows DNA will be examined in retrospect and
expanded upon as a starting point. Comparisons will be made in the .NET world,
with code examples contained in the projects in the TVanover.zip file, which
demonstrate the proof of concept for this series of articles.

The purpose of this article is to examine a proof of concept for an architecture that
follows the DNA pattern in concept only. It will cover each of the layers, some in
more depth than others, to open up possibilities to .NET developers.

DNA Architecture

If DNA had to be defined in layman's terms, the definition would be scalability.
Microsoft promoted this concept of developing applications in function-specific
layers, or tiers, so that an application could be scaled almost endlessly and
maintained with a low total cost of ownership. These tiers include Presentation,
Business, Data Access, and Data Storage. Splitting tiers creates boundaries and
allows encapsulation of specific functionality away from tiers that should not have
knowledge of it. For example, the UI in the Presentation tier should never have
direct database access or access to the Data Access layer; it should only have
access to the Business Logic.

Scaling Up
This describes where an application resides on hardware. An application with
logically separated tiers can host multiple tiers on the same server. Such a design is
limited by the amount of memory and the number of processors that can be added to
the hardware. This is scaling up. Figure 1 shows all tiers located on a single
application server and is an example of logically separated N-Tier development.
Scaling Out
Splitting the layers onto physically separate tiers allows scaling out. In this
paradigm the tiers reside on different servers, and more servers can be added so that
load balancing and clustering can handle a larger number of simultaneous users.
Figure 2 shows a slightly more complex example of this concept, in which more web
servers and application servers can be added to support more simultaneous users.

Figure 3 shows a Visual Studio .NET solution for physically separated middle tier
components. Notice the difference in that there are no proxies brought to the web
server from the business tier as in the case of the DNA COM architecture shown
in Figure 1. Instead, a contract is exposed to the web application through the Simple
Object Access Protocol (SOAP), which essentially takes the place of DCOM and the
proxies generated by middle-tier COM components. A proxy class that allows
asynchronous access to the web service can be generated using Wsdl.exe.
This article will focus on physical separation of tiers that will enable the scalability
that is required in enterprise web based applications. This installment will focus
on the Data Access Layer and the following articles will continue walking up the
tiers until the entire proof of concept application is constructed.

COM+ Data Components

MTS and COM+ allowed for the creation of rich middle tier components that
abstracted the data and business logic from the user interface and centralized the
maintenance of that encapsulated logic. Developers had several options in
connecting to data through the command objects created in the data tier to
execute stored procedures on SQL Server databases. One of the drawbacks in
creating data access components has been an issue of maintainability of the
components as stored procedures change. When the parameters change in a
stored procedure the data access components must change and be recompiled to
accommodate the new parameter, the removed parameter, the data type change,
or the order of new parameters. Thus there exists an implied dependency
between data access components and the actual stored procedures that reside in
the DBMS. Such practices in object design and creation do not support object
reuse and subsequent recoding basic logic frequently occurs. The focus will on
these issues and a more flexible architecture and design utilizing the features that
are offered in the Microsoft .NET framework will be illustrated.

Hard Coded Parameters

In general, hard coding the parameters was most efficient in terms of performance
when creating the command object's Parameters collection in the old world of
ADODB and COM. It was also a maintenance issue throughout the development
cycle, as fields and parameters changed in the database to accommodate ongoing
development. Large applications that had several data
access libraries containing multiple classes defining different entity based
methods could result in thousands of lines of code dedicated to creating the
parameters and setting up their proper types, sizes, direction, and precision. Each
time parameters were added, removed or changed, compatibility was broken
between the business components and the data access components as well as the
data access components and the stored procedures. In some situations, you might
add a parameter or change the order of a parameter, which could result in a logic
error: the compilation was successful, but at runtime you would get data corruption,
type mismatches, overflows, or any number of unexpected errors and
difficult-to-find bugs. Listing 1 shows typical code in Visual Basic 6.0 using hard
coded parameters of the ADODB Command object to execute communication with
the database.

Listing 1 Hard coded parameters

With cmProduct
    .ActiveConnection = cnProduct
    .CommandText = "usp_ProductUpd"
    .CommandType = adCmdStoredProc
    '@ProductID parameter
    Set prmProduct = .CreateParameter("@ProductID", _
        adInteger, adParamInput, , ProductID)
    .Parameters.Append prmProduct
    '@ShelfID parameter
    Set prmProduct = .CreateParameter("@ShelfID", _
        adInteger, adParamInput, , ShelfID)
    .Parameters.Append prmProduct
    '@BinID parameter
    Set prmProduct = .CreateParameter("@BinID", _
        adInteger, adParamInput, , BinID)
    .Parameters.Append prmProduct
    .Execute
End With

Parameters Refresh

The ADODB Command object's Parameters collection contained a Refresh method
that allowed developers to create parameters dynamically. This helped overcome
the maintainability issues involved with hard coding parameters, but at a
measurable performance cost due to multiple calls to the database: when the
parameters were refreshed, a connection was made to the database to query those
parameters, and then when the command was executed a second trip to the
database was made.

In the case of physical separation of tiers, this created a measurable performance
hit on the application and therefore a tradeoff of development time versus
application performance. Compatibility issues were reduced when adding or
removing parameters, as the method parameters were passed inside a variant array.
You would only need to add the new fields or parameters to the UI, the business
object, and the stored procedure, and ensure that the ordinal position of the value
passed in all layers corresponded to the intended parameter in the stored procedure.
Listing 2 illustrates the code for the Refresh method.

Listing 2 Parameters Refresh

With oCmd
    .ActiveConnection = oCnn
    .CommandText = "usp_ProductUpd"
    .CommandType = adCmdStoredProc
    .Parameters.Refresh
    'zeroth parameter is @RETURN_VALUE
    For intValues = 1 To .Parameters.Count - 1
        .Parameters(intValues) = varValues(intValues - 1)
    Next intValues
End With
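ADO.NET has an analogous dynamic approach: SqlCommandBuilder.DeriveParameters also makes an extra round trip to the database to discover parameter metadata. A sketch, assuming a SQL Server connection string and the same usp_ProductUpd stored procedure:

```csharp
using System.Data;
using System.Data.SqlClient;

public static class DynamicParameters
{
    public static void Execute(string connectionString, params object[] values)
    {
        using (SqlConnection conn = new SqlConnection(connectionString))
        {
            conn.Open();
            SqlCommand cmd = new SqlCommand("usp_ProductUpd", conn);
            cmd.CommandType = CommandType.StoredProcedure;

            // Extra database round trip, like ADODB's Parameters.Refresh.
            SqlCommandBuilder.DeriveParameters(cmd);

            // Parameter 0 is @RETURN_VALUE, as in the VB 6.0 listing.
            for (int i = 1; i < cmd.Parameters.Count; i++)
                cmd.Parameters[i].Value = values[i - 1];

            cmd.ExecuteNonQuery();
        }
    }
}
```

As with Parameters.Refresh, this trades a round trip for not having to hand-maintain the parameter definitions.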

XML Template Adapter

There was a small stir in 2000 when several faster alternatives evolved. One of
these was encapsulating the parameter definitions in XML-based schema files and
populating the Parameters collection of the command object based on the attributes
or schema of the XML file. Performance was gained over the Refresh method and
there was less maintenance associated with changes to the data model. XML
templates could automatically be generated to the appropriate read directory
required for the data component. A caveat is that some file IO was required for each
command object being created unless the XML files were cached in IIS. Someone
also had to write a new template file when changes occurred to a stored procedure.
Even so, this eased maintenance somewhat, and the performance hit was a fair
trade-off for reduced project development time and easier maintenance. When
parameters changed in a stored procedure there was no compatibility issue, as the
parameters were passed to the method encapsulated inside a variant array. The
business and UI tiers were the only other places where changes would have to be
propagated in order to have proper communication across all tiers.
Listing 3 XML Template Adapter

'load the attribute data from the xml file back to the parameters object
For Each objElement In objNodeList
    If lngElement = -1 Then 'write @RETURN_VALUE parameter
        Set prmUpdate = .CreateParameter(, _
            CLng(objElement.getAttribute(ATTRIBUTE_DATA_TYPE)), _
            CLng(objElement.getAttribute(ATTRIBUTE_DIRECTION)), _
            CLng(objElement.getAttribute(ATTRIBUTE_SIZE)), Null)
        'use to keep the parameter in line with the variant array
        lngElement = 0
    Else 'write rest of parameters
        Set prmUpdate = .CreateParameter(, _
            CLng(objElement.getAttribute(ATTRIBUTE_DATA_TYPE)), _
            CLng(objElement.getAttribute(ATTRIBUTE_DIRECTION)), _
            CLng(objElement.getAttribute(ATTRIBUTE_SIZE)), _
            Parameters(lngElement))
        'use to keep the parameter in line with the variant array
        lngElement = lngElement + 1
    End If
    'assign precision to parameter in case of a decimal
    prmUpdate.Precision = CLng(objElement.getAttribute(ATTRIBUTE_PRECISION))
    .Parameters.Append prmUpdate
Next 'objElement

TVanover.DDL

Figure 4 illustrates the architecture associated with the rest of the article
installments. The top center rectangle is TVanover.DDL, the Dynamic Data
Layer. This is a class library that contains both transactional and
non-transactional methods that are commonly used in an application.
In the Human Resource Web Service examine the SearchEmployees method
shown in Listing 4.

Listing 4

[WebMethod(TransactionOption=TransactionOption.NotSupported)]
public DataSet SearchEmployees(string firstName, string lastName)
{
    ExecDataSet oExec = new ExecDataSet();
    DataSet dsReturn = new DataSet("Employees");
    object[] oParameters = new Object[] {firstName, lastName};
    dsReturn = oExec.ReturnDataSet("usp_EmployeeSearch", oParameters);
    return dsReturn;
}

This method's attribute specifies that no transaction is used, since a select
that does not alter data should not run inside a transaction. Two parameters
are passed to the method: the first name and last name of the individual
being sought. An instance of the ExecDataSet class is created, the two
parameters are wrapped inside an object array, the stored procedure name and
object array are passed to the ReturnDataSet method, and the result is
returned as a DataSet.
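A hypothetical consumer of this web method might look like the sketch below. The proxy class name HumanResourceService is an assumption; the actual name depends on the WSDL and the proxy-generation tool used:

```csharp
// Hypothetical client-side call through a generated web service proxy.
HumanResourceService oService = new HumanResourceService();
DataSet dsEmployees = oService.SearchEmployees("John", "Smith");
// The returned DataSet carries one table of matching employee rows.
foreach (DataRow oDr in dsEmployees.Tables[0].Rows)
{
    Console.WriteLine(oDr["LastName"]);
}
```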

Listing 5 goes inside the ExecDataSet.ReturnDataSet method.

Listing 5

public DataSet ReturnDataSet(string storedProc, params object[] oParameters)
{
    DataSet dsReturn = new DataSet();
    SqlDataAdapter oAdapter = DataAdapter.GetSelectParameters(storedProc,
        oParameters);
    try
    {
        oAdapter.Fill(dsReturn);
    }
    catch(Exception oException)
    {
        if (!EventLog.SourceExists(APPLICATION_NAME))
        {
            EventLog.CreateEventSource(APPLICATION_NAME, "Application");
        }
        EventLog.WriteEntry(oException.Source, oException.Message,
            EventLogEntryType.Error);
    }
    finally
    {
        oAdapter.SelectCommand.Connection.Close();
    }
    return dsReturn;
}

The params keyword in the ReturnDataSet signature allows a varying number of
parameters to be sent to the method. A params parameter must always be the
last in the method signature, and there can be only one per signature. A
DataSet is created, a SqlDataAdapter is created from the return of the static
method DataAdapter.GetSelectParameters, and the newly created DataSet is
filled from that SqlDataAdapter. If an error occurs it is written to the
application event log. The finally block closes the connection whether or not
an error occurred, and the DataSet is returned to the calling method.
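The behavior of params can be seen in a minimal sketch; the method and class names here are illustrative only:

```csharp
using System;

class ParamsDemo
{
    // params lets callers pass any number of arguments; it must be the
    // last parameter, and only one params parameter is allowed.
    static int CountArgs(string label, params object[] values)
    {
        Console.WriteLine("{0}: {1} value(s)", label, values.Length);
        return values.Length;
    }

    static void Main()
    {
        CountArgs("names", "Ann", "Bob");          // two arguments
        CountArgs("mixed", 1, "two", 3.0);         // any types, any count
        CountArgs("array", new object[] { 1, 2 }); // an array passed directly
    }
}
```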

Listing 6 takes a closer look inside the DataAdapter.GetSelectParameters
method.

Listing 6

public static SqlDataAdapter GetSelectParameters(string storedProc,
    params object[] oParameters)
{
    SqlDataAdapter adapter = new SqlDataAdapter();
    try
    {
        //this method requires parameters and will
        //throw an exception if called without them
        if(oParameters == null)
        {
            throw new ArgumentNullException(NULL_PARAMETER_ERROR);
        }
        else
        {
            DataTable oTable = ParameterCache.dsParameters.Tables[storedProc];
            int iParameters = 0;
            SqlCommand oCmd = new SqlCommand(storedProc);
            oCmd.CommandType = CommandType.StoredProcedure;
            //build the parameters collection
            //based on the cache and the values sent in
            foreach(DataRow oDr in oTable.Rows)
            {
                oCmd.Parameters.Add(CreateParameter(
                    Convert.ToString(oDr[PARAMETER_NAME]),
                    Convert.ToInt32(oDr[PARAMETER_DIRECTION]),
                    oParameters[iParameters]));
                iParameters++;
            }
            //add a return parameter
            oCmd.Parameters.Add(CreateParameter());
            oCmd.Connection = Connections.Connection(false);
            adapter.SelectCommand = oCmd;
        }
    }
    catch(Exception oException)
    {
        if (!EventLog.SourceExists(APPLICATION_NAME))
        {
            EventLog.CreateEventSource(APPLICATION_NAME, "Application");
        }
        EventLog.WriteEntry(oException.Source, oException.Message,
            EventLogEntryType.Error);
    }
    return adapter;
}

If no parameters are passed to this method an exception is thrown back to the
caller; otherwise a DataTable reference is taken from the static member
ParameterCache.dsParameters. This is a public static multi-table DataSet that
is populated on first access and remains in memory while the web application
is running. The table retrieved is named after the stored procedure that was
sent to this method. The foreach loop builds the command object, populating
each parameter from the data in the oTable object and the oParameters passed
to this method. A connection is returned through another static member call
that maintains a connection pool and determines whether the connection is
enlisted in the root transaction. The SqlDataAdapter is then returned to the
calling method to be executed.

Notice in Listing 7 that the ParameterCache class has a static constructor.

Listing 7

public class ParameterCache
{
    public static DataSet dsParameters;

    /// <summary>
    /// Uses a static constructor so that instantiation of
    /// this class is not needed
    /// </summary>
    static ParameterCache()
    {
        //instantiate and fill the dataset
        ParametersLookup oParameters = new ParametersLookup();
        dsParameters = oParameters.FillCache();
    }
}

The static modifier causes this constructor to run before anything else, the
first time the type is used, even though the class is never instantiated with
the new keyword. Inside the constructor an instance of the ParametersLookup
class is created and its FillCache method is called to fill the public static
dsParameters DataSet.
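The one-time, instantiation-free behavior of a static constructor can be demonstrated in isolation; the class and member names below are illustrative only:

```csharp
using System;

// Minimal sketch of the static-constructor pattern described above.
public class SettingsCache
{
    public static readonly DateTime LoadedAt;

    // Runs exactly once, before the first access to any static member,
    // without the class ever being created with the new keyword.
    static SettingsCache()
    {
        // Expensive one-time initialization (e.g. FillCache) goes here.
        LoadedAt = DateTime.UtcNow;
        Console.WriteLine("Cache initialized");
    }
}

class Program
{
    static void Main()
    {
        // The first touch of a static member triggers the constructor;
        // subsequent accesses reuse the already-initialized state.
        Console.WriteLine(SettingsCache.LoadedAt);
        Console.WriteLine(SettingsCache.LoadedAt);
    }
}
```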

In Listing 8 a new DataSet is created along with a SqlCommand object and a
SqlDataAdapter. A connection is returned from the Connections class, which is
explained in the following section. The usp_ParameterCache stored procedure
is called to populate the new DataSet, which then contains a table for each
user-defined stored procedure in the database specified by the connection.
The zeroth table contains the names of the stored procedures, and each
subsequent table is renamed after the procedure in its relevant position in
the tables collection. The connection is closed in the finally block and the
parameters DataSet is returned to the caller.

Listing 8

public DataSet FillCache()
{
    DataSet dsParameters = new DataSet();
    int iTables = 1;
    SqlCommand oCmd = new SqlCommand();
    SqlDataAdapter oDa = new SqlDataAdapter();
    try
    {
        oCmd.CommandText = "usp_ParameterCache";
        oCmd.CommandType = CommandType.StoredProcedure;
        oDa.SelectCommand = oCmd;
        oCmd.Connection = Connections.Connection(false);
        //grab the tables
        oDa.Fill(dsParameters);
        //name the tables based on the names from the first table
        foreach(DataRow oDr in dsParameters.Tables[0].Rows)
        {
            dsParameters.Tables[iTables].TableName =
                Convert.ToString(oDr["ProcName"]);
            iTables++;
        }
    }
    catch(Exception oException)
    {
        if (!EventLog.SourceExists(APPLICATION_NAME))
        {
            EventLog.CreateEventSource(APPLICATION_NAME, "Application");
        }
        EventLog.WriteEntry(oException.Source, oException.Message,
            EventLogEntryType.Error);
        throw new Exception(oException.Message, oException);
    }
    finally
    {
        oCmd.Connection.Close();
    }
    return dsParameters;
}

Listing 9 shows the Connection method of the Connections class.

Listing 9

public static SqlConnection Connection(bool enlist)
{
    string Connection = string.Format("User ID=DataLayer;Password=amiinyet;" +
        "Persist Security Info=False;Initial Catalog=Datalayer;" +
        "Data Source=coder;Max Pool Size=15;" +
        "Enlist={0};Connection Reset=true;Application Name=DataAccess;" +
        "Pooling=true;Connection Lifetime=0;", enlist);
    SqlConnection oConn = new SqlConnection(Connection);
    try
    {
        oConn.Open();
    }
    catch(Exception oException)
    {
        if (!EventLog.SourceExists(APPLICATION_NAME))
        {
            EventLog.CreateEventSource(APPLICATION_NAME, EVENT_LOG);
        }
        EventLog.WriteEntry(oException.Source, oException.Message,
            EventLogEntryType.Error);
    }
    return oConn;
}

The static Connection method takes a single parameter and returns an open
SqlConnection that is either enlisted in a transaction or not, depending on
the value passed in. The new connection is drawn from a pool; in this
instance the pool has a maximum size of 15 connections. The table in Listing
10 describes the connection string properties available to the connection
object.

Listing 10

User ID                Logon set up in SQL Server.
Password               Logon password.
Persist Security Info  If a connection has been opened, the password is not
                       sent again when the connection is re-used from the pool.
Initial Catalog        Database.
Data Source            Database server.
Max Pool Size          Number of connections to allow in the pool for reuse.
                       Care must be taken with the size of Max Pool Size, as
                       it allows that many connections to the database under
                       loaded conditions.
Enlist                 Add the connection to the current thread's transaction
                       context.
Connection Reset       Determines whether the database connection is reset
                       when being removed from the pool.
Application Name       Identity of the connection, visible in Perfmon.
Pooling                Draw from the pool if a connection is available, or
                       create a new one and add it to the pool.
Connection Lifetime    If there is no longer an immediate need, remove the
                       connection. A comparison in seconds determines the
                       immediate need.
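The same keywords can be composed with SqlConnectionStringBuilder rather than string.Format, which avoids quoting and spelling mistakes. This is a sketch using the values from Listing 9; note that the Connection Lifetime keyword is exposed as the LoadBalanceTimeout property:

```csharp
using System.Data.SqlClient;

class ConnectionStringDemo
{
    // Builds the Listing 9 connection string property by property.
    static string Build(bool enlist)
    {
        SqlConnectionStringBuilder builder = new SqlConnectionStringBuilder();
        builder.UserID = "DataLayer";
        builder.Password = "amiinyet";
        builder.PersistSecurityInfo = false;
        builder.InitialCatalog = "Datalayer";
        builder.DataSource = "coder";
        builder.MaxPoolSize = 15;
        builder.Enlist = enlist;
        builder.ApplicationName = "DataAccess";
        builder.Pooling = true;
        builder.LoadBalanceTimeout = 0; // the "Connection Lifetime" keyword
        return builder.ConnectionString;
    }
}
```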

In conclusion it should be noted that the ParameterCache.dsParameters DataSet
is populated only once, so accessing the parameters is nearly instantaneous
on subsequent calls; only the first call is expensive. Some experimenting
should also be done with the DDL to determine whether it is better suited to
run in the context of a library or a server application, as defined in the
AssemblyInfo.cs of the project. The library will also register itself in
COM+ Services the first time it is run, under the name given by the
ApplicationName attribute defined there as well.
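A sketch of the relevant AssemblyInfo.cs fragment is shown below. It assumes the DDL components derive from ServicedComponent; the attribute values are illustrative, not taken from the article's project files:

```csharp
using System.EnterpriseServices;

// ActivationOption.Library runs the components in the caller's process;
// ActivationOption.Server hosts them in a dedicated dllhost.exe process.
[assembly: ApplicationName("DataAccess")]
[assembly: ApplicationActivation(ActivationOption.Library)]
// A strong-name key is also required before the library can be
// registered with COM+ Services, e.g.:
// [assembly: System.Reflection.AssemblyKeyFile("DataAccess.snk")]
```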
