ADO.NET

The ADO.NET object model consists of the DataView, DataSet, and Data Provider. The DataView interacts with the DataSet to provide customized data views for UI binding. The DataSet caches data locally and allows disconnected data access, tracking changes made before updating the source. It represents data consistently across storage models. The Data Provider establishes a connection to a data source and executes commands to retrieve and process results, either directly or by loading into a DataSet. It supports transactions and includes classes like Connection, Command, and DataReader.

Copyright
© Attribution Non-Commercial (BY-NC)

ADO.NET
The ADO.NET Object Model

ADO.NET can work either through Web protocols, using XML, or in a more traditional client/server
architecture. The ADO.NET object model is made up of:
• Data View
• Data Set
• Data Provider

DataView
The DataView works against the DataSet and is a class designed specifically for UI objects to bind
to; it can provide customized views of the DataSet. It provides methods and properties that enable
UI objects such as a DataGrid to bind to a DataSet, and exposes properties such as AllowEdit and
Count that allow the UI object to work with the data in meaningful ways. A DataView is only used
in conjunction with a DataSet, never with a DataReader.
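
The binding-oriented members mentioned above can be exercised without any UI at all. The following is a minimal in-memory sketch (the table, column names, and data are invented for illustration) showing a DataView filtering and sorting a DataTable:

```csharp
using System;
using System.Data;

class DataViewDemo
{
    static void Main()
    {
        // Build a small in-memory table to stand in for data loaded from a source.
        DataTable orders = new DataTable("Orders");
        orders.Columns.Add("OrderID", typeof(int));
        orders.Columns.Add("Amount", typeof(decimal));
        orders.Rows.Add(1, 250m);
        orders.Rows.Add(2, 75m);
        orders.Rows.Add(3, 400m);

        // A DataView filters and sorts without changing the underlying table.
        DataView view = new DataView(orders);
        view.RowFilter = "Amount > 100";
        view.Sort = "Amount DESC";
        view.AllowEdit = false;            // a property a bound UI object can inspect

        Console.WriteLine(view.Count);         // 2
        Console.WriteLine(view[0]["OrderID"]); // 3 (largest amount first)
    }
}
```

A DataGrid bound to this view would show only the filtered, sorted rows; the underlying DataTable is untouched.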

DataSet
The DataSet is the core component of the disconnected architecture of ADO.NET that caches
data locally on the client. The DataSet is explicitly designed for data access independent of any
data source. As a result it can be used with multiple and differing data sources, XML data, or used
to manage data local to the application. The DataSet contains a collection of one or more
DataTable objects made up of rows and columns of data, as well as primary key, foreign key,
constraint and relation information about the data in the DataTable objects.
Another feature of the DataSet is that it tracks changes that are made to the data it holds before
updating the source data. DataSets are also fully XML-enabled: they contain methods such as
ReadXml and WriteXml that respectively consume and produce XML data easily. In an XML
scenario where there is no database, these methods enable use of ADO.NET without a Data
Provider being involved.
The DataSet object provides a consistent programming model that works with all current models
of data storage: flat, relational, and hierarchical. It represents the data that it holds as collections
and data types. The data within a DataSet is manipulated via the set of standard APIs exposed
through the DataSet and its child objects regardless of its data source.
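
To make that structure concrete, here is a small sketch (all table and column names are invented) that builds a DataSet containing two DataTable objects with a primary key and a relation, entirely in memory:

```csharp
using System;
using System.Data;

class DataSetStructureDemo
{
    static void Main()
    {
        DataSet ds = new DataSet("Shop");

        // Parent table with a primary key.
        DataTable customers = ds.Tables.Add("Customers");
        DataColumn custId = customers.Columns.Add("CustomerID", typeof(int));
        customers.Columns.Add("Name", typeof(string));
        customers.PrimaryKey = new DataColumn[] { custId };

        // Child table referencing the parent.
        DataTable orders = ds.Tables.Add("Orders");
        orders.Columns.Add("OrderID", typeof(int));
        DataColumn orderCust = orders.Columns.Add("CustomerID", typeof(int));

        // Relation (with its implied foreign-key constraint) between the tables.
        ds.Relations.Add("CustomerOrders", custId, orderCust);

        customers.Rows.Add(1, "Acme");
        orders.Rows.Add(10, 1);

        Console.WriteLine(ds.Tables.Count);    // 2
        Console.WriteLine(ds.Relations.Count); // 1
    }
}
```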

As the DataSet itself is disconnected from the data source, it must provide a way to track changes
to itself. For this purpose, the DataSet class provides a number of methods that can be used to
reconcile changes to itself with the actual database (or other data source) at a later point in time.
Some of these methods include HasChanges(), HasErrors(), GetChanges(), AcceptChanges(), and
RejectChanges(). These methods can be used to check for changes that have happened to the
DataSet, obtain modifications in the form of a changed DataSet, inspect the changes for errors,
and then either accept or reject those changes. If the changes need to be communicated back to
the back-end data store, the DataAdapter's Update() method can be called to apply them.
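
These change-tracking methods can be exercised without a database. A minimal sketch (the table and column names are invented):

```csharp
using System;
using System.Data;

class ChangeTrackingDemo
{
    static void Main()
    {
        DataSet ds = new DataSet();
        DataTable t = ds.Tables.Add("Items");
        t.Columns.Add("Name", typeof(string));
        t.Rows.Add("widget");
        ds.AcceptChanges();                  // baseline: nothing pending

        t.Rows.Add("gadget");                // a tracked modification
        Console.WriteLine(ds.HasChanges());  // True

        DataSet delta = ds.GetChanges();     // a DataSet holding just the changed rows
        Console.WriteLine(delta.Tables["Items"].Rows.Count); // 1

        ds.AcceptChanges();                  // mark everything as reconciled
        Console.WriteLine(ds.HasChanges());  // False
    }
}
```

In a real application the delta returned by GetChanges() is what you would hand to a DataAdapter's Update() method.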
The DataSet is similar to a Recordset with CursorLocation = adUseClient, CursorType =
adOpenStatic, and LockType = adLockOptimistic. However, the DataSet has extended capabilities
over the Recordset for managing application data.

Typed vs. Untyped Dataset


A DataSet can be typed or untyped. The difference is that a typed DataSet has a schema and an
untyped DataSet does not. Typed DataSets also have more support in Visual Studio.
For most of the work done with data in ADO.NET, you don't have to delve deeply into schemas.
When you use the tools to create a DataSet representing tables in your database, Visual Studio
.NET generates an XML Schema describing the structure of the DataSet. The XML Schema is then
used to generate a typed DataSet, in which data elements (tables, columns, and so on) are
available as first-class members.

A typed dataset gives us easier access to the contents of the table through strongly typed
programming that uses information from the underlying data schema. A typed DataSet has a
reference to an XML schema file:

string s = dsCustomersOrders1.Customers[0].CustomerID;

In contrast, if we are working with an untyped DataSet, the equivalent code looks like this:

string s = dsCustomersOrders1.Tables["Customers"].Rows[0]["CustomerID"].ToString();

Because the typed syntax is simpler and checked at compile time, typed DataSets are usually the more convenient choice.

Because a typed DataSet class inherits from the base DataSet class, the typed class assumes all
of the functionality of the DataSet class and can be used with methods that take an instance of a
DataSet class as a parameter

An untyped DataSet, in contrast, has no corresponding built-in schema. Like a typed DataSet, an
untyped DataSet contains tables and columns, but those are exposed only as collections. (However,
after manually creating the tables and other data elements in an untyped DataSet, you can export
the DataSet's structure as a schema using the DataSet's WriteXmlSchema method.)

Typed Datasets can be generated with the XSD.EXE tool, which is part of the VS.NET
environment. The tool accepts a valid XML Schema file as input, as well as the language to use
(C#, VB). The following line shows a typical command line of the tool that uses the XML Schema
file XSDSchemaFileName.xsd.

xsd.exe /d /l:C# XSDSchemaFileName.xsd /n:XSDSchema.Namespace

The /d switch tells the tool to generate DataSets, /l specifies the language to use, and the optional
/n switch defines the namespace for the generated classes. The generated DataSet classes are saved
in the source file XSDSchemaFileName.cs.
Once the typed DataSet classes are generated, the rest is straightforward: the provided methods
and properties guarantee data access in a type-safe manner.
The next step is to populate the typed DataSet at runtime from an XML file. The ReadXml()
and WriteXml() methods of the typed DataSet class do this easily, as the following two lines of
code show:

DataSet myDS = new DataSet();
myDS.ReadXml("input.xml", XmlReadMode.ReadSchema);

Data Provider
A .NET Framework data provider is used for connecting to a database, executing commands, and
retrieving results. Those results are either processed directly or placed in a DataSet in order to
be exposed to the user. The core objects of a data provider are:

• Connection
• Command
• DataAdapter
• DataReader

Along with the core classes listed above, a .NET Framework data provider also contains the
classes described in the following table.

Object Description

Transaction: Enables you to enlist commands in transactions at the data source.

CommandBuilder: A helper object that automatically generates the command properties of a
DataAdapter, or derives parameter information from a stored procedure and populates the
Parameters collection of a Command object.

Parameter: Defines input, output, and return-value parameters for commands and stored
procedures.

Exception: Returned when an error is encountered at the data source. For an error encountered
at the client, .NET Framework data providers throw a .NET Framework exception.

Error: Exposes the information from a warning or error returned by a data source.

ClientPermission: Provided for .NET Framework data provider code access security attributes.

The .NET Framework includes the .NET Framework Data Provider for SQL Server (for Microsoft
SQL Server version 7.0 or later), the .NET Framework Data Provider for OLE DB, and the .NET
Framework Data Provider for ODBC.

The .NET Framework Data Provider for SQL Server uses its own protocol to communicate
with SQL Server. It is lightweight and performs well because it is optimized to access a SQL
Server directly without adding an OLE DB or Open Database Connectivity (ODBC) layer. To use
the .NET Framework Data Provider for SQL Server, you must have access to Microsoft SQL Server
7.0 or later. For earlier versions of Microsoft SQL Server, use the .NET Framework Data Provider
for OLE DB with the SQL Server OLE DB Provider (SQLOLEDB). The .NET Framework Data Provider
for SQL Server classes are located in the System.Data.SqlClient namespace.
The .NET Framework Data Provider for SQL Server supports both local and distributed
transactions. For distributed transactions, the .NET Framework Data Provider for SQL Server, by
default, automatically enlists in a transaction and obtains transaction details from Windows 2000
Component Services.

The .NET Framework Data Provider for OLE DB communicates with an OLE DB data source
through both the OLE DB Service component, which provides connection pooling and transaction
services, and the OLE DB Provider for the data source.

Note The .NET Framework Data Provider for ODBC has a similar architecture to the .NET
Framework Data Provider for OLE DB; for example, it calls into an ODBC Service Component.

The .NET Framework Data Provider for OLE DB uses native OLE DB through COM interop to
enable data access. The .NET Framework Data Provider for OLE DB supports both local and
distributed transactions. For distributed transactions, the .NET Framework Data Provider for OLE
DB, by default, automatically enlists in a transaction and obtains transaction details from Windows
2000 Component Services.
The .NET Framework Data Provider for OLE DB classes are located in the System.Data.OleDb
namespace.

The .NET Framework Data Provider for ODBC uses native ODBC Driver Manager (DM) through
COM interop to enable data access. The ODBC data provider supports both local and distributed
transactions. For distributed transactions, the ODBC data provider, by default, automatically
enlists in a transaction and obtains transaction details from Windows 2000 Component Services.
The .NET Framework Data Provider for ODBC classes are located in the System.Data.Odbc
namespace.

The .NET Framework Data Provider for Oracle enables data access to Oracle data sources
through Oracle client connectivity software. The data provider supports Oracle client software
version 8.1.7 and later. The data provider supports both local and distributed transactions (the
data provider automatically enlists in existing distributed transactions, but does not currently
support the EnlistDistributedTransaction method).
.NET Framework Data Provider for Oracle classes are located in the System.Data.OracleClient
namespace and are contained in the System.Data.OracleClient.dll assembly.

Connections
The Connection object provides connectivity to a data source. Connections can be opened in two
ways:
1. Explicitly by calling the Open method on the connection;
2. Implicitly when using a DataAdapter.

SqlClient
SqlConnection myConn = new SqlConnection
("Data Source=localhost;Integrated Security=SSPI;Initial Catalog=northwind");
myConn.Open();

OleDb
OleDbConnection myConn = new OleDbConnection("Provider=SQLOLEDB;Data Source=localhost;Integrated Security=SSPI;Initial Catalog=northwind");
myConn.Open();

ODBC
OdbcConnection nwindConn = new OdbcConnection("Driver={SQL Server};Server=localhost;" +
"Trusted_Connection=yes;Database=northwind");
nwindConn.Open();

Oracle
OracleConnection nwindConn = new OracleConnection("Data Source=MyOracleServer;Integrated Security=yes;");
nwindConn.Open();

The Connection object has two events you can use to retrieve informational messages from a
data source or to determine if the state of a Connection has changed. The following table lists the
events of the Connection object.
Event Description

InfoMessage: Occurs when an informational message is returned from a data source. Informational
messages are messages from a data source that do not result in an exception being thrown.

StateChange: Occurs when the state of the Connection changes.


Closing the Connection
You must always close the Connection when you are finished using it. This can be done using
either the Close or Dispose methods of the Connection object. Connections are not implicitly
released when the Connection object falls out of scope or is reclaimed by garbage collection.

Commands
The Command object enables access to database commands to return data, modify data, run
stored procedures, and send or retrieve parameter information. Commands contain information
that is submitted to a database as a query, and, like connections, are represented by the
provider-specific classes SqlCommand and OleDbCommand. Functionally, once the Connections
are established and the Commands are executed the results are in the form of streams. These
resultant streams can be accessed either by DataReader object, or passed into a DataSet object
via a DataAdapter.
The SqlCommand class provides four different methods to execute a command: ExecuteReader,
ExecuteNonQuery, ExecuteScalar, and ExecuteXmlReader. These methods differ in the kind of
command they expect and, consequently, in the result they return. Note that an OleDbCommand
object does not support ExecuteXmlReader.
ExecuteReader expects to run a query command or a stored procedure that selects records. It
expects to have one or more resultsets to return.

cmd.Connection.Open();
SqlDataReader dr = cmd.ExecuteReader();
// process the resultset(s) here
cmd.Connection.Close();

You access the selected records using the SqlDataReader object and use the Read method to
loop through them. You move to the next resultset using the NextResult method.
ExecuteNonQuery expects to run a command, or a stored procedure, that affects the state of
the specified table. This means anything but a query command. You normally use this method to
issue an INSERT, UPDATE, DELETE, CREATE, or SET statement.
ExecuteNonQuery returns only the number of rows affected by the command execution, or –1
should this information be unavailable. It doesn't give you a chance to access any resultset
generated by the statement or the stored procedure. Actually, there's really nothing to prevent
you from using this method for a query command, but in this case you get neither the resultset
nor the number of the affected rows.

cmd.Connection.Open();
int nRecsAffected = cmd.ExecuteNonQuery();
cmd.Connection.Close();
// check the record(s) affected here

When a command is executed through ExecuteReader, the number of affected rows is also made
available through the RecordsAffected property of the SqlDataReader object. This property equals
-1 in case of errors or if a query command is executed.
ExecuteScalar expects to run a query command, or more likely a stored procedure, that returns
data. However, this method differs from ExecuteReader in that it returns, as a scalar value, only
the first column of the first row of the selected resultset.
cmd.Connection.Open();
Object o = cmd.ExecuteScalar();
cmd.Connection.Close();
// work on the scalar here

The method returns the value as a boxed object. It’s up to you to unbox or cast that value to the
proper, expected type.
ExecuteScalar turns out to be particularly useful for statistical or aggregate operations over a set
of data, where there is just one value to return to the caller. For this reason, you normally use
this method with stored procedures or aggregate queries rather than with row-returning SQL
statements.
ExecuteXmlReader builds up and returns an XmlReader object after a SELECT command that
exploits XML features in SQL Server 2000 has been issued.

DataAdapters
The DataAdapter provides a set of methods and properties to retrieve and save data between a
DataSet and its source data store. It does the actual work of putting returned data from a
database into a DataSet. It also manages reconciling how data should be updated against a
database.
The Fill method of the DataAdapter executes the SELECT command, while the Update method
executes the INSERT, UPDATE, or DELETE command for each changed row.
One of the great features about the DataAdapter object is that these commands can be set
explicitly in order to control the statements used at runtime to resolve changes, including the use
of stored procedures.

DataReaders
For large, read-only result sets, e.g. reading 5,000 rows of data, the .NET Framework
includes the DataReader object, which is a read-only, forward-only stream returned from the
database. Only one record at a time is ever present in memory.
The DataReader is similar to a Recordset with CursorType = adOpenForwardOnly and
LockType = adLockReadOnly.
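
Because a SqlDataReader requires a live connection, the shape of the forward-only Read loop is illustrated here with DataTableReader instead, which DataTable.CreateDataReader() returns and which implements the same DbDataReader API (the table contents are invented):

```csharp
using System;
using System.Data;
using System.Data.Common;

class ReaderLoopDemo
{
    static void Main()
    {
        DataTable t = new DataTable();
        t.Columns.Add("Name", typeof(string));
        t.Rows.Add("Alpha");
        t.Rows.Add("Beta");

        // DataTableReader exposes the same read-only, forward-only API as
        // SqlDataReader, so the loop has the same shape in both cases.
        using (DbDataReader dr = t.CreateDataReader())
        {
            int rows = 0;
            while (dr.Read())           // one record at a time
            {
                Console.WriteLine(dr["Name"]);
                rows++;
            }
            Console.WriteLine(rows);    // 2
        }
    }
}
```

With SqlDataReader the only change is that the reader comes from cmd.ExecuteReader() on an open connection.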

Comparisons between ADO and ADO.NET


ADO.NET is an evolutionary improvement on ADO. One way to quickly understand the advantages
of ADO.NET is to compare its features to those of ADO.
Memory-resident data representation
ADO: Uses the Recordset object, which looks like a single table.
ADO.NET: Uses the DataSet object, which can contain one or more tables represented by
DataTable objects.

Relationships between multiple tables
ADO: Requires a JOIN query to assemble data from multiple database tables in a single result
table.
ADO.NET: Supports the DataRelation object to associate rows in one DataTable object with rows
in another DataTable object.

Cursors
ADO: Utilizes server-side and client-side cursors.
ADO.NET: The architecture is disconnected, so cursors are not applicable.

Programmability
ADO: Uses the Connection object to transmit commands that address the underlying data
structure of a data source.
ADO.NET: Uses the strongly typed programming characteristic of XML, so data is self-describing.
Underlying data constructs such as tables, rows, and columns do not appear, making code easier
to read and to write.

Sharing disconnected data between tiers or components
ADO: Uses COM marshalling to transmit a disconnected record set. This supports only those data
types defined by the COM standard and requires type conversions, which demand system
resources.
ADO.NET: Transmits a DataSet as XML. The XML format places no restrictions on data types and
requires no type conversions.

Transmitting data through firewalls
ADO: Problematic, because firewalls are typically configured to prevent system-level requests
such as COM marshalling.
ADO.NET: Supported, because ADO.NET DataSet objects use XML, which can pass through
firewalls.

Scalability
ADO: Database locks and active database connections held for long durations contend for limited
database resources.
ADO.NET: Disconnected access, without retaining database locks or active database connections
for lengthy periods, limits contention for limited database resources.

Stored Procedure

CREATE PROCEDURE dbo.up_GetPublisherInfo
(
@pub_id int
)
AS
SELECT pub_id, pub_name, city, state, country FROM publishers WHERE pub_id = @pub_id

CREATE PROCEDURE dbo.up_UpdatePubInfo


(
@pub_id int,
@pub_name varchar(40),
@city varchar (20),
@state char (2),
@country varchar (30)
)
AS
UPDATE publishers SET pub_name = @pub_name, city = @city, state = @state,
country = @country WHERE pub_id = @pub_id

CREATE PROCEDURE dbo.up_InsertPubInfo
(
@pub_id int OUTPUT,
@pub_name varchar(40),
@city varchar(20),
@state char(2),
@country varchar(30)
)
AS
INSERT INTO publishers (pub_name, city, state, country) VALUES (@pub_name, @city, @state, @country)
SET @pub_id = @@IDENTITY -- populate the OUTPUT parameter with the generated key

SqlConnection conn;
SqlCommand command;
SqlDataAdapter adapter;
DataSet ds;
SqlParameter workParam = null;

conn = new SqlConnection("Server=localhost;UID=sa;PWD=sa;Database=pubs");
conn.Open();

SelectCommand
command = new SqlCommand("up_GetPublisherInfo", conn);
command.CommandType = CommandType.StoredProcedure;
//command = new SqlCommand("SELECT * FROM publishers WHERE pub_id = @pub_id", conn);
//command.CommandType = CommandType.Text;

command.Parameters.Add("@pub_id", SqlDbType.Int).Value = 1;

adapter = new SqlDataAdapter();
adapter.SelectCommand = command;
ds = new DataSet();
adapter.Fill(ds, "CardTest");

InsertCommand

adapter = new SqlDataAdapter();
adapter.InsertCommand = new SqlCommand("up_InsertPubInfo", conn);
adapter.InsertCommand.CommandType = CommandType.StoredProcedure;

workParam = adapter.InsertCommand.Parameters.Add("@RowCount", SqlDbType.Int);
workParam.Direction = ParameterDirection.ReturnValue;

workParam = adapter.InsertCommand.Parameters.Add("@pub_id", SqlDbType.Int, 0, "pub_id");
workParam.Direction = ParameterDirection.Output;

adapter.InsertCommand.Parameters.Add("@pub_name", SqlDbType.VarChar, 40, "pub_name");
adapter.InsertCommand.Parameters.Add("@city", SqlDbType.VarChar, 20, "city");
adapter.InsertCommand.Parameters.Add("@state", SqlDbType.Char, 2, "state");
adapter.InsertCommand.Parameters.Add("@country", SqlDbType.VarChar, 30, "country");
adapter.Update(ds);

UpdateCommand

adapter = new SqlDataAdapter();
adapter.UpdateCommand = new SqlCommand("up_UpdatePubInfo", conn);
adapter.UpdateCommand.CommandType = CommandType.StoredProcedure;

workParam = adapter.UpdateCommand.Parameters.Add("@pub_id", SqlDbType.Int);
workParam.SourceColumn = "pub_id";
workParam.SourceVersion = DataRowVersion.Original;

workParam = adapter.UpdateCommand.Parameters.Add("@pub_name", SqlDbType.VarChar, 40);
workParam.SourceColumn = "pub_name";
workParam.SourceVersion = DataRowVersion.Current;

workParam = adapter.UpdateCommand.Parameters.Add("@city", SqlDbType.VarChar, 20);
workParam.SourceColumn = "city";
workParam.SourceVersion = DataRowVersion.Current;

workParam = adapter.UpdateCommand.Parameters.Add("@state", SqlDbType.Char, 2);
workParam.SourceColumn = "state";
workParam.SourceVersion = DataRowVersion.Current;

workParam = adapter.UpdateCommand.Parameters.Add("@country", SqlDbType.VarChar, 30);
workParam.SourceColumn = "country";
workParam.SourceVersion = DataRowVersion.Current;

adapter.Update(ds);
DeleteCommand

command = new SqlCommand("up_DeletePubInfo", conn);
command.CommandType = CommandType.StoredProcedure;

workParam = command.Parameters.Add("@pub_id", SqlDbType.Int);
workParam.SourceColumn = "pub_id";
workParam.SourceVersion = DataRowVersion.Original;

adapter = new SqlDataAdapter();
adapter.DeleteCommand = command;
adapter.Update(ds);

Imposing Constraints on DataSet


Adding Existing Constraints to a DataSet

The Fill method of the DataAdapter fills a DataSet only with table columns and rows from a
data source; though constraints are commonly set by the data source, the Fill method does not
add this schema information to the DataSet by default. To populate a DataSet with existing
primary key constraint information from a data source, you can either call the FillSchema
method of the DataAdapter, or set the MissingSchemaAction property of the DataAdapter to
AddWithKey before calling Fill. This will ensure that primary key constraints in the DataSet
reflect those at the data source. Foreign key constraint information is not included and will need
to be created explicitly.
Adding schema information to a DataSet before filling it with data ensures that primary key
constraints are included with the DataTable objects in the DataSet. As a result, when additional
calls to Fill the DataSet are made, the primary key column information is used to match new
rows from the data source with current rows in each DataTable, and current data in the tables is
overwritten with data from the data source. Without the schema information, the new rows from
the data source are appended to the DataSet, resulting in duplicate rows.
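
The row-matching behavior described above can be illustrated in memory: once a primary key is present (as FillSchema or AddWithKey would supply), loading a row with an existing key value overwrites the current row rather than appending a duplicate. A sketch with invented data:

```csharp
using System;
using System.Data;

class KeyMatchingDemo
{
    static void Main()
    {
        DataTable t = new DataTable("Orders");
        DataColumn id = t.Columns.Add("OrderID", typeof(int));
        t.Columns.Add("Status", typeof(string));
        t.PrimaryKey = new DataColumn[] { id }; // the schema FillSchema/AddWithKey would add

        // First "fill": one row arrives from the source.
        t.LoadDataRow(new object[] { 1, "open" }, true);

        // Second "fill" of the same key: with a primary key in place the
        // existing row is matched and overwritten instead of appended.
        t.LoadDataRow(new object[] { 1, "shipped" }, true);

        Console.WriteLine(t.Rows.Count);        // 1, not 2
        Console.WriteLine(t.Rows[0]["Status"]); // shipped
    }
}
```

Without the PrimaryKey assignment, the second LoadDataRow call would append a second row, which is exactly the duplicate-row behavior the text warns about.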

Note: If a column in a data source is identified as auto-incrementing, the FillSchema method, or
the Fill method with a MissingSchemaAction of AddWithKey, creates a DataColumn with an
AutoIncrement property set to true. However, you will need to set the AutoIncrementStep and
AutoIncrementSeed values yourself.

Using FillSchema or setting the MissingSchemaAction to AddWithKey requires extra processing at
the data source to determine primary key column information. This additional processing can
hinder performance. If you know the primary key information at design time, it is recommended
that you specify the primary key column or columns explicitly in order to achieve optimal
performance.

The FillSchema method takes the target DataSet, a SchemaType enumeration value, and a source
table name, as shown below:

adapter.FillSchema(dataSet, SchemaType.Mapped, "TableName");

The SchemaType enumeration has two values:

Mapped: Apply any existing table mappings to the incoming schema. Configure the DataSet with
the transformed schema.

Source: Ignore any table mappings on the DataAdapter. Configure the DataSet using the incoming
schema without applying any transformations.

Note that the SchemaType should usually be set to Mapped so that any established table and
column mappings are applied. The following code example adds schema information to a DataSet
using FillSchema.

DataSet DS = new DataSet();
DA.FillSchema(DS, SchemaType.Mapped, "Orders");
// SchemaType can be SchemaType.Mapped or SchemaType.Source
DA.Fill(DS, "Orders");

The following code example shows how to add schema information to a DataSet by setting the
DataAdapter's MissingSchemaAction property to AddWithKey before calling Fill.

DataSet DS = new DataSet();
DA.MissingSchemaAction = MissingSchemaAction.AddWithKey;
DA.Fill(DS, "Orders");

Multiple Result Sets


If the DataAdapter encounters multiple result sets returned from the SelectCommand, it will
create multiple tables in the DataSet. The tables will be given an incremental default name of
TableN, starting with "Table" for Table0. If a table name is passed as an argument to the
FillSchema method, the tables will be given an incremental default name of TableNameN,
starting with "TableName" for TableName0.

Note: If the FillSchema method of the OleDbDataAdapter object is called for a command that
returns multiple result sets, only the schema information from the first result set is returned.
When returning schema information for multiple result sets using the OleDbDataAdapter, it is
recommended that you specify a MissingSchemaAction of AddWithKey and obtain the schema
information when calling the Fill method.

Now let us examine the output generated by the FillSchema method to see how it maps the
constraints from the source table. To write the schema information to an XML file, you can add
the following line of code, which generates schemafile.xml on the desktop:

DS.WriteXmlSchema(@"C:\Documents and Settings\Administrator\Desktop\schemafile.xml");

In the SchemaFile.xml file you will see some lines like the following:

<xs:element name="OrderID" msdata:ReadOnly="true"
    msdata:AutoIncrement="true" type="xs:int" />
....... some other schema here .......

<xs:unique name="Constraint1" msdata:PrimaryKey="true">
    <xs:selector xpath=".//Orders" />
    <xs:field xpath="OrderID" />
</xs:unique>
...

The XML listing above shows that the OrderID column is a primary key column, auto-incremented,
read-only, and typed as an integer.
If you compare the schema of the source table with the SchemaFile.xml file, you will find that two
pieces of schema information are missing: the schema information for foreign key constraints, and
the AutoIncrementSeed and AutoIncrementStep values for the auto-increment column (OrderID).
The FillSchema method adds a structure of a DataTable to the destination DataSet and configures
only the following DataColumn properties if they exist at the data source:
• AllowDBNull
• AutoIncrement (You must set AutoIncrementStep and AutoIncrementSeed separately)
• MaxLength
• ReadOnly
• Unique
Using FillSchema to populate a DataSet with schema information before filling the DataSet with
data ensures that primary key constraints are included with the DataTable objects in the DataSet.
However, use of FillSchema or MissingSchemaAction in your code requires extra run-time
processing, so if you know the primary key information at design time, it is advisable to set the
primary key explicitly on the column(s) after filling the DataSet rather than having this done
automatically.
Note that not all constraints at the source can be enforced in the DataSet at run time: primary
keys and unique constraints are added to the ConstraintCollection, but other constraint types are
not. Foreign key constraint information is not included and must be created explicitly, as
discussed earlier in this article.

How to add Constraints to a Table(s) inside a DataSet?


In order to maintain the integrity of data, constraints are applied to a column or several columns.
To enforce constraints in the tables inside the DataSet you will have to set the value of the
EnforceConstraints property of the DataSet to True.
In this article we will discuss the following one by one:
1. Unique Constraints
2. Primary Key Constraints
3. Creating AutoIncrement Columns, and
4. Foreign Key Constraint
Let us have a look at the following chunk of code to understand how these constraints can be
imposed on tables within a DataSet. The following lines of code fill two tables from the Northwind
database into the DataSet and add a relation between the two tables on the CustomerID column:

SqlDataAdapter DA = new SqlDataAdapter("select * from Customers", Con);
DataSet DS = new DataSet();
DA.Fill(DS, "Customers");
DA = new SqlDataAdapter("select * from Orders", Con);
DA.Fill(DS, "Orders");

//Let us add a relation between the "CustomerID" columns as follows:

DS.Relations.Add("CustomerOrders", DS.Tables["Customers"].Columns["CustomerID"],
DS.Tables["Orders"].Columns["CustomerID"]);

//Alternatively, you can add the relation as follows:

// DataColumn ParentCol = DS.Tables["Customers"].Columns["CustomerID"];
// DataColumn ChildCol = DS.Tables["Orders"].Columns["CustomerID"];
// DataRelation CustOrderRel = new DataRelation("CustomerOrders", ParentCol, ChildCol, true);
// DS.Relations.Add(CustOrderRel);

Let us examine the following code line in the above listing:

DS.Relations.Add("CustomerOrders", DS.Tables["Customers"].Columns["CustomerID"],
DS.Tables["Orders"].Columns["CustomerID"]);

This line of code will establish a Parent-Child relationship between the Customers and Orders
table for CustomerID column. By default, the DataSet object is designed in such a way that
required constraints are imposed automatically when you create a relationship between two tables
by adding a DataRelation to the DataSet. However, you can disable this behavior by specifying a
fourth parameter in the Add method in the above line of code - i.e. createConstraints = false -
when creating the relation. This value is true by default.
This Add method can optionally take four parameters as follows:

DS.Relations.Add(relationName, parentDataColumn, childDataColumn, createConstraints);

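As a minimal in-memory sketch of the four-parameter overload (the tables and column here are illustrative, not loaded from a live Northwind connection), passing false as the last argument keeps the relation from generating any constraints:

```csharp
using System;
using System.Data;

// Two illustrative in-memory tables standing in for Customers and Orders.
var customers = new DataTable("Customers");
customers.Columns.Add("CustomerID", typeof(string));

var orders = new DataTable("Orders");
orders.Columns.Add("CustomerID", typeof(string));

var ds = new DataSet();
ds.Tables.Add(customers);
ds.Tables.Add(orders);

// createConstraints: false suppresses the UniqueConstraint on the parent
// and the ForeignKeyConstraint on the child that Add would otherwise create.
ds.Relations.Add("CustomerOrders",
    customers.Columns["CustomerID"],
    orders.Columns["CustomerID"],
    false);

Console.WriteLine(ds.Relations.Count);          // 1
Console.WriteLine(customers.Constraints.Count); // 0
Console.WriteLine(orders.Constraints.Count);    // 0
```

Because no constraints are created, the relation can still be used for navigation between related rows, but duplicate or orphaned CustomerID values will not be rejected.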
Adding a Unique Constraint
A UniqueConstraint ensures that all values in the specified column or columns are unique across
the rows of the table (although multiple NULL values are allowed). You can create a unique
constraint for a column by using the UniqueConstraint constructor. The following example creates
a UniqueConstraint for the CustomerID column of the Customers DataTable in the DataSet.

DataTable custTable = DS.Tables["Customers"];

UniqueConstraint custUC = new UniqueConstraint("UC1", custTable.Columns["CustomerID"]);
custTable.Constraints.Add(custUC);

A unique constraint can also be created and imposed on a column by setting the Unique property
of the column to True. Alternatively, you can remove a unique constraint by setting the Unique
property of a single column to False. If you wish to add UniqueConstraints to a column that
shouldn't allow NULL values, simply set the column's AllowDBNull property to false, like so:
DS.Tables["Customers"].Columns["CustomerID"].Unique = true;
DS.Tables["Customers"].Columns["CustomerID"].AllowDBNull = false;
The following example creates a UniqueConstraint for two columns of a DataTable.

workTable.Constraints.Add(new UniqueConstraint(new DataColumn[]
    {workTable.Columns["CustLName"], workTable.Columns["CustFName"]}));

Please note that if, at run time, the data violates the imposed UniqueConstraint rules, you can
add a try-catch block to handle the related exceptions as follows:

try
{
    // Add some relevant code here.
}
catch (System.Data.ConstraintException Exp1)
{
    Label1.Text = "This CustomerID is already present!";
}
catch (System.Data.NoNullAllowedException Exp)
{
    Label2.Text = "CustomerID should not be Null!";
}

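To see the ConstraintException actually being raised and caught, here is a self-contained sketch using a hypothetical in-memory table (console output instead of the UI labels above):

```csharp
using System;
using System.Data;

var table = new DataTable("Customers");
DataColumn col = table.Columns.Add("CustomerID", typeof(string));
col.Unique = true;        // imposes a UniqueConstraint on the column
col.AllowDBNull = false;  // NULL values are rejected as well

table.Rows.Add("ALFKI");

try
{
    table.Rows.Add("ALFKI"); // duplicate value violates the UniqueConstraint
}
catch (ConstraintException)
{
    Console.WriteLine("This CustomerID is already present!");
}
```

The offending row is never added, so the table still contains only the first "ALFKI" row after the exception is handled.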
Creating AutoIncrement Columns
The AutoIncrement property of a column may be set to true so that the column's values are
generated automatically, and therefore uniquely, as new rows are added to the table. The other
associated properties of an AutoIncrement column are AutoIncrementSeed, AutoIncrementStep and
ReadOnly. AutoIncrementSeed is the starting value of the column and AutoIncrementStep is the
amount by which the value is increased each time a new record is added. (Typically both of these
values are set to 1.)
The following example creates a column that starts with a value of 1,000 and is automatically
incremented by one step when a new row is inserted into the DataTable.

DataTable myTable = new DataTable("EmployeeTable");

// Create a DataColumn and set various properties.
DataColumn myColumn = new DataColumn("EmployeeId");
myColumn.DataType = System.Type.GetType("System.Int32");
myColumn.ReadOnly = true;
myColumn.AutoIncrement = true;
myColumn.AutoIncrementSeed = 1000;
myColumn.AutoIncrementStep = 1;

// Add the column to the table.
myTable.Columns.Add(myColumn);

// Let us add 10 rows to the table.
DataRow myRow;
for (int intX = 0; intX <= 9; intX++)
{
    myRow = myTable.NewRow();
    myTable.Rows.Add(myRow);
}

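With a seed of 1000 and a step of 1, the ten rows above receive EmployeeId values 1000 through 1009. A condensed, runnable check of that behavior (same settings, hypothetical names):

```csharp
using System;
using System.Data;

var table = new DataTable("EmployeeTable");
var id = new DataColumn("EmployeeId", typeof(int));
id.AutoIncrement = true;
id.AutoIncrementSeed = 1000;
id.AutoIncrementStep = 1;
id.ReadOnly = true;  // values are supplied by the table, not the caller
table.Columns.Add(id);

// Add 10 rows; each NewRow() call fills in the next auto-increment value.
for (int i = 0; i < 10; i++)
    table.Rows.Add(table.NewRow());

Console.WriteLine(table.Rows[0]["EmployeeId"]); // 1000
Console.WriteLine(table.Rows[9]["EmployeeId"]); // 1009
```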
Creating a Primary Key Column for a Table
When a DataColumn is set as the PrimaryKey of a DataTable, the table automatically sets the
Unique property of the column to true and its AllowDBNull property to false. Conversely, if
you remove a column from the PrimaryKey property of a DataTable, the associated UniqueConstraint
is removed. The following example defines a single column as the primary key:
CustomerTable.PrimaryKey = new DataColumn[] {CustomerTable.Columns["CustID"]} ;
The following example defines two columns as a primary key:

CustomerTable.PrimaryKey = new DataColumn[] { CustomerTable.Columns["CustLName"],


CustomerTable.Columns["CustFName"]};

// ... Or you can use ...


DataColumn PrimKey[2];
PrimKey[0] = CustomerTable.Columns["CustLName"]’
PrimKey[1] = CustomerTable.Columns["CustFName"];
CustomerTable.PrimaryKey = PrimKey;

Foreign Key Constraint
Generally, a ForeignKeyConstraint is used to impose rules for updates and deletes in related
tables. For example, if a value in a row of one table is updated or deleted, and that same value
is also used in one or more related tables, a ForeignKeyConstraint determines what changes are
made in the related tables. To take different actions on delete and update, the DeleteRule and
UpdateRule properties of the ForeignKeyConstraint are set with the following values:
Rule        Description
Cascade     Updates or deletes related rows. [default]
None        Specifies that no action be taken on related rows.
SetDefault  Sets values in related rows to the default value.
SetNull     Sets values in related rows to DBNull.
When creating a ForeignKeyConstraint, you can pass the DeleteRule and UpdateRule values to the
constructor as arguments, or you can set them as properties as in the following example (where
the UpdateRule value is set to the default, Cascade).

ForeignKeyConstraint custOrderFK = new ForeignKeyConstraint("CustOrderFK",
    custDS.Tables["CustTable"].Columns["CustomerID"],
    custDS.Tables["OrdersTable"].Columns["CustomerID"]);

// Don't allow a customer with existing orders to be deleted.
custOrderFK.DeleteRule = Rule.None;
custDS.Tables["OrdersTable"].Constraints.Add(custOrderFK);

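To see the default Cascade rule in action, here is an in-memory sketch (illustrative table names; not a live database). Deleting the parent row also deletes its related child rows:

```csharp
using System;
using System.Data;

var ds = new DataSet();
DataTable cust = ds.Tables.Add("CustTable");
cust.Columns.Add("CustomerID", typeof(string));
DataTable ord = ds.Tables.Add("OrdersTable");
ord.Columns.Add("CustomerID", typeof(string));

cust.Rows.Add("ALFKI");
ord.Rows.Add("ALFKI");
ord.Rows.Add("ALFKI");

// DeleteRule defaults to Rule.Cascade, so no explicit assignment is needed.
var fk = new ForeignKeyConstraint("CustOrderFK",
    cust.Columns["CustomerID"], ord.Columns["CustomerID"]);
ord.Constraints.Add(fk);

cust.Rows[0].Delete(); // cascades: both order rows are marked deleted
ds.AcceptChanges();    // committing removes the deleted rows

Console.WriteLine(ord.Rows.Count); // 0
```

Setting DeleteRule to Rule.None instead, as in the listing above, would make the Delete call throw rather than cascade while related orders exist.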
AcceptRejectRule
If you are well versed with the DataSet, DataTable and DataRow objects then you are probably
aware of the AcceptChanges and RejectChanges methods that are common to all three of these
objects. When a DataSet contains ForeignKeyConstraints, invoking the AcceptChanges or
RejectChanges methods causes the AcceptRejectRule to be enforced. The AcceptRejectRule
property of the ForeignKeyConstraint determines which action is to be taken on the child rows
when AcceptChanges or RejectChanges is called on the parent row.
The following table shows the list of the values to which the AcceptRejectRule can be set.
AcceptRejectRule  Description
None              Specifies that no action be taken on child rows. [default]
Cascade           Accepts or rejects changes to child rows.
The following line of code shows how to set the value for AcceptRejectRule property of the
ForeignKeyConstraint:

custOrderFK.AcceptRejectRule = AcceptRejectRule.None;

Summary
As we saw in this article, the DataSet object supports constraints in order to ensure the integrity
of data. Constraints are rules that are applied when rows are inserted, updated, or deleted in a
table. You can define the following types of constraints/columns in the DataTables of a DataSet:
• Unique constraint - checks that new values in a column are unique in the table.
• Foreign-key constraint - defines rules for updating or deleting records in a child table
when a record in the master table is updated or deleted.
• AutoIncrement columns - ensure that the values in the column are unique and increment
automatically.
• Primary key constraint - checks that values in the column are unique and not Null.
The DataSet itself supports a Boolean EnforceConstraints property that specifies whether
constraints are to be enforced or not. The default value of this property is True. When you
need to turn the constraints off, simply set this property to False.
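A short sketch of this property with a hypothetical table: duplicate values are accepted while enforcement is off, and re-enabling enforcement then raises a ConstraintException:

```csharp
using System;
using System.Data;

var ds = new DataSet();
DataTable table = ds.Tables.Add("Customers");
table.Columns.Add("CustomerID", typeof(string)).Unique = true;

ds.EnforceConstraints = false; // constraint checking is suspended
table.Rows.Add("ALFKI");
table.Rows.Add("ALFKI");       // duplicate slips in while enforcement is off

try
{
    ds.EnforceConstraints = true; // re-validation detects the duplicates
}
catch (ConstraintException)
{
    Console.WriteLine("One or more rows violate a constraint.");
}
```

This pattern is common when bulk-loading data that is known to be messy: load with enforcement off, clean the rows, then switch enforcement back on.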
You create foreign-key constraints by creating a DataRelation object in a DataSet. In addition to
allowing you to programmatically get information about related records, a DataRelation object
allows you to define foreign-key constraint rules.
The FillSchema method of the DataAdapter can be used to copy the constraint definitions of the
tables at the data source into the DataSet. However, it is not possible to map all the constraints
this way; foreign-key constraints, for example, must be explicitly declared on the tables inside
the DataSet. It is always advisable to enforce these constraints explicitly at design time to
avoid extra processing at run time.