D7 DevelopersGuide (0431-0623)

Using common data control features

The OnStateChange event occurs when the state of the dataset changes. When this
event occurs, you can examine the dataset’s State property to determine its current
state.
For example, the following OnStateChange event handler enables or disables buttons
or menu items based on the current state:
procedure TForm1.DataSource1StateChange(Sender: TObject);
begin
  CustTableEditBtn.Enabled := (CustTable.State = dsBrowse);
  CustTableCancelBtn.Enabled := CustTable.State in [dsInsert, dsEdit, dsSetKey];
  CustTableActivateBtn.Enabled := CustTable.State in [dsInactive];
  ...
end;
Note For more information about dataset states, see “Determining dataset states” on
page 24-3.

Editing and updating data


All data controls except the navigator display data from a database field. In addition,
you can use them to edit and update data as long as the underlying dataset allows it.
Note Unidirectional datasets never permit users to edit and update data.

Enabling editing in controls on user entry


A dataset must be in dsEdit state to permit editing to its data. If the data source’s
AutoEdit property is True (the default), the data control handles the task of putting
the dataset into dsEdit mode as soon as the user tries to edit its data.
If AutoEdit is False, you must provide an alternate mechanism for putting the dataset
into edit mode. One such mechanism is to use a TDBNavigator control with an Edit
button, which lets users explicitly put the dataset into edit mode. For more
information about TDBNavigator, see “Navigating and manipulating records” on
page 20-29. Alternately, you can write code that calls the dataset’s Edit method when
you want to put the dataset into edit mode.
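For example, the following button OnClick handler puts the dataset into edit mode explicitly (a sketch; the dataset and button names are illustrative):

procedure TForm1.EditButtonClick(Sender: TObject);
begin
  if CustTable.State = dsBrowse then
    CustTable.Edit;  { put the dataset into dsEdit state }
end;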

Editing data in a control


A data control can only post edits to its associated dataset if the dataset’s CanModify
property is True. CanModify is always False for unidirectional datasets. Some datasets
have a ReadOnly property that lets you specify whether CanModify is True.
Note Whether a dataset can update data depends on whether the underlying database
table permits updates.
Even if the dataset’s CanModify property is True, the Enabled property of the data
source that connects the dataset to the control must be True as well before the control
can post updates back to the database table. The Enabled property of the data source
determines whether the control can display field values from the dataset, and
therefore also whether a user can edit and post values. If Enabled is True (the default),
controls can display field values.

Using data controls 20-5



Finally, you can control whether the user can even enter edits to the data that is
displayed in the control. The ReadOnly property of the data control determines if a
user can edit the data displayed by the control. If False (the default), users can edit
data. Clearly, you will want to ensure that the control’s ReadOnly property is True
when the dataset’s CanModify property is False. Otherwise, you give users the false
impression that they can affect the data in the underlying database table.
In all data controls except TDBGrid, when you modify a field, the modification is
copied to the underlying dataset when you Tab from the control. If you press Esc
before you Tab from a field, the data control abandons the modifications, and the
value of the field reverts to the value it held before any modifications were made.
In TDBGrid, modifications are posted when you move to a different record; you can
press Esc in any field of the record before moving to another record to cancel all
changes to the record.
When a record is posted, Delphi checks all data-aware controls associated with the
dataset for a change in status. If there is a problem updating any fields that contain
modified data, Delphi raises an exception, and no modifications are made to the
record.
Note If your application caches updates (for example, using a client dataset), all
modifications are posted to an internal cache. These modifications are not applied to
the underlying database table until you call the dataset’s ApplyUpdates method.
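For example, assuming a client dataset named CustTable that caches updates, the following code posts any pending edit and then applies the cached changes:

if CustTable.State in [dsEdit, dsInsert] then
  CustTable.Post;           { post the current record to the cache }
CustTable.ApplyUpdates(0);  { apply cached changes; 0 stops at the first error }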

Disabling and enabling data display


When your application iterates through a dataset or performs a search, you should
temporarily prevent refreshing of the values displayed in data-aware controls each
time the current record changes. Preventing refreshing of values speeds the iteration
or search and prevents annoying screen-flicker.
DisableControls is a dataset method that disables display for all data-aware controls
linked to a dataset. As soon as the iteration or search is over, your application should
immediately call the dataset’s EnableControls method to re-enable display for the
controls.
Usually you disable controls before entering an iterative process. The iterative
process itself should take place inside a try...finally statement so that you can re-
enable controls even if an exception occurs during processing. The finally clause
should call EnableControls. The following code illustrates how you might use
DisableControls and EnableControls in this manner:
CustTable.DisableControls;
try
  CustTable.First; { Go to first record, which sets EOF False }
  while not CustTable.EOF do { Cycle until EOF is True }
  begin
    { Process each record here }
    ...
    CustTable.Next; { EOF False on success; EOF True when Next fails on last record }
  end;
finally
  CustTable.EnableControls;
end;

Refreshing data display


The Refresh method for a dataset flushes local buffers and re-fetches data for an open
dataset. You can use this method to update the display in data-aware controls if you
think that the underlying data has changed because other applications have
simultaneous access to the data used in your application. If you are using cached
updates, before you refresh the dataset you must apply any updates the dataset has
currently cached.
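For example, assuming a client dataset named CustTable that caches updates, you might apply any pending changes before refreshing:

CustTable.ApplyUpdates(-1); { -1 applies the updates regardless of the number of errors }
CustTable.Refresh;          { re-fetch data and update the display }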
Refreshing can sometimes lead to unexpected results. For example, if a user is
viewing a record deleted by another application, then the record disappears the
moment your application calls Refresh. Data can also appear to change if another user
changes a record after you originally fetched the data and before you call Refresh.

Enabling mouse, keyboard, and timer events


The Enabled property of a data control determines whether it responds to mouse,
keyboard, or timer events, and passes information to its data source. The default
setting for this property is True.
To prevent mouse, keyboard, or timer events from reaching a data control, set its
Enabled property to False. When Enabled is False, the data source that connects the
control to its dataset does not receive information from the data control. The data
control continues to display data, but the text displayed in the control is dimmed.

Choosing how to organize the data


When you build the user interface for your database application, you have choices to
make about how you want to organize the display of information and the controls
that manipulate that information.
One of the first decisions to make is whether you want to display a single record at a
time, or multiple records.
In addition, you will want to add controls to navigate and manipulate records. The
TDBNavigator control provides built-in support for many of the functions you may
want to perform.

Displaying a single record


In many applications, you may only want to provide information about a single
record of data at a time. For example, an order-entry application may display the
information about a single order without indicating what other orders are currently
logged. This information probably comes from a single record in an orders dataset.


Applications that display a single record are usually easy to read and understand,
because all database information is about the same thing (in the previous case, the
same order). The data-aware controls in these user interfaces represent a single field
from a database record. The Data Controls page of the Component palette provides a
wide selection of controls to represent different kinds of fields. These controls are
typically data-aware versions of other controls that are available on the Component
palette. For example, the TDBEdit control is a data-aware version of the standard
TEdit control which enables users to see and edit a text string.
Which control you use depends on the type of data (text, formatted text, graphics,
boolean information, and so on) contained in the field.

Displaying data as labels


TDBText is a read-only control similar to the TLabel component on the Standard page
of the Component palette. A TDBText control is useful when you want to provide
display-only data on a form that allows user input in other controls. For example,
suppose a form is created around the fields in a customer list table, and that once the
user enters a street address, city, and state or province information in the form, you
use a dynamic lookup to automatically determine the zip code field from a separate
table. A TDBText component tied to the zip code table could be used to display the
zip code field that matches the address entered by the user.
TDBText gets the text it displays from a specified field in the current record of a
dataset. Because TDBText gets its text from a dataset, the text it displays is dynamic—
the text changes as the user navigates the database table. Therefore you cannot
specify the display text of TDBText at design time as you can with TLabel.
Note When you place a TDBText component on a form, make sure its AutoSize property is
True (the default) to ensure that the control resizes itself as necessary to display data
of varying widths. If AutoSize is False, and the control is too small, data display is
clipped.

Displaying and editing fields in an edit box


TDBEdit is a data-aware version of an edit box component. TDBEdit displays the
current value of a data field to which it is linked and permits it to be edited using
standard edit box techniques.
For example, suppose CustomersSource is a TDataSource component that is active and
linked to an open TClientDataSet called CustomersTable. You can then place a TDBEdit
component on a form and set its properties as follows:
• DataSource: CustomersSource
• DataField: CustNo
The data-aware edit box component immediately displays the value of the current
row of the CustNo column of the CustomersTable dataset, both at design time and at
runtime.
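You can establish the same links in code at runtime. The following sketch assumes the component names used above:

DBEdit1.DataSource := CustomersSource; { data source linked to CustomersTable }
DBEdit1.DataField := 'CustNo';         { field displayed and edited by the control }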


Displaying and editing text in a memo control


TDBMemo is a data-aware component—similar to the standard TMemo component—
that can display lengthy text data. TDBMemo displays multi-line text, and permits a
user to enter multi-line text as well. You can use TDBMemo controls to display large
text fields or text data contained in binary large object (BLOB) fields.
By default, TDBMemo permits a user to edit memo text. To prevent editing, set the
ReadOnly property of the memo control to True. To display tabs and permit users to
enter them in a memo, set the WantTabs property to True. To limit the number of
characters users can enter into the database memo, use the MaxLength property. The
default value for MaxLength is 0, meaning that there is no character limit other than
that imposed by the operating system.
Several properties affect how the database memo appears and how text is entered.
You can supply scroll bars in the memo with the ScrollBars property. To prevent
word wrap, set the WordWrap property to False. The Alignment property determines
how the text is aligned within the control. Possible choices are taLeftJustify (the
default), taCenter, and taRightJustify. To change the font of the text, use the Font
property.
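For example, the following code sets several of these properties at runtime (a sketch; the control name DBMemo1 is illustrative):

DBMemo1.ReadOnly := False;        { permit editing of the memo text }
DBMemo1.WantTabs := True;         { allow users to enter tab characters }
DBMemo1.MaxLength := 1024;        { limit entry to 1024 characters }
DBMemo1.ScrollBars := ssVertical; { supply a vertical scroll bar }
DBMemo1.WordWrap := True;
DBMemo1.Alignment := taLeftJustify;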
At runtime, users can cut, copy, and paste text to and from a database memo control.
You can accomplish the same task programmatically by using the CutToClipboard,
CopyToClipboard, and PasteFromClipboard methods.
Because the TDBMemo can display large amounts of data, it can take time to populate
the display at runtime. To reduce the time it takes to scroll through data records,
TDBMemo has an AutoDisplay property that controls whether the accessed data
should be displayed automatically. If you set AutoDisplay to False, TDBMemo
displays the field name rather than actual data. Double-click inside the control to
view the actual data.

Displaying and editing text in a rich edit memo control


TDBRichEdit is a data-aware component—similar to the standard TRichEdit
component—that can display formatted text stored in a binary large object (BLOB)
field. TDBRichEdit displays formatted, multi-line text, and permits a user to enter
formatted multi-line text as well.
Note While TDBRichEdit provides properties and methods to enter and work with rich
text, it does not provide any user interface components to make these formatting
options available to the user. Your application must implement the user interface to
surface rich text capabilities.
By default, TDBRichEdit permits a user to edit memo text. To prevent editing, set the
ReadOnly property of the rich edit control to True. To display tabs and permit users to
enter them in a memo, set the WantTabs property to True. To limit the number of
characters users can enter into the database memo, use the MaxLength property. The
default value for MaxLength is 0, meaning that there is no character limit other than
that imposed by the operating system.


Because the TDBRichEdit can display large amounts of data, it can take time to
populate the display at runtime. To reduce the time it takes to scroll through data
records, TDBRichEdit has an AutoDisplay property that controls whether the accessed
data should be displayed automatically. If you set AutoDisplay to False, TDBRichEdit
displays the field name rather than actual data. Double-click inside the control to
view the actual data.

Displaying and editing graphics fields in an image control


TDBImage is a data-aware control that displays graphics contained in BLOB fields.
By default, TDBImage permits a user to edit a graphics image by cutting and pasting
to and from the Clipboard using the CutToClipboard, CopyToClipboard, and
PasteFromClipboard methods. You can, instead, supply your own editing methods
attached to the event handlers for the control.
By default, an image control displays as much of a graphic as fits in the control,
cropping the image if it is too big. You can set the Stretch property to True to resize
the graphic to fit within an image control as it is resized.
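For example, assuming an image control named DBImage1, the following code makes the graphic scale to the control and pastes a bitmap from the Clipboard programmatically (the Clipboard object is declared in the Clipbrd unit):

DBImage1.Stretch := True;        { resize the graphic to fit the control }
if Clipboard.HasFormat(CF_BITMAP) then
  DBImage1.PasteFromClipboard;   { paste a bitmap into the BLOB field }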
Because the TDBImage can display large amounts of data, it can take time to populate
the display at runtime. To reduce the time it takes to scroll through data records,
TDBImage has an AutoDisplay property that controls whether the accessed data
should be displayed automatically. If you set AutoDisplay to False, TDBImage displays
the field name rather than actual data. Double-click inside the control to view the
actual data.

Displaying and editing data in list and combo boxes


There are four data controls that provide the user with a set of default data values to
choose from at runtime. These are data-aware versions of standard list and combo
box controls:
• TDBListBox, which displays a scrollable list of items from which a user can choose
to enter in a data field. A data-aware list box displays a default value for a field in
the current record and highlights its corresponding entry in the list. If the current
row’s field value is not in the list, no value is highlighted in the list box. When a
user selects a list item, the corresponding field value is changed in the underlying
dataset.
• TDBComboBox, which combines the functionality of a data-aware edit control and
a drop-down list. At runtime it can display a drop-down list from which a user can
pick from a predefined set of values, and it can permit a user to enter an entirely
different value.
• TDBLookupListBox, which behaves like TDBListBox except the list of display items
is looked up in another dataset.
• TDBLookupComboBox, which behaves like TDBComboBox except the list of display
items is looked up in another dataset.


Note At runtime, users can use an incremental search to find list box items. When the
control has focus, for example, typing ‘ROB’ selects the first item in the list box
beginning with the letters ‘ROB’. Typing an additional ‘E’ selects the first item
starting with ‘ROBE’, such as ‘Robert Johnson’. The search is case-insensitive.
Backspace and Esc cancel the current search string (but leave the selection intact), as
does a two second pause between keystrokes.

Using TDBListBox and TDBComboBox


When using TDBListBox or TDBComboBox, you must use the String List editor at
design time to create the list of items to display. To bring up the String List editor,
click the ellipsis button for the Items property in the Object Inspector. Then type in
the items that you want to have appear in the list. At runtime, use the methods of the
Items property to manipulate its string list.
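For example, the following code populates the list at runtime (the control name and item strings are illustrative):

DBComboBox1.Items.Clear;         { remove any design-time entries }
DBComboBox1.Items.Add('Cash');
DBComboBox1.Items.Add('Check');
DBComboBox1.Items.Add('Credit');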
When a TDBListBox or TDBComboBox control is linked to a field through its DataField
property, the field value appears selected in the list. If the current value is not in the
list, no item appears selected. However, TDBComboBox displays the current value for
the field in its edit box, regardless of whether it appears in the Items list.
For TDBListBox, the Height property determines how many items are visible in the
list box at one time. The IntegralHeight property controls how the last item can
appear. If IntegralHeight is False (the default), the bottom of the list box is determined
by the ItemHeight property, and the bottom item may not be completely displayed. If
IntegralHeight is True, the visible bottom item in the list box is fully displayed.
For TDBComboBox, the Style property determines user interaction with the control. By
default, Style is csDropDown, meaning a user can enter values from the keyboard, or
choose an item from the drop-down list. The following properties determine how the
Items list is displayed at runtime:
• Style determines the display style of the component:
• csDropDown (default): Displays a drop-down list with an edit box in which the
user can enter text. All items are strings and have the same height.
• csSimple: Combines an edit control with a fixed size list of items that is always
displayed. When setting Style to csSimple, be sure to increase the Height
property so that the list is displayed.
• csDropDownList: Displays a drop-down list and edit box, but the user cannot
enter or change values that are not in the drop-down list at runtime.
• csOwnerDrawFixed and csOwnerDrawVariable: Allows the items list to display
values other than strings (for example, bitmaps) or to use different fonts for
individual items in the list.
• DropDownCount: the maximum number of items displayed in the list. If the
number of Items is greater than DropDownCount, the user can scroll the list. If the
number of Items is less than DropDownCount, the list will be just large enough to
display all the Items.


• ItemHeight: The height of each item when style is csOwnerDrawFixed.


• Sorted: If True, then the Items list is displayed in alphabetical order.
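The following sketch sets several of these properties at runtime (the control name DBComboBox1 is illustrative):

DBComboBox1.Style := csDropDownList; { restrict entry to values in the list }
DBComboBox1.DropDownCount := 8;      { show at most eight items at a time }
DBComboBox1.Sorted := True;          { display the Items list alphabetically }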

Displaying and editing data in lookup list and combo boxes


Lookup list boxes and lookup combo boxes (TDBLookupListBox and
TDBLookupComboBox) present the user with a restricted list of choices from which to
set a valid field value. When a user selects a list item, the corresponding field value is
changed in the underlying dataset.
For example, consider an order form whose fields are tied to the OrdersTable.
OrdersTable contains a CustNo field corresponding to a customer ID, but OrdersTable
does not have any other customer information. The CustomersTable, on the other
hand, contains a CustNo field corresponding to a customer ID, and also contains
additional information, such as the customer’s company and mailing address. It
would be convenient if the order form enabled a clerk to select a customer by
company name instead of customer ID when creating an invoice. A
TDBLookupListBox that displays all company names in CustomersTable enables a user
to select the company name from the list, and set the CustNo on the order form
appropriately.
These lookup controls derive the list of display items from one of two sources:
• A lookup field defined for a dataset.
To specify list box items using a lookup field, the dataset to which you link the
control must already define a lookup field. (This process is described in “Defining
a lookup field” on page 25-9). To specify the lookup field for the list box items,
a Set the DataSource property of the list box to the data source for the dataset
containing the lookup field to use.
b Choose the lookup field to use from the drop-down list for the DataField
property.
When you activate a table associated with a lookup control, the control recognizes
that its data field is a lookup field, and displays the appropriate values from the
lookup.
• A secondary data source, data field, and key.
If you have not defined a lookup field for a dataset, you can establish a similar
relationship using a secondary data source, a field value to search on in the
secondary data source, and a field value to return as a list item. To specify a
secondary data source for list box items,
a Set the DataSource property of the list box to the data source for the control.
b Choose a field into which to insert looked-up values from the drop-down list
for the DataField property. The field you choose cannot be a lookup field.
c Set the ListSource property of the list box to the data source for the dataset that
contains the field whose values you want to look up.


d Choose a field to use as a lookup key from the drop-down list for the KeyField
property. The drop-down list displays fields for the dataset associated with the
data source you specified in step c. The field you choose need not be part of an
index, but if it is, lookup performance is even faster.
e Choose a field whose values to return from the drop-down list for the ListField
property. The drop-down list displays fields for the dataset associated with the
data source you specified in step c.
When you activate a table associated with a lookup control, the control recognizes
that its list items are derived from a secondary source, and displays the
appropriate values from that source.
To specify the number of items that appear at one time in a TDBLookupListBox
control, use the RowCount property. The height of the list box is adjusted to fit this
row count exactly.
To specify the number of items that appear in the drop-down list of
TDBLookupComboBox, use the DropDownRows property instead.
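For example, the following code establishes the secondary-source relationship at runtime, using the orders and customers example above (a sketch; the data source names OrdersSource and CustomersSource and the Company field are assumptions):

DBLookupComboBox1.DataSource := OrdersSource;    { data source for OrdersTable }
DBLookupComboBox1.DataField := 'CustNo';         { field that receives the looked-up value }
DBLookupComboBox1.ListSource := CustomersSource; { data source for CustomersTable }
DBLookupComboBox1.KeyField := 'CustNo';          { lookup key in CustomersTable }
DBLookupComboBox1.ListField := 'Company';        { field whose values appear in the list }
DBLookupComboBox1.DropDownRows := 10;            { show up to ten rows in the drop-down }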
Note You can also set up a column in a data grid to act as a lookup combo box. For
information on how to do this, see “Defining a lookup list column” on page 20-21.

Handling Boolean field values with check boxes


TDBCheckBox is a data-aware check box control. It can be used to set the values of
Boolean fields in a dataset. For example, a customer invoice form might have a check
box control that when checked indicates the customer is tax-exempt, and when
unchecked indicates that the customer is not tax-exempt.
The data-aware check box control manages its checked or unchecked state by
comparing the value of the current field to the contents of ValueChecked and
ValueUnchecked properties. If the field value matches the ValueChecked property, the
control is checked. Otherwise, if the field matches the ValueUnchecked property, the
control is unchecked.
Note The values in ValueChecked and ValueUnchecked cannot be identical.
Set the ValueChecked property to a value the control should post to the database if the
control is checked when the user moves to another record. By default, this value is set
to “true,” but you can make it any alphanumeric value appropriate to your needs.
You can also enter a semicolon-delimited list of items as the value of ValueChecked. If
any of the items matches the contents of that field in the current record, the check box
is checked. For example, you can specify a ValueChecked string like:
DBCheckBox1.ValueChecked := 'True;Yes;On';
If the field for the current record contains values of “true,” “Yes,” or “On,” then the
check box is checked. Comparison of the field to ValueChecked strings is case-
insensitive. If a user checks a box for which there are multiple ValueChecked strings,
the first string is the value that is posted to the database.


Set the ValueUnchecked property to a value the control should post to the database if
the control is not checked when the user moves to another record. By default, this
value is set to “false,” but you can make it any alphanumeric value appropriate to
your needs. You can also enter a semicolon-delimited list of items as the value of
ValueUnchecked. If any of the items matches the contents of that field in the current
record, the check box is unchecked.
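For example, the following assignments pair a list of checked values with a corresponding list of unchecked values (the control name is illustrative):

DBCheckBox1.ValueChecked := 'True;Yes;On';
DBCheckBox1.ValueUnchecked := 'False;No;Off';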
A data-aware check box is disabled whenever the field for the current record does
not contain one of the values listed in the ValueChecked or ValueUnchecked properties.
If the field with which a check box is associated is a logical field, the check box is
always checked if the contents of the field is True, and it is unchecked if the contents
of the field is False. In this case, strings entered in the ValueChecked and
ValueUnchecked properties have no effect on logical fields.

Restricting field values with radio controls


TDBRadioGroup is a data-aware version of a radio group control. It enables you to set
the value of a data field with a radio button control where there is a limited number
of possible values for the field. The radio group includes one button for each value a
field can accept. Users can set the value for a data field by selecting the desired radio
button.
The Items property determines the radio buttons that appear in the group. Items is a
string list. One radio button is displayed for each string in Items, and each string
appears to the right of a radio button as the button’s label.
If the current value of a field associated with a radio group matches one of the strings
in the Items property, that radio button is selected. For example, if three strings,
“Red,” “Yellow,” and “Blue,” are listed for Items, and the field for the current record
contains the value “Blue,” then the third button in the group appears selected.
Note If the field does not match any strings in Items, a radio button may still be selected if
the field matches a string in the Values property. If the field for the current record
does not match any strings in Items or Values, no radio button is selected.
The Values property can contain an optional list of strings that can be returned to the
dataset when a user selects a radio button and posts a record. Strings are associated
with buttons in numeric sequence. The first string is associated with the first button,
the second string with the second button, and so on. For example, suppose Items
contains “Red,” “Yellow,” and “Blue,” and Values contains “Magenta,” “Yellow,”
and “Cyan.” If a user selects the button labeled “Red,” “Magenta” is posted to the
database.
If strings for Values are not provided, the Item string for a selected radio button is
returned to the database when a record is posted.
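For example, the following code sets up the color example above at runtime (a sketch; the control name DBRadioGroup1 is illustrative):

DBRadioGroup1.Items.Clear;
DBRadioGroup1.Items.Add('Red');      { button labels }
DBRadioGroup1.Items.Add('Yellow');
DBRadioGroup1.Items.Add('Blue');
DBRadioGroup1.Values.Clear;
DBRadioGroup1.Values.Add('Magenta'); { posted when "Red" is selected }
DBRadioGroup1.Values.Add('Yellow');
DBRadioGroup1.Values.Add('Cyan');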

Displaying multiple records


Sometimes you want to display many records in the same form. For example, an
invoicing application might show all the orders made by a single customer on the
same form.


To display multiple records, use a grid control. Grid controls provide a multi-field,
multi-record view of data that can make your application’s user interface more
compelling and effective. They are discussed in “Viewing and editing data with
TDBGrid” on page 20-15 and “Creating a grid that contains other
data-aware controls” on page 20-28.
Note You can’t display multiple records when using a unidirectional dataset.
You may want to design a user interface that displays both fields from a single record
and grids that represent multiple records. There are two models that combine these
two approaches:
• Master-detail forms: You can represent information from both a master table and
a detail table by including both controls that display a single field and grid
controls. For example, you could display information about a single customer with
a detail grid that displays the orders for that customer. For information about
linking the underlying tables in a master-detail form, see “Creating master/detail
relationships” on page 24-35 and “Establishing master/detail relationships
using parameters” on page 24-47.
• Drill-down forms: In a form that displays multiple records, you can include single
field controls that display detailed information from the current record only. This
approach is particularly useful when the records include long memos or graphic
information. As the user scrolls through the records of the grid, the memo or
graphic updates to represent the value of the current record. Setting this up is very
easy. The synchronization between the two displays is automatic if the grid and
the memo or image control share a common data source.
Tip It is generally not a good idea to combine these two approaches on a single form. It is
usually confusing for users to understand the data relationships in such forms.

Viewing and editing data with TDBGrid


A TDBGrid control lets you view and edit records in a dataset in a tabular grid
format.
Figure 20.1 TDBGrid control
[Figure not shown: the screen shot labels the current field, the column titles, and the record indicator.]


Three factors affect the appearance of records displayed in a grid control:


• Existence of persistent column objects defined for the grid using the Columns
editor. Persistent column objects provide great flexibility setting grid and data
appearance. For information on using persistent columns, see “Creating a
customized grid” on page 20-17.
• Creation of persistent field components for the dataset displayed in the grid. For
more information about creating persistent field components using the Fields
editor, see Chapter 25, “Working with field components.”
• The dataset’s ObjectView property setting for grids displaying ADT and array
fields. See “Displaying ADT and array fields” on page 20-22.
A grid control has a Columns property that is itself a wrapper on a TDBGridColumns
object. TDBGridColumns is a collection of TColumn objects representing all of the
columns in a grid control. You can use the Columns editor to set up column
attributes at design time, or use the Columns property of the grid to access the
properties, events, and methods of TDBGridColumns at runtime.
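For example, the following sketch iterates the column collection at runtime (the grid name DBGrid1 is an assumption):

```pascal
procedure TForm1.AlignColumnsRight;
var
  I: Integer;
begin
  { Walk the TDBGridColumns collection; each item is a TColumn. }
  for I := 0 to DBGrid1.Columns.Count - 1 do
    DBGrid1.Columns[I].Alignment := taRightJustify;
end;
```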

Using a grid control in its default state


The State property of the grid’s Columns property indicates whether persistent
column objects exist for the grid. Columns.State is a runtime-only property that is
automatically set for a grid. The default state is csDefault, meaning that persistent
column objects do not exist for the grid. In that case, the display of data in the grid is
determined primarily by the properties of the fields in the grid’s dataset, or, if there
are no persistent field components, by a default set of display characteristics.
When the grid’s Columns.State property is csDefault, grid columns are dynamically
generated from the visible fields of the dataset and the order of columns in the grid
matches the order of fields in the dataset. Every column in the grid is associated with
a field component. Property changes to field components immediately show up in
the grid.
Using a grid control with dynamically-generated columns is useful for viewing and
editing the contents of arbitrary tables selected at runtime. Because the grid’s
structure is not set, it can change dynamically to accommodate different datasets. A
single grid with dynamically-generated columns can display a Paradox table at one
moment, then switch to display the results of an SQL query when the grid’s
DataSource property changes or when the DataSet property of the data source itself is
changed.
You can change the appearance of a dynamic column at design time or runtime, but
what you are actually modifying are the corresponding properties of the field
component displayed in the column. Properties of dynamic columns exist only so
long as a column is associated with a particular field in a single dataset. For example,
changing the Width property of a column changes the DisplayWidth property of the
field associated with that column. Changes made to column properties that are not
based on field properties, such as Font, exist only for the lifetime of the column.

20-16 Developer’s Guide



If a grid’s dataset consists of dynamic field components, the fields are destroyed each
time the dataset is closed. When the field components are destroyed, all dynamic
columns associated with them are destroyed as well. If a grid’s dataset consists of
persistent field components, the field components exist even when the dataset is
closed, so the columns associated with those fields also retain their properties when
the dataset is closed.
Note Changing a grid’s Columns.State property to csDefault at runtime deletes all column
objects in the grid (even persistent columns), and rebuilds dynamic columns based
on the visible fields of the grid’s dataset.
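For example, the following line (again assuming a grid named DBGrid1) discards all existing column objects and rebuilds dynamic columns:

```pascal
{ Reverting to dynamic columns; any persistent columns are destroyed. }
DBGrid1.Columns.State := csDefault;
```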

Creating a customized grid


A customized grid is one for which you define persistent column objects that
describe how a column appears and how the data in the column is displayed. A
customized grid lets you configure multiple grids to present different views of the
same dataset (different column orders, different field choices, and different column
colors and fonts, for example). A customized grid also enables you to let users
modify the appearance of the grid at runtime without affecting the fields used by the
grid or the field order of the dataset.
Customized grids are best used with datasets whose structure is known at design
time. Because they expect field names established at design time to exist in the
dataset, customized grids are not well suited to browsing arbitrary tables selected at
runtime.

Understanding persistent columns


When you create persistent column objects for a grid, they are only loosely associated
with underlying fields in a grid’s dataset. Default property values for persistent
columns are dynamically fetched from a default source (the associated field or the
grid itself) until a value is assigned to the column property. Until you assign a
column property a value, its value changes as its default source changes. Once you
assign a value to a column property, it no longer changes when its default source
changes.
For example, the default source for a column title caption is an associated field’s
DisplayLabel property. If you modify the DisplayLabel property, the column title
reflects that change immediately. If you then assign a string to the column title’s
caption, the title caption becomes independent of the associated field’s DisplayLabel
property. Subsequent changes to the field’s DisplayLabel property no longer affect the
column’s title.
Persistent columns exist independently from field components with which they are
associated. In fact, persistent columns do not have to be associated with field objects
at all. If a persistent column’s FieldName property is blank, or if the field name does
not match the name of any field in the grid’s current dataset, the column’s Field
property is NULL and the column is drawn with blank cells. If you override the cell’s
default drawing method, you can display your own custom information in the blank
cells. For example, you can use a blank column to display aggregated values on the
last record of a group of records that the aggregate summarizes. Another possibility
is to display a bitmap or bar chart that graphically depicts some aspect of the record’s
data.
Two or more persistent columns can be associated with the same field in a dataset.
For example, you might display a part number field at the left and right extremes of a
wide grid to make it easier to find the part number without having to scroll the grid.
Note Because persistent columns do not have to be associated with a field in a dataset, and
because multiple columns can reference the same field, a customized grid’s
FieldCount property can be less than or equal to the grid’s column count. Also note
that if the currently selected column in a customized grid is not associated with a
field, the grid’s SelectedField property is NULL and the SelectedIndex property is –1.
Persistent columns can be configured to display grid cells as a combo box drop-down
list of lookup values from another dataset or from a static pick list, or as an ellipsis
button (…) in a cell that can be clicked upon to launch special data viewers or dialogs
related to the current cell.

Creating persistent columns


To customize the appearance of a grid at design time, you invoke the Columns editor
to create a set of persistent column objects for the grid. At runtime, the State property
for a grid with persistent column objects is automatically set to csCustomized.
To create persistent columns for a grid control,
1 Select the grid component in the form.
2 Invoke the Columns editor by double clicking on the grid’s Columns property in
the Object Inspector.
The Columns list box displays the persistent columns that have been defined for the
selected grid. When you first bring up the Columns editor, this list is empty because
the grid is in its default state, containing only dynamic columns.
You can create persistent columns for all fields in a dataset at once, or you can create
persistent columns on an individual basis. To create persistent columns for all fields:
1 Right-click the grid to invoke the context menu and choose Add All Fields. Note
that if the grid is not already associated with a data source, Add All Fields is
disabled. Associate the grid with a data source that has an active dataset before
choosing Add All Fields.
2 If the grid already contains persistent columns, a dialog box asks if you want to
delete the existing columns, or append to the column set. If you choose Yes, any
existing persistent column information is removed, and all fields in the current
dataset are inserted by field name according to their order in the dataset. If you
choose No, any existing persistent column information is retained, and new
column information, based on any additional fields in the dataset, is appended to
the column set.
3 Click Close to apply the persistent columns to the grid and close the dialog box.

To create persistent columns individually:


1 Choose the Add button in the Columns editor. The new column is selected in the
list box and is given a sequential number and default name (for example,
0 - TColumn).
2 To associate a field with this new column, set the FieldName property in the Object
Inspector.
3 To set the title for the new column, expand the Title property in the Object
Inspector and set its Caption property.
4 Close the Columns editor to apply the persistent columns to the grid and close the
dialog box.
At runtime, you can switch to persistent columns by assigning csCustomized to the
Columns.State property. Any existing columns in the grid are destroyed and new
persistent columns are built for each field in the grid’s dataset. You can then add a
persistent column at runtime by calling the Add method for the column list:
DBGrid1.Columns.Add;
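Add returns the new TColumn object, so you can configure it in the same statement. The field name and title below are hypothetical:

```pascal
with DBGrid1.Columns.Add do
begin
  FieldName := 'CustNo';          { hypothetical field name }
  Title.Caption := 'Customer #';
  Width := 64;
end;
```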

Deleting persistent columns


Deleting a persistent column from a grid is useful for eliminating fields that you do
not want to display. To remove a persistent column from a grid,
1 Double-click the grid to display the Columns editor.
2 Select the field to remove in the Columns list box.
3 Click Delete (you can also use the context menu or Del key, to remove a column).
Note If you delete all the columns from a grid, the Columns.State property reverts to
csDefault, and the grid automatically builds dynamic columns for each field in the dataset.
You can delete a persistent column at runtime by simply freeing the column object:
DBGrid1.Columns[5].Free;

Arranging the order of persistent columns


The order in which columns appear in the Columns editor is the same as the order
the columns appear in the grid. You can change the column order by dragging and
dropping columns within the Columns list box.
To change the order of a column,
1 Select the column in the Columns list box.
2 Drag it to a new location in the list box.

You can also change the column order at runtime by clicking on the column title and
dragging the column to a new position.
Note Reordering persistent fields in the Fields editor also reorders columns in a default
grid, but not a custom grid.
Important You cannot reorder columns in grids containing both dynamic columns and dynamic
fields at design time, since there is nothing persistent to record the altered field or
column order.
At runtime, a user can use the mouse to drag a column to a new location in the grid if
its DragMode property is set to dmManual. Reordering the columns of a grid whose
State property is csDefault also reorders the field components in the dataset
underlying the grid. The order of fields in the physical table is not affected. To
prevent a user from rearranging columns at runtime, set the grid’s DragMode
property to dmAutomatic.
At runtime, the grid’s OnColumnMoved event fires after a column has been moved.
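A minimal OnColumnMoved handler might report the move; the status bar component is an assumption:

```pascal
procedure TForm1.DBGrid1ColumnMoved(Sender: TObject; FromIndex,
  ToIndex: Longint);
begin
  { FromIndex and ToIndex are the column's old and new positions. }
  StatusBar1.SimpleText :=
    Format('Column moved from %d to %d', [FromIndex, ToIndex]);
end;
```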

Setting column properties at design time


Column properties determine how data is displayed in the cells of that column. Most
column properties obtain their default values from properties associated with
another component (called the default source) such as a grid or an associated field
component.
To set a column’s properties, select the column in the Columns editor and set its
properties in the Object Inspector. The following table summarizes key column
properties you can set.

Table 20.2 Column properties


Property Purpose
Alignment Left justifies, right justifies, or centers the field data in the column. Default
source: TField.Alignment.
ButtonStyle cbsAuto: (default) Displays a drop-down list if the associated field is a lookup
field, or if the column’s PickList property contains data.
cbsEllipsis: Displays an ellipsis (...) button to the right of the cell. Clicking on
the button fires the grid’s OnEditButtonClick event.
cbsNone: The column uses only the normal edit control to edit data in the
column.
Color Specifies the background color of the cells of the column. Default source:
TDBGrid.Color. (For text foreground color, see the Font property.)
DropDownRows The number of lines of text displayed by the drop-down list. Default: 7.
Expanded Specifies whether the column is expanded. Only applies to columns
representing ADT or array fields.
FieldName Specifies the field name associated with this column. This can be blank.
ReadOnly True: The data in the column cannot be edited by the user.
False: (default) The data in the column can be edited.

Width Specifies the width of the column in screen pixels. Default source:
TField.DisplayWidth.
Font Specifies the font type, size, and color used to draw text in the column.
Default source: TDBGrid.Font.
PickList Contains a list of values to display in a drop-down list in the column.
Title Sets properties for the title of the selected column.

The following table summarizes the options you can specify for the Title property.

Table 20.3 Expanded TColumn Title properties


Property Purpose
Alignment Left justifies (default), right justifies, or centers the caption text in the column title.
Caption Specifies the text to display in the column title. Default source: TField.DisplayLabel.
Color Specifies the background color used to draw the column title cell. Default source:
TDBGrid.FixedColor.
Font Specifies the font type, size, and color used to draw text in the column title. Default
source: TDBGrid.TitleFont.

Defining a lookup list column


You can create a column that displays a drop-down list of values, similar to a lookup
combo box control. To specify that the column acts like a combo box, set the column’s
ButtonStyle property to cbsAuto. Once you populate the list with values, the grid
automatically displays a combo box-like drop-down button when a cell of that
column is in edit mode.
There are two ways to populate that list with the values for users to select:
• You can fetch the values from a lookup table. To make a column display a drop-
down list of values drawn from a separate lookup table, you must define a lookup
field in the dataset. For information about creating lookup fields, see “Defining a
lookup field” on page 25-9. Once the lookup field is defined, set the column’s
FieldName to the lookup field name. The drop-down list is automatically
populated with lookup values defined by the lookup field.
• You can specify a list of values explicitly at design time. To enter the list values at
design time, double-click the PickList property for the column in the Object
Inspector. This brings up the String List editor, where you can enter the values that
populate the pick list for the column.
By default, the drop-down list displays 7 values. You can change the length of this list by
setting the DropDownRows property.
Note To restore a column with an explicit pick list to its normal behavior, delete all the text
from the pick list using the String List editor.
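You can also populate a pick list in code. The column index and list values below are hypothetical:

```pascal
{ Assumes the grid already has persistent columns (State = csCustomized). }
with DBGrid1.Columns[2] do
begin
  ButtonStyle := cbsAuto;
  PickList.Clear;
  PickList.Add('Cash');
  PickList.Add('Check');
  PickList.Add('Credit');
  DropDownRows := 3;   { show all three values without scrolling }
end;
```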

Putting a button in a column


A column can display an ellipsis button (…) to the right of the normal cell editor.
Pressing Ctrl+Enter or clicking the button fires the grid’s OnEditButtonClick event. You can use the
ellipsis button to bring up forms containing more detailed views of the data in the
column. For example, in a table that displays summaries of invoices, you could set up
an ellipsis button in the invoice total column to bring up a form that displays the
items in that invoice, or the tax calculation method, and so on. For graphic fields, you
could use the ellipsis button to bring up a form that displays an image.
To create an ellipsis button in a column:
1 Select the column in the Columns list box.
2 Set ButtonStyle to cbsEllipsis.
3 Write an OnEditButtonClick event handler.
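A sketch of such a handler follows; the detail form and field name are assumptions:

```pascal
procedure TForm1.DBGrid1EditButtonClick(Sender: TObject);
begin
  { Show a detail view only for the hypothetical invoice total column. }
  if (DBGrid1.SelectedField <> nil) and
     (DBGrid1.SelectedField.FieldName = 'ItemsTotal') then
    InvoiceDetailForm.ShowModal;
end;
```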

Restoring default values to a column


At runtime you can test a column’s AssignedValues property to determine whether a
column property has been explicitly assigned. Values that are not explicitly defined
are dynamically based on the associated field or the grid’s defaults.
You can undo property changes made to one or more columns. In the Columns
editor, select the column or columns to restore, and then select Restore Defaults from
the context menu. Restore defaults discards assigned property settings and restores a
column’s properties to those derived from its underlying field component.
At runtime, you can reset all default properties for a single column by calling the
column’s RestoreDefaults method. You can also reset default properties for all
columns in a grid by calling the column list’s RestoreDefaults method:
DBGrid1.Columns.RestoreDefaults;

Displaying ADT and array fields


Sometimes the fields of the grid’s dataset do not represent simple values such as text,
graphics, numerical values, and so on. Some database servers allow fields that are a
composite of simpler data types, such as ADT fields or array fields.
There are two ways a grid can display composite fields:
• It can “flatten out” the field so that each of the simpler types that make up the field
appears as a separate field in the dataset. When a composite field is flattened out,
its constituents appear as separate fields that reflect their common source only in
that each field name is preceded by the name of the common parent field in the
underlying database table.
To display composite fields as if they were flattened out, set the dataset’s
ObjectView property to False. The dataset stores composite fields as a set of
separate fields, and the grid reflects this by assigning each constituent part a
separate column.

• It can display composite fields in a single column, reflecting the fact that they are a
single field. When displaying composite fields in a single column, the column can
be expanded and collapsed by clicking on the arrow in the title bar of the field, or
by setting the Expanded property of the column:
• When a column is expanded, each child field appears in its own sub-column
with a title bar that appears below the title bar of the parent field. That is, the
title bar for the grid increases in height, with the first row giving the name of
the composite field, and the second row subdividing that for the individual
parts. Fields that are not composites appear with title bars that are extra high.
This expansion continues for constituents that are in turn composite fields (for
example, a detail table nested in a detail table), with the title bar growing in
height accordingly.
• When the field is collapsed, only one column appears with an uneditable
comma delimited string containing the child fields.
To display a composite field in an expanding and collapsing column, set the
dataset’s ObjectView property to True. The dataset stores the composite field as a
single field component that contains a set of nested sub-fields. The grid reflects
this in a column that can expand or collapse.
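For example, assuming a dataset named Table1 and a grid named DBGrid1, you might switch to object mode and collapse a composite column in code:

```pascal
Table1.ObjectView := True;             { store composite fields as nested objects }
DBGrid1.Columns[1].Expanded := False;  { collapse the (hypothetical) ADT column }
```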
Figure 20.2 shows a grid with an ADT field and an array field. The dataset’s
ObjectView property is set to False so that each child field has a column.
Figure 20.2 TDBGrid control with ObjectView set to False

Figure 20.3 and 20.4 show the grid with an ADT field and an array field. Figure 20.3
shows the fields collapsed. In this state they cannot be edited. Figure 20.4 shows the
fields expanded. The fields are expanded and collapsed by clicking on the arrow in
the field’s title bar.
Figure 20.3 TDBGrid control with Expanded set to False

Figure 20.4 TDBGrid control with Expanded set to True



The following table lists the properties that affect the way ADT and array fields
appear in a TDBGrid:

Table 20.4 Properties that affect the way composite fields appear
Property Object Purpose
Expandable TColumn Indicates whether the column can be expanded to show child fields
in separate, editable columns. (read-only)
Expanded TColumn Specifies whether the column is expanded.
MaxTitleRows TDBGrid Specifies the maximum number of title rows that can appear in the
grid.
ObjectView TDataSet Specifies whether fields are displayed flattened out, or in object
mode, where each object field can be expanded and collapsed.
ParentColumn TColumn Refers to the TColumn object that owns the child field’s column.

Note In addition to ADT and array fields, some datasets include fields that refer to another
dataset (dataset fields) or a record in another dataset (reference) fields. Data-aware
grids display such fields as “(DataSet)” or “(Reference)”, respectively. At runtime an
ellipsis button appears to the right. Clicking on the ellipsis brings up a new form with
a grid displaying the contents of the field. For dataset fields, this grid displays the
dataset that is the field’s value. For reference fields, this grid contains a single row
that displays the record from another dataset.

Setting grid options


You can use the grid Options property at design time to control basic grid behavior
and appearance at runtime. When a grid component is first placed on a form at
design time, the Options property in the Object Inspector is displayed with a + (plus)
sign to indicate that the Options property can be expanded to display a series of
Boolean properties that you can set individually. To view and set these properties,
click on the + sign. The list of options appears in the Object Inspector below the
Options property. The + sign changes to a – (minus) sign; clicking it collapses the
list.

The following table lists the Options properties that can be set, and describes how
they affect the grid at runtime.

Table 20.5 Expanded TDBGrid Options properties


Option Purpose
dgEditing True: (Default). Enables editing, inserting, and deleting records in the
grid.
False: Disables editing, inserting, and deleting records in the grid.
dgAlwaysShowEditor True: When a field is selected, it is in Edit state.
False: (Default). A field is not automatically in Edit state when
selected.
dgTitles True: (Default). Displays field names across the top of the grid.
False: Field name display is turned off.
dgIndicator True: (Default). The indicator column is displayed at the left of the
grid, and the current record indicator (an arrow at the left of the grid)
is activated to show the current record. On insert, the arrow becomes
an asterisk. On edit, the arrow becomes an I-beam.
False: The indicator column is turned off.
dgColumnResize True: (Default). Columns can be resized by dragging the column rulers
in the title area. Resizing changes the corresponding width of the
underlying TField component.
False: Columns cannot be resized in the grid.
dgColLines True: (Default). Displays vertical dividing lines between columns.
False: Does not display dividing lines between columns.
dgRowLines True: (Default). Displays horizontal dividing lines between records.
False: Does not display dividing lines between records.
dgTabs True: (Default). Enables tabbing between fields in records.
False: Tabbing exits the grid control.
dgRowSelect True: The selection bar spans the entire width of the grid.
False: (Default). Selecting a field in a record selects only that field.
dgAlwaysShowSelection True: (Default). The selection bar in the grid is always visible, even if
another control has focus.
False: The selection bar in the grid is only visible when the grid has
focus.
dgConfirmDelete True: (Default). Prompt for confirmation to delete records (Ctrl+Del).
False: Delete records without confirmation.
dgCancelOnExit True: (Default). Cancels a pending insert when focus leaves the grid.
This option prevents inadvertent posting of partial or blank records.
False: Permits pending inserts.
dgMultiSelect True: Allows user to select noncontiguous rows in the grid using
Ctrl+Shift or Shift+ arrow keys.
False: (Default). Does not allow user to multi-select rows.
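Options is a set, so you can add and remove individual values at runtime with set arithmetic. For example, to turn a grid into a read-only, row-oriented browser (the grid name is an assumption):

```pascal
DBGrid1.Options := DBGrid1.Options + [dgRowSelect, dgMultiSelect]
  - [dgEditing, dgAlwaysShowEditor];
```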

Editing in the grid


At runtime, you can use a grid to modify existing data and enter new records, if the
following default conditions are met:
• The CanModify property of the Dataset is True.
• The ReadOnly property of the grid is False.
When a user edits a record in the grid, changes to each field are written to an internal
record buffer, but are not posted until the user moves to a different record in the grid.
Even if focus changes to another control on the form, the grid does not post changes
until the cursor for the dataset moves to another record. When a record is
posted, the dataset checks all associated data-aware components for a change in
status. If there is a problem updating any fields that contain modified data, the grid
raises an exception, and does not modify the record.
Note If your application caches updates, posting record changes only adds them to an
internal cache. They are not posted back to the underlying database table until your
application applies the updates.
You can cancel all edits for a record by pressing Esc in any field before moving to
another record.

Controlling grid drawing


Your first level of control over how a grid control draws itself is setting column
properties. The grid automatically uses the font, color, and alignment properties of a
column to draw the cells of that column. The text of data fields is drawn using the
DisplayFormat or EditFormat properties of the field component associated with the
column.
You can augment the default grid display logic with code in a grid’s
OnDrawColumnCell event. If the grid’s DefaultDrawing property is True, all the
normal drawing is performed before your OnDrawColumnCell event handler is
called. Your code can then draw on top of the default display. This is primarily useful
when you have defined a blank persistent column and want to draw special graphics
in that column’s cells.
If you want to replace the drawing logic of the grid entirely, set DefaultDrawing to
False and place your drawing code in the grid’s OnDrawColumnCell event. If you
want to replace the drawing logic only in certain columns or for certain field data
types, you can call the DefaultDrawColumnCell inside your OnDrawColumnCell event
handler to have the grid use its normal drawing code for selected columns. This
reduces the amount of work you have to do if you only want to change the way
Boolean field types are drawn, for example.
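The following sketch assumes a Boolean field named Paid; it draws that column's cells itself and falls back to DefaultDrawColumnCell everywhere else:

```pascal
procedure TForm1.DBGrid1DrawColumnCell(Sender: TObject; const Rect: TRect;
  DataCol: Integer; Column: TColumn; State: TGridDrawState);
begin
  if (Column.Field <> nil) and (Column.Field.FieldName = 'Paid') then
  begin
    { Custom drawing for the hypothetical Boolean column. }
    DBGrid1.Canvas.FillRect(Rect);
    if Column.Field.AsBoolean then
      DBGrid1.Canvas.TextOut(Rect.Left + 2, Rect.Top + 2, 'Yes')
    else
      DBGrid1.Canvas.TextOut(Rect.Left + 2, Rect.Top + 2, 'No');
  end
  else
    DBGrid1.DefaultDrawColumnCell(Rect, DataCol, Column, State);
end;
```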

Responding to user actions at runtime


You can modify grid behavior by writing event handlers to respond to specific
actions within the grid at runtime. Because a grid typically displays many fields and
records at once, you may have very specific needs to respond to changes to
individual columns. For example, you might want to activate and deactivate a button
elsewhere on the form every time a user enters and exits a specific column.
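For example, the following handlers enable a lookup button only while a particular column is active; the button and column name are assumptions:

```pascal
procedure TForm1.DBGrid1ColEnter(Sender: TObject);
begin
  LookupBtn.Enabled := (DBGrid1.SelectedField <> nil) and
    (DBGrid1.SelectedField.FieldName = 'City');
end;

procedure TForm1.DBGrid1ColExit(Sender: TObject);
begin
  LookupBtn.Enabled := False;
end;
```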
The following table lists the grid events available in the Object Inspector.

Table 20.6 Grid control events


Event Purpose
OnCellClick Occurs when a user clicks on a cell in the grid.
OnColEnter Occurs when a user moves into a column on the grid.
OnColExit Occurs when a user leaves a column on the grid.
OnColumnMoved Occurs when the user moves a column to a new location.
OnDblClick Occurs when a user double clicks in the grid.
OnDragDrop Occurs when a user drags and drops in the grid.
OnDragOver Occurs when a user drags over the grid.
OnDrawColumnCell Occurs when application needs to draw individual cells.
OnDrawDataCell (obsolete) Occurs when application needs to draw individual cells if State
is csDefault.
OnEditButtonClick Occurs when the user clicks on an ellipsis button in a column.
OnEndDrag Occurs when a user stops dragging on the grid.
OnEnter Occurs when the grid gets focus.
OnExit Occurs when the grid loses focus.
OnKeyDown Occurs when a user presses any key or key combination on the keyboard
when in the grid.
OnKeyPress Occurs when a user presses a single alphanumeric key on the keyboard
when in the grid.
OnKeyUp Occurs when a user releases a key when in the grid.
OnStartDrag Occurs when a user starts dragging on the grid.
OnTitleClick Occurs when a user clicks the title for a column.

There are many uses for these events. For example, you might write a handler for the
OnDblClick event that pops up a list from which a user can choose a value to enter in
a column. Such a handler would use the SelectedField property to determine the
current row and column.

Creating a grid that contains other data-aware controls


A TDBCtrlGrid control displays multiple fields in multiple records in a tabular grid
format. Each cell in a grid displays multiple fields from a single row. To use a
database control grid:
1 Place a database control grid on a form.
2 Set the grid’s DataSource property to the name of a data source.
3 Place individual data controls within the design cell for the grid. The design cell
for the grid is the top or leftmost cell in the grid, and is the only cell into which you
can place other controls.
4 Set the DataField property for each data control to the name of a field. The data
source for these data controls is already set to the data source of the database
control grid.
5 Arrange the controls within the cell as desired.
When you compile and run an application containing a database control grid, the
arrangement of data controls you set in the design cell is replicated at runtime in each
cell of the grid. Each cell displays a different record in a dataset.
Figure 20.5 TDBCtrlGrid at design time

The following table summarizes some of the unique properties for database control
grids that you can set at design time:

Table 20.7 Selected database control grid properties


Property Purpose
AllowDelete True (default): Permits record deletion.
False: Prevents record deletion.
AllowInsert True (default): Permits record insertion.
False: Prevents record insertion.
ColCount Sets the number of columns in the grid. Default = 1.
Orientation goVertical (default): Display records from top to bottom.
goHorizontal: Displays records from left to right.
PanelHeight Sets the height for an individual panel. Default = 72.
PanelWidth Sets the width for an individual panel. Default = 200.
RowCount Sets the number of panels to display. Default = 3.
ShowFocus True (default): Displays a focus rectangle around the current record’s panel at
runtime.
False: Does not display a focus rectangle.

For more information about database control grid properties and methods, see the
online VCL Reference.

Navigating and manipulating records


TDBNavigator provides users a simple control for navigating through records in a
dataset, and for manipulating records. The navigator consists of a series of buttons
that enable a user to scroll forward or backward through records one at a time, go to
the first record, go to the last record, insert a new record, update an existing record,
post data changes, cancel data changes, delete a record, and refresh record display.
Figure 20.6 shows the navigator that appears by default when you place it on a form
at design time. The navigator consists of a series of buttons that let a user navigate
from one record to another in a dataset, and edit, delete, insert, and post records. The
VisibleButtons property of the navigator enables you to hide or show a subset of these
buttons dynamically.
Figure 20.6 Buttons on the TDBNavigator control
(From left to right: First record, Prior record, Next record, Last record, Insert record, Delete current record, Edit current record, Post record edits, Cancel record edits, Refresh records.)


The following table describes the buttons on the navigator.

Table 20.8 TDBNavigator buttons


Button Purpose
First Calls the dataset’s First method to set the current record to the first record.
Prior Calls the dataset’s Prior method to set the current record to the previous record.
Next Calls the dataset’s Next method to set the current record to the next record.
Last Calls the dataset’s Last method to set the current record to the last record.
Insert Calls the dataset’s Insert method to insert a new record before the current record, and puts the dataset in Insert state.
Delete Deletes the current record. If the ConfirmDelete property is True it prompts for
confirmation before deleting.
Edit Puts the dataset in Edit state so that the current record can be modified.
Post Writes changes in the current record to the database.
Cancel Cancels edits to the current record, and returns the dataset to Browse state.
Refresh Clears data control display buffers, then refreshes its buffers from the physical table or
query. Useful if the underlying data may have been changed by another application.
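Because each button simply calls the corresponding dataset method, the same actions can be performed without a navigator. A sketch, assuming a dataset named CustTable as in the earlier examples:

```delphi
procedure TForm1.UpdateFirstRecord;
begin
  CustTable.First;     // same as the First button
  CustTable.Edit;      // Edit button: puts the dataset in dsEdit state
  // ... modify field values here ...
  CustTable.Post;      // Post button: writes the changes to the database
  CustTable.Refresh;   // Refresh button: rereads data from the source
end;
```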

Choosing navigator buttons to display


When you first place a TDBNavigator on a form at design time, all its buttons are
visible. You can use the VisibleButtons property to turn off buttons you do not want to
use on a form. For example, when working with a unidirectional dataset, only the
First, Next, and Refresh buttons are meaningful. On a form that is intended for
browsing rather than editing, you might want to disable the Edit, Insert, Delete, Post,
and Cancel buttons.
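For example, a browse-only form might show just the navigation and refresh buttons. A sketch (DBNavigator1 is an assumed component name):

```delphi
procedure TForm1.FormCreate(Sender: TObject);
begin
  // Hide all editing buttons; leave navigation and refresh visible
  DBNavigator1.VisibleButtons := [nbFirst, nbPrior, nbNext, nbLast, nbRefresh];
end;
```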

Hiding and showing navigator buttons at design time


The VisibleButtons property in the Object Inspector is displayed with a + sign to
indicate that it can be expanded to display a Boolean value for each button on the
navigator. To view and set these values, click on the + sign. The list of buttons that
can be turned on or off appears in the Object Inspector below the VisibleButtons
property. The + sign changes to a – (minus) sign, which you can click to collapse the
list of properties.
Button visibility is indicated by the Boolean state of the button value. If a value is set
to True, the button appears in the TDBNavigator. If False, the button is removed from
the navigator at design time and runtime.
Note As button values are set to False, they are removed from the TDBNavigator on the
form, and the remaining buttons are expanded in width to fill the control. You can
drag the control’s handles to resize the buttons.


Hiding and showing navigator buttons at runtime


At runtime you can hide or show navigator buttons in response to user actions or
application states. For example, suppose you provide a single navigator for
navigating through two different datasets, one of which permits users to edit records,
and the other of which is read-only. When you switch between datasets, you want to
hide the navigator’s Insert, Delete, Edit, Post, Cancel, and Refresh buttons for the read-
only dataset, and show them for the other dataset.
For example, suppose you want to prevent edits to the CustomersTable by hiding the Insert, Delete, Edit, Post, Cancel, and Refresh buttons on the navigator, but also want to allow editing for the OrdersTable. The VisibleButtons property controls which buttons are displayed in the navigator. Here’s one way you might code the event handler:
procedure TForm1.CustomerCompanyEnter(Sender: TObject);
begin
  if Sender = CustomerCompany then
  begin
    DBNavigatorAll.DataSource := CustomerCompany.DataSource;
    DBNavigatorAll.VisibleButtons := [nbFirst, nbPrior, nbNext, nbLast];
  end
  else
  begin
    DBNavigatorAll.DataSource := OrderNum.DataSource;
    DBNavigatorAll.VisibleButtons := DBNavigatorAll.VisibleButtons +
      [nbInsert, nbDelete, nbEdit, nbPost, nbCancel, nbRefresh];
  end;
end;

Displaying fly-over help


To display fly-over help for each navigator button at runtime, set the navigator’s ShowHint property to True. When ShowHint is True, the navigator displays fly-over Help hints whenever you pass the mouse cursor over the navigator buttons. ShowHint is False by default.
The Hints property controls the fly-over help text for each button. By default Hints is
an empty string list. When Hints is empty, each navigator button displays default
help text. To provide customized fly-over help for the navigator buttons, use the
String list editor to enter a separate line of hint text for each button in the Hints
property. When present, the strings you provide override the default hints provided
by the navigator control.
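Hints can also be assigned in code; the strings are applied to the buttons in order, starting with the First button. A sketch with an assumed component name:

```delphi
procedure TForm1.FormCreate(Sender: TObject);
begin
  DBNavigator1.ShowHint := True;   // False by default
  // One line of hint text per button, in button order
  DBNavigator1.Hints.Clear;
  DBNavigator1.Hints.Add('Go to first customer');
  DBNavigator1.Hints.Add('Go to previous customer');
  DBNavigator1.Hints.Add('Go to next customer');
  DBNavigator1.Hints.Add('Go to last customer');
end;
```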


Using a single navigator for multiple datasets


As with other data-aware controls, a navigator’s DataSource property specifies the
data source that links the control to a dataset. By changing a navigator’s DataSource
property at runtime, a single navigator can provide record navigation and
manipulation for multiple datasets.
Suppose a form contains two edit controls linked to the CustomersTable and
OrdersTable datasets through the CustomersSource and OrdersSource data sources
respectively. When a user enters the edit control connected to CustomersSource, the
navigator should also use CustomersSource, and when the user enters the edit control
connected to OrdersSource, the navigator should switch to OrdersSource as well. You
can code an OnEnter event handler for one of the edit controls, and then share that
event with the other edit control. For example:
procedure TForm1.CustomerCompanyEnter(Sender: TObject);
begin
  if Sender = CustomerCompany then
    DBNavigatorAll.DataSource := CustomerCompany.DataSource
  else
    DBNavigatorAll.DataSource := OrderNum.DataSource;
end;



Chapter 21

Creating reports with Rave Reports
This chapter provides an overview of using Rave Reports from Nevrona Designs to
generate reports within a Delphi application. Additional documentation for Rave
Reports is included in the Delphi installation directory, as described in “Getting more
information” on page 21-6.
Note: Rave Reports is automatically installed with the Professional and Enterprise editions
of Delphi.

Overview
Rave Reports is a component-based visual report design tool that simplifies the
process of adding reports to an application. You can use Rave Reports to create a
variety of reports, from simple banded reports to more complex, highly customized
reports. Report features include:
• Word wrapped memos
• Full graphics
• Justification
• Precise page positioning
• Printer configuration
• Font control
• Print preview
• Reuse of report content
• PDF, HTML, RTF, and text report renditions


Getting started
You can use Rave Reports in both VCL and CLX applications to generate reports
from database and non-database data. The following procedure explains how to add
a simple report to an existing database application.
1 Open a database application in Delphi.
2 From the Rave page of the Component palette, add the TRvDataSetConnection
component to a form in the application.
3 In the Object Inspector, set the DataSet property to a dataset component that is
already defined in your application.
4 Use the Rave Visual Designer to design your report and create a report project file
(.rav file).
a Choose Tools|Rave Designer to launch the Rave Visual Designer.
b Choose File|New Data Object to display the Data Connections dialog box.
c In the Data Object Type list, select Direct Data View and click Next.
d In the Active Data Connections list, select RVDataSetConnection1 and click
Finish.
In the Project Tree on the left side of the Rave Visual Designer window, expand
the Data View Dictionary node, then expand the newly created DataView1
node. Your application data fields are displayed under the DataView1 node.
e Choose Tools|Report Wizards|Simple Table to display the Simple Table
wizard.
f Select DataView1 and click Next.
g Select two or three fields that you want to display in the report and click Next.
h Follow the prompts on the subsequent wizard pages to set the order of the
fields, margins, heading text, and fonts to be used in the report.
i On the final wizard page, click Generate to complete the wizard and display the
report in the Page Designer.
j Choose File|Save as to display the Save As dialog box. Navigate to the
directory in which your Delphi application is located and save the Rave project
file as MyRave.rav.
k Minimize the Rave Visual Designer window and return to Delphi.
5 From the Rave page of the Component palette, add the Rave project component,
TRvProject, to the form.
6 In the Object Inspector, set the ProjectFile property to the report project file
(MyRave.rav) that you created in step j.

7 From the Standard page of the Component palette, add the TButton component.
8 In the Object Inspector, click the Events tab and double-click the OnClick event.
9 Write an event handler that uses the ExecuteReport method to execute the Rave
project component.
10 Press F9 to run the application.
11 Click the button that you added in step 7.
12 The Output Options dialog box is displayed. Click OK to display the report.
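The event handler in step 9 can be as simple as the following sketch (the button and project component names, and the report name 'Report1', are assumptions; ExecuteReport takes the name of a report defined in the .rav project):

```delphi
procedure TForm1.Button1Click(Sender: TObject);
begin
  // Run the report from the project file assigned to ProjectFile
  RvProject1.ExecuteReport('Report1');
end;
```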
For more information on using the Rave Visual Designer, use the Help menu or see
the Rave Reports documentation listed in “Getting more information” on page 21-6.

The Rave Visual Designer


To launch the Rave Visual Designer, do one of the following:
• Choose Tools|Rave Designer.
• Double-click a TRvProject component on a form.
• Right-click a TRvProject component on a form, and choose Rave Visual Designer.
The main areas of the Rave Visual Designer window are:
• The Property Panel, used to set the properties, methods, and events for the selected component.
• The component toolbars, used to add components to the Page Designer (click a toolbar button and then click the grid).
• The editor toolbars, used to change the report project or components.
• The Page Designer, used to lay out your report by adding components from the toolbars.
• The Project Tree, used to display and navigate the structure of the report project.

For detailed information on using the Rave Visual Designer, use the Help menu or
see the Rave Reports documentation listed in “Getting more information” on
page 21-6.


Component overview
This section provides an overview of the Rave Reports components. For detailed
component information, see the documentation listed in “Getting more information”
on page 21-6.

VCL/CLX components
The VCL/CLX components are non-visual components that you add to a form in
your VCL or CLX application. They are available on the Rave page of the Component
palette. There are four categories of components: engine, render, data connection and
Rave project.

Engine components
The Engine components are used to generate reports. Reports can be generated from
a pre-defined visual definition (using the Engine property of TRvProject) or by
making calls to the Rave code-based API library from within the OnPrint event. The
engine components are:
TRvNDRWriter
TRvSystem

Render components
The Render components are used to convert an NDR file (Rave snapshot report file)
or a stream generated from TRvNDRWriter to a variety of formats. Rendering can be
done programmatically or added to the standard setup and preview dialogs of
TRvSystem by dropping a render component on an active form or data module
within your application. The render components are:
TRvRenderPreview TRvRenderPrinter
TRvRenderPDF TRvRenderHTML
TRvRenderRTF TRvRenderText

Data connection components


The Data Connection components provide the link between application data and the
Direct Data Views in visually designed Rave reports. The data connection
components are:
TRvCustomConnection TRvDataSetConnection
TRvTableConnection TRvQueryConnection


Rave project component


The TRvProject component interfaces with and executes visually designed Rave reports within an application. Normally a TRvSystem component is assigned to the Engine property. The report project file (.rav) should be specified in the ProjectFile property or loaded into the DFM using the StoreRAV property. Project parameters can be set using the SetParam method, and reports can be executed using the ExecuteReport method.
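Putting these pieces together, a hedged sketch of driving a report entirely from code (the component, parameter, and report names are assumptions):

```delphi
procedure TForm1.PrintInvoice;
begin
  RvProject1.ProjectFile := 'MyRave.rav';
  // Pass a value to a parameter defined in the Rave project
  RvProject1.SetParam('CustomerName', 'ACME Corp.');
  RvProject1.ExecuteReport('InvoiceReport');
end;
```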

Reporting components
The following components are available in the Rave Visual Designer.

Project components
The Project toolbar provides the essential building blocks for all reports. The project
components are:
TRavePage
TRaveProjectManager
TRaveReport

Data objects
Data objects connect to data or control access to reports from the Rave Reporting
Server. The File|New Data Object menu command displays the Data Connections
dialog box, which you can use to create each of the data objects. The data object
components are:
TRaveDatabase TRaveDriverDataView TRaveSimpleSecurity
TRaveDirectDataView TRaveLookupSecurity

Standard components
The Standard toolbar provides components that are frequently used when designing
reports. The standard components are:
TRaveBitmap TRaveMetaFile TRaveText
TRaveFontMaster TRavePageNumInit
TRaveMemo TRaveSection

Drawing components
The Drawing toolbar provides components to create lines and shapes in a report. To
color and style the components, use the Fills, Lines, and Colors toolbars. The drawing
components are:
TRaveCircle TRaveLine TRaveVLine
TRaveEllipse TRaveRectangle
TRaveHLine TRaveSquare


Report components
The Report toolbar provides the components that are used most often in data-aware reports.
The report components are:
Band Style Editor
DataText Editor
TRaveBand
TRaveCalcController
TRaveCalcOp
TRaveCalcText
TRaveCalcTotal
TRaveDataBand
TRaveDataCycle
TRaveDataMemo
TRaveDataMirrorSection
TRaveDataText
TRaveRegion

Bar code components


The Bar Code toolbar provides components for creating different types of bar codes in a report. The bar code components are:
TRaveCode128BarCode
TRaveCode39BarCode
TRaveEANBarCode
TRaveI2of5BarCode
TRavePostNetBarCode
TRaveUPCBarCode

Getting more information


Delphi includes the following Nevrona Designs documentation for Rave Reports.

Table 21.1 Rave Reports documentation


Rave Visual Designer Manual for Reference and Learning
  Provides detailed information about using the Rave Visual Designer to create reports.
Rave Tutorial and Reference
  Provides step-by-step instructions on using the Rave Reports components and includes a reference of classes, components, and units.
Rave Application Interface Technology Specification
  Explains how to create custom Rave Reports components, property editors, component editors, and project editors, and how to control the Rave environment.

These books are distributed as PDF files on the Delphi installation CD.
Most of the information in the PDF files is also available in the online Help. To
display online Help for a Rave Reports component on a form, select the component
and press F1. To display online Help for the Rave Visual Designer, use the Help
menu.



Chapter 22

Using decision support components
The decision support components help you create cross-tabulated, or crosstab, tables and graphs. You can then use these tables and graphs to view and summarize data from different perspectives. For more information on cross-tabulated data, see "About crosstabs" on page 22-2.

Overview
The decision support components appear on the Decision Cube page of the
Component palette:
• The decision cube, TDecisionCube, is a multidimensional data store.
• The decision source, TDecisionSource, defines the current pivot state of a decision
grid or a decision graph.
• The decision query, TDecisionQuery, is a specialized form of TQuery used to define
the data in a decision cube.
• The decision pivot, TDecisionPivot, lets you open or close decision cube
dimensions, or fields, by pressing buttons.
• The decision grid, TDecisionGrid, displays single- and multidimensional data in
table form.
• The decision graph, TDecisionGraph, displays fields from a decision grid as a
dynamic graph that changes when data dimensions are modified.


Figure 22.1 shows all the decision support components placed on a form at design
time.
Figure 22.1 Decision support components at design time
(The figure shows a decision query, decision cube, decision source, decision pivot, decision grid, and decision graph arranged on a form.)

About crosstabs
Cross-tabulations, or crosstabs, are a way of presenting subsets of data so that
relationships and trends are more visible. Table fields become the dimensions of the
crosstab while field values define categories and summaries within a dimension.
You can use the decision support components to set up crosstabs in forms.
TDecisionGrid shows data in a table, while TDecisionGraph charts it graphically.
TDecisionPivot has buttons that make it easier to display and hide dimensions and
move them between columns and rows.
Crosstabs can be one-dimensional or multidimensional.


One-dimensional crosstabs
One-dimensional crosstabs show a summary row (or column) for the categories of a
single dimension. For example, if Payment is the chosen column dimension and
Amount Paid is the summary category, the crosstab in Figure 22.2 shows the amount
paid using each method.
Figure 22.2 One-dimensional crosstab

Multidimensional crosstabs
Multidimensional crosstabs use additional dimensions for the rows and/or columns.
For example, a two-dimensional crosstab could show amounts paid by payment
method for each country.
A three-dimensional crosstab could show amounts paid by payment method and
terms by country, as shown in Figure 22.3.
Figure 22.3 Three-dimensional crosstab


Guidelines for using decision support components


The decision support components listed on page 22-1 can be used together to present
multidimensional data as tables and graphs. More than one grid or graph can be
attached to each dataset. More than one instance of TDecisionPivot can be used to
display the data from different perspectives at runtime.
To create a form with tables and graphs of multidimensional data, follow these steps:
1 Create a form.
2 Add these components to the form and use the Object Inspector to bind them as
indicated:
• A dataset, usually TDecisionQuery (for details, see “Creating decision datasets
with the Decision Query editor” on page 22-6) or TQuery
• A decision cube, TDecisionCube, bound to the dataset by setting its DataSet
property to the dataset’s name
• A decision source, TDecisionSource, bound to the decision cube by setting its
DecisionCube property to the decision cube’s name
3 Add a decision pivot, TDecisionPivot, and bind it to the decision source with the
Object Inspector by setting its DecisionSource property to the appropriate decision
source name. The decision pivot is optional but useful; it lets the form developer
and end users change the dimensions displayed in decision grids or decision
graphs by pushing buttons.
In its default orientation, horizontal, buttons on the left side of the decision pivot
apply to fields on the left side of the decision grid (rows); buttons on the right side
apply to fields at the top of the decision grid (columns).
You can determine where the decision pivot’s buttons appear by setting its
GroupLayout property to xtVertical, xtLeftTop, or xtHorizontal (the default). For
more information on decision pivot properties, see “Using decision pivots” on
page 22-10.
4 Add one or more decision grids and graphs, bound to the decision source. For
details, see “Creating and using decision grids” on page 22-11 and “Creating and
using decision graphs” on page 22-13.
5 Use the Decision Query editor or SQL property of TDecisionQuery (or TQuery) to
specify the tables, fields, and summaries to display in the grid or graph. The last
field of the SQL SELECT should be the summary field. The other fields in the
SELECT must be GROUP BY fields. For instructions, see “Creating decision
datasets with the Decision Query editor” on page 22-6.
6 Set the Active property of the decision query (or alternate dataset component) to
True.
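The design-time bindings in steps 2, 3, and 6 can equally be made in code; a sketch with assumed component names:

```delphi
procedure TForm1.FormCreate(Sender: TObject);
begin
  DecisionCube1.DataSet := DecisionQuery1;           // step 2
  DecisionSource1.DecisionCube := DecisionCube1;     // step 2
  DecisionPivot1.DecisionSource := DecisionSource1;  // step 3
  DecisionQuery1.Active := True;                     // step 6
end;
```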


7 Use the decision grid and graph to show and chart different data dimensions. See
“Using decision grids” on page 22-11 and “Using decision graphs” on page 22-14
for instructions and suggestions.
For an illustration of all decision support components on a form, see Figure 22.1 on
page 22-2.

Using datasets with decision support components


The only decision support component that binds directly to a dataset is the decision
cube, TDecisionCube. TDecisionCube expects to receive data with groups and
summaries defined by an SQL statement of an acceptable format. The GROUP BY
phrase must contain the same non-summarized fields (and in the same order) as the
SELECT phrase, and summary fields must be identified.
The decision query component, TDecisionQuery, is a specialized form of TQuery. You
can use TDecisionQuery to more simply define the setup of dimensions (rows and
columns) and summary values used to supply data to decision cubes
(TDecisionCube). You can also use an ordinary TQuery or other BDE-enabled dataset
as a dataset for TDecisionCube, but the correct setup of the dataset and TDecisionCube
are then the responsibility of the designer.
To work correctly with the decision cube, all projected fields in the dataset must
either be dimensions or summaries. The summaries should be additive values (like
sum or count), and should represent totals for each combination of dimension values.
For maximum ease of setup, sums should be named “Sum...” in the dataset while
counts should be named “Count...”.
The Decision Cube can pivot, subtotal, and drill-in correctly only for summaries
whose cells are additive. (SUM and COUNT are additive, while AVERAGE, MAX,
and MIN are not.) Build pivoting crosstab displays only for grids that contain only
additive aggregators. If you are using non-additive aggregators, use a static decision
grid that does not pivot, drill, or subtotal.
Since averages can be calculated using SUM divided by COUNT, a pivoting average
is added automatically when SUM and COUNT dimensions for a field are included
in the dataset. Use this type of average in preference to an average calculated using
an AVERAGE statement.
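For example, a query that supports a pivoting average of AmountPaid would include matching SUM and COUNT selectors; the "Sum..." and "Count..." aliases let the decision cube pair them automatically (a sketch based on the ORDERS table used elsewhere in this chapter):

```sql
SELECT ORDERS."Terms",
       SUM( ORDERS."AmountPaid" ) SumAmountPaid,
       COUNT( ORDERS."AmountPaid" ) CountAmountPaid
FROM "ORDERS.DB" ORDERS
GROUP BY ORDERS."Terms"
```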
Averages can also be calculated using COUNT(*). To use COUNT(*) to calculate
averages, include a "COUNT(*) COUNTALL" selector in the query. If you use
COUNT(*) to calculate averages, the single aggregator can be used for all fields. Use
COUNT(*) only in cases where none of the fields being summarized include blank
values, or where a COUNT aggregator is not available for every field.


Creating decision datasets with TQuery or TTable


If you use an ordinary TQuery component as a decision dataset, you must manually
set up the SQL statement, taking care to supply a GROUP BY phrase which contains
the same fields (and in the same order) as the SELECT phrase.
The SQL should look similar to this:
SELECT ORDERS."Terms", ORDERS."ShipVIA",
ORDERS."PaymentMethod", SUM( ORDERS."AmountPaid" )
FROM "ORDERS.DB" ORDERS
GROUP BY ORDERS."Terms", ORDERS."ShipVIA", ORDERS."PaymentMethod"
The ordering of the SELECT fields should match the ordering of the GROUP BY
fields.
With TTable, you must supply information to the decision cube about which fields are grouping fields and which are summaries. To do this, fill in the dimension type for each field in the DimensionMap of the decision cube. You must indicate whether each field is a dimension or a summary and, if a summary, you must provide the summary type. Since pivoting averages depend on SUM/COUNT calculations, you must also provide the base field name to allow the decision cube to match pairs of SUM and COUNT aggregators.

Creating decision datasets with the Decision Query editor


All data used by the decision support components passes through the decision cube,
which accepts a specially formatted dataset most easily produced by an SQL query.
See “Using datasets with decision support components” on page 22-5 for more
information.
While both TTable and TQuery can be used as decision datasets, it is easier to use
TDecisionQuery; the Decision Query editor supplied with it can be used to specify
tables, fields, and summaries to appear in the decision cube and will help you set up
the SELECT and GROUP BY portions of the SQL correctly.
To use the Decision Query editor:
1 Select the decision query component on the form, then right-click and choose
Decision Query editor. The Decision Query editor dialog box appears.
2 Choose the database to use.
3 For single-table queries, click the Select Table button.
For more complex queries involving multi-table joins, click the Query Builder
button to display the SQL Builder or type the SQL statement into the edit box on
the SQL tab page.
4 Return to the Decision Query editor dialog box.


5 In the Decision Query editor dialog box, select fields in the Available Fields list box and assign them to be either Dimensions or Summaries by clicking the appropriate right arrow button. As you add fields to the Summaries list, select the type of summary to use from the menu displayed: sum, count, or average.
6 By default, all fields and summaries defined in the SQL property of the decision
query appear in the Active Dimensions and Active Summaries list boxes. To
remove a dimension or summary, select it in the list and click the left arrow beside
the list, or double-click the item to remove. To add it back, select it in the Available
Fields list box and click the appropriate right arrow.
Once you define the contents of the decision cube, you can further manipulate
dimension display with its DimensionMap property and the buttons of TDecisionPivot.
For more information, see the next section, “Using decision cubes,” “Using decision
sources” on page 22-9, and “Using decision pivots” on page 22-10.
Note When you use the Decision Query editor, the query is initially handled in ANSI-92
SQL syntax, then translated (if necessary) into the dialect used by the server. The
Decision Query editor reads and displays only ANSI standard SQL. The dialect
translation is automatically assigned to the TDecisionQuery’s SQL property. To modify a query, edit the ANSI-92 version in the Decision Query editor rather than the SQL property.

Using decision cubes


The decision cube component, TDecisionCube, is a multidimensional data store that
fetches its data from a dataset (typically a specially structured SQL statement entered
through TDecisionQuery or TQuery). The data is stored in a form that makes it easy to pivot (that is, change the way in which the data is organized and summarized) without needing to run the query a second time.

Decision cube properties and events


The DimensionMap properties of TDecisionCube not only control which dimensions
and summaries appear but also let you set date ranges and specify the maximum
number of dimensions the decision cube may support. You can also indicate whether
or not to display data during design. You can display names (categories), values, subtotals, or data. Display of data at design time can be time consuming, depending on the data source.
When you click the ellipsis next to DimensionMap in the Object Inspector, the
Decision Cube editor dialog box appears. You can use its pages and controls to set
the DimensionMap properties.
The OnRefresh event fires whenever the decision cube cache is rebuilt. Developers can
access the new dimension map and change it at that time to free up memory, change
the maximum summaries or dimensions, and so on. OnRefresh is also useful if users
access the Decision Cube editor; application code can respond to user changes at that
time.


Using the Decision Cube editor


You can use the Decision Cube editor to set the DimensionMap properties of decision
cubes. You can display the Decision Cube editor through the Object Inspector, as
described in the previous section, or by right-clicking a decision cube on a form at
design time and choosing Decision Cube editor.
The Decision Cube Editor dialog box has two tabs:
• Dimension Settings, used to activate or disable available dimensions, rename and
reformat dimensions, put dimensions in a permanently drilled state, and set date
ranges to display.
• Memory Control, used to set the maximum number of dimensions and summaries
that can be active at one time, to display information about memory usage, and to
determine the names and data that appear at design time.

Viewing and changing dimension settings


To view the dimension settings, display the Decision Cube editor and click the
Dimension Settings tab. Then, select a dimension or summary in the Available Fields
list. Its information appears in the boxes on the right side of the editor:
• To change the dimension or summary name that appears in the decision pivot,
decision grid, or decision graph, enter a new name in the Display Name edit box.
• To determine whether the selected field is a dimension or summary, read the text
in the Type edit box. If the dataset is a TTable component, you can use Type to
specify whether the selected field is a dimension or summary.
• To disable or activate the selected dimension or summary, change the setting in
the Active Type drop-down list box: Active, As Needed, or Inactive. Disabling a
dimension or setting it to As Needed saves memory.
• To change the format of that dimension or summary, enter a format string in the
Format edit box.
• To display that dimension or summary by Year, Quarter, or Month, change the
setting in the Binning drop-down list box. Note that you can choose Set in the
Binning list box to put the selected dimension or summary in a permanently
“drilled down” state. This can be useful for saving memory when a dimension has
many values. For more information, see “Decision support components and
memory control” on page 22-20.
• To determine the starting value for ranges, or the drill-down value for a “Set”
dimension, first choose the appropriate Grouping value in the Grouping drop-
down, and then enter the starting range value or permanent drill-down value in
the Initial Value drop-down list.


Setting the maximum available dimensions and summaries


To determine the maximum number of dimensions and summaries available for
decision pivots, decision grids, and decision graphs bound to the selected decision
cube, display the Decision Cube editor and click the Memory Control tab. Use the
edit controls to adjust the current settings, if necessary. These settings help to control
the amount of memory required by the decision cube. For more information, see
“Decision support components and memory control” on page 22-20.

Viewing and changing design options


To determine how much information appears at design time, display the Decision
Cube editor and click the Memory Control tab. Then, check the setting that indicates
which names and data to display. Display of data or field names at design time can
cause performance delays in some cases because of the time needed to fetch the data.

Using decision sources


The decision source component, TDecisionSource, defines the current pivot state of
decision grids or decision graphs. Any two objects which use the same decision
source also share pivot states.

Properties and events


The following are some special properties and events that control the appearance and
behavior of decision sources:
• The ControlType property of TDecisionSource indicates whether the decision pivot
buttons should act like check boxes (multiple selections) or radio buttons
(mutually exclusive selections).
• The SparseCols and SparseRows properties of TDecisionSource indicate whether to
display columns or rows with no values; if True, sparse columns or rows are
displayed.
• TDecisionSource has the following events:
• OnLayoutChange occurs when the user performs pivots or drill-downs that
reorganize the data.
• OnNewDimensions occurs when the data is completely altered, such as when the
summary or dimension fields are altered.
• OnSummaryChange occurs when the current summary is changed.
• OnStateChange occurs when the Decision Cube activates or deactivates.


• OnBeforePivot occurs when changes are committed but not yet reflected in the
user interface. Developers have an opportunity to make changes, for example,
in capacity or pivot state, before application users see the result of their
previous action.
• OnAfterPivot fires after a change in pivot state. Developers can capture
information at that time.
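As a sketch of how these events might be wired up, the handlers below keep a status display in sync with the pivot state. The form layout (a TDecisionSource named DecisionSource1 and a TStatusBar named StatusBar1) and the handler names are illustrative, not from the original text:

```delphi
procedure TForm1.DecisionSource1LayoutChange(Sender: TObject);
begin
  // The user pivoted or drilled down; refresh anything that depends
  // on the current arrangement of rows and columns.
  StatusBar1.SimpleText := 'Pivot state changed';
end;

procedure TForm1.DecisionSource1SummaryChange(Sender: TObject);
begin
  // A different summary (Sum, Count, Avg, and so on) is now current.
  StatusBar1.SimpleText := 'Summary changed';
end;
```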

Using decision pivots


The decision pivot component, TDecisionPivot, lets you open or close decision cube
dimensions, or fields, by pressing buttons. When a row or column is opened by
pressing a TDecisionPivot button, the corresponding dimension appears on the
TDecisionGrid or TDecisionGraph component. When a dimension is closed, its detailed
data doesn’t appear; it collapses into the totals of other dimensions. A dimension
may also be in a “drilled” state, where only the summaries for a particular value of
the dimension field appear.
You can also use the decision pivot to reorganize dimensions displayed in the
decision grid and decision graph. Just drag a button to the row or column area or
reorder buttons within the same area.
For illustrations of decision pivots at design time, see Figures 22.1, 22.2, and 22.3.

Decision pivot properties


The following are some special properties that control the appearance and behavior
of decision pivots:
• The first properties listed for TDecisionPivot define its overall behavior and
appearance. You might want to set ButtonAutoSize to False for TDecisionPivot to
keep buttons from expanding and contracting as you adjust the size of the
component.
• The Groups property of TDecisionPivot defines which dimension buttons appear.
You can display the row, column, and summary selection button groups in any
combination. Note that if you want more flexibility over the placement of these
groups, you can place one TDecisionPivot on your form which contains only rows
in one location, and a second which contains only columns in another location.
• Typically, TDecisionPivot is added above TDecisionGrid. In its default orientation,
horizontal, buttons on the left side of TDecisionPivot apply to fields on the left side
of TDecisionGrid (rows); buttons on the right side apply to fields at the top of
TDecisionGrid (columns).
• You can determine where TDecisionPivot’s buttons appear by setting its
GroupLayout property to xtVertical, xtLeftTop, or xtHorizontal (the default, described
in the previous paragraph).
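The two-pivot arrangement described above might be set up in code as follows. This is only a sketch: the set element names used for Groups (xtRows, xtColumns) are assumptions, so check the Groups property in the Object Inspector for the exact values:

```delphi
// One pivot holding only row buttons, a second holding only column
// buttons, so each group can be placed independently on the form.
DecisionPivot1.DecisionSource := DecisionSource1;
DecisionPivot1.Groups := [xtRows];          // assumed element name
DecisionPivot1.GroupLayout := xtVertical;
DecisionPivot1.ButtonAutoSize := False;     // keep button sizes stable

DecisionPivot2.DecisionSource := DecisionSource1;
DecisionPivot2.Groups := [xtColumns];       // assumed element name
```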


Creating and using decision grids


Decision grid components, TDecisionGrid, present cross-tabulated data in table form.
These tables are also called crosstabs, described on page 22-2. Figure 22.1 on
page 22-2 shows a decision grid on a form at design time.

Creating decision grids


To create a form with one or more tables of cross-tabulated data,
1 Follow steps 1–3 listed under “Guidelines for using decision support components”
on page 22-4.
2 Add one or more decision grid components (TDecisionGrid) and bind them to the
decision source, TDecisionSource, with the Object Inspector by setting their
DecisionSource property to the appropriate decision source component.
3 Continue with steps 5–7 listed under “Guidelines for using decision support
components.”
For a description of what appears in the decision grid and how to use it, see “Using
decision grids” on page 22-11.
To add a graph to the form, follow the instructions in “Creating decision graphs” on
page 22-13.
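The binding in step 2 can also be done at runtime instead of with the Object Inspector. A minimal sketch, assuming the form already has a working decision source named DecisionSource1:

```delphi
var
  Grid: TDecisionGrid;
begin
  Grid := TDecisionGrid.Create(Self);
  Grid.Parent := Self;
  Grid.Align := alClient;
  // Step 2 in code: bind the grid to the decision source.
  Grid.DecisionSource := DecisionSource1;
end;
```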

Using decision grids


The decision grid component, TDecisionGrid, displays data from decision cubes
(TDecisionCube) bound to decision sources (TDecisionSource).
By default, the grid appears with dimension fields at its left side and/or top,
depending on the grouping instructions defined in the dataset. Categories, one for
each data value, appear under each field. You can
• Open and close dimensions
• Reorganize, or pivot, rows and columns
• Drill down for detail
• Limit dimension selection to a single dimension for each axis
For more information about special properties and events of the decision grid, see
“Decision grid properties” on page 22-12.

Opening and closing decision grid fields


If a plus sign (+) appears in a dimension or summary field, one or more fields to its
right are closed (hidden). You can open additional fields and categories by clicking
the sign. A minus sign (-) indicates a fully opened (expanded) field. When you click
the sign, the field closes. This outlining feature can be disabled; see “Decision grid
properties” on page 22-12 for details.


Reorganizing rows and columns in decision grids


You can drag row and column headings to new locations within the same axis or to
the other axis. In this way, you can reorganize the grid and see the data from new
perspectives as the data groupings change. This pivoting feature can be disabled; see
“Decision grid properties” on page 22-12 for details.
If you included a decision pivot, you can push and drag its buttons to reorganize the
display. See “Using decision pivots” on page 22-10 for instructions.

Drilling down for detail in decision grids


You can drill down to see more detail in a dimension.
For example, if you right-click a category label (row heading) for a dimension with
others collapsed beneath it, you can choose to drill down and only see data for that
category. When a dimension is drilled, you do not see the category labels for that
dimension displayed on the grid, since only the records for a single category value
are being displayed. If you have a decision pivot on the form, it displays category
values and lets you change to other values if you want.
To drill down into a dimension,
• Right-click a category label and choose Drill In To This Value, or
• Right-click a pivot button and choose Drilled In.
To make the complete dimension active again,
• Right-click the corresponding pivot button, or
• Right-click the upper-left corner of the decision grid and select the dimension.

Limiting dimension selection in decision grids


You can change the ControlType property of the decision source to determine whether
more than one dimension can be selected for each axis of the grid. For more
information, see “Using decision sources” on page 22-9.

Decision grid properties


The decision grid component, TDecisionGrid, displays data from the TDecisionCube
component bound to TDecisionSource. By default, data appears in a grid with
category fields on the left side and top of the grid.
The following are some special properties that control the appearance and behavior
of decision grids:
• TDecisionGrid has unique properties for each dimension. To set these, choose
Dimensions in the Object Inspector, then select a dimension. Its properties then
appear in the Object Inspector: Alignment defines the alignment of category labels
for that dimension, Caption can be used to override the default dimension name,
Color defines the color of category labels, FieldName displays the name of the active
dimension, Format can hold any standard format for that data type, and Subtotals
indicates whether to display subtotals for that dimension. With summary fields,
these same properties are used to change the appearance of the data that appears
in the summary area of the grid. When you’re through setting dimension
properties, either click a component in the form or choose a component in the
drop-down list box at the top of the Object Inspector.
• The Options property of TDecisionGrid lets you control display of grid lines
(cgGridLines = True), enabling of outline features (collapse and expansion of
dimensions with + and - indicators; cgOutliner = True), and enabling of drag-and-
drop pivoting (cgPivotable = True).
• The OnDecisionDrawCell event of TDecisionGrid gives you a chance to change the
appearance of each cell as it is drawn. The event passes the String, Font, and Color
of the current cell as reference parameters. You are free to alter those parameters to
achieve effects such as special colors for negative values. In addition to the
DrawState which is passed by TCustomGrid, the event passes TDecisionDrawState,
which can be used to determine what type of cell is being drawn. Further
information about the cell can be fetched using the Cells, CellValueArray, or
CellDrawState functions.
• The OnDecisionExamineCell event of TDecisionGrid lets you hook the right-click
event on data cells, and is intended to allow a program to display information
(such as detail records) about that particular data cell. When the user right-clicks a
data cell, the event is supplied with all the information which was used to
compose the data value, including the currently active summary value and a
ValueArray of all the dimension values which were used to create the summary
value.
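As an example of the negative-value effect mentioned above, an OnDecisionDrawCell handler might recolor cells as they are drawn. The parameter list below is approximate (check the event declaration in your component's unit source); StrToFloatDef simply treats unparsable cell text as zero:

```delphi
procedure TForm1.DecisionGrid1DecisionDrawCell(Sender: TObject;
  Col, Row: Integer; var Value: string; var aFont: TFont;
  var aColor: TColor; AState: TGridDrawState;
  ADrawState: TDecisionDrawState);
begin
  // Paint negative summary values in red; leave everything else alone.
  if StrToFloatDef(Value, 0) < 0 then
    aFont.Color := clRed;
end;
```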

Creating and using decision graphs


Decision graph components, TDecisionGraph, present cross-tabulated data in graphic
form. Each decision graph shows the value of a single summary, such as Sum, Count,
or Avg, charted for one or more dimensions. For more information on crosstabs, see
page 22-3. For illustrations of decision graphs at design time, see Figure 22.1 on
page 22-2 and Figure 22.4 on page 22-15.

Creating decision graphs


To create a form with one or more decision graphs,
1 Follow steps 1–3 listed under “Guidelines for using decision support components”
on page 22-4.
2 Add one or more decision graph components (TDecisionGraph) and bind them to
the decision source, TDecisionSource, with the Object Inspector by setting their
DecisionSource property to the appropriate decision source component.


3 Continue with steps 5–7 listed under “Guidelines for using decision support
components.”
4 Finally, right-click the graph and choose Edit Chart to modify the appearance of
the graph series. You can set template properties for each graph dimension, then
set individual series properties to override these defaults. For details, see
“Customizing decision graphs” on page 22-16.
For a description of what appears in the decision graph and how to use it, see the
next section, “Using decision graphs.”
To add a decision grid—or crosstab table—to the form, follow the instructions in
“Creating and using decision grids” on page 22-11.

Using decision graphs


The decision graph component, TDecisionGraph, displays fields from the decision
source (TDecisionSource) as a dynamic graph that changes when data dimensions are
opened, closed, dragged and dropped, or rearranged with the decision pivot
(TDecisionPivot).
Graphed data comes from a specially formatted dataset such as TDecisionQuery. For
an overview of how the decision support components handle and arrange this data,
see page 22-1.
By default, the first row dimension appears as the x-axis and the first column
dimension appears as the y-axis.
You can use decision graphs instead of or in addition to decision grids, which present
cross-tabulated data in tabular form. Decision grids and decision graphs that are
bound to the same decision source present the same data dimensions. To show
different summary data for the same dimensions, you can bind more than one
decision graph to the same decision source. To show different dimensions, bind
decision graphs to different decision sources.
For example, in Figure 22.4 the first decision pivot and graph are bound to the first
decision source and the second decision pivot and graph are bound to the second. So,
each graph can show different dimensions.


Figure 22.4 Decision graphs bound to different decision sources

For more information about what appears in a decision graph, see the next section,
“The decision graph display.”
To create a decision graph, see the previous section, “Creating decision graphs.”
For a discussion of decision graph properties and how to change the appearance and
behavior of decision graphs, see “Customizing decision graphs” on page 22-16.

The decision graph display


By default, the decision graph plots summary values for categories in the first active
row field (along the y-axis) against values in the first active column field (along the x-
axis). Each graphed category appears as a separate series.
If only one dimension is selected—for example, by clicking only one TDecisionPivot
button—only one series is graphed.
If you used a decision pivot, you can push its buttons to determine which decision
cube fields (dimensions) are graphed. To exchange graph axes, drag the decision
pivot dimension buttons from one side of the separator space to the other. If you
have a one-dimensional graph with all buttons on one side of the separator space,
you can use the Row or Column icon as a drop target for adding buttons to the other
side of the separator and making the graph multidimensional.


If you only want one column and one row to be active at a time, you can set the
ControlType property for TDecisionSource to xtRadio. Then, there can be only one
active field at a time for each decision cube axis, and the decision pivot’s
functionality will correspond to the graph’s behavior. xtRadioEx works the same as
xtRadio, but does not allow the state where all row or all column dimensions are
closed.
When you have both a decision grid and graph connected to the same
TDecisionSource, you’ll probably want to set ControlType back to xtCheck to
correspond to the more flexible behavior of TDecisionGrid.
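In code, switching between the two behaviors is a one-line change; xtRadio, xtRadioEx, and xtCheck are the values named above:

```delphi
// Graph-friendly: only one active dimension per axis.
DecisionSource1.ControlType := xtRadio;

// Grid-friendly: any combination of open dimensions.
DecisionSource1.ControlType := xtCheck;
```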

Customizing decision graphs


The decision graph component, TDecisionGraph, displays fields from the decision
source (TDecisionSource) as a dynamic graph that changes when data dimensions are
opened, closed, dragged and dropped, or rearranged with the decision pivot
(TDecisionPivot). You can change the type, colors, marker types for line graphs, and
many other properties of decision graphs.
To customize a graph,
1 Right-click it and choose Edit Chart. The Chart Editing dialog box appears.
2 Use the Chart page of the Chart Editing dialog box to view a list of visible series,
select the series definition to use when two or more are available for the same
series, change graph types for a template or series, and set overall graph
properties.
The Series list on the Chart page shows all decision cube dimensions (preceded by
Template:) and currently visible categories. Each category, or series, is a separate
object. You can:
• Add or delete series derived from existing decision-graph series. Derived series
can provide annotations for existing series or represent values calculated from
other series.
• Change the default graph type, and change the title of templates and series.
For a description of the other Chart page tabs, search for the following topic in
online Help: “Chart page (Chart Editing dialog box).”
3 Use the Series page to establish dimension templates, then customize properties
for each individual graph series.
By default, all series are graphed as bar graphs and up to 16 default colors are
assigned. You can edit the template type and properties to create a new default.
Then, as you pivot the decision source to different states, the template is used to
dynamically create the series for each new state. For template details, see “Setting
decision graph template defaults” on page 22-17.

To customize individual series, follow the instructions in “Customizing decision
graph series” on page 22-18.
For a description of each Series page tab, search for the following topic in online
Help: “Series page (Chart Editing dialog box).”

Setting decision graph template defaults


Decision graphs display the values from two dimensions of the decision cube: one
dimension is displayed as an axis of the graph, and the other is used to create a set of
series. The template for that dimension provides default properties for those series
(such as whether the series are bar, line, area, and so on). As users pivot from one
state to another, any required series for the dimension are created using the series
type and other defaults specified in the template.
A separate template is provided for cases where users pivot to a state where only one
dimension is active. A one-dimensional state is often represented with a pie chart, so
a separate template is provided for this case.
You can
• Change the default graph type.
• Change other graph template properties.
• View and set overall graph properties.

Changing the default decision graph type


To change the default graph type,
1 Select a template in the Series list on the Chart page of the Chart Editing dialog
box.
2 Click the Change button.
3 Select a new type and close the Gallery dialog box.

Changing other decision graph template properties


To change color or other properties of a template,
1 Select the Series page at the top of the Chart Editing dialog box.
2 Choose a template in the drop-down list at the top of the page.
3 Choose the appropriate property tab and select settings.

Viewing overall decision graph properties


To view and set decision graph properties other than type and series,
1 Select the Chart page at the top of the Chart Editing dialog box.
2 Choose the appropriate property tab and select settings.


Customizing decision graph series


The templates supply many defaults for each decision cube dimension, such as graph
type and how series are displayed. Other defaults, such as series color, are defined by
TDecisionGraph. If you want you can override the defaults for each series.
The templates are intended for use when you want the program to create the series
for categories as they are needed, and discard them when they are no longer needed.
If you want, you can set up custom series for specific category values. To do this,
pivot the graph so its current display has a series for the category you want to
customize. When the series is displayed on the graph, you can use the Chart editor to
• Change the graph type.
• Change other series properties.
• Save specific graph series that you have customized.
To define series templates and set overall graph defaults, see “Setting decision graph
template defaults” on page 22-17.

Changing the series graph type


By default, each series has the same graph type, defined by the template for its
dimension. To change all series to the same graph type, you can change the template
type. See “Changing the default decision graph type” on page 22-17 for instructions.
To change the graph type for a single series,
1 Select a series in the Series list on the Chart page of the Chart editor.
2 Click the Change button.
3 Select a new type and close the Gallery dialog box.
4 Check the Save Series check box.

Changing other decision graph series properties


To change color or other properties of a decision graph series,
1 Select the Series page at the top of the Chart Editing dialog box.
2 Choose a series in the drop-down list at the top of the page.
3 Choose the appropriate property tab and select settings.
4 Check the Save Series check box.

Saving decision graph series settings


By default, only settings for templates are saved at design time. Changes made to
specific series are only saved if the Save box is checked for that series in the Chart
Editing dialog box.
Saving series can be memory intensive, so if you don’t need to save them you can
uncheck the Save box.

Decision support components at runtime


At runtime, users can perform many operations by left-clicking, right-clicking, and
dragging visible decision support components. These operations, discussed earlier in
this chapter, are summarized below.

Decision pivots at runtime


Users can:
• Left-click the summary button at the left end of the decision pivot to display a list
of available summaries. They can use this list to change the summary data
displayed in decision grids and decision graphs.
• Right-click a dimension button and choose to:
• Move it from the row area to the column area or the reverse.
• Drill In to display detail data.
• Left-click a dimension button following the Drill In command and choose:
• Open Dimension to move back to the top level of that dimension.
• All Values to toggle between displaying just summaries and summaries plus all
other values in decision grids.
• From a list of available categories for that dimension, a category to drill into for
detail values.
• Left-click a dimension button to open or close that dimension.
• Drag and drop dimension buttons from the row area to the column area and the
reverse; they can drop them next to existing buttons in that area or onto the row or
column icon.

Decision grids at runtime


Users can:
• Right-click within the decision grid and choose to:
• Toggle subtotals on and off for individual data groups, for all values of a
dimension, or for the whole grid.
• Display the Decision Cube editor, described on page 22-8.
• Toggle dimensions and summaries open and closed.
• Click + and - within the row and column headings to open and close dimensions.
• Drag and drop dimensions from rows to columns and the reverse.


Decision graphs at runtime


Users can drag from side to side or up and down in the graph grid area to scroll
through off-screen categories and values.

Decision support components and memory control


When a dimension or summary is loaded into the decision cube, it takes up memory.
Adding a new summary increases memory consumption linearly: that is, a decision
cube with two summaries uses twice as much memory as the same cube with only
one summary, a decision cube with three summaries uses three times as much
memory as the same cube with one summary, and so on. Memory consumption for
dimensions increases more quickly. Adding a dimension with 10 values increases
memory consumption by a factor of 10. Adding a dimension with 100 values
increases memory consumption 100 times. Thus adding dimensions to a decision
cube can have a dramatic effect on memory use, and can quickly lead to performance
problems. This effect is especially pronounced when adding dimensions that have
many values.
The decision support components have a number of settings to help you control how
and when memory is used. For more information on the properties and techniques
mentioned here, look up TDecisionCube in the online Help.

Setting maximum dimensions, summaries, and cells


The decision cube’s MaxDimensions and MaxSummaries properties can be used with
the CubeDim.ActiveFlag property to control how many dimensions and summaries
can be loaded at a time. You can set the maximum values on the Cube Capacity page
of the Decision Cube editor to place some overall control on how many dimensions
or summaries can be brought into memory at the same time.
Limiting the number of dimensions or summaries provides a rough limit on the
amount of memory used by the decision cube. However, it does not distinguish
between dimensions with many values and those with only a few. For greater control
of the absolute memory demands of the decision cube, you can also limit the number
of cells in the cube. Set the maximum number of cells on the Cube Capacity page of
the Decision Cube editor.
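The same limits can also be set in code rather than on the Cube Capacity page. The property names MaxDimensions, MaxSummaries, and MaxCells come from the text; the values here are purely illustrative:

```delphi
// Cap how much of the cube may be resident in memory at once.
DecisionCube1.MaxDimensions := 6;
DecisionCube1.MaxSummaries  := 4;
DecisionCube1.MaxCells      := 100000;  // absolute bound on cell storage
```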


Setting dimension state


The ActiveFlag property controls which dimensions get loaded. You can set this
property on the Dimension Settings tab of the Decision Cube editor using the
Activity Type control. When this control is set to Active, the dimension is loaded
unconditionally, and will always take up space. Note that the number of dimensions
in this state must always be less than MaxDimensions, and the number of summaries
set to Active must be less than MaxSummaries. You should set a dimension or
summary to Active only when it is critical that it be available at all times. An Active
setting decreases the ability of the cube to manage the available memory.
When ActiveFlag is set to AsNeeded, a dimension or summary is loaded only if it can
be loaded without exceeding the MaxDimensions, MaxSummaries, or MaxCells limit.
The decision cube will swap dimensions and summaries that are marked AsNeeded in
and out of memory to keep within the limits imposed by MaxCells, MaxDimensions,
and MaxSummaries. Thus, a dimension or summary may not be loaded in memory if
it is not currently being used. Setting dimensions that are not used frequently to
AsNeeded results in better loading and pivoting performance, although there will be a
time delay to access dimensions which are not currently loaded.
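Programmatically, the same setting might look like the sketch below. The DimensionMap collection and the diAsNeeded constant are assumptions based on the TDecisionCube online Help reference mentioned earlier, so verify the exact names there:

```delphi
// Let the cube swap this rarely used dimension in and out of memory
// as the MaxCells/MaxDimensions limits require (names are assumed).
DecisionCube1.DimensionMap[3].ActiveFlag := diAsNeeded;
```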

Using paged dimensions


When Binning is set to Set on the Dimension Settings tab of the Decision cube editor
and Start Value is not NULL, the dimension is said to be “paged,” or “permanently
drilled down.” You can access data for just a single value of that dimension at a time,
although you can programmatically access a series of values sequentially. Such a
dimension may not be pivoted or opened.
It is extremely memory intensive to include dimensional data for dimensions that
have very large numbers of values. By making such dimensions paged, you can
display summary information for one value at a time. Information is usually easier to
read when displayed this way, and memory consumption is much easier to manage.

Chapter 23
Connecting to databases
Most dataset components can connect directly to a database server. Once connected,
the dataset communicates with the server automatically. When you open the dataset,
it populates itself with data from the server, and when you post records, they are sent
back to the server and applied. A single connection component can be shared by
multiple datasets, or each dataset can use its own connection.
Each type of dataset connects to the database server using its own type of connection
component, which is designed to work with a single data access mechanism. The
following table lists these data access mechanisms and the associated connection
components:

Table 23.1 Database connection components


Data access mechanism Connection component
Borland Database Engine (BDE) TDatabase
ActiveX Data Objects (ADO) TADOConnection
dbExpress TSQLConnection
InterBase Express TIBDatabase

Note For a discussion of some pros and cons of each of these mechanisms, see “Using
databases” on page 19-1.
The connection component provides all the information necessary to establish a
database connection. This information is different for each type of connection
component:
• For information about describing a BDE-based connection, see “Identifying the
database” on page 26-14.
• For information about describing an ADO-based connection, see “Connecting to a
data store using TADOConnection” on page 27-3.

• For information about describing a dbExpress connection, see “Setting up
TSQLConnection” on page 28-3.
• For information about describing an InterBase Express connection, see the online
help for TIBDatabase.
Although each type of dataset uses a different connection component, they are all
descendants of TCustomConnection. They all perform many of the same tasks and
surface many of the same properties, methods, and events. This chapter discusses
many of these common tasks.

Using implicit connections


No matter what data access mechanism you are using, you can always create the
connection component explicitly and use it to manage the connection to and
communication with a database server. For BDE-enabled and ADO-based datasets,
you also have the option of describing the database connection through properties of
the dataset and letting the dataset generate an implicit connection. For BDE-enabled
datasets, you specify an implicit connection using the DatabaseName property. For
ADO-based datasets, you use the ConnectionString property.
When using an implicit connection, you do not need to explicitly create a connection
component. This can simplify your application development, and the default
connection you specify can cover a wide variety of situations. For complex, mission-
critical client/server applications with many users and different requirements for
database connections, however, you should create your own connection components
to tune each database connection to your application’s needs. Explicit connection
components give you greater control. For example, you need to access the connection
component to perform the following tasks:
• Customize database server login support. (Implicit connections display a default
login dialog to prompt the user for a user name and password.)
• Control transactions and specify transaction isolation levels.
• Execute SQL commands on the server without using a dataset.
• Perform actions on all open datasets that are connected to the same database.
In addition, if you have multiple datasets that all use the same server, it can be easier
to use a connection component, so that you only have to specify the server to use in
one place. That way, if you later change the server, you do not need to update several
dataset components: only the connection component.
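For BDE-enabled datasets, the contrast looks like this. DBDEMOS is the demo alias installed with Delphi, and the 'MyAppDB' logical name is illustrative:

```delphi
// Implicit connection: the dataset names the database itself, and a
// connection is generated for it behind the scenes.
Table1.DatabaseName := 'DBDEMOS';
Table1.Open;

// Explicit connection: one TDatabase shared by several datasets, so
// changing servers later means editing a single component.
Database1.AliasName    := 'DBDEMOS';
Database1.DatabaseName := 'MyAppDB';   // logical name the datasets use
Table1.DatabaseName := 'MyAppDB';
Table2.DatabaseName := 'MyAppDB';
Database1.Open;
```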

Controlling connections
Before you can establish a connection to a database server, your application must
provide certain key pieces of information that describe the desired server. Each type
of connection component surfaces a different set of properties to let you identify the
server. In general, however, they all provide a way for you to name the server you
want and supply a set of connection parameters that control how the connection is
formed. Connection parameters vary from server to server. They can include
information such as user name and password, the maximum size of BLOB fields,
SQL roles, and so on.
Once you have identified the desired server and any connection parameters, you can
use the connection component to explicitly open or close a connection. The
connection component generates events when it opens or closes a connection that
you can use to customize the response of your application to changes in the database
connection.

Connecting to a database server


There are two ways to connect to a database server using a connection component:
• Call the Open method.
• Set the Connected property to True.
Calling the Open method sets Connected to True.
Note When a connection component is not connected to a server and an application
attempts to open one of its associated datasets, the dataset automatically calls the
connection component’s Open method.
When you set Connected to True, the connection component first generates a
BeforeConnect event, where you can perform any initialization. For example, you can
use this event to alter connection parameters.
After the BeforeConnect event, the connection component may display a default login
dialog, depending on how you choose to control server login. It then passes the user
name and password to the driver, opening a connection.
Once the connection is open, the connection component generates an AfterConnect
event, where you can perform any tasks that require an open connection.
Note Some connection components generate additional events as well when establishing a
connection.
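For example, a sketch of this sequence might look like the following (the connection, button, and status bar names are illustrative):
procedure TForm1.ConnectButtonClick(Sender: TObject);
begin
  MyConnection.Open;  // equivalent to MyConnection.Connected := True
end;

procedure TForm1.MyConnectionAfterConnect(Sender: TObject);
begin
  // Perform tasks that require an open connection.
  StatusBar1.SimpleText := 'Connected';
end;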


Once a connection is established, it is maintained as long as there is at least one active
dataset using it. When there are no more active datasets, the connection component
drops the connection. Some connection components surface a KeepConnection
property that allows the connection to remain open even if all the datasets that use it
are closed. If KeepConnection is True, the connection is maintained. For connections to
remote database servers, or for applications that frequently open and close datasets,
setting KeepConnection to True reduces network traffic and speeds up the application.
If KeepConnection is False, the connection is dropped when there are no active datasets
using the database. If a dataset that uses the database is later opened, the connection
must be reestablished and initialized.
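For example, assuming a TDatabase component named Database1, a single assignment keeps the connection open even after its last dataset closes:
Database1.KeepConnection := True;  // avoid repeated connect/disconnect cycles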

Disconnecting from a database server


There are two ways to disconnect a server using a connection component:
• Set the Connected property to False.
• Call the Close method.
Calling Close sets Connected to False.
When Connected is set to False, the connection component generates a BeforeDisconnect
event, where you can perform any cleanup before the connection closes. For example,
you can use this event to cache information about all open datasets before they are
closed.
After the BeforeDisconnect event, the connection component closes all open datasets and
disconnects from the server.
Finally, the connection component generates an AfterDisconnect event, where you can
respond to the change in connection status, such as enabling a Connect button in
your user interface.
Note Calling Close or setting Connected to False disconnects from a database server even if
the connection component has a KeepConnection property that is True.
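For example, the following sketch (component names are illustrative, and OpenDataSetNames is assumed to be a TStrings instance created elsewhere) caches the names of the open datasets in the BeforeDisconnect event and then closes the connection:
procedure TForm1.MyConnectionBeforeDisconnect(Sender: TObject);
var
  I: Integer;
begin
  OpenDataSetNames.Clear;
  with Sender as TCustomConnection do
    for I := 0 to DataSetCount - 1 do
      OpenDataSetNames.Add(DataSets[I].Name);
end;

procedure TForm1.DisconnectButtonClick(Sender: TObject);
begin
  MyConnection.Close;  // equivalent to MyConnection.Connected := False
end;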

Controlling server login


Most remote database servers include security features to prohibit unauthorized
access. Usually, the server requires a user name and password login before
permitting database access.
At design time, if a server requires a login, a standard login dialog box prompts for a
user name and password when you first attempt to connect to the database.
At runtime, there are three ways you can handle a server’s request for a login:
• Let the default login dialog and processes handle the login. This is the default
approach. Set the LoginPrompt property of the connection component to True (the
default) and add DBLogDlg to the uses clause of the unit that declares the
connection component. Your application displays the standard login dialog box
when the server requests a user name and password.


• Supply the login information before the login attempt. Each type of connection
component uses a different mechanism for specifying the user name and
password:
• For BDE, dbExpress, and InterBase express datasets, the user name and
password connection parameters can be accessed through the Params property.
(For BDE datasets, the parameter values can also be associated with a BDE alias,
while for dbExpress datasets, they can also be associated with a connection
name).
• For ADO datasets, the user name and password can be included in the
ConnectionString property (or provided as parameters to the Open method).
If you specify the user name and password before the server requests them, be
sure to set the LoginPrompt property to False, so that the default login dialog does not
appear. For example, the following code sets the user name and password on a
SQL connection component in the BeforeConnect event handler, decrypting an
encrypted password that is associated with the current connection name:
procedure TForm1.SQLConnectionBeforeConnect(Sender: TObject);
begin
  with Sender as TSQLConnection do
  begin
    if LoginPrompt = False then
    begin
      Params.Values['User_Name'] := 'SYSDBA';
      Params.Values['Password'] := Decrypt(Params.Values['Password']);
    end;
  end;
end;
Note that setting the user name and password at design-time or using hard-coded
strings in code causes the values to be embedded in the application’s executable
file. This still leaves them easy to find, compromising server security.
• Provide your own custom handling for the login event. The connection
component generates an event when it needs the user name and password.
• For TDatabase, TSQLConnection, and TIBDatabase, this is an OnLogin event. The
event handler has two parameters, the connection component, and a local copy
of the user name and password parameters in a string list. (TSQLConnection
includes the database parameter as well). You must set the LoginPrompt
property to True for this event to occur. Having a LoginPrompt value of False and
assigning a handler for the OnLogin event creates a situation where it is
impossible to log in to the database because the default dialog does not appear
and the OnLogin event handler never executes.
• For TADOConnection, the event is an OnWillConnect event. The event handler
has five parameters, the connection component and four parameters that return
values to influence the connection (including two for user name and password).
This event always occurs, regardless of the value of LoginPrompt.


Write an event handler for the event in which you set the login parameters. Here is
an example where the values for the USER NAME and PASSWORD parameters
are provided from a global variable (UserName) and a method that returns a
password given a user name (PasswordSearch):
procedure TForm1.Database1Login(Database: TDatabase; LoginParams: TStrings);
begin
  LoginParams.Values['USER NAME'] := UserName;
  LoginParams.Values['PASSWORD'] := PasswordSearch(UserName);
end;
As with the other methods of providing login parameters, when writing an
OnLogin or OnWillConnect event handler, avoid hard coding the password in your
application code. It should appear only as an encrypted value, an entry in a secure
database your application uses to look up the value, or be dynamically obtained
from the user.

Managing transactions
A transaction is a group of actions that must all be carried out successfully on one or
more tables in a database before they are committed (made permanent). If one of the
actions in the group fails, then all actions are rolled back (undone). By using
transactions, you ensure that the database is not left in an inconsistent state when a
problem occurs completing one of the actions that make up the transaction.
For example, in a banking application, transferring funds from one account to
another is an operation you would want to protect with a transaction. If, after
decrementing the balance in one account, an error occurred incrementing the balance
in the other, you want to roll back the transaction so that the database still reflects the
correct total balance.
It is always possible to manage transactions by sending SQL commands directly to
the database. Most databases provide their own transaction management model,
although some have no transaction support at all. For servers that support it, you
may want to code your own transaction management directly, taking advantage of
advanced transaction management capabilities on a particular database server, such
as schema caching.
If you do not need to use any advanced transaction management capabilities,
connection components provide a set of methods and properties you can use to
manage transactions without explicitly sending any SQL commands. Using these
properties and methods has the advantage that you do not need to customize your
application for each type of database server you use, as long as the server supports
transactions. (The BDE also provides limited transaction support for local tables with
no server transaction support. When not using the BDE, trying to start transactions
on a database that does not support them causes connection components to raise an
exception.)
Warning When a dataset provider component applies updates, it implicitly generates
transactions for any updates. Be careful that any transactions you explicitly start do
not conflict with those generated by the provider.


Starting a transaction
When you start a transaction, all subsequent statements that read from or write to the
database occur in the context of that transaction, until the transaction is explicitly
terminated or (in the case of overlapping transactions) until another transaction is
started. Each statement is considered part of a group. Changes must be successfully
committed to the database, or every change made in the group must be undone.
While the transaction is in process, your view of the data in database tables is
determined by your transaction isolation level. For information about transaction
isolation levels, see “Specifying the transaction isolation level” on page 23-9.
For TADOConnection, start a transaction by calling the BeginTrans method:
Level := ADOConnection1.BeginTrans;
BeginTrans returns the level of nesting for the transaction that started. A nested
transaction is one that is nested within another, parent, transaction. After the server
starts the transaction, the ADO connection receives an OnBeginTransComplete event.
For TDatabase, use the StartTransaction method instead. TDatabase does not support
nested or overlapped transactions: if you call a TDatabase component’s
StartTransaction method while another transaction is underway, it raises an
exception. To avoid this, check the InTransaction property before calling
StartTransaction:
if not Database1.InTransaction then
  Database1.StartTransaction;
TSQLConnection also uses the StartTransaction method, but it uses a version that gives
you a lot more control. Specifically, StartTransaction takes a transaction descriptor,
which lets you manage multiple simultaneous transactions and specify the
transaction isolation level on a per-transaction basis. (For more information on
transaction levels, see “Specifying the transaction isolation level” on page 23-9.) In
order to manage multiple simultaneous transactions, set the TransactionID field of the
transaction descriptor to a unique value. TransactionID can be any value you choose,
as long as it is unique (does not conflict with any other transaction currently
underway). Depending on the server, transactions started by TSQLConnection can be
nested (as they can be when using ADO) or they can be overlapped.
var
  TD: TTransactionDesc;
begin
  TD.TransactionID := 1;
  TD.IsolationLevel := xilREADCOMMITTED;
  SQLConnection1.StartTransaction(TD);
end;
By default, with overlapped transactions, the first transaction becomes inactive when
the second transaction starts, although you can postpone committing or rolling back
the first transaction until later. If you are using TSQLConnection with an InterBase
database, you can identify each dataset in your application with a particular active
transaction, by setting its TransactionLevel property. That is, after starting a second
transaction, you can continue to work with both transactions simultaneously, simply
by associating a dataset with the transaction you want.
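For example, the following sketch (dataset names are illustrative) starts two transactions on the same TSQLConnection and ties a dataset to each by matching TransactionLevel to the descriptor’s TransactionID:
var
  TD1, TD2: TTransactionDesc;
begin
  TD1.TransactionID := 1;
  TD1.IsolationLevel := xilREADCOMMITTED;
  SQLConnection1.StartTransaction(TD1);
  TD2.TransactionID := 2;
  TD2.IsolationLevel := xilREADCOMMITTED;
  SQLConnection1.StartTransaction(TD2);
  // Associate each dataset with the transaction it should work in.
  SQLQuery1.TransactionLevel := TD1.TransactionID;
  SQLQuery2.TransactionLevel := TD2.TransactionID;
end;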


Note Unlike TADOConnection, TSQLConnection and TDatabase do not receive any events
when a transaction starts.
InterBase express offers you even more control than TSQLConnection by using a
separate transaction component rather than starting transactions using the
connection component. You can, however, use TIBDatabase to start a default
transaction:
if not IBDatabase1.DefaultTransaction.InTransaction then
  IBDatabase1.DefaultTransaction.StartTransaction;
You can have overlapped transactions by using two separate transaction
components. Each transaction component has a set of parameters that let you
configure the transaction. These let you specify the transaction isolation level, as well
as other properties of the transaction.
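For example, assuming two transaction components that share one database connection, a sketch of overlapped transactions might look like this (the parameter strings shown are InterBase transaction options):
IBTransaction1.DefaultDatabase := IBDatabase1;
IBTransaction2.DefaultDatabase := IBDatabase1;
IBTransaction1.Params.Add('read_committed');  // see changes committed by others
IBTransaction2.Params.Add('concurrency');     // snapshot isolation
IBTransaction1.StartTransaction;
IBTransaction2.StartTransaction;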

Ending a transaction
Ideally, a transaction should last only as long as necessary. The longer a transaction
is active, and the more simultaneous users and concurrent transactions that start and
end during its lifetime, the greater the likelihood that your transaction will conflict
with another when you attempt to commit your changes.

Ending a successful transaction


When the actions that make up the transaction have all succeeded, you can make the
database changes permanent by committing the transaction. For TDatabase, you
commit a transaction using the Commit method:
MyOracleConnection.Commit;
For TSQLConnection, you also use the Commit method, but you must specify which
transaction you are committing by supplying the transaction descriptor you gave to
the StartTransaction method:
MyOracleConnection.Commit(TD);
For TIBDatabase, you commit a transaction object using its Commit method:
IBDatabase1.DefaultTransaction.Commit;
For TADOConnection, you commit a transaction using the CommitTrans method:
ADOConnection1.CommitTrans;
Note It is possible for a nested transaction to be committed, only to have the changes rolled
back later if the parent transaction is rolled back.
After the transaction is successfully committed, an ADO connection component
receives an OnCommitTransComplete event. Other connection components do not
receive any similar events.


A call to commit the current transaction is usually attempted in a try...except
statement. That way, if the transaction cannot commit successfully, you can use the
except block to handle the error and retry the operation or roll back the
transaction.
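For example, a sketch of this pattern using TSQLConnection (assuming a transaction descriptor TD initialized as shown in “Starting a transaction,” and SQLstmt1 and SQLstmt2 holding the statements that make up the transaction) might look like this:
SQLConnection1.StartTransaction(TD);
try
  SQLConnection1.Execute(SQLstmt1, nil, nil);
  SQLConnection1.Execute(SQLstmt2, nil, nil);
  SQLConnection1.Commit(TD);    // both changes succeed or neither does
except
  SQLConnection1.Rollback(TD);  // undo everything on any error
  raise;                        // let the caller see the error as well
end;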

Ending an unsuccessful transaction


If an error occurs when making the changes that are part of the transaction or when
trying to commit the transaction, you will want to discard all changes that make up
the transaction. Discarding these changes is called rolling back the transaction.
For TDatabase, you roll back a transaction by calling the Rollback method:
MyOracleConnection.Rollback;
For TSQLConnection, you also use the Rollback method, but you must specify which
transaction you are rolling back by supplying the transaction descriptor you gave to
the StartTransaction method:
MyOracleConnection.Rollback(TD);
For TIBDatabase, you roll back a transaction object by calling its Rollback method:
IBDatabase1.DefaultTransaction.Rollback;
For TADOConnection, you roll back a transaction by calling the RollbackTrans method:
ADOConnection1.RollbackTrans;
After the transaction is successfully rolled back, an ADO connection component
receives an OnRollbackTransComplete event. Other connection components do not
receive any similar events.
A call to roll back the current transaction usually occurs in
• Exception handling code when you can’t recover from a database error.
• Button or menu event code, such as when a user clicks a Cancel button.

Specifying the transaction isolation level


Transaction isolation level determines how a transaction interacts with other
simultaneous transactions when they work with the same tables. In particular, it
affects how much a transaction “sees” of other transactions’ changes to a table.
Each server type supports a different set of possible transaction isolation levels.
There are three possible transaction isolation levels:
• DirtyRead: When the isolation level is DirtyRead, your transaction sees all changes
made by other transactions, even if they have not been committed. Uncommitted
changes are not permanent, and might be rolled back at any time. This value
provides the least isolation, and is not available for many database servers (such as
Oracle, Sybase, MS-SQL, and InterBase).


• ReadCommitted: When the isolation level is ReadCommitted, only committed
changes made by other transactions are visible. Although this setting protects
your transaction from seeing uncommitted changes that may be rolled back, you
may still receive an inconsistent view of the database state if another transaction is
committed while you are in the process of reading. This level is available for all
transactions except local transactions managed by the BDE.
• RepeatableRead: When the isolation level is RepeatableRead, your transaction is
guaranteed to see a consistent state of the database data. Your transaction sees a
single snapshot of the data. It cannot see any subsequent changes to data by other
simultaneous transactions, even if they are committed. This isolation level
guarantees that once your transaction reads a record, its view of that record will
not change. At this level your transaction is most isolated from changes made by
other transactions. This level is not available on some servers, such as Sybase and
MS-SQL, and is unavailable for local transactions managed by the BDE.
In addition, TSQLConnection lets you specify database-specific custom isolation
levels. Custom isolation levels are defined by the dbExpress driver. See your driver
documentation for details.
Note For a detailed description of how each isolation level is implemented, see your server
documentation.
TDatabase and TADOConnection let you specify the transaction isolation level by
setting the TransIsolation property. When you set TransIsolation to a value that is not
supported by the database server, you get the next highest level of isolation (if
available). If there is no higher level available, the connection component raises an
exception when you try to start a transaction.
When using TSQLConnection, transaction isolation level is controlled by the
IsolationLevel field of the transaction descriptor.
When using InterBase express, transaction isolation level is controlled by a
transaction parameter.
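For example, the following lines (component names are illustrative; TD is assumed to be a TTransactionDesc) request repeatable reads under each model:
// TDatabase: set the property before starting the transaction.
Database1.TransIsolation := tiRepeatableRead;
// TSQLConnection: the level travels with the transaction descriptor.
TD.IsolationLevel := xilREPEATABLEREAD;
SQLConnection1.StartTransaction(TD);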

Sending commands to the server


All database connection components except TIBDatabase let you execute SQL
statements on the associated server by calling the Execute method. Although Execute
can return a cursor when the statement is a SELECT statement, this use is not
recommended. The preferred method for executing statements that return data is to
use a dataset.
The Execute method is very convenient for executing simple SQL statements that do
not return any records. Such statements include Data Definition Language (DDL)
statements, which operate on or create a database’s metadata, such as CREATE
INDEX, ALTER TABLE, and DROP DOMAIN. Some Data Manipulation Language
(DML) SQL statements also do not return a result set. The DML statements that
perform an action on data but do not return a result set are: INSERT, DELETE, and
UPDATE.


The syntax for the Execute method varies with the connection type:
• For TDatabase, Execute takes four parameters: a string that specifies a single SQL
statement that you want to execute, a TParams object that supplies any parameter
values for that statement, a boolean that indicates whether the statement should be
cached because you will call it again, and a pointer to a BDE cursor that can be
returned (it is recommended that you pass nil).
• For TADOConnection, there are two versions of Execute. The first takes a
WideString that specifies the SQL statement and a second parameter that specifies
a set of options that control whether the statement is executed asynchronously and
whether it returns any records. This first syntax returns an interface for the
returned records. The second syntax takes a WideString that specifies the SQL
statement, a second parameter that returns the number of records affected when
the statement executes, and a third that specifies options such as whether the
statement executes asynchronously. Note that neither syntax provides for passing
parameters.
• For TSQLConnection, Execute takes three parameters: a string that specifies a single
SQL statement that you want to execute, a TParams object that supplies any
parameter values for that statement, and a pointer that can receive a
TCustomSQLDataSet that is created to return records.
Note Execute can only execute one SQL statement at a time. It is not possible to execute
multiple SQL statements with a single call to Execute, as you can with SQL scripting
utilities. To execute more than one statement, call Execute repeatedly.
It is relatively easy to execute a statement that does not include any parameters. For
example, the following code executes a CREATE TABLE statement (DDL) without
any parameters on a TSQLConnection component:
procedure TForm1.CreateTableButtonClick(Sender: TObject);
var
  SQLstmt: String;
begin
  SQLConnection1.Connected := True;
  SQLstmt := 'CREATE TABLE NewCusts ' +
             '( ' +
             '  CustNo INTEGER, ' +
             '  Company CHAR(40), ' +
             '  State CHAR(2), ' +
             '  PRIMARY KEY (CustNo) ' +
             ')';
  SQLConnection1.Execute(SQLstmt, nil, nil);
end;
To use parameters, you must create a TParams object. For each parameter value, use
the TParams.CreateParam method to add a TParam object. Then use properties of
TParam to describe the parameter and set its value.


This process is illustrated in the following example, which uses TDatabase to execute
an INSERT statement. The INSERT statement has a single parameter named
StateParam. A TParams object (called stmtParams) is created to supply a value of “CA”
for that parameter.
procedure TForm1.INSERT_WithParamsButtonClick(Sender: TObject);
var
  SQLstmt: String;
  stmtParams: TParams;
begin
  stmtParams := TParams.Create;
  try
    Database1.Connected := True;
    stmtParams.CreateParam(ftString, 'StateParam', ptInput);
    stmtParams.ParamByName('StateParam').AsString := 'CA';
    SQLstmt := 'INSERT INTO "Custom.db" ' +
               '(CustNo, Company, State) ' +
               'VALUES (7777, "Robin Dabank Consulting", :StateParam)';
    Database1.Execute(SQLstmt, stmtParams, False, nil);
  finally
    stmtParams.Free;
  end;
end;
If the SQL statement includes a parameter but you do not supply a TParam object to
provide its value, the SQL statement may cause an error when executed (this
depends on the particular database back-end used). If a TParam object is provided
but there is no corresponding parameter in the SQL statement, an exception is raised
when the application attempts to use the TParam.

Working with associated datasets


All database connection components maintain a list of all datasets that use them to
connect to a database. A connection component uses this list, for example, to close all
of the datasets when it closes the database connection.
You can use this list as well, to perform actions on all the datasets that use a specific
connection component to connect to a particular database.

Closing all datasets without disconnecting from the server


The connection component automatically closes all datasets when you close its
connection. There may be times, however, when you want to close all datasets
without disconnecting from the database server.
To close all open datasets without disconnecting from a server, you can use the
CloseDataSets method.
For TADOConnection and TIBDatabase, calling CloseDataSets always leaves the
connection open. For TDatabase and TSQLConnection, you must also set the
KeepConnection property to True.
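For example, assuming a TDatabase component named Database1:
Database1.KeepConnection := True;  // required for TDatabase and TSQLConnection
Database1.CloseDataSets;           // datasets close; the connection stays open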


Iterating through the associated datasets


To perform any actions (other than closing them all) on all the datasets that use a
connection component, use the DataSets and DataSetCount properties. DataSets is an
indexed array of all datasets that are linked to the connection component. For all
connection components except TADOConnection, this list includes only the active
datasets. TADOConnection lists the inactive datasets as well. DataSetCount is the
number of datasets in this array.
Note When you use a specialized client dataset to cache updates (as opposed to the generic
client dataset, TClientDataSet), the DataSets property lists the internal dataset owned
by the client dataset, not the client dataset itself.
You can use DataSets with DataSetCount to cycle through all currently active datasets
in code. For example, the following code cycles through all active datasets and
disables any controls that use the data they provide:
var
  I: Integer;
begin
  with MyDBConnection do
  begin
    for I := 0 to DataSetCount - 1 do
      DataSets[I].DisableControls;
  end;
end;
Note TADOConnection supports command objects as well as datasets. You can iterate
through these much like you iterate through the datasets, by using the Commands and
CommandCount properties.
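For example, a sketch that lists the text of every command object associated with an ADO connection (the listbox name is illustrative) might look like this:
var
  I: Integer;
begin
  for I := 0 to ADOConnection1.CommandCount - 1 do
    ListBox1.Items.Add(ADOConnection1.Commands[I].CommandText);
end;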

Obtaining metadata
All database connection components can retrieve lists of metadata on the database
server, although they vary in the types of metadata they retrieve. The methods that
retrieve metadata fill a string list with the names of various entities available on the
server. You can then use this information, for example, to let your users dynamically
select a table at runtime.
You can use a TADOConnection component to retrieve metadata about the tables and
stored procedures available on the ADO data store. You can then use this
information, for example, to let your users dynamically select a table or stored
procedure at runtime.


Listing available tables


The GetTableNames method copies a list of table names to an already-existing string
list object. This can be used, for example, to fill a list box with table names that the
user can then use to choose a table to open. The following line fills a listbox with the
names of all tables on the database:
MyDBConnection.GetTableNames(ListBox1.Items, False);
GetTableNames has two parameters: the string list to fill with table names, and a
boolean that indicates whether the list should include system tables or only ordinary
tables. Note that not all servers use system tables to store metadata, so asking for
system tables may result in an empty list.
Note For most database connection components, GetTableNames returns a list of all
available non-system tables when the second parameter is False. For TSQLConnection,
however, you have more control over what type is added to the list when you are not
fetching only the names of system tables. When using TSQLConnection, the types of
names added to the list are controlled by the TableScope property. TableScope indicates
whether the list should contain any or all of the following: ordinary tables, system
tables, synonyms, and views.
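For example, the following sketch restricts the list to ordinary tables and views before fetching the names:
SQLConnection1.TableScope := [tsTable, tsView];  // omit system tables and synonyms
SQLConnection1.GetTableNames(ListBox1.Items, False);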

Listing the fields in a table


The GetFieldNames method fills an existing string list with the names of all fields
(columns) in a specified table. GetFieldNames takes two parameters, the name of the
table for which you want to list the fields, and an existing string list to be filled with
field names:
MyDBConnection.GetFieldNames('Employee', ListBox1.Items);

Listing available stored procedures


To get a listing of all of the stored procedures contained in the database, use the
GetProcedureNames method. This method takes a single parameter: an already-
existing string list to fill:
MyDBConnection.GetProcedureNames(ListBox1.Items);
Note GetProcedureNames is only available for TADOConnection and TSQLConnection.

Listing available indexes


To get a listing of all indexes defined for a specific table, use the GetIndexNames
method. This method takes two parameters: the table whose indexes you want, and
an already-existing string list to fill:
SQLConnection1.GetIndexNames('Employee', ListBox1.Items);
Note GetIndexNames is only available for TSQLConnection, although most table-type
datasets have an equivalent method.


Listing stored procedure parameters


To get a list of all parameters defined for a specific stored procedure, use the
GetProcedureParams method. GetProcedureParams fills a TList object with pointers to
parameter description records, where each record describes a parameter of a
specified stored procedure, including its name, index, parameter type, field type, and
so on.
GetProcedureParams takes two parameters: the name of the stored procedure, and an
already-existing TList object to fill:
SQLConnection1.GetProcedureParams('GetInterestRate', List1);
To convert the parameter descriptions that are added to the list into the more familiar
TParams object, call the global LoadParamListItems procedure. Because
GetProcedureParams dynamically allocates the individual records, your application
must free them when it is finished with the information. The global FreeProcParams
routine can do this for you.
Note GetProcedureParams is only available for TSQLConnection.
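For example, the following sketch retrieves the parameter descriptions, converts them into a TParams object, and then frees the dynamically allocated records:
var
  List: TList;
  Params: TParams;
begin
  List := TList.Create;
  Params := TParams.Create;
  try
    SQLConnection1.GetProcedureParams('GetInterestRate', List);
    LoadParamListItems(Params, List);  // build TParam objects from the records
    { ... use Params here ... }
  finally
    FreeProcParams(List);  // release the parameter description records
    List.Free;
    Params.Free;
  end;
end;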

Chapter 24

Understanding datasets
The fundamental unit for accessing data is the dataset family of objects. Your
application uses datasets for all database access. A dataset object represents a set of
records from a database organized into a logical table. These records may be the
records from a single database table, or they may represent the results of executing a
query or stored procedure.
All dataset objects that you use in your database applications descend from TDataSet,
and they inherit data fields, properties, events, and methods from this class. This
chapter describes the functionality of TDataSet that is inherited by the dataset objects
you use in your database applications. You need to understand this shared
functionality to use any dataset object.
TDataSet is a virtualized dataset, meaning that many of its properties and methods
are virtual or abstract. A virtual method is a function or procedure declaration where
the implementation of that method can be (and usually is) overridden in descendant
objects. An abstract method is a function or procedure declaration without an actual
implementation. The declaration is a prototype that describes the method (and its
parameters and return type, if any) that must be implemented in all descendant
dataset objects, but that might be implemented differently by each of them.
Because TDataSet contains abstract methods, you cannot use it directly in an
application without generating a runtime error. Instead, you either create instances
of the built-in TDataSet descendants and use them in your application, or you derive
your own dataset object from TDataSet or its descendants and write implementations
for all its abstract methods.
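Because every dataset descends from TDataSet, a routine that accepts a TDataSet parameter works with any descendant. For example, this sketch opens whatever dataset it is given, whether that is a TTable, TQuery, TSQLDataSet, or another descendant:
procedure ShowFirstValue(DataSet: TDataSet);
begin
  DataSet.Open;
  ShowMessage(DataSet.Fields[0].AsString);
end;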
TDataSet defines much that is common to all dataset objects. For example, TDataSet
defines the basic structure of all datasets: an array of TField components that
correspond to actual columns in one or more database tables, lookup fields provided
by your application, or calculated fields provided by your application. For
information about TField components, see Chapter 25, “Working with field
components.”

Understanding datasets 24-1



This chapter describes how to use the common database functionality introduced by
TDataSet. Bear in mind, however, that although TDataSet introduces the methods for
this functionality, not all TDataSet descendants implement them. In particular,
unidirectional datasets implement only a limited subset.

Using TDataSet descendants


TDataSet has several immediate descendants, each of which corresponds to a
different data access mechanism. You do not work directly with any of these
descendants. Rather, each descendant introduces the properties and methods for
using a particular data access mechanism. These properties and methods are then
exposed by descendant classes that are adapted to different types of server data. The
immediate descendants of TDataSet include
• TBDEDataSet, which uses the Borland Database Engine (BDE) to communicate
with the database server. The TBDEDataSet descendants you use are TTable,
TQuery, TStoredProc, and TNestedTable. The unique features of BDE-enabled
datasets are described in Chapter 26, “Using the Borland Database Engine.”
• TCustomADODataSet, which uses ActiveX Data Objects (ADO) to communicate
with an OLEDB data store. The TCustomADODataSet descendants you use are
TADODataSet, TADOTable, TADOQuery, and TADOStoredProc. The unique
features of ADO-based datasets are described in Chapter 27, “Working with ADO
components.”
• TCustomSQLDataSet, which uses dbExpress to communicate with a database
server. The TCustomSQLDataSet descendants you use are TSQLDataSet,
TSQLTable, TSQLQuery, and TSQLStoredProc. The unique features of dbExpress
datasets are described in Chapter 28, “Using unidirectional datasets.”
• TIBCustomDataSet, which communicates directly with an InterBase database
server. The TIBCustomDataSet descendants you use are TIBDataSet, TIBTable,
TIBQuery, and TIBStoredProc.
• TCustomClientDataSet, which represents the data from another dataset component
or the data from a dedicated file on disk. The TCustomClientDataSet descendants
you use are TClientDataSet, which can connect to an external (source) dataset, and
the client datasets that are specialized to a particular data access mechanism
(TBDEClientDataSet, TSimpleDataSet, and TIBClientDataSet), which use an internal
source dataset. The unique features of client datasets are described in Chapter 29,
“Using client datasets.”
Some pros and cons of the various data access mechanisms employed by these
TDataSet descendants are described in “Using databases” on page 19-1.

24-2 Developer’s Guide



In addition to the built-in datasets, you can create your own custom TDataSet
descendants — for example to supply data from a process other than a database
server, such as a spreadsheet. Writing custom datasets allows you the flexibility of
managing the data using any method you choose, while still letting you use the VCL
data controls to build your user interface. For more information about creating
custom components, see the Component Writer’s Guide, Chapter 1, “Overview of
component creation.”
Although each TDataSet descendant has its own unique properties and methods,
some of the properties and methods introduced by descendant classes are the same
as those introduced by other descendant classes that use another data access
mechanism. For example, there are similarities between the “table” components
(TTable, TADOTable, TSQLTable, and TIBTable). For information about the
commonalities introduced by TDataSet descendants, see “Types of datasets” on
page 24-24.

Determining dataset states


The state—or mode—of a dataset determines what can be done to its data. For
example, when a dataset is closed, its state is dsInactive, meaning that nothing can be
done to its data. At runtime, you can examine a dataset’s read-only State property to
determine its current state. The following table summarizes possible values for the
State property and what they mean:

Table 24.1 Values for the dataset State property


Value State Meaning
dsInactive Inactive DataSet closed. Its data is unavailable.
dsBrowse Browse DataSet open. Its data can be viewed, but not changed. This is the
default state of an open dataset.
dsEdit Edit DataSet open. The current row can be modified. (not supported
on unidirectional datasets)
dsInsert Insert DataSet open. A new row is inserted or appended. (not
supported on unidirectional datasets)
dsSetKey SetKey DataSet open. Enables setting of ranges and key values used for
ranges and GotoKey operations. (not supported by all datasets)
dsCalcFields CalcFields DataSet open. Indicates that an OnCalcFields event is under way.
Prevents changes to fields that are not calculated.
dsCurValue CurValue DataSet open. Indicates that the CurValue property of fields is
being fetched for an event handler that responds to errors in
applying cached updates.
dsNewValue NewValue DataSet open. Indicates that the NewValue property of fields is
being fetched for an event handler that responds to errors in
applying cached updates.
dsOldValue OldValue DataSet open. Indicates that the OldValue property of fields is
being fetched for an event handler that responds to errors in
applying cached updates.

Understanding datasets 24-3


Opening and closing datasets

Table 24.1 Values for the dataset State property (continued)


Value State Meaning
dsFilter Filter DataSet open. Indicates that a filter operation is under way. A
restricted set of data can be viewed, and no data can be changed.
(not supported on unidirectional datasets)
dsBlockRead Block Read DataSet open. Data-aware controls are not updated and events
are not triggered when the current record changes.
dsInternalCalc Internal Calc DataSet open. An OnCalcFields event is underway for calculated
values that are stored with the record. (client datasets only)
dsOpening Opening DataSet is in the process of opening but has not finished. This
state occurs when the dataset is opened for asynchronous
fetching.

Typically, an application checks the dataset state to determine when to perform
certain tasks. For example, you might check for the dsEdit or dsInsert state to ascertain
whether you need to post updates.
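For example, a guard along the following lines (a sketch, assuming a TTable named CustTable) posts any pending edit before the application proceeds; the check is a common idiom, not a VCL requirement:
if CustTable.State in [dsEdit, dsInsert] then
CustTable.Post; { commit the pending change before continuing }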
Note Whenever a dataset’s state changes, the OnStateChange event is called for any data
source components associated with the dataset. For more information about data
source components and OnStateChange, see “Responding to changes mediated
by the data source” on page 20-4.

Opening and closing datasets


To read or write data in a dataset, an application must first open it. You can open a
dataset in two ways:
• Set the Active property of the dataset to True, either at design time in the Object
Inspector, or in code at runtime:
CustTable.Active := True;
• Call the Open method for the dataset at runtime:
CustQuery.Open;
When you open the dataset, the dataset first receives a BeforeOpen event, then it opens
a cursor, populating itself with data, and finally, it receives an AfterOpen event.
The newly-opened dataset is in browse mode, which means your application can
read the data and navigate through it.
You can close a dataset in two ways:
• Set the Active property of the dataset to False, either at design time in the Object
Inspector, or in code at runtime:
CustQuery.Active := False;
• Call the Close method for the dataset at runtime:
CustTable.Close;


Just as the dataset receives BeforeOpen and AfterOpen events when you open it, it
receives BeforeClose and AfterClose events when you close it. You can use these events,
for example, to prompt the user to post pending changes or cancel them before
closing the dataset. The following code illustrates such a handler:
procedure TForm1.CustTableVerifyBeforeClose(DataSet: TDataSet);
begin
if (CustTable.State in [dsEdit, dsInsert]) then begin
case MessageDlg('Post changes before closing?', mtConfirmation, mbYesNoCancel, 0) of
mrYes: CustTable.Post; { save the changes }
mrNo: CustTable.Cancel; { abandon the changes}
mrCancel: Abort; { abort closing the dataset }
end;
end;
end;
Note You may need to close a dataset when you want to change certain of its properties,
such as TableName on a TTable component. When you reopen the dataset, the new
property value takes effect.
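For example, the following sketch closes a table, points it at a hypothetical ORDERS table (the name is illustrative only), and reopens it:
CustTable.Close; { Active becomes False }
CustTable.TableName := 'ORDERS'; { hypothetical table name }
CustTable.Open; { reopen; the new property value takes effect }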

Navigating datasets
Each active dataset has a cursor, or pointer, to the current row in the dataset. The
current row in a dataset is the one whose field values currently show in single-field,
data-aware controls on a form, such as TDBEdit, TDBLabel, and TDBMemo. If the
dataset supports editing, the current record contains the values that can be
manipulated by edit, insert, and delete methods.
You can change the current row by moving the cursor to point at a different row. The
following table lists methods you can use in application code to move to different
records:

Table 24.2 Navigational methods of datasets


Method Moves the cursor to
First The first row in a dataset.
Last The last row in a dataset. (not available for unidirectional datasets)
Next The next row in a dataset.
Prior The previous row in a dataset. (not available for unidirectional datasets)
MoveBy A specified number of rows forward or back in a dataset.

The data-aware, visual component TDBNavigator encapsulates these methods as
buttons that users can click to move among records at runtime. For information
about the navigator component, see “Navigating and manipulating records” on
page 20-29.


Whenever you change the current record using one of these methods (or by other
methods that navigate based on a search criterion), the dataset receives two events:
BeforeScroll (before leaving the current record) and AfterScroll (after arriving at the
new record). You can use these events to update your user interface (for example, to
update a status bar that indicates information about the current record).
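For example, an AfterScroll handler along these lines could update a status bar. This is a sketch: StatusBar1 is an assumed TStatusBar on the form, and RecNo and RecordCount are not meaningful for every dataset type.
procedure TForm1.CustTableAfterScroll(DataSet: TDataSet);
begin
StatusBar1.SimpleText := Format('Record %d of %d', [DataSet.RecNo, DataSet.RecordCount]);
end;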
TDataSet also defines two boolean properties that provide useful information when
iterating through the records in a dataset.

Table 24.3 Navigational properties of datasets


Property Description
Bof (Beginning-of-file) True: the cursor is at the first row in the dataset.
False: the cursor is not known to be at the first row in the dataset.
Eof (End-of-file) True: the cursor is at the last row in the dataset.
False: the cursor is not known to be at the last row in the dataset.

Using the First and Last methods


The First method moves the cursor to the first row in a dataset and sets the Bof
property to True. If the cursor is already at the first row in the dataset, First does
nothing.
For example, the following code moves to the first record in CustTable:
CustTable.First;
The Last method moves the cursor to the last row in a dataset and sets the Eof
property to True. If the cursor is already at the last row in the dataset, Last does
nothing.
The following code moves to the last record in CustTable:
CustTable.Last;
Note The Last method raises an exception in unidirectional datasets.
Tip While there may be programmatic reasons to move to the first or last rows in a
dataset without user intervention, you can also enable your users to navigate from
record to record using the TDBNavigator component. The navigator component
contains buttons that, when active and visible, enable a user to move to the first and
last rows of an active dataset. The OnClick events for these buttons call the First and
Last methods of the dataset. For more information about making effective use of the
navigator component, see “Navigating and manipulating records” on page 20-29.


Using the Next and Prior methods


The Next method moves the cursor forward one row in the dataset and sets the Bof
property to False if the dataset is not empty. If the cursor is already at the last row in
the dataset when you call Next, nothing happens.
For example, the following code moves to the next record in CustTable:
CustTable.Next;
The Prior method moves the cursor back one row in the dataset, and sets Eof to False if
the dataset is not empty. If the cursor is already at the first row in the dataset when
you call Prior, Prior does nothing.
For example, the following code moves to the previous record in CustTable:
CustTable.Prior;
Note The Prior method raises an exception in unidirectional datasets.

Using the MoveBy method


MoveBy lets you specify how many rows forward or back to move the cursor in a
dataset. Movement is relative to the current record at the time that MoveBy is called.
MoveBy also sets the Bof and Eof properties for the dataset as appropriate.
This function takes an integer parameter, the number of records to move. Positive
integers indicate a forward move and negative integers indicate a backward move.
Note MoveBy raises an exception in unidirectional datasets if you use a negative argument.
MoveBy returns the number of rows it moves. If you attempt to move past the
beginning or end of the dataset, the number of rows returned by MoveBy differs from
the number of rows you requested to move. This is because MoveBy stops when it
reaches the first or last record in the dataset.
The following code moves two records backward in CustTable:
CustTable.MoveBy(-2);
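Because MoveBy returns the number of rows actually moved, you can detect when the cursor stopped short. The following sketch, using the same CustTable, is one way to check:
var
Moved: Integer;
begin
Moved := CustTable.MoveBy(10);
if Moved < 10 then { fewer rows than requested: the cursor hit the last record }
ShowMessage('Reached the last record');
end;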
Note If your application uses MoveBy in a multi-user database environment, keep in mind
that datasets are fluid. A record that was five records back a moment ago may now
be four, six, or even an unknown number of records back if several users are
simultaneously accessing the database and changing its data.


Using the Eof and Bof properties


Two read-only, runtime properties, Eof (End-of-file) and Bof (Beginning-of-file), are
useful when you want to iterate through all records in a dataset.

Eof
When Eof is True, it indicates that the cursor is unequivocally at the last row in a
dataset. Eof is set to True when an application
• Opens an empty dataset.
• Calls a dataset’s Last method.
• Calls a dataset’s Next method, and the method fails (because the cursor is
currently at the last row in the dataset).
• Calls SetRange on an empty range or dataset.
Eof is set to False in all other cases; you should assume Eof is False unless one of the
conditions above is met and you test the property directly.
Eof is commonly tested in a loop condition to control iterative processing of all
records in a dataset. If you open a dataset containing records (or you call First), Eof is
False. To iterate through the dataset a record at a time, create a loop that steps
through each record by calling Next, and terminates when Eof is True. Eof remains
False until you call Next when the cursor is already on the last record.
The following code illustrates one way you might code a record-processing loop for a
dataset called CustTable:
CustTable.DisableControls;
try
CustTable.First; { Go to first record, which sets Eof False }
while not CustTable.Eof do { Cycle until Eof is True }
begin
{ Process each record here }
...
CustTable.Next; { Eof False on success; Eof True when Next fails on last record }
end;
finally
CustTable.EnableControls;
end;
Tip This example also shows how to disable and enable data-aware visual controls tied to
a dataset. If you disable visual controls during dataset iteration, it speeds processing
because your application does not need to update the contents of the controls as the
current record changes. After iteration is complete, controls should be enabled again
to update them with values for the new current row. Note that enabling of the visual
controls takes place in the finally clause of a try...finally statement. This guarantees
that even if an exception terminates loop processing prematurely, controls are not left
disabled.


Bof
When Bof is True, it indicates that the cursor is unequivocally at the first row in a
dataset. Bof is set to True when an application
• Opens a dataset.
• Calls a dataset’s First method.
• Calls a dataset’s Prior method, and the method fails (because the cursor is
currently at the first row in the dataset).
• Calls SetRange on an empty range or dataset.
Bof is set to False in all other cases; you should assume Bof is False unless one of the
conditions above is met and you test the property directly.
Like Eof, Bof can be used in a loop condition to control iterative processing of records in a
dataset. The following code illustrates one way you might code a record-processing
loop for a dataset called CustTable:
CustTable.DisableControls; { Speed up processing; prevent screen flicker }
try
CustTable.Last; { Go to last record, which sets Bof False }
while not CustTable.Bof do { Cycle until Bof is True }
begin
{ Process each record here }
...
CustTable.Prior; { Bof False on success; Bof True when Prior fails on first record }
end;
finally
CustTable.EnableControls; { Display new current row in controls }
end;

Marking and returning to records


In addition to moving from record to record in a dataset (or moving from one record
to another by a specific number of records), it is often also useful to mark a particular
location in a dataset so that you can return to it quickly when desired. TDataSet
introduces a bookmarking feature that consists of a Bookmark property and five
bookmark methods.
TDataSet implements virtual bookmark methods. While these methods ensure that
any dataset object derived from TDataSet returns a value if a bookmark method is
called, the return values are merely defaults that do not keep track of the current
location. TDataSet descendants vary in the level of support they provide for
bookmarks. None of the dbExpress datasets add any support for bookmarks. ADO
datasets can support bookmarks, depending on the underlying database tables. BDE
datasets, InterBase express datasets, and client datasets always support bookmarks.

The Bookmark property


The Bookmark property indicates which bookmark among any number of bookmarks
in your application is current. Bookmark is a string that identifies the current
bookmark. Each time you add another bookmark, it becomes the current bookmark.


The GetBookmark method


To create a bookmark, you must declare a variable of type TBookmark in your
application, then call GetBookmark to allocate storage for the variable and set its value
to a particular location in a dataset. The TBookmark type is a Pointer.

The GotoBookmark and BookmarkValid methods


When passed a bookmark, GotoBookmark moves the cursor for the dataset to the
location specified in the bookmark. Before calling GotoBookmark, you can call
BookmarkValid to determine if the bookmark points to a record. BookmarkValid returns
True if a specified bookmark points to a record.
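For example, the following sketch returns to a previously saved position only if the bookmark still points at a record (SavedBookmark is assumed to have been assigned earlier with GetBookmark):
if CustTable.BookmarkValid(SavedBookmark) then
CustTable.GotoBookmark(SavedBookmark);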

The CompareBookmarks method


You can also call CompareBookmarks to see if a bookmark you want to move to is
different from another (or the current) bookmark. If the two bookmarks refer to the
same record (or if both are nil), CompareBookmarks returns 0.

The FreeBookmark method


FreeBookmark frees the memory allocated for a specified bookmark when you no
longer need it. You should also call FreeBookmark before reusing an existing
bookmark.

A bookmarking example
The following code illustrates one use of bookmarking:
procedure DoSomething(const Tbl: TTable);
var
Bookmark: TBookmark;
begin
Bookmark := Tbl.GetBookmark; { Allocate memory and assign a value }
Tbl.DisableControls; { Turn off display of records in data controls }
try
Tbl.First; { Go to first record in table }
while not Tbl.Eof do {Iterate through each record in table }
begin
{ Do your processing here }
...
Tbl.Next;
end;
finally
Tbl.GotoBookmark(Bookmark);
Tbl.EnableControls; { Turn on display of records in data controls, if necessary }
Tbl.FreeBookmark(Bookmark); {Deallocate memory for the bookmark }
end;
end;
Before iterating through records, controls are disabled. Should an error occur during
iteration through records, the finally clause ensures that controls are always enabled
and that the bookmark is always freed even if the loop terminates prematurely.


Searching datasets
If a dataset is not unidirectional, you can search against it using the Locate and Lookup
methods. These methods enable you to search on any type of column in any dataset.
Note Some TDataSet descendants introduce an additional family of methods for searching
based on an index. For information about these additional methods, see “Using
Indexes to search for records” on page 24-28.

Using Locate
Locate moves the cursor to the first row matching a specified set of search criteria. In
its simplest form, you pass Locate the name of a column to search, a field value to
match, and an options flag specifying whether the search is case-insensitive or if it
can use partial-key matching. (Partial-key matching is when the criterion string need
only be a prefix of the field value.) For example, the following code moves the cursor
to the first row in the CustTable where the value in the Company column is
“Professional Divers, Ltd.”:
var
LocateSuccess: Boolean;
SearchOptions: TLocateOptions;
begin
SearchOptions := [loPartialKey];
LocateSuccess := CustTable.Locate('Company', 'Professional Divers, Ltd.', SearchOptions);
end;
If Locate finds a match, the first record containing the match becomes the current
record. Locate returns True if it finds a matching record, False if it does not. If a search
fails, the current record does not change.
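Because Locate reports success or failure, you can branch on its result. The following sketch (Edit1 is an assumed edit box supplying the search text) notifies the user when no match exists:
if not CustTable.Locate('Company', Edit1.Text, [loCaseInsensitive]) then
ShowMessage('No matching company found');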
The real power of Locate comes into play when you want to search on multiple
columns and specify multiple values to search for. Search values are Variants, which
means you can specify different data types in your search criteria. To specify
multiple columns in a search string, separate individual items in the string with
semicolons.
Because search values are Variants, if you pass multiple values, you must either pass
a Variant array as an argument (for example, the return values from the Lookup
method), or you must construct the Variant array in code using the VarArrayOf
function. The following code illustrates a search on multiple columns using multiple
search values and partial-key matching:
with CustTable do
Locate('Company;Contact;Phone', VarArrayOf(['Sight Diver','P']), loPartialKey);
Locate uses the fastest possible method to locate matching records. If the columns to
search are indexed and the index is compatible with the search options you specify,
Locate uses the index.


Using Lookup
Lookup searches for the first row that matches specified search criteria. If it finds a
matching row, it forces the recalculation of any calculated fields and lookup fields
associated with the dataset, then returns one or more fields from the matching row.
Lookup does not move the cursor to the matching row; it only returns values from it.
In its simplest form, you pass Lookup the name of field to search, the field value to
match, and the field or fields to return. For example, the following code looks for the
first record in the CustTable where the value of the Company field is “Professional
Divers, Ltd.”, and returns the company name, a contact person, and a phone number
for the company:
var
LookupResults: Variant;
begin
LookupResults := CustTable.Lookup('Company', 'Professional Divers, Ltd.',
'Company;Contact; Phone');
end;
Lookup returns values for the specified fields from the first matching record it finds.
Values are returned as Variants. If more than one return value is requested, Lookup
returns a Variant array. If there are no matching records, Lookup returns a Null
Variant. For more information about Variant arrays, see the online Help.
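Because a failed Lookup returns a Null Variant, test the result before using it. The following sketch (Edit1 is an assumed edit box) is one way to handle both outcomes:
var
V: Variant;
begin
V := CustTable.Lookup('Company', Edit1.Text, 'Contact;Phone');
if not VarIsNull(V) then { two result fields requested, so V is a Variant array }
ShowMessage('Contact: ' + VarToStr(V[0]) + ', Phone: ' + VarToStr(V[1]))
else
ShowMessage('Company not found');
end;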
The real power of Lookup comes into play when you want to search on multiple
columns and specify multiple values to search for. To specify strings containing
multiple columns or result fields, separate individual fields in the string items with
semicolons.
Because search values are Variants, if you pass multiple values, you must either pass
a Variant array as an argument (for example, the return values from the Lookup
method), or you must construct the Variant array in code using the VarArrayOf
function. The following code illustrates a lookup search on multiple columns:
var
LookupResults: Variant;
begin
with CustTable do
LookupResults := Lookup('Company; City', VarArrayOf(['Sight Diver', 'Christiansted']),
'Company; Addr1; Addr2; State; Zip');
end;
Like Locate, Lookup uses the fastest possible method to locate matching records. If the
columns to search are indexed, Lookup uses the index.


Displaying and editing a subset of data using filters


An application is frequently interested in only a subset of records from a dataset. For
example, you may be interested in retrieving or viewing only those records for
companies based in California in your customer database, or you may want to find a
record that contains a particular set of field values. In each case, you can use filters to
restrict an application’s access to a subset of all records in the dataset.
With unidirectional datasets, you can only limit the records in the dataset by using a
query that restricts the records in the dataset. With other TDataSet descendants,
however, you can define a subset of the data that has already been fetched. To restrict
an application’s access to a subset of all records in the dataset, you can use filters.
A filter specifies conditions a record must meet to be displayed. Filter conditions can
be stipulated in a dataset’s Filter property or coded into its OnFilterRecord event
handler. Filter conditions are based on the values in any specified number of fields in
a dataset, regardless of whether those fields are indexed. For example, to view only
those records for companies based in California, a simple filter might require that
records contain a value in the State field of “CA”.
Note Filters are applied to every record retrieved in a dataset. When you want to filter
large volumes of data, it may be more efficient to use a query to restrict record
retrieval, or to set a range on an indexed table rather than using filters.

Enabling and disabling filtering


Enabling filters on a dataset is a three step process:
1 Create a filter.
2 Set filter options for string-based filter tests, if necessary.
3 Set the Filtered property to True.
When filtering is enabled, only those records that meet the filter criteria are available
to an application. Filtering is always a temporary condition. You can turn off filtering
by setting the Filtered property to False.
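Put together, the three steps look like the following sketch (assuming Dataset1 has a string State field):
Dataset1.Filter := 'State = ' + QuotedStr('CA'); { 1: create a filter }
Dataset1.FilterOptions := [foCaseInsensitive]; { 2: set options for string-based tests }
Dataset1.Filtered := True; { 3: enable filtering }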

Creating filters
There are two ways to create a filter for a dataset:
• Specify simple filter conditions in the Filter property. Filter is especially useful for
creating and applying filters at runtime.
• Write an OnFilterRecord event handler for simple or complex filter conditions.
With OnFilterRecord, you specify filter conditions at design time. Unlike the Filter
property, which is restricted to a single string containing filter logic, an
OnFilterRecord event can take advantage of branching and looping logic to create
complex, multi-level filter conditions.


The main advantage to creating filters using the Filter property is that your
application can create, change, and apply filters dynamically (for example, in
response to user input). Its main disadvantages are that filter conditions must be
expressible in a single text string, cannot make use of branching and looping
constructs, and cannot test or compare its values against values not already in the
dataset.
The strengths of the OnFilterRecord event are that a filter can be complex and
variable, can be based on multiple lines of code that use branching and looping
constructs, and can test dataset values against values outside the dataset, such as the
text in an edit box. The main weakness of using OnFilterRecord is that you set the
filter at design time and it cannot be modified in response to user input. (You can,
however, create several filter handlers and switch among them in response to general
application conditions.)
The following sections describe how to create filters using the Filter property and the
OnFilterRecord event handler.

Setting the Filter property


To create a filter using the Filter property, set the value of the property to a string that
contains the filter’s test condition. For example, the following statement creates a
filter that tests a dataset’s State field to see if it contains a value for the state of
California:
Dataset1.Filter := 'State = ' + QuotedStr('CA');
You can also supply a value for Filter based on text supplied by the user. For
example, the following statement assigns the text from an edit box to Filter:
Dataset1.Filter := Edit1.Text;
You can, of course, create a string based on both hard-coded text and user-supplied
data:
Dataset1.Filter := 'State = ' + QuotedStr(Edit1.Text);
Blank field values do not appear unless they are explicitly included in the filter:
Dataset1.Filter := 'State <> ''CA'' or State = BLANK';
Note After you specify a value for Filter, to apply the filter to the dataset, set the Filtered
property to True.
Filters can compare field values to literals and to constants using the following
comparison and logical operators:

Table 24.4 Comparison and logical operators that can appear in a filter
Operator Meaning
< Less than
> Greater than
>= Greater than or equal to
<= Less than or equal to
= Equal to

<> Not equal to
AND Tests that two statements are both True
NOT Tests that the following statement is not True
OR Tests that at least one of two statements is True
+ Adds numbers, concatenates strings, adds numbers to date/time values (only
available for some drivers)
- Subtracts numbers, subtracts dates, or subtracts a number from a date (only available
for some drivers)
* Multiplies two numbers (only available for some drivers)
/ Divides two numbers (only available for some drivers)
* wildcard for partial comparisons (FilterOptions must include foPartialCompare)

By using combinations of these operators, you can create fairly sophisticated filters.
For example, the following statement checks to make sure that two test conditions
are met before accepting a record for display:
(CustNo > 1400) AND (CustNo < 1500)
Note When filtering is on, user edits to a record may mean that the record no longer meets
a filter’s test conditions. The next time the record is retrieved from the dataset, it may
therefore “disappear.” If that happens, the next record that passes the filter condition
becomes the current record.

Writing an OnFilterRecord event handler


You can write code to filter records using the OnFilterRecord events generated by the
dataset for each record it retrieves. This event handler implements a test that
determines if a record should be included in those that are visible to the application.
To indicate whether a record passes the filter condition, your OnFilterRecord handler
sets its Accept parameter to True to include a record, or False to exclude it. For
example, the following filter displays only those records with the State field set to
“CA”:
procedure TForm1.Table1FilterRecord(DataSet: TDataSet; var Accept: Boolean);
begin
Accept := DataSet.FieldByName('State').AsString = 'CA';
end;
When filtering is enabled, an OnFilterRecord event is generated for each record
retrieved. The event handler tests each record, and only those that meet the filter’s
conditions are displayed. Because the OnFilterRecord event is generated for every
record in a dataset, you should keep the event handler as tightly coded as possible to
avoid adversely affecting performance.

Understanding datasets 24-15



Switching filter event handlers at runtime


You can code any number of OnFilterRecord event handlers and switch among them
at runtime. For example, the following statements switch to an OnFilterRecord event
handler called NewYorkFilter:
DataSet1.OnFilterRecord := NewYorkFilter;
Refresh;

Setting filter options


The FilterOptions property lets you specify whether a filter that compares string-
based fields accepts records based on partial comparisons and whether string
comparisons are case-sensitive. FilterOptions is a set property that can be an empty set
(the default), or that can contain either or both of the following values:

Table 24.5 FilterOptions values


Value Meaning
foCaseInsensitive Ignore case when comparing strings.
foNoPartialCompare Disable partial string matching; that is, don’t match strings that end with
an asterisk (*).

For example, the following statements set up a filter that ignores case when
comparing values in the State field:
FilterOptions := [foCaseInsensitive];
Filter := 'State = ' + QuotedStr('CA');

Navigating records in a filtered dataset


There are four dataset methods that navigate among records in a filtered dataset. The
following table lists these methods and describes what they do:

Table 24.6 Filtered dataset navigational methods


Method Purpose
FindFirst Move to the first record that matches the current filter criteria. The search for the first
matching record always begins at the first record in the unfiltered dataset.
FindLast Move to the last record that matches the current filter criteria.
FindNext Moves from the current record in the filtered dataset to the next one.
FindPrior Move from the current record in the filtered dataset to the previous one.

For example, the following statement finds the first filtered record in a dataset:
DataSet1.FindFirst;

24-16 Developer’s Guide



Provided that you set the Filter property or create an OnFilterRecord event handler for
your application, these methods position the cursor on the specified record
regardless of whether filtering is currently enabled. If you call these methods when
filtering is not enabled, then they
• Temporarily enable filtering.
• Position the cursor on a matching record if one is found.
• Disable filtering.
Note If filtering is disabled and you do not set the Filter property or create an
OnFilterRecord event handler, these methods do the same thing as First, Last, Next,
and Prior.
All navigational filter methods position the cursor on a matching record (if one is
found), make that record the current one, and return True. If a matching record is not
found, the cursor position is unchanged, and these methods return False. You can
check the status of the Found property to wrap these calls, and only take action when
Found is True. For example, if the cursor is already on the last matching record in the
dataset and you call FindNext, the method returns False, and the current record is
unchanged.
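Taken together, these methods make it easy to visit every record that passes the current filter. The following sketch assumes a dataset named CustTable and a hypothetical ProcessCustomer routine (neither appears earlier in this chapter):

```pascal
{ Visit every record that passes the current filter criteria. }
if CustTable.FindFirst then
  repeat
    ProcessCustomer(CustTable);  // hypothetical routine acting on the current record
  until not CustTable.FindNext;  // FindNext returns False after the last match
```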

Modifying data
You can use the following dataset methods to insert, update, and delete data if the
read-only CanModify property is True. CanModify is True unless the dataset is
unidirectional, the database underlying the dataset does not permit read and write
privileges, or some other factor intervenes. (Intervening factors include the ReadOnly
property on some datasets or the RequestLive property on TQuery components.)

Table 24.7 Dataset methods for inserting, updating, and deleting data
Method Description
Edit Puts the dataset into dsEdit state if it is not already in dsEdit or dsInsert states.
Append Posts any pending data, moves current record to the end of the dataset, and puts the
dataset in dsInsert state.
Insert Posts any pending data, and puts the dataset in dsInsert state.
Post Attempts to post the new or altered record to the database. If successful, the dataset
is put in dsBrowse state; if unsuccessful, the dataset remains in its current state.
Cancel Cancels the current operation and puts the dataset in dsBrowse state.
Delete Deletes the current record and puts the dataset in dsBrowse state.


Editing records
A dataset must be in dsEdit mode before an application can modify records. In your
code you can use the Edit method to put a dataset into dsEdit mode if the read-only
CanModify property for the dataset is True.
When a dataset transitions to dsEdit mode, it first receives a BeforeEdit event. After the
transition to edit mode is successfully completed, the dataset receives an AfterEdit
event. Typically, these events are used for updating the user interface to indicate the
current state of the dataset. If the dataset can’t be put into edit mode for some reason,
an OnEditError event occurs, where you can inform the user of the problem or try to
correct the situation that prevented the dataset from entering edit mode.
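For example, an OnEditError handler can offer the user a chance to retry the operation. The following sketch assumes a dataset named CustTable; the handler signature and the daRetry and daAbort actions are standard VCL:

```pascal
procedure TForm1.CustTableEditError(DataSet: TDataSet; E: EDatabaseError;
  var Action: TDataAction);
begin
  { Offer a retry, for example when another user briefly holds a record lock. }
  if MessageDlg('Cannot edit record: ' + E.Message + #13 + 'Try again?',
      mtError, [mbYes, mbNo], 0) = mrYes then
    Action := daRetry
  else
    Action := daAbort;  // silently cancel the attempted edit
end;
```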
On forms in your application, some data-aware controls can automatically put a
dataset into dsEdit state if
• The control’s ReadOnly property is False (the default),
• The AutoEdit property of the data source for the control is True, and
• CanModify is True for the dataset.
Note Even if a dataset is in dsEdit state, editing records may not succeed for SQL-based
databases if your application’s user does not have proper SQL access privileges.
Once a dataset is in dsEdit mode, a user can modify the field values for the current
record that appears in any data-aware controls on a form. Data-aware controls for
which editing is enabled automatically call Post when a user executes any action that
changes the current record (such as moving to a different record in a grid).
If you have a navigator component on your form, users can cancel edits by clicking
the navigator’s Cancel button. Canceling edits returns a dataset to dsBrowse state.
In code, you must write or cancel edits by calling the appropriate methods. You write
changes by calling Post. You cancel them by calling Cancel. In code, Edit and Post are
often used together. For example,
with CustTable do
begin
Edit;
FieldValues['CustNo'] := 1234;
Post;
end;
In the previous example, the first line of code places the dataset in dsEdit mode. The
next line of code assigns the number 1234 to the CustNo field of the current record.
Finally, the last line writes, or posts, the modified record. If you are not caching
updates, posting writes the change back to the database. If you are caching updates,
the change is written to a temporary buffer, where it stays until the dataset’s
ApplyUpdates method is called.


Adding new records


A dataset must be in dsInsert mode before an application can add new records. In
code, you can use the Insert or Append methods to put a dataset into dsInsert mode if
the read-only CanModify property for the dataset is True.
When a dataset transitions to dsInsert mode, it first receives a BeforeInsert event. After
the transition to insert mode is successfully completed, the dataset receives first an
OnNewRecord event and then an AfterInsert event. You can use these events, for
example, to provide initial values to newly inserted records:
procedure TForm1.OrdersTableNewRecord(DataSet: TDataSet);
begin
DataSet.FieldByName('OrderDate').AsDateTime := Date;
end;
On forms in your application, the data-aware grid and navigator controls can put a
dataset into dsInsert state if
• The control’s ReadOnly property is False (the default), and
• CanModify is True for the dataset.
Note Even if a dataset is in dsInsert state, adding records may not succeed for SQL-based
databases if your application’s user does not have proper SQL access privileges.
Once a dataset is in dsInsert mode, a user or application can enter values into the
fields associated with the new record. Except for the grid and navigational controls,
there is no visible difference to a user between Insert and Append. On a call to Insert,
an empty row appears in a grid above what was the current record. On a call to
Append, the grid is scrolled to the last record in the dataset, an empty row appears at
the bottom of the grid, and the Next and Last buttons are dimmed on any navigator
component associated with the dataset.
Data-aware controls for which inserting is enabled automatically call Post when a
user executes any action that changes which record is current (such as moving to a
different record in a grid). Otherwise you must call Post in your code.
Post writes the new record to the database, or, if you are caching updates, Post writes
the record to an in-memory cache. To write cached inserts and appends to the
database, call the dataset’s ApplyUpdates method.
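In code, adding a record typically pairs Insert (or Append) with field assignments and a call to Post. A minimal sketch, assuming a dataset named OrdersTable with CustNo and OrderDate fields:

```pascal
OrdersTable.Insert;                                       // enter dsInsert state
OrdersTable.FieldByName('CustNo').AsInteger := 1234;
OrdersTable.FieldByName('OrderDate').AsDateTime := Date;
OrdersTable.Post;                                         // write (or cache) the new record
```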

Inserting records
Insert opens a new, empty record before the current record, and makes the empty
record the current record so that field values for the record can be entered either by a
user or by your application code.
When an application calls Post (or ApplyUpdates when using cached updates), a
newly inserted record is written to a database in one of three ways:
• For indexed Paradox and dBASE tables, the record is inserted into the dataset in a
position based on its index.
• For unindexed Paradox and dBASE tables, the record is inserted into the dataset at
its current position.


• For SQL databases, the physical location of the insertion is implementation-specific. If the table is indexed, the index is updated with the new record information.

Appending records
Append opens a new, empty record at the end of the dataset, and makes the empty
record the current one so that field values for the record can be entered either by a
user or by your application code.
When an application calls Post (or ApplyUpdates when using cached updates), a
newly appended record is written to a database in one of three ways:
• For indexed Paradox and dBASE tables, the record is inserted into the dataset in a
position based on its index.
• For unindexed Paradox and dBASE tables, the record is added to the end of the
dataset.
• For SQL databases, the physical location of the append is implementation-specific.
If the table is indexed, the index is updated with the new record information.

Deleting records
Use the Delete method to delete the current record in an active dataset. When the
Delete method is called,
• The dataset receives a BeforeDelete event.
• The dataset attempts to delete the current record.
• The dataset returns to the dsBrowse state.
• The dataset receives an AfterDelete event.
If you want to prevent the deletion in the BeforeDelete event handler, you can call the
global Abort procedure:
procedure TForm1.TableBeforeDelete(Dataset: TDataSet);
begin
if MessageDlg('Delete This Record?', mtConfirmation, mbYesNoCancel, 0) <> mrYes then
Abort;
end;
If Delete fails, it generates an OnDeleteError event. If the OnDeleteError event handler
can’t correct the problem, the dataset remains in dsEdit state. If Delete succeeds, the
dataset reverts to the dsBrowse state and the record that followed the deleted record
becomes the current record.
If you are caching updates, the deleted record is not removed from the underlying
database table until you call ApplyUpdates.
If you provide a navigator component on your forms, users can delete the current
record by clicking the navigator’s Delete button. In code, you must call Delete
explicitly to remove the current record.


Posting data
After you finish editing a record, you must call the Post method to write out your
changes. The Post method behaves differently, depending on the dataset’s state and
on whether you are caching updates.
• If you are not caching updates, and the dataset is in the dsEdit or dsInsert state, Post
writes the current record to the database and returns the dataset to the dsBrowse
state.
• If you are caching updates, and the dataset is in the dsEdit or dsInsert state, Post
writes the current record to an internal cache and returns the dataset to the
dsBrowse state. The edits are not written to the database until you call
ApplyUpdates.
• If the dataset is in the dsSetKey state, Post returns the dataset to the dsBrowse state.
Regardless of the initial state of the dataset, Post generates BeforePost and AfterPost
events, before and after writing the current changes. You can use these events to
update the user interface, or prevent the dataset from posting changes by calling the
Abort procedure. If the call to Post fails, the dataset receives an OnPostError event,
where you can inform the user of the problem or attempt to correct it.
Posting can be done explicitly, or implicitly as part of another procedure. When an
application moves off the current record, Post is called implicitly. Calls to the First,
Next, Prior, and Last methods perform a Post if the table is in dsEdit or dsInsert modes.
The Append and Insert methods also implicitly post any pending data.
Warning The Close method does not call Post implicitly. Use the BeforeClose event to post any
pending edits explicitly.
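A BeforeClose handler that posts pending edits might look like the following sketch (CustTable is an assumed dataset name):

```pascal
procedure TForm1.CustTableBeforeClose(DataSet: TDataSet);
begin
  { Close does not post automatically, so post any pending edit or insert here. }
  if DataSet.State in [dsEdit, dsInsert] then
    DataSet.Post;
end;
```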

Canceling changes
An application can undo changes made to the current record at any time, if it has not
yet directly or indirectly called Post. For example, if a dataset is in dsEdit mode, and a
user has changed the data in one or more fields, the application can return the record
back to its original values by calling the Cancel method for the dataset. A call to Cancel
always returns a dataset to dsBrowse state.
If the dataset was in dsEdit or dsInsert mode when your application called Cancel, it
receives BeforeCancel and AfterCancel events before and after the current record is
restored to its original values.
On forms, you can allow users to cancel edit, insert, or append operations by
including the Cancel button on a navigator component associated with the dataset, or
you can provide code for your own Cancel button on the form.


Modifying entire records


On forms, all data-aware controls except for grids and the navigator provide access
to a single field in a record.
In code, however, you can use the following methods that work with entire record
structures provided that the structure of the database tables underlying the dataset is
stable and does not change. The following table summarizes the methods available
for working with entire records rather than individual fields in those records:

Table 24.8 Methods that work with entire records


Method Description
AppendRecord([array of values]) Appends a record with the specified column values at the end
of a table; analogous to Append. Performs an implicit Post.
InsertRecord([array of values]) Inserts the specified values as a record before the current
cursor position of a table; analogous to Insert. Performs an
implicit Post.
SetFields([array of values]) Sets the values of the corresponding fields; analogous to
assigning values to TFields. The application must perform an
explicit Post.

These methods take an array of values as an argument, where each value corresponds
to a column in the underlying dataset. The values can be literals, variables, or NULL.
If the number of values in an argument is less than the number of columns in a
dataset, then the remaining values are assumed to be NULL.
For unindexed datasets, AppendRecord adds a record to the end of the dataset and
InsertRecord inserts a record after the current cursor position. For indexed datasets,
both methods place the record in the correct position in the table, based on the index.
In both cases, the methods move the cursor to the record’s position.
SetFields assigns the values specified in the array of parameters to fields in the
dataset. To use SetFields, an application must first call Edit to put the dataset in dsEdit
mode. To apply the changes to the current record, it must perform a Post.
If you use SetFields to modify some, but not all fields in an existing record, you can
pass NULL values for fields you do not want to change. If you do not supply enough
values for all fields in a record, SetFields assigns NULL values to them. NULL values
overwrite any existing values already in those fields.
For example, suppose a database has a COUNTRY table with columns for Name,
Capital, Continent, Area, and Population. If a TTable component called CountryTable
were linked to the COUNTRY table, the following statement would insert a record
into the COUNTRY table:
CountryTable.InsertRecord(['Japan', 'Tokyo', 'Asia']);
This statement does not specify values for Area and Population, so NULL values are
inserted for them. The table is indexed on Name, so the statement would insert the
record based on the alphabetic collation of “Japan”.


To update the record, an application could use the following code:


with CountryTable do
begin
if Locate('Name', 'Japan', [loCaseInsensitive]) then
begin
Edit;
SetFields([nil, nil, nil, 344567, 164700000]);
Post;
end;
end;
This code assigns values to the Area and Population fields and then posts them to the
database. The three NULL pointers act as place holders for the first three columns to
preserve their current contents.

Calculating fields
Using the Fields editor, you can define calculated fields for your datasets. When a
dataset contains calculated fields, you provide the code to calculate those field’s
values in an OnCalcFields event handler. For details on how to define calculated fields
using the Fields editor, see “Defining a calculated field” on page 25-7.
The AutoCalcFields property determines when OnCalcFields is called. If AutoCalcFields
is True, OnCalcFields is called when
• A dataset is opened.
• The dataset enters edit mode.
• A record is retrieved from the database.
• Focus moves from one visual component to another, or from one column to
another in a data-aware grid control, and the current record has been modified.
If AutoCalcFields is False, then OnCalcFields is not called when individual fields within
a record are edited (the fourth condition above).
Caution OnCalcFields is called frequently, so the code you write for it should be kept short.
Also, if AutoCalcFields is True, OnCalcFields should not perform any actions that
modify the dataset (or a linked dataset if it is part of a master-detail relationship),
because this leads to recursion. For example, if OnCalcFields performs a Post, and
AutoCalcFields is True, then OnCalcFields is called again, causing another Post, and so
on.
When OnCalcFields executes, a dataset enters dsCalcFields mode. This state prevents
modifications or additions to the records except for the calculated fields the handler
is designed to modify. The reason for preventing other modifications is that
OnCalcFields uses the values in other fields to derive calculated field values. Changes
to those other fields might otherwise invalidate the values assigned to calculated
fields. After OnCalcFields is completed, the dataset returns to dsBrowse state.


Types of datasets
“Using TDataSet descendants” on page 24-2 classifies TDataSet descendants by the
method they use to access their data. Another useful way to classify TDataSet
descendants is to consider the type of server data they represent. Viewed this way,
there are three basic classes of datasets:
• Table type datasets: Table type datasets represent a single table from the database
server, including all of its rows and columns. Table type datasets include TTable,
TADOTable, TSQLTable, and TIBTable.
Table type datasets let you take advantage of indexes defined on the server.
Because there is a one-to-one correspondence between database table and dataset,
you can use server indexes that are defined for the database table. Indexes allow
your application to sort the records in the table, speed searches and lookups, and
can form the basis of a master/detail relationship. Some table type datasets also
take advantage of the one-to-one relationship between dataset and database table
to let you perform table-level operations such as creating and deleting database
tables.
• Query-type datasets: Query-type datasets represent a single SQL command, or
query. Queries can represent the result set from executing a command (typically a
SELECT statement), or they can execute a command that does not return any
records (for example, an UPDATE statement). Query-type datasets include
TQuery, TADOQuery, TSQLQuery, and TIBQuery.
To use a query-type dataset effectively, you must be familiar with SQL and your
server’s SQL implementation, including limitations and extensions to the SQL-92
standard. If you are new to SQL, you may want to purchase a third party book that
covers SQL in-depth. One of the best is Understanding the New SQL: A Complete
Guide, by Jim Melton and Alan R. Simpson, Morgan Kaufmann Publishers.
• Stored procedure-type datasets: Stored procedure-type datasets represent a
stored procedure on the database server. Stored procedure-type datasets include
TStoredProc, TADOStoredProc, TSQLStoredProc, and TIBStoredProc.
A stored procedure is a self-contained program written in the procedure and
trigger language specific to the database system used. They typically handle
frequently repeated database-related tasks, and are especially useful for
operations that act on large numbers of records or that use aggregate or
mathematical functions. Using stored procedures typically improves the
performance of a database application by:
• Taking advantage of the server’s usually greater processing power and speed.
• Reducing network traffic by moving processing to the server.


Stored procedures may or may not return data. Those that return data may return
it as a cursor (similar to the results of a SELECT query), as multiple cursors
(effectively returning multiple datasets), or they may return data in output
parameters. These differences depend in part on the server: Some servers do not
allow stored procedures to return data, or only allow output parameters. Some
servers do not support stored procedures at all. See your server documentation to
determine what is available.
Note You can usually use a query-type dataset to execute stored procedures because most
servers provide extensions to SQL for working with stored procedures. Each server,
however, uses its own syntax for this. If you choose to use a query-type dataset
instead of a stored procedure-type dataset, see your server documentation for the
necessary syntax.
In addition to the datasets that fall neatly into these three categories, TDataSet has
some descendants that fit into more than one category:
• TADODataSet and TSQLDataSet have a CommandType property that lets you
specify whether they represent a table, query, or stored procedure. Property and
method names are most similar to query-type datasets, although TADODataSet
lets you specify an index like a table type dataset.
• TClientDataSet represents the data from another dataset. As such, it can represent a
table, query, or stored procedure. TClientDataSet behaves most like a table type
dataset, because of its index support. However, it also has some of the features of
queries and stored procedures: the management of parameters and the ability to
execute without retrieving a result set.
• Some other client datasets (like TBDEClientDataSet) have a CommandType property
that lets you specify whether they represent a table, query, or stored procedure.
Property and method names are like TClientDataSet, including parameter support,
indexes, and the ability to execute without retrieving a result set.
• TIBDataSet can represent both queries and stored procedures. In fact, it can
represent multiple queries and stored procedures simultaneously, with separate
properties for each.

Using table type datasets


To use a table type dataset,
1 Place the appropriate dataset component in a data module or on a form, and set its
Name property to a unique value appropriate to your application.
2 Identify the database server that contains the table you want to use. Each table
type dataset does this differently, but typically you specify a database connection
component:
• For TTable, specify a TDatabase component or a BDE alias using the
DatabaseName property.
• For TADOTable, specify a TADOConnection component using the Connection
property.


• For TSQLTable, specify a TSQLConnection component using the SQLConnection property.
• For TIBTable, specify a TIBConnection component using the Database property.
For information about using database connection components, see Chapter 23,
“Connecting to databases.”
3 Set the TableName property to the name of the table in the database. You can select
tables from a drop-down list if you have already identified a database connection
component.
4 Place a data source component in the data module or on the form, and set its
DataSet property to the name of the dataset. The data source component is used to
pass a result set from the dataset to data-aware components for display.
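The same setup can be done in code. A sketch for TTable, assuming the sample BDE alias DBDEMOS and its customer table (component names are placeholders):

```pascal
Table1.DatabaseName := 'DBDEMOS';    // step 2: identify the database
Table1.TableName := 'customer.db';   // step 3: name the table
DataSource1.DataSet := Table1;       // step 4: hook up the data source
Table1.Open;                         // make the data available for display
```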

Advantages of using table type datasets


The main advantage of using table type datasets is the availability of indexes. Indexes
enable your application to
• Sort the records in the dataset.
• Locate records quickly.
• Limit the records that are visible.
• Establish master/detail relationships.
In addition, the one-to-one relationship between table type datasets and database
tables enables many of them to be used for
• Controlling Read/write access to tables
• Creating and deleting tables
• Emptying tables
• Synchronizing tables

Sorting records with indexes


An index determines the display order of records in a table. Typically, records appear
in ascending order based on a primary, or default, index. This default behavior does
not require application intervention. If you want a different sort order, however, you
must specify either
• An alternate index.
• A list of columns on which to sort (not available on servers that aren’t SQL-based).
Indexes let you present the data from a table in different orders. On SQL-based
tables, this sort order is implemented by using the index to generate an ORDER BY
clause in a query that fetches the table’s records. On other tables (such as Paradox
and dBASE tables), the index is used by the data access mechanism to present records
in the desired order.


Obtaining information about indexes


Your application can obtain information about server-defined indexes from all table
type datasets. To obtain a list of available indexes for the dataset, call the
GetIndexNames method. GetIndexNames fills a string list with valid index names. For
example, the following code fills a listbox with the names of all indexes defined for
the CustomersTable dataset:
CustomersTable.GetIndexNames(ListBox1.Items);
Note For Paradox tables, the primary index is unnamed, and is therefore not returned by
GetIndexNames. You can still change the index back to a primary index on a Paradox
table after using an alternative index, however, by setting the IndexName property to
a blank string.
To obtain information about the fields of the current index, use the
• IndexFieldCount property, to determine the number of columns in the index.
• IndexFields property, to examine a list of the field components for the columns that
comprise the index.
The following code illustrates how you might use IndexFieldCount and IndexFields to
iterate through a list of column names in an application:
var
I: Integer;
ListOfIndexFields: array[0..20] of string;
begin
with CustomersTable do
begin
for I := 0 to IndexFieldCount - 1 do
ListOfIndexFields[I] := IndexFields[I].FieldName;
end;
end;
Note IndexFieldCount is not valid for a dBASE table opened on an expression index.

Specifying an index with IndexName


Use the IndexName property to cause an index to be active. Once active, an index
determines the order of records in the dataset. (It can also be used as the basis for a
master-detail link, an index-based search, or index-based filtering.)
To activate an index, set the IndexName property to the name of the index. In some
database systems, primary indexes do not have names. To activate one of these
indexes, set IndexName to a blank string.
At design-time, you can select an index from a list of available indexes by clicking the
property’s ellipsis button in the Object Inspector. At runtime set IndexName using a
String literal or variable. You can obtain a list of available indexes by calling the
GetIndexNames method.
The following code sets the index for CustomersTable to CustDescending:
CustomersTable.IndexName := 'CustDescending';


Creating an index with IndexFieldNames


If there is no defined index that implements the sort order you want, you can create a
pseudo-index using the IndexFieldNames property.
Note IndexName and IndexFieldNames are mutually exclusive. Setting one property clears
values set for the other.
The value of IndexFieldNames is a string. To specify a sort order, list each column
name to use in the order it should be used, and delimit the names with semicolons.
Sorting is by ascending order only. Case-sensitivity of the sort depends on the
capabilities of your server. See your server documentation for more information.
The following code sets the sort order for PhoneTable based on LastName, then
FirstName:
PhoneTable.IndexFieldNames := 'LastName;FirstName';
Note If you use IndexFieldNames on Paradox and dBASE tables, the dataset attempts to find
an index that uses the columns you specify. If it cannot find such an index, it raises an
exception.

Using indexes to search for records


You can search against any dataset using the Locate and Lookup methods of TDataSet.
However, by explicitly using indexes, some table type datasets can improve over the
searching performance provided by the Locate and Lookup methods.
ADO datasets all support the Seek method, which moves to a record based on a set of
field values for fields in the current index. Seek lets you specify where to move the
cursor relative to the first or last matching record.
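For example, with an ADO dataset whose current index is built on a CustNo field, a call to Seek might look like the following sketch (the table and field names are assumptions, not from this chapter):

```pascal
{ Move to the first record whose CustNo key value equals 1234. }
if ADOTable1.Seek(VarArrayOf([1234]), soFirstEQ) then
  ShowMessage('Customer found');
```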
TTable and all types of client dataset support similar indexed-based searches, but use
a combination of related methods. The following table summarizes the six related
methods provided by TTable and client datasets to support index-based searches:

Table 24.9 Index-based search methods


Method Purpose
EditKey Preserves the current contents of the search key buffer and puts the dataset into
dsSetKey state so your application can modify existing search criteria prior to
executing a search.
FindKey Combines the SetKey and GotoKey methods in a single method.
FindNearest Combines the SetKey and GotoNearest methods in a single method.
GotoKey Searches for the first record in a dataset that exactly matches the search criteria, and
moves the cursor to that record if one is found.
GotoNearest Searches on string-based fields for the closest match to a record based on partial key
values, and moves the cursor to that record.
SetKey Clears the search key buffer and puts the table into dsSetKey state so your
application can specify new search criteria prior to executing a search.


GotoKey and FindKey are boolean functions that, if successful, move the cursor to a
matching record and return True. If the search is unsuccessful, the cursor is not
moved, and these functions return False.
GotoNearest and FindNearest always reposition the cursor either on the first exact
match found or, if no match is found, on the first record that is greater than the
specified search criteria.

Executing a search with Goto methods


To execute a search using Goto methods, follow these general steps:
1 Specify the index to use for the search. This is the same index that sorts the records
in the dataset (see “Sorting records with indexes” on page 24-26). To specify the
index, use the IndexName or IndexFieldNames property.
2 Open the dataset.
3 Put the dataset in dsSetKey state by calling the SetKey method.
4 Specify the value(s) to search on in the Fields property. Fields is a TFields object,
which maintains an indexed list of field components you can access by specifying
ordinal numbers corresponding to columns. The first column number in a dataset
is 0.
5 Search for and move to the first matching record found with GotoKey or
GotoNearest.
For example, the following code, attached to a button’s OnClick event, uses the
GotoKey method to move to the first record where the first field in the index has a
value that exactly matches the text in an edit box:
procedure TSearchDemo.SearchExactClick(Sender: TObject);
begin
  ClientDataSet1.SetKey;
  ClientDataSet1.Fields[0].AsString := Edit1.Text;
  if not ClientDataSet1.GotoKey then
    ShowMessage('Record not found');
end;
GotoNearest is similar. It searches for the nearest match to a partial field value. It can
be used only for string fields. For example,
Table1.SetKey;
Table1.Fields[0].AsString := 'Sm';
Table1.GotoNearest;
If a record exists with “Sm” as the first two characters of the first indexed field’s
value, the cursor is positioned on that record. Otherwise, the cursor moves to the
first record whose value in that field is greater than “Sm”.

Understanding datasets 24-29



Executing a search with Find methods


The Find methods do the same thing as the Goto methods, except that you do not
need to explicitly put the dataset in dsSetKey state to specify the key field values on
which to search. To execute a search using Find methods, follow these general steps:
1 Specify the index to use for the search. This is the same index that sorts the records
in the dataset (see “Sorting records with indexes” on page 24-26). To specify the
index, use the IndexName or IndexFieldNames property.
2 Open the dataset.
3 Search for and move to the first or nearest record with FindKey or FindNearest. Both
methods take a single parameter, a comma-delimited list of field values, where
each value corresponds to an indexed column in the underlying table.
Note FindNearest can only be used for string fields.
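For example, the following sketch (assuming a client dataset whose current index starts with a LastName field) is the FindKey equivalent of the GotoKey example in the previous section:

ClientDataSet1.IndexFieldNames := 'LastName';
if not ClientDataSet1.FindKey([Edit1.Text]) then
  ShowMessage('Record not found');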

Specifying the current record after a successful search


By default, a successful search positions the cursor on the first record that matches
the search criteria. If you prefer, you can set the KeyExclusive property to True to
position the cursor on the next record after the first matching record.
By default, KeyExclusive is False, meaning that successful searches position the cursor
on the first matching record.
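For example, the following sketch (component names are illustrative) positions the cursor on the record immediately after the first match:

ClientDataSet1.KeyExclusive := True;
ClientDataSet1.SetKey;
ClientDataSet1.Fields[0].AsString := Edit1.Text;
ClientDataSet1.GotoKey;  { lands on the record after the first match }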

Searching on partial keys


If the dataset has more than one key column, and you want to search for values in a
subset of that key, set KeyFieldCount to the number of columns on which you are
searching. For example, if the dataset’s current index has three columns, and you
want to search for values using just the first column, set KeyFieldCount to 1.
For table type datasets with multiple-column keys, you can search only for values in
contiguous columns, beginning with the first. For example, for a three-column key
you can search for values in the first column, the first and second, or the first, second,
and third, but not just the first and third.
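For example, the following sketch searches on only the first column of a hypothetical index built on LastName, FirstName, and MiddleName (in that order):

Employee.KeyFieldCount := 1;
Employee.SetKey;
Employee.FieldByName('LastName').AsString := 'Smith';
Employee.GotoNearest;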

Repeating or extending a search


Each time you call SetKey or FindKey, the method clears any previous values in the
Fields property. If you want to repeat a search using previously set fields, or you want
to add to the fields used in a search, call EditKey in place of SetKey and FindKey.
For example, suppose you have already executed a search of the Employee table
based on the City field of the “CityIndex” index. Suppose further that “CityIndex”
includes both the City and Company fields. To find a record with a specified company
name in a specified city, use the following code:
Employee.KeyFieldCount := 2;
Employee.EditKey;
Employee['Company'] := Edit2.Text;
Employee.GotoNearest;


Limiting records with ranges


You can temporarily view and edit a subset of data for any dataset by using filters
(see “Displaying and editing a subset of data using filters” on page 24-13). Some table
type datasets support an additional way to access a subset of available records, called
ranges.
Ranges only apply to TTable and to client datasets. Despite their similarities, ranges
and filters have different uses. The following topics discuss the differences between
ranges and filters and how to use ranges.

Understanding the differences between ranges and filters


Both ranges and filters restrict visible records to a subset of all available records, but
the way they do so differs. A range is a set of contiguously indexed records that fall
between specified boundary values. For example, in an employee database indexed
on last name, you might apply a range to display all employees whose last names are
greater than “Jones” and less than “Smith”. Because ranges depend on indexes, you
must set the current index to one that can be used to define the range. As with
specifying an index to sort records, you can assign the index on which to define a
range using either the IndexName or the IndexFieldNames property.
A filter, on the other hand, is any set of records that share specified data points,
regardless of indexing. For example, you might filter an employee database to
display all employees who live in California and who have worked for the company
for five or more years. While filters can make use of indexes if they apply, filters are
not dependent on them. Filters are applied record-by-record as an application scrolls
through a dataset.
In general, filters are more flexible than ranges. Ranges, however, can be more
efficient when datasets are large and the records of interest to an application are
already blocked in contiguously indexed groups. For very large datasets, it may be
still more efficient to use the WHERE clause of a query-type dataset to select data. For
details on specifying a query, see “Using query-type datasets” on page 24-42.
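For example, the employee filter described above can be sketched using the Filter and Filtered properties; the field names here are illustrative:

Employee.Filter := '(State = ''CA'') and (YearsOfService >= 5)';
Employee.Filtered := True;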

Specifying ranges
There are two mutually exclusive ways to specify a range:
• Specify the beginning and ending separately using SetRangeStart and SetRangeEnd.
• Specify both endpoints at once using SetRange.

Setting the beginning of a range


Call the SetRangeStart procedure to put the dataset into dsSetKey state and begin
creating a list of starting values for the range. Once you call SetRangeStart,
subsequent assignments to the Fields property are treated as starting index values to
use when applying the range. Fields specified must apply to the current index.
For example, suppose your application uses a TSimpleDataSet component named
Customers, linked to the CUSTOMER table, and that you have created persistent field
components for each field in the Customers dataset. CUSTOMER is indexed on its first
column (CustNo). A form in the application has two edit components named StartVal
and EndVal, used to specify start and ending values for a range. The following code
can be used to create and apply a range:
with Customers do
begin
  SetRangeStart;
  FieldByName('CustNo').AsString := StartVal.Text;
  SetRangeEnd;
  if (Length(EndVal.Text) > 0) then
    FieldByName('CustNo').AsString := EndVal.Text;
  ApplyRange;
end;
This code checks that the text entered in EndVal is not null before assigning any
values to Fields. If the text entered for StartVal is null, then all records from the
beginning of the dataset are included, since all values are greater than null. However,
if the text entered for EndVal is null, then no records are included, since none are less
than null.
For a multi-column index, you can specify a starting value for all or some fields in the
index. If you do not supply a value for a field used in the index, a null value is
assumed when you apply the range. If you try to set a value for a field that is not in
the index, the dataset raises an exception.
Tip To start at the beginning of the dataset, omit the call to SetRangeStart.
To finish specifying the start of a range, call SetRangeEnd or apply or cancel the range.
For information about applying and canceling ranges, see “Applying or canceling a
range” on page 24-34.

Setting the end of a range


Call the SetRangeEnd procedure to put the dataset into dsSetKey state and start
creating a list of ending values for the range. Once you call SetRangeEnd, subsequent
assignments to the Fields property are treated as ending index values to use when
applying the range. Fields specified must apply to the current index.
Warning Always specify the ending values for a range, even if you want a range to end on the
last record in the dataset. If you do not provide ending values, Delphi assumes the
ending value of the range is a null value. A range with null ending values is always
empty.
The easiest way to assign ending values is to call the FieldByName method. For
example,
with Contacts do
begin
  SetRangeStart;
  FieldByName('LastName').AsString := Edit1.Text;
  SetRangeEnd;
  FieldByName('LastName').AsString := Edit2.Text;
  ApplyRange;
end;


As with specifying start of range values, if you try to set a value for a field that is not
in the index, the dataset raises an exception.
To finish specifying the end of a range, apply or cancel the range. For information
about applying and canceling ranges, see “Applying or canceling a range” on
page 24-34.

Setting start- and end-range values


Instead of using separate calls to SetRangeStart and SetRangeEnd to specify range
boundaries, you can call the SetRange procedure to put the dataset into dsSetKey state
and set the starting and ending values for a range with a single call.
SetRange takes two constant array parameters: a set of starting values, and a set of
ending values. For example, the following statement establishes a range based on a
two-column index:
SetRange([Edit1.Text, Edit2.Text], [Edit3.Text, Edit4.Text]);
For a multi-column index, you can specify starting and ending values for all or some
fields in the index. If you do not supply a value for a field used in the index, a null
value is assumed when you apply the range. To omit a value for the first field in an
index, and specify values for successive fields, pass a null value for the omitted field.
Always specify the ending values for a range, even if you want a range to end on the
last record in the dataset. If you do not provide ending values, the dataset assumes
the ending value of the range is a null value. A range with null ending values is
always empty because the starting range is greater than or equal to the ending range.

Specifying a range based on partial keys


If a key is composed of one or more string fields, the SetRange methods support
partial keys. For example, if an index is based on the LastName and FirstName
columns, the following range specifications are valid:
Contacts.SetRangeStart;
Contacts['LastName'] := 'Smith';
Contacts.SetRangeEnd;
Contacts['LastName'] := 'Zzzzzz';
Contacts.ApplyRange;
This code includes all records in a range where LastName is greater than or equal to
“Smith.” The value specification could also be:
Contacts['LastName'] := 'Sm';
This statement includes records that have LastName greater than or equal to “Sm.”

Including or excluding records that match boundary values


By default, a range includes all records that are greater than or equal to the specified
starting range, and less than or equal to the specified ending range. This behavior is
controlled by the KeyExclusive property. KeyExclusive is False by default.


If you prefer, you can set the KeyExclusive property for a dataset to True to exclude
records equal to ending range. For example,
Contacts.KeyExclusive := True;
Contacts.SetRangeStart;
Contacts['LastName'] := 'Smith';
Contacts.SetRangeEnd;
Contacts['LastName'] := 'Tyler';
Contacts.ApplyRange;
This code includes all records in a range where LastName is greater than or equal to
“Smith” and less than “Tyler”.

Modifying a range
Two functions enable you to modify the existing boundary conditions for a range:
EditRangeStart, for changing the starting values for a range; and EditRangeEnd, for
changing the ending values for the range.
The process for editing and applying a range involves these general steps:
1 Putting the dataset into dsSetKey state and modifying the starting index value for
the range.
2 Modifying the ending index value for the range.
3 Applying the range to the dataset.
You can modify either the starting or ending values of the range, or you can modify
both boundary conditions. If you modify the boundary conditions for a range that is
currently applied to the dataset, the changes you make are not applied until you call
ApplyRange again.
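For example, the following sketch widens the ending boundary of an existing range (such as the Smith-to-Tyler range shown earlier) and reapplies it:

Contacts.EditRangeEnd;
Contacts['LastName'] := 'Zzzzzz';
Contacts.ApplyRange;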

Editing the start of a range


Call the EditRangeStart procedure to put the dataset into dsSetKey state and begin
modifying the current list of starting values for the range. Once you call
EditRangeStart, subsequent assignments to the Fields property overwrite the current
index values to use when applying the range.
Tip If you initially created a start range based on a partial key, you can use EditRangeStart
to extend the starting value for a range. For more information about ranges based on
partial keys, see “Specifying a range based on partial keys” on page 24-33.

Editing the end of a range


Call the EditRangeEnd procedure to put the dataset into dsSetKey state and start
creating a list of ending values for the range. Once you call EditRangeEnd, subsequent
assignments to the Fields property are treated as ending index values to use when
applying the range.

Applying or canceling a range


When you call SetRangeStart or EditRangeStart to specify the start of a range, or
SetRangeEnd or EditRangeEnd to specify the end of a range, the dataset enters the
dsSetKey state. It stays in that state until you apply or cancel the range.


Applying a range
When you specify a range, the boundary conditions you define are not put into effect
until you apply the range. To make a range take effect, call the ApplyRange method.
ApplyRange immediately restricts a user’s view of and access to data in the specified
subset of the dataset.

Canceling a range
The CancelRange method ends application of a range and restores access to the full
dataset. Even though canceling a range restores access to all records in the dataset,
the boundary conditions for that range are still available so that you can reapply the
range at a later time. Range boundaries are preserved until you provide new range
boundaries or modify the existing boundaries. For example, the following code is
valid:
...
MyTable.CancelRange;
...
{later on, use the same range again. No need to call SetRangeStart, etc.}
MyTable.ApplyRange;
...

Creating master/detail relationships


Table type datasets can be linked into master/detail relationships. When you set up a
master/detail relationship, you link two datasets so that all the records of one (the
detail) always correspond to the single current record in the other (the master).
Table type datasets support master/detail relationships in two very distinct ways:
• All table type datasets can act as the detail of another dataset by linking cursors.
This process is described in “Making the table a detail of another dataset” below.
• TTable, TSQLTable, and all client datasets can act as the master in a master/detail
relationship that uses nested detail tables. This process is described in “Using
nested detail tables” on page 24-37.
Each of these approaches has its unique advantages. Linking cursors lets you create
master/detail relationships where the master table is any type of dataset. With
nested details, the type of dataset that can act as the detail table is limited, but they
provide for more options in how to display the data. If the master is a client dataset,
nested details provide a more robust mechanism for applying cached updates.

Making the table a detail of another dataset


A table type dataset’s MasterSource and MasterFields properties can be used to
establish one-to-many relationships between two datasets.
The MasterSource property is used to specify a data source from which the table gets
data from the master table. This data source can be linked to any type of dataset. For
instance, by specifying a query’s data source in this property, you can link a client
dataset as the detail of the query, so that the client dataset tracks events occurring in
the query.

Understanding datasets 24-35


Using table type datasets

The dataset is linked to the master table based on its current index. Before you specify
the fields in the master dataset that are tracked by the detail dataset, first specify the
index in the detail dataset that starts with the corresponding fields. You can use
either the IndexName or the IndexFieldNames property.
Once you specify the index to use, use the MasterFields property to indicate the
column(s) in the master dataset that correspond to the index fields in the detail table.
To link datasets on multiple column names, separate field names with semicolons:
Parts.MasterFields := 'OrderNo;ItemNo';
To help create meaningful links between two datasets, you can use the Field Link
designer. To use the Field Link designer, double click on the MasterFields property in
the Object Inspector after you have assigned a MasterSource and an index.
The following steps create a simple form in which a user can scroll through customer
records and display all orders for the current customer. The master table is the
CustomersTable table, and the detail table is OrdersTable. The example uses the BDE-
based TTable component, but you can use the same methods to link any table type
datasets.
1 Place two TTable components and two TDataSource components in a data module.
2 Set the properties of the first TTable component as follows:
• DatabaseName: DBDEMOS
• TableName: CUSTOMER
• Name: CustomersTable
3 Set the properties of the second TTable component as follows:
• DatabaseName: DBDEMOS
• TableName: ORDERS
• Name: OrdersTable
4 Set the properties of the first TDataSource component as follows:
• Name: CustSource
• DataSet: CustomersTable
5 Set the properties of the second TDataSource component as follows:
• Name: OrdersSource
• DataSet: OrdersTable
6 Place two TDBGrid components on a form.
7 Choose File|Use Unit to specify that the form should use the data module.
8 Set the DataSource property of the first grid component to
“CustSource”, and set the DataSource property of the second grid to
“OrdersSource”.


9 Set the MasterSource property of OrdersTable to “CustSource”. This links the
CUSTOMER table (the master table) to the ORDERS table (the detail table).
10 Double-click the MasterFields property value box in the Object Inspector to invoke
the Field Link Designer to set the following properties:
• In the Available Indexes field, choose CustNo to link the two tables by the
CustNo field.
• Select CustNo in both the Detail Fields and Master Fields field lists.
• Click the Add button to add this join condition. In the Joined Fields list,
“CustNo -> CustNo” appears.
• Choose OK to commit your selections and exit the Field Link Designer.
11 Set the Active properties of CustomersTable and OrdersTable to True to display data
in the grids on the form.
12 Compile and run the application.
If you run the application now, you will see that the tables are linked together, and
that when you move to a new record in the CUSTOMER table, you see only those
records in the ORDERS table that belong to the current customer.
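The same link can also be sketched in code at runtime, using the component names from the steps above (and assuming the ORDERS table has an index on its CustNo field):

OrdersTable.MasterSource := CustSource;
OrdersTable.IndexFieldNames := 'CustNo';
OrdersTable.MasterFields := 'CustNo';
CustomersTable.Open;
OrdersTable.Open;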

Using nested detail tables


A nested table is a detail dataset that is the value of a single dataset field in another
(master) dataset. For datasets that represent server data, a nested detail dataset can
only be used for a dataset field on the server. TClientDataSet components do not
represent server data, but they can also contain dataset fields if you create a dataset
for them that contains nested details, or if they receive data from a provider that is
linked to the master table of a master/detail relationship.
Note For TClientDataSet, using nested detail sets is necessary if you want to apply updates
from master and detail tables to a database server.
To use nested detail sets, the ObjectView property of the master dataset must be True.
When your table type dataset contains nested detail datasets, TDBGrid provides
support for displaying the nested details in a popup window. For more information
on how this works, see “Displaying dataset fields” on page 25-27.
Alternately, you can display and edit detail datasets in data-aware controls by using
a separate dataset component for the detail set. At design time, create persistent
fields for the fields in your (master) dataset, using the Fields Editor: right click the
master dataset and choose Fields Editor. Add a new persistent field to your dataset
by right-clicking and choosing Add Fields. Define your new field with type DataSet
Field. In the Fields Editor, define the structure of the detail table. You must also add
persistent fields for any other fields used in your master dataset.


The dataset component for the detail table is a dataset descendant of a type allowed
by the master table. TTable components only allow TNestedDataSet components as
nested datasets. TSQLTable components allow other TSQLTable components.
TClientDataset components allow other client datasets. Choose a dataset of the
appropriate type from the Component palette and add it to your form or data
module. Set this detail dataset’s DataSetField property to the persistent DataSet field
in the master dataset. Finally, place a data source component on the form or data
module and set its DataSet property to the detail dataset. Data-aware controls can use
this data source to access the data in the detail set.
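The last two steps can be sketched in code; the component and field names here are illustrative:

{ assumes the master dataset has a persistent DataSet field named 'Orders' }
NestedOrders.DataSetField := Master.FieldByName('Orders') as TDataSetField;
OrdersSource.DataSet := NestedOrders;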

Controlling read/write access to tables


By default when a table type dataset is opened, it requests read and write access for
the underlying database table. Depending on the characteristics of the underlying
database table, the requested write privilege may not be granted (for example, when
you request write access to an SQL table on a remote server and the server restricts
the table’s access to read only).
Note This is not true for TClientDataSet, which determines whether users can edit data
from information that the dataset provider supplies with data packets. It is also not
true for TSQLTable, which is a unidirectional dataset, and hence always read-only.
When the table opens, you can check the CanModify property to ascertain whether the
underlying database (or the dataset provider) allows users to edit the data in the
table. If CanModify is False, the application cannot write to the database. If CanModify
is True, your application can write to the database provided the table’s ReadOnly
property is False.
ReadOnly determines whether a user can both view and edit data. When ReadOnly is
False (the default), a user can both view and edit data. To restrict a user to viewing
data, set ReadOnly to True before opening the table.
Note ReadOnly is implemented on all table type datasets except TSQLTable, which is
always read-only.
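For example, the following sketch (component names are illustrative) restricts a table to viewing and then checks what the database itself allows:

CustTable.ReadOnly := True;  { restrict users to viewing data; set before opening }
CustTable.Open;
if not CustTable.CanModify then
  ShowMessage('The underlying table is read-only');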

Creating and deleting tables


Some table type datasets let you create and delete the underlying tables at design
time or at runtime. Typically, database tables are created and deleted by a database
administrator. However, it can be handy during application development and testing
to create and destroy database tables that your application can use.

Creating tables
TTable and TIBTable both let you create the underlying database table without using
SQL. Similarly, TClientDataSet lets you create a dataset when you are not working
with a dataset provider. Using TTable and TClientDataSet, you can create the table at
design time or runtime. TIBTable only lets you create tables at runtime.


Before you can create the table, you must set properties to specify the structure of
the table you are creating. In particular, you must specify
• The database that will host the new table. For TTable, you specify the database
using the DatabaseName property. For TIBTable, you must use a TIBDatabase
component, which is assigned to the Database property. (Client datasets do not use
a database.)
• The type of database (TTable only). Set the TableType property to the desired type
of table. For Paradox, dBASE, or ASCII tables, set TableType to ttParadox, ttDBase,
or ttASCII, respectively. For all other table types, set TableType to ttDefault.
• The name of the table you want to create. Both TTable and TIBTable have a
TableName property for the name of the new table. Client datasets do not use a
table name, but you should specify the FileName property before you save the new
table. If you create a table that duplicates the name of an existing table, the existing
table and all its data are overwritten by the newly created table. The old table and
its data cannot be recovered. To avoid overwriting an existing table, you can check
the Exists property at runtime. Exists is only available on TTable and TIBTable.
• The fields for the new table. There are two ways to do this:
• You can add field definitions to the FieldDefs property. At design time, double-
click the FieldDefs property in the Object Inspector to bring up the collection
editor. Use the collection editor to add, remove, or change the properties of the
field definitions. At runtime, clear any existing field definitions and then use
the AddFieldDef method to add each new field definition. For each new field
definition, set the properties of the TFieldDef object to specify the desired
attributes of the field.
• You can use persistent field components instead. At design time, double-click
on the dataset to bring up the Fields editor. In the Fields editor, right-click and
choose the New Field command. Describe the basic properties of your field.
Once the field is created, you can alter its properties in the Object Inspector by
selecting the field in the Fields editor.
• Indexes for the new table (optional). At design time, double-click the IndexDefs
property in the Object Inspector to bring up the collection editor. Use the
collection editor to add, remove, or change the properties of index definitions. At
runtime, clear any existing index definitions, and then use the AddIndexDef
method to add each new index definition. For each new index definition, set the
properties of the TIndexDef object to specify the desired attributes of the index.
Note You can’t define indexes for the new table if you are using persistent field
components instead of field definition objects.
To create the table at design time, right-click the dataset and choose Create Table
(TTable) or Create Data Set (TClientDataSet). This command does not appear on the
context menu until you have specified all the necessary information.
To create the table at runtime, call the CreateTable method (TTable and TIBTable) or the
CreateDataSet method (TClientDataSet).


Note You can set up the definitions at design time and then call the CreateTable (or
CreateDataSet) method at runtime to create the table. However, to do so you must
indicate that the definitions specified at design time should be saved with the dataset
component (by default, field and index definitions are generated dynamically at
runtime). Specify that the definitions should be saved with the dataset by setting its
StoreDefs property to True.
Tip If you are using TTable, you can preload the field definitions and index definitions of
an existing table at design time. Set the DatabaseName and TableName properties to
specify the existing table. Right click the table component and choose Update Table
Definition. This automatically sets the values of the FieldDefs and IndexDefs
properties to describe the fields and indexes of the existing table. Next, reset the
DatabaseName and TableName to specify the table you want to create, canceling any
prompts to rename the existing table.
Note When creating Oracle8 tables, you can’t create object fields (ADT fields, array fields,
and dataset fields).
The following code creates a new table at runtime and associates it with the
DBDEMOS alias. Before it creates the new table, it verifies that the table name
provided does not match the name of an existing table:
var
  TableFound: Boolean;
begin
  with TTable.Create(nil) do  // create a temporary TTable component
  begin
    try
      { set properties of the temporary TTable component }
      Active := False;
      DatabaseName := 'DBDEMOS';
      TableName := Edit1.Text;
      TableType := ttDefault;
      { define fields for the new table }
      FieldDefs.Clear;
      with FieldDefs.AddFieldDef do begin
        Name := 'First';
        DataType := ftString;
        Size := 20;
        Required := False;
      end;
      with FieldDefs.AddFieldDef do begin
        Name := 'Second';
        DataType := ftString;
        Size := 30;
        Required := False;
      end;
      { define indexes for the new table }
      IndexDefs.Clear;
      with IndexDefs.AddIndexDef do begin
        Name := '';
        Fields := 'First';
        Options := [ixPrimary];
      end;
      TableFound := Exists;  // check whether the table already exists
      if TableFound then
        if MessageDlg('Overwrite existing table ' + Edit1.Text + '?',
            mtConfirmation, mbYesNoCancel, 0) = mrYes then
          TableFound := False;
      if not TableFound then
        CreateTable;  // create the table
    finally
      Free;  // destroy the temporary TTable when done
    end;
  end;
end;

Deleting tables
TTable and TIBTable let you delete tables from the underlying database table without
using SQL. To delete a table at runtime, call the dataset’s DeleteTable method. For
example, the following statement removes the table underlying a dataset:
CustomersTable.DeleteTable;
Caution When you delete a table with DeleteTable, the table and all its data are gone forever.
If you are using TTable, you can also delete tables at design time: Right-click the table
component and select Delete Table from the context menu. The Delete Table menu
pick is only present if the table component represents an existing database table (the
DatabaseName and TableName properties specify an existing table).

Emptying tables
Many table type datasets supply a single method that lets you delete all rows of data
in the table.
• For TTable and TIBTable, you can delete all the records by calling the EmptyTable
method at runtime:
PhoneTable.EmptyTable;
• For TADOTable, you can use the DeleteRecords method.
PhoneTable.DeleteRecords;
• For TSQLTable, you can use the DeleteRecords method as well. Note, however, that
the TSQLTable version of DeleteRecords never takes any parameters.
PhoneTable.DeleteRecords;
• For client datasets, you can use the EmptyDataSet method.
PhoneTable.EmptyDataSet;
Note For tables on SQL servers, these methods only succeed if you have DELETE privilege
for the table.
Caution When you empty a dataset, the data you delete is gone forever.


Synchronizing tables
If you have two or more datasets that represent the same database table but do not
share a data source component, then each dataset has its own view of the data and
its own current record. As users access records through each dataset, the
components’ current records will differ.
If the datasets are all instances of TTable, or all instances of TIBTable, or all client
datasets, you can force the current record for each of these datasets to be the same by
calling the GotoCurrent method. GotoCurrent sets its own dataset’s current record to
the current record of a matching dataset. For example, the following code sets the
current record of CustomerTableOne to be the same as the current record of
CustomerTableTwo:
CustomerTableOne.GotoCurrent(CustomerTableTwo);
Tip If your application needs to synchronize datasets in this manner, put the datasets in a
data module and add the unit for the data module to the uses clause of each unit that
accesses the tables.
To synchronize datasets from separate forms, you must add one form’s unit to the
uses clause of the other, and you must qualify at least one of the dataset names with
its form name. For example:
CustomerTableOne.GotoCurrent(Form2.CustomerTableTwo);

Using query-type datasets


To use a query-type dataset:
1 Place the appropriate dataset component in a data module or on a form, and set its
Name property to a unique value appropriate to your application.
2 Identify the database server to query. Each query-type dataset does this
differently, but typically you specify a database connection component:
• For TQuery, specify a TDatabase component or a BDE alias using the
DatabaseName property.
• For TADOQuery, specify a TADOConnection component using the Connection
property.
• For TSQLQuery, specify a TSQLConnection component using the SQLConnection
property.
• For TIBQuery, specify a TIBConnection component using the Database property.
For information about using database connection components, see Chapter 23,
“Connecting to databases.”
3 Specify an SQL statement in the SQL property of the dataset, and optionally
specify any parameters for the statement. For more information, see “Specifying
the query” on page 24-43 and “Using parameters in queries” on page 24-45.

24-42 Developer’s Guide



4 If the query data is to be used with visual data controls, add a data source
component to the data module, and set its DataSet property to the query-type
dataset. The data source component forwards the results of the query (called a
result set) to data-aware components for display. Connect data-aware components
to the data source using their DataSource and DataField properties.
5 Activate the query component. For queries that return a result set, use the Active
property or the Open method. To execute queries that only perform an action on a
table and return no result set, use the ExecSQL method at runtime. If you plan to
execute the query more than once, you may want to call Prepare to initialize the
data access layer and bind parameter values into the query. For information about
preparing a query, see “Preparing queries” on page 24-48.
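The steps above can be sketched in code. The following runtime setup uses dbExpress; the component names (SQLConnection1, the form) and the SQL text are illustrative assumptions, not taken from the text:

```delphi
var
  Query: TSQLQuery;
  Source: TDataSource;
begin
  Query := TSQLQuery.Create(Form1);
  Query.Name := 'OrdersQuery';               { step 1: unique name }
  Query.SQLConnection := SQLConnection1;     { step 2: database connection }
  Query.SQL.Text := 'SELECT * FROM Orders';  { step 3: the SQL statement }
  Source := TDataSource.Create(Form1);
  Source.DataSet := Query;                   { step 4: data source for visual controls }
  Query.Open;                                { step 5: activate; returns a result set }
end;
```

At design time you would instead drop the components from the Component palette and set the same properties in the Object Inspector.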

Specifying the query


For true query-type datasets, you use the SQL property to specify the SQL statement
for the dataset to execute. Some datasets, such as TADODataSet, TSQLDataSet, and
client datasets, use a CommandText property to accomplish the same thing.
Most queries that return records are SELECT commands. Typically, they define the
fields to include, the tables from which to select those fields, conditions that limit
what records to include, and the order of the resulting dataset. For example:
SELECT CustNo, OrderNo, SaleDate
FROM Orders
WHERE CustNo = 1225
ORDER BY SaleDate
Queries that do not return records include statements that use Data Definition
Language (DDL) or Data Manipulation Language (DML) statements other than
SELECT statements (for example, INSERT, DELETE, UPDATE, CREATE INDEX,
and ALTER TABLE commands do not return any records). The language used in
commands is server-specific, but usually compliant with the SQL-92 standard for the
SQL language.
The SQL command you execute must be acceptable to the server you are using.
Datasets neither evaluate the SQL nor execute it. They merely pass the command to
the server for execution. In most cases, the SQL command must be only one complete
SQL statement, although that statement can be as complex as necessary (for example,
a SELECT statement with a WHERE clause that uses several nested logical operators
such as AND and OR). Some servers also support “batch” syntax that permits
multiple statements; if your server supports such syntax, you can enter multiple
statements when you specify the query.
The SQL statements used by queries can be verbatim, or they can contain replaceable
parameters. Queries that use parameters are called parameterized queries. When you
use parameterized queries, the actual values assigned to the parameters are inserted
into the query before you execute, or run, the query. Using parameterized queries is
very flexible, because you can change a user’s view of and access to data on the fly at
runtime without having to alter the SQL statement. For more information about
parameterized queries, see “Using parameters in queries” on page 24-45.


Specifying a query using the SQL property


When using a true query-type dataset (TQuery, TADOQuery, TSQLQuery, or
TIBQuery), assign the query to the SQL property. The SQL property is a TStrings
object. Each separate string in this TStrings object is a separate line of the query.
Using multiple lines does not affect the way the query executes on the server, but can
make it easier to modify and debug the query if you divide the statement into logical
units:
MyQuery.Close;
MyQuery.SQL.Clear;
MyQuery.SQL.Add('SELECT CustNo, OrderNo, SaleDate');
MyQuery.SQL.Add(' FROM Orders');
MyQuery.SQL.Add('ORDER BY SaleDate');
MyQuery.Open;
The code below demonstrates modifying only a single line in an existing SQL
statement. In this case, the ORDER BY clause already exists on the third line of the
statement. It is referenced via the SQL property using an index of 2.
MyQuery.SQL[2] := 'ORDER BY OrderNo';
Note The dataset must be closed when you specify or modify the SQL property.
At design time, use the String List editor to specify the query. Click the ellipsis button
by the SQL property in the Object Inspector to display the String List editor.
Note With some versions of Delphi, if you are using TQuery, you can also use the SQL
Builder to construct a query based on a visible representation of tables and fields in a
database. To use the SQL Builder, select the query component, right-click it to invoke
the context menu, and choose Graphical Query Editor. To learn how to use SQL
Builder, open it and use its online help.
Because the SQL property is a TStrings object, you can load the text of the query from
a file by calling the TStrings.LoadFromFile method:
MyQuery.SQL.LoadFromFile('custquery.sql');
You can also use the Assign method of the SQL property to copy the contents of a
string list object into the SQL property. The Assign method automatically clears the
current contents of the SQL property before copying the new statement:
MyQuery.SQL.Assign(Memo1.Lines);

Specifying a query using the CommandText property


When using TADODataSet, TSQLDataSet, or a client dataset, assign the text of the
query statement to the CommandText property:
MyQuery.CommandText := 'SELECT CustName, Address FROM Customer';
At design time, you can type the query directly into the Object Inspector, or, if the
dataset already has an active connection to the database, you can click the ellipsis
button by the CommandText property to display the Command Text editor. The
Command Text editor lists the available tables, and the fields in those tables, to make
it easier to compose your queries.


Using parameters in queries


A parameterized SQL statement contains parameters, or variables, the values of
which can be varied at design time or runtime. Parameters can replace data values,
such as those used in a WHERE clause for comparisons, that appear in an SQL
statement. Ordinarily, parameters stand in for data values passed to the statement.
For example, in the following INSERT statement, values to insert are passed as
parameters:
INSERT INTO Country (Name, Capital, Population)
VALUES (:Name, :Capital, :Population)
In this SQL statement, :Name, :Capital, and :Population are placeholders for actual
values supplied to the statement at runtime by your application. Note that the names
of parameters begin with a colon. The colon is required so that the parameter names
can be distinguished from literal values. You can also include unnamed parameters
by adding a question mark (?) to your query. Unnamed parameters are identified by
position, because they do not have unique names.
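For example, the following sketch binds two unnamed parameters by their ordinal position (Query1 and Edit1 are assumed component names):

```delphi
{ Unnamed parameters are identified by position in the statement }
Query1.SQL.Text :=
  'SELECT Name, Capital FROM Country WHERE Continent = ? AND Population > ?';
Query1.Params[0].AsString := Edit1.Text;   { first ? }
Query1.Params[1].AsInteger := 5000000;     { second ? }
Query1.Open;
```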
Before the dataset can execute the query, you must supply values for any parameters
in the query text. TQuery, TIBQuery, TSQLQuery, and client datasets use the Params
property to store these values. TADOQuery uses the Parameters property instead.
Params (or Parameters) is a collection of parameter objects (TParam or TParameter),
where each object represents a single parameter. When you specify the text for the
query, the dataset generates this set of parameter objects, and (depending on the
dataset type) initializes any of their properties that it can deduce from the query.
Note You can suppress the automatic generation of parameter objects in response to
changing the query text by setting the ParamCheck property to False. This is useful for
data definition language (DDL) statements that contain parameters as part of the
DDL statement that are not parameters for the query itself. For example, the DDL
statement to create a stored procedure may define parameters that are part of the
stored procedure. By setting ParamCheck to False, you prevent these parameters from
being mistaken for parameters of the query.
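As a sketch, the following (with an assumed InterBase-style procedure definition and an assumed Query1 component) suppresses parameter generation so that :CUST_ID in the procedure body is left alone:

```delphi
Query1.ParamCheck := False;  { :CUST_ID belongs to the DDL, not to this query }
Query1.SQL.Text :=
  'CREATE PROCEDURE DROP_CUST (CUST_ID INTEGER) AS ' +
  'BEGIN DELETE FROM Customer WHERE ID = :CUST_ID; END';
Query1.ExecSQL;
```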
Parameter values must be bound into the SQL statement before it is executed for the
first time. Query components do this automatically for you even if you do not
explicitly call the Prepare method before executing a query.
Tip It is a good programming practice to give parameters names that correspond to the
actual names of the columns with which they are associated. For
example, if a column name is “Number,” then its corresponding parameter would be
“:Number”. Using matching names is especially important if the dataset uses a data
source to obtain parameter values from another dataset. This process is described in
“Establishing master/detail relationships using parameters” on page 24-47.

Supplying parameters at design time


At design time, you can specify parameter values using the parameter collection
editor. To display the parameter collection editor, click on the ellipsis button for the
Params or Parameters property in the Object Inspector. If the SQL statement does not
contain any parameters, no objects are listed in the collection editor.


Note The parameter collection editor is the same collection editor that appears for other
collection properties. Because the editor is shared with other properties, its right-click
context menu contains the Add and Delete commands. However, they are never
enabled for query parameters. The only way to add or delete parameters is in the
SQL statement itself.
For each parameter, select it in the parameter collection editor. Then use the Object
Inspector to modify its properties.
When using the Params property (TParam objects), you will want to inspect or modify
the following:
• The DataType property lists the data type for the parameter’s value. For some
datasets, this value may be correctly initialized. If the dataset did not deduce the
type, DataType is ftUnknown, and you must change it to indicate the type of the
parameter value.
The DataType property lists the logical data type for the parameter. In general,
these data types conform to server data types. For specific logical type-to-server
data type mappings, see the documentation for the data access mechanism (BDE,
dbExpress, InterBase).
• The ParamType property lists the type of the selected parameter. For queries, this is
always ptInput, because queries can only contain input parameters. If the value of
ParamType is ptUnknown, change it to ptInput.
• The Value property specifies a value for the selected parameter. You can leave
Value blank if your application supplies parameter values at runtime.
When using the Parameters property (TParameter objects), you will want to inspect or
modify the following:
• The DataType property lists the data type for the parameter’s value. For some data
types, you must provide additional information:
• The NumericScale property indicates the number of decimal places for numeric
parameters.
• The Precision property indicates the total number of digits for numeric
parameters.
• The Size property indicates the number of characters in string parameters.
• The Direction property lists the type of the selected parameter. For queries, this is
always pdInput, because queries can only contain input parameters.
• The Attributes property controls the type of values the parameter will accept.
Attributes may be set to a combination of psSigned, psNullable, and psLong.
• The Value property specifies a value for the selected parameter. You can leave
Value blank if your application supplies parameter values at runtime.


Supplying parameters at runtime


To supply parameter values at runtime, you can use the
• ParamByName method to assign values to a parameter based on its name (not
available for TADOQuery)
• Params or Parameters property to assign values to a parameter based on the
parameter’s ordinal position within the SQL statement.
• Params.ParamValues or Parameters.ParamValues property to assign values to one or
more parameters in a single command line, based on the name of each parameter
set.
The following code uses ParamByName to assign the text of an edit box to the :Capital
parameter:
SQLQuery1.ParamByName('Capital').AsString := Edit1.Text;
The same code can be rewritten using the Params property, using an index of 0
(assuming the :Capital parameter is the first parameter in the SQL statement):
SQLQuery1.Params[0].AsString := Edit1.Text;
The command line below sets three parameters at once, using the
Params.ParamValues property:
Query1.Params.ParamValues['Name;Capital;Continent'] :=
  VarArrayOf([Edit1.Text, Edit2.Text, Edit3.Text]);
Note that ParamValues uses Variants, avoiding the need to cast values.

Establishing master/detail relationships using parameters


To set up a master/detail relationship where the detail set is a query-type dataset,
you must specify a query that uses parameters. These parameters refer to current
field values on the master dataset. Because the current field values on the master
dataset change dynamically at runtime, you must rebind the detail set’s parameters
every time the master record changes. Although you could write code to do this
using an event handler, all query-type datasets except TIBQuery provide an easier
mechanism using the DataSource property.
If parameter values for a parameterized query are not bound at design time or
specified at runtime, query-type datasets attempt to supply values for them based on
the DataSource property. DataSource identifies a different dataset that is searched for
field names that match the names of unbound parameters. This search dataset can be
any type of dataset. The search dataset must be created and populated before you
create the detail dataset that uses it. If matches are found in the search dataset, the
detail dataset binds the parameter values to the values of the fields in the current
record pointed to by the data source.


To illustrate how this works, consider two tables: a customer table and an orders
table. For every customer, the orders table contains a set of orders that the customer
made. The Customer table includes an ID field that specifies a unique customer ID.
The orders table includes a CustID field that specifies the ID of the customer who
made an order.
The first step is to set up the Customer dataset:
1 Add a table type dataset to your application and bind it to the Customer table.
2 Add a TDataSource component named CustomerSource. Set its DataSet property to
the dataset added in step 1. This data source now represents the Customer dataset.
3 Add a query-type dataset and set its SQL property to
SELECT CustID, OrderNo, SaleDate
FROM Orders
WHERE CustID = :ID
Note that the name of the parameter is the same as the name of the field in the
master (Customer) table.
4 Set the detail dataset’s DataSource property to CustomerSource. Setting this
property makes the detail dataset a linked query.
At runtime the :ID parameter in the SQL statement for the detail dataset is not
assigned a value, so the dataset tries to match the parameter by name against a
column in the dataset identified by CustomerSource. CustomerSource gets its data
from the master dataset, which, in turn, derives its data from the Customer table.
Because the Customer table contains a column called “ID,” the value from the ID
field in the current record of the master dataset is assigned to the :ID parameter for
the detail dataset’s SQL statement. The datasets are linked in a master-detail
relationship. Each time the current record changes in the master dataset, the
detail dataset’s SELECT statement executes to retrieve all orders for the current
customer ID.
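The same linkage can be set up entirely in code. In this sketch, CustomerTable and OrdersQuery are assumed component names (a table type dataset and a TQuery or TSQLQuery, since TIBQuery does not support this mechanism):

```delphi
CustomerTable.Open;                        { master dataset }
CustomerSource.DataSet := CustomerTable;
OrdersQuery.SQL.Text :=
  'SELECT CustID, OrderNo, SaleDate FROM Orders WHERE CustID = :ID';
OrdersQuery.DataSource := CustomerSource;  { binds :ID to the master's ID field }
OrdersQuery.Open;  { re-executes automatically as the master record changes }
```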

Preparing queries
Preparing a query is an optional step that precedes query execution. Preparing a
query submits the SQL statement and its parameters, if any, to the data access layer
and the database server for parsing, resource allocation, and optimization. In some
datasets, the dataset may perform additional setup operations when preparing the
query. These operations improve query performance, making your application
faster, especially when working with updatable queries.
An application can prepare a query by setting the Prepared property to True. If you do
not prepare a query before executing it, the dataset automatically prepares it for you
each time you call Open or ExecSQL. Even though the dataset prepares queries for
you, you can improve performance by explicitly preparing the dataset before you
open it the first time.
CustQuery.Prepared := True;


When you explicitly prepare the dataset, the resources allocated for executing the
statement are not freed until you set Prepared to False.
Set the Prepared property to False if you want to ensure that the dataset is re-prepared
before it executes (for example, if you add a parameter).
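For example (CustQuery and the added WHERE clause are illustrative), changing the statement unprepares the query automatically, after which you can explicitly prepare it again before reopening:

```delphi
{ Changing the SQL automatically closes and unprepares the query }
CustQuery.SQL.Add(' AND Country = :Country');
CustQuery.ParamByName('Country').AsString := 'Mexico';
CustQuery.Prepared := True;   { explicitly re-prepare with the new text }
CustQuery.Open;
```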
Note When you change the text of the SQL property for a query, the dataset automatically
closes and unprepares the query.

Executing queries that don’t return a result set


When a query returns a set of records (such as a SELECT query), you execute the
query the same way you populate any dataset with records: by setting Active to True
or calling the Open method.
However, often SQL commands do not return any records. Such commands include
statements that use Data Definition Language (DDL) or Data Manipulation
Language (DML) statements other than SELECT statements (For example, INSERT,
DELETE, UPDATE, CREATE INDEX, and ALTER TABLE commands do not return
any records).
For all query-type datasets, you can execute a query that does not return a result set
by calling ExecSQL:
CustomerQuery.ExecSQL; { query does not return a result set }
Tip If you are executing the query multiple times, it is a good idea to set the Prepared
property to True.
Although the query does not return any records, you may want to know the number
of records it affected (for example, the number of records deleted by a DELETE
query). The RowsAffected property gives the number of affected records after a call to
ExecSQL.
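A short sketch (DeleteQuery is an assumed TQuery component; ShowMessage requires the Dialogs unit):

```delphi
DeleteQuery.SQL.Text := 'DELETE FROM Orders WHERE SaleDate < :Cutoff';
DeleteQuery.ParamByName('Cutoff').AsDateTime := EncodeDate(1990, 1, 1);
DeleteQuery.ExecSQL;
{ report how many records the DELETE removed }
ShowMessage(Format('%d rows deleted', [DeleteQuery.RowsAffected]));
```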
Tip When you do not know at design time whether the query returns a result set (for
example, if the user supplies the query dynamically at runtime), you can code both
types of query execution in a try...except block. Put a call to the Open
method in the try clause. If the statement is an action query, it still executes when the
query is activated with Open, but an exception is raised because there is no result set.
Check the exception, and suppress it if it merely indicates the lack of a result set. (For
example, TQuery indicates this by an ENoResultSet exception.)
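A sketch of this pattern for a BDE TQuery (Query1 is an assumed component name; ENoResultSet is declared in the DBTables unit):

```delphi
try
  Query1.Open;   { succeeds if the statement returns a result set }
except
  on ENoResultSet do
    ;            { the statement was an action query and has executed; suppress }
  else
    raise;       { re-raise any other exception }
end;
```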

Using unidirectional result sets


When a query-type dataset returns a result set, it also receives a cursor, or pointer to
the first record in that result set. The record pointed to by the cursor is the currently
active record. The current record is the one whose field values are displayed in data-
aware components associated with the result set’s data source. Unless you are using
dbExpress, this cursor is bi-directional by default. A bi-directional cursor can
navigate both forward and backward through its records. Bi-directional cursor
support requires some additional processing overhead, and can slow some queries.


If you do not need to be able to navigate backward through a result set, TQuery and
TIBQuery let you improve query performance by requesting a unidirectional cursor
instead. To request a unidirectional cursor, set the UniDirectional property to True.
Set UniDirectional before preparing and executing a query. The following code
illustrates setting UniDirectional prior to preparing and executing a query:
if not (CustomerQuery.Prepared) then
begin
  CustomerQuery.UniDirectional := True;
  CustomerQuery.Prepared := True;
end;
CustomerQuery.Open; { returns a result set with a one-way cursor }
Note Do not confuse the UniDirectional property with a unidirectional dataset.
Unidirectional datasets (TSQLDataSet, TSQLTable, TSQLQuery, and TSQLStoredProc)
use dbExpress, which only returns unidirectional cursors. In addition to restricting
the ability to navigate backwards, unidirectional datasets do not buffer records, and
so have additional limitations (such as the inability to use filters).

Using stored procedure-type datasets


How your application uses a stored procedure depends on how the stored procedure
was coded, whether and how it returns data, the specific database server used, or a
combination of these factors.
In general terms, to access a stored procedure on a server, an application must:
1 Place the appropriate dataset component in a data module or on a form, and set its
Name property to a unique value appropriate to your application.
2 Identify the database server that defines the stored procedure. Each stored
procedure-type dataset does this differently, but typically you specify a database
connection component:
• For TStoredProc, specify a TDatabase component or a BDE alias using the
DatabaseName property.
• For TADOStoredProc, specify a TADOConnection component using the
Connection property.
• For TSQLStoredProc, specify a TSQLConnection component using the
SQLConnection property.
• For TIBStoredProc, specify a TIBConnection component using the Database
property.
For information about using database connection components, see Chapter 23,
“Connecting to databases.”
3 Specify the stored procedure to execute. For most stored procedure-type datasets,
you do this by setting the StoredProcName property. The one exception is
TADOStoredProc, which has a ProcedureName property instead.


4 If the stored procedure returns a cursor to be used with visual data controls, add a
data source component to the data module, and set its DataSet property to the
stored procedure-type dataset. Connect data-aware components to the data source
using their DataSource and DataField properties.
5 Provide input parameter values for the stored procedure, if necessary. If the server
does not provide information about all stored procedure parameters, you may
need to provide additional input parameter information, such as parameter names
and data types. For information about working with stored procedure parameters,
see “Working with stored procedure parameters” on page 24-51.
6 Execute the stored procedure. For stored procedures that return a cursor, use the
Active property or the Open method. To execute stored procedures that do not
return any results or that only return output parameters, use the ExecProc method
at runtime. If you plan to execute the stored procedure more than once, you may
want to call Prepare to initialize the data access layer and bind parameter values
into the stored procedure. For information about preparing a query, see
“Executing stored procedures that don’t return a result set” on page 24-55.
7 Process any results. These results can be returned as result and output parameters,
or they can be returned as a result set that populates the stored procedure-type
dataset. Some stored procedures return multiple cursors. For details on how to
access the additional cursors, see “Fetching multiple result sets” on page 24-56.
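The steps above can be sketched for a BDE TStoredProc. The alias, procedure name, and parameter names below are illustrative assumptions:

```delphi
var
  Balance: Currency;
begin
  StoredProc1.DatabaseName := 'MyAlias';                 { step 2: BDE alias (assumed) }
  StoredProc1.StoredProcName := 'GET_CUST_BALANCE';      { step 3: procedure (assumed) }
  StoredProc1.ParamByName('CUST_NO').AsInteger := 1510;  { step 5: input parameter }
  StoredProc1.ExecProc;                                  { step 6: no cursor returned }
  Balance := StoredProc1.ParamByName('BALANCE').AsCurrency;  { step 7: output value }
end;
```

A procedure that returns a cursor would instead be activated with Open, as described in step 6.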

Working with stored procedure parameters


There are four types of parameters that can be associated with stored procedures:
• Input parameters, used to pass values to a stored procedure for processing.
• Output parameters, used by a stored procedure to pass return values to an
application.
• Input/output parameters, used to pass values to a stored procedure for processing,
and used by the stored procedure to pass return values to the application.
• A result parameter, used by some stored procedures to return an error or status
value to an application. A stored procedure can only return one result parameter.
Whether a stored procedure uses a particular type of parameter depends both on the
general language implementation of stored procedures on your database server and
on a specific instance of a stored procedure. For any server, individual stored
procedures may or may not use input parameters. On the other hand, some uses of
parameters are server-specific. For example, on MS-SQL Server and Sybase stored
procedures always return a result parameter, but the InterBase implementation of a
stored procedure never returns a result parameter.


Access to stored procedure parameters is provided by the Params property (in
TStoredProc, TSQLStoredProc, TIBStoredProc) or the Parameters property (in
TADOStoredProc). When you assign a value to the StoredProcName (or ProcedureName)
property, the dataset automatically generates objects for each parameter of the stored
procedure. For some datasets, if the stored procedure name is not specified until
runtime, objects for each parameter must be programmatically created at that time.
Not specifying the stored procedure and manually creating the TParam or TParameter
objects allows a single dataset to be used with any number of available stored
procedures.
Note Some stored procedures return a dataset in addition to output and result parameters.
Applications can display dataset records in data-aware controls, but must separately
process output and result parameters.

Setting up parameters at design time


You can specify stored procedure parameter values at design time using the
parameter collection editor. To display the parameter collection editor, click on the
ellipsis button for the Params or Parameters property in the Object Inspector.
Important You can assign values to input parameters by selecting them in the parameter
collection editor and using the Object Inspector to set the Value property. However,
do not change the names or data types for input parameters reported by the server.
Otherwise, when you execute the stored procedure an exception is raised.
Some servers do not report parameter names or data types. In these cases, you must
set up the parameters manually using the parameter collection editor. Right-click and
choose Add to add parameters. For each parameter you add, you must fully describe
the parameter. Even if you do not need to add any parameters, you should check the
properties of individual parameter objects to ensure that they are correct.
If the dataset has a Params property (TParam objects), the following properties must
be correctly specified:
• The Name property indicates the name of the parameter as it is defined by the
stored procedure.
• The DataType property gives the data type for the parameter’s value. When using
TSQLStoredProc, some data types require additional information:
• The NumericScale property indicates the number of decimal places for numeric
parameters.
• The Precision property indicates the total number of digits for numeric
parameters.
• The Size property indicates the number of characters in string parameters.


• The ParamType property indicates the type of the selected parameter. This can be
ptInput (for input parameters), ptOutput (for output parameters), ptInputOutput
(for input/output parameters) or ptResult (for result parameters).
• The Value property specifies a value for the selected parameter. You can never set
values for output and result parameters. These types of parameters have values
set by the execution of the stored procedure. For input and input/output
parameters, you can leave Value blank if your application supplies parameter
values at runtime.
If the dataset uses a Parameters property (TParameter objects), the following properties
must be correctly specified:
• The Name property indicates the name of the parameter as it is defined by the
stored procedure.
• The DataType property gives the data type for the parameter’s value. For some
data types, you must provide additional information:
• The NumericScale property indicates the number of decimal places for numeric
parameters.
• The Precision property indicates the total number of digits for numeric
parameters.
• The Size property indicates the number of characters in string parameters.
• The Direction property gives the type of the selected parameter. This can be
pdInput (for input parameters), pdOutput (for output parameters), pdInputOutput
(for input/output parameters) or pdReturnValue (for result parameters).
• The Attributes property controls the type of values the parameter will accept.
Attributes may be set to a combination of psSigned, psNullable, and psLong.
• The Value property specifies a value for the selected parameter. Do not set values
for output and result parameters. For input and input/output parameters, you can
leave Value blank if your application supplies parameter values at runtime.


Using parameters at runtime


With some datasets, if the name of the stored procedure is not specified until
runtime, no TParam objects are automatically created for parameters and they must
be created programmatically. This can be done using the TParam.Create method or
the TParams.AddParam method:
var
  P1, P2: TParam;
begin
  ...
  with StoredProc1 do begin
    StoredProcName := 'GET_EMP_PROJ';
    Params.Clear;
    P1 := TParam.Create(Params, ptInput);
    P2 := TParam.Create(Params, ptOutput);
    try
      Params[0].Name := 'EMP_NO';
      Params[1].Name := 'PROJ_ID';
      ParamByName('EMP_NO').AsSmallInt := 52;
      ExecProc;
      Edit1.Text := ParamByName('PROJ_ID').AsString;
    finally
      P1.Free;
      P2.Free;
    end;
  end;
  ...
end;
Even if you do not need to add the individual parameter objects at runtime, you may
want to access individual parameter objects to assign values to input parameters and
to retrieve values from output parameters. You can use the dataset’s ParamByName
method to access individual parameters based on their names. For example, the
following code sets the value of an input/output parameter, executes the stored
procedure, and retrieves the returned value:
with SQLStoredProc1 do
begin
  ParamByName('IN_OUTVAR').AsInteger := 103;
  ExecProc;
  IntegerVar := ParamByName('IN_OUTVAR').AsInteger;
end;


Preparing stored procedures


As with query-type datasets, stored procedure-type datasets must be prepared
before they execute the stored procedure. Preparing a stored procedure tells the data
access layer and the database server to allocate resources for the stored procedure
and to bind parameters. These operations can improve performance.
If you attempt to execute a stored procedure before preparing it, the dataset
automatically prepares it for you, and then unprepares it after it executes. If you plan
to execute a stored procedure a number of times, it is more efficient to explicitly
prepare it by setting the Prepared property to True.
MyProc.Prepared := True;
When you explicitly prepare the dataset, the resources allocated for executing the
stored procedure are not freed until you set Prepared to False.
Set the Prepared property to False if you want to ensure that the dataset is re-prepared
before it executes (for example, if you change the parameters when using Oracle
overloaded procedures).
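As a sketch of the pattern described above, a stored procedure executed repeatedly can be prepared once and unprepared afterward. All component, procedure, parameter, and data names here are assumptions for illustration:

```pascal
// Assumes a stored procedure dataset named StoredProc1 already bound to a
// procedure with an EMP_NO input parameter; these names are hypothetical.
const
  EmpNos: array[0..2] of SmallInt = (52, 53, 54);
var
  I: Integer;
begin
  StoredProc1.Prepared := True;     // allocate resources and bind parameters once
  try
    for I := Low(EmpNos) to High(EmpNos) do
    begin
      StoredProc1.ParamByName('EMP_NO').AsSmallInt := EmpNos[I];
      StoredProc1.ExecProc;         // executes without re-preparing each time
    end;
  finally
    StoredProc1.Prepared := False;  // release the allocated resources
  end;
end;
```

Wrapping the loop in a try...finally block ensures the resources are released even if one of the executions raises an exception.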

Executing stored procedures that don’t return a result set


When a stored procedure returns a cursor, you execute it the same way you populate
any dataset with records: by setting Active to True or calling the Open method.
However, often stored procedures do not return any data, or only return results in
output parameters. You can execute a stored procedure that does not return a result
set by calling ExecProc. After executing the stored procedure, you can use the
ParamByName method to read the value of the result parameter or of any output
parameters:
MyStoredProcedure.ExecProc; { does not return a result set }
Edit1.Text := MyStoredProcedure.ParamByName('OUTVAR').AsString;
Note TADOStoredProc does not have a ParamByName method. To obtain output parameter
values when using ADO, access parameter objects using the Parameters property.
Tip If you are executing the procedure multiple times, it is a good idea to set the Prepared
property to True.
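For ADO, the equivalent of the example above might look like the following sketch. The procedure and parameter names are assumptions; only the use of the Parameters property reflects the TADOStoredProc interface:

```pascal
// Hypothetical procedure and parameter names.
ADOStoredProc1.ProcedureName := 'GET_DEPT_BUDGET';
ADOStoredProc1.ExecProc;  { does not return a result set }
// TADOStoredProc exposes its parameters through the Parameters property,
// whose ParamByName method returns a parameter object with a Variant Value.
Edit1.Text := ADOStoredProc1.Parameters.ParamByName('BUDGET').Value;
```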


Fetching multiple result sets


Some stored procedures return multiple sets of records. The dataset only fetches the
first set when you open it. If you are using TSQLStoredProc or TADOStoredProc, you
can access the other sets of records by calling the NextRecordSet method:
var
DataSet2: TCustomSQLDataSet;
begin
DataSet2 := SQLStoredProc1.NextRecordSet;
...
In TSQLStoredProc, NextRecordSet returns a newly created TCustomSQLDataSet
component that provides access to the next set of records. In TADOStoredProc,
NextRecordset returns an interface that can be assigned to the RecordSet property of an
existing ADO dataset. For either class, the method returns the number of records in
the returned dataset as an output parameter.
The first time you call NextRecordSet, it returns the second set of records. Calling
NextRecordSet again returns a third dataset, and so on, until there are no more sets of
records. When there are no additional cursors, NextRecordSet returns nil.
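The calls described above can be combined into a loop that processes every result set until NextRecordSet returns nil. A sketch for TSQLStoredProc, with the record-processing code omitted:

```pascal
var
  DS: TCustomSQLDataSet;
begin
  SQLStoredProc1.Open;                  // fetches the first set of records
  try
    // ... process SQLStoredProc1 ...
    DS := SQLStoredProc1.NextRecordSet; // second set of records, or nil
    while DS <> nil do
    begin
      try
        // ... process DS ...
      finally
        DS.Free;                        // NextRecordSet creates a new dataset
      end;
      DS := SQLStoredProc1.NextRecordSet;
    end;
  finally
    SQLStoredProc1.Close;
  end;
end;
```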



Chapter 25

Working with field components
This chapter describes the properties, events, and methods common to the TField
object and its descendants. Field components represent individual fields (columns) in
datasets. This chapter also describes how to use field components to control the
display and editing of data in your applications.
Field components are always associated with a dataset. You never use a TField object
directly in your applications. Instead, each field component in your application is a
TField descendant specific to the datatype of a column in a dataset. Field components
provide data-aware controls such as TDBEdit and TDBGrid access to the data in a
particular column of the associated dataset.
Generally speaking, a single field component represents the characteristics of a single
column, or field, in a dataset, such as its data type and size. It also represents the
field’s display characteristics, such as alignment, display format, and edit format. For
example, a TFloatField component has four properties that directly affect the
appearance of its data:

Table 25.1 TFloatField properties that affect data display


Property Purpose
Alignment Specifies whether data is displayed left-aligned, centered, or right-aligned.
DisplayWidth Specifies the number of digits to display in a control at one time.
DisplayFormat Specifies data formatting for display (such as how many decimal places to
show).
EditFormat Specifies how to display a value during editing.

As you scroll from record to record in a dataset, a field component lets you view and
change the value for that field in the current record.


Field components have many properties in common with one another (such as
DisplayWidth and Alignment), and they have properties specific to their data types
(such as Precision for TFloatField). Each of these properties affects how data appears to
an application’s users on a form. Some properties, such as Precision, can also affect
what data values the user can enter in a control when modifying or entering data.
All field components for a dataset are either dynamic (automatically generated for
you based on the underlying structure of database tables), or persistent (generated
based on specific field names and properties you set in the Fields editor). Dynamic
and persistent fields have different strengths and are appropriate for different types
of applications. The following sections describe dynamic and persistent fields in
more detail and offer advice on choosing between them.

Dynamic field components


Dynamically generated field components are the default. In fact, all field components
for any dataset start out as dynamic fields the first time you place a dataset on a data
module, specify how that dataset fetches its data, and open it. A field component is
dynamic if it is created automatically based on the underlying physical characteristics
of the data represented by a dataset. Datasets generate one field component for each
column in the underlying data. The exact TField descendant created for each column
is determined by field type information received from the database or (for
TClientDataSet) from a provider component.
Dynamic fields are temporary. They exist only as long as a dataset is open. Each time
you reopen a dataset that uses dynamic fields, it rebuilds a completely new set of
dynamic field components based on the current structure of the data underlying the
dataset. If the columns in the underlying data change, then the next time you open a
dataset that uses dynamic field components, the automatically generated field
components are also changed to match.
Use dynamic fields in applications that must be flexible about data display and
editing. For example, to create a database browsing tool such as SQL Explorer, you
must use dynamic fields because every database table has different numbers and
types of columns. You might also want to use dynamic fields in applications where
user interaction with data mostly takes place inside grid components and you know
that the datasets used by the application change frequently.
To use dynamic fields in an application:
1 Place datasets and data sources in a data module.
2 Associate the datasets with data. This involves using a connection component or
provider to connect to the source of the data and setting any properties that
specify what data the dataset represents.
3 Associate the data sources with the datasets.


4 Place data-aware controls in the application’s forms, add the data module to each
uses clause for each form’s unit, and associate each data-aware control with a data
source in the module. In addition, associate a field with each data-aware control
that requires one. Note that because you are using dynamic field components,
there is no guarantee that any field name you specify will exist when the dataset is
opened.
5 Open the datasets.
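Because dynamic fields carry no guarantee that a given field name exists, runtime code should test for a field before using it. A minimal sketch, in which the data module, dataset, and column names are assumptions:

```pascal
var
  Field: TField;
begin
  CustomerData.Customers.Open;
  // FindField returns nil rather than raising an exception when the
  // column is absent, which suits dynamically generated fields.
  Field := CustomerData.Customers.FindField('COMPANY');
  if Field <> nil then
    Edit1.Text := Field.AsString;
end;
```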
Aside from ease of use, dynamic fields can be limiting. Without writing code, you
cannot change the display and editing defaults for dynamic fields, you cannot safely
change the order in which dynamic fields are displayed, and you cannot prevent
access to any fields in the dataset. You cannot create additional fields for the dataset,
such as calculated fields or lookup fields, and you cannot override a dynamic field’s
default data type. To gain control and flexibility over fields in your database
applications, you need to invoke the Fields editor to create persistent field
components for your datasets.

Persistent field components


By default, dataset fields are dynamic. Their properties and availability are
automatically set and cannot be changed in any way. To gain control over a field’s
properties and events you must create persistent fields for the dataset. Persistent
fields let you
• Set or change the field’s display or edit characteristics at design time or runtime.
• Create new fields, such as lookup fields, calculated fields, and aggregated fields,
that base their values on existing fields in a dataset.
• Validate data entry.
• Remove field components from the list of persistent components to prevent your
application from accessing particular columns in an underlying database.
• Define new fields to replace existing fields, based on columns in the table or query
underlying a dataset.
At design time, you can—and should—use the Fields editor to create persistent lists
of the field components used by the datasets in your application. Persistent field
component lists are stored in your application, and do not change even if the
structure of a database underlying a dataset is changed. Once you create persistent
fields with the Fields editor, you can also create event handlers for them that respond
to changes in data values and that validate data entries.
Note When you create persistent fields for a dataset, only those fields you select are
available to your application at design time and runtime. At design time, you can
always use the Fields editor to add or remove persistent fields for a dataset.


All fields used by a single dataset are either persistent or dynamic. You cannot mix
field types in a single dataset. If you create persistent fields for a dataset, and then
want to revert to dynamic fields, you must remove all persistent fields from the
dataset. For more information about dynamic fields, see “Dynamic field
components” on page 25-2.
Note One of the primary uses of persistent fields is to gain control over the appearance and
display of data. You can also control the appearance of columns in data-aware grids.
To learn about controlling column appearance in grids, see “Creating a customized
grid” on page 20-17.

Creating persistent fields


Persistent field components created with the Fields editor provide efficient, readable,
and type-safe programmatic access to underlying data. Using persistent field
components guarantees that each time your application runs, it always uses and
displays the same columns, in the same order even if the physical structure of the
underlying database has changed. Data-aware components and program code that
rely on specific fields always work as expected. If a column on which a persistent
field component is based is deleted or changed, Delphi generates an exception rather
than running the application against a nonexistent column or mismatched data.
To create persistent fields for a dataset:
1 Place a dataset in a data module.
2 Bind the dataset to its underlying data. This typically involves associating the
dataset with a connection component or provider and specifying any properties to
describe the data. For example, if you are using TADODataSet, you can set the
Connection property to a properly configured TADOConnection component and set
the CommandText property to a valid query.
3 Double-click the dataset component in the data module to invoke the Fields editor.
The Fields editor contains a title bar, navigator buttons, and a list box.
The title bar of the Fields editor displays both the name of the data module or form
containing the dataset, and the name of the dataset itself. For example, if you open
the Customers dataset in the CustomerData data module, the title bar displays
‘CustomerData.Customers,’ or as much of the name as fits.
Below the title bar is a set of navigation buttons that let you scroll one-by-one
through the records in an active dataset at design time, and to jump to the first or
last record. The navigation buttons are dimmed if the dataset is not active or if the
dataset is empty. If the dataset is unidirectional, the buttons for moving to the last
record and the previous record are always dimmed.
The list box displays the names of persistent field components for the dataset. The
first time you invoke the Fields editor for a new dataset, the list is empty because
the field components for the dataset are dynamic, not persistent. If you invoke the
Fields editor for a dataset that already has persistent field components, you see the
field component names in the list box.


4 Choose Add Fields from the Fields editor context menu.


5 Select the fields to make persistent in the Add Fields dialog box. By default, all
fields are selected when the dialog box opens. Any fields you select become
persistent fields.
The Add Fields dialog box closes, and the fields you selected appear in the Fields
editor list box. Fields in the Fields editor list box are persistent. If the dataset is active,
note, too, that the Next and (if the dataset is not unidirectional) Last navigation
buttons above the list box are enabled.
From now on, each time you open the dataset, it no longer creates dynamic field
components for every column in the underlying database. Instead it only creates
persistent components for the fields you specified.
Each time you open the dataset, it verifies that each non-calculated persistent field
exists or can be created from data in the database. If it cannot, the dataset raises an
exception warning you that the field is not valid, and does not open the dataset.

Arranging persistent fields


The order in which persistent field components are listed in the Fields editor list box
is the default order in which the fields appear in a data-aware grid component. You
can change field order by dragging and dropping fields in the list box.
To change the order of fields:
1 Select the fields. You can select and order one or more fields at a time.
2 Drag the fields to a new location.
If you select a noncontiguous set of fields and drag them to a new location, they are
inserted as a contiguous block. Within the block, the order of fields does not change.
Alternatively, you can select the field, and use Ctrl+Up and Ctrl+Dn to change an
individual field’s order in the list.
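The same reordering can also be done in code through the field component's Index property. A one-line sketch, assuming a persistent field named CustomersCompany:

```pascal
// Move the Company field to the first column shown in a data-aware grid.
CustomersCompany.Index := 0;
```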

Defining new persistent fields


Besides making existing dataset fields into persistent fields, you can also create
special persistent fields as additions to or replacements of the other persistent fields
in a dataset.
New persistent fields that you create are only for display purposes. The data they
contain at runtime are not retained either because they already exist elsewhere in the
database, or because they are temporary. The physical structure of the data
underlying the dataset is not changed in any way.
To create a new persistent field component, invoke the context menu for the Fields
editor and choose New field. The New Field dialog box appears.


The New Field dialog box contains three group boxes: Field properties, Field type,
and Lookup definition.
• The Field properties group box lets you enter general field component
information. Enter the field name in the Name edit box. The name you enter here
corresponds to the field component’s FieldName property. The New Field dialog
uses this name to build a component name in the Component edit box. The name
that appears in the Component edit box corresponds to the field component’s
Name property and is only provided for informational purposes (Name is the
identifier by which you refer to the field component in your source code). The
dialog discards anything you enter directly in the Component edit box.
• The Type combo box in the Field properties group lets you specify the field
component’s data type. You must supply a data type for any new field component
you create. For example, to display floating-point currency values in a field, select
Currency from the drop-down list. Use the Size edit box to specify the maximum
number of characters that can be displayed or entered in a string-based field, or
the size of Bytes and VarBytes fields. For all other data types, Size is meaningless.
• The Field type radio group lets you specify the type of new field component to
create. The default type is Data. If you choose Lookup, the Dataset and Key
Fields combo boxes in the Lookup definition group box are enabled. You can also
create Calculated fields, and if you are working with a client dataset, you can
create InternalCalc fields or Aggregate fields. The following table describes these
types of fields you can create:

Table 25.2 Special persistent field kinds


Field kind Purpose
Data Replaces an existing field (for example to change its data type)
Calculated Displays values calculated at runtime by a dataset’s OnCalcFields event handler.
Lookup Retrieve values from a specified dataset at runtime based on search criteria you
specify. (not supported by unidirectional datasets)
InternalCalc Displays values calculated at runtime by a client dataset and stored with its
data.
Aggregate Displays a value summarizing the data in a set of records from a client dataset.

The Lookup definition group box is only used to create lookup fields. This is described more
fully in “Defining a lookup field” on page 25-9.

Defining a data field


A data field replaces an existing field in a dataset. For example, for programmatic
reasons you might want to replace a TSmallIntField with a TIntegerField. Because you
cannot change a field’s data type directly, you must define a new field to replace it.
Important Even though you define a new field to replace an existing field, the field you define
must derive its data values from an existing column in a table underlying a dataset.


To create a replacement data field for a field in a table underlying a dataset, follow
these steps:
1 Remove the field from the list of persistent fields assigned for the dataset, and then
choose New Field from the context menu.
2 In the New Field dialog box, enter the name of an existing field in the database
table in the Name edit box. Do not enter a new field name. You are actually
specifying the name of the field from which your new field will derive its data.
3 Choose a new data type for the field from the Type combo box. The data type you
choose should be different from the data type of the field you are replacing. You
cannot replace a string field of one size with a string field of another size. Note that
while the data type should be different, it must be compatible with the actual data
type of the field in the underlying table.
4 Enter the size of the field in the Size edit box, if appropriate. Size is only relevant
for fields of type TStringField, TBytesField, and TVarBytesField.
5 Select Data in the Field type radio group if it is not already selected.
6 Choose OK. The New Field dialog box closes, the newly defined data field
replaces the existing field you specified in Step 1, and the component declaration
in the data module or form’s type declaration is updated.
To edit the properties or events associated with the field component, select the
component name in the Field editor list box, then edit its properties or events with
the Object Inspector. For more information about editing field component properties
and events, see “Setting persistent field properties and events” on page 25-11.

Defining a calculated field


A calculated field displays values calculated at runtime by a dataset’s OnCalcFields
event handler. For example, you might create a string field that displays
concatenated values from other fields.
To create a calculated field in the New Field dialog box:
1 Enter a name for the calculated field in the Name edit box. Do not enter the name
of an existing field.
2 Choose a data type for the field from the Type combo box.
3 Enter the size of the field in the Size edit box, if appropriate. Size is only relevant
for fields of type TStringField, TBytesField, and TVarBytesField.
4 Select Calculated or InternalCalc in the Field type radio group. InternalCalc is only
available if you are working with a client dataset. The significant difference
between these types of calculated fields is that the values calculated for an
InternalCalc field are stored and retrieved as part of the client dataset’s data.


5 Choose OK. The newly defined calculated field is automatically added to the end
of the list of persistent fields in the Field editor list box, and the component
declaration is automatically added to the form’s or data module’s type
declaration.
6 Place code that calculates values for the field in the OnCalcFields event handler for
the dataset. For more information about writing code to calculate field values, see
“Programming a calculated field” on page 25-8.
Note To edit the properties or events associated with the field component, select the
component name in the Field editor list box, then edit its properties or events with
the Object Inspector. For more information about editing field component properties
and events, see “Setting persistent field properties and events” on page 25-11.

Programming a calculated field


After you define a calculated field, you must write code to calculate its value.
Otherwise, it always has a null value. Code for a calculated field is placed in the
OnCalcFields event for its dataset.
To program a value for a calculated field:
1 Select the dataset component from the Object Inspector drop-down list.
2 Choose the Object Inspector Events page.
3 Double-click the OnCalcFields property to bring up or create a CalcFields procedure
for the dataset component.
4 Write the code that sets the values and other properties of the calculated field as
desired.
For example, suppose you have created a CityStateZip calculated field for the
Customers table on the CustomerData data module. CityStateZip should display a
company’s city, state, and zip code on a single line in a data-aware control.
To add code to the CalcFields procedure for the Customers table, select the Customers
table from the Object Inspector drop-down list, switch to the Events page, and
double-click the OnCalcFields property.
The TCustomerData.CustomersCalcFields procedure appears in the unit’s source code
window. Add the following code to the procedure to calculate the field:
CustomersCityStateZip.Value := CustomersCity.Value + ', ' + CustomersState.Value
+ ' ' + CustomersZip.Value;
Note When writing the OnCalcFields event handler for an internally calculated field, you
can improve performance by checking the client dataset’s State property and only
recomputing the value when State is dsInternalCalc. See “Using internally calculated
fields in client datasets” on page 29-11 for details.
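A sketch of such a handler, assuming both a plain calculated field and an internally calculated field; all field names and the discount calculation are hypothetical:

```pascal
procedure TCustomerData.CustomersCalcFields(DataSet: TDataSet);
begin
  // Recompute the plain calculated field on every pass.
  CustomersCityStateZip.Value := CustomersCity.Value + ', ' +
    CustomersState.Value + ' ' + CustomersZip.Value;
  // An InternalCalc field's value is stored with the client dataset's data,
  // so it only needs recomputing when State is dsInternalCalc.
  if DataSet.State = dsInternalCalc then
    CustomersDiscountedTotal.Value := CustomersTotal.Value * 0.9;
end;
```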


Defining a lookup field


A lookup field is a read-only field that displays values at runtime based on search
criteria you specify. In its simplest form, a lookup field is passed the name of an
existing field to search on, a field value to search for, and a different field in a lookup
dataset whose value it should display.
For example, consider a mail-order application that enables an operator to use a
lookup field to determine automatically the city and state that correspond to the zip
code a customer provides. The column to search on might be called ZipTable.Zip, the
value to search for is the customer’s zip code as entered in Order.CustZip, and the
values to return would be those for the ZipTable.City and ZipTable.State columns of
the record where the value of ZipTable.Zip matches the current value in the
Order.CustZip field.
Note Unidirectional datasets do not support lookup fields.
To create a lookup field in the New Field dialog box:
1 Enter a name for the lookup field in the Name edit box. Do not enter the name of
an existing field.
2 Choose a data type for the field from the Type combo box.
3 Enter the size of the field in the Size edit box, if appropriate. Size is only relevant
for fields of type TStringField, TBytesField, and TVarBytesField.
4 Select Lookup in the Field type radio group. Selecting Lookup enables the Dataset
and Key Fields combo boxes.
5 Choose from the Dataset combo box drop-down list the dataset in which to look
up field values. The lookup dataset must be different from the dataset for the field
component itself, or a circular reference exception is raised at runtime. Specifying
a lookup dataset enables the Lookup Keys and Result Field combo boxes.
6 Choose from the Key Fields drop-down list a field in the current dataset for which
to match values. To match more than one field, enter field names directly instead
of choosing from the drop-down list. Separate multiple field names with
semicolons. If you are using more than one field, you must use persistent field
components.
7 Choose from the Lookup Keys drop-down list a field in the lookup dataset to
match against the Key Fields field you specified in step 6. If you specified more
than one key field, you must specify the same number of lookup keys. To specify
more than one field, enter field names directly, separating multiple field names
with semicolons.
8 Choose from the Result Field drop-down list a field in the lookup dataset to return
as the value of the lookup field you are creating.
When you design and run your application, lookup field values are determined
before calculated field values are calculated. You can base calculated fields on lookup
fields, but you cannot base lookup fields on calculated fields.


You can use the LookupCache property to hone the way lookup fields are determined.
LookupCache determines whether the values of a lookup field are cached in memory
when a dataset is first opened, or looked up dynamically every time the current
record in the dataset changes. Set LookupCache to True to cache the values of a lookup
field when the LookupDataSet is unlikely to change and the number of distinct lookup
values is small. Caching lookup values can speed performance, because the lookup
values for every set of LookupKeyFields values are preloaded when the DataSet is
opened. When the current record in the DataSet changes, the field object can locate its
Value in the cache, rather than accessing the LookupDataSet. This performance
improvement is especially dramatic if the LookupDataSet is on a network where
access is slow.
Tip You can use a lookup cache to provide lookup values programmatically rather than
from a secondary dataset. Be sure that the LookupDataSet property is nil. Then, use the
LookupList property’s Add method to fill it with lookup values. Set the LookupCache
property to True. The field will use the supplied lookup list without overwriting it
with values from a lookup dataset.
If every record of DataSet has different values for KeyFields, the overhead of locating
values in the cache can be greater than any performance benefit provided by the
cache. The overhead of locating values in the cache increases with the number of
distinct values that can be taken by KeyFields.
If LookupDataSet is volatile, caching lookup values can lead to inaccurate results. Call
RefreshLookupList to update the values in the lookup cache. RefreshLookupList
regenerates the LookupList property, which contains the value of the LookupResultField
for every set of LookupKeyFields values.
When setting LookupCache at runtime, call RefreshLookupList to initialize the cache.
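The technique in the tip above might be sketched as follows; the lookup field name and the key/value pairs are assumptions:

```pascal
// Supply lookup values programmatically instead of from a lookup dataset.
with OrdersPaymentDesc do  // a hypothetical persistent lookup field
begin
  LookupDataSet := nil;                 // no secondary dataset
  LookupList.Add('CC', 'Credit card');  // key, display value
  LookupList.Add('PO', 'Purchase order');
  LookupList.Add('COD', 'Cash on delivery');
  LookupCache := True;                  // use the supplied list as the cache
end;
```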

Defining an aggregate field


An aggregate field displays values from a maintained aggregate in a client dataset.
An aggregate is a calculation that summarizes the data in a set of records. See “Using
maintained aggregates” on page 29-11 for details about maintained aggregates.
To create an aggregate field in the New Field dialog box:
1 Enter a name for the aggregate field in the Name edit box. Do not enter the name
of an existing field.
2 Choose aggregate data type for the field from the Type combo box.
3 Select Aggregate in the Field type radio group.
4 Choose OK. The newly defined aggregate field is automatically added to the client
dataset and its Aggregates property is automatically updated to include the
appropriate aggregate specification.
5 Place the calculation for the aggregate in the ExprText property of the newly
created aggregate field. For more information about defining an aggregate, see
“Specifying aggregates” on page 29-12.


Once a persistent TAggregateField is created, a TDBText control can be bound to the
aggregate field. The TDBText control will then display the value of the aggregate
field that is relevant to the current record of the underlying client dataset.

Deleting persistent field components


Deleting a persistent field component is useful for accessing a subset of available
columns in a table, and for defining your own persistent fields to replace a column in
a table. To remove one or more persistent field components for a dataset:
1 Select the field(s) to remove in the Fields editor list box.
2 Press Del.
Note You can also delete selected fields by invoking the context menu and choosing
Delete.
Fields you remove are no longer available to the dataset and cannot be displayed by
data-aware controls. You can always recreate a persistent field component that you
delete by accident, but any changes previously made to its properties or events is
lost. For more information, see “Creating persistent fields” on page 25-4.
Note If you remove all persistent field components for a dataset, the dataset reverts to
using dynamic field components for every column in the underlying database table.

Setting persistent field properties and events


You can set properties and customize events for persistent field components at
design time. Properties control the way a field is displayed by a data-aware
component, for example, whether it can appear in a TDBGrid, or whether its value
can be modified. Events control what happens when data in a field is fetched,
changed, set, or validated.
To set the properties of a field component or write customized event handlers for it,
select the component in the Fields editor, or select it from the component list in the
Object Inspector.

Setting display and edit properties at design time


To edit the display properties of a selected field component, switch to the Properties
page on the Object Inspector window. The following table summarizes display
properties that can be edited.

Table 25.3 Field component properties


Property Purpose
Alignment Left justifies, right justifies, or centers a field's contents within a data-
aware component.
ConstraintErrorMessage Specifies the text to display when edits clash with a constraint condition.
CustomConstraint Specifies a local constraint to apply to data during editing.

Currency Numeric fields only.
True: displays monetary values.
False (default): does not display monetary values.
DisplayFormat Specifies the format of data displayed in a data-aware component.
DisplayLabel Specifies the column name for a field in a data-aware grid component.
DisplayWidth Specifies the width, in characters, of a grid column that displays this field.
EditFormat Specifies the edit format of data in a data-aware component.
EditMask Limits data-entry in an editable field to specified types and ranges of
characters, and specifies any special, non-editable characters that appear
within the field (hyphens, parentheses, and so on).
FieldKind Specifies the type of field to create.
FieldName Specifies the actual name of a column in the table from which the field
derives its value and data type.
HasConstraints Indicates whether there are constraint conditions imposed on a field.
ImportedConstraint Specifies an SQL constraint imported from the Data Dictionary or an
SQL server.
Index Specifies the order of the field in a dataset.
LookupDataSet Specifies the table used to look up field values when Lookup is True.
LookupKeyFields Specifies the field(s) in the lookup dataset to match when doing a
lookup.
LookupResultField Specifies the field in the lookup dataset from which to copy values into
this field.
MaxValue Numeric fields only. Specifies the maximum value a user can enter for
the field.
MinValue Numeric fields only. Specifies the minimum value a user can enter for
the field.
Name Specifies the component name of the field component within Delphi.
Origin Specifies the name of the field as it appears in the underlying database.
Precision Numeric fields only. Specifies the number of significant digits.
ReadOnly True: Displays field values in data-aware controls, but prevents editing.
False (the default): Permits display and editing of field values.
Size Specifies the maximum number of characters that can be displayed or
entered in a string-based field, or the size, in bytes, of TBytesField and
TVarBytesField fields.
Tag Long integer bucket available for programmer use in every component
as needed.
Transliterate True (default): specifies that translation to and from the respective
locales will occur as data is transferred between a dataset and a
database.
False: Locale translation does not occur.
Visible True (the default): Permits display of field in a data-aware grid.
False: Prevents display of field in a data-aware grid component.
User-defined components can make display decisions based on this
property.

25-12 Developer’s Guide



Not all properties are available for all field components. For example, a field
component of type TStringField does not have Currency, MaxValue, or DisplayFormat
properties, and a component of type TFloatField does not have a Size property.
While the purpose of most properties is straightforward, some properties, such as
Calculated, require additional programming steps to be useful. Others, such as
DisplayFormat, EditFormat, and EditMask, are interrelated; their settings must be
coordinated. For more information about using DisplayFormat, EditFormat, and
EditMask, see “Controlling and masking user input” on page 25-15.

Setting field component properties at runtime


You can use and manipulate the properties of field components at runtime. Access
persistent field components by name, where the name can be obtained by
concatenating the field name to the dataset name.
For example, the following code sets the ReadOnly property for the CityStateZip field
in the Customers table to True:
CustomersCityStateZip.ReadOnly := True;
And this statement changes field ordering by setting the Index property of the
CityStateZip field in the Customers table to 3:
CustomersCityStateZip.Index := 3;

Creating attribute sets for field components


When several fields in the datasets used by your application share common
formatting properties (such as Alignment, DisplayWidth, DisplayFormat, EditFormat,
MaxValue, MinValue, and so on), it is more convenient to set the properties for a
single field, then store those properties as an attribute set in the Data Dictionary.
Attribute sets stored in the data dictionary can be easily applied to other fields.
Note Attribute sets and the Data Dictionary are only available for BDE-enabled datasets.
To create an attribute set based on a field component in a dataset:
1 Double-click the dataset to invoke the Fields editor.
2 Select the field for which to set properties.
3 Set the desired properties for the field in the Object Inspector.
4 Right-click the Fields editor list box to invoke the context menu.
5 Choose Save Attributes to save the current field’s property settings as an attribute
set in the Data Dictionary.
The name for the attribute set defaults to the name of the current field. You can
specify a different name for the attribute set by choosing Save Attributes As instead
of Save Attributes from the context menu.


Once you have created a new attribute set and added it to the Data Dictionary, you
can then associate it with other persistent field components. Even if you later remove
the association, the attribute set remains defined in the Data Dictionary.
Note You can also create attribute sets directly from the SQL Explorer. When you create an
attribute set using SQL Explorer, it is added to the Data Dictionary, but not applied to
any fields. SQL Explorer lets you specify two additional attributes: a field type (such
as TFloatField, TStringField, and so on) and a data-aware control (such as TDBEdit,
TDBCheckBox, and so on) that are automatically placed on a form when a field based
on the attribute set is dragged to the form. For more information, see the online help
for the SQL Explorer.

Associating attribute sets with field components


When several fields in the datasets used by your application share common
formatting properties (such as Alignment, DisplayWidth, DisplayFormat, EditFormat,
MaxValue, MinValue, and so on), and you have saved those property settings as
attribute sets in the Data Dictionary, you can easily apply the attribute sets to fields
without having to recreate the settings manually for each field. In addition, if you
later change the attribute settings in the Data Dictionary, those changes are
automatically applied to every field associated with the set the next time field
components are added to the dataset.
To apply an attribute set to a field component:
1 Double-click the dataset to invoke the Fields editor.
2 Select the field for which to apply an attribute set.
3 Invoke the context menu and choose Associate Attributes.
4 Select or enter the attribute set to apply from the Associate Attributes dialog box. If
there is an attribute set in the Data Dictionary that has the same name as the
current field, that set name appears in the edit box.
Important If the attribute set in the Data Dictionary is changed at a later date, you must reapply
the attribute set to each field component that uses it. You can invoke the Fields editor
and multi-select field components within a dataset when reapplying attributes.

Removing attribute associations


If you change your mind about associating an attribute set with a field, you can
remove the association by following these steps:
1 Invoke the Fields editor for the dataset containing the field.
2 Select the field or fields from which to remove the attribute association.
3 Invoke the context menu for the Fields editor and choose Unassociate Attributes.
Important Unassociating an attribute set does not change any field properties. A field retains
the settings it had when the attribute set was applied to it. To change these
properties, select the field in the Fields editor and set its properties in the Object
Inspector.


Controlling and masking user input


The EditMask property provides a way to control the type and range of values a user
can enter into a data-aware component associated with TStringField, TDateField,
TTimeField, TDateTimeField, and TSQLTimeStampField components. You can use
existing masks or create your own. The easiest way to use and create edit masks is
with the Input Mask editor. You can, however, enter masks directly into the EditMask
field in the Object Inspector.
Note For TStringField components, the EditMask property is also its display format.
To invoke the Input Mask editor for a field component:
1 Select the component in the Fields editor or Object Inspector.
2 Click the Properties page in the Object Inspector.
3 Double-click the values column for the EditMask field in the Object Inspector, or
click the ellipsis button. The Input Mask editor opens.
The Input Mask edit box lets you create and edit a mask format. The Sample Masks
grid lets you select from predefined masks. If you select a sample mask, the mask
format appears in the Input Mask edit box where you can modify it or use it as is.
You can test the allowable user input for a mask in the Test Input edit box.
The Masks button enables you to load a custom set of masks—if you have created
one—into the Sample Masks grid for easy selection.
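You can also assign a mask in code. For example, the following statement applies the standard U.S. phone mask to a hypothetical persistent string field named CustomersPhone:
CustomersPhone.EditMask := '!\(999\)000-0000;1;_';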

Using default formatting for numeric, date, and time fields


Delphi provides built-in display and edit format routines and intelligent default
formatting for TFloatField, TCurrencyField, TBCDField, TFMTBCDField, TIntegerField,
TSmallIntField, TWordField, TDateField, TDateTimeField, TTimeField, and
TSQLTimeStampField components. To use these routines, you need do nothing.
Default formatting is performed by the following routines:

Table 25.4 Field component formatting routines


Routine Used by . . .
FormatFloat TFloatField, TCurrencyField
FormatDateTime TDateField, TTimeField, TDateTimeField
SQLTimeStampToString TSQLTimeStampField
FormatCurr TCurrencyField, TBCDField
BcdToStrF TFMTBCDField

Only format properties appropriate to the data type of a field component are
available for a given component.
Default formatting conventions for date, time, currency, and numeric values are
based on the Regional Settings properties in the Control Panel. For example, using
the default settings for the United States, a TFloatField column with the Currency
property set to True displays the value 1234.56 as $1234.56, while presenting it
for editing as 1234.56.


At design time or runtime, you can edit the DisplayFormat and EditFormat properties
of a field component to override the default display settings for that field. You can
also write OnGetText and OnSetText event handlers to do custom formatting for field
components at runtime.
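For example, the following statements (assuming a hypothetical persistent float field named OrdersAmountPaid) override the defaults so that values display with a currency symbol and thousands separators but are edited as plain numbers:
OrdersAmountPaid.DisplayFormat := '$#,##0.00';
OrdersAmountPaid.EditFormat := '0.00';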

Handling events
Like most components, field components have events associated with them. Methods
can be assigned as handlers for these events. By writing these handlers you can react
to the occurrence of events that affect data entered in fields through data-aware
controls and perform actions of your own design. The following table lists the events
associated with field components:

Table 25.5 Field component events


Event Purpose
OnChange Called when the value for a field changes.
OnGetText Called when the value for a field component is retrieved for display or editing.
OnSetText Called when the value for a field component is set.
OnValidate Called to validate the value for a field component whenever the value is changed
because of an edit or insert operation.

OnGetText and OnSetText events are primarily useful to programmers who want to
do custom formatting that goes beyond the built-in formatting functions. OnChange
is useful for performing application-specific tasks associated with data change, such
as enabling or disabling menus or visual controls. OnValidate is useful when you
want to control data-entry validation in your application before returning values to a
database server.
To write an event handler for a field component:
1 Select the component.
2 Select the Events page in the Object Inspector.
3 Double-click the Value field for the event handler to display its source code
window.
4 Create or edit the handler code.
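As a sketch, an OnChange handler (the field and button names here are illustrative) might enable a save button whenever the field's value changes:
procedure TForm1.CustomersCompanyChange(Sender: TField);
begin
  SaveBtn.Enabled := True;
end;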


Working with field component methods at runtime


Field component methods available at runtime enable you to convert field values
from one data type to another, and enable you to set focus to the first data-aware
control in a form that is associated with a field component.
Controlling the focus of data-aware components associated with a field is important
when your application performs record-oriented data validation in a dataset event
handler (such as BeforePost). Validation may be performed on the fields in a record
whether or not its associated data-aware control has focus. Should validation fail for
a particular field in the record, you want the data-aware control containing the faulty
data to have focus so that the user can enter corrections.
You control focus for a field’s data-aware components with a field’s FocusControl
method. FocusControl sets focus to the first data-aware control in a form that is
associated with a field. An event handler should call a field’s FocusControl method
before validating the field. The following code illustrates how to call the FocusControl
method for the Company field in the Customers table:
CustomersCompany.FocusControl;
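For example, a BeforePost handler along the following lines (the validation rule and field name are illustrative) moves focus to the offending control before aborting the post:
procedure TForm1.CustomersBeforePost(DataSet: TDataSet);
begin
  if CustomersCompany.IsNull then
  begin
    CustomersCompany.FocusControl;
    raise Exception.Create('A company name is required.');
  end;
end;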
The following table lists some other field component methods and their uses. For a
complete list and detailed information about using each method, see the entries for
TField and its descendants in the online VCL Reference.

Table 25.6 Selected field component methods


Method Purpose
AssignValue Sets a field value to a specified value using an automatic conversion function
based on the field’s type.
Clear Clears the field and sets its value to NULL.
GetData Retrieves unformatted data from the field.
IsValidChar Determines if a character entered by a user in a data-aware control to set a value
is allowed for this field.
SetData Assigns unformatted data to this field.


Displaying, converting, and accessing field values


Data-aware controls such as TDBEdit and TDBGrid automatically display the values
associated with field components. If editing is enabled for the dataset and the
controls, data-aware controls can also send new and changed values to the database.
In general, the built-in properties and methods of data-aware controls enable them to
connect to datasets, display values, and make updates without requiring extra
programming on your part. Use them whenever possible in your database
applications. For more information about data-aware controls, see Chapter 20, “Using
data controls.”
Standard controls can also display and edit database values associated with field
components. Using standard controls, however, may require additional
programming on your part. For example, when using standard controls, your
application is responsible for updating controls when field values change. If the
dataset has a datasource component, you can use its events to help you
do this. In particular, the OnDataChange event lets you know when you may need to
update a control’s value and the OnStateChange event can help you determine when
to enable or disable controls. For more information on these events, see “Responding
to changes mediated by the data source” on page 20-4.
The following topics discuss how to work with field values so that you can display
them in standard controls.

Displaying field component values in standard controls


An application can access the value of a dataset column through the Value property
of a field component. For example, the following OnDataChange event handler
updates the text in a TEdit control because the value of the CustomersCompany field
may have changed:
procedure TForm1.CustomersDataChange(Sender: TObject; Field: TField);
begin
Edit3.Text := CustomersCompany.Value;
end;
This method works well for string values, but may require additional programming
to handle conversions for other data types. Fortunately, field components have built-
in properties for handling conversions.
Note You can also use Variants to access and set field values. For more information about
using variants to access and set field values, see “Accessing field values with the
default dataset property” on page 25-20.


Converting field values


Conversion properties attempt to convert one data type to another. For example, the
AsString property converts numeric and Boolean values to string representations.
The following table lists field component conversion properties, and which
properties are recommended for field components by field-component class:

                                                     AsFloat      AsDateTime
Field class          AsVariant  AsString  AsInteger  AsCurrency   AsSQLTimeStamp  AsBoolean
                                                     AsBCD
TStringField         yes        NA        yes        yes          yes             yes
TWideStringField     yes        yes       yes        yes          yes             yes
TIntegerField        yes        yes       NA         yes
TSmallIntField       yes        yes       yes        yes
TWordField           yes        yes       yes        yes
TLargeintField       yes        yes       yes        yes
TFloatField          yes        yes       yes        yes
TCurrencyField       yes        yes       yes        yes
TBCDField            yes        yes       yes        yes
TFMTBCDField         yes        yes       yes        yes
TDateTimeField       yes        yes                  yes          yes
TDateField           yes        yes                  yes          yes
TTimeField           yes        yes                  yes          yes
TSQLTimeStampField   yes        yes                  yes          yes
TBooleanField        yes        yes
TBytesField          yes        yes
TVarBytesField       yes        yes
TBlobField           yes        yes
TMemoField           yes        yes
TGraphicField        yes        yes
TVariantField        NA         yes       yes        yes          yes             yes
TAggregateField      yes        yes

Note that some columns in the table refer to more than one conversion property
(such as AsFloat, AsCurrency, and AsBCD). This is because all field data types that
support one of those properties always support the others as well.
Note also that the AsVariant property can translate among all data types. For any
datatypes not listed above, AsVariant is also available (and is, in fact, the only option).
When in doubt, use AsVariant.


In some cases, conversions are not always possible. For example, AsDateTime can be
used to convert a string to a date, time, or datetime format only if the string value is
in a recognizable datetime format. A failed conversion attempt raises an exception.
In some other cases, conversion is possible, but the results of the conversion are not
always intuitive. For example, what does it mean to convert a TDateTimeField value
into a float format? AsFloat converts the date portion of the field to the number of
days since 12/31/1899, and it converts the time portion of the field to a fraction of 24
hours. Table 25.7 lists permissible conversions that produce special results:

Table 25.7 Special conversion results


Conversion Result
String to Boolean Converts “True,” “False,” “Yes,” and “No” to Boolean.
Other values raise exceptions.
Float to Integer Rounds float value to nearest integer value.
DateTime or SQLTimeStamp to Float Converts date to number of days since 12/31/1899, time to a
fraction of 24 hours.
Boolean to String Converts any Boolean value to “True” or “False.”

In other cases, conversions are not possible at all. In these cases, attempting a
conversion also raises an exception.
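Where the data may not be convertible, one way to guard the conversion (the field name is illustrative) is an exception handler:
try
  Value := Customers.FieldByName('CustNo').AsInteger;
except
  on E: Exception do
    ShowMessage('Conversion failed: ' + E.Message);
end;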
Conversion always occurs before an assignment is made. For example, the following
statement converts the value of CustomersCustNo to a string and assigns the string to
the text of an edit control:
Edit1.Text := CustomersCustNo.AsString;
Conversely, the next statement assigns the text of an edit control to the
CustomersCustNo field as an integer:
MyTableMyField.AsInteger := StrToInt(Edit1.Text);

Accessing field values with the default dataset property


The most general method for accessing a field’s value is to use Variants with the
FieldValues property. For example, the following statement puts the value of an edit
box into the CustNo field in the Customers table:
Customers.FieldValues['CustNo'] := Edit2.Text;
Because the FieldValues property is of type Variant, it automatically converts other
datatypes into a Variant value.
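Because FieldValues is the dataset's default property, you can also omit the property name entirely:
Customers['CustNo'] := Edit2.Text;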
For more information about Variants, see the online help.


Accessing field values with a dataset’s Fields property


You can access the value of a field with the Fields property of the dataset component
to which the field belongs. Fields maintains an indexed list of all the fields in the
dataset. Accessing field values with the Fields property is useful when you need to
iterate over a number of columns, or if your application works with tables that are
not available to you at design time.
To use the Fields property you must know the order and data types of fields in the
dataset. You use an ordinal number to specify the field to access. The first field in a
dataset is numbered 0. Field values must be converted as appropriate using each
field component’s conversion properties. For more information about field
component conversion properties, see “Converting field values” on page 25-19.
For example, the following statement assigns the current value of the seventh column
(Country) in the Customers table to an edit control:
Edit1.Text := CustTable.Fields[6].AsString;
Conversely, you can assign a value to a field by setting the Fields property of the
dataset to the desired field. For example:
begin
Customers.Edit;
Customers.Fields[6].AsString := Edit1.Text;
Customers.Post;
end;
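The Fields property is also convenient for iterating over every column in the current record; for example (the memo control is illustrative):
var
  I: Integer;
begin
  for I := 0 to CustTable.FieldCount - 1 do
    Memo1.Lines.Add(CustTable.Fields[I].FieldName + ': ' + CustTable.Fields[I].AsString);
end;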

Accessing field values with a dataset’s FieldByName method


You can also access the value of a field with a dataset’s FieldByName method. This
method is useful when you know the name of the field you want to access, but do not
have access to the underlying table at design time.
To use FieldByName, you must know the dataset and name of the field you want to
access. You pass the field’s name as an argument to the method. To access or change
the field’s value, convert the result with the appropriate field component conversion
property, such as AsString or AsInteger. For example, the following statement assigns
the value of the CustNo field in the Customers dataset to an edit control:
Edit2.Text := Customers.FieldByName('CustNo').AsString;
Conversely, you can assign a value to a field:
begin
Customers.Edit;
Customers.FieldByName('CustNo').AsString := Edit2.Text;
Customers.Post;
end;


Setting a default value for a field


You can specify how a default value for a field in a client dataset or a BDE-enabled
dataset should be calculated at runtime using the DefaultExpression property.
DefaultExpression can be any valid SQL value expression that does not refer to field
values. If the expression contains literals other than numeric values, they must
appear in quotes. For example, a default value of noon for a time field would be
'12:00:00'
including the quotes around the literal value.
Note If the underlying database table defines a default value for the field, the default you
specify in DefaultExpression takes precedence. That is because DefaultExpression is
applied when the dataset posts the record containing the field, before the edited
record is applied to the database server.
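In code, the noon default shown above could be assigned as follows (the field name is illustrative); note the doubled quotes needed to embed the SQL literal in a Delphi string:
OrdersSaleTime.DefaultExpression := '''12:00:00''';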

Working with constraints


Field components in client datasets or BDE-enabled datasets can use SQL server
constraints. In addition, your applications can create and use custom constraints for
these datasets that are local to your application. All constraints are rules or
conditions that impose a limit on the scope or range of values that a field can store.

Creating a custom constraint


A custom constraint is not imported from the server like other constraints. It is a
constraint that you declare, implement, and enforce in your local application. As
such, custom constraints can be useful for offering a prevalidation enforcement of
data entry, but a custom constraint cannot be applied against data received from or
sent to a server application.
To create a custom constraint, set the CustomConstraint property to specify a
constraint condition, and set ConstraintErrorMessage to the message to display when a
user violates the constraint at runtime.
CustomConstraint is an SQL string that specifies any application-specific constraints
imposed on the field’s value. Set CustomConstraint to limit the values that the user
can enter into a field. CustomConstraint can be any valid SQL search expression such
as
x > 0 and x < 100
The name used to refer to the value of the field can be any string that is not a reserved
SQL keyword, as long as it is used consistently throughout the constraint expression.
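For example, the following statements (the field is illustrative) impose the range check above and supply its error message:
CustomersCustNo.CustomConstraint := 'x > 0 and x < 100';
CustomersCustNo.ConstraintErrorMessage := 'Customer number must be between 1 and 99.';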
Note Custom constraints are only available in BDE-enabled and client datasets.
Custom constraints are imposed in addition to any constraints on the field's value
that come from the server. To see the constraints imposed by the server, read the
ImportedConstraint property.


Using server constraints


Most production SQL databases use constraints to impose conditions on the possible
values for a field. For example, a field may not permit NULL values, may require that
its value be unique for that column, or that its values be greater than 0 and less than
150. While you could replicate such conditions in your client applications, client
datasets and BDE-enabled datasets offer the ImportedConstraint property to
propagate a server’s constraints locally.
ImportedConstraint is a read-only property that specifies an SQL clause that limits
field values in some manner. For example:
Value > 0 and Value < 100
Do not change the value of ImportedConstraint, except to edit nonstandard or server-
specific SQL that has been imported as a comment because it cannot be interpreted
by the database engine.
To add additional constraints on the field value, use the CustomConstraint property.
Custom constraints are imposed in addition to the imported constraints. If the server
constraints change, the value of ImportedConstraint also changes, but constraints
introduced in the CustomConstraint property persist.
Removing constraints from the ImportedConstraint property will not change the
validity of field values that violate those constraints. Removing constraints results in
the constraints being checked by the server instead of locally. When constraints are
checked locally, the error message supplied as the ConstraintErrorMessage property is
displayed when violations are found, instead of displaying an error message from
the server.

Using object fields


Object fields are fields that represent a composite of other, simpler datatypes. These
include ADT (Abstract Data Type) fields, Array fields, DataSet fields, and Reference
fields. All of these field types either contain or reference child fields or other data
sets.
ADT fields and array fields are fields that contain child fields. The child fields of an
ADT field can be any scalar or object type (that is, any other field type). These child
fields may differ in type from each other. An array field contains an array of child
fields, all of the same type.


Dataset and reference fields are fields that access other data sets. A dataset field
provides access to a nested (detail) dataset and a reference field stores a pointer
(reference) to another persistent object (ADT).

Table 25.8 Types of object field components


Component name Purpose
TADTField Represents an ADT (Abstract Data Type) field.
TArrayField Represents an array field.
TDataSetField Represents a field that contains a nested data set reference.
TReferenceField Represents a REF field, a pointer to an ADT.

When you add fields with the Fields editor to a dataset that contains object fields,
persistent object fields of the correct type are automatically created for you. Adding
persistent object fields to a dataset automatically sets the dataset’s ObjectView
property to True, which instructs the dataset to store these fields hierarchically, rather
than flattening them out as if the constituent child fields were separate, independent
fields.
The following properties are common to all object fields and provide the
functionality to handle child fields and datasets.

Table 25.9 Common object field descendant properties


Property Purpose
Fields Contains the child fields belonging to the object field.
ObjectType Classifies the object field.
FieldCount Number of child fields belonging to the object field.
FieldValues Provides access to the values of the child fields.

Displaying ADT and array fields


Both ADT and array fields contain child fields that can be displayed through data-
aware controls.
Data-aware controls such as TDBEdit that represent a single field value display child
field values in an uneditable comma delimited string. In addition, if you set the
control’s DataField property to the child field instead of the object field itself, the child
field can be viewed and edited just like any other normal data field.
A TDBGrid control displays ADT and array field data differently, depending on the
value of the dataset’s ObjectView property. When ObjectView is False, each child field
appears in a single column. When ObjectView is True, an ADT or array field can be
expanded and collapsed by clicking on the arrow in the title bar of the column. When
the field is expanded, each child field appears in its own column and title bar, all
below the title bar of the ADT or array itself. When the ADT or array is collapsed,
only one column appears with an uneditable comma-delimited string containing the
child fields.


Working with ADT fields


ADTs are user-defined types created on the server, and are similar to the record type.
An ADT can contain most scalar field types, array fields, reference fields, and nested
ADTs.
There are a variety of ways to access the data in ADT field types. These are illustrated
in the following examples, which assign a child field value to an edit box called
CityEdit, and use the following ADT structure:
Address
Street
City
State
Zip

Using persistent field components


The easiest way to access ADT field values is to use persistent field components. For
the ADT structure above, the following persistent fields can be added to the Customer
table using the Fields editor:
CustomerAddress: TADTField;
CustomerAddrStreet: TStringField;
CustomerAddrCity: TStringField;
CustomerAddrState: TStringField;
CustomerAddrZip: TStringField;
Given these persistent fields, you can simply access the child fields of an ADT field
by name:
CityEdit.Text := CustomerAddrCity.AsString;
Although persistent fields are the easiest way to access ADT child fields, it is not
possible to use them if the structure of the dataset is not known at design time. When
accessing ADT child fields without using persistent fields, you must set the dataset’s
ObjectView property to True.

Using the dataset’s FieldByName method


You can access the children of an ADT field using the dataset’s FieldByName method
by qualifying the name of the child field with the ADT field’s name:
CityEdit.Text := Customer.FieldByName('Address.City').AsString;

Using the dataset’s FieldValues property


You can also use qualified field names with a dataset’s FieldValues property:
CityEdit.Text := Customer['Address.City'];
Note that you can omit the property name (FieldValues) because FieldValues is the
dataset’s default property.
Note Unlike other runtime methods for accessing ADT child field values, the FieldValues
property works even if the dataset’s ObjectView property is False.


Using the ADT field’s FieldValues property


You can access the value of a child field with the TADTField’s FieldValues property.
FieldValues accepts and returns a Variant, so it can handle and convert fields of any
type. The index parameter is an integer value that specifies the offset of the field.
CityEdit.Text := TADTField(Customer.FieldByName('Address')).FieldValues[1];
Because FieldValues is the default property of TADTField, the property name
(FieldValues) can be omitted. Thus, the following statement is equivalent to the one
above:
CityEdit.Text := TADTField(Customer.FieldByName('Address'))[1];

Using the ADT field’s Fields property


Each ADT field has a Fields property that is analogous to the Fields property of a
dataset. Like the Fields property of a dataset, you can use it to access child fields by
position:
CityEdit.Text := TADTField(Customer.FieldByName('Address')).Fields[1].AsString;
or by name:
CityEdit.Text :=
  TADTField(Customer.FieldByName('Address')).Fields.FieldByName('City').AsString;

Working with array fields


Array fields consist of a set of fields of the same type. The field types can be scalar
(for example, float, string), or non-scalar (an ADT), but an array field of arrays is not
permitted. The SparseArrays property of TDataSet determines whether a unique
TField object is created for each element of the array field.
There are a variety of ways to access the data in array field types. If you are not using
persistent fields, the dataset’s ObjectView property must be set to True before you can
access the elements of an array field.

Using persistent fields


You can map persistent fields to the individual array elements in an array field. For
example, consider an array field TelNos_Array, which is a six-element array of strings.
The following persistent fields created for the Customer table component represent
the TelNos_Array field and its six elements:
CustomerTelNos_Array: TArrayField;
CustomerTelNos_Array0: TStringField;
CustomerTelNos_Array1: TStringField;
CustomerTelNos_Array2: TStringField;
CustomerTelNos_Array3: TStringField;
CustomerTelNos_Array4: TStringField;
CustomerTelNos_Array5: TStringField;

25-26 Developer’s Guide



Given these persistent fields, the following code uses a persistent field to assign an
array element value to an edit box named TelEdit.
TelEdit.Text := CustomerTelNos_Array0.AsString;

Using the array field’s FieldValues property


You can access the value of a child field with the array field’s FieldValues property.
FieldValues accepts and returns a Variant, so it can handle and convert child fields of
any type. For example,
TelEdit.Text := TArrayField(Customer.FieldByName('TelNos_Array')).FieldValues[1];
Because FieldValues is the default property of TArrayField, this can also be written
TelEdit.Text := TArrayField(Customer.FieldByName('TelNos_Array'))[1];

Using the array field’s Fields property


TArrayField has a Fields property that you can use to access individual sub-fields. This
is illustrated below, where an array field (OrderDates) is used to populate a list box
with all non-null array elements:
for I := 0 to OrderDates.Size - 1 do
begin
  if not OrderDates.Fields[I].IsNull then
    OrderDateListBox.Items.Add(OrderDates[I]);
end;

Working with dataset fields


Dataset fields provide access to data stored in a nested dataset. The NestedDataSet
property references the nested dataset. The data in the nested dataset is then accessed
through the field objects of the nested dataset.

Displaying dataset fields


TDBGrid controls enable the display of data stored in dataset fields. In a TDBGrid
control, a dataset field is indicated in each cell of a dataset column with the string
“(DataSet)”, and at runtime an ellipsis button also exists to the right. Clicking on the
ellipsis brings up a new form with a grid displaying the dataset associated with the
current record’s dataset field. This form can also be brought up programmatically
with the DB grid’s ShowPopupEditor method. For example, if the seventh column in
the grid represents a dataset field, the following code will display the dataset
associated with that field for the current record.
DBGrid1.ShowPopupEditor(DBGrid1.Columns[7]);




Accessing data in a nested dataset


A dataset field is not normally bound directly to a data-aware control. Rather,
because a nested dataset is itself a dataset, you access its data through a TDataSet
descendant. The type of dataset you use is determined by the parent dataset (the one
with the dataset field). For example, a BDE-enabled dataset uses TNestedTable to
represent the data in its dataset fields, while client datasets use other client datasets.
To access the data in a dataset field,
1 Create a persistent TDataSetField object by invoking the Fields editor for the parent
dataset.
2 Create a dataset to represent the values in that dataset field. It must be of a type
compatible with the parent dataset.
3 Set the DataSetField property of the dataset created in step 2 to the persistent
dataset field you created in step 1.
If the nested dataset field for the current record has a value, the detail dataset
component will contain records with the nested data; otherwise, the detail dataset
will be empty.
Before inserting records into a nested dataset, you should be sure to post the
corresponding record in the master table, if it has just been inserted. If the inserted
record is not posted, it will be automatically posted before the nested dataset posts.
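The steps above can be sketched as follows for a client dataset parent. This is a minimal sketch, not a complete form: the component name Orders and the persistent dataset field CustomerOrders are hypothetical, and steps 2 and 3 assume the Fields editor work in step 1 has already been done.

```pascal
var
  Orders: TClientDataSet;
begin
  // Step 2: create a dataset compatible with the parent
  // (a client dataset parent requires another client dataset).
  Orders := TClientDataSet.Create(Self);
  // Step 3: bind it to the persistent TDataSetField from step 1.
  Orders.DataSetField := CustomerOrders;
  // Orders now contains the nested records (if any) for the
  // parent dataset's current record.
end;
```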

Working with reference fields


Reference fields store a pointer or reference to another ADT object. This ADT object is
a single record of another object table. Reference fields always refer to a single record
in a dataset (object table). The data in the referenced object is actually returned in a
nested dataset, but can also be accessed via the Fields property on the TReferenceField.

Displaying reference fields


In a TDBGrid control, a reference field is designated in each cell of the dataset column
with the string “(Reference)” and, at runtime, an ellipsis button to the right. Clicking
on the ellipsis brings up a new form with a grid displaying the object associated with
the current record’s reference field.
This form can also be brought up programmatically with the DB grid’s
ShowPopupEditor method. For example, if the seventh column in the grid represents a
reference field, the following code will display the object associated with that field
for the current record.
DBGrid1.ShowPopupEditor(DBGrid1.Columns[7]);

25-28 Developer’s Guide



Accessing data in a reference field


You can access the data in a reference field in the same way you access a nested
dataset:
1 Create a persistent TDataSetField object by invoking the Fields editor for the parent
dataset.
2 Create a dataset to represent the value of that dataset field.
3 Set the DataSetField property of the dataset created in step 2 to the persistent
dataset field you created in step 1.
If the reference is assigned, the reference dataset will contain a single record with the
referenced data. If the reference is null, the reference dataset will be empty.
You can also use the reference field’s Fields property to access the data in a reference
field. For example, the following lines are equivalent and assign data from the
reference field CustomerRefCity to an edit box called CityEdit:
CityEdit.Text := CustomerRefCity.Fields[1].AsString;
CityEdit.Text := CustomerRefCity.NestedDataSet.Fields[1].AsString;
When data in a reference field is edited, it is actually the referenced data that is
modified.
To assign a reference field, you need to first use a SELECT statement to select the
reference from the table, and then assign. For example:
var
  AddressQuery: TQuery;
  CustomerAddressRef: TReferenceField;
begin
  AddressQuery.SQL.Text :=
    'SELECT REF(A) FROM AddressTable A WHERE A.City = ''San Francisco''';
  AddressQuery.Open;
  CustomerAddressRef.Assign(AddressQuery.Fields[0]);
end;



Chapter 26

Using the Borland Database Engine
The Borland Database Engine (BDE) is a data-access mechanism that can be shared
by several applications. The BDE defines a powerful library of API calls that can
create, restructure, fetch data from, update, and otherwise manipulate local and
remote database servers. The BDE provides a uniform interface to access a wide
variety of database servers, using drivers to connect to different databases.
Depending on your version of Delphi, you can use the drivers for local databases
(Paradox, dBASE, FoxPro, and Access), SQL Links drivers for remote database
servers such as InterBase, Oracle, Sybase, Informix, Microsoft SQL server, and DB2,
and an ODBC adapter that lets you supply your own ODBC drivers.
When deploying BDE-based applications, you must include the BDE with your
application. While this increases the size of the application and the complexity of
deployment, the BDE can be shared with other BDE-based applications and provides
a broad range of support for database manipulation. Although you can use the BDE’s
API directly in your application, the components on the BDE page of the Component
palette wrap most of this functionality for you.

BDE-based architecture
When using the BDE, your application uses a variation of the general database
architecture described in “Database architecture” on page 19-6. In addition to the
user interface elements, datasource, and datasets common to all Delphi database
applications, a BDE-based application can include
• One or more database components to control transactions and to manage database
connections.
• One or more session components to isolate data access operations such as database
connections, and to manage groups of databases.

Using the Borland Database Engine 26-1



The relationships between the components in a BDE-based application are illustrated


in Figure 26.1:
Figure 26.1 Components in a BDE-based application

[Figure 26.1: user interface elements on a form connect through data sources to datasets in a data module; the datasets reach the databases through database and session components and the Borland Database Engine.]

Using BDE-enabled datasets


BDE-enabled datasets use the Borland Database Engine (BDE) to access data. They
inherit the common dataset capabilities described in Chapter 24, “Understanding
datasets,” using the BDE to provide the implementation. In addition, all BDE
datasets add properties, events, and methods for
• Associating a dataset with database and session connections.
• Caching BLOBs.
• Obtaining a BDE handle.
There are three BDE-enabled datasets:
• TTable, a table type dataset that represents all of the rows and columns of a single
database table. See “Using table type datasets” on page 24-25 for a description of
features common to table type datasets. See “Using TTable” on page 26-5 for a
description of features unique to TTable.
• TQuery, a query-type dataset that encapsulates an SQL statement and enables
applications to access the resulting records, if any. See “Using query-type
datasets” on page 24-42 for a description of features common to query-type
datasets. See “Using TQuery” on page 26-9 for a description of features unique to
TQuery.
• TStoredProc, a stored procedure-type dataset that executes a stored procedure that
is defined on a database server. See “Using stored procedure-type datasets” on
page 24-50 for a description of features common to stored procedure-type
datasets. See “Using TStoredProc” on page 26-11 for a description of features
unique to TStoredProc.
Note In addition to the three types of BDE-enabled datasets, there is a BDE-based client
dataset (TBDEClientDataSet) that can be used for caching updates. For information on
caching updates, see “Using a client dataset to cache updates” on page 29-16.




Associating a dataset with database and session connections


In order for a BDE-enabled dataset to fetch data from a database server it needs to
use both a database and a session.
• Databases represent connections to specific database servers. The database
identifies a BDE driver, a particular database server that uses that driver, and a set
of connection parameters for connecting to that database server. Each database is
represented by a TDatabase component. You can either associate your datasets
with a TDatabase component you add to a form or data module, or you can simply
identify the database server by name and let Delphi generate an implicit database
component for you. Using an explicitly-created TDatabase component is
recommended for most applications, because the database component gives you
greater control over how the connection is established, including the login process,
and lets you create and use transactions.
To associate a BDE-enabled dataset with a database, use the DatabaseName
property. DatabaseName is a string that contains different information, depending
on whether you are using an explicit database component and, if not, the type of
database you are using:
• If you are using an explicit TDatabase component, DatabaseName is the value of
the DatabaseName property of the database component.
• If you want to use an implicit database component and the database has a
BDE alias, you can specify a BDE alias as the value of DatabaseName. A BDE
alias represents a database plus configuration information for that database.
The configuration information associated with an alias differs by database type
(Oracle, Sybase, InterBase, Paradox, dBASE, and so on). Use the BDE
Administration tool or the SQL Explorer to create and manage BDE aliases.
• If you want to use an implicit database component for a Paradox or dBASE
database, you can also use DatabaseName to simply specify the directory where
the database tables are located.
• A session provides global management for a group of database connections in an
application. When you add BDE-enabled datasets to your application, your
application automatically contains a session component, named Session. As you
add database and dataset components to the application, they are automatically
associated with this default session. It also controls access to password protected
Paradox files, and it specifies directory locations for sharing Paradox files over a
network. You can control database connections and access to Paradox files using
the properties, events, and methods of the session.
You can use the default session to control all database connections in your
application. Alternatively, you can add additional session components at design
time or create them dynamically at runtime to control a subset of database
connections in an application. To associate your dataset with an explicitly created
session component, use the SessionName property. If you do not use explicit
session components in your application, you do not have to provide a value for
this property. Whether you use the default session or explicitly specify a session
using the SessionName property, you can access the session associated with a
dataset by reading the DBSession property.




Note If you use a session component, the SessionName property of a dataset must match the
SessionName property for the database component with which the dataset is
associated.
For more information about TDatabase and TSession, see “Connecting to databases
with TDatabase” on page 26-12 and “Managing database sessions” on page 26-16.
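As a sketch, the following code wires a table to an explicit database and session at runtime. The component names, alias, session name, and table file are all illustrative, not prescribed by the VCL:

```pascal
Session1.SessionName := 'AppSession';
Database1.SessionName := 'AppSession';  // dataset and database sessions must match
Database1.AliasName := 'DBDEMOS';       // BDE alias identifying the server (assumed to exist)
Database1.DatabaseName := 'AppDB';      // application-level name for this connection
Table1.SessionName := 'AppSession';
Table1.DatabaseName := 'AppDB';         // matches Database1.DatabaseName
Table1.TableName := 'customer.db';
Table1.Open;
```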

Caching BLOBs
BDE-enabled datasets all have a CacheBlobs property that controls whether BLOB
fields are cached locally by the BDE when an application reads BLOB records. By
default, CacheBlobs is True, meaning that the BDE caches a local copy of BLOB fields.
Caching BLOBs improves application performance by enabling the BDE to store local
copies of BLOBs instead of fetching them repeatedly from the database server as a
user scrolls through records.
In applications and environments where BLOBs are frequently updated or replaced,
and a fresh view of BLOB data is more important than application performance, you
can set CacheBlobs to False to ensure that your application always sees the latest
version of a BLOB field.
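For example, an application that must always show the newest BLOB contents might disable caching. A minimal sketch (closing the table first to be safe before changing the property):

```pascal
Table1.Close;
Table1.CacheBlobs := False;  // re-fetch BLOB data from the server on each read
Table1.Open;
```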

Obtaining a BDE handle


You can use BDE-enabled datasets without ever needing to make direct API calls to
the Borland Database Engine. The BDE-enabled datasets, in combination with
database and session components, encapsulate much of the BDE functionality.
However, if you need to make direct API calls to the BDE, you may need BDE
handles for resources managed by the BDE. Many BDE APIs require these handles as
parameters.
All BDE-enabled datasets include three read-only properties for accessing BDE
handles at runtime:
• Handle is a handle to the BDE cursor that accesses the records in the dataset.
• DBHandle is a handle to the database that contains the underlying tables or stored
procedure.
• DBLocale is a handle to the BDE language driver for the dataset. The locale controls
the sort order and character set used for string data.
These properties are automatically assigned to a dataset when it is connected to a
database server through the BDE.
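As an illustration, the following line passes a dataset's cursor handle directly to a BDE API function; DbiSaveChanges forces buffered changes for a local table to disk, and Check is the BDE unit's error-checking wrapper:

```pascal
// The BDE unit must appear in the unit's uses clause for
// DbiSaveChanges and Check.
Check(DbiSaveChanges(Table1.Handle));  // flush buffered changes via the cursor handle
```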




Using TTable
TTable encapsulates the full structure of and data in an underlying database table. It
implements all of the basic functionality introduced by TDataSet, as well as all of the
special features typical of table type datasets. Before looking at the unique features
introduced by TTable, you should familiarize yourself with the common database
features described in “Understanding datasets,” including the section on table type
datasets that starts on page 24-25.
Because TTable is a BDE-enabled dataset, it must be associated with a database and a
session. “Associating a dataset with database and session connections” on page 26-3
describes how you form these associations. Once the dataset is associated with a
database and session, you can bind it to a particular database table by setting the
TableName property and, if you are using a Paradox, dBASE, FoxPro, or comma-
delimited ASCII text table, the TableType property.
Note The table must be closed when you change its association to a database, session, or
database table, or when you set the TableType property. However, before you close
the table to change these properties, first post or discard any pending changes. If
cached updates are enabled, call the ApplyUpdates method to write the posted
changes to the database.
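A sketch of this close-then-rebind sequence, with a hypothetical table name:

```pascal
if Table1.State in dsEditModes then
  Table1.Post;                      // post (or call Cancel to discard) pending changes
Table1.Close;                       // the table must be closed before rebinding
Table1.TableName := 'orders.db';    // hypothetical table file
Table1.Open;
```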
TTable components are unique in the support they offer for local database tables
(Paradox, dBASE, FoxPro, and comma-delimited ASCII text tables). The following
topics describe the special properties and methods that implement this support.
In addition, TTable components can take advantage of the BDE’s support for batch
operations (table level operations to append, update, delete, or copy entire groups of
records). This support is described in “Importing data from another table” on
page 26-8.

Specifying the table type for local tables


If an application accesses Paradox, dBASE, FoxPro, or comma-delimited ASCII text
tables, then the BDE uses the TableType property to determine the table’s type (its
expected structure). TableType is not used when TTable represents an SQL-based table
on a database server.
By default TableType is set to ttDefault. When TableType is ttDefault, the BDE
determines a table’s type from its filename extension. Table 26.1 summarizes the file
extensions recognized by the BDE and the assumptions it makes about a table’s type:

Table 26.1 Table types recognized by the BDE based on file extension
Extension Table type
No file extension Paradox
.DB Paradox
.DBF dBASE
.TXT ASCII text




If your local Paradox, dBASE, and ASCII text tables use the file extensions as
described in Table 26.1, then you can leave TableType set to ttDefault. Otherwise, your
application must set TableType to indicate the correct table type. Table 26.2 indicates
the values you can assign to TableType:

Table 26.2 TableType values


Value Table type
ttDefault Table type determined automatically by the BDE
ttParadox Paradox
ttDBase dBASE
ttFoxPro FoxPro
ttASCII Comma-delimited ASCII text
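For example, if a dBASE table is stored with an extension the BDE does not recognize, the application must state the type explicitly before opening the table. The file name here is illustrative:

```pascal
Table1.Close;
Table1.TableType := ttDBase;         // type cannot be inferred from the extension
Table1.TableName := 'customer.dat';  // hypothetical dBASE table file
Table1.Open;
```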

Controlling read/write access to local tables


Like any table type dataset, TTable lets you control read and write access by your
application using the ReadOnly property.
In addition, for Paradox, dBASE, and FoxPro tables, TTable can let you control read
and write access to tables by other applications. The Exclusive property controls
whether your application gains sole read/write access to a Paradox, dBASE, or
FoxPro table. To gain sole read/write access for these table types, set the table
component’s Exclusive property to True before opening the table. If you succeed in
opening a table for exclusive access, other applications cannot read data from or
write data to the table. Your request for exclusive access is not honored if the table is
already in use when you attempt to open it.
The following statements open a table for exclusive access:
CustomersTable.Exclusive := True; {Set request for exclusive lock}
CustomersTable.Active := True; {Now open the table}
Note You can attempt to set Exclusive on SQL tables, but some servers do not support
exclusive table-level locking. Others may grant an exclusive lock, but permit other
applications to read data from the table. For more information about exclusive
locking of database tables on your server, see your server documentation.

Specifying a dBASE index file


For most servers, you use the methods common to all table type datasets to specify
an index. These methods are described in “Sorting records with indexes” on
page 24-26.
For dBASE tables that use non-production index files or dBASE III PLUS-style
indexes (*.NDX), however, you must use the IndexFiles and IndexName properties
instead. Set the IndexFiles property to the name of the non-production index file or to
the list of .NDX files. Then specify one index in the IndexName property to have it
actively sort the dataset.




At design time, click the ellipsis button in the IndexFiles property value in the Object
Inspector to invoke the Index Files editor. To add one non-production index file or
.NDX file: click the Add button in the Index Files dialog and select the file from the
Open dialog. Repeat this process once for each non-production index file or .NDX
file. Click the OK button in the Index Files dialog after adding all desired indexes.
This same operation can be performed programmatically at runtime. To do this,
access the IndexFiles property using properties and methods of string lists. When
adding a new set of indexes, first call the Clear method of the table’s IndexFiles
property to remove any existing entries. Call the Add method to add each non-
production index file or .NDX file:
with Table2.IndexFiles do
begin
  Clear;
  Add('Bystate.ndx');
  Add('Byzip.ndx');
  Add('Fullname.ndx');
  Add('St_name.ndx');
end;
After adding any desired non-production or .NDX index files, the names of
individual indexes in the index file are available, and can be assigned to the
IndexName property. The index tags are also listed when using the GetIndexNames
method and when inspecting index definitions through the TIndexDef objects in the
IndexDefs property. Properly listed .NDX files are automatically updated as data is
added, changed, or deleted in the table (regardless of whether a given index is used
in the IndexName property).
In the example below, the IndexFiles for the AnimalsTable table component is set to the
non-production index file ANIMALS.MDX, and then its IndexName property is set to
the index tag called “NAME”:
AnimalsTable.IndexFiles.Add('ANIMALS.MDX');
AnimalsTable.IndexName := 'NAME';
Once you have specified the index file, using non-production or .NDX indexes works
the same as any other index. Specifying an index name sorts the data in the table and
makes it available for indexed-based searches, ranges, and (for non-production
indexes) master-detail linking. See “Using table type datasets” on page 24-25 for
details on these uses of indexes.
There are two special considerations when using dBASE III PLUS-style .NDX indexes
with TTable components. The first is that .NDX files cannot be used as the basis for
master-detail links. The second is that when activating a .NDX index with the
IndexName property, you must include the .NDX extension in the property value as
part of the index name:
with Table1 do
begin
  IndexName := 'ByState.NDX';
  FindKey(['CA']);
end;




Renaming local tables


To rename a Paradox or dBASE table at design time, right-click the table component
and select Rename Table from the context menu.
To rename a Paradox or dBASE table at runtime, call the table’s RenameTable method.
For example, the following statement renames the Customer table to CustInfo:
Customer.RenameTable('CustInfo');

Importing data from another table


You can use a table component’s BatchMove method to import data from another
table. BatchMove can
• Copy records from another table into this table.
• Update records in this table that occur in another table.
• Append records from another table to the end of this table.
• Delete records in this table that occur in another table.
BatchMove takes two parameters: the name of the table from which to import data,
and a mode specification that determines which import operation to perform. Table
26.3 describes the possible settings for the mode specification:

Table 26.3 BatchMove import modes


Value Meaning
batAppend Append all records from the source table to the end of this table.
batAppendUpdate Append all records from the source table to the end of this table and update
existing records in this table with matching records from the source table.
batCopy Copy all records from the source table into this table.
batDelete Delete all records in this table that also appear in the source table.
batUpdate Update existing records in this table with matching records from the source
table.

For example, the following code updates all records in the current table with records
from the Customer table that have the same values for fields in the current index:
Table1.BatchMove('CUSTOMER.DB', batUpdate);
BatchMove returns the number of records it imports successfully.
Caution Importing records using the batCopy mode overwrites existing records. To preserve
existing records use batAppend instead.
BatchMove performs only some of the batch operations supported by the BDE.
Additional functions are available using the TBatchMove component. If you need to
move a large amount of data between or among tables, use TBatchMove instead of
calling a table’s BatchMove method. For information about using TBatchMove, see
“Using TBatchMove” on page 26-49.




Using TQuery
TQuery represents a single Data Definition Language (DDL) or Data Manipulation
Language (DML) statement (for example, a SELECT, INSERT, DELETE, UPDATE,
CREATE INDEX, or ALTER TABLE command). The language used in commands is
server-specific, but usually compliant with the SQL-92 standard for the SQL
language. TQuery implements all of the basic functionality introduced by TDataSet,
as well as all of the special features typical of query-type datasets. Before looking at
the unique features introduced by TQuery, you should familiarize yourself with the
common database features described in “Understanding datasets,” including the
section on query-type datasets that starts on page 24-42.
Because TQuery is a BDE-enabled dataset, it must usually be associated with a
database and a session. (The one exception is when you use the TQuery for a
heterogeneous query.) “Associating a dataset with database
and session connections” on page 26-3 describes how you form these associations.
You specify the SQL statement for the query by setting the SQL property.
A TQuery component can access data in:
• Paradox or dBASE tables, using Local SQL, which is part of the BDE. Local SQL is
a subset of the SQL-92 specification. Most DML is supported and enough DDL
syntax to work with these types of tables. See the local SQL help,
LOCALSQL.HLP, for details on supported SQL syntax.
• Local InterBase Server databases, using the InterBase engine. For information on
InterBase’s SQL-92 standard SQL syntax support and extended syntax support,
see the InterBase Language Reference.
• Databases on remote database servers such as Oracle, Sybase, MS-SQL Server,
Informix, DB2, and InterBase. You must install the appropriate SQL Link driver
and client software (vendor-supplied) specific to the database server to access a
remote server. Any standard SQL syntax supported by these servers is allowed.
For information on SQL syntax, limitations, and extensions, see the documentation
for your particular server.

Creating heterogeneous queries


TQuery supports heterogeneous queries against more than one server or table type
(for example, data from an Oracle table and a Paradox table). When you execute a
heterogeneous query, the BDE parses and processes the query using Local SQL.
Because BDE uses Local SQL, extended, server-specific SQL syntax is not supported.
To perform a heterogeneous query, follow these steps:
1 Define separate BDE aliases for each database accessed in the query using the
BDE Administration tool or the SQL Explorer.
2 Leave the DatabaseName property of the TQuery blank; the names of the databases
used will be specified in the SQL statement.
3 In the SQL property, specify the SQL statement to execute. Precede each table
name in the statement with the BDE alias for the table’s database, enclosed in
colons. This whole reference is then enclosed in quotation marks.




4 Set any parameters for the query in the Params property.


5 Call Prepare to prepare the query for execution prior to executing it for the first
time.
6 Call Open or ExecSQL depending on the type of query you are executing.
For example, suppose you define an alias called Oracle1 for an Oracle database that
has a CUSTOMER table, and Sybase1 for a Sybase database that has an ORDERS
table. A simple query against these two tables would be:
SELECT Customer.CustNo, Orders.OrderNo
FROM ":Oracle1:CUSTOMER"
JOIN ":Sybase1:ORDERS"
ON (Customer.CustNo = Orders.CustNo)
WHERE (Customer.CustNo = 1503)
As an alternative to using a BDE alias to specify the database in a heterogeneous
query, you can use a TDatabase component. Configure the TDatabase as normal to
point to the database, set the TDatabase.DatabaseName to an arbitrary but unique
value, and then use that value in the SQL statement instead of a BDE alias name.
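Steps 2 through 6 can be sketched in code against the Oracle1 and Sybase1 aliases; the component name and parameter name are illustrative:

```pascal
Query1.DatabaseName := '';  // step 2: databases are named in the SQL text
Query1.SQL.Text :=          // step 3: qualify each table with its BDE alias
  'SELECT Customer.CustNo, Orders.OrderNo ' +
  'FROM ":Oracle1:CUSTOMER" ' +
  'JOIN ":Sybase1:ORDERS" ON (Customer.CustNo = Orders.CustNo) ' +
  'WHERE (Customer.CustNo = :CustNo)';
Query1.ParamByName('CustNo').AsInteger := 1503;  // step 4: set parameters
Query1.Prepare;             // step 5: prepare before first execution
Query1.Open;                // step 6: Open for SELECT; ExecSQL otherwise
```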

Obtaining an editable result set


To request a result set that users can edit in data-aware controls, set a query
component’s RequestLive property to True. Setting RequestLive to True does not
guarantee a live result set, but the BDE attempts to honor the request whenever
possible. There are some restrictions on live result set requests, depending on
whether the query uses the local SQL parser or a server’s SQL parser.
• Queries where table names are preceded by a BDE database alias (as in
heterogeneous queries) and queries executed against Paradox or dBASE are
parsed by the BDE using Local SQL. When queries use the local SQL parser, the
BDE offers expanded support for updatable, live result sets in both single table
and multi-table queries. When using Local SQL, a live result set for a query against
a single table or view is returned if the query does not contain any of the
following:
• DISTINCT in the SELECT clause
• Joins (inner, outer, or UNION)
• Aggregate functions with or without GROUP BY or HAVING clauses
• Base tables or views that are not updatable
• Subqueries
• ORDER BY clauses not based on an index
• Queries against a remote database server are parsed by the server. If the
RequestLive property is set to True, the SQL statement must abide by Local SQL
standards in addition to any server-imposed restrictions because the BDE needs to
use it for conveying data changes to the table. A live result set for a query against a
single table or view is returned if the query does not contain any of the following:
• A DISTINCT clause in the SELECT statement
• Aggregate functions, with or without GROUP BY or HAVING clauses
• References to more than one base table or updatable views (joins)
• Subqueries that reference the table in the FROM clause or other tables

26-10 Developer’s Guide


BDE-based architecture

If an application requests and receives a live result set, the CanModify property of the
query component is set to True. Even if the query returns a live result set, you may
not be able to update the result set directly if it contains linked fields or you switch
indexes before attempting an update. If these conditions exist, you should treat the
result set as a read-only result set, and update it accordingly.
If an application requests a live result set, but the SELECT statement syntax does not
allow it, the BDE returns either
• A read-only result set for queries made against Paradox or dBASE.
• An error code for SQL queries made against a remote server.
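As a minimal sketch (component names are illustrative), an application can request a live result set and then test whether the request was honored:

```delphi
Query1.RequestLive := True;  { a request, not a guarantee }
Query1.Open;
if Query1.CanModify then
  Label1.Caption := 'Live (editable) result set'
else
  Label1.Caption := 'Read-only result set';
```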

Updating read-only result sets


Applications can update data returned in a read-only result set if they are using
cached updates.
If you are using a client dataset to cache updates, the client dataset or its associated
provider can automatically generate the SQL for applying updates unless the query
represents multiple tables. If the query represents multiple tables, you must indicate
how to apply the updates:
• If all updates are applied to a single database table, you can indicate the
underlying table to update in an OnGetTableName event handler.
• If you need more control over applying updates, you can associate the query with
an update object (TUpdateSQL). A provider automatically uses this update object
to apply updates:
a Associate the update object with the query by setting the query’s UpdateObject
property to the TUpdateSQL object you are using.
b Set the update object’s ModifySQL, InsertSQL, and DeleteSQL properties to SQL
statements that perform the appropriate updates for your query’s data.
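Steps a and b can be sketched in code as follows; the table, field, and component names are illustrative, and the :OLD_ prefix refers to a field's value before the update:

```delphi
{ Associate the update object with the query, then supply the update SQL. }
Query1.UpdateObject := UpdateSQL1;
UpdateSQL1.ModifySQL.Text :=
  'UPDATE CUSTOMER SET Company = :Company WHERE CustNo = :OLD_CustNo';
UpdateSQL1.InsertSQL.Text :=
  'INSERT INTO CUSTOMER (CustNo, Company) VALUES (:CustNo, :Company)';
UpdateSQL1.DeleteSQL.Text :=
  'DELETE FROM CUSTOMER WHERE CustNo = :OLD_CustNo';
```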
If you are using the BDE to cache updates, you must use an update object.
Note For more information on using update objects, see “Using update objects to update a
dataset” on page 26-40.

Using TStoredProc
TStoredProc represents a stored procedure. It implements all of the basic functionality
introduced by TDataSet, as well as most of the special features typical of stored
procedure-type datasets. Before looking at the unique features introduced by
TStoredProc, you should familiarize yourself with the common database features
described in “Understanding datasets,” including the section on stored procedure-
type datasets that starts on page 24-50.
Because TStoredProc is a BDE-enabled dataset, it must be associated with a database
and a session. “Associating a dataset with database and session connections” on
page 26-3 describes how you form these associations. Once the dataset is associated
with a database and session, you can bind it to a particular stored procedure by
setting the StoredProcName property.

Using the Borland Database Engine 26-11



TStoredProc differs from other stored procedure-type datasets in the following ways:
• It gives you greater control over how to bind parameters.
• It provides support for Oracle overloaded stored procedures.

Binding parameters
When you prepare and execute a stored procedure, its input parameters are
automatically bound to parameters on the server.
TStoredProc lets you use the ParamBindMode property to specify how parameters
should be bound to the parameters on the server. By default ParamBindMode is set to
pbByName, meaning that parameters from the stored procedure component are
matched to those on the server by name. This is the easiest method of binding
parameters.
Some servers also support binding parameters by ordinal value, the order in which
the parameters appear in the stored procedure. In this case the order in which you
specify parameters in the parameter collection editor is significant. The first
parameter you specify is matched to the first input parameter on the server, the
second parameter is matched to the second input parameter on the server, and so on.
If your server supports parameter binding by ordinal value, you can set
ParamBindMode to pbByNumber.
Tip If you want to set ParamBindMode to pbByNumber, you need to specify the correct
parameter types in the correct order. You can view a server’s stored procedure source
code in the SQL Explorer to determine the correct order and type of parameters to
specify.
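For example, the following sketch (the parameter values are illustrative) binds parameters by ordinal position; with pbByNumber, the order in which the parameters are assigned must match the server's declaration order:

```delphi
StoredProc1.ParamBindMode := pbByNumber;
StoredProc1.Params[0].AsInteger := 1503;  { first input parameter on the server }
StoredProc1.Params[1].AsString := 'US';   { second input parameter on the server }
StoredProc1.ExecProc;
```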

Working with Oracle overloaded stored procedures


Oracle servers allow overloading of stored procedures; overloaded procedures are
different procedures with the same name. The stored procedure component’s
Overload property enables an application to specify the procedure to execute.
If Overload is zero (the default), there is assumed to be no overloading. If Overload is
one (1), then the stored procedure component executes the first stored procedure it
finds on the Oracle server that has the overloaded name; if it is two (2), it executes the
second, and so on.
Note Overloaded stored procedures may take different input and output parameters. See
your Oracle server documentation for more information.
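For example, assuming GET_CUSTOMER is an overloaded procedure name on the Oracle server (the name is illustrative), the following executes its second version:

```delphi
StoredProc1.StoredProcName := 'GET_CUSTOMER';
StoredProc1.Overload := 2;  { run the second overloaded procedure of that name }
StoredProc1.ExecProc;
```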

Connecting to databases with TDatabase


When a Delphi application uses the Borland Database Engine (BDE) to connect to a
database, that connection is encapsulated by a TDatabase component. A database
component represents the connection to a single database in the context of a BDE
session.
TDatabase performs many of the same tasks as and shares many common properties,
methods, and events with other database connection components. These
commonalities are described in Chapter 23, “Connecting to databases.”


In addition to the common properties, methods, and events, TDatabase introduces
many BDE-specific features. These features are described in the following topics.

Associating a database component with a session


All database components must be associated with a BDE session. Use the
SessionName property to establish this association. When you first create a database component
at design time, SessionName is set to “Default”, meaning that it is associated with the
default session component that is referenced by the global Session variable.
Multi-threaded or reentrant BDE applications may require more than one session. If
you need to use multiple sessions, add TSession components for each session. Then,
associate your database component with a session component by setting its
SessionName property to the session component’s SessionName property.
At runtime, you can access the session component with which the database is
associated by reading the Session property. If SessionName is blank or “Default”, then
the Session property references the same TSession instance referenced by the global
Session variable. Session enables applications to access the properties, methods, and
events of a database component’s parent session component without knowing the
session’s actual name.
For more information about BDE sessions, see “Managing database sessions” on
page 26-16.
If you are using an implicit database component, the session for that database
component is the one specified by the dataset’s SessionName property.
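For example, the following sketch uses the Session property to activate a database component’s parent session without needing to know the session’s name:

```delphi
{ Works whether Database1 uses the default session or a named one. }
if not Database1.Session.Active then
  Database1.Session.Open;
```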

Understanding database and session component interactions


In general, session component properties provide global, default behaviors that
apply to all implicit database components created at runtime. For example, the
controlling session’s KeepConnections property determines whether a database
connection is maintained even if its associated datasets are closed (the default), or if
the connections are dropped when all its datasets are closed. Similarly, the default
OnPassword event for a session guarantees that when an application attempts to
attach to a database on a server that requires a password, it displays a standard
password prompt dialog box.
Session methods apply somewhat differently. TSession methods affect all database
components, regardless of whether they are explicitly created or instantiated
implicitly by a dataset. For example, the session method DropConnections closes all
datasets belonging to a session’s database components, and then drops all database
connections, even if the KeepConnection property for individual database components
is True.
Database component methods apply only to the datasets associated with a given
database component. For example, suppose the database component Database1 is
associated with the default session. Database1.CloseDataSets() closes only those
datasets associated with Database1. Open datasets belonging to other database
components within the default session remain open.
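The difference in scope can be summarized in two lines (component names are illustrative):

```delphi
Database1.CloseDataSets;  { closes only Database1's datasets }
Session.DropConnections;  { drops every inactive connection in the session }
```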


Identifying the database


AliasName and DriverName are mutually exclusive properties that identify the
database server to which the TDatabase component connects.
• AliasName specifies the name of an existing BDE alias to use for the database
component. The alias appears in subsequent drop-down lists for dataset
components so that you can link them to a particular database component. If you
specify AliasName for a database component, any value already assigned to
DriverName is cleared because a driver name is always part of a BDE alias.
You create and edit BDE aliases using the Database Explorer or the BDE
Administration utility. For more information about creating and maintaining BDE
aliases, see the online documentation for these utilities.
• DriverName is the name of a BDE driver. A driver name is one parameter in a BDE
alias, but you may specify a driver name instead of an alias when you create a
local BDE alias for a database component using the DatabaseName property. If you
specify DriverName, any value already assigned to AliasName is cleared to avoid
potential conflicts between the driver name you specify and the driver name that
is part of the BDE alias identified in AliasName.
DatabaseName lets you provide your own name for a database connection. The name
you supply is in addition to AliasName or DriverName, and is local to your
application. DatabaseName can be a BDE alias, or, for Paradox and dBASE files, a
fully-qualified path name. Like AliasName, DatabaseName appears in subsequent
drop-down lists for dataset components to let you link them to database components.
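For example, the following runtime sketch gives a database component a local name and ties it to an existing alias (the names are illustrative):

```delphi
Database1.DatabaseName := 'MyAppDB';  { local name shown to dataset components }
Database1.AliasName := 'DBDEMOS';     { existing BDE alias; clears DriverName }
```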
At design time, to specify a BDE alias, assign a BDE driver, or create a local BDE alias,
double-click a database component to invoke the Database Properties editor.
You can enter a DatabaseName in the Name edit box in the properties editor. You can
enter an existing BDE alias name in the Alias name combo box for the Alias property,
or you can choose from existing aliases in the drop-down list. The Driver name
combo box enables you to enter the name of an existing BDE driver for the
DriverName property, or you can choose from existing driver names in the drop-
down list.
Note The Database Properties editor also lets you view and set BDE connection
parameters, and set the states of the LoginPrompt and KeepConnection properties. For
information on connection parameters, see “Setting BDE alias parameters” below.
For information on LoginPrompt, see “Controlling server login” on page 23-4. For
information on KeepConnection see “Opening a connection using TDatabase” on
page 26-15.

Setting BDE alias parameters


At design time you can create or edit connection parameters in three ways:
• Use the Database Explorer or BDE Administration utility to create or modify BDE
aliases, including parameters. For more information about these utilities, see their
online Help files.


• Double-click the Params property in the Object Inspector to invoke the String List
editor.
• Double-click a database component in a data module or form to invoke the
Database Properties editor.
All of these methods edit the Params property for the database component. Params is
a string list containing the database connection parameters for the BDE alias
associated with a database component. Some typical connection parameters include
path statement, server name, schema caching size, language driver, and SQL query
mode.
When you first invoke the Database Properties editor, the parameters for the BDE
alias are not visible. To see the current settings, click Defaults. The current
parameters are displayed in the Parameter overrides memo box. You can edit
existing entries or add new ones. To clear existing parameters, click Clear. Changes
you make take effect only when you click OK.
At runtime, an application can set alias parameters only by editing the Params
property directly. For more information about parameters specific to using SQL
Links drivers with the BDE, see your online SQL Links help file.
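For example, the following sketch edits Params directly at runtime; the parameter names shown are typical of SQL Links drivers, but the exact names depend on the driver in use:

```delphi
Database1.Close;  { changes take effect on the next connection }
Database1.Params.Values['USER NAME'] := 'SOMEUSER';
Database1.Params.Values['SERVER NAME'] := 'MYSERVER';
Database1.Open;
```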

Opening a connection using TDatabase


As with all database connection components, to connect to a database using
TDatabase, you set the Connected property to True or call the Open method. This
process is described in “Connecting to a database server” on page 23-3. Once a
database connection is established the connection is maintained as long as there is at
least one active dataset. When there are no more active datasets, the connection is
dropped unless the database component’s KeepConnection property is True.
When you connect to a remote database server from an application, the application
uses the BDE and the Borland SQL Links driver to establish the connection. (The BDE
can also communicate with an ODBC driver that you supply.) You need to configure
the SQL Links or ODBC driver for your application prior to making the connection.
SQL Links and ODBC parameters are stored in the Params property of a database
component. For information about SQL Links parameters, see the online SQL Links
User’s Guide. To edit the Params property, see “Setting BDE alias parameters” on
page 26-14.

Working with network protocols


As part of configuring the appropriate SQL Links or ODBC driver, you may need to
specify the network protocol used by the server, such as SPX/IPX or TCP/IP,
depending on the driver’s configuration options. In most cases, network protocol
configuration is handled using a server’s client setup software. For ODBC it may also
be necessary to check the driver setup using the ODBC driver manager.
Establishing an initial connection between client and server can be problematic. The
following troubleshooting checklist should be helpful if you encounter difficulties:
• Is your server’s client-side connection properly configured?
• Are the DLLs for your connection and database drivers in the search path?


• If you are using TCP/IP:


• Is your TCP/IP communications software installed? Is the proper
WINSOCK.DLL installed?
• Is the server’s IP address registered in the client’s HOSTS file?
• Is the Domain Name Services (DNS) properly configured?
• Can you ping the server?
For more troubleshooting information, see the online SQL Links User’s Guide and
your server documentation.

Using ODBC
An application can use ODBC data sources (for example, Btrieve). An ODBC driver
connection requires
• A vendor-supplied ODBC driver.
• The Microsoft ODBC Driver Manager.
• The BDE Administration utility.
To set up a BDE alias for an ODBC driver connection, use the BDE Administration
utility. For more information, see the BDE Administration utility’s online help file.

Using database components in data modules


You can safely place database components in data modules. If you put a data module
that contains a database component into the Object Repository, however, and you
want other users to be able to inherit from it, you must set the HandleShared property
of the database component to True to prevent global name space conflicts.

Managing database sessions


A BDE-based application’s database connections, drivers, cursors, queries, and so
on are maintained within the context of one or more BDE sessions. Sessions isolate a
set of database access operations, such as database connections, without the need to
start another instance of the application.
All BDE-based database applications automatically include a default session
component, named Session, that encapsulates the default BDE session. When
database components are added to the application, they are automatically associated
with the default session (note that its SessionName is “Default”). The default session
provides global control over all database components not associated with another
session, whether they are implicit (created by the session at runtime when you open a
dataset that is not associated with a database component you create) or persistent
(explicitly created by your application). The default session is not visible in your data
module or form at design time, but you can access its properties and methods in your
code at runtime.


To use the default session, you need write no code unless your application must
• Explicitly activate or deactivate a session, enabling or disabling the session’s
databases’ ability to open.
• Modify the properties of the session, such as specifying default properties for
implicitly generated database components.
• Execute a session’s methods, such as managing database connections (for example
opening and closing database connections in response to user actions).
• Respond to session events, such as when the application attempts to access a
password-protected Paradox or dBASE table.
• Set Paradox directory locations such as the NetFileDir property to access Paradox
tables on a network and the PrivateDir property to a local hard drive to speed
performance.
• Manage the BDE aliases that describe possible database connection configurations
for databases and datasets that use the session.
Whether you add database components to an application at design time or create
them dynamically at runtime, they are automatically associated with the default
session unless you specifically assign them to a different session. If you open a
dataset that is not associated with a database component, Delphi automatically
• Creates a database component for it at runtime.
• Associates the database component with the default session.
• Initializes some of the database component’s key properties based on the default
session’s properties. Among the most important of these properties is
KeepConnections, which determines when database connections are maintained or
dropped by an application.
The default session provides a widely applicable set of defaults that can be used as is
by most applications. You need only associate a database component with an
explicitly named session if the component performs a simultaneous query against a
database already opened by the default session. In this case, each concurrent query
must run under its own session. Multi-threaded database applications also require
multiple sessions, where each thread has its own session.
Applications can create additional session components as needed. BDE-based
database applications automatically include a session list component, named
Sessions, that you can use to manage all of your session components. For more
information about managing multiple sessions see, “Managing multiple sessions” on
page 26-29.
You can safely place session components in data modules. If you put a data module
that contains one or more session components into the Object Repository, however,
make sure to set the AutoSessionName property to True to avoid namespace conflicts
when users inherit from it.


Activating a session
Active is a Boolean property that determines if database and dataset components
associated with a session are open. You can use this property to read the current state
of a session’s database and dataset connections, or to change it. If Active is False (the
default), all databases and datasets associated with the session are closed. If True,
databases and datasets are open.
A session is activated when it is first created and, subsequently, whenever its Active
property changes from False to True (for example, when a database or dataset
associated with the session is opened and there are currently no other open databases
or datasets). Setting Active to True triggers a session’s OnStartup event, registers the
Paradox directory locations with the BDE, and registers the ConfigMode property,
which determines what BDE aliases are available within the session. You can write
an OnStartup event handler to initialize the NetFileDir, PrivateDir, and ConfigMode
properties before they are registered with the BDE, or to perform other specific
session start-up activities. For information about the NetFileDir and PrivateDir
properties, see “Specifying Paradox directory locations” on page 26-24. For
information about ConfigMode, see “Working with BDE aliases” on page 26-25.
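For example, an OnStartup event handler might set the Paradox directory locations before they are registered with the BDE (the component name and paths are illustrative):

```delphi
procedure TDataModule1.Session1Startup(Sender: TObject);
begin
  Session1.NetFileDir := '\\FileServer\PdoxNet';  { network control file dir }
  Session1.PrivateDir := 'C:\Temp\PdoxPriv';      { local dir for temp files }
end;
```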
Once a session is active, you can open its database connections by calling the
OpenDatabase method.
For session components you place in a data module or form, setting Active to False
when there are open databases or datasets closes them. At runtime, closing databases
and datasets may trigger events associated with them.
Note You cannot set Active to False for the default session at design time. While you can
close the default session at runtime, it is not recommended.
You can also use a session’s Open and Close methods to activate or deactivate sessions
other than the default session at runtime. For example, the following single line of
code closes all open databases and datasets for a session:
Session1.Close;
This code sets Session1’s Active property to False. When a session’s Active property is
False, any subsequent attempt by the application to open a database or dataset resets
Active to True and calls the session’s OnStartup event handler if it exists. You can also
explicitly code session reactivation at runtime. The following code reactivates
Session1:
Session1.Open;
Note If a session is active you can also open and close individual database connections. For
more information, see “Closing database connections” on page 26-20.

Specifying default database connection behavior


KeepConnections provides the default value for the KeepConnection property of
implicit database components created at runtime. KeepConnection specifies what
happens to a database connection established for a database component when all its
datasets are closed. If True (the default), a constant, or persistent, database connection
is maintained even if no dataset is active. If False, a database connection is dropped as
soon as all its datasets are closed.
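For example, to have implicit connections dropped as soon as their datasets close, change the session default before any implicit database components are created:

```delphi
Session.KeepConnections := False;  { implicit connections close with their datasets }
```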


Note Connection persistence for a database component you explicitly place in a data
module or form is controlled by that database component’s KeepConnection property.
If set differently, KeepConnection for a database component always overrides the
KeepConnections property of the session. For more information about controlling
individual database connections within a session, see “Managing database
connections” on page 26-19.
KeepConnections should be set to True for applications that frequently open and close
all datasets associated with a database on a remote server. This setting reduces
network traffic and speeds data access because it means that a connection need only
be opened and closed once during the lifetime of the session. Otherwise, every time
the application closes or reestablishes a connection, it incurs the overhead of
attaching and detaching the database.
Note Even when KeepConnections is True for a session, you can close and free inactive
database connections for all implicit database components by calling the
DropConnections method. For more information about DropConnections, see
“Dropping inactive database connections” on page 26-20.

Managing database connections


You can use a session component to manage the database connections within it. The
session component includes properties and methods you can use to
• Open database connections.
• Close database connections.
• Close and free all inactive temporary database connections.
• Locate specific database connections.
• Iterate through all open database connections.

Opening database connections


To open a database connection within a session, call the OpenDatabase method.
OpenDatabase takes one parameter, the name of the database to open. This name is a
BDE alias or the name of a database component. For Paradox or dBASE, the name can
also be a fully qualified path name. For example, the following statement uses the
default session and attempts to open a database connection for the database pointed
to by the DBDEMOS alias:
var
  DBDemosDatabase: TDatabase;
begin
  DBDemosDatabase := Session.OpenDatabase('DBDEMOS');
  ...
OpenDatabase activates the session if it is not already active, and then checks if the
specified database name matches the DatabaseName property of any database
components for the session. If the name does not match an existing database
component, OpenDatabase creates a temporary database component using the
specified name. Finally, OpenDatabase calls the Open method of the database
component to connect to the server. Each call to OpenDatabase increments a reference
count for the database by 1. As long as this reference count remains greater than 0,
the database is open.


Closing database connections


To close an individual database connection, call the CloseDatabase method. When you
call CloseDatabase, the reference count for the database, which is incremented when
you call OpenDatabase, is decremented by 1. When the reference count for a database
is 0, the database is closed. CloseDatabase takes one parameter, the database to close. If
you opened the database using the OpenDatabase method, this parameter can be set to
the return value of OpenDatabase.
Session.CloseDatabase(DBDemosDatabase);
If the specified database name is associated with a temporary (implicit) database
component, and the session’s KeepConnections property is False, the database
component is freed, effectively closing the connection.
Note If KeepConnections is False temporary database components are closed and freed
automatically when the last dataset associated with the database component is
closed. An application can always call CloseDatabase prior to that time to force
closure. To free temporary database components when KeepConnections is True, call
the database component’s Close method, and then call the session’s DropConnections
method.
Note Calling CloseDatabase for a persistent database component does not actually close the
connection. To close the connection, call the database component’s Close method
directly.
There are two ways to close all database connections within the session:
• Set the Active property for the session to False.
• Call the Close method for the session.
When you set Active to False, Delphi automatically calls the Close method. Close
disconnects from all active databases by freeing temporary database components and
calling each persistent database component’s Close method. Finally, Close sets the
session’s BDE handle to nil.

Dropping inactive database connections


If the KeepConnections property for a session is True (the default), then database
connections for temporary database components are maintained even if all the
datasets used by the component are closed. You can eliminate these connections and
free all inactive temporary database components for a session by calling the
DropConnections method. For example, the following code frees all inactive,
temporary database components for the default session:
Session.DropConnections;
Temporary database components for which one or more datasets are active are not
dropped or freed by this call. To free these components, call Close.

Searching for a database connection


Use a session’s FindDatabase method to determine whether a specified database
component is already associated with a session. FindDatabase takes one parameter,
the name of the database to search for. This name is a BDE alias or database
component name. For Paradox or dBASE, it can also be a fully-qualified path name.


FindDatabase returns the database component if it finds a match. Otherwise it returns
nil.
The following code searches the default session for a database component using the
DBDEMOS alias, and if it is not found, creates one and opens it:
var
  DB: TDatabase;
begin
  DB := Session.FindDatabase('DBDEMOS');
  if (DB = nil) then { database doesn't exist for session so,}
    DB := Session.OpenDatabase('DBDEMOS'); { create and open it}
  if Assigned(DB) and DB.Connected then begin
    DB.StartTransaction;
    ...
  end;
end;

Iterating through a session’s database components


You can use two session component properties, Databases and DatabaseCount, to cycle
through all the active database components associated with a session.
Databases is an array of all currently active database components associated with a
session. DatabaseCount is the number of databases in that array. As connections are
opened or closed during a session’s life-span, the values of Databases and
DatabaseCount change. For example, if a session’s KeepConnections property is False
and all database components are created as needed at runtime, each time a unique
database is opened, DatabaseCount increases by one. Each time a unique database is
closed, DatabaseCount decreases by one. If DatabaseCount is zero, there are no
currently active database components for the session.
The following example code sets the KeepConnection property of each active database
in the default session to True:
var
  MaxDbCount: Integer;
begin
  with Session do
    if (DatabaseCount > 0) then
      for MaxDbCount := 0 to (DatabaseCount - 1) do
        Databases[MaxDbCount].KeepConnection := True;
end;

Working with password-protected Paradox and dBASE tables


A session component can store passwords for password-protected Paradox and
dBASE tables. Once you add a password to the session, your application can open
tables protected by that password. Once you remove the password from the session,
your application can’t open tables that use the password until you add it again.


Using the AddPassword method


The AddPassword method provides an optional way for an application to provide a
password for a session prior to opening an encrypted Paradox or dBASE table that
requires a password for access. If you do not add the password to the session, when
your application attempts to open a password-protected table, a dialog box prompts
the user for a password.
AddPassword takes one parameter, a string containing the password to use. You can
call AddPassword as many times as necessary to add passwords (one at a time) to
access tables protected with different passwords.
var
  Passwrd: String;
begin
  Passwrd := InputBox('Enter password', 'Password:', '');
  Session.AddPassword(Passwrd);
  try
    Table1.Open;
  except
    ShowMessage('Could not open table!');
    Application.Terminate;
  end;
end;
Note Use of the InputBox function, above, is for demonstration purposes. In a real-world
application, use password entry facilities that mask the password as it is entered,
such as the PasswordDialog function or a custom form.
The Add button in the dialog displayed by the PasswordDialog function has the same
effect as calling the AddPassword method:
if PasswordDialog(Session) then
  Table1.Open
else
  ShowMessage('No password given, could not open table!');

Using the RemovePassword and RemoveAllPasswords methods


RemovePassword deletes a previously added password from memory.
RemovePassword takes one parameter, a string containing the password to delete.
Session.RemovePassword('secret');
RemoveAllPasswords deletes all previously added passwords from memory.
Session.RemoveAllPasswords;

Using the GetPassword method and OnPassword event


The OnPassword event allows you to control how your application supplies
passwords for Paradox and dBASE tables when they are required. Provide a handler
for the OnPassword event if you want to override the default password handling
behavior. If you do not provide a handler, Delphi presents a default dialog for
entering a password and no special behavior is provided—the table open attempt
either succeeds or an exception is raised.

26-22 Developer’s Guide



If you provide a handler for the OnPassword event, do two things in the event
handler: call the AddPassword method and set the event handler’s Continue parameter
to True. The AddPassword method passes a string to the session to be used as a
password for the table. The Continue parameter indicates to Delphi that no further
password prompting need be done for this table open attempt. The default value for
Continue is False, and so requires explicitly setting it to True. If Continue is False after
the event handler has finished executing, an OnPassword event fires again—even if a
valid password has been passed using AddPassword. If Continue is True after
execution of the event handler and the string passed with AddPassword is not the
valid password, the table open attempt fails and an exception is raised.
OnPassword can be triggered by two circumstances. The first is an attempt to open a
password-protected table (dBASE or Paradox) when a valid password has not
already been supplied to the session. (If a valid password for that table has already
been supplied, the OnPassword event does not occur.)
The other circumstance is a call to the GetPassword method. GetPassword either generates
an OnPassword event, or, if the session does not have an OnPassword event handler, displays
a default password dialog. It returns True if the OnPassword event handler or default dialog
added a password to the session, and False if no entry at all was made.
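For example, a minimal sketch of calling GetPassword before a table open might look
like the following (Table1 stands in for any password-protected table component):

```pascal
{ Sketch: prompt for a password up front via GetPassword. GetPassword
  triggers the OnPassword handler, or shows the default dialog if the
  session has no handler. Skip the open attempt if nothing was entered. }
if Session.GetPassword then
  Table1.Open  { the password just added to the session is tried }
else
  ShowMessage('No password supplied; table not opened.');
```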
In the following example, the Password method is designated as the OnPassword event
handler for the default session by assigning it to the global Session object’s
OnPassword property.
procedure TForm1.FormCreate(Sender: TObject);
begin
Session.OnPassword := Password;
end;
In the Password method, the InputBox function prompts the user for a password. The
AddPassword method then programmatically supplies the password entered in the
dialog to the session.
procedure TForm1.Password(Sender: TObject; var Continue: Boolean);
var
  Passwrd: String;
begin
  Passwrd := InputBox('Enter password', 'Password:', '');
  Continue := (Passwrd > '');
  Session.AddPassword(Passwrd);
end;
The OnPassword event (and thus the Password event handler) is triggered by an
attempt to open a password-protected table, as demonstrated below. Even though
the user is prompted for a password in the handler for the OnPassword event, the
table open attempt can still fail if they enter an invalid password or something else
goes wrong.
procedure TForm1.OpenTableBtnClick(Sender: TObject);
const
  CRLF = #13 + #10;
begin
  try
    Table1.Open; { this line triggers the OnPassword event }
  except
    on E: Exception do begin { exception if cannot open table }
      ShowMessage('Error!' + CRLF + { display error explaining what happened }
        E.Message + CRLF +
        'Terminating application...');
      Application.Terminate; { end the application }
    end;
  end;
end;

Specifying Paradox directory locations


Two session component properties, NetFileDir and PrivateDir, are specific to
applications that work with Paradox tables.
NetFileDir specifies the directory that contains the Paradox network control file,
PDOXUSRS.NET. This file governs sharing of Paradox tables on network drives. All
applications that need to share Paradox tables must specify the same directory for the
network control file (typically a directory on a network file server). Delphi derives a
value for NetFileDir from the Borland Database Engine (BDE) configuration file for a
given database alias. If you set NetFileDir yourself, the value you supply overrides
the BDE configuration setting, so be sure to validate the new value.
At design time, you can specify a value for NetFileDir in the Object Inspector. You can
also set or change NetFileDir in code at runtime. The following code sets NetFileDir
for the default session to the location of the directory from which your application
runs:
Session.NetFileDir := ExtractFilePath(Application.EXEName);
Note NetFileDir can only be changed when an application does not have any open Paradox
files. If you change NetFileDir at runtime, verify that it points to a valid network
directory that is shared by your network users.
PrivateDir specifies the directory for storing temporary table processing files, such as
those generated by the BDE to handle local SQL statements. If no value is specified
for the PrivateDir property, the BDE automatically uses the current directory at the
time it is initialized. If your application runs directly from a network file server, you
can improve application performance at runtime by setting PrivateDir to a user’s local
hard drive before opening the database.
Note Do not set PrivateDir at design time and then open the session in the IDE. Doing so
generates a Directory is busy error when running your application from the IDE.
The following code changes the setting of the default session’s PrivateDir property to
a user’s C:\TEMP directory:
Session.PrivateDir := 'C:\TEMP';
Important Do not set PrivateDir to a root directory on a drive. Always specify a subdirectory.


Working with BDE aliases


Each database component associated with a session has a BDE alias (although
optionally a fully-qualified path name may be substituted for an alias when accessing
Paradox and dBASE tables). A session can create, modify, and delete aliases during
its lifetime.
The AddAlias method creates a new BDE alias for an SQL database server. AddAlias
takes three parameters: a string containing a name for the alias, a string that specifies
the SQL Links driver to use, and a string list populated with parameters for the alias.
For example, the following statements use AddAlias to add a new alias for accessing
an InterBase server to the default session:
var
  AliasParams: TStringList;
begin
  AliasParams := TStringList.Create;
  try
    with AliasParams do begin
      Add('OPEN MODE=READ');
      Add('USER NAME=TOMSTOPPARD');
      Add('SERVER NAME=ANIMALS:/CATS/PEDIGREE.GDB');
    end;
    Session.AddAlias('CATS', 'INTRBASE', AliasParams);
    ...
  finally
    AliasParams.Free;
  end;
end;
AddStandardAlias creates a new BDE alias for Paradox, dBASE, or ASCII tables.
AddStandardAlias takes three string parameters: the name for the alias, the fully-
qualified path to the Paradox or dBASE table to access, and the name of the default
driver to use when attempting to open a table that does not have an extension. For
example, the following statement uses AddStandardAlias to create a new alias for
accessing a Paradox table:
Session.AddStandardAlias('MYDBDEMOS', 'C:\TESTING\DEMOS\', 'Paradox');
When you add an alias to a session, the BDE stores a copy of the alias in memory,
where it is only available to this session and any other sessions with cfmPersistent
included in the ConfigMode property. ConfigMode is a set that describes which types
of aliases can be used by the databases in the session. The default setting is cmAll,
which translates into the set [cfmVirtual, cfmPersistent, cfmSession]. If ConfigMode is
cmAll, a session can see all aliases created within the session (cfmSession), all aliases in
the BDE configuration file on a user’s system (cfmPersistent), and all aliases that the
BDE maintains in memory (cfmVirtual). You can change ConfigMode to restrict what
BDE aliases the databases in a session can use. For example, setting ConfigMode to
cfmSession restricts a session’s view of aliases to those created within the session. All
other aliases in the BDE configuration file and in memory are not available.


To make a newly created alias available to all sessions and to other applications, use
the session’s SaveConfigFile method. SaveConfigFile writes aliases in memory to the
BDE configuration file where they can be read and used by other BDE-enabled
applications.
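For instance, a brief sketch that persists a newly created alias (the alias name and
path here are hypothetical examples, not part of the original text):

```pascal
{ Sketch: create a standard alias and persist it so other BDE-enabled
  applications can see it. Alias name and path are illustrative. }
Session.AddStandardAlias('SHAREDDATA', 'C:\DATA\', 'PARADOX');
Session.SaveConfigFile;  { writes in-memory aliases to the BDE configuration file }
```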
After you create an alias, you can make changes to its parameters by calling
ModifyAlias. ModifyAlias takes two parameters: the name of the alias to modify and a
string list containing the parameters to change and their values. For example, the
following statements use ModifyAlias to change the OPEN MODE parameter for the
CATS alias to READ/WRITE in the default session:
var
  List: TStringList;
begin
  List := TStringList.Create;
  with List do begin
    Clear;
    Add('OPEN MODE=READ/WRITE');
  end;
  Session.ModifyAlias('CATS', List);
  List.Free;
  ...
To delete an alias previously created in a session, call the DeleteAlias method.
DeleteAlias takes one parameter, the name of the alias to delete. DeleteAlias makes an
alias unavailable to the session.
Note DeleteAlias does not remove an alias from the BDE configuration file if the alias was
written to the file by a previous call to SaveConfigFile. To remove the alias from the
configuration file after calling DeleteAlias, call SaveConfigFile again.
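Combining the two calls, a sketch that removes a previously persisted alias completely
might look like this (the alias name is a hypothetical example):

```pascal
{ Sketch: remove the alias from the session, then rewrite the
  configuration file so the alias does not reappear on the next run. }
Session.DeleteAlias('SHAREDDATA');  { hypothetical alias name }
Session.SaveConfigFile;
```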
Session components provide five methods for retrieving information about BDE
aliases, including parameter information and driver information. They are:
• GetAliasNames, to list the aliases to which a session has access.
• GetAliasParams, to list the parameters for a specified alias.
• GetAliasDriverName, to return the name of the BDE driver used by the alias.
• GetDriverNames, to return a list of all BDE drivers available to the session.
• GetDriverParams, to return driver parameters for a specified driver.
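As an illustration, the parameters of a specific alias can be retrieved into a string
list using the same pattern as the other examples in this chapter (the 'CATS' alias
name assumes the alias created earlier):

```pascal
var
  Params: TStringList;
begin
  Params := TStringList.Create;
  try
    Session.GetAliasParams('CATS', Params);  { 'CATS' as created earlier }
    ShowMessage(Params.Text);                { e.g. SERVER NAME=..., USER NAME=... }
  finally
    Params.Free;
  end;
end;
```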
For more information about using a session’s informational methods, see “Retrieving
information about a session” below. For more information about BDE aliases and the
SQL Links drivers with which they work, see the BDE online help, BDE32.HLP.


Retrieving information about a session


You can retrieve information about a session and its database components by using a
session’s informational methods. For example, one method retrieves the names of all
aliases known to the session, and another method retrieves the names of tables
associated with a specific database component used by the session. Table 26.4
summarizes the informational methods of a session component:

Table 26.4 Database-related informational methods for session components

Method               Purpose
GetAliasDriverName   Retrieves the BDE driver for a specified alias of a database.
GetAliasNames        Retrieves the list of BDE aliases for a database.
GetAliasParams       Retrieves the list of parameters for a specified BDE alias of a database.
GetConfigParams      Retrieves configuration information from the BDE configuration file.
GetDatabaseNames     Retrieves the list of BDE aliases and the names of any TDatabase components currently in use.
GetDriverNames       Retrieves the names of all currently installed BDE drivers.
GetDriverParams      Retrieves the list of parameters for a specified BDE driver.
GetStoredProcNames   Retrieves the names of all stored procedures for a specified database.
GetTableNames        Retrieves the names of all tables matching a specified pattern for a specified database.
GetFieldNames        Retrieves the names of all fields in a specified table in a specified database.

Except for GetAliasDriverName, these methods return a set of values into a string list
declared and maintained by your application. (GetAliasDriverName returns a single
string, the name of the current BDE driver for a particular database component used
by the session.)
For example, the following code retrieves the names of all database components and
aliases known to the default session:
var
  List: TStringList;
begin
  List := TStringList.Create;
  try
    Session.GetDatabaseNames(List);
    ...
  finally
    List.Free;
  end;
end;


Creating additional sessions


You can create sessions to supplement the default session. At design time, you can
place additional sessions on a data module (or form), set their properties in the
Object Inspector, write event handlers for them, and write code that calls their
methods. You can also create sessions, set their properties, and call their methods at
runtime.
Note Creating additional sessions is optional unless an application runs concurrent queries
against a database or the application is multi-threaded.
To enable dynamic creation of a session component at runtime, follow these steps:
1 Declare a TSession variable.
2 Instantiate a new session by calling the Create method. The constructor sets up an
empty list of database components for the session, sets the KeepConnections
property to True, and adds the session to the list of sessions maintained by the
application’s session list component.
3 Set the SessionName property for the new session to a unique name. This property
is used to associate database components with the session. For more information
about the SessionName property, see “Naming a session” on page 26-29.
4 Activate the session and optionally adjust its properties.
You can also create and open sessions using the OpenSession method of TSessionList.
Using OpenSession is safer than calling Create, because OpenSession only creates a
session if it does not already exist. For information about OpenSession, see “Managing
multiple sessions” on page 26-29.
The following code creates a new session component, assigns it a name, and opens
the session for database operations that follow (not shown here). After use, it is
destroyed with a call to the Free method.
Note Never delete the default session.
var
  SecondSession: TSession;
begin
  SecondSession := TSession.Create(Form1);
  with SecondSession do
    try
      SessionName := 'SecondSession';
      KeepConnections := False;
      Open;
      ...
    finally
      SecondSession.Free;
    end;
end;


Naming a session
A session’s SessionName property is used to name the session so that you can
associate databases and datasets with it. For the default session, SessionName is
“Default.” For each additional session component you create, you must set its
SessionName property to a unique value.
Database and dataset components have SessionName properties that correspond to
the SessionName property of a session component. If you leave the SessionName
property blank for a database or dataset component it is automatically associated
with the default session. You can also set SessionName for a database or dataset
component to a name that corresponds to the SessionName of a session component
you create.
The following code uses the OpenSession method of the default TSessionList
component, Sessions, to open a new session named “InterBaseSession” and then
associates the existing database component Database1 with that session:
var
  IBSession: TSession;
  ...
begin
  IBSession := Sessions.OpenSession('InterBaseSession');
  Database1.SessionName := 'InterBaseSession';
end;

Managing multiple sessions


If you create a single application that uses multiple threads to perform database
operations, you must create one additional session for each thread. The BDE page on
the Component palette contains a session component that you can place in a data
module or on a form at design time.
Important When you place a session component, you must also set its SessionName property to a
unique value so that it does not conflict with the default session’s SessionName
property.
Placing a session component at design time presupposes that the number of threads
(and therefore sessions) required by the application at runtime is static. More likely,
however, is that an application needs to create sessions dynamically. To create
sessions dynamically, call the OpenSession method of the global Sessions object at
runtime.
OpenSession requires a single parameter, a name for the session that is unique across
all session names for the application. The following code dynamically creates and
activates a new session with a uniquely generated name:
Sessions.OpenSession('RunTimeSession' + IntToStr(Sessions.Count + 1));
This statement generates a unique name for a new session by retrieving the current
number of sessions, and adding one to that value. Note that if you dynamically create
and destroy sessions at runtime, this example code will not work as expected.
Nevertheless, this example illustrates how to use the properties and methods of
Sessions to manage multiple sessions.


Sessions is a variable of type TSessionList that is automatically instantiated for
BDE-based database applications. You use the properties and methods of Sessions to
keep track of multiple sessions in a multi-threaded database application. Table 26.5
summarizes the properties and methods of the TSessionList component:

Table 26.5 TSessionList properties and methods

Property or Method   Purpose
Count                Returns the number of sessions, both active and inactive, in the session list.
FindSession          Searches for a session with a specified name and returns a pointer to it, or nil if there is no session with the specified name. If passed a blank session name, FindSession returns a pointer to the default session, Session.
GetSessionNames      Populates a string list with the names of all currently instantiated session components. This procedure always adds at least one string, “Default”, for the default session.
List                 Returns the session component for a specified session name. If there is no session with the specified name, an exception is raised.
OpenSession          Creates and activates a new session or reactivates an existing session for a specified session name.
Sessions             Accesses the session list by ordinal value.

As an example of using Sessions properties and methods in a multi-threaded
application, consider what happens when you want to open a database connection.
To determine if a connection already exists, use the Sessions property to walk through
each session in the sessions list, starting with the default session. For each session
component, examine its Databases property to see if the database in question is open.
If you discover that another thread is already using the desired database, examine
the next session in the list.
If an existing thread is not using the database, then you can open the connection
within that session.
If, on the other hand, all existing threads are using the database, you must open a
new session in which to open another database connection.
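The search described above can be sketched as follows; the function name and the
session-naming scheme are illustrative, and only the documented Sessions, Databases,
and OpenSession members are used:

```pascal
{ Sketch: find a session in which the named database is not yet open.
  If every existing session already uses it, open a new session. }
function FindFreeSession(const DbName: string): TSession;
var
  I, J: Integer;
  InUse: Boolean;
begin
  for I := 0 to Sessions.Count - 1 do
  begin
    Result := Sessions.Sessions[I];
    InUse := False;
    for J := 0 to Result.DatabaseCount - 1 do
      if CompareText(Result.Databases[J].DatabaseName, DbName) = 0 then
      begin
        InUse := True;  { another thread already uses this database here }
        Break;
      end;
    if not InUse then
      Exit;  { this session can open the connection }
  end;
  { all existing sessions use the database: create a fresh one }
  Result := Sessions.OpenSession('Session' + IntToStr(Sessions.Count + 1));
end;
```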
If you are replicating a data module that contains a session in a multi-threaded
application, where each thread contains its own copy of the data module, you can use
the AutoSessionName property to make sure that all datasets in the data module use
the correct session. Setting AutoSessionName to True causes the session to generate its
own unique name dynamically when it is created at runtime. It then assigns this
name to every dataset in the data module, overriding any explicitly set session
names. This ensures that each thread has its own session, and each dataset uses the
session in its own data module.


Using transactions with the BDE


By default, the BDE provides implicit transaction control for your applications. When
an application is under implicit transaction control, a separate transaction is used for
each record in a dataset that is written to the underlying database. Implicit
transactions guarantee both a minimum of record update conflicts and a consistent
view of the database. On the other hand, because each row of data written to a
database takes place in its own transaction, implicit transaction control can lead to
excessive network traffic and slower application performance. Also, implicit
transaction control will not protect logical operations that span more than one record.
If you explicitly control transactions, you can choose the most effective times to start,
commit, and roll back your transactions. When you develop applications in a multi-
user environment, particularly when your applications run against a remote SQL
server, you should control transactions explicitly.
There are two mutually exclusive ways to control transactions explicitly in a BDE-
based database application:
• Use the database component to control transactions. The main advantage to using
the methods and properties of a database component is that it provides a clean,
portable application that is not dependent on a particular database or server. This
type of transaction control is supported by all database connection components,
and described in “Managing transactions” on page 23-6.
• Use passthrough SQL in a query component to pass SQL statements directly to
remote SQL or ODBC servers. The main advantage to passthrough SQL is that you
can use the advanced transaction management capabilities of a particular database
server, such as schema caching. To understand the advantages of your server’s
transaction management model, see your database server documentation. For
more information about using passthrough SQL, see “Using passthrough SQL”
below.
When working with local databases, you can only use the database component to
create explicit transactions (local databases do not support passthrough SQL).
However, there are limitations to using local transactions. For more information on
using local transactions, see “Using local transactions” on page 26-32.
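As a sketch of the database-component approach, an explicit transaction follows the
usual start/commit/rollback pattern; Database1 and the elided edits are placeholders:

```pascal
{ Sketch: wrap a multi-record change in one explicit transaction. }
Database1.StartTransaction;
try
  { ... post several record changes here ... }
  Database1.Commit;    { make all changes permanent at once }
except
  Database1.Rollback;  { undo everything if any step fails }
  raise;
end;
```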
Note You can minimize the number of transactions you need by caching updates. For
more information about cached updates, see “Using a client dataset to cache
updates” and “Using the BDE to cache updates” on page 26-33.


Using passthrough SQL


With passthrough SQL, you use a TQuery, TStoredProc, or TUpdateSQL component to send
an SQL transaction control statement directly to a remote database server. The BDE does not
process the SQL statement. Using passthrough SQL enables you to take direct advantage of
the transaction controls offered by your server, especially when those controls are non-
standard.
To use passthrough SQL to control a transaction, you must
• Install the proper SQL Links drivers. If you chose the “Typical” installation when
installing Delphi, all SQL Links drivers are already properly installed.
• Configure your network protocol. See your network administrator for more
information.
• Have access to a database on a remote server.
• Set SQLPASSTHRU MODE to NOT SHARED using the SQL Explorer.
SQLPASSTHRU MODE specifies whether the BDE and passthrough SQL
statements can share the same database connections. In most cases,
SQLPASSTHRU MODE is set to SHARED AUTOCOMMIT. However, you can’t
share database connections when using transaction control statements. For more
information about SQLPASSTHRU modes, see the help file for the BDE
Administration utility.
Note When SQLPASSTHRU MODE is NOT SHARED, you must use separate database
components for datasets that pass SQL transaction statements to the server and
datasets that do not.
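For example, against an InterBase server you might pass the server’s own transaction
statements through a query component. The exact statements are server-specific, so
treat the SQL below as an assumption to verify against your server documentation:

```pascal
{ Sketch: pass server-native transaction control statements directly.
  Syntax varies by server; these InterBase-style statements are examples. }
Query1.SQL.Text := 'SET TRANSACTION';
Query1.ExecSQL;
{ ... issue data-modifying statements ... }
Query1.SQL.Text := 'COMMIT';
Query1.ExecSQL;
```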

Using local transactions


The BDE supports local transactions against Paradox, dBASE, Access, and FoxPro
tables. From a coding perspective, there is no difference to you between a local
transaction and a transaction against a remote database server.
Note When using transactions with local Paradox, dBASE, Access, and FoxPro tables, set
TransIsolation to tiDirtyRead instead of using the default value of tiReadCommitted. A
BDE error is returned if TransIsolation is set to anything but tiDirtyRead for local
tables.
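For example, a sketch of setting the required isolation level before starting a local
transaction (Database1 is a placeholder):

```pascal
{ Sketch: local tables require dirty-read isolation. }
Database1.TransIsolation := tiDirtyRead;  { required for Paradox/dBASE tables }
Database1.StartTransaction;
{ ... make changes ... }
Database1.Commit;  { or Database1.Rollback to restore old record buffers }
```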
When a transaction is started against a local table, updates performed against the
table are logged. Each log record contains the old record buffer for a record. When a
transaction is active, records that are updated are locked until the transaction is
committed or rolled back. On rollback, old record buffers are applied against
updated records to restore them to their pre-update states.
Local transactions are more limited than transactions against SQL servers or ODBC
drivers. In particular, the following limitations apply to local transactions:
• Automatic crash recovery is not provided.
• Data definition statements are not supported.

• Transactions cannot be run against temporary tables.
• TransIsolation level must only be set to tiDirtyRead.
• For Paradox, local transactions can only be performed on tables with valid
indexes. Data cannot be rolled back on Paradox tables that do not have indexes.
• Only a limited number of records can be locked and modified. With Paradox
tables, you are limited to 255 records. With dBASE the limit is 100.
• Transactions cannot be run against the BDE ASCII driver.
• Closing a cursor on a table during a transaction rolls back the transaction unless:
• Several tables are open.
• The cursor is closed on a table to which no changes were made.

Using the BDE to cache updates


The recommended approach for caching updates is to use a client dataset
(TBDEClientDataSet) or to connect the BDE-enabled dataset to a client dataset using a dataset
provider. The advantages of using a client dataset are discussed in “Using a client
dataset to cache updates” on page 29-16.
For simple cases, however, you may choose to use the BDE to cache updates instead.
BDE-enabled datasets and TDatabase components provide built-in properties,
methods, and events for handling cached updates. Most of these correspond directly
to the properties, methods, and events that you use with client datasets and dataset
providers when using a client dataset to cache updates. The following table lists these
properties, events, and methods and the corresponding properties, methods and
events on TBDEClientDataSet:

Table 26.6 Properties, methods, and events for cached updates

On BDE-enabled datasets (or TDatabase) | On TBDEClientDataSet | Purpose
CachedUpdates | Not needed for client datasets, which always cache updates. | Determines whether cached updates are in effect for the dataset.
UpdateObject | Use a BeforeUpdateRecord event handler, or, if using TClientDataSet, use the UpdateObject property on the BDE-enabled source dataset. | Specifies the update object for updating read-only datasets.
UpdatesPending | ChangeCount | Indicates whether the local cache contains updated records that need to be applied to the database.
UpdateRecordTypes | StatusFilter | Indicates the kind of updated records to make visible when applying cached updates.
UpdateStatus | UpdateStatus | Indicates if a record is unchanged, modified, inserted, or deleted.
OnUpdateError | OnReconcileError | An event for handling update errors on a record-by-record basis.
OnUpdateRecord | BeforeUpdateRecord | An event for processing updates on a record-by-record basis.
ApplyUpdates (dataset and database) | ApplyUpdates | Applies records in the local cache to the database.
CancelUpdates | CancelUpdates | Removes all pending updates from the local cache without applying them.
CommitUpdates | Reconcile | Clears the update cache following successful application of updates.
FetchAll | GetNextPacket (and PacketRecords) | Copies database records to the local cache for editing and updating.
RevertRecord | RevertRecord | Undoes updates to the current record if updates are not yet applied.

For an overview of the cached update process, see “Overview of using cached
updates” on page 29-17.
Note Even if you are using a client dataset to cache updates, you may want to read the
section about update objects on page 26-40. You can use update objects in the
BeforeUpdateRecord event handler of TBDEClientDataSet or TDataSetProvider to apply
updates from stored procedures or multi-table queries.

Enabling BDE-based cached updates


To use the BDE for cached updates, the BDE-enabled dataset must indicate that it
should cache updates. This is specified by setting the CachedUpdates property to True.
When you enable cached updates, a copy of all records is cached in local memory.
Users view and edit this local copy of data. Changes, insertions, and deletions are
also cached in memory. They accumulate in memory until the application applies
those changes to the database server. If changed records are successfully applied to
the database, the record of those changes is freed in the cache.
The dataset caches all updates until you set CachedUpdates to False. Applying cached
updates does not disable further cached updates; it only writes the current set of
changes to the database and clears them from memory. Canceling the updates by
calling CancelUpdates removes all the changes currently in the cache, but does not
stop the dataset from caching any subsequent changes.
Note If you disable cached updates by setting CachedUpdates to False, any pending changes
that you have not yet applied are discarded without notification. To prevent losing
changes, test the UpdatesPending property before disabling cached updates.
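A sketch of that test follows; the dataset, database component, and prompt text are
illustrative:

```pascal
{ Sketch: guard against silently discarding pending edits. }
if Table1.UpdatesPending then
  if MessageDlg('Apply pending changes before leaving cached-update mode?',
      mtConfirmation, [mbYes, mbNo], 0) = mrYes then
    Database1.ApplyUpdates([Table1]);  { write the cache to the server }
Table1.CachedUpdates := False;
```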


Applying BDE-based cached updates


Applying updates is a two-phase process that should occur in the context of a
database component’s transaction so that your application can recover gracefully
from errors. For information about transaction handling with database components,
see “Managing transactions” on page 23-6.
When applying updates under database transaction control, the following events
take place:
1 A database transaction starts.
2 Cached updates are written to the database (phase 1). If you provide it, an
OnUpdateRecord event is triggered once for each record written to the database. If
an error occurs when a record is applied to the database, the OnUpdateError event
is triggered if you provide one.
3 The transaction is committed if writes are successful or rolled back if they are not:
If the database write is successful:
• Database changes are committed, ending the database transaction.
• Cached updates are committed, clearing the internal cache buffer (phase 2).
If the database write is unsuccessful:
• Database changes are rolled back, ending the database transaction.
• Cached updates are not committed, remaining intact in the internal cache.
For information about creating and using an OnUpdateRecord event handler, see
“Creating an OnUpdateRecord event handler” on page 26-37. For information about
handling update errors that occur when applying cached updates, see “Handling
cached update errors” on page 26-38.
Note Applying cached updates is particularly tricky when you are working with multiple
datasets linked in a master/detail relationship because the order in which you apply
updates to each dataset is significant. Usually, you must update master tables before
detail tables, except when handling deleted records, where this order must be
reversed. Because of this difficulty, it is strongly recommended that you use client
datasets when caching updates in a master/detail form. Client datasets automatically
handle all ordering issues with master/detail relationships.
There are two ways to apply BDE-based updates:
• You can apply updates using a database component by calling its ApplyUpdates
method. This method is the simplest approach, because the database handles all
details of managing a transaction for the update process and of clearing the
dataset’s cache when updating is complete.
• You can apply updates for a single dataset by calling the dataset’s ApplyUpdates
and CommitUpdates methods. When applying updates at the dataset level you
must explicitly code the transaction that wraps the update process as well as
explicitly call CommitUpdates to commit updates from the cache.
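The dataset-level approach can be sketched as follows, with the explicit transaction
and the CommitUpdates call the text requires (component names are placeholders):

```pascal
{ Sketch: apply one dataset's cached updates inside an explicit transaction. }
Database1.StartTransaction;
try
  Table1.ApplyUpdates;   { phase 1: write cached changes to the server }
  Database1.Commit;      { make the writes permanent }
except
  Database1.Rollback;    { abandon the writes on failure }
  raise;
end;
Table1.CommitUpdates;    { phase 2: clear the cache after success }
```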
