Advanced ETL Processor User Manual
www.dbsoftlab.com
Contents
1. Introduction
2. Key Features
2.1 Extraction Process
2.1.1 Multiple Data Formats
2.1.2 Multiple Databases and Table Processing
2.1.3 Other Database Features
2.2 Summary of the Extraction Process
2.3 Validation Process
2.4 Summary of the Validation Process
2.5 Transformation Process
2.6 Powerful Data Transformation
2.7 Summary of the Transformation Process
2.8 Loading Process
2.9 Summary of the Loading Process
3. Requirements
4. Advanced ETL Processor Architecture
5. Processing Data
5.1 Screen Overview
Main toolbar
5.2 Transformation Properties
5.3 Template Tab
5.4 Execution Log Tab
5.5 Rejected Records Tab
5.6 Creating a New Transformation and Working with Objects
5.7 Working with the Reader
5.7.1 Universal Data Reader
5.7.2 Data source is a Text File
5.7.3 Data source is an MS Access or Excel File
5.7.4 Data source is a DBF File
5.7.5 Data source is an ODBC Connection
5.7.6 Data source is MS SQL Server
5.7.7 Data source is an Oracle Database
5.7.8 Data source is MySQL
5.7.9 Data source is a PostgreSQL Database
5.7.10 Data source is an Interbase or Firebird Database
5.8 Working with the Validator
5.8.1 Debugging Validation
5.9 Working with the Transformer
5.9.1 Auto Mapping
5.10 Working with the Grouper
5.11 Working with the Sorter
5.12 Working with the Deduplicator
5.13 Working with UnPivot
5.14 Working with Pivot
5.15 Working with the Writer
5.15.1 Target type is a Text File
Copyright © 2009 DB Software Laboratory Page 2 of 194
Copyright
License Information
Disclaimer
All information in this manual is subject to periodic change and revision without
notice. While every effort has been made to ensure that this manual is accurate, DB
Software Laboratory Limited excludes its liability for errors or inaccuracies (if any)
contained herein.
Registered Marks
Edition Information
1. Introduction
The Advanced ETL Processor is an end-to-end database extraction and loading
tool. The beauty of the system is that it removes the drudgery and manual work
normally required for tasks of this type, such as writing code and performing all the
transformations, validations and general checks by hand. Using traditional
methods, operations of this type can only be performed in stages, and not as one
smooth operation!
For example, the traditional way to import data from one system to another is
to write specific code to extract data from the source database, e.g. an Oracle
database, into a comma-delimited CSV file, and then to write further code in the
target system, for instance Microsoft Access, to perform the import.
However, the operation does not end there. Any imported data has to be sorted,
de-duplicated and loaded into the database using the appropriate primary and foreign
key constraints. This is only possible by writing code designed to achieve this process.
You then need to manually send an e-mail to the administrator when the process is
complete. In other words, no stage can be left to run in an automated fashion;
each has to be completed before proceeding to the next.
As you can see, the process is not straightforward. The Advanced ETL Processor
automates all these processes in a simple and transparent fashion, and all without
writing any code whatsoever.
As stated, the tool can handle any kind of database, including Oracle, Microsoft
Access, SQL Server, DB2, MySQL, Excel spreadsheets, and a wide range of others.
It is an excellent tool for organisations that work with data warehouses, where
this involves working with a number of disparate databases.
Existing users find that the tool provides several benefits over existing tools such as
Oracle SQL*Loader, BCP, DTS or SSIS, such as the ability to update records
automatically via utilisation of the primary key.
2. Key features
The ability of the Advanced ETL Processor to work with a number of disparate
systems means that it is provided with a rich set of tools and functionality, which can
be used in isolation or combined in a powerful way with other toolsets, either within
the processor or with other third party tools. It is in effect, an "engineering"
environment for the movement of data to and from different sources.
We will now explain and define the variety of features which are provided as part of
the toolset. Let us first have a look at the data extraction process.
The extraction process can handle a variety of data formats, including multiple
delimited or fixed-width text files. The power of this system, however, is in its ability to
find the files to load using a file mask.
It can easily interpret and manipulate Microsoft Access data from a number of
different databases. Again, the end user can use a mask to find the tables to load the
data from. The same applies to Excel and DBF/FoxPro files.
The Advanced ETL Processor also has other useful database features, such as
the ability to connect to any Open Database Connectivity (ODBC) database. ODBC
was intended to let developers access any data from any application,
regardless of the DBMS used for managing that data. ODBC boasts platform
independence, since it has been purposefully designed to be independent of
database systems, programming languages and operating systems. The
Advanced ETL Processor utilises this technology to great effect.
So what if you don’t have an Oracle or MySQL database? What about SQL Server?
No problem, the Advanced ETL Processor can handle SQL Server data as
efficiently as data from any other type of database.
The Advanced ETL Processor has a robust validation process built in. The types and
nature of the validations taking place can be tweaked and configured by the user. A
full range of validation functions is included. Validations can be performed on the
basis of data type, lists of values, and regular expressions, each of which can be
changed according to requirements.
Validation:
In addition to the standard data transformation abilities, the processor can also
perform complex translation functions. An example would be if an integer variable =
“1”, then set a text variable to “yes”. Once data is translated, it is possible to join data
into a completely new format or present it in a new way. The Advanced ETL
Processor provides many flexible alternatives for data manipulation, and these are
not difficult to take advantage of.
The Advanced ETL Processor also provides the ability to derive calculated values,
join data together from multiple fields, summarise multiple rows at once, and split
or merge columns at will.
The flexibility and power of the processor means that you can customize data
transformation and conversion functions according to your requirements with a click
of the mouse. This saves hours and hours of coding.
Transformation:
• 39 Transformation Functions
• String Transformation
• Number Transformation
• Date Transformation
• Sorting
• Grouping
• Deduplication
• Translating coded values (e.g., if the source system stores 1 for male and 2 for female,
but the warehouse stores M for male and F for female)
• Deriving a new calculated value (e.g., sale amount = qty * unit price)
• Joining together data from multiple fields
• Summarizing multiple rows of data (e.g., total sales for each store, and for each region)
• Generating surrogate key values
• Transposing or pivoting (turning multiple columns into multiple rows or vice versa)
• Splitting a column into multiple columns (e.g., putting a comma-separated list
specified as a string in one column as individual values in different columns)
• Customised Transformation
• Primary Key Generation
• Running Totals
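Several of the transformations listed above can be sketched in a few lines of code. The following Python fragment is a hypothetical illustration of three of them (the function and dictionary names are invented for this example; the Advanced ETL Processor performs these steps without any coding):

```python
# Illustrative sketches of three common ETL transformations.

# Translating coded values: e.g. source stores "1"/"2", warehouse stores "M"/"F"
GENDER_CODES = {"1": "M", "2": "F"}

def translate_gender(code):
    return GENDER_CODES.get(code, "U")  # "U" for unexpected codes

# Deriving a new calculated value: sale amount = qty * unit price
def sale_amount(qty, unit_price):
    return qty * unit_price

# Splitting a comma-separated string column into individual values
def split_column(value):
    return [v.strip() for v in value.split(",")]
```

In the tool itself each of these is configured with a mouse click rather than written as code.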
The loading capability of the Advanced ETL Processor is superior to other basic
tools such as SQL Loader, because it provides the administrator with several options
and ways of providing database load capability without creating any code.
Other tools allow you to load data into only a single database table at a time, under a
single instance. The Advanced ETL Processor, on the other hand, allows you to specify
multiple upload targets, which means you can save time loading each individual table.
Another useful feature is that you can execute SQL scripts either prior to the load or
after the load has completed. This is useful for tidying up the data or producing a
report on the result of the load process once it completes.
It does not matter if the file to be loaded is a fixed or variable length text file, the
Advanced ETL Processor can handle it. It works with Access, DBF files, Oracle,
SQL Server and any ODBC compliant database.
Oracle
SQL Server
This software uses the same API as the Microsoft DTS and SSIS services.
3. Requirements
Below is the list of Software that must be installed before installation of Advanced
ETL Processor:
Separate Downloads:
Note:
Depending on your requirements, you may or may not need to have all components installed.
There is no need to install clients for MySQL and PostgreSQL; they are integrated into the
software itself.
The following graphical depiction of a typical ETL installation shows how the
software interacts with other components and interfaces belonging to various
databases. The processor sits in the middle of the various databases and carries out
its tasks, such as converting, transforming and validating data from various sources.
As you can see, the Advanced ETL Processor uses native low-level APIs for
specific databases, such as the Oracle Call Interface (OCI) or, for the
Microsoft SQL Server database, the BCP API. Other APIs can also be "plugged in",
such as the ODBC API, which allows MS Access and DBF files to be processed.
Note:
One of the major benefits of using native low-level APIs is the great performance
boost they give to the Advanced ETL Processor.
[Architecture diagram: source databases on the left (Oracle via SQL/OCI, SQL Server via the BCP API, ODBC sources, Excel, MS Access, DBF files, and MySQL, PostgreSQL and Interbase via the integrated clients) feed the Reader; data then flows through the Validator, Transformer, Grouper, Sorter, Deduplicator, Pivot and UnPivot objects to the Writer, which loads the corresponding targets on the right. Extraction is shown on the left-hand side, loading on the right.]
5. Processing Data
In order to load data from the data source into the data target, you must define the
data mapping between the target table and the data source.
The data processing screen provides a number of settings for the three stages of
reading, transformation and writing. This is facilitated by the main toolbar and a list
of available objects. The main Reader toolbar provides the user with all the necessary
functionality to set up the reading process, according to the settings provided.
[Screenshot: the Reader screen, showing the source file/table, field widths, the Reader toolbar, field numbers, the Reader fields and the Reader data.]
The main toolbar provides a number of icons which allow the user to create the
steps required to carry out the automated functions.
Toolbar buttons, left to right:
1. Map Properties
2. New Map
3. Load Template from File
4. Save Template to File
5. Save Template under a New Name
6. Print Map
7. Print Preview Map
8. Process Data
Click to change the transformation properties. Please note that the Default Date
Format applies to all templates; it is only used by new date-related validation and
transformation functions, for example Is Date, Is Date Between, etc.
The execution log provides information about the Advanced ETL Processor and
the actions it took during its operations. This is useful when you wish to analyse the
activities of individual processes during their execution.
Occasionally, records will be rejected by the Advanced ETL Processor. This may
be due to things like corrupt records which have been read in a format not expected
by the processor. The rejected records tab allows the user to see a list of all the
rejected records, and other information about the nature of the rejection and where
this occurred in the process.
Objects Panel
To join two objects together, first click on the source Output button, then drag it onto
the Input button.
Output Buttons
Description
Every Transformation created must have one reader object. The Reader connects to
the Data source and extracts data from it. Depending on the Data source type some
options may not be available. To change the Reader properties click or double
click on the Reader object.
Provided that you are using the Advanced ETL Processor, all you need to do is
change the connection type; no mapping will be lost.
Other tools use a different connector for different databases, and some of them even
sell separate licences for them. That means the user has to recreate the mapping for
new files/databases.
The Reader is capable of extracting data from delimited or fixed-width files. All
parameters are user definable. It can also skip a number of header and footer lines.
The Rejected records file can have a pre-defined format. The following dialogue
allows the user to set this up.
Data View
One of the useful features of the Advanced ETL Processor is the ability to view the
resultant data before or after processing. The data view looks like a
spreadsheet, as follows:
Source file
The data view toolbar allows you to change various aspects of the data view, such
as refreshing data as it changes and setting properties for how the data will look. You
can also switch between viewing of the data and checking to see how the data is
defined i.e. the data dictionary, via the “Switch to Data Definition View”.
Toolbar buttons, left to right:
1. Reader Properties
2. Refresh Data
3. Edit file in external editor
4. Add new column
5. Delete last column
6. Switch to Data View
7. Switch to Data Definition View
Note:
You may rename fields and change field widths here. (This works only for text files.)
Within the Data Definition view you can perform a number of actions. These allow
you to change how you want data to be represented in the data view screens. The
navigation also allows switching between views.
Toolbar buttons, left to right:
1. Reader Properties
2. Refresh Data
3. Print Data Definition
4. Print Preview Data Definition
5. Find
6. Edit file in external editor
7. Add new column
8. Delete last column
9. Switch to Data View
10. Switch to Data Definition View
Query Builder
Note:
It is also possible to use the Query Builder to design queries.
Data View
Toolbar buttons, left to right:
1. Reader Properties
2. Refresh Data
3. Edit file in external editor
4. Switch to Data View
5. Switch to Data Definition View
Toolbar buttons, left to right:
1. Reader Properties
2. Refresh Data
3. Print Data Definition
4. Print Preview Data Definition
5. Find
6. Edit file in external editor
7. Switch to Data View
8. Switch to Data Definition View
Source Directory
Data View
Toolbar buttons, left to right:
1. Reader Properties
2. Refresh Data
3. Switch to Data View
4. Switch to Data Definition View
Toolbar buttons, left to right:
1. Reader Properties
2. Refresh Data
3. Print Data Definition
4. Print Preview Data Definition
5. Find
6. Switch to Data View
7. Switch to Data Definition View
It is also possible to use ODBC connection strings for both Reader and Writer
connections.
For example, for MS SQL Server the connection string is:
One of the major benefits of using connection strings is that it is no longer necessary to
create ODBC DSNs manually on every single computer where the Advanced ETL Processor is
installed. It also gives greater control over the connection parameters.
Note:
Leave the user name and password blank and provide them within the connection string.
The simplest way to create an ODBC connection string is to use the ODBC Connection Builder
dialog. Double click on an ODBC driver name to create a connection string.
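As an illustration, a DSN-less ODBC connection string can also be composed programmatically. The Python sketch below builds one for a hypothetical SQL Server setup (the driver name, server and database are placeholder assumptions; check your ODBC driver's documentation for the exact keywords it accepts):

```python
# Compose a DSN-less ODBC connection string for SQL Server (hypothetical
# values; real driver names vary between ODBC driver versions).
def build_conn_str(server, database, user="", password=""):
    parts = ["Driver={SQL Server}", f"Server={server}", f"Database={database}"]
    if user:
        parts += [f"Uid={user}", f"Pwd={password}"]
    else:
        parts.append("Trusted_Connection=yes")  # Windows authentication
    return ";".join(parts) + ";"

print(build_conn_str("localhost", "DEMO"))
```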
Data View
Source Table
Toolbar buttons, left to right:
1. Reader Properties
2. Refresh Data
3. Switch to Data View
4. Switch to Data Definition View
Toolbar buttons, left to right:
1. Reader Properties
2. Refresh Data
3. Print Data Definition
4. Print Preview Data Definition
5. Find
6. Switch to Data View
7. Switch to Data Definition View
Note:
All properties are very similar to the ODBC connection.
Note:
All properties are very similar to the ODBC connection.
Note:
All properties are very similar to the MS SQL Server connection.
Note:
All properties are very similar to the MS SQL Server connection.
Note:
All properties are very similar to the ODBC/MS Access connection.
One of the simplest forms of data validation is verifying the data type. Data type
validation answers such simple questions as "Is the string alphabetic?" and "Is the
number valid?"
As an extension of simple type validation, range checking ensures that the provided
value is within allowable minimums and maximums. For example, a character data
type service code may only allow the alphabetic letters A through Z. All other
characters would not be valid.
Code checking is a bit more complicated, typically requiring a lookup table. For
example, maybe your application calculates sales tax for only certain state codes.
You would need to create a lookup object to hold the authorized, taxable state codes.
Pattern checking is when you check the structure of a data field, for example a social
security number format or a car registration number. Regular expressions are quite
often used for pattern checks.
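The four validation styles described above can be sketched as simple checks. This Python fragment is a hypothetical illustration (the lookup values and the SSN pattern are invented for the example, not taken from the tool):

```python
import re

# Data type validation: "Is the string alphabetic?"
def is_alpha(value):
    return value.isalpha()

# Range checking: a one-character service code limited to A through Z
def in_range(code):
    return len(code) == 1 and "A" <= code <= "Z"

# Code checking: a lookup table of authorized, taxable state codes
TAXABLE_STATES = {"CA", "NY", "TX"}

def is_taxable_state(state):
    return state in TAXABLE_STATES

# Pattern checking: a social security number format, via a regular expression
SSN_PATTERN = re.compile(r"^\d{3}-\d{2}-\d{4}$")

def is_ssn(value):
    return bool(SSN_PATTERN.match(value))
```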
It does not matter which business you are in; sooner or later you will discover that
there is something wrong with the data and it has to be validated. This is where
Advanced ETL Processor validation can help.
The Validator screen shows counters for processed records, records rejected by validation rules, and discarded records.
Note:
• Records can also be rejected by the server.
• If you have several validation rules and one of them rejects a record while another
discards it, the record will be discarded.
[Screenshot: the Validator window, showing the inputs, the Validator toolbar, the objects panel, the validation rules and a data sample.]
Validator Toolbar
Toolbar buttons, left to right:
1. Print
2. Print Preview
3. Delete All objects
4. Delete All Links
5. Process Data
6. First Record
7. Previous Record
8. Next Record
9. Last Record
10. Show Data
11. Show Objects Panel
Note:
Use <value> to include the actual value in the default value.
To add a new validation rule, drag and drop it from the validation rules panel.
There are more than 190 validation functions at the moment. They are grouped into
five categories:
1. String
2. Number
3. Date
4. Time
5. Regular Expressions
It is also possible to apply several validation rules to an input field by joining them.
The Validator has two outputs: validated data and failed data.
Note:
If you have several validation rules and one of them rejects a record while another
discards it, the record will be discarded.
Note:
To change the Transformer properties, double click on it.
Transformer Toolbar
Toolbar buttons, left to right:
1. Print
2. Print Preview
3. Auto Map
4. Delete All objects
5. Delete All Links
6. Process Data
7. First Record
8. Previous Record
9. Next Record
10. Last Record
11. Copies Inputs to Outputs (Only visible if transformer is connected to any object other
than writer)
12. Show Data
13. Show Objects Panel
Note:
If the transformer is connected to any object other than a writer, it is possible to
modify the list of outputs. When the transformer is connected to a writer, the list of
outputs is taken from the writer.
Examples:
The example below splits a date field into Day, Month and Year using '/' as a delimiter.
If the inputs and outputs have the same names, you may use the Auto Map feature.
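The date-splitting example above amounts to the following operation, shown here as a hypothetical Python sketch (the output field names Day, Month and Year mirror the example; the tool performs this via its Splitter function without code):

```python
# Split a date field into Day, Month and Year using '/' as the delimiter.
def split_date(date_field, delimiter="/"):
    day, month, year = date_field.split(delimiter)
    return {"Day": day, "Month": month, "Year": year}
```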
About Sorting
Quicksort
Merge sort
Merge sort takes advantage of the ease of merging already sorted lists into a new
sorted list. It starts by comparing every two elements (i.e., 1 with 2, then 3 with 4...)
and swapping them if the first should come after the second. It then merges each of
the resulting lists of two into lists of four, then merges those lists of four, and so on;
until at last two lists are merged into the final sorted list. Of the algorithms described
here, this is the first that scales well to very large lists, because its worst-case running
time is O(n log n).
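The merge procedure described above can be written out as follows; this is a generic textbook sketch of the algorithm, not the Sorter's actual implementation:

```python
def merge_sort(items):
    """Classic merge sort: O(n log n) worst-case, as described above."""
    if len(items) <= 1:
        return items
    mid = len(items) // 2
    left, right = merge_sort(items[:mid]), merge_sort(items[mid:])
    # Merge the two already-sorted halves into one sorted list.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]
```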
To change the Sorter properties, double click on the object. Tick Sort, then select the
field order and the sort order. The field order must be unique for each sort field.
Data is loaded into memory first, then sorted and passed to the next object.
To change the Deduplicator properties, double click on the object. Tick Deduplicate for
the fields you wish to deduplicate; only ticked fields are passed to the next object.
Note:
There is no need to sort data before deduplication.
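The reason no prior sort is needed is that deduplication can be done with a lookup of keys already seen, as in this hypothetical Python sketch (field names are illustrative):

```python
# Lookup-based deduplication: keeps the first occurrence of each key,
# in input order, without sorting the data first.
def deduplicate(rows, key_fields):
    seen, out = set(), []
    for row in rows:
        key = tuple(row[f] for f in key_fields)
        if key not in seen:
            seen.add(key)
            out.append(row)
    return out
```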
To change the UnPivot properties, double click on the object. Fill in the Description and
the Group field name, and create all the necessary groups and outputs. Once that is done,
map the input fields to the outputs and groups.
[Screenshots: the UnPivot properties dialog, showing the Group field and the resulting unpivoted data; the Pivot properties dialog, showing the Pivot Key, Set Key and the resulting pivoted data.]
Since there is more than one field to pivot, let's join the Profit and Sales fields together
first, using tab as a delimiter, then pivot the data and split it again.
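The join and split steps around the pivot can be sketched as follows (a hypothetical Python illustration of the technique; in the tool they are done with the Joiner and Splitter functions):

```python
# Join two fields into one using a tab delimiter, so a single field can
# be pivoted; split them apart again after the pivot.
def join_fields(profit, sales, delimiter="\t"):
    return f"{profit}{delimiter}{sales}"

def split_fields(joined, delimiter="\t"):
    profit, sales = joined.split(delimiter)
    return profit, sales
```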
Joining Data
Every transformation created must have at least one writer object. The Writer connects
to the target database and loads data into it. Depending on the target type, some options
may not be available. To change the Writer properties click or double click on the
Writer object.
The Writer is capable of saving data into delimited or fixed-width files. All parameters
are user definable.
Data View
Toolbar buttons, left to right:
1. Reader Properties
2. Refresh Data
3. Edit file in external editor
4. Add new column
5. Delete last column
6. Switch to Data View
7. Switch to Data Definition View
Note:
You may rename fields and change field widths here. (This works only for text files.)
Toolbar buttons, left to right:
1. Reader Properties
2. Refresh Data
3. Print Data Definition
4. Print Preview Data Definition
5. Find
6. Edit file in external editor
7. Add new column
8. Delete last column
9. Switch to Data View
10. Switch to Data Definition View
Data View
Toolbar buttons, left to right:
1. Reader Properties
2. Refresh Data
3. Print
4. Print Preview
5. Find
6. Edit file in external editor
7. Switch to Data View
8. Switch to Data Definition View
Toolbar buttons, left to right:
1. Reader Properties
2. Refresh Data
3. Print Data Definition
4. Print Preview Data Definition
5. Find
6. Edit file in external editor
7. Switch to Data View
8. Switch to Data Definition View
Data View
Toolbar buttons, left to right:
1. Reader Properties
2. Refresh Data
3. Print
4. Print Preview
5. Find
6. Switch to Data View
7. Switch to Data Definition View
Check constraints
Ensure that any constraints on the destination table are checked during the bulk copy
operation. By default, constraints are ignored.
Keep identity
Specify that there are values in the data file for an identity column.
Keep NULLs
Specify that any columns containing a null value should be retained as null values, even if a
default value was specified for that column in the destination table.
Batch size
Specify the number of rows in a batch. The default is the entire data file.
The following values for the Batch size property have these effects:
If you set Batch size to zero, the data is loaded in a single batch. The first row that fails will
cause the entire load to be cancelled, and the step fails.
If you set Batch size to one, the data is loaded a row at a time. Each row that fails is counted
as one row failure. Previously loaded rows are committed.
If you set Batch size to a value greater than one, the data is loaded one batch at a time. Any
row that fails in a batch fails that entire batch; loading stops and the step fails. Rows in
previously loaded batches are either committed or, if the step has joined the package
transaction, provisionally retained in the transaction, subject to later commitment or rollback.
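The three batch-size behaviours described above can be modelled with a short sketch. This hypothetical Python fragment simulates which rows end up committed for each setting (it is an illustration of the semantics, not the tool's loader):

```python
# Simulate Batch size semantics: 0 = one batch for the whole file (any
# failure cancels the load), 1 = row at a time (failures counted, good
# rows committed), N > 1 = batch at a time (a failing row fails its
# batch and stops the load; earlier batches stay committed).
def load_in_batches(rows, batch_size, failing):
    committed, failed = [], 0
    step = batch_size or len(rows) or 1   # 0 means "entire file as one batch"
    for start in range(0, len(rows), step):
        chunk = rows[start:start + step]
        if any(r in failing for r in chunk):
            if step == 1:
                failed += 1               # count the single failing row
                continue                  # keep loading the remaining rows
            break                         # failing row fails the batch; stop
        committed.extend(chunk)           # commit the successful batch
    return committed, failed
```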
Note:
The option 'Commit Every Array' works only for Oracle conventional path loading.
Update Key
For the example provided below, Advanced ETL Processor will execute the
following SQL
(Update key is CustomerId, OrderNo).
Select count(*)
from [DEMO].[dbo].[orders]
where CustomerId=? And OrderNo=?
Update [DEMO].[dbo].[orders]
set orderdate=?,
amount=?
where customerid=? And OrderNo=?
Update Records
Update [DEMO].[dbo].[orders]
set OrderDate=?,
Amount=?
where CustomerId=? And OrderNo=?
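The logic behind the Update Key — probe for the key, then update the matching row — can be sketched against an in-memory table (a hypothetical Python illustration; the tool itself issues the parameterised SQL shown above):

```python
# "Check then update or insert" keyed on the update-key fields
# (e.g. CustomerId, OrderNo), against an in-memory list of row dicts.
def upsert(table, key_fields, record):
    key = tuple(record[f] for f in key_fields)
    for row in table:
        if tuple(row[f] for f in key_fields) == key:
            row.update(record)        # key found: update the existing row
            return "updated"
    table.append(dict(record))        # key not found: insert a new row
    return "inserted"
```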
Delete Records
Note:
"Add New and Update Old Records" is not supported for SQL Server
connections; use an ODBC connection instead.
Advanced ETL Processor is capable of running SQL Scripts before and after the
transformation.
6. Validation Rules
This chapter presents the list of validation rules, together with a short description,
grouped by category. Every validation rule/function can have some specific
parameters as well as common parameters to control the data flow of the Advanced
ETL Processor. Most of it is self-explanatory; see the example below:
Note:
To change the validation rule properties, double click on it.
6.1 Strings
6.1.1 Is Null
6.1.3 Is Alpha
6.1.4 Is Hex
6.1.5 Is Equal To
6.1.8 Contains
6.1.9 In List
6.2 Numbers
6.2.1 Is Number
6.2.2 Is Integer
6.2.3 Is Positive
6.2.4 Is Negative
6.3 Date
6.3.1 Is Date
6.3.7 Is 1st Quarter, Is 2nd Quarter, Is 3rd Quarter, Is 4th Quarter, Is Current
Quarter,
Is Last Quarter, Is Next Quarter
6.3.10 Is Within Past Minutes, Is Within Past Hours, Is Within Past Days, Is
Within Past Weeks, Is Within Past Months
6.4 Time
6.4.1 Is Time
6.4.5 Is Second
6.4.6 Is Minute
6.4.7 Is Hour 24
6.4.8 Is Hour 12
6.4.9 Is PM
6.4.10 Is AM
Regular expressions are used by many text editors, utilities, and programming
languages to search and manipulate text based on patterns. For example, Perl and
Tcl have a powerful regular expression engine built directly into their syntax. Several
utilities provided by Unix distributions—including the editor ed and the filter grep—
were the first to popularize the concept of regular expressions. "Regular expression"
is often shortened to regex or regexp (singular), or regexes, regexps, or regexen
(plural). Some authors distinguish between regular expression and abbreviated forms
such as regex, restricting the former to true regular expressions, which describe
regular languages, while using the latter for any regular expression-like pattern,
including those that describe languages that are not regular. As only some authors
observe this distinction, it is not safe to rely upon it.
As an example of the syntax, the regular expression \bex can be used to search for
all instances of the string "ex" that occur at word boundaries (signified by the \b).
Thus in the string, "Texts for experts," \bex matches the "ex" in "experts," but not in
"Texts" (because the "ex" occurs inside the word there and not immediately after a
word boundary).
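The \bex example can be checked with any regex engine; here is a small Python sketch using the standard re module, returning the positions where "ex" occurs at a word boundary:

```python
import re

# "ex" at a word boundary: matches in "experts" but not inside "Texts".
def word_start_ex(text):
    return [m.start() for m in re.finditer(r"\bex", text)]
```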
Source: Wikipedia.
Most of the regular expressions were taken from www.regexlib.com. We are not able to
include the full list of contributors because it is too big. We did our best to test, and
modify where necessary, the pattern strings; however, as time goes by standards change
and some of them may no longer be valid. Therefore the Regular Expression validation
rules should be used with caution. If you have any useful regular expressions, please let
us know; we would be more than happy to include them in the next release of the
Advanced ETL Processor.
6.5.2 Is IP Address V4
6.5.3 Is IP Address V6
6.5.4 Is Email
6.5.5 Is ISBN 10
6.5.6 Is ISBN 13
6.5.8 Is URL
6.5.9 Is UNC
6.5.20 Is US State
7. Transformation Functions
Note:
To change the transformation function properties, double click on it.
7.1 Strings
7.1.4 First Up
7.1.6 Trim
7.1.10 Replace
7.1.19 Delete
7.1.20 Left
7.1.21 Right
7.2 Numbers
7.2.1 Round
7.2.3 Abs
7.2.4 Sign
7.3 Date
7.4 Miscellaneous
7.4.1 Length
7.4.2 Literal
7.4.3 User
7.4.4 Splitter
7.4.5 Joiner
7.4.6 Calculation
Below is the basic structure that every Calculation transformation must follow:
Var
VariableName : VariableType;
VariableName : VariableType;
...
// Single-line comment if necessary
{
Multi-line
comment
}
Procedure ProcedureName;
{ local variables here if necessary }
Begin
Some Code;
End;
Begin
{ the main program block }
Result := some calculation
End.
Note:
The functions and procedures can appear in any order. The only requirement is that if one
procedure or function uses another one, that latter one must have been defined already.
Declaring Variables
Variables are simply names for blocks of memory cells in main memory. If a value is
assigned to a variable, that value must be of the same type as the variable, and will be stored in
the memory address designated by the variable name. The assignment operator is the
colon-equals symbol :=.
Example:
Var
i : Integer; { variable name is i, type is Integer }
Begin
i := 10; { valid integer number assigned to variable i }
End.
Variable Types
Number variables can be assigned from other numeric variables, and expressions:
var
Age : Byte; // Smallest positive integer type
Books : SmallInt; // Bigger signed integer
Salary : Currency; // Decimal used to hold financial amounts
Expenses : Currency;
TakeHome : Currency;
begin
Expenses := 12345.67; // Assign from a literal constant
TakeHome := Salary; // Assign from another variable
TakeHome := TakeHome - Expenses; // Assign from an expression
end;
Numerical operators
When using multiple operators in one expression, you should use round brackets to
wrap sub-expressions to ensure that the intended result is obtained. This is illustrated in the
examples below:
var
myInt : Integer; // Define integer and decimal variables
myDec : Single;
begin
myInt := 20; // myInt is now 20
myInt := myInt + 10; // myInt is now 30
myInt := myInt - 5; // myInt is now 25
myInt := myInt * 4; // myInt is now 100
myInt := 14 div 3; // myInt is now 4 (14 / 3 = 4 remainder 2)
myInt := 14 mod 3; // myInt is now 2 (14 / 3 = 4 remainder 2)
myInt := 12 * 3 - 4; // myInt is now 32 (* comes before -)
myInt := 12 * (3 - 4); // myInt is now -12 (brackets come before *)
myDec := 2.222 / 2.0; // myDec is now 1.111
end;
Character Types
var
Str1 : Char; // Holds a single character, small alphabet
Str2 : WideChar; // Holds a single character, International alphabet
Str3 : AnsiChar; // Holds a single character, small alphabet
Str4 : ShortString; // Holds a string of up to 255 Char's
Str5 : String; // Holds strings of Char's of any size desired
Str6 : AnsiString; // Holds strings of AnsiChar's of any size desired
Str7 : WideString; // Holds strings of WideChar's of any size desired
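A minimal sketch of assigning and concatenating string values (hypothetical values chosen for illustration; the '+' operator joins strings):

```pascal
var
  Str5 : String;
begin
  Str5 := 'Hello' + ' ' + 'World';  // '+' joins strings together
  Result := Str5;                   // Result is now 'Hello World'
end;
```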
Variants
The Variant data type provides a flexible general purpose data type.
Variants are useful in very specific circumstances, where data types and their content are
determined at run time rather than at compile time.
Example:
var
myVar : Variant;
begin
myVar := 123; // holds an integer
myVar := 'abc'; // now holds a string
end;
Date Variables
TDateTime
Description
It is stored as a Double variable, with the date as the integral part, and time as fractional
part. The date is stored as the number of days since 30 Dec 1899. Quite why it is not 31 Dec is
not clear. 01 Jan 1900 has a days value of 2.
Note:
No local time information is held with TDateTime - just the day and time values.
Example:
Finding the difference between two dates
var
day1, day2 : TDateTime;
diff : Double;
begin
day1 := StrToDate('12/06/2002');
day2 := StrToDate('12/07/2002');
diff := day2 - day1;
Result:='day2 - day1 = '+FloatToStr(diff)+' days';
end;
Boolean variables are used in conjunction with programming logic. They are very simple:
Var
Log1 : Boolean; // Can be 'True' or 'False'
Boolean variables are a form of enumerated type. This means that they can hold one of a
fixed number of values, designated by name. Here, the values can be True or False.
Logical Operation
var
number : Integer;
text : String;
begin
number := Sqr(17); // Calculate the square of 17
if number > 400
then text := '17 squared > 400' // Action when if condition is true
else text := '17 squared <= 400'; // Action when if condition is false
result:=text;
end;
There are a number of things to note about the if statement. First, it spans several lines -
remember that statements can span lines - which is why it requires a terminating ;
Second, the then statement does not have a terminating ; - this is because it is part of the if
statement, which finishes at the end of the else clause.
Third, that we have set the value of a text string when the If condition is successful - the
Then clause - and when unsuccessful - the Else clause. We could have just done a then
assignment:
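Such a then-only assignment might look like the sketch below (reconstructed to match the description that follows; the variables mirror the earlier example):

```pascal
var
  number : Integer;
  text : String;
begin
  number := Sqr(17);                  // Calculate the square of 17 (289)
  if number > 400
  then text := '17 squared > 400';    // Terminating ';' - no else clause
  result := text;                     // The then clause did not execute, so text stays empty
end;
```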
Note that here, the then condition is not executed (because 17 squared is not > 400), but
there is no else clause. This means that the if statement simply finishes without doing anything.
Note also that the then clause now has a terminating ; to signify the end of the if statement.
if condition
then
begin
statement1;
statement2;
...
end // Notice no terminating ';' - still part of 'if'
else
begin
statement3;
statement4;
...
end;
We used And to join the if conditions together - both must be satisfied for the then clause to
execute. Otherwise, the else clause will execute. We could have used a number of different
logical primitives, of which And is one, covered under logical primitives below.
Nested if statements:
There is nothing to stop you using if statements as the statement of an if statement. Nesting
can be useful, and is often used like this:
if condition1
then statement1
else if condition2
then statement2
else statement3;
However, too many nested if statements can make the code confusing. The Case statement,
discussed below, can be used to overcome a lot of these problems.
Logical primitives
begin
if false And false
then Result:='false and false = true';
if false Or false
then Result:= 'false or false = true';
if true Or false
then Result:= 'true or false = true';
if false Or true
then Result:= 'false or true = true';
if true Or true
then Result:= 'true or true = true';
if Not false
then Result:= 'not false = true';
if Not true
then Result:= 'not true = true';
end;
Note that the Xor primitive returns true when one, but not both of the conditions are true.
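A sketch of Xor in the same style as the primitives above:

```pascal
begin
  if true Xor false
  then Result := 'true xor false = true';  // executes: exactly one condition is true
  if true Xor true
  then Result := 'true xor true = true';   // never executes: both conditions are true
end;
```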
Case statements
The If statement is useful when you have a simple two-way decision: either you go one way
or another. Case statements are used when you have a set of 3 or more alternatives.
var
i : Integer;
begin
i := [F1];
Case i of
15 : Result := 'Number was fifteen';
16 : Result := 'Number was sixteen';
17 : Result := 'Number was seventeen';
18 : Result := 'Number was eighteen';
19 : Result := 'Number was nineteen';
20 : Result := 'Number was twenty';
end;
end;
The case statement above routes the processing to just one of the statements. OK, the code
is a bit silly, but it is used to illustrate the point.
Supposing we were not entirely sure what value our case statement was processing? Or we
wanted to cover a known set of values in one fell swoop? The Else clause allows us to do that:
var
i : Integer;
begin
i := [F1];
Case i of
15 : Result := 'Number was fifteen';
16 : Result := 'Number was sixteen';
17 : Result := 'Number was seventeen';
18 : Result := 'Number was eighteen';
19 : Result := 'Number was nineteen';
20 : Result := 'Number was twenty';
else
Result := 'Unexpected number';
end;
end;
Example output: Unexpected number : 10
One of the main reasons for using computers is to save the tedium of many repetitive tasks.
One of the main uses of loops in programs is to carry out such repetitive tasks. A loop will
execute one or more lines of code (statements) as many times as you want.
Your choice of loop type depends on how you want to control and terminate the looping.
This is the most common loop type. For loops are executed a fixed number of times,
determined by a count. They terminate when the count is exhausted. The count (loop counter)
is held in a variable that can be used inside the loop. The count can proceed upwards or
downwards, but always changes by a value of 1 unit. This count variable can be a number or
even an enumeration.
Counting up
var
count : Integer;
begin
For count := 1 to 5 do
Result:= 'Count is now '+IntToStr(count);
end;
Counting down
var
count : Integer;
begin
For count := 5 downto 1 do
Result:= 'Count is now '+IntToStr(count);
end;
The For statements in the examples above have all executed one statement. If you want to
execute more than one, you must enclose these in a Begin and End pair.
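For example, a sketch of a loop body with two statements wrapped in a Begin and End pair:

```pascal
var
  count, total : Integer;
begin
  total := 0;
  For count := 1 to 5 do
  begin
    total := total + count;                       // First statement
    Result := 'Total is now ' + IntToStr(total);  // Second statement
  end;                                            // Result ends as 'Total is now 15'
end;
```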
The Repeat loop type is used for loops where we do not know in advance how many times
we will execute. For example, when we keep asking a user for a value until one is provided, or
the user aborts. Here, we are more concerned with the loop termination condition.
Repeat loops always execute at least once. At the end, the Until condition is checked, and
the loop exits if the condition evaluates to true.
A simple example
var
stop : Boolean; // Our exit condition flag
i : Integer;
begin
i := 1;
stop := False; // do not stop until we are ready
repeat
i := i+1; // Increment a count
if Sqr(i) > 99
then stop:= true; // Exit if the square of our number exceeds 99
until stop; // Shorthand for 'until stop = true'
result:=i;
end;
Here we exit the repeat loop when a Boolean variable is true. Notice that we use a
shorthand - just specifying the variable as the condition is sufficient since the variable value is
either true or false.
var
i : Integer;
begin
i := 1;
repeat
i := i+1; // Increment a count
until (Sqr(i) > 99) or (Sqrt(i) > 2.5);
result:=i;
end;
Notice that compound statements require separating brackets. Notice also that Repeat
statements can accommodate multiple statements without the need for a begin/end pair. The
repeat and until clauses form a natural pairing.
While loops
While loops are very similar to Repeat loops except that they have the exit condition at the
start. This means that we use them when we wish to avoid loop execution altogether if the
condition for exit is satisfied at the start.
Var
i : Integer;
begin
i := 1;
while (Sqr(i) <= 99) and (Sqrt(i) <= 2.5) do
i := i+1; // Increment a count
result:=i;
end;
Notice that our original Repeat Until condition used Or as the compound condition joiner -
we continued until either condition was met. With our While condition, we use And as the
joiner - we continue whilst neither condition is met. Have a closer look to see why we do this.
The difference is that we repeat an action until something or something else happens. Whereas
we keep doing an action while neither something nor something else have happened.
Functions
Functions provide a flexible method to apply one formula many times to possibly different
values. They are comparable to procedures, but they return a value.
1. Numbers Addition
Begin
Result:= [F001] + [F002];
End;
2. Strings Concatenation
Begin
Result:= '[F001]' + '[F002]';
End;
3. If Statement
Begin
If '[F002]'='' then
Result:= 0
else If [F002]=0 then
Result:= 0
Else
Result:= [F001] mod [F002];
End;
4. Variables
var
MyVariable : integer;
Begin
MyVariable:=10;
Result :=[F001] mod MyVariable;
end;
Properties:
LowerCase(S)
LowerCase returns a string with the same text as the string passed in S, but with all letters
converted to lowercase. The conversion affects only 7-bit ASCII characters between 'A' and 'Z'.
To convert 8-bit international characters, use AnsiLowerCase.
AnsiUpperCase(S)
AnsiUpperCase returns a string that is a copy of the given string converted to upper case.
AnsiLowerCase(S)
AnsiLowerCase returns a string that is a copy of the given string converted to lower case.
AnsiCompareStr(S1,S2)
AnsiCompareStr compares S1 to S2, with case sensitivity.
Condition    Return Value
S1 > S2      > 0
S1 < S2      < 0
S1 = S2      = 0
AnsiCompareText(S1,S2)
AnsiStrLIComp (S1,S2,MaxLen)
AnsiLastChar(S)
Trim(S)
Trim removes leading and trailing spaces and control characters from the given string S.
TrimLeft(S)
TrimLeft returns a copy of the string S with leading spaces and control characters removed.
TrimRight(S)
TrimRight returns a copy of the string S with trailing spaces and control characters
removed.
QuotedStr(S)
Use QuotedStr to convert the string S to a quoted string. A single quote character (') is
inserted at the beginning and end of S, and each single quote character in the string is repeated.
AnsiQuotedStr(S,Quote)
Use AnsiQuotedStr to convert a string (S) to a quoted string, using the provided Quote
character. A Quote character is inserted at the beginning and end of S, and each Quote
character in the string is doubled.
AnsiExtractQuotedStr(S,Quote)
AnsiExtractQuotedStr removes the quote characters from the beginning and end of a quoted
string, and reduces pairs of quote characters within the string to a single quote character. The
Quote parameter defines what character to use as a quote character. If the first character in S is
not the value of the Quote parameter, AnsiExtractQuotedStr returns an empty string.
The function copies characters from S to the result string until the second solitary quote
character or the first null character in S. The S parameter is updated to point to the first
character following the quoted string. If S does not contain a matching end quote character, the
S parameter is updated to point to the terminating null character.
IntToStr(S)
IntToStr converts an integer into a string containing the decimal representation of that
number.
IntToHex(I,Digits)
IntToHex converts a number into a string containing the number's hexadecimal (base 16)
representation. Value is the number to convert. Digits indicates the minimum number of
hexadecimal digits to return.
StrToInt(S)
StrToInt converts the string S, which represents an integer-type number in either decimal or
hexadecimal notation, into a number. If S does not represent a valid number, StrToInt raises an
exception.
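For example (in this notation a leading '$' denotes hexadecimal; the value is chosen for illustration):

```pascal
begin
  Result := StrToInt('$FF');  // hexadecimal notation: $FF is 255
end;
```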
StrToIntDef(S,Default)
FileAge(FileName)
Call FileAge to obtain the OS timestamp of the file specified by FileName. The return value
can be converted to a TDateTime object using the FileDateToDateTime function. The return
value is -1 if the file does not exist.
FileExists(FileName)
FileExists returns true if the file specified by FileName exists. If the file does not exist,
FileExists returns false.
DeleteFile(FileName)
DeleteFile deletes the file named by FileName from the disk. If the file cannot be deleted or
does not exist, the function returns false.
RenameFile(OldFile,NewFile)
RenameFile attempts to change the name of the file specified by OldFile to NewFile. If the
operation succeeds, RenameFile returns true. If RenameFile cannot rename the file (for
example, if the application does not have permission to modify the file), it returns false.
ChangeFileExt(FileName,EXT)
ChangeFileExt takes the file name passed in FileName and changes the extension of the file
name to the extension passed in Extension. Extension specifies the new extension, including
the initial dot character.
ChangeFileExt does not rename the actual file; it just creates a new file name string.
ExtractFilePath(FileName)
The resulting string is the leftmost characters of FileName, up to and including the colon or
backslash that separates the path information from the name and extension. The resulting
string is empty if FileName contains no drive and directory parts.
ExtractFileDir(FileName)
ExtractFileDrive(FileName)
ExtractFileDrive returns a string containing the drive portion of a fully qualified path name
for the file passed in the FileName. For file names with drive letters, the result is in the form
"drive:". For file names with a UNC path the result is in the form "\\servername\sharename". If
the given path contains neither style of path prefix, the result is an empty string.
ExtractFileName(FileName)
The resulting string is the rightmost characters of FileName, starting with the first character
after the colon or backslash that separates the path information from the name and extension.
The resulting string is equal to FileName if FileName contains no drive and directory parts.
ExtractFileExt(FileName)
ExpandFileName(FileName)
ExpandFileName converts the relative file name into a fully qualified path name.
ExpandFileName does not verify that the resulting fully qualified path name refers to an
existing file, or even that the resulting path exists.
ExpandUNCFileName(FileName)
ExpandUNCFileName returns the fully-qualified file name for a specified file name.
ExtractRelativePath(BaseName,DestName)
Call ExtractRelativePath to convert a fully qualified path name into a relative path name.
The DestName parameter specifies the file name (including path) to be converted. BaseName is
the fully qualified name of the base directory to which the returned path name should be
relative. BaseName may or may not include a file name, but it must include the final path
delimiter.
DiskFree(Drive)
DiskFree returns the number of free bytes on the specified drive, where 0 = Current, 1 = A,
2 = B, and so on.
DiskSize(Drive)
DiskSize returns the size in bytes of the specified drive, where 0 = Current, 1 = A, 2 = B,
etc. DiskSize returns -1 if the drive number is invalid.
GetCurrentDir
GetCurrentDir returns the name of the current directory.
SetCurrentDir(Directory)
The SetCurrentDir function sets the current directory. The return value is true if the current
directory was successfully changed, or false if an error occurred.
CreateDir(Directory)
CreateDir creates a new directory. The return value is true if a new directory was
successfully created, or false if an error occurred.
RemoveDir(Directory)
Call RemoveDir to remove the directory specified by the Dir parameter. The return value is
true if the directory was successfully deleted, false if an error occurred. The directory must
be empty before it can be successfully deleted.
FloatToStr(F)
FloatToStr converts the floating-point value given by Value to its string representation. The
conversion uses general number format with 15 significant digits.
StrToFloat(S)
EncodeDate(Year,Month,Day)
EncodeDate returns a TDateTime value from the values specified as the Year, Month, and
Day parameters. The year must be between 1 and 9999. Valid Month values are 1 through 12.
Valid Day values are 1 through 28, 29, 30, or 31, depending on the Month value. For example,
the possible Day values for month 2 (February) are 1 through 28 or 1 through 29, depending on
whether or not the Year value specifies a leap year.
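A minimal sketch (the date chosen here is arbitrary; the year 2000 is a leap year, so 29 February is valid):

```pascal
var
  d : TDateTime;
begin
  d := EncodeDate(2000, 2, 29);  // valid: 2000 is a leap year
  Result := DateToStr(d);
end;
```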
EncodeTime(Hour,Min,Sec,MSec)
EncodeTime encodes the given hour, minute, second, and millisecond into a TDateTime
value. Valid Hour values are 0 through 23. Valid Min and Sec values are 0 through 59. Valid
MSec values are 0 through 999. If the specified values are not within range, EncodeTime
raises an EConvertError exception. The resulting value is a number between 0 and 1
(inclusive) that indicates the fractional part of a day given by the specified time or (if 1.0)
midnight on the following day. The value 0 corresponds to midnight, 0.5 corresponds to noon,
0.75 corresponds to 6:00 pm, and so on.
DayOfWeek(D)
DayOfWeek returns the day of the week of the specified date as an integer between 1 and 7,
where Sunday is the first day of the week and Saturday is the seventh.
Date
Use Date to obtain the current local date as a TDateTime value. The time portion of the
value is 0 (midnight).
Time
Use Time to return the current time as a TDateTime value. The date portion of the
value is 0. Time and its synonym GetTime are completely equivalent.
Now
Use Now to return the current local date and time as a single TDateTime value.
IncMonth(D)
IsLeapYear(D)
Call IsLeapYear to determine whether the year specified by the Year parameter is a leap
year. Year specifies the calendar year. Use YearOf to obtain the value of Year for IsLeapYear
from a TDateTime value.
DateToStr(D)
Use DateToStr to obtain a string representation of a date value that can be used for display
purposes.
TimeToStr(D)
DateTimeToStr(D)
StrToDate(S)
Call StrToDate to parse a string that specifies a date. If S does not contain a valid date,
StrToDate raises an exception.
StrToTime(S)
Call StrToTime to parse a string that specifies a time value. If S does not contain a valid
time, StrToTime raises an exception.
StrToDateTime(S)
Call StrToDateTime to parse a string that specifies a date and time value. If S does not
contain a valid date, StrToDateTime raises an exception.
FormatDateTime(Format,DateTime)
FormatDateTime formats the TDateTime value given by DateTime using the format given
by Format. See the table below for information about the supported format strings.
Abort
Beep
AnsiPos(Substr,S)
Call AnsiPos to obtain the byte offset of the Substr parameter, as it appears in the string S.
For example, if Substr is the string "AB", and S is the string "ABCDE", AnsiPos returns 1. If
Substr does not appear in S, AnsiPos returns 0.
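For example, using the strings from the description above:

```pascal
begin
  Result := AnsiPos('CD', 'ABCDE');  // returns 3: 'CD' starts at the third character
end;
```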
Chr(X)
Abs(X)
Length(X)
Copy(S,Index,Count)
Round(X)
Trunc(X)
Pos(Substr,S)
Pos searches for Substr within S and returns an integer value that is the index of the first
character of Substr within S. Pos is case-sensitive. If Substr is not found, Pos returns zero.
Delete(S,Index,Count)
Delete removes a substring of Count characters from string S starting with S[Index]. S is a
string-type variable. Index and Count are integer-type expressions. If index is larger than the
length of the string or less than 1, no characters are deleted. If count specifies more characters
than remain starting at the index, Delete removes the rest of the string. If count is less than or
equal to 0, no characters are deleted.
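A sketch (the string value is chosen for illustration):

```pascal
var
  S : String;
begin
  S := 'ABCDE';
  Delete(S, 2, 2);  // removes 'BC'; S is now 'ADE'
  Result := S;
end;
```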
Insert(Source,S,Index)
Insert merges Source into S at the position S[index]. Source is a string-type expression. S is
a string-type variable of any length. Index is an integer-type expression. It is a character index
and not a byte index. If Index is less than 1, it is mapped to a 1. If it is past the end of the
string, it is set to the length of the string, turning the operation into an append. If the Source
parameter is an empty string, Insert does nothing.
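A sketch (the string values are chosen for illustration):

```pascal
var
  S : String;
begin
  S := 'ABC';
  Insert('XY', S, 2);  // S is now 'AXYBC'
  Result := S;
end;
```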
Sqr(X)
The Sqr function returns the square of the argument. X is a floating-point expression. The
result, of the same type as X, is the square of X, or X*X.
Sqrt(X)
Exp(X)
Exp returns the value of e raised to the power of X, where e is the base of the natural
logarithms
Ln(X)
Sin(X)
Cos(X)
Cos returns the cosine of the angle X. X is an expression that represents an angle in radians.
Tan(X)
ArcTan(X)
SetLength(S,Length)
High(X)
Low(X)
PI
Represents the mathematical value pi, the ratio of a circle's circumference to its diameter. Pi
is approximated as 3.1415926535897932385.
ArcCos(X)
ArcCos returns the inverse cosine of X. X must be between -1 and 1. The return value is in
the range [0..Pi], in radians.
ArcCosh(X)
ArcCosh returns the inverse hyperbolic cosine of X. The value of X must be greater than or
equal to 1.
ArcCot(X)
ArcCotH(X)
ArcCsc(X)
ArcCscH(X)
ArcSec(X)
ArcSecH(X)
ArcSin(X)
ArcSin returns the inverse sine of X. X must be between -1 and 1. The return value will be
in the range [-Pi/2..Pi/2], in radians.
ArcSinh(X)
ArcTan(X)
ArcTanh(X)
ArcTanh returns the inverse hyperbolic tangent of X. The value of X must be between -1
and 1 (inclusive).
Ceil(X)
Call Ceil to obtain the lowest integer greater than or equal to X. The absolute value of X
must be less than MaxInt. For example: Ceil(-2.8) = -2, Ceil(2.8) = 3, Ceil(-1.0) = -1.
Cosecant(X)
Use Cosecant to calculate the cosecant of X, where X is an angle in radians. The
cosecant is calculated as 1 / Sin(X).
Cosh(X)
Cot(X)
Call Cot to obtain the cotangent of X. The cotangent is calculated using the formula
1 / Tan(X).
Cotan(X)
Call Cotan to obtain the cotangent of X. The cotangent is calculated using the formula
1 / Tan(X).
Do not call Cotan with X = 0.
CotH(X)
Csc(X)
CscH(X)
Use CscH to calculate the hyperbolic cosecant of X, where X is an angle in radians.
CycleToDeg(X)
CycleToDeg converts angles measured in cycles into degrees, where degrees = cycles * 360.
CycleToGrad(X)
CycleToRad(X)
CycleToRad converts angles measured in cycles into radians, where radians = 2pi * cycles.
DegToCycle(X)
DegToGrad(X)
Use DegToGrad to convert angles expressed in degrees to the corresponding value in grads.
DegToRad(X)
Floor(X)
Call Floor to obtain the highest integer less than or equal to X. For example:
Floor(-2.8) = -3, Floor(2.8) = 2, Floor(-1.0) = -1.
GradToCycle(X)
GradToDeg(X)
GradToRad(X)
GradToRad converts angles measured in grads into radians, where radians = grads * (pi/200).
Hypot(X,Y)
Hypot returns the length of the hypotenuse of a right triangle. Specify the lengths of the
sides adjacent to the right angle in X and Y. Hypot uses the formula Sqrt(X*X + Y*Y).
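For example (the classic 3-4-5 right triangle):

```pascal
begin
  Result := Hypot(3, 4);  // returns 5
end;
```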
IntPower(Base,Exponent)
Ldexp(X)
LnXP1(X)
LnXP1 returns the natural logarithm of (X+1). Use LnXP1 when X is a value near 0.
Log10(X)
Log2(X)
LogN(Base,X)
Max(A,B)
Call Max to compare two numeric values. Max returns the greater value of the two.
Min(A,B)
Call Min to compare two numeric values. Min returns the smaller value of the
two.
Power(Base,Exponent)
Power raises Base to any power. For fractional exponents or exponents greater than MaxInt,
Base must be greater than 0.
RadToCycle(X)
Use RadToCycle to convert angles measured in radians into cycles, where cycles =
radians/(2pi).
RadToDeg(X)
RadToGrad(X)
RandG(Mean,StdDev)
RandG produces random numbers with Gaussian distribution about the Mean. This is useful
for simulating data with sampling errors and expected deviations from the Mean.
RandomRange(AFrom,ATo)
RandomRange returns a random integer from the range that extends between AFrom and
ATo (non-inclusive). RandomRange can handle negative ranges (where AFrom is greater than
ATo). To initialize the random number generator, add a single call to Randomize or assign a
value to the RandSeed variable before making any calls to RandomRange.
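A sketch (the range shown is arbitrary; the result varies from run to run):

```pascal
begin
  Randomize;                     // initialize the generator before the first call
  Result := RandomRange(1, 11);  // a random integer from 1 to 10 (11 is excluded)
end;
```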
Sec(X)
Call Sec to obtain the secant of X, where X is an angle in radians. The secant is calculated
using the formula 1 / Cos(X).
SecH(X)
Sinh(X)
Tan(X)
Tanh(X)
7.4.7 Lookup
7.4.10 Sequence
Properties:
8. Date formats
Date/Time format strings control the conversion of strings into date time type.
Date/Time format strings are composed from specifiers which describe values to be
converted into the date time value.
In the following table, specifiers are given in lower case. Case is ignored in formats,
except for the "am/pm" and "a/p" specifiers.
Specifier Description
tt Uses the 12-hour clock for the preceding h or hh specifier, 'am' for any hour
before noon, and 'pm' for any hour after noon.
It is important to understand that this format has nothing to do with your target database.
It describes the format of the source data. It is there to help convert a string into the date
time type inside the software, so it can be loaded later into a date or timestamp field.
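For example, a sketch converting the current date and time to a string; the specifier characters shown ('dd', 'mmm', 'yyyy', 'hh', 'nn', 'ss') are standard Delphi-style specifiers and are assumed here, since the full specifier table is not reproduced above:

```pascal
begin
  // day, abbreviated month name, year, hours, minutes, seconds
  Result := FormatDateTime('dd mmm yyyy hh:nn:ss', Now);
end;
```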
9. Command Line
When submitting a bug or problem, please include the following information to make it easier
for us to solve the problem as soon as possible:
This End-User License Agreement ("EULA") is a legal agreement between you (either
an individual or a single entity) and DB Software Laboratory for the SOFTWARE
PRODUCT identified above, which includes computer software and may include
associated media, printed materials, and "online" or electronic documentation. By
installing, copying, or otherwise using the SOFTWARE PRODUCT, you agree to be
bound by the terms of this EULA. If you do not agree to the terms of this EULA, you
may be subject to civil liability if you install and use this SOFTWARE PRODUCT.
Once the SOFTWARE PRODUCT is installed, you may use it for 30 days. After the
evaluation period ends, you must purchase a license or stop using the
SOFTWARE PRODUCT.
LICENSING
1. A single computer usage license. The user purchases one license to use the
SOFTWARE PRODUCT on one computer.
2. A SITE usage license. The user purchases a single usage license, authorising the
use of the SOFTWARE PRODUCT, by the purchaser, the purchaser's
employees or accredited agents, on an unlimited number of computers at the same
physical site location. This site location would normally be defined as a single
building, but could be considered to be a number of buildings within the same
general geographical location, such as an industrial estate or small town.
You may permanently transfer all of your rights under this EULA, provided the
recipient agrees to the terms of this EULA.
SEVERABILITY
In the event of invalidity of any provision of this license, the parties agree that such
invalidity shall not affect the validity of the remaining portions of this license.
COPYRIGHT
MISCELLANEOUS
Should you have any questions concerning this EULA, or if you desire to contact the
author of this Software for any reason, please contact DB Software Laboratory (see
contact information at the top of this EULA).
LIMITED WARRANTY
Users with a fully paid annual maintenance fee get the following benefits:
Priority Support
Free software enhancements, updates and upgrades during the maintenance period
Advanced and exclusive notification of software promotions
"Maintenance Owner ONLY" product promotions
ENTIRE AGREEMENT
This is the entire agreement between you and DB Software Laboratory which
supersedes any prior agreement or understanding, whether written or oral, relating to
the subject matter of this license.