SG 246393
Modernizing IBM Eserver iSeries Application Data Access - A Roadmap
Learn how to move your applications' data definitions from DDS to SQL
Hernando Bedoya
Daniel Cruikshank
Birgitta Hauser
Sharon Hoffman
Rolf André Klaedtke
Warawich Sundarabhaka
ibm.com/redbooks
International Technical Support Organization
February 2005
SG24-6393-00
Note: Before using this information and the product it supports, read the information in “Notices” on
page vii.
This edition applies to Version 5, Release 3, Modification 0 of i5/OS, Program Number 5722-SS1.
Notices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vii
Trademarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . viii
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix
The team that wrote this redbook. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix
Become a published author . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi
Comments welcome. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi
Chapter 2. Why modernize with SQL and DB2 UDB for iSeries . . . . . . . . . . . . . . . . . . 11
2.1 Background . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.1.1 A short look at the history of SQL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.1.2 The main parts of SQL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.2 Reasons to modernize . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.2.1 Standards compliance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.2.2 Openness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.2.3 Performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.2.4 Available skills. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
2.2.5 Functionality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
2.2.6 Data integrity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
9.3 Date and time calculation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 196
9.3.1 Converting from numeric/character date values to real date values . . . . . . . . . . 197
9.3.2 Converting from date fields to character or numeric representation . . . . . . . . . . 202
9.3.3 Checking for a valid date or time. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 205
9.3.4 Retrieving current date and time . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 207
9.3.5 Adding and subtracting date and time values . . . . . . . . . . . . . . . . . . . . . . . . . . . 210
9.3.6 Calculating date and time differences . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 213
9.3.7 Extracting a portion of a date, time, or timestamp. . . . . . . . . . . . . . . . . . . . . . . . 219
9.3.8 Additional SQL scalar functions for date calculation . . . . . . . . . . . . . . . . . . . . . . 220
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 259
This information was developed for products and services offered in the U.S.A.
IBM may not offer the products, services, or features discussed in this document in other countries. Consult
your local IBM representative for information on the products and services currently available in your area.
Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM
product, program, or service may be used. Any functionally equivalent product, program, or service that does
not infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to
evaluate and verify the operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter described in this document. The
furnishing of this document does not give you any license to these patents. You can send license inquiries, in
writing, to:
IBM Director of Licensing, IBM Corporation, North Castle Drive Armonk, NY 10504-1785 U.S.A.
The following paragraph does not apply to the United Kingdom or any other country where such provisions
are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS
PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED,
INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT,
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of
express or implied warranties in certain transactions, therefore, this statement may not apply to you.
This information could include technical inaccuracies or typographical errors. Changes are periodically made
to the information herein; these changes will be incorporated in new editions of the publication. IBM may make
improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time
without notice.
Any references in this information to non-IBM Web sites are provided for convenience only and do not in any
manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the
materials for this IBM product and use of those Web sites is at your own risk.
IBM may use or distribute any of the information you supply in any way it believes appropriate without
incurring any obligation to you.
Information concerning non-IBM products was obtained from the suppliers of those products, their published
announcements or other publicly available sources. IBM has not tested those products and cannot confirm the
accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the
capabilities of non-IBM products should be addressed to the suppliers of those products.
This information contains examples of data and reports used in daily business operations. To illustrate them
as completely as possible, the examples include the names of individuals, companies, brands, and products.
All of these names are fictitious and any similarity to the names and addresses used by an actual business
enterprise is entirely coincidental.
COPYRIGHT LICENSE:
This information contains sample application programs in source language, which illustrate programming
techniques on various operating platforms. You may copy, modify, and distribute these sample programs in
any form without payment to IBM, for the purposes of developing, using, marketing or distributing application
programs conforming to the application programming interface for the operating platform for which the sample
programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore,
cannot guarantee or imply reliability, serviceability, or function of these programs.
Java and all Java-based trademarks and logos are trademarks or registered trademarks of Sun
Microsystems, Inc. in the United States, other countries, or both.
Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft Corporation in the
United States, other countries, or both.
Intel, Intel Inside (logos), MMX, and Pentium are trademarks of Intel Corporation in the United States, other
countries, or both.
UNIX is a registered trademark of The Open Group in the United States and other countries.
Linux is a trademark of Linus Torvalds in the United States, other countries, or both.
Other company, product, and service names may be trademarks or service marks of others.
Preface
In 1978 IBM® introduced the System/38™ as part of its midrange platform hardware base.
One of the many outstanding features of this system was the built-in relational database
management system (RDBMS) support. The system included a utility for defining databases,
screens, and reports. This utility used a form named Data Description Specifications (DDS) to
define the database physical (PF) and logical (LF) files (base tables, views, and indexes).
This form was columnar in design and similar in style to the RPG III programming language
(widely used on IBM midrange platforms).
In 1988, IBM announced the AS/400®. This was a single system that contained emulation
environments for the System/3x line of hardware products. The OS/400® operating system
also contained a built-in RDBMS; however, IBM offered Structured Query Language (SQL) as
an alternative to DDS for creating databases. In addition, SQL Data Manipulation Language
(DML) statements were made available as an ad hoc query language tool. These statements
could also be embedded and compiled within high level language (HLL) programs.
SQL Data Definition Language (DDL) has become the industry standard for defining relational
databases. DDL statements consist of CREATE statements for defining database objects,
ALTER statements for modifying existing objects (for example, adding a constraint), and
GRANT statements for granting access or permissions on database objects.
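As a minimal sketch of these three statement classes (the schema, table, and user names here are hypothetical, chosen only for illustration):

```sql
-- CREATE: define a new database object
CREATE TABLE sales.customer (
    custnbr  INTEGER      NOT NULL PRIMARY KEY,
    custname VARCHAR(50)  NOT NULL
);

-- ALTER: customize an existing object, for example by adding a constraint
ALTER TABLE sales.customer
    ADD CONSTRAINT custname_not_blank CHECK (custname <> '');

-- GRANT: authorize access or permissions on the object
GRANT SELECT, UPDATE ON sales.customer TO webuser;
```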
Many customers are in the process of modernizing their database definitions and database
access. This IBM Redbook helps you understand how to reverse engineer a DDS-created
database, and provides tips and techniques for modernizing applications to use SQL as the
database access method.
Thomas Gray
Marvin Kulas
Joanna Miszczyk
International Technical Support Organization, Rochester Center
Jarek Miszczyk
Kent Milligan
IBM Rochester
George Farr
Claus Weiss
IBM Toronto
Julie Czubik
International Technical Support Organization, Poughkeepsie Center
Your efforts will help increase product acceptance and customer satisfaction. As a bonus,
you'll develop a network of contacts in IBM development labs, and increase your productivity
and marketability.
Find out more about the residency program, browse the residency index, and apply online at:
ibm.com/redbooks/residencies.html
Comments welcome
Your comments are important to us!
We want our Redbooks™ to be as helpful as possible. Send us your comments about this or
other Redbooks in one of the following ways:
Use the online Contact us review redbook form found at:
ibm.com/redbooks
Send your comments in an email to:
[email protected]
Mail your comments to:
IBM Corporation, International Technical Support Organization
Dept. JLU Building 107-2
3605 Highway 52N
Rochester, Minnesota 55901-7829
Part 1
This chapter provides a general overview of the iSeries Developer Roadmap. We explain the
reasons for the roadmap, its contents, and why it may be important to you and your
organization. It may help you to evaluate your current position and provide some guidance as
to the next steps to follow.
However, we do not go into much detail, and we provide no instructions in this chapter on
how to implement the individual steps. The other chapters in this book provide more
technical details and in-depth information on how to put the theory into
practice.
Additional information on the iSeries Developer Roadmap can be found on the following Web
site:
http://www-1.ibm.com/servers/eserver/iseries/roadmap
Part of the information in this chapter was adapted from the above site.
Programming languages and the related programming models have changed as well. Over
the years there have been several paradigm shifts, and today graphical user
interfaces and object-oriented programming have become widely used mainstream
technologies.
The IBM Eserver™ iSeries and its predecessors have gained a strong reputation in the
industry for featuring a very stable, robust technology, and for being ideally suited as a
business computer system.
In the introduction of this chapter, we already pointed out that technology evolves rapidly.
Application development methodologies and database access methods are no exception to
this, and may need some serious overhaul. To help companies take the next step in
modernizing their applications and their development environments, IBM has created the
iSeries Developer Roadmap.
This roadmap has been designed specifically to take into account the extent to which your
shop may currently be entrenched in a 5250 application model, as well as the large amount of
experience accumulated with the traditional programming languages that support your
green-screen applications. As mentioned in the opening paragraph of this chapter, the
roadmap’s aim is to help you evaluate your current position and see the next steps to be
taken. In the following chapters we explain why we think the roadmap is important, and
throughout the rest of this book we provide help and guidance on modernizing your
applications and database access.
As an individual, you may want to enhance your knowledge and learn new technologies in
order to strengthen or improve your position, either in your current job or in the job market.
New programming languages and tools offer better support for the kinds of requests coming
from users that you may have to satisfy, if not now then possibly in the near future. In short,
software development techniques, languages, and tools, as well as hardware technology, are
continually evolving and becoming more efficient, and you should evolve as well in order not
to be left behind.
The reasoning for companies is much the same as for individuals. Spending money on new
hardware may give you better response and execution times, but adapting your database
access methods and the architecture of your applications may gain you more flexibility when
it comes to adapting your business to a changing market. You may also increase the level of
overall security and eliminate sources of errors by taking advantage of the data integrity
features built into your database, while at the same time cutting development time.
The list of advantages is certainly not endless, but it is long enough to show that taking a
serious look at the roadmap and evaluating your current position and possible next steps
is more than just a time-consuming exercise.
You may find that you are already at an advanced level. Fine. Isn’t it nice to receive
confirmation that you are on the right path? If you are not at an advanced level, this book,
along with other resources available from IBM, is here to help you move into
the Web application world... in a staged, non-disruptive manner.
For example, some years ago it was considered normal to order products by writing a letter or
a postcard. Then came fax machines, and now many companies accept orders 24 hours
a day, all year round, through their Web sites. Companies that did not keep up with these
changes risked receiving no more orders.
Many IT shops and Business Partners that use the iSeries platform today are to be found on
the left side of the chart. Typical development tasks still involve building and maintaining
green-screen applications using long-available compilers, such as RPG and COBOL, via
traditional 5250 tools such as Programming Development Manager (PDM), Source Entry
Utility (SEU), Screen Design Aid (SDA), and Report Layout Utility (RLU), some of which are
more than 20 years old.
The first step involves embracing modern tools to do the same development work previously
accomplished via PDM, SEU, SDA, and RLU (see 1.2.1, “Better tools” on page 5).
The next step (explained in 1.2.2, “Better user interface” on page 6), which is considered to
be urgent by end users (and also the most visible one), is a better user interface (UI) than the
generations-old green screen. For most applications, this is best addressed by moving to a
browser-based user interface.
Better portability involves a move from creating business logic in traditional languages to
writing it in Java. You use simple, standard Java—referred to as Java 2 Standard Edition
(J2SE)—that accesses data in the familiar SQL ways. This step is introduced in 1.2.4, “Better
portability” on page 8.
Site Developer is IBM’s entry-level offering. It is used for building dynamic Web sites out of
non-EJB Java. Application Developer extends Site Developer and adds support for EJBs.
Application Developer - Integration Edition extends Application Developer and adds support
for JCA connectors and for workflow. Finally, the Enterprise Developer further extends the
tool and adds support for zSeries® and Enterprise Generation Language (EGL), the follow-on
to VisualAge® Generator.
Several excellent books on Eclipse have been written, and an impressive amount of free and
commercial plug-ins are available.
To download Eclipse or read more about it, go to the following Web site:
http://www.eclipse.org
To find plug-ins and other useful information on Eclipse, check out the following Web site:
http://www.eclipse-plugins.info
Learning RSE also opens opportunities to access the next generation of third-party tools that
are built on top of Eclipse. Furthermore, RSE works not only with OS/400 files, commands,
and jobs, but also with IFS files and Qshell commands, and with Linux® files and commands
that reside in their own logical partition (LPAR). That is, from a Microsoft® Windows®
workstation, you can remotely access and edit files and run commands. RSE even works with
the files and commands in remote UNIX®, Windows, or Linux servers, as well as with local
Windows files and commands. Ultimately, as Java and Web services technologies are further
adopted, this consistent support across file systems and command shells will be very
important.
All three produce a Web user interface (UI) from a 5250 UI, with no impact on the underlying
application logic. They produce UIs that run on WebSphere Application Server - Express (or
later releases) or on any operating system that can support WebSphere Application Server.
For more information on the WebFacing tool we recommend the IBM Redbook The IBM
WebFacing Tool: Converting 5250 Applications to Browser-based GUIs, SG24-6801.
HATS, the second “re-facing” option, is part of Host Integration Solution for iSeries. It
converts a 5250 or 3270 datastream, at runtime, to a browser-based interface that runs in
WebSphere Application Server. Because it is a runtime conversion, it instantly transforms
screens so that they can be displayed on the Web. HATS developers can easily refine, in a
repeatable manner, the conversion results to improve the Web UI. The HATS development
environment plugs into WebSphere Development Studio Client for iSeries.
At first glance, iSeries Access for Web seems similar to HATS in implementation. They both
perform 5250 to HTML conversion at execution time. However, the key strengths of iSeries
Access for Web are all of the additional things that it does, in addition to datastream
transformation. There are many “operational” capabilities inside the iSeries Access for Web
tool that allow a user to browse job queues and output queues, display message queues, etc.
While browsing a spooled file, it is possible to view the output in .pdf format and then e-mail it
to other users. It is a very powerful tool for remote operations, as well as being a
transformation tool.
By re-architecting the application into a modular one, you also allow for the replacement
and/or addition of modern technologies such as browser-based interfaces and distributed
database activity.
Struts
An even better architecture can be achieved through the use of Apache Struts. Struts is a
very popular open source Web application framework that has become a de facto standard,
and more and more companies are using it. Struts is beyond the scope of this book; check
out the following Web site for more information:
http://struts.apache.org
Struts implements the Model-View-Controller (MVC) design pattern. For an overview of Struts
and the MVC design pattern you may also check out the following Web site:
http://publib.boulder.ibm.com/infocenter/iadthelp/index.jsp?topic=/com.ibm.etools.struts.doc/html/cstruse0001.htm
For even more information and detailed descriptions or tutorials, there are a number of
excellent books available.
To achieve better portability, the business logic is written in J2SE Java (Java 2 Standard
Edition), not in RPG or COBOL, which gives you the option of porting and deploying the code
to any server that runs a Java Virtual Machine (JVM). In other words, your code does not
have to reside or execute on an iSeries server. This opens interesting perspectives to
solution providers, who can extend the market for their applications.
Using Java to write the business code also allows for the incorporation of objects and
components, as well as many Java industry tools and standards that are available, such as
design patterns and the Unified Modeling Language (UML). The UML is an object-oriented
analysis and design language from the Object Management Group (OMG).
For more information on the UML and the OMG, visit the following Web sites:
http://www.uml.org
http://www.omg.org
For a succinct description of the main concepts of the UML, we recommend the book UML
Distilled: A Brief Guide to the Standard Object Modeling Language, written by Martin Fowler
(ISBN 0-321-19368-7).
EJBs are beyond the scope of this book, but here is a short explanation and a pointer towards
more information.
In this book we concentrate on the database aspect of the roadmap. In Part 2, “Data
definition” on page 19, we define the terms used and look at the differences between DDS
and SQL, showing you (among other topics) how to reverse engineer DDS described files to
SQL.
In Part 3, “Data access” on page 55, we concentrate on data access; that is, we show you
how to access data using native I/O methods and embedded SQL, as well as how to
externalize data access.
But let us start with a simple question: Do you remember the 5 1/4 inch floppy disks on
earlier PCs, or even the 8 inch floppy disks on the S/38? They were first replaced with 3 1/2
inch disks, and nowadays almost all software that you purchase comes on a CD-ROM.
Would you consider installing software from those small disks today, except perhaps a
really small application that fits on a single one? Probably not, because besides the fact that
those disk drives are long gone and their storage capacity was rather low, there is
at least one other good reason: Performance. Technology has evolved, and the same is true
for DB2 UDB on the IBM Eserver iSeries.
The key points that we think are the main reasons to consider moving
your data definitions and data access to SQL are the following:
Standards compliance: SQL is a widely used standard.
Openness: Modernizing your database provides you with more and better options to
access your database using third-party tools.
Performance: IBM is investing in improving database access through SQL, not
elsewhere.
Available skills: In the long run, it might be easier to find developers with
Java and SQL skills than with RPG/COBOL and DDS knowledge.
Functionality: Some new functions require SQL.
Data integrity: Concentrating part of your business rules in the database can cut
development time and prevent bad surprises.
The SQL language is not proprietary. This, and the fact that both the American National
Standards Institute (ANSI) and the International Organization for Standardization (ISO)
formed SQL standards committees in 1986 and 1987, were major reasons why SQL became
a widely accepted standard that is implemented in almost all RDBMSs.
At this time, three standards have been published by the ANSI-SQL group:
SQL89 (SQL1)
SQL92 (SQL2)
SQL99 (SQL3)
For more information on SQL, you may refer to the following resource:
http://publib.boulder.ibm.com/pubs/html/as400/infocenter.html
This is the iSeries Information Center entry page, where you can find a wealth of
iSeries-related information.
2.2.2 Openness
Modernizing your database gives you more options, such as easier access through
third-party development (Microsoft Visual Studio, Sybase PowerBuilder), reporting (Business
Objects Crystal Reports), or database design tools. Some of the tools on the market do offer
an interface for DDS-described DB2 UDB databases, but that is not the rule.
Furthermore, most Web-based application development tools offer built-in support for data
access through SQL, that is, they often generate all necessary SQL code for you. For
example, it is quite common to find configurations where the main business applications are
running on the iSeries, but where an Intel®-based server running MS Windows Server is
used for some Windows server based applications and/or for simple file serving. In such an
environment, access to DB2 UDB for iSeries is easier if it is done using SQL.
2.2.3 Performance
DB2 UDB for iSeries provides two query engines to process queries: The Classic Query
Engine (CQE) and the SQL query engine (SQE). Queries that originate from non-SQL
interfaces such as the OPNQRYF command, Query/400, and the QQQQry API are
processed by CQE.
To fully understand the implementation of query management and processing in DB2 UDB for
iSeries, it is important to see how queries were implemented in releases of OS/400
prior to V5R2.
Figure 2-1 on page 14 shows a high-level overview of the architecture of DB2 UDB for iSeries
before OS/400 V5R2. The optimizer and database engine are implemented at different layers
of the operating system. The interaction between the optimizer and the database engine
occurs across the Machine Interface (MI).
Figure 2-1 Overview of the architecture of DB2 UDB for iSeries before OS/400 V5R2
Figure 2-2 on page 15 shows an overview of the DB2 UDB for iSeries architecture on OS/400
V5R2 and i5/OS™ V5R3, and where each SQE component fits. The functional separation of
each SQE component is clearly evident. In line with design objectives, this division of
responsibility enables IBM to more easily deliver functional enhancements to the individual
components of SQE, as and when required. Notice that most of the SQE Optimizer
components are implemented below the MI. This translates into enhanced performance
efficiency.
There are good reasons to assume that more resources will be invested into improving and
enhancing database access through SQL-based interfaces. This is another good reason for
considering SQL.
But it is also very important to look forward and try to see the trends. Those who make the
best guess about what customers will want to buy tomorrow, and start preparing for that
market today, clearly have an advantage. Of course, predicting the future is not possible, but
the more flexible a business is, the better it can adapt to new challenges. The same is true
for human resources.
Accordingly, the tools and techniques used to create software have evolved dramatically over
the years. Nobody would seriously consider writing a large business application using punch
cards anymore. Modern programming languages and tools provide possibilities that only a
few years earlier were maybe imaginable but not realizable for most of us.
Combining the above statements with the topic of this section, we think it is important to have
a look at the job market and the availability of the needed skills. Since the first release of RPG
and COBOL, many other new programming languages have come and gone; some have
remained and are widely used today. One of these languages is Java.
It seems clear that today more software developers are learning Java and SQL than RPG or
COBOL. This means that in the long run it might be easier to find software developers with
Java and SQL knowledge than with RPG/COBOL and DDS knowledge. Modernizing the most
important applications using SQL rather than DDS is a first step toward making sure not only
that these applications can fulfill their purpose, but also that the people with
the right skill sets are available to maintain and enhance them.
2.2.5 Functionality
Some functions in DB2 UDB for IBM Eserver iSeries require the use of SQL. Among these
are:
New data types such as BLOB, CLOB, DBCLOB, and datalink.
– Large object data types store data ranging in size from zero bytes to 2 gigabytes. The
three large object data types have the following definitions:
• Character Large OBjects (CLOBs): A character string made up of single-byte
characters with an associated code page. This data type is appropriate for storing
text-oriented information where the amount of information can grow beyond the
limits of a regular VARCHAR data type (upper limit of 32 K bytes). Code page
conversion of the information is supported.
• Double Byte Character Large OBjects (DBCLOBs): A character string made up of
double-byte characters with an associated code page. This data type is appropriate
for storing text-oriented information where double-byte character sets are used.
Again, code page conversion of the information is supported.
• Binary Large OBjects (BLOBs): A binary string made up of bytes with no associated
code page. This data type can store binary data larger than VARBINARY (32 K
limit). This data type is good for storing image, voice, graphical, and other types of
business or application-specific data.
– A datalink value is an encapsulated value that contains a logical reference from the
database to a file stored outside the database.
Auto-incrementing of keys (sequence objects and identity column attributes): Very often, a
new row in a table must receive a unique numerical value as a record ID. Instead of writing
code to create such a value, which in fact is a counter, let the database do this
automatically.
Column-level triggers: In V5R1 IBM introduced support for SQL triggers in DB2 UDB for
iSeries, which allows you to write triggers using extensions to the SQL as defined by the
SQL standard. The greatest advantage of using SQL triggers is portability. You can often
use the same SQL trigger across other RDBMSs.
– For more information on triggers in DB2 UDB for iSeries, we recommend reading the
relevant chapter in the redbook Stored Procedures, Triggers and User Defined
Functions on DB2 Universal Database™ for iSeries, SG24-6503-01.
Encryption and decryption functions: The ability to encrypt and decrypt data at the column
level has been enhanced with the addition of new SQL scalar functions. It is now possible
to invoke a DB2 SQL statement like the following:
INSERT INTO orders VALUES (ENCRYPT('1234-4567-8900-0001'), 'JOHN DOE')
Here the first value represents a credit card number. Only those users and
applications with access to the encryption key (or password) can see the unencrypted
(decrypted) values.
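To illustrate two of these features together, here is a minimal, hedged sketch combining an identity column with the encryption scalar functions; the table and column names are hypothetical, and the encrypted column is sized generously because an encrypted value is longer than its cleartext:

```sql
-- Identity column: the database generates the unique record ID itself
CREATE TABLE orders (
    order_id INTEGER GENERATED ALWAYS AS IDENTITY
                     (START WITH 1, INCREMENT BY 1),
    cardnbr  VARCHAR(64) FOR BIT DATA,   -- holds the encrypted value
    custname VARCHAR(30)
);

-- Encryption: set the key once for the session, then encrypt on insert
SET ENCRYPTION PASSWORD = 'my-secret-key';
INSERT INTO orders (cardnbr, custname)
    VALUES (ENCRYPT_RC2('1234-4567-8900-0001'), 'JOHN DOE');

-- Only sessions that supply the same password can decrypt the column
SELECT order_id, DECRYPT_CHAR(cardnbr), custname FROM orders;
```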
Modern RDBMSs such as DB2 UDB support data integrity through the following features:
Journaling: A journal is a chronological record of changes made to a set of data. Journals
record every change in a table so that in the case of a major failure, all the data can be
recovered using the latest save of the database and then applying the changes recorded
in the journal to the recovered database table.
Constraints: Table constraints are used to enforce restrictions on the data allowed in
particular columns of particular tables.
– A table can have one PRIMARY KEY consisting of one or more columns.
– A set of one or more columns may be declared as UNIQUE, which means that no two
rows may contain the same value in those columns (for example, a social security
number may be a unique key, because there cannot be two people with the same
social security number).
– Columns may have a CHECK constraint, which would specify the values allowed for
that column (for example, a field that holds a code representing the gender information
of an employee can contain ‘1’ for ‘Female’ or ‘2’ for ‘Male’ but not ‘3’).
Referential integrity: Referential integrity (RI) is a type of constraint that deals with
relationships between tables. To reuse the example from the beginning of this section,
there would be a referential integrity check tying the order table to the customer table.
Each order would contain a valid customer number from the Customer table as a
FOREIGN KEY. The RI constraint would ensure that a customer cannot be deleted while
there are open orders of that particular customer in the order table.
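The constraint types described above can be combined in table definitions; the following sketch uses illustrative table and column names:

```sql
CREATE TABLE CUSTOMER (
  CUSTOMER_NO INTEGER NOT NULL PRIMARY KEY,
  SSN         CHAR(9) NOT NULL UNIQUE,
  GENDER      CHAR(1) CHECK (GENDER IN ('1', '2'))
);

CREATE TABLE ORDERS (
  ORDER_NO    INTEGER NOT NULL PRIMARY KEY,
  CUSTOMER_NO INTEGER NOT NULL,
  -- referential integrity: every order must point to an existing customer
  FOREIGN KEY (CUSTOMER_NO) REFERENCES CUSTOMER (CUSTOMER_NO)
);
```

With these definitions in place, the database itself rejects an order for a nonexistent customer and a delete of a customer who still has orders.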
Commitment Control: Commitment control is a mechanism to handle multiple table
transactions as a single unit of work. For example, the bank transfer of a salary payment
involves at least two table updates: First the deduction from the bank account of the
employer, and second the credit to the employee’s bank account. If a failure occurs
between these two updates, the whole transaction fails and is rolled back.
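Under commitment control, the two updates of the salary transfer are grouped into one unit of work; a sketch with illustrative table and column names:

```sql
UPDATE ACCOUNTS SET BALANCE = BALANCE - 5000 WHERE ACCOUNT_ID = 'EMPLOYER';
UPDATE ACCOUNTS SET BALANCE = BALANCE + 5000 WHERE ACCOUNT_ID = 'EMPLOYEE';
COMMIT;    -- makes both updates permanent together
-- on a failure between the updates, ROLLBACK undoes the whole unit of work
```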
Triggers: Triggers are user-written programs that are run automatically whenever a
change is made to a table. Triggers can be defined to run BEFORE or AFTER an
INSERT, an UPDATE, or a DELETE. They are useful for tasks such as enforcing business
rules, validating input data, and keeping an audit trail. Such a program could, for example,
automatically send a message to a user when a value has been changed in a certain
table.
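As a sketch, an AFTER UPDATE trigger that keeps an audit trail of price changes might look like the following (the tables ITEM and PRICE_AUDIT and their columns are assumptions for illustration):

```sql
CREATE TRIGGER ITEM_PRICE_AUDIT
  AFTER UPDATE OF PRICE ON ITEM
  REFERENCING OLD AS O NEW AS N
  FOR EACH ROW MODE DB2ROW
  WHEN (N.PRICE <> O.PRICE)
    -- record old value, new value, and the time of the change
    INSERT INTO PRICE_AUDIT
      VALUES (O.ITEM_ID, O.PRICE, N.PRICE, CURRENT TIMESTAMP)
```

The trigger fires regardless of which interface changed the row, so the audit trail cannot be bypassed by an application.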
Traditionally, referential integrity rules and check constraints are tied into the application
program. Moving these business rules out of the application program into the database using
SQL constraints offers these advantages:
Less coding required, because the rules do not have to be written in the program, making
the program smaller and therefore easier to understand and maintain.
Better performance, because the DBMS handles these rules faster than a user-written
application program.
Better portability, because the business rules are not hidden in the program but a part of
the database.
More security: Business and data integrity rules defined in the database provide more
security because they cannot be circumvented by a faulty or incompletely written
application.
In DB2 UDB for iSeries, once these relationships and rules are defined to the database
manager, the system automatically ensures that they are enforced at all times, regardless of
the interface used to change the data (an application, the Data File Utility, Interactive SQL,
and so on).
To read more about the advanced functions supported in DB2 UDB for iSeries, refer to the
Redbook Advanced Functions and Administration on DB2 Universal Database for iSeries,
SG24-4249.
Furthermore, we recommend reading the redbook Stored Procedures, Triggers and User
Defined Functions on DB2 Universal Database for iSeries, SG24-6503, or at least the chapter
“Transaction management in stored procedures” for more information on transaction
management.
For more information on journaling on the IBM eServer iSeries, refer to the redbook Striving
for Optimal Journal Performance on DB2 Universal Database for iSeries, SG24-6286.
This chapter gives you an overview of the different approaches and options that you have as
you begin the process of database modernization. We also introduce a methodology for this
modernization that we will be explaining throughout the book.
SQL term        Traditional term
--------        ----------------
Column          Field
Row             Record
Log             Journal
Most database objects can be used interchangeably whether they are created using DDS or
SQL. For example, a physical file created using DDS can be manipulated using SQL, and
likewise, a table created using SQL Data Definition Language (DDL) has an external
definition that can be used in RPG and COBOL programs and is indistinguishable from one
created using DDS, as illustrated in Figure 3-1.
Figure 3-1: SQL-created objects can be used by native programs (with restrictions: EVIs,
LOB columns, UDTs, datalinks, and so on), and DDS-created objects can be used by SQL;
SQL cannot access multi-member and multi-format files.
When you create a table using SQL you are creating a physical file object.
There are also differences in data retrieval capabilities between SQL and languages such as
COBOL and RPG. For example, SQL does not support the concept of multi-member files, so
before you can use SQL to access any member except the first member of a multi-member
file, you will need to define an SQL Alias that points to the specific member and assigns it a
name. This will be discussed in Part 3, “Data access” on page 55.
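For example, assuming a multi-member file SALES in library MYLIB with a member named FEB, an alias could be created and used like this:

```sql
-- make member FEB of MYLIB/SALES addressable by SQL
CREATE ALIAS MYLIB.SALES_FEB FOR MYLIB.SALES (FEB);

-- SQL statements can then reference the member through the alias
SELECT * FROM MYLIB.SALES_FEB;
```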
In addition to different terminology for database structures such as files and fields, SQL
supports some data types, such as datalinks, that are not available in DDS. There are also
some restrictions on HLLs such as RPG and COBOL when it comes to defining Encoded
Vector Indexes, accessing LOB columns, or using User Defined Types or datalinks.
While native database file operations through high-level languages (HLLs) such as RPG or
COBOL have been available since the inception of the iSeries, SQL is a standard database
language that can be used with all major databases and can be embedded in many
programming languages. SQL provides not only functions to define database objects, but
also functions to manipulate database data.
At compile time, the physical and logical files that are used in the program must exist,
because the file descriptions are bound into the program, module, or service program object.
It is only possible to access different files with the same structure dynamically, either by
overriding them (CL command OVRDBF) or by using the keywords EXTFILE or EXTMBR for a
user-opened file in the F-specifications.
Through operation codes like READ or CHAIN, the complete record is accessed and
processed. You can only access selected fields by using a logical file with field selection;
otherwise the whole record is read.
All executable SQL statements must be prepared before they can be executed. The result of
preparation is the executable or operational form of the statement. Depending on the method
and time of preparation, SQL statements are either static or dynamic.
Figure: Both native record-level I/O and SQL requests are processed by DB2 UDB (data
storage and management); SQL requests additionally pass through the query optimizer.
Static SQL
These SQL statements are embedded in the source code of a host application program.
These host application programs are typically written in HLL, such as COBOL or RPG.
The host application source code must be processed by an SQL pre-compiler before
compiling the host application program itself. The SQL pre-compiler checks the syntax of the
embedded SQL statements and replaces SQL statements with calls to corresponding SQL
function programs. If the tables used in the embedded SQL statements are not available at
compile time, an SQL warning is issued, but the program, module, or service program object is
nevertheless generated. In this case the access plan cannot be built at compile time; it is
built at runtime instead.
The pre-compiler is part of the IBM licensed product DB2 Query Manager and SQL
Development Kit for AS/400 (5769-ST1), which must be available at compile time.
The runtime support is included in the operating system. That means that compiled programs
or service programs containing embedded SQL statements can be executed even without an
SQL license.
The SQL statements are therefore prepared before executing the program, and the
associated access plan persists beyond the execution of the host application program.
Dynamic SQL
Programs containing embedded dynamic SQL statements must be precompiled like those
containing static SQL, but unlike static SQL, the dynamic SQL statements are checked and
prepared at runtime.
Access plans associated with dynamic SQL may not persist after a database connection or
job is ended.
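A typical dynamic SQL sequence prepares a statement from a string at runtime and then executes it. In embedded SQL this might look like the following sketch, where :STMT_TEXT and :CUSTNO are host variables in the HLL program:

```sql
-- :STMT_TEXT contains, for example:
--   'UPDATE CUSTOMER SET STATUS = ''A'' WHERE CUSTOMER_NO = ?'
PREPARE S1 FROM :STMT_TEXT;
EXECUTE S1 USING :CUSTNO;
```

The parameter marker (?) lets the prepared statement be executed repeatedly with different values without re-preparation.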
The QSQPRCED API (Process Extended Dynamic SQL) provides users with extended
dynamic SQL capability. Like dynamic SQL, statements can be prepared, described, and
executed using this API. Unlike dynamic SQL, SQL statements prepared into a package by
this API persist until the package or statement is explicitly dropped.
The iSeries Access Open Database Connectivity (ODBC) driver and Java Database
Connectivity (JDBC) driver both have extended dynamic SQL options available. They
interface with the QSQPRCED API on behalf of the application program.
For more information about SQL in DB2 UDB for iSeries, refer to the DB2 Universal Database
for iSeries SQL Reference in the iSeries Information Center at:
http://publib.boulder.ibm.com/infocenter/iseries/v5r3/ic2924/index.htm?info/db2/rbafzmst.htm
This methodology is based on existing applications that access the database via high level
language (HLL) I/O operations commonly referred to as native support.
In Chapter 4, “Modernizing database definitions” on page 29, we cover all of the steps
involved in this first stage of the process.
The process involves a phased approach to replace native I/O operations with SQL data
access methods. The strategy of using I/O modules is to limit the SQL optimization
knowledge to the database programming group. This will allow the application programmers
to focus on solutions to business requirements without a need to understand the complexities
of database optimization.
The I/O modules mask the complexity of the database from the application programmer. For
example, a HLL program may be performing several read operations to multiple files to fill a
subfile. This could be replaced by a single call to an I/O module, which performs a single SQL
fetch operation to a join view and returns a single host array (multiple occurrence data
structure in RPG) to the caller.
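For example, instead of chained reads to a header, customer, and detail file, the I/O module could fetch from a join view like the following sketch (the file and column names are assumptions for illustration):

```sql
CREATE VIEW ORDER_SUBFILE_V AS
  SELECT H.ORDER_NO, C.CUSTOMER_NAME, D.ITEM_NO, D.QUANTITY
    FROM ORDHDR H
    JOIN CSTMR  C ON H.CUSTOMER_NO = C.CUSTOMER_NO
    JOIN ORDDTL D ON H.ORDER_NO    = D.ORDER_NO;
```

A single multiple-row FETCH against this view can then fill the host array that the I/O module returns to the caller.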
In addition, the I/O modules allow the database programmer to take advantage of database
functions (that is, date and time data types, variable length fields, identity columns, etc.), thus
eliminating many common HLL programming requirements. This includes programming
required to format date and time data, formatting address lines, etc.
In Chapter 5, “Creating I/O modules to access SQL objects” on page 57, we cover the steps
required for this stage.
Since many customers have never taken the time to truly normalize their existing DDS
database, this is a good opportunity to do so.
Database triggers can be used to perform additional updates to auxiliary tables, possibly
eliminating overnight batch updates. They can also be used to initiate asynchronous
background tasks via data queues.
In this chapter we describe the methods and considerations in the process of modernization
of the database definition.
Note: There is no single solution that fits all application development environments. We
introduce some guidelines to help you modernize your applications to use and exploit the
features of SQL.
The complexity of this stage is dependent on the current condition of the existing database. If
all files are fully described DDS files then there should be few, if any, exceptions. This step
requires iSeries Navigator V5R1 or later. If you are not familiar with iSeries Navigator, we
recommend that you read the redbook DB2 Universal Database for iSeries Administration
The Graphical Way on V5R3 - SG24-6092.
The list gathers statistics to determine the physical files with the fewest, most, and average
number of dependent logical files and programs. Choose a physical file with the average
number of associated logical files and programs. This will be the pilot file for re-engineering.
The iSeries system catalogs contain information about the database objects. You can query
the system catalog tables to find out, for example, the number of physical files to convert,
using the following SQL statement:
select count(*) from qsys2.SYSTABLES
where table_schema = 'APILIB' and
table_type = 'P' and
file_type = 'D'
To find out the list of the physical files in a given library, use the following SQL statement:
select table_name, table_type, file_type from qsys2.SYSTABLES
where table_schema = 'APILIB' and
table_type = 'P' and
file_type = 'D'
order by table_name
In the previous example, we wanted to build the list of all data physical files in library APILIB.
We queried the system catalog SYSTABLES in the QSYS2 schema. The TABLE_TYPE
is ‘P’ for physical file, and the FILE_TYPE is ‘D’ for data. You have to select on FILE_TYPE if
your library also contains source physical files. Figure 4-2 shows the result of the query.
This query can help you find out what physical files/logical files are in a library. It can be used
to make the list of DDS files to be converted. Later in this chapter we cover the system
catalog tables in more detail.
To bridge the gap between long and short names, the FOR COLUMN clause on the CREATE
TABLE statement allows you to specify a short name for a long column name; the short name
can be used on interfaces that cannot support field names longer than 10 characters. If a short name is
not specified, the system will automatically generate one. The CREATE TABLE statement,
however, does not allow you to specify a short name for the table name. Again, the system
does generate a short name automatically, but the short name is not user-friendly. For
example, when you create a table named CUSTOMER_MASTER, OS/400 automatically
generates a short 10-character name, CUSTO00001, which is the first five characters of the
table name and a unique 5-digit number. It might be different each time a specific object is
created, depending on creation order and what other objects share the same 5-character
prefix. In this case you can use the RENAME TABLE SQL statement.
In addition, you can use SQL DDL statements to specify your own short names, as shown in
the following examples:
RENAME TABLE (table and view) and RENAME INDEX.
CREATE TABLE MYSCHEMA.CUSTOMER_MASTER
(CUSTOMER_NAME FOR COLUMN CUSNAM CHAR(20),
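One common approach, sketched below, is to create the table under the desired 10-character system name and then rename it to the long SQL name; because the long name is not a valid system name, the original system name is preserved (schema and names here are illustrative):

```sql
-- create the table under its short system name
CREATE TABLE MYSCHEMA.CUSTMST
  (CUSTOMER_NAME FOR COLUMN CUSNAM CHAR(20));

-- give it the long SQL name; the system name CUSTMST remains
RENAME TABLE MYSCHEMA.CUSTMST TO CUSTOMER_MASTER;
```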
It is important to create a glossary of reserved words for naming objects. This glossary would
contain the complete word; how it is used; and standard 2-, 3-, and/or 4-character
abbreviations, as shown in Example 4-1.
The following are some suggested guidelines for establishing SQL table and index naming
conventions.
Note: These suggestions are not meant to replace your existing standards and
conventions. The message that we want to highlight is that there should be some naming
conventions in place.
Avoid using the object type as part of the object name. For example, do not use the words
FILE, TABLE, or INDEX as part of the name.
Use the table name and a suffix for SQL indexes. Do not be concerned about the length of
the name, as indexes cannot be specified in an SQL statement. On the iSeries server,
indexes provide statistics and can be used to implement a query. For example,
CUSTMST_X001 is a radix index over CUSTMST, or CUSTMST_V001 is an Encoded
Vector index over CUSTMST.
The Generate SQL window (Figure 4-4 on page 35) displays the database objects to be
converted. You can remove unwanted objects, such as source physical files, from the
conversion.
Important: You should be aware that the Generate SQL option can only convert
SQL-supported objects. Thus:
Unsupported objects, such as multiple format logical files, are not displayed or
converted in the Generate SQL window.
Keyed logical files are converted to SQL views.
The generated SQL source can be stored in a source physical file member on the iSeries
server, in a file in the IFS, or opened directly in the Run SQL Scripts window. On the Output
tab, choose either Open in Run SQL Scripts or Write to file. Then click Generate.
The Run SQL Scripts window (Figure 4-5) shows the generated SQL scripts. These are
SQL DDL statements, which can be saved on your PC and executed from this window to
create the new schema and other database objects. You can also save the SQL scripts
in a source physical file and execute them with the RUNSQLSTM command.
Note: The Generate SQL option in iSeries Navigator actually invokes an API called
QSQGNDDL on the iSeries. This API can be used directly in a CL program if you prefer.
The first thing to do in this step is to look for warning messages in the generated code. These
warning messages are shown because not all DDS keywords have an SQL equivalent or can
be converted to SQL (EDTCDE, for example). Let us illustrate this with the following example,
in which the original DDS source defines ORHNBR as a key field:
A K ORHNBR
Example 4-3 shows the SQL statements generated to define the original Order Header File
ORDHDR.
Example 4-3 Created SQL script for the DDS-described physical file ORDHDR
-- Generate SQL
-- Version: V5R3M0 040528
-- Generated on: 08/18/04 10:05:32
-- Relational Database: S104RT9M
-- Standards Option: DB2 UDB iSeries
Note from the previous SQL code the following SQL warning messages:
SQL1508 REUSEDLT(*NO) in table ORDHDR in ITSO4710 ignored.
While creating a physical file with CRTPF, you can specify the Reuse
deleted records option (REUSEDLT). The default value is *NO, which
means that when a record is deleted, it is only flagged as deleted.
To remove deleted records physically, you have to execute the CL
command RGZPFM (Reorganize Physical File Member).
SQL tables do not have this option. When a row is deleted from an
SQL table, the allocated storage is reused when a new row is
written. It is not possible to reactivate deleted records.
SQL1509 Format name ORDHDRF for ORDHDR in ITSO4710 ignored.
The format name and the table name of the new SQL table will be
identical.
Note: SQL tables are always created with reuse deleted records *YES, so that it is not
possible to reactivate deleted records.
Example 4-4 on page 38 contains a modified version of the SQL script in Example 4-3 on
page 36. The table is generated with the format name, and the long column names are added
into the CREATE TABLE statement. A new statement to rename the table to a long SQL
name and to set the system name of the table to the old file name is added. All other
statements are not changed.
File-level keywords
The file-level keywords are:
Files that use any of the following keywords: ALTSEQ, FCFO, FIFO, LIFO
These keywords will be ignored.
Join logical files with JDFTVAL or JDUPSEQ
A LEFT OUTER JOIN clause will be generated, but the join default value will be the null
value and the JDUPSEQ keyword will be ignored.
Generally, it is not possible to simply stop using the existing traditional HLL programs and
switch to SQL-based applications. To co-exist with the existing applications, the SQL scripts
may need to be modified for several reasons before creating the new SQL database. Some of
the issues that you will have to address are covered in the next sections.
Finally, the new SQL table, CSTMR, is created and has the record format named CSRCD.
In our conversion process there are some alternatives. You can create a view with join and/or
UNION operators. Note that even a join view, with or without UNION, does not provide a direct
equivalent to a multiple format logical file. However, for general purposes, joins and unions
can combine data from different tables in a way that serves the same purpose as a multiple
format logical file. For this issue, you may choose one of the two following proposals:
Create DDS multiple format logical files over the new SQL tables that are used by the
existing HLL programs, which is our proposal for this first stage.
Modify the existing applications. An example of such a change is to modify the HLL
programs to use new I/O modules to access the new SQL database objects, something that
will be done in the second stage of our methodology.
The generated SQL scripts for join logical files will create SQL views that have definitions
equivalent to the DDS join logical file, without the key values. In this case as well, you have to
generate the CREATE INDEX statements based on the key values of the logical file. Note
that it is the optimizer's decision whether or not to use the indexes that you create.
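Following the naming convention suggested earlier in this chapter, an index generated from the key fields of a logical file might look like this (schema, table, and column names are illustrative):

```sql
-- radix index over CUSTMST, matching the former logical file's key field
CREATE INDEX MYSCHEMA.CUSTMST_X001
  ON MYSCHEMA.CUSTMST (CUSTOMER_NO);
```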
Figure 4-6 shows the journal, journal receiver, and the DB2 views created when we execute
the CREATE SCHEMA MYSCHEMA SQL statement.
The library
The library is the logical "container" of the objects and is where the objects are stored. DB2
object names have to be unique within this container. The DB2 views created as part of the
schema are a set of views that describe tables, views, indexes, packages, procedures,
functions, triggers, and constraints. These views are built over the base set of catalog tables
in libraries QSYS and QSYS2 and only include information on objects contained in that
schema.
Most iSeries customers use their journal receivers as a core part of their database backup
and recovery process. One approach would be to save a complete copy of the table backup
media once a week and then on a nightly basis just save the changes to the table (that is, the
journal receiver) to back up media and then repeat this process every week. Once the journal
receiver has been backed up, the journal receiver object can be deleted. More information
about the proper steps for saving and deleting journal receiver objects can be found in the
Backup and Recovery Guide in the iSeries Information Center. The online version of the
iSeries Information Center can be found at:
http://www.iseries.ibm.com/infocenter
It is possible to have the system automatically delete the journal receiver object by specifying
DLTRCV(*YES) on the CHGJRN CL command or setting the equivalent option on the iSeries
Navigator interface. However, this option should only be used after reading the Backup and
Recovery Guide and understanding the behavior and implications of this option.
You will also notice over time that multiple journal receiver objects will appear in the schema
created. That is because DB2 UDB for iSeries creates the journal object with the
system-managed receiver option (for example, MNGRCV(*YES) ) on the CHGJRN CL
command). The graphical journal management interface in Figure 4-7 on page 43 denotes a
system-managed journal receiver with the System radio button selected in the Receivers
managed by section. With this option specified, DB2 UDB automatically creates a new journal
receiver each time the system is restarted (that is, system IPL) and whenever the attached
receiver reaches its size threshold. The current journal receiver is detached and a new one is
created.
It is easier to back up a journal receiver when it is detached. Also, the receiver must be
detached before it can be deleted. So you can see how this option makes it easier to manage
journal receivers. Managing journal performance is a topic outside of the scope of this book,
but you can reference the redbook Striving for Optimal Journal Performance on DB2
Universal Database for iSeries, SG24-6286, for more information.
Since DB2 UDB for iSeries has no concept of table spaces and automatically stripes and
balances DB2 objects across disks, the journal and journal receiver objects are going to be
the only schema objects that require space management from an administrator's perspective.
The only other space administration task is making sure that there is enough disk space
available on the system. For more information on this topic refer to Managing DB2 UDB for
iSeries Schemas and Journals, written by Kent Milligan, and found at:
http://www7b.boulder.ibm.com/dmdd/library/techarticle/0305milligan/0305milligan.html
Note: The journaling function can be manually disabled for each table if journal
management is already in place.
4.1.7 Create all existing DDS logical files over the new SQL tables
After creating the new schema, we use the generated and reviewed SQL scripts to create the
database objects such as:
Tables
Indexes
After having created the tables, we have to create DDS logical files over all the SQL tables so
that they can be accessed by the existing HLL programs. After creating the DDS logical files
over the new SQL tables, the existing programs can continue to use them without modification.
Let us illustrate this step with one example. Let us suppose that we have:
A DDS physical file named DDS_FILE
A DDS logical file named DDS_LF
An RPG program named DDS_LF_RPG
If we execute a DSPPGMREF to the DDS_LF_RPG program we will get the result shown in
Example 4-8.
The next step is to reverse engineer the DDS PF into SQL. The new SQL table will have a
different name. This is shown in Example 4-9.
The DDL table format ID does not match the DDS PF format ID. This is expected. The next
step is to convert the original DDS PF source to a LF referencing the SQL table, as shown in
Example 4-10. (Note that this logical file must contain the original column definitions.)
After creating the DDS LF DDS_FILE, the format ID remains the same as the original DDS
PF DDS_FILE; however, it is now based on the new SQL DDL table, as shown in
Example 4-11.
The next step is to change DDS LF DDS_LF to share the format of DDS LF DDS_FILE, as
shown in Example 4-12.
After recreating the DDS LF DDS_LF, the format ID remains the same as shown below.
The RPG program still contains the original format level ID and does not need recompiling,
nor would any programs that reference DDS PF DDS_FILE. At this point you are now taking
advantage of database enhancements made available to SQL (in essence, faster reads and
larger access path sizes).
Figure 4-8 on page 46 shows how we have moved from an existing environment to a new
environment.
You need to create a set of file conversion programs. A conversion program reads data from
an existing DDS file and writes it to a new SQL table. It may be a simple CL program, as
below:
CPYF FROMFILE(APILIB/CSTMR) TOFILE(NEWSCHEMA/CSTMR) MBROPT(*REPLACE)
Note: This example can be used if each new column attribute is identical to the existing
field in DDS.
In this step it is a good opportunity to do some cleansing of the data. Some of the things to
check are:
Ensure that there are no invalid characters in the DDS numeric data. SQL checks at insert
time versus read time for DDS.
If the DDS database contains numeric fields representing dates then verify that the dates
are valid.
To validate the data in the records, you would probably need to create HLL programs to
check the validity of the fields, such as valid date data and so on. These efforts save time
down the road.
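Some simple date checks can be done in SQL before writing an HLL validation program. As a sketch, the following query counts rows whose numeric YYYYMMDD date field contains an impossible month, assuming a file ORDHDR with a DECIMAL(8,0) date field ORDDAT:

```sql
-- DIGITS() yields the 8-character string 'YYYYMMDD'; positions 5-6 are the month
SELECT COUNT(*)
  FROM APILIB.ORDHDR
 WHERE SUBSTR(DIGITS(ORDDAT), 5, 2) NOT BETWEEN '01' AND '12'
```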
Use the test tool to establish standard test scripts that will touch each and every file. As each
file is converted, the test script can be rerun to ensure that conversion did not introduce new
invalid data.
As a last step, we may need to move this new environment (that has been a testing
environment) to a production environment. In the next section we explain considerations
regarding this movement.
Many times, however, recreating all of the DB2 objects is not feasible, since the source
objects contain a large amount of data. An alternative method to use in this case is to save
and restore the schema and then manually reset the journal information afterwards. If we
have a schema named TEST with two tables, DEPARTMENT and EMPLOYEE, then here are
the steps that would need to be followed to move schema TEST into schema PROD using the
save and restore method:
1. Use the CL command SAVLIB TEST ACCPTH(*YES).
Remember that the OS/400 container for a schema is a library. The ACCPTH(*YES)
option saves the actual index tree if any indexes exist in the schema. That will eliminate
indexes having to be rebuilt on the restore operation.
2. Create the production schema. Use the SQL statement CREATE SCHEMA PROD.
This will create the target schema object with auto-created journal and system catalog
views.
3. Use the CL command to restore the library: RSTLIB TEST OPTION(*NEW)
RSTLIB(PROD).
The *NEW option will only restore the TEST objects that do not already exist in the PROD
schema. This type of restore essentially restores everything but the objects automatically
created by DB2 UDB (journal, journal receiver, and catalog views).
4. Find the list of DB2 objects currently journaled in the TEST schema.
Since the OS/400 journal CL commands only accept the short DB2 object identifiers, the
short name for a DB2 UDB for iSeries object will need to be recorded in this step.
5. End journaling for all of the objects in TEST schema. Use the CL command ENDJRNPF
*ALL TEST/QSQJRN.
DB2 objects in the new schema, PROD, are currently associated with the original schema.
Eliminate this association by ending journaling for all tables. If schema TEST does not
exist on the system where schema PROD was restored, then the restore operation will
end the journal association with the original schema. Thus, the ENDJRNPF *ALL
command is not needed when schemas TEST and PROD reside on different systems.
If other objects in the schema such as indexes have been explicitly journaled by the user,
then journaling on those objects would have to be ended and restarted. For example,
journaled indexes would be stopped and restarted with ENDJRNAP and STRJRNAP CL
commands.
6. For each table in PROD schema you have to execute the following CL command:
STRJRNPF PROD/DEPT PROD/QSQJRN IMAGES(*BOTH) OMTJRNE(*OPNCLO)
STRJRNPF PROD/EMPLOYEE PROD/QSQJRN IMAGES(*BOTH) OMTJRNE(*OPNCLO)
Note: If the journal object was altered away from its default settings with the CHGJRN
CL command or iSeries Navigator, those customizations would need to be performed
on the journal object in the PROD schema.
7. Save the newly configured schema to backup media, so that you do not have to do these
configuration steps again.
The new SQL objects reside in a schema. In 4.1.6, “Creating the new DB2 schema on the
iSeries server” on page 40, we covered the differences between a library and a schema.
We have seen that a schema is a repository that contains SQL objects and that a schema
corresponds to a library in the iSeries. In fact, each schema is of type *LIB, that is, a library.
Creating a schema by itself would not make any sense. It is the starting point, a container
destined to be filled with tables, indexes, etc., all the objects that make up your database and
are at the heart of most applications.
But where is all the information about the database itself, the so-called metadata? Well, that
is where the term catalog comes into the picture. Catalogs are automatically created when a
schema is created, and they contain all the relevant information about the database. Each
modification of a table in an SQL schema (that is, creating, renaming, dropping, moving, and
so on) updates the catalog files for that schema.
Note: Metadata is information about information. It serves to describe data, and its use
is not limited to the field of SQL or information technology. Most, if not all, SQL-based
RDBMSs allow the extraction of metadata about their content. For example, metadata is
very important for reverse engineering databases. Database design tools typically use
metadata to display database models.
To summarize, the following can be said: The structure of the database is maintained by the
DBMS in special tables that are called catalogs. The catalogs can be queried by users or
tools to display information about tables, columns, referential integrity constraints, security
rights, and any other information that composes a database.
Suppose the marketing department of a company wants to increase the size of the customer
number column CUSTNO and needs to find out in which tables the column is used. This is
important in order to do an impact analysis of the change. This example further assumes that
there is no single reference file for field definitions, and that the field may be contained in
more than one file, named either CUSTNO or CUSTNR. Using the SQL system catalogs, we
would execute an SQL statement to find out how many files have the customer number field.
Now, let us find out which files need to be changed. Running the SQL statement displays the
desired result: a list of all files that contain a column whose name starts with CUST, sorted by
table name.
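As an illustrative sketch, queries of this kind can be run against the QSYS2.SYSCOLUMNS catalog view; the schema name PROD below is only an assumption for this example:

```sql
-- Count the files that contain a customer number column,
-- which may be named either CUSTNO or CUSTNR (schema name assumed)
SELECT COUNT(DISTINCT TABLE_NAME)
  FROM QSYS2.SYSCOLUMNS
  WHERE COLUMN_NAME IN ('CUSTNO', 'CUSTNR')
    AND TABLE_SCHEMA = 'PROD';

-- List all files containing a column starting with CUST,
-- sorted by table name
SELECT TABLE_NAME, COLUMN_NAME
  FROM QSYS2.SYSCOLUMNS
  WHERE COLUMN_NAME LIKE 'CUST%'
    AND TABLE_SCHEMA = 'PROD'
  ORDER BY TABLE_NAME;
```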
There are two types of partitioning: Hash partitioning and range partitioning. You specify the
type of partitioning with the PARTITION BY clause in the CREATE TABLE statement. In our
example, we create a partitioned table PAYROLL in library PRODLIB with partitioning key
EMPNUM in four partitions.
Hash partitioning distributes rows across a user-specified number of partitions based on a
hash of the specified key columns.
CREATE TABLE PRODLIB.PAYROLL(EMPNUM INT, FIRSTNAME CHAR(15), LASTNAME CHAR(15),
SALARY INT)
PARTITION BY HASH(EMPNUM) INTO 4 PARTITIONS
Range partitioning divides the table based on user-specified ranges of column values.
CREATE TABLE PRODLIB.PAYROLL
(EMPNUM INT, FIRSTNAME CHAR(15), LASTNAME CHAR(15), SALARY INT)
PARTITION BY RANGE(EMPNUM)
(STARTING FROM (MINVALUE) ENDING AT (500) INCLUSIVE,
STARTING FROM (501) ENDING AT (1000) INCLUSIVE,
STARTING FROM (1001) ENDING AT (MAXVALUE))
However, as of V5R3, partitioned table support cannot yet take advantage of the query
optimizer to leverage performance; this improvement will come in a future release.
Partitioned tables should really only be used in V5R3 if you have a table that is approaching
the single-table size limit of 4.2 billion rows or 1.7 terabytes of storage.
For more information on partitioned tables refer to the whitepaper Table partitioning
strategies for DB2 UDB for iSeries, which can be found at:
http://www-1.ibm.com/servers/enable/site/education/abstracts/2c52_abs.html
In this chapter, we move forward in the methodology and we propose the creation of SQL
views and the use of service programs to replace the native I/O access. We cover the steps
required in the second phase or stage proposed in our methodology in “Methodology for the
modernization” on page 25.
The process involves a phased approach to replace native I/O operations with SQL data
access methods. The strategy of using I/O modules is to limit the SQL optimization
knowledge to the database programming group. This will allow the application programmers
to focus on solutions to business requirements without a need to understand the complexities
of database optimization.
The I/O module masks the complexity of the database from the application programmer. For
example, an HLL program may be performing several read operations to multiple files to fill a
subfile. This could be replaced by a single call to an I/O module that performs a single SQL
fetch operation to a join view and returns a single host array (multiple occurrence data
structure in RPG) to the caller.
In addition, the I/O module allows the database programmer to take advantage of database
functions (that is, date and time data types, variable length fields, identity columns, etc.), thus
eliminating many common HLL programming requirements. This includes programming
required to format date and time data, formatting address lines, etc. Figure 5-1 illustrates the
objective of this second stage.
The following parts can make up the I/O process:
SQL table
SQL view
Service program object
I/O module source and object
Service program interface (prototype)
Wrapper program for stored procedures (source and object)
Binding directory object
Binding source language
Stored procedure to access the I/O module
The following are some suggested guidelines for establishing naming conventions:
1. Avoid using the object type as part of the object name. For example, do not use PGM,
MOD, etc. as part of the name.
2. Establish standard abbreviations for the different database functions. There are basically
four: Read, Write, Update, and Delete. Use these abbreviations to prefix the I/O module.
For example, the I/O module that contains the procedures for reading the customer
master table may be named GETCUSTMST. Keep in mind that you are limited to 10
characters.
3. Minimize calls to stored procedures by creating join and summary views and then creating
a single procedure to access these views. For example, CustOrderSummaryByName is a
view that groups on the customer name column and joins the customer master table to the
customer Order Header table.
4. Use long names for SQL stored procedures. GetCustomerName says exactly what the
stored procedure will do. Use the SPECIFIC clause in the CREATE PROCEDURE
statement to control the short name.
5. Use the same name for both external stored procedures and HLL ILE procedure names.
6. Keep application abbreviations to two or three characters.
7. Use binding directories to link modules to programs and/or service programs.
Note: These suggestions are not intended to replace existing standards and conventions.
The following example illustrates a joined view from the ORDERHDR table and the
CUSTOMER table. The view presents a summary of the order amount grouped by the
customer name.
CREATE VIEW ITSO4710.CUSTORDERSUMMARYBYNAME (
CUSTOMER_NAME, TOTAL_ORDER_AMT)
AS
SELECT CUSTOMER_NAME, SUM(ORDER_TOTAL) FROM ITSO4710.ORDERHDR O, ITSO4710.CUSTOMER C
WHERE O.CUSTOMER_NUMBER = C.CUSTOMER_NUMBER
GROUP BY CUSTOMER_NAME ;
Using views in RPG can be very useful, because views are more powerful than (join) logical
files. In SQL views you can use everything that is possible in an SQL SELECT statement,
with the exception of ordering rows.
To solve this problem a logical file keyed with Order Date is used. Example 5-1 shows the
DDS described logical file ORDHDRL1.
Example 5-1 DDS description for the keyed logical file ORDHDRL1
A R ORDHDRF PFILE(ORDHDR)
A K ORHDTE
A K ORHNBR
The following RPG snippet (Example 5-2) shows how the total per year can be calculated and
displayed:
The keyed logical file is read.
The year portion must be extracted from the order date.
The total for all orders in the same year must be added into an extra field.
After all records with the same order year have been read, the result is displayed and the
extra field is cleared.
Example 5-2 RPG program to calculate and display the order totals per year
FOrdHdrL1  IF   E           K DISK
 *----------------------------------------------------------------------------------------
D DspText         S             50A
D FirstRec        S               N                                    First Record
D LastYear        S              4S 0
D YearTotal       S             15P 2
 /Free
   // OrhDte (order date) and OrhTot (order total): field names assumed
   DoU %EOF(OrdHdrL1);
     Read OrdHdrF;
     If %EOF;
       If FirstRec = *On;
         Dsply DspText;
       EndIf;
       Leave;
     EndIf;
     If FirstRec = *On and %SubDt(OrhDte: *Years) <> LastYear;
       Dsply DspText;
       YearTotal = *Zeros;
     EndIf;
     LastYear  = %SubDt(OrhDte: *Years);
     YearTotal += OrhTot;
     DspText   = %Char(LastYear) + ': ' + %Char(YearTotal);
     FirstRec  = *On;
   EndDo;
   Return;
 /End-Free
While logical files can only be used with native I/O, SQL views can be used with all databases
and programming languages.
Example 5-3 shows how a SQL view can be created that summarizes all orders with the
same order year.
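A minimal sketch of such a view follows; the view name ORDTOTYR and the order date column name ORDER_DATE are assumptions for this illustration, while ORDER_TOTAL and ORDERHDR come from the earlier view example:

```sql
CREATE VIEW ITSO4710.ORDTOTYR (ORDYEAR, YEARTOTAL)
AS SELECT YEAR(ORDER_DATE), SUM(ORDER_TOTAL)
   FROM ITSO4710.ORDERHDR
   GROUP BY YEAR(ORDER_DATE);
```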
Example 5-4 on page 62 shows how the RPG code from the previous example can be
reduced by using the new SQL view instead of logical files:
The SQL view is sequentially read.
Example 5-4 Calculate and display the order totals per year by using a SQL view
FOrdTotYr  IF   E             DISK
 *----------------------------------------------------------------------------------------
D DspText         S             50A
 *----------------------------------------------------------------------------------------
 /Free
   SetLL *Start OrdTotYr;
   DoU %EOF(OrdTotYr);
     Read OrdTotYrF;
     If %EOF;
       Leave;
     EndIf;
     // OrdYear and YearTotal: column names of the view assumed
     DspText = %Char(OrdYear) + ': ' + %Char(YearTotal);
     Dsply DspText;
   EndDo;
   Return;
 /End-Free
Note: SQL views are never sorted in a predefined sequence. The query optimizer
determines the access path that will be used to access the data.
With native I/O, SQL views can only be used when no predefined sequence is necessary.
Alternatively, embedded SQL can be used, where an ORDER BY sequence can be specified.
This is basically an iterative process. Begin by creating a view for each program and avoid
creating views over all the columns of the physical files. Also keep in mind that views do not
contain access paths, and thus do not add system maintenance overhead. Review the
column requirements for each program and reuse the views as needed and appropriate.
5.4 Create service programs to access data from the SQL views
Once we have defined and created the required SQL views, the next step is to externalize the
I/O modules. Externalizing the I/O means that all the native I/O operations (for example,
CHAIN, READ, and WRITE) and other database operations are converted or coded into
separate routines, and programs that require I/O make requests to these routines to perform
the operation on their behalf.
Externalizing I/O operations provides one way of helping to ensure that your applications can
adapt quickly and relatively painlessly to changing business needs. Instead of coding a native
READ, CHAIN, etc. at each point in the program where database access is required, you
invoke a routine to perform the I/O for you.
An application may contain more than one service program. A service program is an
Integrated Language Environment® (ILE) object that provides a means of packaging
externally supported callable routines (functions or procedures) into a separate object.
There are basically four main database operations that are candidates to be replaced by I/O
service programs. They are:
Read (GET)
Write
Update
Delete
[Figure: SRVPGM I/O modules: access-method functions (procedures) over a table or view,
with selection by primary key and alternate keys]
By moving business rules into your database, you are assured that those requirements are
enforced across all transactions and, more importantly, all interfaces. In contrast, business
rules implemented in an application are enforced only when that application is used to
change your database. Relying on application-enforced business rules opens up serious data
integrity issues when data corrections are made using tools like SQL and new Web-based
applications.
In this chapter we will explore additional DB2 UDB for iSeries features that will help us move
the business rules into the database.
The First Normal Form (or 1NF) involves removal of redundant data from horizontal rows.
We want to ensure that there is no duplication of data in a given row, and that every column
stores the least amount of information possible (making the field atomic). For example,
normalization eliminates repeating groups by putting each into a separate table and
connecting them with a primary key-foreign key relationship.
The Second Normal Form (or 2NF) deals with redundancy of data in vertical columns.
The Third Normal Form (or 3NF) deals with looking for data in the tables that is not fully
dependent on the primary key, but dependent on another value in the table. This is an ideal
form for OLTP environments.
It is not within the scope of this book to explain the normalization process. The reality is that
many iSeries customers have not taken the time and effort to normalize their databases. In
this stage of the modernization process, it is a good idea to take some time and revisit the
database design.
Support for referential integrity has been available in DB2 UDB for iSeries for many years.
The reality is that many customers have their referential integrity coded in their application
code. In this phase, combined with the normalization of the database, it is a good time to
implement referential integrity using the database. For a more detailed description of how to
implement referential integrity in DB2 UDB for iSeries, refer to the redbook Advanced
Functions and Administration on DB2 Universal Database for iSeries, SG24-4249-03.
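For example, the relationship between the Order Header and Customer tables used earlier in this book could be enforced with a referential constraint along these lines; the constraint name and delete rule are illustrative, and it is assumed that CUSTOMER_NUMBER is the primary key of the CUSTOMER table:

```sql
ALTER TABLE ITSO4710.ORDERHDR
  ADD CONSTRAINT ORDERHDR_CUSTOMER_FK
  FOREIGN KEY (CUSTOMER_NUMBER)
  REFERENCES ITSO4710.CUSTOMER (CUSTOMER_NUMBER)
  ON DELETE RESTRICT ON UPDATE RESTRICT;
```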
6.3 Constraints
DB2 UDB provides several ways to control what data can be stored in a column. These
features are called constraints or rules that the database management enforces on a data
column or set of columns.
For a more detailed description of how to implement and define constraints in DB2 UDB for
iSeries refer to the redbook Advanced Functions and Administration on DB2 Universal
Database for iSeries - SG24-4249-03.
To understand how constraints work with other application design options, you may find it
helpful to consider how you currently apply duplicate key rules. It is common practice in
iSeries shops to define duplicate key rules when a physical file is created. These rules ensure
that no matter what application modifies the file, or what errors or malicious code that
application contains, the database will never be corrupted with duplicate keys. This does not
mean that your applications cease to check for duplicate keys, but it does mean that the
database is protected even if your applications are bypassed or incorrect. Thus the duplicate
key rule, which is a primary key or unique constraint in SQL terms, ensures that the file
contains valid data regardless of how it is updated.
If you apply the same logic you currently use for imposing duplicate key rules to referential
and check constraints, it is much easier to see how you can begin using these database
features. We recommend that you begin defining referential and check constraints for
business rules that are well defined and consistently enforced by your applications. Keep in
mind that a constraint will always be enforced, so you want to avoid imposing constraints if
the existing data contains violations, or if the applications do not conform to the constraint
rules.
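As a sketch, a primary key and a check constraint on the tables used earlier might be defined as follows; the constraint names and the business rule itself are illustrative:

```sql
ALTER TABLE ITSO4710.CUSTOMER
  ADD CONSTRAINT CUSTOMER_PK PRIMARY KEY (CUSTOMER_NUMBER);

ALTER TABLE ITSO4710.ORDERHDR
  ADD CONSTRAINT ORDER_TOTAL_CHK CHECK (ORDER_TOTAL >= 0);
```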
Once you have defined a constraint, the next question is how to deal with constraint violations
in your applications. If you have an application that checks a business rule that is also
enforced by a constraint, it is often best to leave that application code intact and accept the
slight performance penalty incurred by checking twice—once in the constraint and once in the
application code.
If you are writing new applications, consider checking for constraint violations instead of using
data validation techniques, such as chaining to a master file.
Another issue to consider is how easily you can determine which constraint failed. SQL, RPG,
and COBOL all signal constraint violations. However, if multiple constraints are assigned to a
table, as is generally the case, you must retrieve the specifics concerning which constraint
failed using the Receive Program Message API. In addition, constraint violations are reported
as soon as the first violation is encountered. Therefore, if you need to validate an entire panel
of information and report all errors to the user, checking for constraint violations in your
application could be tricky.
Finally, while the duplicate key comparison works well for most constraints, some referential
constraints do more than simply prevent invalid data from being stored in a table. If you define
a constraint that cascades (for example, deleting all order line rows when the corresponding
Order Header row is deleted), you will most likely want to remove any application code that
performs the same function as the constraint.
Even if you decide never to check for constraint violations in your RPG or COBOL
applications, you may still want to impose constraints. Doing so will make your business rules
accessible to applications running on other platforms or written in languages such as Java. It
will protect your data from corruption and it will improve application portability because
constraints are a standard database capability.
There are specific data type and length requirements that must be met in order to use the
encryption column function, because the encrypted version of the data is a binary value and
longer than the original data string. The data type must be one of the following:
BINARY, VARBINARY
CHAR FOR BIT DATA, VARCHAR FOR BIT DATA
BLOB
The length of an encrypted string value is the actual string length plus an extra 8 bytes (or 16,
if BLOB or different CCSID values are used for the input), and must be rounded up to an
8-byte boundary.
In Example 6-1 you see how to set the encryption password with a 3-character hint, and then
how the 6-character employee ID value is encrypted as it is inserted into a DB2 UDB table.
The DECRYPT_CHAR function on the SELECT statement uses the same password to return
the original employee ID value of '112233' to the application.
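The statements described above can be sketched as follows; the table and column names are assumptions, and the column is assumed to be defined as VARCHAR(32) FOR BIT DATA:

```sql
SET ENCRYPTION PASSWORD = 'pay2005' WITH HINT 'pay';

-- Encrypt the employee ID as it is inserted
INSERT INTO SAMPLEDB01.EMPSEC (EMP_ID)
  VALUES (ENCRYPT_RC2('112233'));

-- Decrypt it again with the same password
SELECT DECRYPT_CHAR(EMP_ID, 'pay2005') AS EMP_ID
  FROM SAMPLEDB01.EMPSEC;
```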
On the iSeries server, a validation list object is a good container to safely store the encryption
passwords, because the passwords can be encrypted when they are stored in the list object.
Each validation list entry allows you to store an entry identifier along with an encrypted data
value. The encryption password is stored in the encrypted data value and the list entry
identifier could be assigned the table name or some other value that makes it easy to retrieve
the encryption password for a specific column or row. A set of OS/400 APIs is provided for
application programs to populate a validation list and retrieve values from the list.
That is exactly the functionality that the Identity column attribute and Sequence object (new in
OS/400 V5R3) can provide. Let DB2 UDB handle the key generation and locking/serialization
of that value, so the programmer can concentrate on real business logic.
Using native I/O, the relative record number can be used to access exactly one selected
record. SQL provides a scalar function RRN (file) to determine the relative record number;
however, it is not possible to generate an index over the relative record number.
Note: When using the built-in function RRN (file) in SQL to get access on one specified
relative record number, a table scan is always performed. SQL reads the complete table
and does not even stop if the row with the appropriate record number is found.
To prevent SQL from executing a table scan, a column to hold the unique identifier must be
added. Over this column an index can be built.
When a table has an identity column, the database manager can automatically generate
sequential numeric values for the column as rows are inserted into the table.
Note: Tables containing a primary key with an identity column can be accessed by native
file access. When writing a record or row though native I/O, the identity column value is
generated and inserted.
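A minimal sketch of a table with an identity column follows; the table and column names and the identity attributes are illustrative:

```sql
CREATE TABLE PRODLIB.ORDERLOG (
  LOGID   INT GENERATED ALWAYS AS IDENTITY
          (START WITH 1 INCREMENT BY 1),
  ORDNUM  INT,
  PRIMARY KEY (LOGID));
```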
Sequence object
The sequence object allows automatic generation of values. Unlike an identity column
attribute, which is bound to a specific table, a sequence object is a global and stand-alone
object that can be used by any tables in the same database.
When inserting a row, the sequence number must be determined through NEXT VALUE FOR
SEQUENCE. For example, we insert a row to the ORDERS table using a value from the
sequence object:
INSERT INTO orders(ordnum,custnum) VALUES( NEXT VALUE FOR order_seq, 123 )
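The order_seq sequence referenced above is created once, for example as follows; the attribute values are illustrative:

```sql
CREATE SEQUENCE order_seq
  START WITH 1
  INCREMENT BY 1
  NO CYCLE;
```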
Because the sequence is an independent object and not directly tied to a particular table or
column, it can be used with multiple tables and columns. Because of its independence from
tables, a sequence can be easily changed through the SQL statement ALTER SEQUENCE.
The ALTER SEQUENCE statement only changes the attributes of the sequence object; it
does not change any data.
The internal representation of a ROWID value is transparent to the user. The value is never
subject to CCSID conversion because it is considered to contain BIT data. ROWID columns
contain values of the ROWID data type, which returns a 40-byte VARCHAR value that is not
regularly ascending or descending.
A table can have only one ROWID column. A row ID value can only be assigned to a column,
parameter, or host variable with a ROWID data type. To provide a value for a ROWID column
explicitly, the column must be defined as GENERATED BY DEFAULT, or OVERRIDING
SYSTEM VALUE must be specified. A unique constraint is implicitly added to every table that
has a ROWID column, which guarantees that every ROWID value is unique. A ROWID
operand cannot
directly compared to any data type. To compare the bit representation of a ROWID, first cast
the ROWID to a character string.
Note: In RPG, there is no data type that directly matches with the ROWID data type, but by
using the keyword SQLTYPE in the Definition specifications, host variables can be defined
to hold the ROWID.
The following example shows the definition of a host variable with the SQL Data type ROWID:
D MyRowId S SQLTYPE(ROWID)
If you have an existing OS/400 program that knows how to extract data out of a non-relational
object (such as an IFS stream file, data queue, or S/36 file), the program can be registered as
an external UDTF. Now, SQL programmers can have access to the data in these
non-relational objects simply by invoking the UDTF. A UDTF can be referenced anywhere in
an SQL statement that a table reference is allowed.
Typically, SQL reads an S/36 record as a single text field. In our example, we demonstrate
manipulating S/36 data records by using a UDF and a UDTF. The record layout of the S/36
file, named S36EMP, is shown in Table 6-1.
Field          From   To
EMPLOYEE NO.     1     6
FIRST NAME       7    18
LAST NAME       19    33
BIRTH DATE      34    39
The 6-character birth date field is stored in the format year/month/day (YYMMDD). We create
a UDF, named DMY, to convert this field into a date column with standard *DMY format, as
shown in Example 6-2.
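A minimal SQL sketch of such a conversion function follows; it assumes all birth dates fall in the 1900s and simply builds an ISO date string before casting:

```sql
CREATE FUNCTION SAMPLEDB01.DMY (YMD CHAR(6))
  RETURNS DATE
  LANGUAGE SQL
  DETERMINISTIC
  NO EXTERNAL ACTION
  -- '19' || YY || '-' || MM || '-' || DD, then cast to DATE
  RETURN DATE('19' CONCAT SUBSTR(YMD, 1, 2) CONCAT '-' CONCAT
              SUBSTR(YMD, 3, 2) CONCAT '-' CONCAT SUBSTR(YMD, 5, 2));
```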
Example 6-3 shows you how to create a UDTF, named S36UDTF, which returns rows of a
result from the S/36 data file. The user defined function, DMY, is also used in the SQL
statement to convert the date field.
Example 6-3 User-defined table function to return S/36 data as SQL table
CREATE FUNCTION SAMPLEDB01.S36UDTF (EMPNO VARCHAR(6) )
RETURNS TABLE (
EMP_NO CHAR(6) ,
FIRST_NAME CHAR(20) ,
LAST_NAME CHAR(20) ,
BIRTH_DATE DATE )
LANGUAGE SQL
SPECIFIC SAMPLEDB01.S36UDTF
NOT DETERMINISTIC
READS SQL DATA
CALLED ON NULL INPUT
NO EXTERNAL ACTION
DISALLOW PARALLEL
NOT FENCED
BEGIN
RETURN
SELECT SUBSTR ( F00001 , 1 , 6 ) AS EMP_NO ,
SUBSTR ( F00001 , 7 , 12 ) AS FIRST_NAME ,
SUBSTR ( F00001 , 19 , 15 ) AS LAST_NAME ,
SAMPLEDB01 . DMY( SUBSTR ( F00001 , 34 , 6 ) ) AS BIRTH_DATE
FROM SAMPLEDB01 . S36EMP ;
END ;
Figure 6-1 shows how to execute the SQL statement that invokes the UDTF from the SQL
Scripts window.
SELECT * FROM TABLE(S36UDTF('000010')) AS X;
6.8.2 Datalink
The datalink data type provides a method of linking a row in a DB2 table with non-relational
objects in the form of Uniform Resource Locators (URLs) that are associated with that row of
data. For example, a row in the EMPLOYEE table might use a datalink column to store a
reference to the IFS file containing a photograph of an employee, as shown in Figure 6-2.
The idea of a datalink is that the actual data stored in the column is only a pointer to the
object. This object can be anything—an image file, a voice recording, a text file, and so on.
Using datalink also gives control over the objects while they are in linked status. A datalink
column can be created such that the referenced object cannot be deleted, moved, or
renamed while there is a row in the SQL table that references that object. This object is
considered linked. Once the row containing that reference is deleted, the object is unlinked,
but not deleted.
The maximum length of a datalink must be in the range of 1 through 32717. If FOR MIXED
DATA or a mixed data CCSID is specified, the range is 4 through 32717. The specified length
must be sufficient to contain both the largest expected URL and any datalink comment. If the
length specification is omitted, a length of 200 is assumed.
A DATALINK value is an encapsulated value with a set of built-in scalar functions. The
DLVALUE function creates a DATALINK value. The following functions can be used to extract
attributes from a DATALINK value.
DLCOMMENT
DLLINKTYPE
DLURLCOMPLETE
DLURLPATH
DLURLPATHONLY
DLURLSCHEME
DLURLSERVER
Note: It is not possible to create a host variable with an equivalent data type for DATALINK
in RPG, neither through RPG data types nor by using the keyword SQLTYPE in the
Definition specifications.
For more information on datalinks refer to the redbook DB2 UDB for AS/400 Object Relational
Support, SG24-5409.
The VARCHAR, VARGRAPHIC, and VARBINARY data types have a limit of 32 K bytes of
storage. While this may be sufficient for small to medium size text data, applications often
need to store large text documents. They may also need to store a wide variety of additional
data types such as audio, video, drawings, mixed text and graphics, and images. There are
three data types to store these data objects as strings of up to two gigabytes in size.
The character large object (CLOB) data type can store up to two gigabytes of
variable-length character string. This data type is appropriate for storing text-oriented
information where the amount of information can grow beyond the limits of a regular
VARCHAR data type (upper limit of 32 K bytes). Code page conversion of the information
is supported.
Each table may have a large amount of associated LOB data. Although a single row
containing one or more LOB values cannot exceed 3.5 gigabytes, a table may contain nearly
256 gigabytes of LOB data.
You can refer to and manipulate LOBs using host variables just like any other data type.
However, host variables use the program’s storage, which may not be large enough to hold
LOB values. Other means are necessary to manipulate these large values. Locators are
useful to identify and manipulate a large object value at the database server and for
extracting pieces of the LOB value. File reference variables are useful for physically moving a
large object value (or a large part of it) to and from the client.
The LOB locator is associated with a LOB value, not a row or physical storage location in the
database. Therefore, after selecting a LOB value in a locator, you cannot perform an
operation on the original rows or tables that have any effect on the value referenced by the
locator. The value associated with the locator is valid until the unit of work ends, or the locator
is explicitly freed, whichever comes first. The FREE LOCATOR statement releases a locator
from its associated value. In a similar way, a commit or rollback operation frees all LOB
locators associated with the transaction.
For more information on large objects, refer to the redbook DB2 UDB for AS/400 Object
Relational Support, SG24-5409.
Some other reasons for using embedded SQL in RPG programs on the iSeries are:
Programmers with SQL knowledge can understand RPG programs without learning native
file operations.
The same or a similar SQL code can be embedded in different programming languages.
SQL provides a variety of scalar functions that do not exist in RPG but can easily be
exploited.
Take advantage of SQL scalar functions, user defined functions (UDF), and user defined
table functions (UDTF), which can be used in SQL statements.
Stored procedures can be called by using an SQL CALL statement.
SQL can simplify the program logic when multiple rows are included in an operation, such
as UPDATE or DELETE, or multiple joins are included in a single SQL statement.
SQL provides much more powerful possibilities to join tables, like LEFT OUTER JOIN or
EXCEPTION JOIN.
Column functions allow you to easily calculate totals, averages, minimums, and maximums.
SQL allows you to merge data from several tables by using the UNION statement.
SQL provides additional isolation levels and the SAVEPOINT statement that allows a
partial ROLLBACK.
In this chapter we cover some considerations when using embedded SQL, especially in RPG
programs.
To embed SQL statements in your source code you have to consider the following rules:
Enter your SQL statements in the C specifications. SQL statements can only be used in
classical (fixed-format) RPG style. When you are coding in RPG free format, you have to
end free-format coding using the compiler directive /End-Free before your SQL statement
and restart it with /Free after the end of your SQL statement.
Start your SQL statements using the delimiter /EXEC SQL in positions 7 through 15, with
the slash (/) in position 7.
You can start entering your SQL statements on the same line as the starting delimiter or
on a new line.
In the continuation line put C in position 6 and add a plus sign (+) in position 7 to continue
your statements on any subsequent lines.
SQL code can be inserted between position 8 and 80.
Comments can be added anywhere in the SQL statement, either through an asterisk (*) in
position 7 for the whole row, or through a starting slash followed by an asterisk (/*) and an
ending asterisk followed by a slash (*/) for shorter comments.
Use the ending delimiter /END-EXEC in positions 7 through 15, with the slash (/) in
position 7, to signal the end of your SQL statements.
Between /EXEC SQL and /END-EXEC only one SQL statement can be inserted. It is not
possible to enter several SQL statements delimited by a semicolon (;) as in source files
for RUNSQLSTM. However, you can insert multiple SQL statements, each starting with
/EXEC SQL and ending with /END-EXEC.
Example 7-1 shows an embedded SQL statement to delete rows in the Order Header table
without deleting rows in the Order Details table.
Example 7-1 Deleting Order Header without corresponding Order Details with SQL
C/EXEC SQL
C+ Delete from Order_Header a
C+ where a.Order_Number in
C+ (Select b.Order_Number
C+ from Order_Header b
C+ exception join Order_Detail c
C+ on b.Order_Number = c.Order_Number)
C/END-EXEC
To compile an ILE Object with embedded SQL, an additional preceding step, the
precompilation, is necessary.
The SQL precompile creates an output source file member. By default, the precompile
process creates a temporary source file QSQLTxxxxx in QTEMP, or you can specify the
output source file as a permanent file name on the precompile command. If the precompile
process uses the QTEMP library, the system automatically deletes the file when the job
completes. A member with the same name as the program name is added to the output
source file.
By default, the precompiler calls the host language compiler. When creating a program,
CRTBNDRPG is used; for modules and service programs, CRTRPGMOD is used.
The object type that is created can be determined by the compile option OBJTYPE.
Object type *MODULE generates a module object.
Object type *PGM generates a program object.
Object type *SRVPGM generates a service program object.
Note: There is only one single command to generate programs, service programs, and
modules with embedded SQL in RPG.
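That command is CRTSQLRPGI (Create SQL ILE RPG Object). For example, to precompile and create a service program, it might be invoked as follows; the library, source file, and member names are assumptions:

```
CRTSQLRPGI OBJ(MYLIB/GETCUSTMST) SRCFILE(MYLIB/QRPGLESRC)
           SRCMBR(GETCUSTMST) OBJTYPE(*SRVPGM) COMMIT(*NONE)
```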
Example 7-2 shows the control specification to implement compiler options in the control
specifications (H-Specs).
Compiler options in the control specifications are not overwritten by the compile command.
Note: If you want to use commitment control for native I/O in RPG, you have to specify
the keyword COMMIT in the file definitions.
Note: For fields that are defined in the Definition specifications or that are embedded in
files that are defined in the File specifications, the date and time format and separators
are determined by the keywords DATFMT and TIMFMT used in the definition or control
specifications.
In RPG, compile options can be set in the control specifications by using keywords like
ACTGRP, BNDDIR, ALWNULL, DATFMT, and TIMFMT. SQL does not consider the control
specifications, but with the SET OPTION statement provides an equivalent.
The SET OPTION statement can be embedded anywhere in the source member. Only
one SET OPTION statement is allowed per source member. Even if a source member
consists of several independent procedures, the SET OPTION statement can only be
embedded once, and it is valid for all procedures.
In SQL stored procedures, triggers, and user-defined functions, SET OPTION must be used
to set the compiler options.
Example 7-3 shows the control specifications and a SET OPTION statement embedded in
the same RPG source.
Example 7-3 Coexistence of compiler options in H-Specification and SQL SET OPTION statement
H ActGrp('MYACTGRP')
H BndDir('QC2LE': 'QUSAPIBD': 'MYBNDDIR')
H AlwNull(*UsrCtl)
H DatFmt(*Eur)
*-----------------------------------------------------------------------------------------
C/EXEC SQL
C+ Set Option Commit = *NONE,
C+ CloSQLCsr = *ENDACTGRP,
C+ DatFmt = *ISO,
C+ TimFmt = *ISO
C/End-Exec
RPG has several methods for dealing with errors. These are:
Using the (E) extender in operation codes
Using a monitor group
Defining a *PSSR routine
Registering a Condition handler program
When DB2 UDB for iSeries encounters an error, the SQLCODE returned is negative, and the
first two digits of the SQLSTATE are different from '00', '01', and '02'. If SQL encounters a
warning, but it is a valid condition while processing the SQL statement, the SQLCODE is a
positive number, and the first two digits of the SQLSTATE are '01' (warning condition) or '02'
(no data condition). When the SQL statement is processed successfully, the SQLCODE
returned is 0, and SQLSTATE is '00000'.
An SQL communication area (SQLCA) is a set of variables that may be updated at the end of
the execution of every SQL statement. A program that contains executable SQL statements
may provide one, but no more than one SQLCA (unless a stand-alone SQLCODE or a
stand-alone SQLSTATE variable is used instead), except in Java, where the SQLCA is not
applicable. Instead of using an SQLCA, the GET DIAGNOSTICS statement can be used in all
languages to return codes and other information about the previous SQL statement.
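For example, after an UPDATE statement the number of affected rows can be retrieved with GET DIAGNOSTICS like this (RowsAffected is an assumed numeric host variable):

```
C/EXEC SQL
C+  Get Diagnostics :RowsAffected = ROW_COUNT
C/End-Exec
```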
The SQL precompiler automatically places the SQLCA in the Definition specifications of the
ILE RPG for iSeries program prior to the first Calculation specification, unless a SET OPTION
SQLCA = *NO statement is found. Therefore, it is not necessary to code INCLUDE SQLCA in
the source program.
If a SET OPTION SQLCA = *NO statement is found, the SQL precompiler automatically
places SQLCODE and SQLSTATE variables in the Definition specification. They are defined
as shown in Example 7-4 when the SQLCA is not included.
The SQLCA source statements for ILE RPG for iSeries are shown in Example 7-5.
To write more standardized code, we recommend using the longer field names.
The SQLCODE (SQLCOD) and SQLSTATE (SQLSTT) values are set by the database
manager after each SQL statement is executed. If the SQLCA is used, a program should
check either the SQLCODE or SQLSTATE value to determine whether the last SQL
statement was successful.
7.4.1 SQLCODE
An SQLCODE is a return code. The return code is sent by the database manager after
completion of each SQL statement.
Each SQLCODE that is recognized by a DB2 UDB for iSeries server has a corresponding message in the message file QSQLMSG. The message identifier for any SQLCODE is SQL followed by the code itself; for example, SQLCODE -501 corresponds to message SQL0501.
If the error message text contains variables, the appropriate variable texts are returned in the field SQLERRMC (SQLERM). To get the complete message text, use the Retrieve Message Application Programming Interface (API) QMHRTVM or the CL command RTVMSG (Retrieve Message).
Note: If you use a cursor to read your rows, do not use SQLCODE = *Zeros to detect whether a row was returned. In some cases SQL warnings are returned (SQLCODE between 1 and 99), but the row is nevertheless retrieved. It is better to use SQLCODE <> 100 or SQLSTATE <> '02000' instead.
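A fetch loop can therefore test for the no-data and error conditions like this (a sketch; the cursor CsrOrdH and the host variables are taken from the cursor examples later in this chapter):

```
C/EXEC SQL
C+  Fetch next from CsrOrdH
C+  into :Customer, :Total
C/END-EXEC
C   if SQLState = '02000' or SQLCode < *Zeros
C   leave
C   EndIf
```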
7.4.2 SQLSTATE
SQLSTATE provides application programs with common return codes for success, warning,
and error conditions found among the DB2 Universal Database products. SQLSTATE values
are particularly useful when handling errors in distributed SQL applications. SQLSTATE
values are consistent with the SQLSTATE specifications contained in the SQL 1999
standard.
In SQL functions, SQL procedures, SQL triggers, and embedded applications other than
Java, SQLSTATE values are returned in the following way:
The last five bytes of the SQLCA
A stand-alone SQLSTATE variable
The GET DIAGNOSTICS statement
The class code of an SQLSTATE value indicates whether the SQL statement was executed
successfully (class codes 00 and 01) or unsuccessfully (all other class codes).
Note: SQLCODE is the original way in which DB2 reports error and warning conditions,
but the SQL standard standardizes the SQLSTATE; that is why SQLSTATE should be
preferred.
For more information on error handling in SQL refer to the redbook Stored Procedures,
Triggers and User Defined Functions on DB2 Universal Database for iSeries, SG24-6503.
SQL allows you to embed program variables, called host variables, in SQL statements. A host variable in an SQL statement must be identified by a preceding colon (:).
Example 7-6 shows how the delivery date in the Order Header table can be updated with the
current date for a range of order numbers (’00005’–’00010’).
C/EXEC SQL
C+ Update Order_Header
C+ set Order_Delivery = Current Date
C+ where Order_Number between :StartOrderNo and :EndOrderNo
C/End-Exec
C Return
In Example 7-7 the customer number and the order total for order number ’00005’ are
returned into the host variables Customer and Total.
C/EXEC SQL
C+ Select Customer_Number, Order_Total
C+ into :Customer, :Total
C+ from Order_Header
C+ where Order_Number = :OrderNo
C/End-Exec
C Return
Example 7-8 shows how the total amount can be raised by using a host variable.
C/EXEC SQL
C+ Select Customer_Number, Order_Total, Order_Total + :Raise
C+ into :Customer, :Total, :NewTotal
C+ from Order_Header
C+ where Order_Number = :OrderNo
C/End-Exec
C Return
C/EXEC SQL
C+ Update Order_Header
C+ set Order_Total = Order_Total + :Raise
C+ where Order_Number = :OrderNo
C/End-Exec
C Return
Example 7-10 Using host variables in the values clause of an insert statement
D OrderNo S 5A
D Customer S 5A
*-----------------------------------------------------------------------------------------
C eval OrderNo = '10010'
C eval Customer = '00010'
C/EXEC SQL
C+ Insert into Order_Header
C+ (Order_Number,
C+ Customer_Number)
C+ values (:OrderNo,
C+ :Customer)
C/End-Exec
C Return
Example 7-11 shows how a stored procedure can be called using host variables as
parameters.
C/EXEC SQL
C+ CALL calcTotals(:OrderNo, :Customer)
C/End-Exec
C Return
Host variables are commonly used in SQL statements as a receiving area for column values.
Host structures can be used in a SELECT ... INTO or FETCH clause. The INTO clause names one or more host variables that you want to contain column values returned by SQL.
Note: When host variables are used, a separate pointer must be passed for each variable. When a host structure is used, only one pointer is passed. That can be a performance gain.
Example 7-12 shows how an external data structure can be used to receive the complete row.
C/EXEC SQL
C+ Select *
C+ into :DSOrdHdr
C+ from Order_Header
C+ where Order_Number = '00020'
C/End-Exec
C Return
Blocked FETCH and blocked INSERT are the only SQL statements that allow an array data structure. A host variable reference with a subscript, like MyStructure(index).MySubfield, is not supported by SQL.
Note: Blocked processing brings performance advantages, because only a single pointer must be passed for a group of rows.
D OrderNo S 5A
D Elements S 3U 0 inz(%Elem(DSOrderHeader))
D Index S 3U 0
*-----------------------------------------------------------------------------------------
C/EXEC SQL
C+ Declare CsrOrdH Cursor for
C+ Select *
C+ from Order_Header
C+ where Order_Number between '00005' and '00050'
C+ for read only
C+ optimize for 100 rows
C/End-Exec
C/EXEC SQL
C+ Fetch next from CsrOrdH
C+ for :Elements rows
C+ into :DSOrderHeader
C/END-EXEC
/Free
For Index = 1 to %Elem(DSOrderHeader);
Dsply DSOrderHeader(Index).OrHNbr;
EndFor;
Return;
/End-Free
D OrderNo S 5A inz('10040')
D Elements S 3U 0 inz(%Elem(DSOrderHeader))
D Index S 3U 0
*-----------------------------------------------------------------------------------------
/Free
clear DSOrderHeader;
For Index = 1 to Elements;
OrderNo = %EditC(%Dec(%Int(OrderNo) + 10: 5: 0): 'X');
DSOrderHeader(Index).OrHNbr = OrderNo;
DSOrderHeader(Index).CusNbr = '00010';
DSOrderHeader(Index).OrHDte = %Date();
EndFor;
/End-Free
C/EXEC SQL
C+ Insert into Order_Header
C+ :Elements Rows
C+ values (:DSOrderHeader)
C/End-Exec
C Return
Currently RPG provides about 75 built-in functions, while SQL has about 120 scalar functions. In some domains SQL delivers additional or better functions than RPG does, and vice versa. The following list contains functions that may be useful, but are not available in RPG:
String functions
– UPPER/LOWER: Converts a string to uppercase or lowercase
– HEX: Returns the hexadecimal representation of a string
– REPEAT: Returns a string composed of an expression repeated an integer number of times
– REPLACE: Replaces all occurrences of a search string with a new string
– LEFT / RIGHT: Returns the leftmost or rightmost characters of a string
– SOUNDEX: Compares strings phonetically
– DIFFERENCE: Returns a value from 0 to 4 representing the difference between the
sounds of two strings based on applying the SOUNDEX function to the strings
– SPACE: Returns a number of SBCS *BLANKS
Note: While the SQL scalar function REPLACE replaces all occurrences of a search string, the RPG built-in function %REPLACE does not search at all: it replaces characters at a given position. %REPLACE is therefore an equivalent to the SQL scalar function INSERT.
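As a sketch of the phonetic functions (the Customer table and the Customer_Name column are assumptions, not part of the sample schema used elsewhere in this chapter), SOUNDEX can find similar-sounding names:

```
Select Customer_Name
  from Customer
 where Soundex(Customer_Name) = Soundex('Meyer')
```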
For more information about SQL scalar functions, see the DB2 Universal Database for iSeries SQL Reference manual, which can be found in the Information Center at:
http://publib.boulder.ibm.com/infocenter/iseries/v5r3/ic2924/info/db2/rbafzmst.pdf
Example 7-15 shows how the scalar function REPLACE can be used to remove characters from a string.
Example 7-15 Using the scalar function REPLACE to remove characters from a string
D MyText S 50A inz('ABC-XYZ-1234-567890-A')
*-----------------------------------------------------------------------------------------
C/EXEC SQL
C+ Set :MyText = Replace(:MyText, '-', '')
C/End-Exec
C MyText Dsply
C Return
Example 7-16 shows how the scalar function trim can be used to remove leading asterisks
(*).
Example 7-16 Using the scalar function TRIM to remove leading characters
D MyText2 S 50A inz('********123.45')
*-----------------------------------------------------------------------------------------
C/EXEC SQL
C+ Set :MyText2 = Trim(leading '*' from :MyText2)
C/End-Exec
C MyText2 Dsply
With static and dynamic SQL you can embed SQL statements in your source code.
In static SQL, the statement is determined at compile time. All SQL scalar functions can be used in the embedded SQL statements. You can integrate host variables, which are set at runtime. The syntax is checked by the precompiler, and the SQL statements are then replaced by the appropriate function calls.
C TotalDetail Dsply
C Return
In contrast to the SELECT ... INTO statement, SQLCODE and SQLSTATE cannot be used to check whether a row was found. If no row is found, NULL values are returned by default. You either have to use indicator variables to detect NULL values or an SQL scalar function like COALESCE that converts the NULL value into a default.
If the result consists of more than one row, SQLCODE -811 is returned, but in contrast to the SELECT ... INTO statement, the host variables are not updated.
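A minimal sketch of the COALESCE approach, using the order tables from the earlier examples (Total and OrderNo are assumed host variables), converts the NULL into a default of zero:

```
C/EXEC SQL
C+  Set :Total = Coalesce((Select Order_Total
C+                           from Order_Header
C+                          where Order_Number = :OrderNo), 0)
C/End-Exec
```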
The following example shows how the total amount of an order is calculated and returned
within a SET-Statement.
C TotalDetail Dsply
C Return
There are also situations where you have to insert and delete a number of rows, for example, when you reorganize your tables: you write rows to history tables and delete the original rows afterwards.
In Example 7-20 on page 96, written with native I/O, all Order Header and corresponding Order Detail rows with an order date in the previous year are saved in history files and then deleted.
The logical file over the Order Header file is described in Example 5-1 on page 60. The Order
Header history file ORDHDRH is created via CRTDUPOBJ from the Order Header File
(ORDHDR) described in Example 4-2 on page 36.
Example 7-19 on page 96 shows the DDS definition of the Order Detail file ORDDTL.
A UNIQUE
A R ORDDTLF
A ORHNBR 5 COLHDG('ORDER NUMBER ')
A PRDNBR 5 COLHDG('PRODUCT NUMBER ')
A ORDQTY 5P 0 COLHDG('ORDER DTL QUANTITY')
A ORDTOT 9P 2 COLHDG('ORDERDTL TOTAL ')
*
A K ORHNBR
A K PRDNBR
The Order Detail history file ORDDTLH is created via CRTDUPOBJ from the Order Detail File
ORDDTL.
Example 7-20 Write history files for Order Header and Order Detail with native I/O
FOrdHdrL1 UF E K DISK Rename(OrdHdrF: OrdHdrF1)
FOrdDtl UF E K DISK
FOrdHdrH O E DISK Rename(OrdHdrF: OrdHdrHF)
FOrdDtlH O E DISK Rename(OrdDtlF : OrdDtlHF)
*-----------------------------------------------------------------------------------------
D PrevYear S 4P 0 Previous Year
/Free
  PrevYear = %SubDt(%Date(): *Years) - 1;
  DoU %EOF(OrdHdrL1);
    Read OrdHdrF1;
    If %EOF or %SubDt(OrHDte: *Years) > PrevYear;
      leave;
    EndIf;
    // Order Detail
    KeyOrdDtl.OrHNbr = OrHNbr;
    clear KeyOrdDtl.PrdNbr;
    SetLL %KDS(KeyOrdDtl) OrdDtlF;
    DoU %EOF(OrdDtl);
      ReadE %KDS(KeyOrdDtl: 1) OrdDtlF;
      If %EOF;
        leave;
      EndIf;
      // copy the detail row to the history file, then remove the original
      Write OrdDtlHF;
      Delete OrdDtlF;
    EndDo;
    // copy the header row to the history file, then remove the original
    Write OrdHdrHF;
    Delete OrdHdrF1;
  EndDo;
  Return;
/End-Free
Example 7-21 Write history files for Order Header and Order Detail with embedded SQL
D PrevYear S 4P 0 Previous Year
*-----------------------------------------------------------------------------------------
C eval PrevYear = %SubDt(%Date(): *Years) - 1
In static SQL you can embed almost all SQL statements that can be executed in interactive
SQL or through iSeries Navigator.
The following example shows how a summary table over the Order Header, Order Detail, and Stock tables is created, containing the accumulated amounts by customer. Before the new summary table is created, an existing one is deleted.
C/EXEC SQL
C+ create table ITSO4710/Summary
C+ as (select year(current_date)-1 as fiscal_year, customer_number,
C+ sum(orderDtl_Quantity * Product_price) as amount
C+ from Order_header h
C+ join Order_detail d
C+ on h.Order_Number = d.Order_Number
C+ join Stock s
C+ on d.Product_Number = s.Product_Number
C+ where year(Order_Date) = year(current_date) - 1
C+ group by Customer_Number)
C+ with data
C/End-Exec
C EndIf
C Return
Note: Host variables can be used neither in the DROP TABLE nor in the CREATE TABLE statement. If variable table names are needed, you have to use dynamic SQL.
Using a cursor is comparable to native I/O with single-record access.
Note: In your source, the DECLARE statement must always be positioned before the corresponding OPEN, FETCH, and CLOSE statements. This is independent of the order in which these statements are executed. Putting the DECLARE statement into the initialization subroutine (*INZSR), which is coded at the end of the source, causes a compile error.
OPEN statement to open the cursor for use within the program. The cursor must be
opened before any rows can be retrieved.
The SQL OPEN statement can be compared with a user-controlled open of the table and
an additional SETLL statement to position the pointer before the first row.
A FETCH statement to retrieve rows from the cursor's result table or to position the cursor
on another row.
Note: Contrary to RPG, more than one row can be received in one FETCH statement by using host structure arrays. The next FETCH then receives the next or previous block of rows.
CLOSE statement to close the cursor for use within the program.
Note: When using a serial cursor, an OPEN without a preceding CLOSE will not
reposition the cursor on the top of the result table. To be sure that the cursor is really
closed, execute a CLOSE statement before your OPEN statement.
The general form of the DECLARE statement for embedded static SQL is shown below. All parts written in parentheses can be omitted. For more information, see the DB2 Universal Database for iSeries SQL Reference.
DECLARE Cursor Name (DYNAMIC (SCROLL)) CURSOR (WITH HOLD)
FOR Select Statement
(FOR READ ONLY/FOR FETCH ONLY)
(FOR UPDATE (OF Column1, Column2,.....ColumnN))
(OPTIMIZE FOR n ROWS)
(WITH Isolation Level)
Cursor name
Any name can be specified for the cursor, but a cursor name must be unique in the source
member where it is defined.
Note: Even if the source member contains several independent procedures, the cursor
name must be unique.
– SCROLL
Specifies that the cursor is scrollable. The cursor may or may not have immediate
sensitivity to inserts, updates, and deletes done by other activation groups.
If DYNAMIC is not specified, the cursor is read-only, which means that the SELECT
statement cannot contain a FOR UPDATE clause.
• DYNAMIC SCROLL
Specifies that the cursor is updatable if the result table is updatable, and that the
cursor will usually have immediate sensitivity to inserts, updates, and deletes done
by other application processes.
In Example 7-24 a scroll cursor is defined. All columns can be changed by a SQL
UPDATE statement.
Note: The cursor is only closed when the SQL statements COMMIT or ROLLBACK
are executed. The cursor remains open when the RPG Operation Codes COMMIT
or ROLBK are used instead.
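Such a declaration might look like the following sketch (using the order tables from the earlier examples):

```
C/EXEC SQL
C+  Declare CsrOrdH Dynamic Scroll Cursor for
C+  Select Order_Delivery, Order_Total
C+    from Order_Header
C+  For Update of Order_Delivery, Order_Total
C/End-Exec
```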
100 Modernizing IBM Eserver iSeries Application Data Acess - A Roadmap Cornerstone
In Example 7-23 on page 99 and Example 7-24 on page 100, cursors are declared without HOLD.
– WITH HOLD
This prevents the cursor from being closed as a consequence of a SQL commit or
rollback operation.
When WITH HOLD is specified, a commit operation commits all the changes in the
current unit of work, and releases all locks except those that are required to maintain
the cursor position.
In Example 7-25 a serial cursor WITH HOLD is defined.
SELECT STATEMENT
Specifies the SELECT statement of the cursor that can contain references to host
variables.
If dynamic SQL is used, the statement name that is defined by the PREPARE statement
must be used.
– FOR READ ONLY clause
The FOR READ ONLY or FOR FETCH ONLY clause indicates that the result table is
read-only and therefore the cursor cannot be used for positioned UPDATE and
DELETE statements.
Some result tables are read-only by nature (for example, a table based on a read-only
view or when tables are joined). FOR READ ONLY can still be specified for such
tables, but the specification has no effect.
For result tables in which updates and deletes are allowed, specifying FOR READ
ONLY can possibly improve the performance of FETCH operations by allowing the
database manager to do blocking and avoid exclusive locks.
Example 7-26 shows a cursor that is read only.
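A read-only declaration follows the same pattern as the earlier cursors; a sketch:

```
C/EXEC SQL
C+  Declare CsrOrdH Cursor for
C+  Select Customer_Number, Order_Total
C+    from Order_Header
C+  For Read Only
C/End-Exec
```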
Example 7-27 Declaring a serial cursor with specified FOR UPDATE OF clause
C/EXEC SQL
C+ Declare CsrOrdH Cursor WITH HOLD for
C+ Select Order_Total
C+ from Order_Header
C+ where Order_Number between :FirstOrderNo and :LastOrderNo
C+ and Year(Order_Date) = :PrevYear
C+ For Update of Order_Delivery, Order_Total
C/End-Exec
Note: In RPG, you define in the File specifications whether a file is opened as an update or input file. If a file is defined as an update file, the row is locked as soon as it is read. If you work with commitment control, a record is not unlocked by the operation code UNLOCK or the (N) extender. The record is unlocked as soon as a COMMIT or ROLBK (rollback) operation is executed.
Any number of rows can be fetched, but performance can possibly degrade after
integer fetches.
The value of integer must be a positive integer (not zero).
In Example 7-28 a scroll cursor is defined with the OPTIMIZE FOR clause in the
SELECT statement.
The cursor name must identify a cursor defined in a DECLARE CURSOR statement.
When the FETCH statement is executed, the cursor must be in the open state.
INTO host variable
This identifies one or more host structures or host variables. In the operational form of
INTO, a host structure is replaced by a reference to each of its variables. The first value in
the result row is assigned to the first host variable in the list, the second value to the
second host variable, and so on.
Example 7-31 shows the DECLARE statement and the appropriate OPEN and FETCH statements.
C/EXEC SQL
C+ Fetch next from CsrOrdH
C+ into :CustomerNo, :TotalCustomer
C/END-EXEC
To fetch multiple rows at a time, you must specify how many rows you want to receive.
FOR k ROWS
This evaluates the host variable or integer to an integral value k. If a host variable is
specified, it must be a numeric host variable with zero scale and it must not include an
indicator variable. The value of k must be in the range of 1 to 32767. The cursor is
positioned on the row specified by the orientation keyword (for example, NEXT), and that
row is fetched. Then the next k-1 rows are fetched (moving forward in the table), until the
end of the cursor is reached. After the fetch operation, the cursor is positioned on the last
row fetched.
The maximum value of fetched rows (32 767) matches with the maximum elements an
array data structure or a multi-occurrence data structure in RPG can have.
When a multiple-row-fetch is successfully executed, three statement information items are
available in the SQL Diagnostics Area (or the SQLCA):
– ROW_COUNT (or SQLERRD(3) of the SQLCA) shows the number of rows retrieved.
C/EXEC SQL
C+ Fetch next from CsrOrdH
C+ for :Elements rows
C+ into :DSOrderHeader
C/END-EXEC
The name of the cursor to be closed must be specified in the CLOSE statement.
Example 7-33 shows the CLOSE statement for the cursor CsrOrdH (defined in Example 7-32).
An implicit close of the cursor is performed at the end of the activation group or the module, depending on the option CLOSQLCSR that is set either on the compile command or in the SQL SET OPTION statement.
Note: When using serial cursors, a cursor must be closed before it can be reopened. If the
cursor is OPEN, a new OPEN will not be executed.
7.8.4 Types of cursors
The type of cursor determines the positioning methods that can be used with the cursor.
Serial cursor
A serial cursor is one defined without the SCROLL keyword in the DECLARE statement.
For a serial cursor, each row of the result table can be fetched only once per OPEN of the
cursor. When the cursor is opened, it is positioned before the first row in the result table.
When a FETCH is issued, the cursor is moved to the next row in the result table. That row is
then the current row. If host variables are specified (with the INTO clause on the FETCH
statement), SQL moves the current row's contents into your program's host variables.
This sequence is repeated each time a FETCH statement is issued until the end-of-data
(SQLCODE = 100 or SQLSTATE='02000') is reached. When you reach the end-of-data,
close the cursor.
Note: You cannot access any rows in the result table after you reach the end-of-data. To
use a serial cursor again, you must first close the cursor and then re-issue the OPEN
statement. You can never back up using a serial cursor. To be sure that the cursor was
really closed before you open it, execute the CLOSE statement before the OPEN
statement.
A serial cursor can be compared with the RPG cycle definitions IP (Input Primary) or UP
(Update Primary) in the File specifications.
Scroll cursor
For a scrollable cursor, the rows of the result table can be fetched many times. The cursor is
moved through the result table based on the position option specified on the FETCH
statement. When the cursor is opened, it is positioned before the first row in the result table.
When a FETCH is issued, the cursor is positioned to the row in the result table that is
specified by the position option. That row is then the current row. If host variables are
specified (with the INTO clause on the FETCH statement), SQL moves the current row's
contents into your program's host variables. Host variables cannot be specified for the
BEFORE and AFTER position options.
This sequence is repeated each time a FETCH statement is issued. The cursor does not
need to be closed when an end-of-data or beginning-of-data condition occurs. The position
options enable the program to continue fetching rows from the table. The following scroll
options are used to position the cursor when issuing a FETCH statement. These positions are
relative to the current cursor location in the result table.
NEXT Positions the cursor on the next row of the result table relative to the
current cursor position. NEXT is the default if no other cursor
orientation is specified.
PRIOR Positions the cursor on the previous row of the result table relative to
the current cursor position.
FIRST Positions the cursor on the first row of the result table.
LAST Positions the cursor on the last row of the result table.
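For instance, the cursor can be moved backward one row like this (a sketch using the cursor and host variables from the earlier examples):

```
C/EXEC SQL
C+  Fetch Prior from CsrOrdH
C+  into :CustomerNo, :TotalCustomer
C/END-EXEC
```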
Note: A scroll cursor can only be moved by a number of rows, or be positioned at the end
or beginning of the result table, but it is not possible to position by key. That means there is
no equivalent for a SETGT / READ or SETLL / READ in SQL. To achieve this functionality
you either have to DECLARE an additional cursor or you have to change the WHERE
clause in your DECLARE statement, and CLOSE and OPEN the cursor again.
Note: If you do not add WHERE CURRENT OF CursorName, all rows are updated or
deleted.
In Example 7-34 a serial cursor is defined to read all Order_Header rows with an order date in the previous year. If the order date is in the months between January and March, the row is deleted; otherwise the order total is raised by 10.
Example 7-34 Using the WHERE CURRENT OF clause to update and delete rows
D PrevYear S 4P 0
D DSCsrOrdH DS
D OrderDate D
D OrderTotal 11P 2
*-----------------------------------------------------------------------------------------
C eval PrevYear = %SubDt(%Date(): *Years) - 1
C/EXEC SQL
C+ Declare CsrOrdH Cursor WITH HOLD
C+ For Select Order_Date, Order_Total
C+ from Order_Header
C+ where Year(Order_Date) = :PrevYear
C+ For Update of Order_Delivery, Order_Total
C/End-Exec
C if SQLState = '02000' or SQLCode < *Zeros
C leave
C EndIf
C/EXEC SQL
C+ update Order_Header
C+ Set Order_Total = :OrderTotal,
C+ Order_Delivery = Current Date - 1 Year
C+ where Current of CsrOrdH
C/END-EXEC
C EndIf
C EndDo
C Return
Because the string is created at runtime, host variables are not necessary and cannot be used; values can be integrated directly into the string. But there are some situations where you want to use variables. In these cases you can use parameter markers (?), which are set in the EXECUTE or OPEN statement.
To convert the character string containing the SQL statement to an executable SQL
statement one of the following steps is necessary:
EXECUTE IMMEDIATE:
A string is converted to an SQL statement and executed immediately. This statement can
only be used if no cursor is needed.
PREPARE and EXECUTE:
A string is converted and later executed. Variables can be embedded as parameter
markers and be replaced in the EXECUTE statement. EXECUTE can only be used if no
cursor is needed.
PREPARE and DECLARE CURSOR:
A string is converted, and the converted SQL statement is used to DECLARE a cursor. As in static SQL, either a serial or a scroll cursor can be used.
If you use a variable SELECT list, an SQL descriptor area (SQLDA) is required in which the returned variables are described.
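A minimal EXECUTE IMMEDIATE sketch (MyString is an assumed varying character variable, and the table name is an assumption):

```
/Free
  MyString = 'Drop Table ITSO4710/SUMMARY';
/End-Free
C/EXEC SQL
C+  Execute Immediate :MyString
C/End-Exec
```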
In Example 7-35, the Order Header table with order date from the previous year and the
appropriate Order Detail rows are saved in history tables. The names of the history tables are
dynamically built. The year of the stored order data is part of the table name. In the create
table statement the table is not only built but filled with the appropriate data.
Order Header and Order Detail rows are deleted by using static SQL.
C/EXEC SQL
C+ Delete from ORDDTL d
C+ Where d.OrHNbr in (select h.OrHNbr
C+ from ORDHDR h
C+ where year(h.OrHDte) = :PrevYear)
C/END-EXEC
C/EXEC SQL
C+ Delete from ORDHDR h
C+ Where year(h.OrHDte) = :PrevYear
C/END-EXEC
C Return
When the EXECUTE IMMEDIATE statement is used instead, the prepare step is performed every time the statement is executed.
Parameter markers
Although a statement string cannot include references to host variables, it may include
parameter markers. These can be replaced by the values of host variables when the
prepared statement is executed. A parameter marker is a question mark (?) that is used
where a host variable could be used if the statement string were a static SQL statement.
The text below illustrates the parts of the PREPARE statement that are necessary to convert a character string into an executable SQL statement. For more information, see the DB2 Universal Database for iSeries SQL Reference.
PREPARE statement-name
FROM host variable
Statement name
This specifies the name of the SQL statement. The statement name must be unique in your source member.
Host variable
This specifies the character variable that contains the SQL string.
Example 7-36 Dynamic SQL without cursor by using PREPARE and EXECUTE statements
D KeyOrHDte S like(OrHDte)
D NextMonth S like(OrHDte)
D DsFile DS
D MyFile 10A
D 3A overlay(MyFile) inz('ORH')
D FileYear 4S 0 overlay(MyFile: *Next)
D FileMonth 2S 0 overlay(MyFile: *Next)
/Free
KeyOrHDte = %Date(%Char(FileYear) + '-'
+ %EditC(FileMonth: 'X') + '-01');
NextMonth = KeyOrHDte + %Months(1);
DoU %EOF(ORDHDRL1);
Read OrdHdrF1;
If %EOF or OrhDte >= NextMonth;
leave;
endif;
/End-Free
C/EXEC SQL
C+ Execute MyDynSQL
c+ using :OrHNbr, :CusNbr, :OrHDte, :OrHDly, :SrNbr, :OrHTot
C/End-Exec
/Free
EndDo;
Return;
/End-Free
The DECLARE statement defines the SQL cursor by using the executable SQL string instead of a SELECT statement.
The text below shows the parts of the PREPARE statement that are necessary to convert a character string into an executable SQL statement. For more information, see the DB2 Universal Database for iSeries SQL Reference.
PREPARE statement-name
FROM host variable
Statement name
This specifies the name of the SQL statement. The statement name must be unique in
your source member.
Host variable
This specifies the character variable that contains the SQL string. The character string can
contain the FOR READ/FETCH ONLY clause, the OPTIMIZE clause, the UPDATE
clause, and the WITH Isolation Level clause.
The text below shows the DECLARE CURSOR statement, and how it can be used with
dynamic SQL.
DECLARE Cursor Name (DYNAMIC (SCROLL)) CURSOR (WITH HOLD)
FOR Prepared SQL Statement
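The PREPARE step itself is a single statement; for example (MySQLStm and MyString follow the naming used in Example 7-37):

```
C/EXEC SQL
C+  Prepare MySQLStm from :MyString
C/End-Exec
```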
In Example 7-37 a SELECT statement that includes a date conversion is dynamically built.
The SQL PREPARE and DECLARE statements are executed.
D Customer S 5A
D OrderTotal S 11P 2
D MyString S 256A varying
D DspText S 50A
*-----------------------------------------------------------------------------------------
/Free
FirstDate = D'2004-06-01';
C/EXEC SQL
C+ Declare CsrOrdH Cursor WITH HOLD for MySQLStm
C/End-Exec
C/EXEC SQL
C+ Fetch next from CsrOrdH
C+ into :Customer, :OrderTotal
C/END-EXEC
C EndDo
C Return
An INCLUDE SQLDA statement can be specified at the end of the Definition specifications in
an ILE RPG for iSeries program.
Example 7-38 shows how the INCLUDE statement can be integrated. SQL_NUM is used in
the SQLDA. It must be defined as a numeric constant and include the maximum number of
selected variables.
Example 7-39 on page 115 shows the SQLDA as it will be embedded by the precompiler in the RPG source member.
Example 7-39 SQL descriptor area
D* SQL Descriptor area
D SQLDA DS
D SQLDAID 1 8A
D SQLDABC 9 12B 0
D SQLN 13 14B 0
D SQLD 15 16B 0
D SQL_VAR 80A DIM(SQL_NUM)
D 17 18B 0
D 19 20B 0
D 21 32A
D 33 48*
D 49 64*
D 65 66B 0
D 67 96A
D*
D SQLVAR DS
D SQLTYPE 1 2B 0
D SQLLEN 3 4B 0
D SQLRES 5 16A
D SQLDATA 17 32*
D SQLIND 33 48*
D SQLNAMELEN 49 50B 0
D SQLNAME 51 80A
D* End of SQLDA
D* Extended SQLDA
D SQLVAR2 DS
D SQLLONGL 1 4B 0
D SQLRSVDL 5 32A
D SQLDATAL 33 48*
D SQLTNAMELN 49 50B 0
D SQLTNAME 51 80A
D* End of Extended SQLDA
In the final stage of the proposed methodology, we recommend the use of stored procedures, triggers, and user-defined functions to complement and enhance database access.
In this chapter we discuss how triggers, stored procedures, and user defined functions can
assist you in moving more of the business logic into the database.
If you must change the business rules in your database environment, you only need to
update or rewrite the triggers. No change is needed to the applications (they transparently
comply with the new rules).
Code reusability
Functions implemented at the database level are automatically available to all applications
using that database. You do not need to replicate those functions throughout the different
applications.
Easier client/server application development
Client/server applications take advantage of triggers. In a client/server environment,
triggers may provide a way to split the application logic between the client and the server
system. In addition, client applications do not need specific code to activate the logic at the
server side. Application performance may also benefit from this implementation by
reducing data traffic across communication lines.
A maximum of 300 triggers can be added to a single table. The trigger program called can be
the same for each trigger, or a different program for each trigger. If more than one trigger is
defined for a single event, activation time, or column, the triggers are executed in the
sequence in which they were created, so the last created trigger is executed last. This must
be considered if conflicting triggers are defined.
Trigger programs can activate additional trigger programs by making database changes
through inserts, updates, and deletes to other files (trigger cascading).
When a trigger event occurs, the database manager calls either QDBPUT (for insert triggers)
or QDBUDR (for update or delete triggers). These programs start the proper trigger programs.
The QDBPUT and QDBUDR programs and the triggers are integrated in the call stack. After
the trigger programs have executed, control returns to the running application program.
Because the triggers are embedded in the call stack, they can run under the same
commitment level, assuming that the activation group of the trigger programs is *CALLER or
that commitment control (STRCMTCTL) is started with commitment definition scope
(CMTSCOPE) *JOB.
Trigger programs cannot return any parameter values to their caller. If a failure occurs, an
escape message must be sent to the caller. In SQL triggers, a SIGNAL statement will do the
job. When an escape message is sent, all call stack entries between the sender and the
program the message is sent to are ended and removed from the call stack.
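As a sketch, the SIGNAL statement inside an SQL trigger body might look like this (the SQLSTATE value and message text are illustrative, not taken from the book's examples):

```sql
-- abort the triggering operation and report the error to the caller;
-- SQLSTATE '75001' and the message text are arbitrary examples
SIGNAL SQLSTATE '75001'
   SET MESSAGE_TEXT = 'Order quantity must not be negative';
```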
Example 8-1 Before insert trigger to update insert time and user
CREATE TRIGGER ITSO4710/MYTRGTABLE01
BEFORE INSERT ON MYTRGTABLE
REFERENCING NEW Ins
FOR EACH ROW
MODE DB2ROW
BEGIN ATOMIC
set Ins.InsertTime = Current_Timestamp;
set Ins.InsertUser = User;
END;
Example 8-2 Before insert trigger to build a serial number depending on a key
Create Trigger ITSO4710/MyTrgTable02
BEFORE INSERT on ITSO4710/MyTrgTable
Referencing NEW as INS
For Each Row
Mode DB2ROW
Select Coalesce(Max(a.CurrNbr) + 1 , 1)
into INS.CurrNbr
from ITSO4710/MyTrgTable a
Where a.Text = INS.Text;
*AFTER
The trigger program is called after the change operation on the specified table.
As a part of an application, AFTER triggers always see the database in a consistent state.
They can be used to execute additional actions in the database, like updating or deleting
other tables, for example, writing transaction data, history tables, or summary tables.
Further, they can be used to perform actions outside the database, for example, printing
invoices or sending e-mails.
Note: AFTER triggers are run after the integrity constraints that may be violated by the
triggering SQL operation have been checked.
BEFORE triggers are activated before integrity constraints are checked and may be
violated by the trigger event.
Example 8-3 on page 121 shows an AFTER UPDATE trigger that automatically updates the
reservations if the order quantity changes.
Example 8-3 After update trigger
Create Trigger ITSO4710/UpdateReservation01
AFTER UPDATE of OrderDtl_Quantity on ITSO4710/ORDER_DETAIL
Referencing OLD as O
NEW as N
For Each Row
Mode DB2ROW
When (O.OrderDtl_Quantity <> N.OrderDtl_Quantity)
BEGIN ATOMIC
Update ITSO4710/Reservation R
set Reserved_Quantity = R.Reserved_Quantity
- O.OrderDtl_Quantity
+ N.OrderDtl_Quantity
Where R.Order_Number = N.Order_Number
and R.Product_Number = N.Product_Number;
END;
The database events do not include clear, initialize, move, apply journal changes,
remove journal changes, or change end-of-data operations.
External triggers are activated at the row level, which means the program is called as soon as
one column value in the row is changed. It is not possible to define external triggers at the
column level.
Because the trigger programs are called by the database manager, independent of which
interface was used, there are some recommendations you should follow for external trigger
programs.
External triggers are not automatically linked to the database table, but there are two
methods for registering the triggers:
CL command ADDPFTRG (Add Physical File Trigger)
Using the iSeries Navigator
Note: The default value for the commitment definition scope (CMTSCOPE) in the
STRCMTCTL command is *ACTGRP.
If you started the commitment control with the commitment definition scope *JOB, you
can use any activation group.
Open the table with a commit lock level the same as the application's commit lock level.
This allows the trigger program to run under the same commit lock level as the application.
Create the program in the table’s schema. When saving and restoring your schema, the
triggers are correctly activated. When saving in different schemas, the triggers must be
recreated or added to the table.
Use commit or rollback in the trigger program only if the trigger program runs under a
different activation group than the application. If the trigger runs in the same activation
group, avoid commits and rollbacks in the trigger program and let the application perform
them.
Signal an exception or send an escape message if an error occurs or is detected in the
trigger program. If an error message is not signalled from the trigger program, the
database assumes that the trigger ran successfully. This may cause the user data to end
up in an inconsistent state.
By defining these parameters in your trigger programs, you can take the appropriate actions
based on the kind of data change that has occurred and the characteristics of the job that
fired the trigger.
Trigger buffer
Table 8-1 shows the static part of the trigger buffer.
Table 8-1 The trigger buffer structure
Decimal offset, parameter (type): description
10  Physical file library name (char(10)): The library in which the physical file resides.
20  Physical file member name (char(10)): The name of the physical file member.
30  Trigger event (char(1)): The event that caused the trigger program to be called; the
    possible values are "1" (Insert), "2" (Delete), "3" (Update), and "4" (Read).
32  Commit level (char(1)): The commit lock level of the interface that activated the trigger:
    "0" (*NONE), "1" (*CHG), "2" (*CS), or "3" (*ALL).
36  CCSID of data (binary(4)): The CCSID of the data in the new or the original records; the
    data is converted to the job CCSID by the database.
40  Relative record number (binary(4)): The relative record number of the record to be
    updated or deleted (*BEFORE triggers) or of the record that was inserted, updated,
    deleted, or read (*AFTER triggers).
48  Original record offset (binary(4)): The location of the original record. The offset value is
    from the beginning of the trigger buffer. This field is not applicable if the original value of
    the record does not apply to the operation, for example, an insert operation.
56  Old record null map offset (binary(4)): The location of the null byte map of the original
    record. The offset value is from the beginning of the trigger buffer. This field is not
    applicable if the original value of the record does not apply to the change operation, for
    example, an insert operation.
60  Old record null map length (binary(4)): The length is equal to the number of fields in the
    physical file.
64  New record offset (binary(4)): The location of the new record. The offset value is from
    the beginning of the trigger buffer. This field is not applicable if the new value of the
    record does not apply to the change operation, for example, a delete operation.
72  New record null map offset (binary(4)): The location of the null byte map of the new
    record. The offset value is from the beginning of the trigger buffer. This field is not
    applicable if the new value of the record does not apply to the change operation, for
    example, a delete operation.
76  New record null map length (binary(4)): The length is equal to the number of fields in the
    physical file.
*   Original record (char(*)): A copy of the original physical record before being updated,
    deleted, or read. The original record applies only to update, delete, and read operations.
*   Original record null byte map (char(*)): This structure contains the NULL value
    information for each field of the original record. Each byte represents one field. The
    possible values for each byte are "0" (not NULL) and "1" (NULL).
*   New record (char(*)): A copy of the record that is being inserted or updated in a physical
    file as a result of the change operation. The new record applies only to insert and update
    operations.
*   New record null byte map (char(*)): This structure contains the NULL value information
    for each field of the new record. Each byte represents one field. The possible values for
    each byte are "0" (not NULL) and "1" (NULL).
If you want to use your own field names in your program, you have to define a data structure
covering positions 1-80.
Note: Binary(4) means that the maximum value can be saved in 4 bytes. In RPG, Binary(4)
must be defined as 9B 0 or, better, 10I 0. The integer definition can hold the complete range
of binary values.
A template of the trigger buffer is saved in the library QSYSINC: in file QRPGLESRC for ILE
RPG programs, QRPGSRC for RPG/400 programs, and QCBLLESRC for COBOL programs,
each with the member name TRGBUF.
Example 8-4 is a copy of the trigger buffer saved in the QSYSINC library for ILE RPG
programs.
This source snippet can easily be embedded as a copy member in your source code.
The following discussion refers to the most important fields in the trigger buffer, as previously
marked:
Trigger event (QDBTE):
This field lets you determine the event that called the trigger. This information is
particularly valuable when a trigger is defined for different events. You may want to identify
which record image to use, depending on the event that has activated the trigger.
– If the trigger event is INSERT, only the new record is available.
D CmtLvl S N
D NoCommit C Const('0')
*-----------------------------------------------------------------------------------------
/Free
If QDBCLL = NoCommit;
CmtLvl = *Off;
Else;
CmtLvl = *On;
EndIf;
Open OrdHdr;
*InLR = *On;
/End-Free
There are two common techniques to use the trigger buffer offset information to correctly
locate the record images.
Using a substring function
Example 8-6 on page 127 shows an AFTER INSERT trigger.
After a new Order Detail record is inserted, a new reservation for the product is created or
an existing reservation is updated. The newly inserted record is moved to an external
data structure by using the built-in function %SubSt (substring).
Example 8-6 Retrieving old/new record in an external trigger by using built-in function %Subst()
H DEBUG Option(*NoDebugIo)
*-----------------------------------------------------------------------------------------
* Data structure QDBTB - Static Part of the TriggerBuffer
/COPY QSYSINC/QRPGLESRC,TrgBuf
D 81 9999
D TrgBufLen S 10I 0
D CmtLvl S N
D NoCommit C Const('0')
*-----------------------------------------------------------------------------------------
C *entry plist
C parm QDBTB
C parm TrgBufLen
/Free
//Open file with the appropriate commitment level
If %Open(Reserve)
and ( QDBCLL = NoCommit and CmtLvl = *On
or QDBCLL <> NoCommit and CmtLvl = *Off);
Close Reserve;
EndIf;
If Not %Open(Reserve);
   Open Reserve;
EndIf;
// New must first be filled from the trigger buffer with %Subst,
// using the new record offset and length fields of the static part
OrHNbr = New.OrhNbr;
PrdNbr = New.PrdNbr;
// locate an existing reservation for this order and product
Chain (OrHNbr: PrdNbr) ReserveF;
If %Found(Reserve);
   ResQty += New.OrdQty;
   Update ReserveF;
Else;
   ResQty = New.OrdQty;
   Write ReserveF;
EndIf;
Return;
/End-Free
/End-Free
Example 8-7 Retrieving old/new record in a system trigger by using based pointers
H DEBUG Option(*NoDebugIo)
*-----------------------------------------------------------------------------------------
* Data structure QDBTB - Static Part of the TriggerBuffer
/COPY QSYSINC/QRPGLESRC,TrgBuf
D 81 9999
D TrgBufLen S 10I 0
D CmtLvl S N
D NoCommit C Const('0')
* External data structures
D New e ds extname(OrderDtl) qualified
D based(PtrNewRec)
*-----------------------------------------------------------------------------------------
C *entry plist
C parm QDBTB
C parm TrgBufLen
/Free
If %Open(Reserve)
and ( QDBCLL = NoCommit and CmtLvl = *On
or QDBCLL <> NoCommit and CmtLvl = *Off);
Close Reserve;
EndIf;
If Not %Open(Reserve);
   Open Reserve;
EndIf;
// PtrNewRec must first be set to the address of the new record
// image: the trigger buffer address plus the new record offset
OrHNbr = New.OrhNbr;
PrdNbr = New.PrdNbr;
// locate an existing reservation for this order and product
Chain (OrHNbr: PrdNbr) ReserveF;
If %Found(Reserve);
   ResQty += New.OrdQty;
   Update ReserveF;
Else;
   ResQty = New.OrdQty;
   Write ReserveF;
EndIf;
Return;
/End-Free
/End-Free
Note: Do not hard code the start of your old and new record image in your data
structure. The dynamic part of the trigger buffer could change.
There is a very interesting technique, called softcoding the trigger buffer, that is proposed
and described in Paul Conte's book Database Design and Programming for DB2/400. The
idea behind this technique is that if the trigger buffer is softcoded, any changes to the
underlying structure can be incorporated by simply recompiling the trigger program.
To register an external trigger with ADDPFTRG, you have to specify the following options:
Trigger event
Trigger time
Trigger program and library (*LIBL is allowed, but the actual library name is resolved and
stored in the file description)
All other options are optional or the default values can be used. For more information, refer
to the redbook Stored Procedures, Triggers and User Defined Functions in DB2 UDB
for iSeries at:
http://www.redbooks.ibm.com/redbooks/pdfs/sg246503.pdf
Example 8-8 shows registration of the trigger program in Example 8-7 on page 128.
Figure 8-1 on page 130 shows the information that has to be provided in the General tab
when registering an external trigger.
Figure 8-2 on page 131 shows the information required in the Events tab when registering an
external trigger.
Figure 8-2 Events tab for external triggers
An SQL trigger can be generated by using the SQL statement CREATE TRIGGER.
Note: Unlike external triggers, SQL triggers are automatically registered by executing
the CREATE TRIGGER statement.
When a trigger is created, SQL creates a temporary source file that contains C source
code with embedded SQL statements. A program object is then created using the CRTSQLCI
and CRTPGM commands. From release V5R1 on, an internal C compiler is shipped with the
system, which allows customers to create SQL triggers without having to purchase the
ILE C product.
The program is created with activation group ACTGRP(*CALLER). This makes sure that your
program runs under the same commitment control level as the program or procedure that
fired the trigger. The activation group is always considered the unit of work. Although
commitment control can be started with commitment scope at the job level, the default value
for the commitment scope in the CL command STRCMTCTL (Start Commitment Control) is
*ACTGRP (activation group).
You can specify your compile options directly in the trigger by using the SET OPTION
statement in your trigger program.
The CREATE TRIGGER statement is a single statement and consists of two parts:
The control information
The SQL trigger body
Note: QTEMP cannot be used as the trigger-name schema qualifier. Do not use names
that begin with 'SQ', 'SQL', 'RDI', or 'DSN'. These names are reserved for the database
manager.
Because a trigger is directly linked to the database, it is preferable to create the trigger in
the same schema in which the table is located. When saving and restoring your schema, the
triggers are then correctly activated. When they are located in different schemas, the triggers
must be recreated or added to the table.
Trigger time/trigger event
The trigger time, BEFORE or AFTER, must be specified, depending on when your trigger
should be activated.
The trigger event that fires the trigger, INSERT, UPDATE, or DELETE, must also be specified.
Update triggers can be defined at the column level by adding OF ColumnName1, ...
ColumnNameN to the UPDATE event. This means that the trigger is only fired if
changes to the listed columns are executed.
Example 8-10 on page 133 shows an AFTER UPDATE trigger that is fired when the
OrderDtl_Quantity is changed.
Example 8-10 After update trigger at the column level
Create Trigger ITSO4710/UpdateReservation01
AFTER UPDATE of OrderDtl_Quantity on ITSO4710/ORDER_DETAIL
Referencing OLD ROW as O
NEW ROW as N
For Each Row
Mode DB2ROW
When (O.OrderDtl_Quantity <> N.OrderDtl_Quantity)
BEGIN ATOMIC
Update ITSO4710/Reservation R
set Reserved_Quantity = R.Reserved_Quantity
- O.OrderDtl_Quantity
+ N.OrderDtl_Quantity
Where R.Order_Number = N.Order_Number
and R.Product_Number = N.Product_Number;
END;
Referencing
With the REFERENCING clause you can specify a correlation name for the triggered
record image. In your trigger program you can access the values by specifying
CorrelationName.Column.
The OLD ROW image is only available for update and delete triggers. The NEW ROW
image is only available for insert and update triggers.
– OLD ROW AS correlation name
This specifies a correlation name that identifies the values in the row prior to the
triggering SQL operation.
– NEW ROW AS correlation name
This specifies a correlation name that identifies the values in the row as modified by the
triggering SQL operation.
Example 8-11 shows a before insert trigger that updates the current time and user in
the inserted row.
Granularity
– FOR EACH ROW
The trigger is fired for each row that is affected by the event. This method must be
selected if the old and new row values must be compared, for example, when you
want to write a transaction file where all changes are stored.
With FOR EACH ROW Granularity, the operation sequence is:
Modify row 1
Call Trigger row 1
Modify row 2
Call Trigger row 2
Example 8-12 SQL trigger with FOR EACH STATEMENT clause
Create Trigger ITSO4710/InsertOrderHeaderTotal
After Insert on ITSO4710/ORDER_DETAIL
Referencing NEW_TABLE as NewDetail
For Each Statement
Mode DB2SQL
Set Option DbgView = *SOURCE
BEGIN ATOMIC
Declare SumTotal Decimal(7, 0);
select coalesce(sum(NewDetail.OrderDtl_Total), 0)
into SumTotal
from NewDetail;
update Order_Header
set Order_Total = Order_Total + SumTotal ;
END;
WHEN (search-condition)
Not only is it possible to define an SQL trigger at the column level; you can also specify
the conditions under which a trigger is executed. Each column value that is
stored in either the OLD or the NEW row can be compared. For example, you might define
two triggers to handle the raise and the decrease of a certain value, because different
actions are necessary.
Example 8-13 shows a trigger that is fired when the Order Detail quantity is changed, but
only if the new quantity is higher than the old one.
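A trigger of that kind, modeled on Example 8-10 (the trigger name is illustrative, not taken from the book), might be sketched as follows:

```sql
-- fired only when the new quantity is higher than the old one
Create Trigger ITSO4710/QuantityRaised
AFTER UPDATE of OrderDtl_Quantity on ITSO4710/ORDER_DETAIL
Referencing OLD ROW as O
            NEW ROW as N
For Each Row
Mode DB2ROW
When (N.OrderDtl_Quantity > O.OrderDtl_Quantity)
   Update ITSO4710/Reservation R
      set Reserved_Quantity = R.Reserved_Quantity
                            + N.OrderDtl_Quantity
                            - O.OrderDtl_Quantity
    Where R.Order_Number = N.Order_Number
      and R.Product_Number = N.Product_Number;
```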
Trigger body
The trigger body contains all the executable statements. When a trigger is created, SQL
creates a temporary source file that contains C source code with embedded SQL statements
that are specified in the trigger body. A program object is then created using the CRTSQLCI
and CRTPGM commands.
If your trigger consists of only one executable statement, you simply add it to the trigger
control information.
Example 8-14 shows a BEFORE INSERT trigger, where only one single statement is
executed. The next serial number for a key is calculated and inserted in the new row.
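Such a single-statement trigger, modeled on Example 8-2 (object names are illustrative), could be sketched as follows; no BEGIN ... END block is required:

```sql
Create Trigger ITSO4710/MyTrgTable03
BEFORE INSERT on ITSO4710/MyTrgTable
Referencing NEW as INS
For Each Row
Mode DB2ROW
-- the single statement is appended directly to the control information
Set INS.CurrNbr = (Select Coalesce(Max(a.CurrNbr) + 1, 1)
                     from ITSO4710/MyTrgTable a
                    Where a.Text = INS.Text);
```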
If several SQL statements must be executed in one single trigger program, they must be
embedded in a compound statement. A compound statement starts with BEGIN and ends
with END.
All SQL statements inside the trigger body must be ended with a semicolon (;).
For more information about SQL Programming Language, refer to “SQL programming
language” on page 163.
Example 8-15 shows a trigger where several statements are embedded in a single compound
statement. The product price is determined from the stock table, and the total, calculated
by multiplying quantity and price, is updated in the new row.
BEGIN ATOMIC
Declare Price Decimal(5, 0);
Select coalesce(S.Product_Price, 0)
into Price
from Stock S
Where S.Product_Number = N.Product_Number;
Set N.OrderDtl_Total = Price * N.OrderDtl_Quantity;
END;
Figure 8-3 on page 137 shows the General tab for SQL triggers.
Figure 8-3 General tab for SQL triggers
Figure 8-5 on page 138 shows the trigger body for SQL triggers.
For more information on triggers refer to the redbook Stored Procedures, Triggers and User
Defined Functions in DB2 UDB for iSeries, SG24-6503.
In contrast to triggers, which are directly linked to the database tables, stored procedures
must be called explicitly by using the SQL CALL statement. When a stored procedure is
called, it is embedded in the call stack and executed. When the stored procedure ends, either
normally or abnormally, control is returned to the caller. In this way it is possible to
exchange parameter information between the caller and the stored procedure.
Stored procedures can be called locally (on the same system where the application runs) or
remotely on a different system. They are the easiest way to perform a remote call and to
distribute the execution logic of an application program.
Programs do not need to be registered as long as you do not want to overload the
procedures (see “Procedure signature and overloading” on page 148). They can be called
directly by the SQL interfaces using the SQL command CALL.
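As a sketch (library, program, and parameter values are illustrative), such a direct call of an unregistered program might look like this:

```sql
-- MYLIB/MYPGM is a hypothetical *PGM object with two parameters;
-- no prior CREATE PROCEDURE is needed for this call
CALL MYLIB/MYPGM('CUST01', 100.00)
```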
Programs or service programs are registered as stored procedures by using the SQL
command CREATE PROCEDURE.
Note: Since release V5R3M0, procedures in service programs without a return value can
be registered as stored procedures.
To register procedures in service programs, the EXTERNAL NAME option of the CREATE
PROCEDURE statement (see “SQL statement CREATE PROCEDURE” on page 144)
must include the procedure’s entry point (procedure name).
The activation group of the program or service program is inherited by the stored procedure,
which means that if your program runs in a named activation group, the stored procedure
uses the same activation group.
If your program or service program is generated with a named activation group or a new
activation group, and the commitment control is started at activation group level, commit and
rollback will lead to unexpected results.
Note: Commitment control can only be started once in a job, with commitment scope
of either job or activation group. If you have to change the scope, you first have to end
commitment control using the CL command ENDCMTCTL and then restart it with a
different commitment scope.
In contrast to Java or .NET languages, RPG cannot directly receive result sets.
Nevertheless, it is possible to return result sets from RPG. There are two methods to realize
this:
Define a cursor with WITH RETURN and open it before you end your program or
procedure.
In your RPG program, you can fill your data into a multi-occurrence data structure or an
array data structure and then use the SQL command SET RESULT SETS to return the
data structure as result set.
Note: If you have to receive result sets within RPG, you have to use the Call Level
Interface (CLI) APIs.
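The first method can be sketched as follows in embedded SQL (the cursor name and column list are illustrative; the cursor is simply left open when the program returns):

```sql
-- declare the cursor as returnable to the caller
DECLARE MyResult CURSOR WITH RETURN FOR
   SELECT Order_Number, Order_Total
     FROM Order_Header;

-- open the cursor and do not close it before returning
OPEN MyResult;
```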
Figure 8-6 on page 141 shows the General tab for external procedures.
Figure 8-6 General requirements for external procedures
Figure 8-7 on page 142 shows the Parameters window for external procedures.
Figure 8-8 on page 143 shows the Program window for an external procedure.
Figure 8-8 Program information for external procedures
In the program body all SQL statements and scalar functions can be used. Additionally, the
SQL/PSM language provides a number of control statements (for more information about the
SQL programming language, see “SQL programming language” on page 163).
When the procedure is created, a temporary source file is generated, containing the SQL
statements converted into C-language API calls. From this source member a C program
object is created.
Figure 8-9 shows the SQL statements for an SQL stored procedure.
The text below shows the most important options of the create procedure statement. For
more information see the SQL Reference.
CREATE PROCEDURE ProcedureName
(Parameter Declaration)
LANGUAGE
PARAMETER STYLE
SQL DATA
DYNAMIC RESULT SETS
EXTERNAL NAME
SPECIFIC
COMMIT ON RETURN
Procedure name
This is the name that is used to call the stored procedure. It can be defined with up to 128
characters.
The combination of schema, procedure name, and number of parameters must be unique
on the current server.
If you want to generate external stored procedures, the procedure name and the name of
the program or service program do not need to be identical. The external name of
the program or procedure must be declared with the EXTERNAL NAME option.
Parameter declaration
When defining a parameter, you have to determine whether it is input only, output only, or
both, by using one of the following modes:
– IN for input-only parameters
– OUT for output-only parameters
– INOUT for a parameter that is both input and output capable
Further, the parameter name and the data type must be specified.
Example 8-16 shows the registration of an external procedure with several input and
output parameters.
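The following sketch illustrates the three modes in a registration of a hypothetical external program (all object and parameter names are made up for illustration):

```sql
CREATE PROCEDURE MYLIB/UpdateCustomer
       (IN    ParCustomer CHAR(5),        -- read by the program
        OUT   ParStatus   CHAR(1),        -- returned to the caller
        INOUT ParTotal    DECIMAL(11, 2)) -- passed in and returned
LANGUAGE RPGLE
PARAMETER STYLE GENERAL
NO SQL
EXTERNAL NAME 'MYLIB/UPDCUST'
```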
Note: With embedded SQL you cannot access result sets. The only way to receive
result sets within RPG is to use the Call Level Interface (CLI) APIs.
You cannot get direct access to result sets. RPG can return result sets through an open
cursor or by using the SQL statement SET RESULT SETS.
Language
Specifies the language in which the external program to be registered is written.
The following languages can be specified:
– C
– C++
– CL
– COBOL or COBOLLE for ILE COBOL programs
– FORTRAN
– JAVA
– PLI
– REXX
– RPG for RPG/400 programs or RPGLE for ILE RPG programs
Parameter style SQL cannot be used with programming language JAVA.
– DB2SQL
The DB2SQL style is a superset of the SQL parameter style.
– DB2GENERAL
Specifies that the procedure will use a parameter passing convention that is defined for
use with Java methods.
– JAVA
Specifies that the procedure will use a parameter passing convention that conforms to
the Java language and SQLJ Routines specification.
SQL DATA
When registering a stored procedure, you have to specify whether SQL statements are
used, by specifying one of the following options:
– NO SQL DATA
The stored procedure does not contain SQL statements. This must be used for your
RPG or COBOL programs or service programs that do not contain embedded SQL.
– CONTAINS SQL DATA
This option must be used if you want to register an RPG or COBOL program or service
program that contains SQL statements like SET, CALL, and COMMIT, but does not
access database data.
If you create an SQL stored procedure that only executes calls to other stored
procedures and set parameters, you have to specify this option, too.
– READS SQL DATA
This option must be used if you want to register an RPG or COBOL program that
accesses database data by using the SELECT statement, but performs no insert,
update, or delete with SQL.
If you write an SQL stored procedure that only returns result sets, you have to specify
this option, too.
– MODIFIES SQL DATA
This option must be specified if you are inserting, updating, or deleting data through
SQL in your stored procedure.
EXTERNAL NAME
Specifies the program or service program that will be executed when the procedure is
called by the CALL statement. The program name must identify a program or service
program that exists at the application server at the time the procedure is called.
If you want to register a service program, you have to add the name of the procedure that is
called to the external name. It must be specified even if the service program consists of
only one single procedure with the same name.
Example 8-18 shows the registration of the procedure HSEMPLOYEE in the service
program HSEMPLOYEE.
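Based on the names given above, such a registration might be sketched as follows (the schema, the empty parameter list, and the NO SQL clause are assumptions for illustration):

```sql
CREATE PROCEDURE ITSO4710/HSEMPLOYEE ()
LANGUAGE RPGLE
PARAMETER STYLE GENERAL
NO SQL
-- the entry point (HSEMPLOYEE) must be specified even though it
-- matches the service program name
EXTERNAL NAME 'ITSO4710/HSEMPLOYEE(HSEMPLOYEE)'
```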
SPECIFIC NAME
DB2 Universal Database for iSeries identifies each stored procedure with a specific name
that, combined with the specific schema, must be unique in the system. This is important
because multiple stored procedures with the same name but different signatures must
have different specific names. If you do not provide a specific name, DB2 UDB for iSeries
generates one automatically.
If you do not overload your procedure, the generated specific name will be the procedure
name. When overloading, the specific name is generated using the system conventions:
characters 1-5 of the procedure name followed by a serial number.
COMMIT ON RETURN
Specifies whether the database manager commits the transaction immediately on return
from the procedure.
If COMMIT ON RETURN YES is specified, the database manager issues a commit if the
procedure returns successfully. If the procedure returns with an error, a commit is not
issued.
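A sketch of COMMIT ON RETURN in an SQL procedure (object and column names are illustrative, not taken from the book's examples):

```sql
CREATE PROCEDURE MYLIB/PostOrder (IN ParOrder DECIMAL(9, 0))
LANGUAGE SQL
MODIFIES SQL DATA
COMMIT ON RETURN YES   -- commit automatically on successful return
BEGIN
   UPDATE Order_Header
      SET Order_Status = 'P'
    WHERE Order_Number = ParOrder;
END
```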
The signature of a procedure is determined by the qualified name and the number of
parameters in the procedure. A signature is unique in one schema.
Procedures with the same name and an identical number of parameters cannot coexist in the
same schema, even if the parameters have different data types. It is possible, however, to
register the same program or service program with the same parameters under another
procedure name in the same library.
Note: You do not have to register programs as stored procedures as long as you do not
want to overload procedures. Programs can be directly called by the SQL CALL statement,
without being registered as stored procedures.
Example 8-19 on page 149 updates the table MyEmployee. Depending on the passed
parameters, different selections are performed.
If RAISE and DEPARTMENT are specified, only those rows that belong to the passed
department are updated.
If all three parameters are passed, all rows with the specified department and where the year
of the birthday is equal to the passed year are updated.
The source is compiled into a module by using the CRTRPGMOD command and then bound
into a service program by using the CRTSRVPGM command.
D UpdEmployee PI
D ParRaise 5P 2 const
D ParDept 10A varying const options(*NoPass)
D ParBYear 4P 0 const options(*NoPass)
*-----------------------------------------------------------------------
/free
setLL *Zeros MyEmployeR;
DoU %EOF(MyEmployee);
Read MyEmployeR;
If %EOF;
leave;
EndIf;
If ParRaise = *Zeros
or (%Parms >= 2 and EmpDept <> ParDept)
or (%Parms >= 3 and %SubDt(EmpBDay: *Y) <> ParBYear);
iter;
Endif;
EmpSal *= (1 + ParRaise/100);
Update MyEmployeR;
EndDo;
Return;
/End-Free
If we want to call the procedure with one, two, or three parameters, we have to register three
procedures with the same name but a different number of parameters.
Example 8-20 shows the CREATE PROCEDURE statement for each of these procedures.
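Example 8-20 itself is not reproduced in this extract. The three registrations would look roughly like the following sketch; the parameter definitions follow the RPG prototype above, while the service program name UPDEMPLOYEE and the specific names are illustrative assumptions:

```sql
-- One-parameter version (service program and specific names are illustrative)
Create Procedure ITSO4710/UpdEmployee
       (In ParRaise Decimal(5, 2))
       Language RPGLE
       Specific ITSO4710/UPDEM00001
       Not Deterministic
       Modifies SQL Data
       External Name 'ITSO4710/UPDEMPLOYEE(UPDEMPLOYEE)'
       Parameter Style General;

-- Two-parameter version
Create Procedure ITSO4710/UpdEmployee
       (In ParRaise Decimal(5, 2),
        In ParDept  Varchar(10))
       Language RPGLE
       Specific ITSO4710/UPDEM00002
       Not Deterministic
       Modifies SQL Data
       External Name 'ITSO4710/UPDEMPLOYEE(UPDEMPLOYEE)'
       Parameter Style General;

-- Three-parameter version
Create Procedure ITSO4710/UpdEmployee
       (In ParRaise Decimal(5, 2),
        In ParDept  Varchar(10),
        In ParBYear Decimal(4, 0))
       Language RPGLE
       Specific ITSO4710/UPDEM00003
       Not Deterministic
       Modifies SQL Data
       External Name 'ITSO4710/UPDEMPLOYEE(UPDEMPLOYEE)'
       Parameter Style General;
```

All three procedures point at the same entry point; the database manager selects the right one at call time based on the number of arguments.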
If you list the procedures in the iSeries Navigator, you will see three procedures with the
same procedure name, but different signatures. Figure 8-10 shows the procedures listed in
the schema ITSO4710.
Example 8-21 on page 151 shows a procedure that calls the procedure UpdEmployee with
different parameters.
Example 8-21 Calling overloaded procedures
Create Procedure ITSO4710/CallUpdEmployee
Language SQL
Not Deterministic
Contains SQL
Called on NULL input
BEGIN
Declare RAISE Decimal(5, 0);
Declare DEPARTMENT Character(10);
Declare BIRTHYEAR Decimal(4, 0);
Set RAISE = 1;
Call UpdEmployee(RAISE);
Set RAISE = 2;
Set DEPARTMENT = 'DEPT01';
Call UpdEmployee(RAISE, DEPARTMENT);
Set RAISE = 3;
Set BIRTHYEAR = 1970; -- illustrative value
Call UpdEmployee(RAISE, DEPARTMENT, BIRTHYEAR);
End;
If you try to create a procedure with the same signature as an existing procedure (identical
name and identical number of parameters), the procedure is not created. An error
occurs:
SQL State: 42723
Vendor Code: -454
Message: [SQL0454] Routine MYPROCEDURE in MYSCHEMA already exists.
If you create a procedure with the same name as an existing procedure, but with a different
number of parameters, the procedure is overloaded, which means that a procedure with a
different signature is created.
If you want to replace a procedure, you first have to delete the old one by using the SQL
statement DROP PROCEDURE. As long as the procedure is not overloaded, you simply
specify DROP PROCEDURE followed by the procedure name.
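For an overloaded procedure, the name alone is ambiguous, so the DROP statement must be qualified. A sketch, in which the specific name is illustrative:

```sql
-- Non-overloaded procedure: the name is sufficient
Drop Procedure ITSO4710/CallUpdEmployee;

-- Overloaded procedure: qualify by the parameter data types ...
Drop Procedure ITSO4710/UpdEmployee (Decimal(5, 2), Varchar(10));

-- ... or drop it by its specific name
Drop Specific Procedure ITSO4710/UPDEM00002;
```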
For more information refer to the redbook Stored Procedures, Triggers and User Defined
Functions on DB2 Universal Database for iSeries, SG24-6503.
DB2 Universal Database for iSeries comes with a rich set of built-in functions, but users and
programmers may have particular requirements that are not covered by them. UDFs play a
very important role here by allowing users and programmers to enrich the database
manager with their own functions.
Performance
A UDF can run in the database engine and is very useful for performing calculations in the
database manager server. Another area where performance may be increased is in
dealing with Large Objects (LOBs). UDFs may be used for extracting or modifying portions
of the information contained in a LOB directly in the database manager server instead of
sending the complete LOB to the client side.
Migration
When migrating from other database managers, there could be built-in functions that are
not defined in DB2 Universal Database for iSeries. UDFs allow us to create those
functions in order to make the migration process easier.
A function is a relationship between a set of input values and a set of result values. When
invoked, a function performs some operation (for example, concatenate) based on the input
and returns a single or multiple results to the invoker. Depending on the nature of the return
value or values, user defined functions can be classified into:
User Defined (Scalar) Functions with one single return value
User defined table functions with a set (=table) of return values
Depending on the way they are coded, there are three different types of UDFs:
External user defined functions
SQL user defined functions
Sourced user defined functions
All types of user defined functions are generated by using the SQL command CREATE
FUNCTION.
In the following examples we create a service program, CENTER, containing two external
functions: CENTER to center a text in a character field, and RIGHTADJ to right adjust a text
in a character field.
Example 8-23 shows the prototypes for the functions CENTER and RIGHTADJ. Both
functions return an alphanumeric value.
Example 8-23 Prototypes for the functions CENTER and RIGHTADJ
* Function CENTER
D Center PR like(Text)
D ParText like(Text) const
* Function RightAdj
D RightAdj PR like(Text)
D ParText like(Text) const
Example 8-24 shows the source code for the two functions CENTER and RIGHTADJ.
Example 8-24 Source code for the functions CENTER and RIGHTADJ
H NoMain
H Debug BndDir('MYBNDDIR')
*-----------------------------------------------------------------------------------------
* Prototypes
D/Copy QPROLESRC,CENTER
******************************************************************************************
* Function CENTER
******************************************************************************************
P Center B Export
D Center PI like(Text)
D ParText like(Text) const
D LenParText C const(%Size(ParText))
D RetText S like(Text)
D Start S 3U 0
*-----------------------------------------------------------------------------------------
/Free
Select;
When ParText = *Blanks;
Return *Blanks;
When %Len(%Trim(ParText)) = LenParText;
Return ParText;
Other;
Start = %Int((LenParText - %Len(%Trim(ParText))) / 2) + 1;
%Subst(RetText: Start) = %Trim(ParText);
Return RetText;
EndSl;
/End-Free
P Center E
******************************************************************************************
* Function RIGHTADJ
******************************************************************************************
P RightAdj B Export
D RightAdj PI like(Text)
D ParText like(Text) const
D LenParText C const(%Size(ParText))
D RetText S like(Text)
//---------------------------------------------------------------------------------------
/Free
Select;
When ParText = *Blanks;
Return *Blanks;
Other;
EvalR RetText = %TrimR(ParText);
Return RetText;
EndSl;
/End-Free
P RightAdj E
To compile this source code into a service program the following binder source is used.
Example 8-26 shows the compile commands to generate the service program.
/* Delete Module */
DLTMOD MODULE(QTEMP/CENTER)
/* Recreate module and service program; source file names are illustrative */
CRTRPGMOD MODULE(QTEMP/CENTER) SRCFILE(ITSO4710/QRPGLESRC)
CRTSRVPGM SRVPGM(ITSO4710/CENTER) MODULE(QTEMP/CENTER) EXPORT(*SRCFILE) SRCFILE(ITSO4710/QSRVSRC)
Example 8-27 shows how these functions are called in an RPG program.
Example 8-27 Calling the procedures CENTER and RIGHTADJ from RPG
* Prototypes
D/Copy QPROLESRC,CENTER
* Field Definition
D TextIn S like(Text) inz('MyText')
D TextOut S like(Text)
*-----------------------------------------------------------------------------------------
/Free
TextOut = Center(TextIn);
Dsply TextOut;
TextOut = RightAdj(TextIn);
Dsply TextOut;
Return;
/End-Free
If you want to use these functions in SQL, you have to register them as user defined
functions. Example 8-28 shows the registration of the RPG functions CENTER and RIGHTADJ as user
defined functions.
Note: In contrast to programs, service programs can have several entry points, one for
each exported procedure. To register user defined functions, the entry point or the function
name must be specified. This is necessary even if a service program contains only one
function with the same name as the service program, for example, EXTERNAL NAME
MySchema.MySrvPgm(MyFunction).
Example 8-28 Registering the RPG functions CENTER and RIGHTADJ as external UDFs
Create Function ITSO4710/CenterText
(ParText CHAR(20) )
Returns CHAR(20)
Language RPGLE
Not Deterministic
No SQL
Called on NULL Input
DisAllow Parallel
External Name 'ITSO4710/CENTER(CENTER)'
Parameter Style SQL ;
Create Function ITSO4710/RightAdjust
(ParText CHAR(20) )
Returns CHAR(20)
Language RPGLE
Not Deterministic
No SQL
Called On NULL Input
DisAllow Parallel
External Name 'ITSO4710/CENTER(RIGHTADJ)'
Parameter Style SQL ;
Example 8-29 shows how the external functions can be used with SQL.
Example 8-29 Using user defined functions CenterText and RightAdjust in SQL
Update MySchema/MyTable
SET MyChar = CenterText(MyChar),
    MyChar1 = RightAdjust(MyChar1);
Figure 8-11 shows the General tab for creating an external user defined function.
Figure 8-12 on page 158 shows the Parameters tab for creating a user defined function.
Figure 8-8 on page 143 shows the Program tab for external user defined functions.
Figure 8-13 Program tab in creating a user defined function
Example 8-30 shows an SQL user defined function that converts a date into a text, in the
format Friday, 10th September 2004.
Example 8-30 SQL User defined scalar function to convert a date into a text string
Create Function ITSO4710/CvtDateToText (MyDate DATE )
Returns Char(50)
Language SQL
Specific ITSO4710/CvtDateToText
Deterministic
Contains SQL
Returns NULL on NULL Input
DisAllow PARALLEL
Begin
Return DayName(MyDate) concat ', ' concat
Trim(Char(DayOfMonth(MyDate))) concat
Case When DayOfMonth(MyDate) IN (1 , 21 , 31)
Then 'st'
When DayOfMonth(MyDate) IN (2 , 22)
Then 'nd'
When DayOfMonth(MyDate) IN (3 , 23)
Then 'rd'
else 'th'
end concat ' ' concat
MonthName(MyDate) concat ' ' concat
Char(Year(MyDate)) ;
End
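Once registered, the function can be invoked like any built-in scalar function. A minimal usage sketch, using the standard one-row catalog table SYSDUMMY1:

```sql
Select CvtDateToText(Date('2004-09-10'))
  From SysIbm/SysDummy1;
-- Friday, 10th September 2004
```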
Note: One very useful and important use of a table function is the ability to access data in
non-relational objects with an SQL statement. A table function can be written to extract data
out of a stream file in the IFS, and the invoking SQL statement can then process that data
just like data from an SQL-created table.
For an example of a user defined table function refer to 6.8.1, “User defined table functions
for accessing non-relational data” on page 72.
The data type of the value returned by the function is not considered to be part of the function
signature.
The following user defined functions can coexist in the same schema:
MyProcedure(Int)
MyProcedure(SmallInt)
Note: Certain data types are considered equivalent when it comes to function signatures.
For example, DECIMAL and NUMERIC or CHAR and GRAPHIC are treated as the same
type from the signature point of view. On the other hand, CHAR and VARCHAR are
handled as different data types. If you specify an alphanumeric constant, it is treated as
VARCHAR.
Distinct types are always treated as different data types, even though they are based on
the same data type and length as the defined parameter.
Example 8-31 shows the definition of the data type DateNumISO, which represents a numeric
date defined as Decimal(8, 0).
Example 8-32 shows the definition of the original user defined function CvtNumToDate that
converts a numeric date into a date value.
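Examples 8-31 and 8-32 are not reproduced in this extract; their content would be along these lines. This is a sketch, assuming the numeric date is stored in ISO format (yyyymmdd):

```sql
-- Example 8-31 (sketch): distinct type based on Decimal(8, 0)
Create Distinct Type ITSO4710/DateNumISO As Decimal(8, 0)
       With Comparisons;

-- Example 8-32 (sketch): convert a numeric yyyymmdd value into a date
Create Function ITSO4710/CvtNumToDate (DateNum Decimal(8, 0))
       Returns Date
       Language SQL
       Deterministic
       Contains SQL
       Returns NULL on NULL Input
       Return Date(Substr(Digits(DateNum), 1, 4) concat '-' concat
                   Substr(Digits(DateNum), 5, 2) concat '-' concat
                   Substr(Digits(DateNum), 7, 2));
```

DIGITS turns the packed value into a fixed eight-character string, from which the year, month, and day portions are cut out and reassembled into an ISO date string.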
For numeric fields defined with the data type DateNumISO, you cannot use this function
CvtNumToDate. You either have to convert the data type or overload the original function.
This can be accomplished by creating a sourced user defined function that converts the data
type into decimal and calls the original user defined function.
Example 8-33 shows the sourced user defined function that allows you to use the
CvtNumToDate function for the data type DateNumISO.
Example 8-33 Sourced function to convert numeric dates from DateNumISO to date
Create Function ITSO4710/CvtNumToDate
(DateNum DateNumIso )
Returns DATE
Source ITSO4710/CvtNumToDate (Decimal(8, 0)) ;
If you list the functions in the iSeries Navigator, you will see both functions with the same
function name, but different signatures—one SQL defined and the other sourced. Figure 8-14
shows the user defined functions listed in the schema ITSO4710.
If you try to create a user defined function with the same signature as an existing procedure
(identical name, identical number of parameters with the same data types), the UDF will not
be created. An error occurs:
SQL State: 42723
Vendor Code: -454
Message: [SQL0454] Routine CALLUPDEMPLOYEE in ITSO4710 already exists.
If you create a UDF with the same name as an existing function, but with a different number
of parameters or the same number of parameters but with different data types, the user
defined function is overloaded, which means a user defined function with a different signature
is created.
If you want to replace a user defined function, you first have to delete the old one by using the
SQL statement DROP FUNCTION. As long as the function is not overloaded, you simply
specify DROP FUNCTION followed by the function name.
There are two methods to delete overloaded user defined functions:
Adding the data types and the lengths of the parameters of the function after the
function name in the DROP FUNCTION statement
Using the specific name as follows:
DROP SPECIFIC FUNCTION SpecificName
For more information on user defined functions refer to the redbook Stored Procedures,
Triggers and User Defined Functions in DB2 UDB for iSeries, SG24-6503.
Using SQL/PSM, you can use all SQL statements and scalar functions. It is possible to insert,
update, or delete multiple rows in different tables. You can use the SELECT INTO statement
to retrieve one single row or value. Furthermore, you can define and handle serial and scroll
cursors, like in embedded SQL. Variables can be defined, but in contrast to the host variables
used in embedded SQL, they must not be preceded by a colon (:). It is even possible to
create and use SQL statements dynamically.
In embedded SQL we learned how to access the database data, how to modify them,
and how to use the SQL SET statement. But to control the program flow we used RPG
statements like the operation codes IF or DOU. If we want to move from embedded SQL to
SQL/PSM, we need those control statements in SQL.
For use in SQL triggers, SQL stored procedures, and SQL user defined functions, SQL
provides a set of control statements that allow SQL to be used in a manner similar to writing a
program in a procedural language.
A compound statement begins with BEGIN and ends with END. The END clause must be
ended with a semicolon (;).
Every SQL statement embedded in the compound statement must be ended by a semicolon
(;).
When the compound statement is used, there is an order that must be followed:
1. Local variable declarations
2. Local cursor declarations
3. Local handler declarations
4. SQL procedure logic, all other SQL statements
If you want to compare it with RPG, the Definition specifications must be located before the
Calculation specifications.
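A minimal compound statement that follows this order might look as follows; the table and variable names are illustrative:

```sql
Begin
   -- 1. Local variable declarations
   Declare Counter Integer Default 0;
   -- 2. Local cursor declarations
   Declare C1 Cursor For Select EmpSal From MyEmployee;
   -- 3. Local handler declarations
   Declare Continue Handler For SqlException
      Set Counter = -1;
   -- 4. SQL procedure logic
   Open C1;
   Set Counter = Counter + 1;
   Close C1;
End;
```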
Compound statements can be nested. Nested compound statements can be used to scope
handlers, cursors, and variables to a subset of the statements in a procedure. This can
simplify the processing done for each SQL procedure statement.
Nested compound statements can be compared with internal procedures in ILE programs. If
the same procedure is needed several times, you transform it in your ILE program into an
exported procedure. Likewise, instead of copying the same nested compound statement into
several procedures, it is better to create a stored procedure or a user defined function
containing these statements, and call it.
For more information on the SQL control statements refer to the SQL Reference book.
Conditional control
Both RPG and SQL provide two methods for condition handling. In the first one, IF handles a
single and sometimes nested condition, while the SQL CASE statement or the RPG
operation code SELECT can handle multiple conditions.
Table 8-2 on page 165 shows the SQL conditional control statements and the RPG
equivalent.
Table 8-2 SQL conditional control statements
SQL IF statement (RPG equivalent: IF / ELSEIF / ELSE / ENDIF)

SQL syntax:
IF Condition THEN
   SQL-Statement;
   additional SQL-Statements;
ELSEIF Condition THEN
   SQL-Statement;
   additional SQL-Statements;
ELSE
   SQL-Statement;
   additional SQL-Statements;
END IF;

SQL example:
IF Month(MyDate) between 1 and 3
   THEN Set Quartal = 1;
ELSEIF Month(MyDate) between 4 and 6
   THEN Set Quartal = 2;
ELSEIF Month(MyDate) between 7 and 9
   THEN Set Quartal = 3;
ELSE Set Quartal = 4;
END IF;

SQL CASE statement (RPG equivalent: SELECT / WHEN / OTHER / ENDSL)

SQL syntax:
CASE
   WHEN Condition THEN
      SQL-Statement;
      additional SQL-Statements;
   ELSE
      SQL-Statement;
      additional SQL-Statements;
END CASE;

SQL example:
CASE
   WHEN Month(MyDate) between 1 and 3
      THEN Set Quartal = 1;
   WHEN Month(MyDate) between 4 and 6
      THEN Set Quartal = 2;
   WHEN Month(MyDate) between 7 and 9
      THEN Set Quartal = 3;
   ELSE Set Quartal = 4;
END CASE;
Iterative control
Both SQL and RPG provide a number of methods for iterative control, but SQL statements
and RPG operation codes differ slightly.
Note: The SQL FOR statement and the RPG operation code FOR cannot be
compared.
Table 8-3 shows the SQL iterative control statements and compares them with RPG.
Table 8-3 SQL iterative control statements

SQL LOOP statement (RPG equivalent: DO / ENDDO, FOR / ENDFOR)

SQL syntax:
(Label:)
LOOP
   SQL-Statement;
   additional SQL-Statements;
END LOOP (Label);

SQL example:
NextLoop:
LOOP
   FETCH Cursor into OutPut;
   IF OutPut = ' ' THEN
      LEAVE NextLoop;
   END IF;
   SET Counter = Counter + 1;
END LOOP NextLoop;

SQL WHILE statement (RPG equivalent: DOW / ENDDO)

SQL syntax:
(Label:)
WHILE Condition DO
   SQL-Statement;
   additional SQL-Statements;
END WHILE (Label);

SQL example:
WHILE Counter < 10 DO
   FETCH Csr1 into OutPut;
   SET Counter = Counter + 1;
END WHILE;

SQL REPEAT statement (RPG equivalent: DOU / ENDDO)

SQL syntax:
(Label:)
REPEAT
   SQL-Statement;
   additional SQL-Statements;
UNTIL Condition
END REPEAT (Label);

SQL example:
REPEAT
   FETCH Csr1 into OutPut;
   SET Counter = Counter + 1;
UNTIL SqlState = '02000'
END REPEAT;

SQL FOR statement (RPG equivalent: combination of SETLL, DO, and READ)

SQL syntax:
(Label:)
FOR Variable AS CURSOR FOR SELECT-Statement
DO
   SQL-Statement;
   additional SQL-Statements;
END FOR (Label);

SQL example:
FOR v1 AS c1 CURSOR FOR
    SELECT firstnme, midinit, lastname
      FROM employee
DO
   SET fullname = lastname concat ', ' concat
                  firstnme concat ' ' concat midinit;
   INSERT INTO TNAMES VALUES ( fullname );
END FOR;
In some situations it is important to leave a loop or to skip to the next iteration. SQL provides
three control statements to achieve this functionality.
LEAVE
LEAVE allows you to leave the iteration. It is an equivalent to the RPG operation code
LEAVE.
ITERATE
ITERATE allows you to skip to the next iteration. It is an equivalent to the RPG operation
code ITER.
GOTO
GOTO can be used to branch to a label. It is an equivalent to the RPG operation codes
GOTO or CABxx.
Note: Use the GOTO statement sparingly. This statement interferes with the normal
sequence of processing, thus making a routine more difficult to read and maintain.
Often, another statement, such as ITERATE or LEAVE, can eliminate the need for a
GOTO statement.
Table 8-4 shows the SQL statements that allow you to skip to the next iteration or to leave it.
Table 8-5 shows the syntax and examples for these statements.
SQL SET statement (RPG equivalent: EVAL, which can be omitted in free format RPG)
Syntax:  SET Variable = Expression;
Example: SET MyVar = Quantity * Price;

SQL CALL statement (RPG equivalent: CALL + PARM for programs; CALLP, which can be
omitted in free format RPG)
Syntax:  CALL Procedure (Parm1, Parm2, ... ParmN);
Example: CALL MyProc (Parm1, Parm2);

SQL RETURN statement (RPG equivalent: RETURN)
Syntax:  RETURN Expression;
Example: RETURN Quantity * Price;
To handle errors in embedded SQL, we check the content of the SQLCODE or
SQLSTATE that is returned after executing an SQL statement. In procedural SQL there is
an additional method to handle errors: the use of condition handlers.
Condition handler
A handler declaration associates a handler with an exception or completion condition in a
compound statement.
In the text below you will see the DECLARE HANDLER statement:
DECLARE Handler Type
FOR condition
SQL-Statements
A Condition handler is always fired when a condition occurs that matches the condition
specified in the handler definition.
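A handler declaration in context might look as follows. This is a sketch; the table, the update logic, and the message text are illustrative:

```sql
Begin
   Declare ErrMsg Char(30) Default ' ';
   -- Exit handler: on any SQLEXCEPTION, roll back and leave the compound statement
   Declare Exit Handler For SqlException
      Begin
         Set ErrMsg = 'Error Rollback';
         Rollback;
      End;
   Update MyEmployee Set EmpSal = EmpSal * 1.02;
   Commit;
End;
```

Because it is declared as an EXIT handler, control leaves the compound statement after the handler runs; a CONTINUE handler would instead resume with the statement following the one that failed.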
The SQL/PSM database language supports two programming constructs that can be used to
handle user-defined errors:
SIGNAL
The SIGNAL statement signals an error or warning condition explicitly. If a handler is
defined to handle the exception, it is called immediately by the SIGNAL statement;
otherwise the control is returned to the caller.
RESIGNAL
The RESIGNAL statement can only be coded as part of the SQL/PSM Condition handler
and is used to re-signal an error or warning condition. It returns SQLSTATE and SQL
Message text to the invoker.
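A sketch of both statements; the SQLSTATE values, the condition tested, and the message texts are illustrative:

```sql
-- SIGNAL: raise a user-defined condition from procedure logic
If ParRaise < 0 Then
   Signal SqlState '75001'
      Set Message_Text = 'Raise must not be negative';
End If;

-- RESIGNAL: from within a condition handler, map the condition to another SQLSTATE
Declare Exit Handler For SqlException
   Resignal SqlState '75002'
      Set Message_Text = 'Update of MyEmployee failed';
```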
For a more complete description of error handling in stored procedures, triggers, and user
defined functions, refer to the redbook Stored Procedures, Triggers and User Defined
Functions on DB2 Universal Database for iSeries, SG24-6503.
In this section we address the differences and some workarounds to solve this issue.
Table 9-1 compares the different RPG, DDS, and SQL data types.

Table 9-1 Character data types in RPG, DDS, and SQL

Character fixed length: RPG data type A; DDS data type A; SQL CHAR, CHARACTER.
Character varying length: RPG data type A with keyword VARYING; DDS data type A with
keyword VARLEN; SQL VARCHAR, CHAR VARYING, CHARACTER VARYING.
Indicator: RPG data type N; no DDS or SQL equivalent.
Graphic fixed length: RPG data type G; DDS data types G, E, O, J; SQL GRAPHIC;
CCSID 65535 or a CCSID with DBCS encoding scheme.
Graphic varying length: RPG data type G with keyword VARYING; DDS data types G, E, O, J
with keyword VARLEN; SQL VARGRAPHIC, GRAPHIC VARYING.
Binary varying length: RPG SQLTYPE(BINARY: Length); no DDS equivalent; SQL
VARBINARY, BINARY VARYING; CCSID 65535.
Character large object: RPG SQLTYPE(CLOB: Length), SQLTYPE(CLOB_LOCATOR),
SQLTYPE(CLOB_FILE); no DDS equivalent; SQL CLOB, CHAR LARGE OBJECT,
CHARACTER LARGE OBJECT.
Double-byte large object: RPG SQLTYPE(DBCLOB: Length), SQLTYPE(DBCLOB_LOCATOR),
SQLTYPE(DBCLOB_FILE); no DDS equivalent; SQL DBCLOB; CCSID with DBCS encoding
scheme.
9.1.1 Character data types
Character data or strings may contain one or more single-byte or double-byte characters,
depending on the format specified.
Single-byte character set (SBCS) data
Data in which every character is represented by a single byte. Each SBCS data character
string has an associated Coded Character Set Identifier (CCSID). If necessary, an SBCS
string is converted before it is used in an operation with a character string that has a
different CCSID.
Double-byte character set (DBCS) data
Data in which every character is represented by a character from the double-byte
character set (DBCS) that does not include the shift-out or shift-in characters. Every
DBCS graphic string has a CCSID that identifies a double-byte coded character set. If
necessary, a DBCS graphic string is converted before it is used in an operation with a
DBCS graphic string that has a different DBCS CCSID.
UCS-2: RPG data type C; one or more double-byte characters; maximum length
32,766 bytes (16,383 characters); CCSID 13488 = UCS-2, CCSID 1200 = UTF-16.
You define a character field by specifying A in the Data-Type entry of the appropriate
specification. You can also define one using the LIKE keyword on the Definition specification
where the parameter is a character field.
The length of a character field must be defined between 1 and 65,535 bytes.
You define an indicator field by specifying N in the Data-Type entry of the appropriate
specification. You can also define an indicator field using the LIKE keyword on the Definition
specification where the parameter is an indicator field.
Note: There is no equivalent in SQL. When indicators must be saved in database files, the
appropriate column must be defined as CHARACTER with a length of one byte.
The length of a graphic field, in bytes, is two times the number of graphic characters in the
field. The fixed-length graphic format is a character string with a set length where each
character is represented by 2 bytes.
You define a graphic field by specifying G in the Data-Type entry of the appropriate
specification. You can also define one using the LIKE keyword on the Definition specification
where the parameter is a graphic field.
The default initialization value for graphic data is X'4040'. The value of *HIVAL is X'FFFF',
and the value of *LOVAL is X'0000'.
This character set can encode the characters for many written languages. Fields defined as
UCS-2 data do not contain shift-out (SO) or shift-in (SI) characters.
The length of a UCS-2 field, in bytes, is two times the number of UCS-2 characters in the
field. The fixed-length UCS-2 format is a character string with a set length where each
character is represented by 2 bytes.
You define a UCS-2 field by specifying C in the data type entry of the appropriate
specification. You can also define one using the LIKE keyword on the Definition specification
where the parameter is a UCS-2 field.
The default initialization value for UCS-2 data is X'0020'. The value of *HIVAL is X'FFFF',
*LOVAL is X'0000', and the value of *BLANKS is X'0020'.
Note: SQL and DDS do not have different data types for graphic and Unicode. Double-byte
characters are always defined with the GRAPHIC data type. The CCSID determines
whether Unicode or another DBCS is used.
Binary strings with fixed and varying length
Large objects (LOB)
– Character large objects (CLOBs)
– Double byte character large objects (DBCLOBs)
– Binary large objects (BLOBs)
Character varying length: SQL VARCHAR; one or more single-byte characters with varying
length; maximum 32,740 bytes.
Graphic varying length: SQL VARGRAPHIC, GRAPHIC VARYING; double-byte characters
with varying length; maximum 16,370 characters; CCSID 65535 or a CCSID with DBCS
encoding scheme.
Character large object: SQL CLOB, CHAR LARGE OBJECT, CHARACTER LARGE OBJECT;
single-byte characters; maximum 2,147,483,647 bytes (2 gigabytes).
Double-byte large object: SQL DBCLOB; double-byte characters; maximum 2,147,483,647
bytes = 1,073,741,823 characters (2 gigabytes); CCSID with DBCS encoding scheme.
Binary large object: SQL BLOB; bytes; maximum 2,147,483,647 bytes (2 gigabytes);
CCSID 65535.
The maximum length a single-byte character string with fixed length can have is 32,766 bytes.
Note: An RPG character field can be defined with up to 65,535 bytes. If you have to store
character fields that can contain more than 32,766 bytes, you have to define a CLOB in
SQL.
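In SQL such a column is simply declared as a CLOB, and in RPG the matching host variable is declared with the SQLTYPE keyword, for example D MyText S SQLTYPE(CLOB: 100000). A sketch of the table side; the table and column names are illustrative:

```sql
Create Table MySchema/MyDocuments
      (DocId   Integer Not Null Primary Key,
       DocText Clob(100 K));
```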
Graphic strings
In contrast to RPG, SQL does not use different data types for unicode and other DBCS. If
unicode is used, the CCSID 13488 for UCS-2 or 1200 for UTF-16 must be associated.
Graphic strings are defined through data type GRAPHIC, the number of characters the string
can have, and the CCSID.
The length attribute for graphic strings with fixed length must be between 1 and 16,383
inclusive, which corresponds to 32,766 bytes. Contrary to character strings, the maximum
length for graphic strings is identical in RPG and SQL.
Note: RPG has two different data types for double-byte characters. The UCS-2 Unicode
data type (C) matches with CCSID 13488 and 1200, while all other double-byte characters
must be defined with the graphic data type (G).
Binary strings
A binary string is a sequence of bytes. The length of a binary string is the number of bytes in
the sequence. A binary string has a CCSID of 65535.
Note: In RPG no data type directly matches binary strings. However, in ILE RPG, a
fixed-length binary-string variable can be declared using the SQLTYPE keyword.
The following example shows how to define a field with the BINARY data type within RPG:
D MySqlBinary S SqlType(BINARY: 1000)
The storage allocated for variable-length character fields is 2 bytes longer than the declared
maximum length. The left-most 2 bytes are an unsigned integer field containing the current
length in characters, graphic characters, or UCS-2 characters. The actual character data
starts at the third byte of the variable-length field.
Allocating the full declared length is a disadvantage if the maximum field length is only
occasionally used. With varying-length fields, the current data can be accessed directly
through the length saved in the first 2 bytes.
Example 9-1 shows how to define character fields with varying length within RPG.
Note: When using the VARYING keyword, the length definition is always required, which
means varying fields cannot be referenced through the keyword LIKE.
For a VARCHAR column, the length attribute must be between 1 and 32740 inclusive, that is,
less than the maximum length fixed character fields can have.
Note: In RPG, varying character fields can be defined with the same maximum length as
fixed-length character fields, while the maximum length of SQL-defined varying fields is
always shorter than that of their fixed-length counterparts.
Varying graphic strings in SQL must be defined with either VARGRAPHIC or GRAPHIC
VARYING.
The length attribute must be between 1 and 16370 inclusive, which is the maximum number
of characters the column can hold. The maximum length a graphic field with varying length
can have differs from the maximum length a graphic field with fixed length can have.
Varying binary strings in SQL must be defined with either VARBINARY or BINARY
VARYING.
Packed, zoned, and binary formats should be specified for fields when:
Using values that have implied decimal positions, such as currency values
Manipulating values having more than 19 digits
Ensuring a specific number of digits for a field is important
The zone portion of the low-order byte indicates the sign (positive or negative) of the decimal
number. The standard signs are used: Hexadecimal F for positive numbers and hexadecimal
D for negative numbers. In zoned-decimal format, each digit in a decimal number includes a
zone portion; however, only the low-order zone portion serves as the sign.
A decimal value is a packed or zoned decimal number with an implicit decimal point. The
position of the decimal point is determined by the precision (total number of digits) and the
scale (number of digits to the right of the decimal point) of the number. The scale cannot be
negative or greater than the precision. The maximum length a zoned data type can have is 63
digits, which applies to both SQL and RPG.
All values of a decimal column have the same precision and scale.
Note: The total number of digits and decimal positions decimal numbers can have are
identical in both RPG and SQL. A zoned field can have up to 63 digits.
Note: Keep in mind that RPG translates the numeric data types as far as possible to
packed numeric data types. If you really want to work with the zoned data type in RPG and
not with converted packed data types, you may embed your zoned numeric field into either
an internal or external data structure.
Note: The total number of digits and decimal positions that decimal numbers can have are
identical in both RPG and SQL. The maximum number of digits a packed numeric field can
have is 63.
Note: As already pointed out, RPG converts numeric data types to packed as far as
possible. But there is at least one exception, where packed data are handled as zoned
data: Numeric fields that are defined through the file description (F-specification) without
any explicit length or data type are handled as zoned fields.
In SQL only the integer data type is available. Depending on the number of bytes, the values
are stored in different data types:
Small Integer For 2 bytes
Integer/large integer For 4 bytes
Big Integer For 8 bytes
Table 9-4 compares the different binary data types in RPG and SQL.
1 byte: RPG 3I 0; range -128 to 127; no SQL equivalent
2 bytes: RPG 5I 0; range -32,768 to 32,767; SQL Small Integer (SMALLINT)
4 bytes: RPG 10I 0; range -2,147,483,648 to 2,147,483,647; SQL Integer (INTEGER)
8 bytes: RPG 20I 0; range -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807;
SQL Big Integer (BIGINT)
Note: Specify integer for fields when no decimal positions are needed. Binary
representation is the most compact representation of numeric values.
To define binary fields the data type B must be specified and the precision and number of
decimal positions added.
A binary field can be from one to nine digits in length and can be defined with decimal
positions. If the length of the field is from one to four digits, the compiler assumes a binary
field length of 2 bytes. If the length of the field is from five to nine digits, the compiler assumes
a binary field length of 4 bytes. Because of these length restrictions, the highest decimal
value that can be assigned to a 2-byte binary field is 9,999, and the highest decimal value that
can be assigned to a 4-byte binary field is 999,999,999. In comparison, a 2-byte binary
field can hold up to 2^16 values, corresponding to a range from -32,768 to +32,767, and a
4-byte binary field up to 2^32 values, corresponding to a range from -2,147,483,648 to
+2,147,483,647.
In RPG, binary fields are converted to packed numeric fields with the appropriate number of
digits and decimal positions, and they are subject to the restrictions (that is, overflow) of
packed fields. In general, a binary field of n digits can have a maximum value of n 9s. This
discussion assumes zero decimal positions.
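The gap between RPG's digit-limited B (binary) type and the full range its underlying bytes could hold can be illustrated outside RPG. The following is a minimal Python sketch; the helper names are illustrative only and are not part of RPG or SQL.

```python
# Sketch of the gap between RPG's digit-limited B (binary) type and the
# full range a 2- or 4-byte signed integer can hold.

def rpg_binary_max(digits: int) -> int:
    """Highest value assignable to an RPG B field of the given digit length."""
    if not 1 <= digits <= 9:
        raise ValueError("RPG binary fields have 1 to 9 digits")
    return 10 ** digits - 1          # n digits -> n nines

def twos_complement_range(n_bytes: int) -> tuple:
    """Range a signed integer of n_bytes can actually represent."""
    bits = n_bytes * 8
    return (-(2 ** (bits - 1)), 2 ** (bits - 1) - 1)

# A 4-digit B field occupies 2 bytes but is capped at 9,999,
# although the 2 bytes could hold -32,768 .. 32,767.
print(rpg_binary_max(4))             # 9999
print(twos_complement_range(2))      # (-32768, 32767)
print(rpg_binary_max(9))             # 999999999
print(twos_complement_range(4))      # (-2147483648, 2147483647)
```

This is exactly why the Note below recommends integer data types over B fields when no decimal positions are needed.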
Note: The binary data type should only be used when decimal positions for binary fields
are needed. Otherwise, integer data types should be preferred, because fields defined with
RPG’s binary data type cannot hold the complete range a binary field can have. Integer
fields are not converted by the RPG compiler to packed numeric fields.
If binary fields are used as data structure subfields, they have to be defined either through
their length or through the from/to position specification.
Example 9-2 shows different ways to define binary fields as data structure sub fields within
RPG.
D MySecondDS DS
D SecondBin2 1 2B 0
D SecondBin4 3 6B 0
Note: There is no data type in SQL that directly matches RPG's binary data type.
If RPG binary data has to be saved in SQL columns, the columns have to be defined with
either the small integer or the integer data type, depending on the number of bytes.
You define an integer field by specifying I in the Data-Type entry of the appropriate
specification. The decimal positions entry must always be zero. You can also define an
integer field using the LIKE keyword on a Definition specification where the parameter is an
integer field.
Note: In contrast to binary fields, integer fields are not converted into packed fields in
RPG. Because of their compact representation, they are the fastest way to access numeric
data without decimal positions.
The length of an integer field is defined in terms of number of digits; it can be 3, 5, 10, or 20
digits long. A 3-digit field takes up 1 byte of storage; a 5-digit field takes up 2 bytes of storage;
a 10-digit field takes up 4 bytes; a 20-digit field takes up 8 bytes. The range of values allowed
for an integer field depends on its length.
Note: In contrast to RPG, there are different SQL data types to store two-, four-, and
eight-byte integer values:
The equivalent for the two-byte integer data type is small integer (SMALLINT).
The equivalent for the four-byte integer is integer (INT or INTEGER).
The equivalent for the eight-byte integer is big integer (BIGINT).
In SQL, there is no data type that matches the one-byte integer data type directly. A
small integer definition must be used instead.
If integer fields are used as data structure subfields, they have to be defined either through
their length or through the from/to position specification.
Example 9-3 shows how to define integer fields as data structure sub fields.
D MySecondDS DS
D SecondInt1 1 1I 0
D SecondInt2 3 4I 0
D SecondInt4 5 8I 0
D SecondInt8 9 16I 0
The length and the scale of the SQL integer fields are defined through the specified data type.
Note: There is no data type to define one-byte binary fields. Small integer must be used
instead.
Table 9-6 on page 183 displays the list of different integer data types and their allowed data
ranges.
Table 9-6 Overview of SQL integer data types
Number of bytes  Description              Data type      Range of allowed values
2                Small Integer            SMALLINT       -32,768 - 32,767
4                Large Integer / Integer  INTEGER, INT   -2,147,483,648 - 2,147,483,647
8                Big Integer              BIGINT         -9,223,372,036,854,775,808 - 9,223,372,036,854,775,807
Note: In contrast to RPG, it is not possible to define unsigned integer fields within SQL.
When unsigned field values must be stored in SQL columns, integer or decimal data types
must be used. The maximum value of an unsigned field is almost twice as high as that of a
signed integer field of the same size, so it may be necessary to change to the next larger
integer definition or to use a decimal data type.
You define an unsigned field by specifying U in the Data-Type entry of the appropriate
specification. The decimal positions entry must always be zero. You can also define an
unsigned field using the LIKE keyword on the Definition specification where the parameter
is an unsigned field.
The length of an unsigned field is defined in terms of number of digits; it can be 3, 5, 10, or 20
digits long. A 3-digit field takes up 1 byte of storage; a 5-digit field takes up 2 bytes of storage;
a 10-digit field takes up 4 bytes; a 20-digit field takes up 8 bytes. The range of values allowed
for an unsigned field depends on its length.
Because no negative values are allowed and the valid range begins with zero, the
maximum value is almost twice as high as the maximum value of the corresponding integer
field. This must be considered when working with integer, unsigned, and decimal fields.
Table 9-7 shows the definition of the different integer data types and their valid ranges.
Note: In SQL, there is no equivalent for unsigned data types. Integer or decimal data types
must be defined instead. You have to watch the maximum values that integer and
unsigned fields can hold; it may be necessary to switch to the next larger integer definition.
If you have to save the maximum value of an 8-byte unsigned field within SQL columns, you
have to define a decimal field with the appropriate number of digits.
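The size arithmetic behind this Note can be sketched briefly. The following Python fragment is illustrative only; the constant names are not part of RPG or SQL.

```python
# Sketch: why an 8-byte unsigned RPG value (20U 0) cannot fit into any
# SQL integer type and needs a DECIMAL column instead.

U8_MAX = 2 ** 64 - 1      # max of an unsigned 8-byte field (20U 0)
BIGINT_MAX = 2 ** 63 - 1  # max of SQL BIGINT (signed 8-byte)

assert U8_MAX > BIGINT_MAX            # BIGINT cannot hold it
assert len(str(U8_MAX)) == 20         # a DECIMAL(20, 0) column is wide enough

# A 2-byte unsigned (5U 0, max 65,535) overflows SMALLINT (max 32,767)
# but fits into the next larger signed type, INTEGER:
assert 2 ** 16 - 1 > 2 ** 15 - 1
assert 2 ** 16 - 1 <= 2 ** 31 - 1
print("all size checks passed")
```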
Note: Float variables conform to the IEEE standard as supported by the OS/400 operating
system. Since float variables are intended to represent scientific values, a numeric value
stored in a float variable may not represent the exact same value as it would in a packed
variable. Float should not be used when you need to represent numbers exactly to a
specific number of decimal places, such as monetary amounts.
Table 9-8 shows the different floating point definitions in RPG compared with SQL and their
valid ranges.
Table 9-8 Comparing SQL and RPG floating point data types
Number of bytes  RPG  SQL             Length   Valid data range
4                4F   REAL            --       1.17549436 × 10^-38 - 3.40282356 × 10^38
                      FLOAT(integer)  1 - 24
8                8F   DOUBLE          --       2.2250738585072014 × 10^-308 - 1.7976931348623158 × 10^308
                      FLOAT(integer)  25 - 53
Float format should be specified for fields when the same variable is needed to hold very
small and/or very large values that cannot be represented in packed or zoned values.
However, float format should not be used when more than 16 digits of precision are needed.
The length of a floating point field is defined in terms of the number of bytes. It must be
specified as either 4 or 8 bytes.
The decimal positions must be left blank. However, floating-point fields are considered to
have decimal positions. As a result, float variables may not be used in any place where a
numeric value without decimal places is required, such as an array index, do loop index, etc.
Note: In SQL, there are different data types for four-byte and eight-byte floating point
values:
The equivalent for the 4-byte floating point data type is REAL, or FLOAT with a length
between 1 and 24.
The equivalent for the 8-byte floating point data type is DOUBLE, or FLOAT with a
length between 25 and 53 (FLOAT without a length specification defaults to 53).
Floating point data types in SQL
There are three different data types that describe floating point values:
REAL, without any length or scale specification, for a single-precision floating point.
FLOAT, with a length between 1 and 24, for a single-precision floating point, or with a
length between 25 and 53 for a double-precision floating point. When the FLOAT data
type is used without any length specification, the default value 53 is used.
DOUBLE, without any length specification, for a double-precision floating point.
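The warning above against using float for monetary amounts can be demonstrated concretely. The following is a minimal Python sketch, using Python's decimal module as a stand-in for packed/zoned decimal arithmetic; it is illustrative only.

```python
# Sketch: why float is unsuitable for monetary amounts. Binary floating
# point cannot represent most decimal fractions exactly; decimal
# arithmetic (like packed/zoned fields) can.
from decimal import Decimal

total_float = 0.1 + 0.2
print(total_float == 0.3)                 # False: binary float rounds

total_dec = Decimal("0.1") + Decimal("0.2")
print(total_dec == Decimal("0.3"))        # True: exact decimal arithmetic

# Double precision carries only about 15-16 significant decimal digits,
# so more than 16 digits of precision cannot be relied upon:
print(float(10**17 + 1) == float(10**17)) # True: the +1 is lost
```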
Note: The Gregorian calendar was first implemented on October 15th, 1582 by Pope
Gregory XIII. The days between October 4th and 15th, 1582 were eliminated, while the
weekday counting was not changed (even though the Gregorian calendar was not
introduced in all European countries at the same time; Germany followed in 1700, and the
USA and Great Britain in 1752).
The Lilian Date is the number of days from October 15th, 1582 until the specified date.
October 15th is day 1.
The internal representation of a date is a string of 4 bytes that contains an integer. The
integer (called the Scaliger number) represents the date.
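The Lilian Date described above can be computed with any proleptic Gregorian calendar implementation. The following minimal Python sketch is illustrative only; the function name is not part of RPG or SQL.

```python
# Sketch of the Lilian day number: the count of days since October 15,
# 1582 (the first Gregorian day), with that day itself being day 1.
# Python's date type uses the proleptic Gregorian calendar, which matches.
from datetime import date

LILIAN_EPOCH = date(1582, 10, 15)

def lilian_day(d: date) -> int:
    return (d - LILIAN_EPOCH).days + 1

print(lilian_day(date(1582, 10, 15)))   # 1
print(lilian_day(date(2004, 7, 1)))     # 154028
```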
Depending on the associated date format, the valid date range differs:
Date formats with a two-digit year (*YMD, *DMY, *MDY, and *JUL) have a valid year
range from 1940 to 2039.
Date formats with a four-digit year (*ISO, *USA, *EUR, *JIS, and *LONGJUL) have a valid
year range from 0001 to 9999.
Date formats with a three-digit year (*CYMD, *CMDY, and *CDMY) have a valid year
range from 1900 to 2899.
Dates with two-digit and four-digit years can be defined within RPG and are therefore called
internal date formats. Dates with three-digit years cannot be defined, but they are handled
within RPG, and are therefore called external date formats. Defining dates with a
three-digit year is only available in DDS using the keyword DATFMT. These three-digit year
date formats are not valid for the date (L) type field; they are only valid on logical file zoned,
packed, or character fields whose physical file is based on date type fields.
Table 9-9 gives an overview of the internal and external date data types, their valid
separators, and valid data ranges.
Date data type in RPG
You define a date field by specifying D in the data type entry of the appropriate specification.
It is not necessary to input a length, because it is predetermined through the data type and
the date format.
The default internal format for date variables is *ISO. This default internal format can be
overridden globally by the control specification keyword DATFMT and individually by the
Definition specification keyword DATFMT.
The hierarchy used when determining the internal date format and separator for a date field
is:
1. From the DATFMT keyword specified on the Definition specification
2. From the DATFMT keyword specified on the control specification
3. *ISO
The date separators can be set by adding them to the date format keyword either in the
Definition or in the control specifications. The date separator can only be set for dates with a
two-digit year portion.
Example 9-4 shows how to define date fields with different date formats within RPG.
Example 9-4 Defining date fields with different date formats in RPG
D MyDateIso S D
D MyDateEur S D DatFmt(*Eur)
D MyDateYMD S D DatFmt(*YMD-)
Date values are stored as Scaliger numbers, and the date format provides only a method to
present the date in a readable manner. Date constants or variables used in comparisons or
assignments do not have to be in the same format or use the same separators.
Note: Date formats provide only a method to represent the internal encoded 4-byte
integer date value in a readable manner. They never change the internal value.
SQL and RPG should use identical date formats, or at least formats with identical year
ranges.
The internal representation of a time is a string of 3 bytes. Each byte consists of two packed
decimal digits. The first byte represents the hour, the second byte the minute, and the last
byte the second.
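The 3-byte packed-decimal layout described above can be sketched briefly. The following Python fragment is illustrative only; the helper names are not part of RPG or SQL.

```python
# Sketch of the 3-byte internal time representation: each byte holds two
# packed-decimal (BCD) digits -- hour, minute, and second respectively.
def pack_time(hour: int, minute: int, second: int) -> bytes:
    def bcd(v: int) -> int:
        return ((v // 10) << 4) | (v % 10)   # two decimal digits per byte
    return bytes([bcd(hour), bcd(minute), bcd(second)])

print(pack_time(18, 47, 22).hex())   # '184722'
print(pack_time(9, 5, 0).hex())      # '090500'
```

Because each byte simply carries two decimal digits, the hex dump of the internal value reads like the time itself.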
Table 9-10 shows an overview of the time data types, their valid separators, and valid data
ranges.
The default internal format for time variables is *ISO. This default internal format can be
overridden globally by the control specification keyword TIMFMT and individually by the
Definition specification keyword TIMFMT.
The hierarchy used when determining the internal time format and separator for a time field
is:
1. From the TIMFMT keyword specified on the Definition specification
2. From the TIMFMT keyword specified on the control specification
3. *ISO
The time separators can be set by adding them to the TIMFMT keyword either in the
definition or in the control specifications. The time separator can only be set for the *HMS
time format.
fields, output fields, or key fields are also converted (if required) to the necessary format for
the operation.
Note: When using the USA standard time format in RPG, the seconds portion is
overwritten by AM or PM, and the seconds are lost. If you need to calculate with seconds,
you should pick another time format.
Note: Contrary to RPG, in SQL the seconds are saved when using the USA standard time
format, because the format only affects the display. Only when time fields are defined in
RPG with the USA standard time format are the seconds lost.
The internal representation of a timestamp is a string of 10 bytes. The first 4 bytes represent
the date, the next 3 bytes the time, and the last 3 bytes the microseconds (the last 3 bytes
contain six packed digits).
So far, only one timestamp format is available. The year is always saved as a
four-digit year.
In traditional DDS-described files, NULL values are not used. However, there is a keyword,
ALWNULL, that allows creating files with fields that can contain NULL values. With the advent
of data transfer between the iSeries and PCs, and with the use of SQL-based tables, where
the use of NULL values instead of default values is standard, NULL values must be handled.
Both the compiler option ALWNULL and the keyword ALWNULL can have the same values:
*NO, which is the default value
Specifies that the ILE RPG program will not process records with null-value fields from
externally described files. If you attempt to retrieve a record containing null values, no data
in the record is accessible to the ILE RPG program and a data-mapping error occurs.
*INPUTONLY or *YES
Specifies that the ILE RPG program can successfully read records with null-capable fields
containing null values from externally described input-only database files. When a record
containing null values is retrieved, no data mapping errors occur and the database default
values are placed into any fields that contain null values. The program cannot do any of
the following:
– Use null-capable key fields.
– Create or update records containing null-capable fields.
– Determine whether a null-capable field is actually null while the program is running.
– Set a null-capable field to be null.
*USRCTL
Specifies that the ILE RPG program can read, write, and update records with null values
from externally described database files. Records with null keys can be retrieved using
keyed operations. The program can determine whether a null-capable field is actually
NULL, and it can set a null-capable field to be NULL for output or update. The programmer
is responsible for ensuring that fields containing null values are used correctly within the
program.
Note: Both the compiler option ALWNULL and the keyword ALWNULL only affect
externally described files that are defined in the File specifications or that are used as
externally described data structures.
If the file used for an externally described data structure has null-capable fields defined, the
matching RPG subfields are defined to be null-capable. Similarly, if a record format has
null-capable fields, a data structure defined with LIKEREC will have null-capable subfields.
When a data structure has null-capable subfields, another data structure defined like that
data structure using LIKEDS will also have null-capable subfields. However, using the LIKE
keyword to define one field like another null-capable field does not cause the new field to be
null-capable.
The %NULLIND built-in function can be used to query or set the null indicator for null-capable
fields. This built-in function can only be used if the ALWNULL(*USRCTL) keyword is specified
on a control specification or if the compiler option ALWNULL is set to *USRCTL. The field
name can be a null-capable array element, data structure, stand-alone field, subfield, or
multiple occurrence data structure.
%NULLIND can only be used in expressions in extended factor 2 or in free format coding.
When used on the right-hand side of an expression, this function returns the setting of the null
indicator for the null-capable field. The setting can be *ON or *OFF. When used on the
left-hand side of an expression, this function can be used to set the null indicator for
null-capable fields to *ON or *OFF. The content of a null-capable field remains unchanged.
The content of a field can only be changed if the NULL indicator is set to *OFF.
Example 9-5 on page 192 shows the DDS description for the file OrdHead, where all the
fields with the exception of OrdHNbr are NULL-capable fields.
A K ORHNBR
Example 9-6 shows an example of how to handle NULL values in RPG. In the example:
If the delivery date contains NULL values and order total contains neither NULL values nor
zeros, delivery date is set to current date.
If the delivery date contains NULL values and order total contains zeros, order total is set
to NULL.
If delivery date is not NULL and order total contains either NULL values or zeros, delivery
date and order total are set to NULL.
Example 9-6 Handling NULL values in RPG
 /Free
   DoU %EOF(OrdHeadF);
     Read OrdHeadF;
     If %EOF;
       leave;
     Endif;
     select;
       When %NullInd(OrHDly) = *On
            and %NullInd(OrHTot) = *Off and OrHTot <> *Zeros;
         %NullInd(OrHDly) = *Off;
         OrHDly = %Date();
       When %NullInd(OrHDly) = *On
            and %NullInd(OrHTot) = *Off and OrHTot = *Zeros;
         %NullInd(OrHTot) = *On;
       When %NullInd(OrHDly) = *Off
            and ( %NullInd(OrHTot) = *Off and OrHTot = *Zeros
                  or %NullInd(OrHTot) = *On);
         %NullInd(OrHDly) = *On;
         %NullInd(OrHTot) = *On;
     EndSl;
     update OrdHeadF;
   EndDo;
   Return;
 /End-Free
An indicator variable must be defined as a 2-byte binary field, matching the RPG
definition 5I 0 and the SQL definition SMALLINT. You specify an indicator variable
(preceded by a colon) immediately after the host variable.
Example 9-7 shows how indicator variables can be defined as stand-alone fields and be used
with host variables.
D IndDelDate S 5I 0
D IndOrderTotal S 5I 0
*-----------------------------------------------------------------------------------------
C/EXEC SQL
C+ Select OrHDly, OrHTot
C+ into :DelDate :IndDelDate, :OrderTotal :IndOrderTotal
C+ from OrdHead
C+ where OrHNbr = :OrderNo
C/END-EXEC
If a host structure is used for the retrieval values, you define a 2-byte binary (integer) array
with as many elements as data structure subfields. You specify the indicator array (preceded
by a colon) immediately after the host structure.
Example 9-8 shows how a host structure can be used with an indicator array.
D Arr1HostVar S 5I 0 Dim(2)
*-----------------------------------------------------------------------------------------
C/EXEC SQL
C+ Select OrHDly, OrHTot
C+ into :DsHostVar :Arr1HostVar
C+ from OrdHead
C+ where OrHNbr = :OrderNo
C/END-EXEC
It is not possible to use either an indicator data structure or single indicators in combination
with a host structure. If you prefer named indicators, you can define a data structure with
named indicators and overlay the data structure with an array.
Example 9-9 defines a data structure for the indicator values, that is overlaid by an array. In
the SQL statement the indicator array is used.
D Ds2IndHostVar DS
D Ind2DelDate 5I 0
D Ind2OrdTotal 5I 0
D Arr2HostVar 5I 0 Dim(2) overlay(DS2IndHostVar)
*-----------------------------------------------------------------------------------------
C/EXEC SQL
C+ Select OrHDly, OrHTot
C+ into :DSHostVar :Arr2HostVar
C+ from OrdHead
C+ where OrHNbr = :OrderNo
C/END-EXEC
If a host structure array is used, you define a data structure containing as many subfields as
the host structure array, and add either the keyword OCCURS(Elements), to create a
multiple-occurrence data structure, or the keyword DIM(Elements), to create an array data
structure.
Verifying retrieval string length through indicator variables
You can also use an indicator variable to verify that a retrieved string value has not been
truncated.
If truncation occurs, the indicator variable contains a positive integer that specifies the
original length of the string. If the string represents a large object (LOB) and the original
length is greater than 32767, the value stored in the indicator variable is 32767, since no
larger value can be stored in a halfword integer.
When the database manager returns a value from a result column, you can test the indicator
variable. If the value of the indicator variable is less than zero, the value of the result
column is null; in this case, the host variable is set to the default value for the result
column.
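The indicator-variable convention just described can be summarized in a short sketch. The following Python fragment is illustrative only; the function name is hypothetical and not part of SQL.

```python
# Sketch of the indicator-variable convention: a negative indicator means
# NULL, zero means a complete value, and a positive value is the original
# length of a string that was truncated on retrieval.
def describe_indicator(ind: int) -> str:
    if ind < 0:
        return "NULL (host variable holds the column default)"
    if ind > 0:
        return f"truncated (original length was {ind})"
    return "value retrieved completely"

print(describe_indicator(-1))
print(describe_indicator(0))
# For LOBs longer than 32767, the reported original length is capped at 32767:
print(describe_indicator(32767))
```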
Example 9-11 shows how an indicator variable can be used to set columns to NULL values.
C/EXEC SQL
C+ Update ITSO4710/ORDHEAD
C+ set ORHTOT = :OrderTotal :IndOrderTotal
C+ Where ORHDLY = Date('2004-09-02')
C/END-EXEC
C Return
You can directly use the SQL special word NULL to set a column to a NULL value.
Example 9-12 shows how columns can be updated by directly specifying the NULL value.
If you want to count rows using the SQL column function COUNT(*), all rows are counted, even
if one or several rows contain only NULL values. If you use COUNT(FieldName) instead, and
FieldName contains NULL values, only the rows without a NULL value are counted.
If you want to calculate the average using the SQL column function AVG(FieldName) and one
or more rows contain a NULL value, those rows are not considered. Let us assume that we
have three rows containing 2, 4, and NULL; the average will be 3.
Other SQL column functions like STDDEV (to calculate the biased standard deviation) and
VARIANCE (to calculate the biased variance) do not consider rows containing NULL values
either.
If you want to calculate the average, standard deviation, or variance over all rows, you have
to convert the NULL values into default values. This can be done by using the scalar function
COALESCE or VALUE.
Note: COALESCE should be preferred for conformance with the SQL:1999 standard.
The following example shows how the NULL value can be replaced by a zero using SQL
scalar function COALESCE:
SELECT Avg(Coalesce(ORDER_TOTAL, 0)) FROM ORDER_HEADER
There is an SQL scalar function NULLIF that converts specified values into NULL values. The
following example shows how a zero value can be replaced through a NULL value using the
SQL scalar function NULLIF:
SELECT Avg(NullIf(ORDER_TOTAL, 0)) FROM ORDER_HEADER
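The interplay of AVG, COALESCE, and NULLIF can be modeled outside SQL. The following minimal Python sketch uses None for NULL and mirrors the three-row example above (2, 4, NULL); the function names are stand-ins for the SQL functions, not real APIs.

```python
# Sketch of how AVG interacts with NULL, COALESCE, and NULLIF,
# using None to represent SQL NULL.
def avg(values):
    kept = [v for v in values if v is not None]   # AVG ignores NULLs
    return sum(kept) / len(kept)

def coalesce(v, default):
    return default if v is None else v

def nullif(v, match):
    return None if v == match else v

rows = [2, 4, None]
print(avg(rows))                                  # 3.0 -> NULL row ignored
print(avg([coalesce(v, 0) for v in rows]))        # 2.0 -> NULL counted as 0
print(avg([nullif(v, 0) for v in [2, 4, 0]]))     # 3.0 -> zeros treated as NULL
```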
In the first step, we can add additional fields containing the date and time information to our
tables. Then we have to fill the new fields by translating the existing numeric or
character values into real date or time values. To guarantee that the new fields are
always kept current, we can add before-update triggers that derive the date and time values
from the numeric and alphanumeric fields.
After having modernized our tables, we can modernize the date and time calculations in our
programs, using the date and time fields instead of the numeric or character date and time
values. RPG and SQL both provide a set of functions to facilitate date calculation, but they
handle the date calculation differently. This may be an advantage, because you can always
choose the best method.
If all programs use only the new date and time fields, the original numeric and character fields,
as well as the triggers that update the date and time fields, can be removed. In the following
sections we show, in RPG as well as in SQL:
How to convert numeric or character date or time fields into real date and time fields and
vice versa
How to check for valid dates in numeric or character fields
How to retrieve the system and job date
How to calculate time differences and how to add and subtract time durations
Some useful SQL scalar functions
In factor 1, the date or time format of the numeric or alphanumeric date or time can be
specified. If factor 1 is *Blanks, the *ISO format is used as the default.
If the date or time is alphanumeric but does not contain any date or time separator, a zero
must be appended to the date or time format. Date separators can be added in factor 1 for
two-digit year formats. Time separators can be added for the time format *HMS.
Note: The date or time format cannot be used as a variable. If alternate date or time
formats are needed, you have to code a separate statement for each data or time format.
Example 9-14 shows how numeric and alphanumeric dates can be converted to dates using
the operation code MOVE.
Example 9-14 Converting numeric and alphanumeric values to dates by using MOVE operation code
D DateIso S D
D DateN4Year S 8P 0
D DateN2Year S 6P 0
D DateA4YearSep S 10A
D DateA2YearSep S 8A
D DateA4Year S 8A
C Return
If no parameter is specified, the current date, time, or timestamp is used. In the first
parameter, the alphanumeric field, numeric field, or a date string can be specified. If the date
or time format is not *ISO, the date or time format of the alphanumeric or numeric date must
be specified in the second parameter.
Example 9-15 shows how numeric and alphanumeric dates can be converted to dates using
the built-in function %DATE.
Example 9-15 Converting numeric and alphanumeric values to dates using built-in function %DATE()
D DateIso S D
D DateN4Year S 8P 0
D DateN2Year S 6P 0
D DateA4YearSep S 10A
D DateA2YearSep S 8A
D DateA4Year S 8A
D DateA2Year S 6A
*-----------------------------------------------------------------------------------------
/Free
DateN4Year = 20040826;
DateIso = %Date(DateN4Year);
Dsply DateIso;
DateN4Year = 27082004;
DateIso = %Date(DateN4Year: *Eur);
Dsply DateIso;
DateN2Year = 082504;
DateIso = %Date(DateN2Year: *MDY);
Dsply DateIso;
DateA4YearSep = '2004-08-26';
DateIso = %Date(DateA4YearSep);
Dsply DateIso;
DateA4Year = '27082004';
DateIso = %Date(DateA4Year: *Eur0);
Dsply DateIso;
DateA2YearSep = '08-25-04';
DateIso = %Date(DateA2YearSep: *MDY-);
Dsply DateIso;
DateA2Year = '082704';
DateIso = %Date(DateA2Year: *MDY0);
Dsply DateIso;
Return;
/END-FREE
Example 9-16 shows a data structure where character fields are overlaid by date fields.
Additionally, it is shown how the character and date fields can be used.
DateCharEur = '01.07.2004';
Dsply DateEur;
Return;
/End-Free
In contrast to RPG, in SQL you cannot convert numeric date or time representations directly
into date or time values. You first have to convert your numeric field into a character string
containing a valid date or time representation.
If you have to convert numeric fields into dates, it is easier to use the RPG functions:
just write a small function in RPG and create a UDF from it that can be used in SQL.
The DATE scalar function also accepts a string with an actual length of 7 that represents a
valid date in the form YYYYNNN, where YYYY are digits denoting a year, and NNN are digits
between 001 and 366 denoting a day of that year.
Example 9-17 shows an INSERT statement, where different date columns are filled with
dates, and every date is based on a character string in a different date format.
Example 9-17 Using the SQL scalar function DATE to convert character strings to dates
Insert into MySchema/MyTable
Values(date('2004-08-27'),
date('28.08.2004'),
date('08/29/2004'),
date('2004245'))
Example 9-18 Inserting character strings into date fields with SQL
Insert into MySchema/MyTable
Values('2004-08-27',
'28.08.2004',
'08/29/2004',
'2004245')
If your character string contains only a 2-digit year, you have to convert it to a 4-digit year.
This can be done using a CASE expression.
Example 9-19 shows how a date field can be updated based on a character field with a date
representation like YY-MM-DD.
Example 9-19 Converting a character string with a 2-digit year with SQL
update MySchema/MyTable
set MyDate = Date(case when substr(MyDateAlpha, 1, 2)
between '40' and '99'
then '19'
else '20'
end
concat MyDateAlpha)
Where MyDate is NULL
If you frequently have to deal with converting character strings to dates, it is a good idea to
create a user-defined function.
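The 2-digit-year windowing rule used in the CASE expression above (years 40-99 map to 19xx, years 00-39 to 20xx, matching the 1940-2039 range of RPG's two-digit-year formats) can be sketched briefly. The following Python fragment is illustrative only; the function name is hypothetical.

```python
# Sketch of the 2-digit-year windowing rule: '40'-'99' -> 19xx,
# '00'-'39' -> 20xx, applied to a YY-MM-DD string.
def expand_two_digit_year(yymmdd: str) -> str:
    yy = yymmdd[:2]
    century = "19" if "40" <= yy <= "99" else "20"
    return century + yymmdd

print(expand_two_digit_year("40-01-01"))   # '1940-01-01'
print(expand_two_digit_year("39-12-31"))   # '2039-12-31'
```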
It is also possible to convert numeric values with the scalar function DATE, but in contrast to
RPG, the numeric value represents the number of days since 0001-01-01. For example,
731,763 represents July 1st, 2004.
To determine the number of days since 0001-01-01, the scalar function DAYS(Date) can be
used.
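This day count can be checked with any proleptic Gregorian implementation that also numbers 0001-01-01 as day 1. The following minimal Python sketch is illustrative only.

```python
# Sketch of SQL's DAYS arithmetic using Python's proleptic-Gregorian
# ordinal, which likewise counts 0001-01-01 as day 1.
from datetime import date

print(date(2004, 7, 1).toordinal())       # 731763
print(date.fromordinal(731763))           # 2004-07-01
# A date difference in days, as DAYS(d1) - DAYS(d2) would compute it:
print(date(2004, 7, 1).toordinal() - date(2004, 1, 1).toordinal())   # 182
```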
Example 9-20 shows how time can be inserted using character strings in different time
formats.
Example 9-20 Inserting character string containing time values into time fields with SQL
Insert into MySchema/MyTable
values(time('18:47:22'),
time('18.48.22'),
time('06:47 PM'))
The TIMESTAMP function also accepts a string with an actual length of 14 that represents a
valid date and time in the form YYYYXXDDHHMMSS, where YYYY is the year, XX the month,
DD the day, HH the hour, MM the minute, and SS the seconds.
Example 9-21 shows an INSERT statement where different timestamp formats are used.
Example 9-21 Inserting character strings containing timestamp values into timestamps with SQL
Insert into MySchema/MyTable
values(Timestamp('2004-08-31-18.23.45.123456'),
Timestamp('20040831182345'),
Timestamp('2004-08-31', '18.12.34'))
In factor 1, the date or time format of the numeric or alphanumeric date or time can be
specified. If factor 1 is *Blank, the *ISO format is used as the default.
If the date or time is a character representation without any date or time separator, a zero
must be appended to the date or time format. Date separators can be added in factor 1 for
two-digit year formats. Time separators can be added for the time format *HMS.
Note: The date or time format cannot be used as a variable. If alternate date or time
formats are needed, you have to code a separate statement for each data or time format.
Example 9-22 shows how date fields can be converted to character or numeric representation
using the operation code MOVE.
Example 9-22 Converting dates to character or numeric representation using operation code MOVE
D MyDateAlpha S 10A
D MyDateNum S 8P 0
D MyDateIso S D
*-----------------------------------------------------------------------------------------
C eval MyDateIso = D'2004-08-29'
C Return
The built-in function %CHAR allows you to convert date, time, or timestamp fields into
character representations. The second parameter is optional and contains the date or time
format the character string must have. If the second parameter is not specified, the date or
time format *ISO is used.
If the character string is not supposed to contain date or time separators, a zero must be
appended to the date or time format.
Example 9-23 shows how date fields can be converted to character representations using the
built-in function %CHAR.
Example 9-23 Converting date fields to character representation using the built-in function %CHAR()
D MyDateAlpha S 10A
D MyDateIso S D
*-----------------------------------------------------------------------------------------
/Free
MyDateIso = D'2004-08-29';
MyDateAlpha = %Char(MyDateIso);
Dsply MyDateAlpha;
Return;
/End-Free
Beginning with release V5R3M0, date and time fields can be directly converted to numeric
fields using the built-in function %DEC.
Before release V5R3M0, the built-in function %DEC could only be used to format numeric
values or to convert character representations to a numeric field.
When the first parameter of the built-in function %DEC is a date, time, or timestamp
expression, the optional second format parameter specifies the format of the value returned.
The converted decimal value will have the number of digits that a value of that format can
have, and zero decimal positions. For example, if the first parameter is a date, and the format
is *YMD, the decimal value will have six digits.
Example 9-24 shows how date fields can be converted into numeric fields using the built-in
function %DEC.
Example 9-24 Converting date fields into numeric values using the built-in function %DEC()
D MyDateNum S 8P 0
D MyDateNum6 S 6P 0
D MyDateIso S D
*-----------------------------------------------------------------------------------------
/Free
MyDateIso = D'2004-08-29';
MyDateNum = %Dec(MyDateIso); // *ISO is the default format
Dsply MyDateNum;
MyDateNum6 = %Dec(MyDateIso: *YMD); // six digits, for example 040829
Dsply MyDateNum6;
Return;
/End-Free
Prior to release V5R3M0, you either had to use a combination of the built-in functions
%CHAR and %INT or %CHAR and %DEC, or use the operation code MOVE.
Example 9-25 shows how date fields can be converted to numeric fields using the built-in
functions %CHAR and %INT or %DEC.
Example 9-25 Converting date fields to numeric representation with different built-in functions
D MyDateNum S 8P 0
D MyDateIso S D
*-----------------------------------------------------------------------------------------
/Free
MyDateIso = D'2004-08-29';
MyDateNum = %Int(%Char(MyDateIso: *ISO0));
Dsply MyDateNum;
Return;
/End-Free
As in RPG, the first parameter of the SQL scalar function CHAR represents the date or time
field, while the second is optional and represents the date or time format of the returned
value. In SQL, the date and time format must be specified without a leading asterisk (*).
Note: In contrast to RPG, only four-digit year date formats can be specified as the second
parameter.
If no date or time format is specified, the job's date format or the format that is defined in the
set option or the compile parameter in an embedded SQL program is used. In this way you
can get a two-digit year representation.
Note: If in RPG no date format is specified, the *ISO format is used. If in SQL no date
format is specified, the job's format is used.
If you need a date representation with a two-digit year, it is better to convert it with RPG. If
you need those conversions frequently, just write a small function in RPG and create a UDF.
Example 9-26 shows how the scalar function CHAR can be used to convert date fields into
character representations.
Example 9-26 Converting dates to character representation using the SQL scalar function CHAR
Select char(MyDate), char(MyDate, ISO),
char(MyDate, Eur), char(MyDate, USA)
from MyTable
Using SQL to convert date or time fields into numeric representation is a little tricky. You have
to convert your date or time field into a character representation without date or time
separators, and then cast the result into a numeric field.
If you need to convert date or time fields frequently, it is wise to write a small function in RPG
and register it as a UDF.
Example 9-27 shows how a date field can be converted into a numeric representation within
SQL.
Depending on whether date, time, or timestamp values must be checked, you have to use
the operation code TEST with different extenders:
Extender (D) to check for a valid date
Extender (T) to check for a valid time
Extender (Z) to check for a valid timestamp
Additionally, you have to specify extender (E) to enable the %STATUS and %ERROR built-in
functions.
In factor 1 you can specify the date, time, or timestamp format that has to be checked. If
factor 1 is blank, the *ISO format is checked.
If the content of the string operand is not valid, program status code 112 is signaled. Then the
error indicator is set on or the %ERROR built-in function is set to return '1', depending on the
error handling method specified.
Example 9-28 shows how a date string can be checked by using the operation code TEST.
Example 9-28 Checking for valid date and time values using operation code TEST
D MyDateAlpha S 10A
D MyTimeNum S 6P 0
D MyDate S D
D MyTime S T
*-----------------------------------------------------------------------------------------
/Free
MyDateAlpha = '09/31/2004';
Test(ED) *USA MyDateAlpha;
If %Error;
MyDate = *LoVal;
else;
MyDate = %Date(MyDateAlpha);
Endif;
MyTimeNum = 250026;
Test(ET) MyTimeNum;
If %Status = 112;
MyTime = *LoVal;
Else;
MyTime = %Time(MyTimeNum);
EndIf;
Return;
/End-Free
Example 9-29 shows how a date can be checked by using the built-in function %DATE and a
monitor group.
Example 9-29 Checking for valid date and time values using a monitor group
D MyDateAlpha S 10A
D MyTimeNum S 6P 0
D MyDate S D
D MyTime S T
*-----------------------------------------------------------------------------------------
/Free
Monitor;
MyDateAlpha = '2004-09-31';
MyDate = %Date(MyDateAlpha);
On-Error 112;
MyDate = *LoVal;
EndMon;
Monitor;
MyTimeNum = 250026;
MyTime = %Time(MyTimeNum);
On-Error;
MyTime = *LoVal;
EndMon;
Return;
/End-Free
In SQL, you can use an indicator variable to detect data mapping errors. If -2 is returned in
the indicator variable, the character or numeric string contains an invalid date or time.
Example 9-30 shows how an indicator variable can be used to check for valid dates or times.
Example 9-30 Checking for valid date and time values using SQL
D MyDateAlpha S 10A
D MyTimeAlpha S 8A
D MyDate S D
D IndMyDate S 5I 0
D MyTime S T
D IndMyTime S 5I 0
*-----------------------------------------------------------------------------------------
C eval MyDateAlpha = '2004-09-31'
C/EXEC SQL
C+ Set :MyDate :IndMyDate = :MyDateAlpha
C/END-EXEC
C If IndMyDate = -2
C eval MyDate = *LoVal
C Endif
C Return
Example 9-31 Retrieve the system date and time by using the operation code TIME
D SysTime S T
D SysStamp S Z
D SysDateTimeN S 14P 0
*-----------------------------------------------------------------------------------------
C time SysTime
C SysTime Dsply
C time SysStamp
C SysStamp Dsply
C time SysDateTimeN
C SysDateTimeN Dsply
C Return
Example 9-32 Retrieve the system date and time by using built-in functions
D SysDate S D
D SysTime S T
D SysStamp S Z
*-----------------------------------------------------------------------------------------
/Free
SysDate = %Date();
Dsply SysDate;
SysTime = %Time();
Dsply SysTime;
SysStamp = %Timestamp();
Dsply SysStamp;
Return;
/End-Free
Note: If a field is initialized in the global Definition specifications, the initialization is only
executed the first time the program or service program is called. This must be considered
when a program is ended with RETURN instead of *INLR and the appropriate activation
group is not *NEW.
If the field is defined in the local Definition specification without specifying the keyword
STATIC, the initialization is executed every time the procedure is called.
Example 9-33 shows how date and time fields can be initialized with system date and time.
Example 9-33 Initialization of date and time fields with system time
D SysDate S D inz(*Sys)
D SysTime S T inz(*Sys)
D SysStamp S Z inz(*Sys)
Example 9-34 Retrieving the job date by using the built-in function %Date()
D MyJobDate S D
*-----------------------------------------------------------------------------------------
/Free
MyJobDate = %Date(*Date);
Dsply MyJobDate;
Return;
/End-Free
Note: SQL accepts both spellings of these special registers (for example, CURRENT DATE
and CURRENT_DATE); the SQL:1999 Core standard uses the form with the underscore.
Scalar functions
– CURTIME retrieves the current time.
– CURDATE retrieves the current date.
– NOW retrieves the current timestamp.
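These scalar functions can be used in any expression. For example, all three current values
can be retrieved in one statement (SYSIBM/SYSDUMMY1 is the one-row system table that
is convenient for evaluating expressions):

Select CurDate(), CurTime(), Now()
from SysIBM/SysDummy1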
The ADDDUR operation adds the duration specified in factor 2 to a date or time and places
the resulting date, time, or timestamp in the result field.
The SUBDUR operation can be used to subtract a duration specified in factor 2 from a field or
constant specified in factor 1 and place the resulting date, time, or timestamp in the field
specified in the result field.
Factor 1 is optional and may contain a date, time, or timestamp field, subfield, array, array
element, literal or constant. If factor 1 contains a field name, array, or array element, then its
data type must be the same data type as the field specified in the result field. If factor 1 is not
specified, the duration is added to the field specified in the result field.
Factor 2 is required and contains two subfactors. The first is a duration and may be a numeric
field, array element, or constant with zero decimal positions. If the duration is negative then it
is subtracted from the date. The second subfactor must be a valid duration code indicating
the type of duration.
Table 9-12 Duration codes
Unit          Duration code (long)   Duration code (short)   Valid for
Year          *YEARS                 *Y                      Timestamp and Date
Month         *MONTHS                *M                      Timestamp and Date
Day           *DAYS                  *D                      Timestamp and Date
Hour          *HOURS                 *H                      Timestamp and Time
Minute        *MINUTES               *MN                     Timestamp and Time
Second        *SECONDS               *S                      Timestamp and Time
Microsecond   *MSECONDS              *MS                     Timestamp
The duration code must be consistent with the result field data type:
To a date field only years, months, or days can be added or subtracted.
To a time field only hours, minutes, or seconds can be added or subtracted.
To a timestamp field years, months, days, hours, minutes, seconds, or microseconds can
be added or subtracted.
Note: If you add and subtract one month from a date, the result may not be the same date
as the original date. For example, if you add one month to January 31st, the result will be
February 28th or 29th. If you then subtract one month, the result will be January 28th or
29th.
Example 9-35 shows how durations can be added and subtracted using the operation codes
ADDDUR and SUBDUR.
Example 9-35 Adding/subtracting durations using operation codes ADDDUR and SUBDUR
D MyDate S D
D MyTime S T
D NbrDays S 3U 0 inz(30)
D NbrMin S 3U 0 inz(50)
*-----------------------------------------------------------------------------------------
C eval MyDate = %Date()
C SubDur 1:*Months MyDate
C MyDate dsply
C AddDur NbrDays:*Days MyDate
C MyDate dsply
C Return
Built-in functions
To add durations to or subtract them from a date or time in an expression, the numeric
values must first be converted into duration values by using built-in functions such as
%DAYS, %MONTHS, or %MINUTES.
Example 9-36 shows how built-in functions can be used to add and subtract date and time
values.
Example 9-36 Adding and subtracting durations using built-in functions
D MyDate S D
D NbrDays S 3U 0 inz(30)
D NbrMin S 3U 0 inz(50)
*-----------------------------------------------------------------------------------------
/Free
MyDate = %Date() - %Months(1) + %Days(NbrDays);
Dsply MyDate;
Return;
/End-Free
Note: If you are dealing with microseconds in RPG, only the first three digits are
supported. In contrast, SQL supports all six digits.
For example, the built-in function %TIMESTAMP() may return
'2004-08-30-20.05.47.123000', while CURRENT_TIMESTAMP in SQL may return
'2004-08-30-20.05.47.123456'.
To build a timestamp value, a date and a time value can be added by using a simple plus sign
(+).
Example 9-37 shows how a date and a time value can be added to a timestamp value.
Example 9-37 Creating a timestamp value from a date and a time value
D MyDate S D
D MyTime S T
D MyTimeStamp S Z
*-----------------------------------------------------------------------------------------
/FREE
MyDate = %Date() - %Days(3);
MyTime = %Time() - %Hours(5);
MyTimeStamp = MyDate + MyTime;
Dsply MyTimeStamp;
Return;
/END-FREE
Example 9-38 shows how date and time values can be added or subtracted in SQL.
Example 9-38 Adding and subtracting durations using SQL
D MyDate S D
D MyTime S T
D NbrDays S 3U 0 inz(30)
D NbrMin S 3U 0 inz(50)
*-----------------------------------------------------------------------------------------
C/Exec SQL
C+ Set :MyDate = Current_Date - 1 Month + :NbrDays Days
C/End-Exec
C MyDate dsply
C/Exec SQL
C+ Set :MyTime = Current_Time - 450 Seconds + :NbrMin Minutes
C/End-Exec
C MyTime dsply
C Return
A time difference calculated in RPG can only be expressed in a single unit (days, hours,
minutes, and so on), never as a combination of units.
Note: The result is rounded down, with any remainder discarded. For example, 61 minutes
is equal to 1 hour, and 59 minutes is equal to 0 hours.
The result of the difference between a timestamp and a time, or between two times, can be:
A number of hours
A number of minutes
A number of seconds
If you need to know how many hours, minutes, and seconds are between two times, you
have to calculate the time difference in seconds and then divide by 3600 to get the hours.
The remainder must be divided by 60 to get the minutes, and what remains is the seconds.
SQL provides a much easier method. In SQL, date and times can be directly subtracted
without using a scalar function. The result will be a numeric field that contains a combination
of years, months, days, hours, minutes, seconds, and microseconds.
The result of the difference between two dates is a numeric 8-digit (8,0) result with:
Position 1–4: Difference in years
Position 5–6: Difference in months
Position 7–8: Difference in days
The difference between January 1st, 2003 and July 3rd, 2004 is 10602, that is, 1 year,
6 months, and 2 days.
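This decomposition can be verified directly with a date subtraction, using the literals from
the text (SYSIBM/SYSDUMMY1 is the one-row system table):

Select Date('2004-07-03') - Date('2003-01-01')
from SysIBM/SysDummy1

The result is 10602, read as 0001 years, 06 months, and 02 days.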
The result of the difference between two times is a numeric 6-digit (6,0) result with:
Position 1–2: Difference in hours
Position 3–4: Difference in minutes
Position 5–6: Difference in seconds
The result of the difference between two timestamps is a numeric 20-digit with 6 decimal
positions (20,6) result with:
Position 1–4: Difference in years
Position 5–6: Difference in months
Position 7–8: Difference in days
Position 9–10: Difference in hours
Position 11–12: Difference in minutes
Position 13–14: Difference in seconds
Position 15–20: Difference in microseconds
If you want to subtract a date or time from a timestamp, the timestamp must be converted into
a date or time value using the scalar functions DATE or TIME.
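A sketch in embedded SQL might look as follows, assuming host variables MyTimeStamp
(timestamp), MyDate (date), and MyDiff (numeric 8,0), which are illustrative names and not
part of the original examples:

C/Exec SQL
C+ Set :MyDiff = Date(:MyTimeStamp) - :MyDate
C/End-Exec

Here the scalar function DATE extracts the date portion of the timestamp, so the
subtraction yields a date duration.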
Note: While in RPG a time difference can only be calculated in a single unit, SQL returns a
combination of the different date or time units. For example, if you subtract two times,
RPG returns either hours or minutes or seconds, while SQL returns hours and minutes
and seconds.
Example 9-39 Calculation time differences using the operation code SUBDUR
D MyDate1 S D inz(D'2004-08-31')
D MyTime1 S T inz(T'20.25.50')
D MyTimeStamp1 S Z inz(Z'2004-08-30-10.15.45.000000')
D MyDate2 S D inz(D'2004-04-09')
D MyTime2 S T inz(T'14.30.55')
D MyTimeStamp2 S Z inz(Z'2004-07-01-04.30.30.000000')
D MyDiff S 10I 0
*-----------------------------------------------------------------------------------------
C MyDate1 SubDur MyDate2 MyDiff:*Days
C MyDiff Dsply
C Return
Example 9-40 Calculating time differences using the built-in function %DIFF()
D MyDate1 S D inz(D'2004-08-31')
D MyTime1 S T inz(T'20.25.50')
D MyTimeStamp1 S Z inz(Z'2004-08-30-10.15.45.000000')
D MyDate2 S D inz(D'2004-04-09')
D MyTime2 S T inz(T'14.30.55')
D MyTimeStamp2 S Z inz(Z'2004-07-01-04.30.30.000000')
D MyDiff S 10I 0
*-----------------------------------------------------------------------------------------
/Free
MyDiff = %Diff(MyDate1: MyDate2: *Days);
Dsply MyDiff;
Return;
/End-Free
Example 9-41 on page 217 shows how the difference between two time values with the result
in hours, minutes, and seconds is calculated by only using RPG.
Example 9-41 Calculating time differences in hours, minutes, and seconds with RPG
D MyTime1 S T inz(T'20.35.50')
D MyTime2 S T inz(T'14.30.45')
D MyDiff S 10I 0
D DSTime DS
D DSTimeNum 6S 0
D DsHours 2S 0 overlay(DSTimeNum)
D DsMinutes 2S 0 overlay(DSTimeNum: *Next)
D DsSeconds 2S 0 overlay(DSTimeNum: *Next)
*-----------------------------------------------------------------------------------------
/Free
MyDiff = %Diff(MyTime1: MyTime2: *Seconds);
DsHours = %Div(MyDiff: 3600);
DsMinutes = %Div(%Rem(MyDiff: 3600): 60);
DsSeconds = %Rem(MyDiff: 60);
Dsply DSTime;
Return;
/End-Free
Example 9-42 shows the same result, but the calculation is done by SQL.
Example 9-42 Calculating time differences in hours, minutes, and seconds with SQL
D MyTime1 S T inz(T'20.35.50')
D MyTime2 S T inz(T'14.30.45')
D DSTime DS
D DSTimeNum 6S 0
D DsHours 2S 0 overlay(DSTimeNum)
D DsMinutes 2S 0 overlay(DSTimeNum: *Next)
D DsSeconds 2S 0 overlay(DSTimeNum: *Next)
*-----------------------------------------------------------------------------------------
C/EXEC SQL
C+ Set :DSTimeNum = :MyTime1 - :MyTime2
C/END-EXEC
C DSTime Dsply
C Return
Note: TIMESTAMPDIFF can only be used for statistical purposes, because the following
assumptions are used in estimating the difference:
There are 365 days in a year.
There are 30 days in a month.
There are 24 hours in a day.
There are 60 minutes in an hour.
There are 60 seconds in a minute.
Example 9-45 shows how the time difference in weeks between two timestamps can be
calculated using the scalar function TIMESTAMPDIFF.
Example 9-45 Calculating time differences using the SQL scalar function TIMESTAMPDIFF
D MyTimeStamp S Z inz(Z'2004-07-01-08.00.00.000000')
D MyResult S 5I 0
*-----------------------------------------------------------------------------------------
C/EXEC SQL
C+ Set :MyResult = TimeStampDiff(32,
C+ cast(current_timestamp - :MyTimeStamp as Char(22)))
C/End-EXEC
C MyResult Dsply
C Return
While RPG offers two methods, SQL has a set of scalar functions to extract the particular
portions of a date or timestamp.
Note: Duration code *DAYS always returns the day of the month, even when the Julian
date format is used.
Example 9-46 shows how the end of month can be calculated. The day of the month is
determined by using the operation code EXTRCT.
Example 9-46 Calculating the last day of a month with RPG operation codes
D MyDate S D inz(D'2004-02-15')
D MyMonthEnd S D
D Days S 3U 0
*-----------------------------------------------------------------------------------------
C extrct MyDate:*Days Days
C MyDate SubDur Days:*Days MyMonthEnd
C AddDur 1:*Months MyMonthEnd
The following example shows how the end of month can be calculated by using a built-in
function. The day of the month is determined by using the built-in function %SUBDT.
Example 9-47 Calculating the last day of a month using built-in functions
D MyDate S D inz(D'2004-02-15')
D MyMonthEnd S D
*----------------------------------------------------------
/Free
MyMonthEnd = MyDate - %Days(%SubDt(MyDate: *Days)) + %Months(1);
Dsply MyMonthEnd;
Return;
/End-Free
All of these scalar functions have one argument, in which the date, time, or timestamp must
be specified. The return value is always a numeric value without decimal positions.
Note: There is a difference between the scalar function DAY and the scalar function
DAYS. DAY returns the day of the month, while DAYS returns the number of days since
’0001-01-01’.
While the scalar functions DAY and DAYOFMONTH return the day of the month, the scalar
function DAYOFYEAR can be used to return the current day of the year.
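For example, the following statement shows all four functions side by side, again assuming
a table MyTable with a date column MyDate:

Select Day(MyDate), DayOfMonth(MyDate), DayOfYear(MyDate), Days(MyDate)
from MyTable

For the date 2004-02-15, DAY and DAYOFMONTH return 15, DAYOFYEAR returns 46, and
DAYS returns the number of days since '0001-01-01'.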
Calculating the day of the week
SQL provides three scalar functions to calculate the day of the week:
DAYOFWEEK(Date)
The DAYOFWEEK function returns a numeric value between 1 and 7 that represents the
day of the week for a date or timestamp, where 1 is Sunday and 7 is Saturday.
DAYOFWEEK_ISO(Date)
The DAYOFWEEK_ISO function returns a numeric value between 1 and 7 that represents
the day of the week for a date or timestamp, where 1 is Monday and 7 is Sunday.
DAYNAME(Date)
This returns a mixed-case character string containing the name of the day (for example,
Friday) for the day portion of the argument.
National language considerations: The name of the day that is returned is based on the
language used for messages in the job. This name of the day is retrieved from message
CPX9034 in message file QCPFMSG in library *LIBL.
The following example shows the function CvtDateToText, which formats a date as text, for
example, Wednesday 1st, September 2004.
Example 9-48 UDF to convert a date into a character representation with day and month name
Create Function ITSO4710/CvtDateToText (MyDate DATE )
Returns Char(50)
Language SQL
Specific ITSO4710/CvtDateToText
Deterministic
Contains SQL
Returns NULL on NULL Input
DisAllow PARALLEL
Return DayName(MyDate) concat ' ' concat
Trim(Char(DayOfMonth(MyDate))) concat
Case When DayOfMonth(MyDate) IN (1 , 21 , 31)
Then 'st'
When DayOfMonth(MyDate) IN (2 , 22)
Then 'nd'
When DayOfMonth(MyDate) IN (3 , 23)
Then 'rd'
else 'th'
end concat ', ' concat
MonthName(MyDate) concat ' ' concat
Char(Year(MyDate)) ;
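Once created, the UDF can be invoked like any other scalar function; for example (again
assuming a table MyTable with a date column MyDate, and ITSO4710 in the library list):

Select CvtDateToText(MyDate)
from MyTable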
Part 4 Tools
In this part we cover the tools that can help in the process of modernization. iSeries
developers are familiar with tools such as Source Entry Utility (SEU); the iSeries Developer
Roadmap introduces modern development tools such as WebSphere Development Studio
Client for iSeries, which supports not only new technologies like Java, Web, or XML
development, but also traditional RPG and COBOL development.
In this chapter, we also introduce you to several interesting graphical tools such as:
The graphical iSeries System Debugger. This state-of-the-art debugger lets developers
debug programs that run on an iSeries server.
DB2 Query Management Facility (QMF™). A graphical query tool that can help end users
work with queries without knowledge of SQL syntax.
iSeries Navigator.
WebSphere Development Studio Client for iSeries (WDSc).
The WebSphere Studio Workbench is the basis of all IBM WebSphere Studio products. It is
based on the open-source Eclipse technology. WebSphere Studio Site Developer (WSSD)
is the application development tool on top of the Workbench, and WebSphere Studio
Application Developer (WSAD) extends it to support Enterprise JavaBeans (EJB). A core
feature of the Eclipse user interface is the perspective, a collection of views and editors. A
developer can use an appropriate editor to code, for example, Java, RPG, or SQL.
WebSphere Development Studio Client for iSeries (WDSC) and Remote System Explorer
(RSE) provide a modern interface for iSeries application development and debugging. In
addition, the WDSC has several wizards to help get started with the DB2 Web service and
Extenders technology.
Figure 10-1 shows the RSE LPEX editor. This graphical editor includes color coding, syntax
checking, and statement prompting. There is an Outline view that assists with program
coding.
For database development tasks, the Data perspective is part of the WSSD product, which
includes views and editors for SQL development. You can connect to an iSeries server and
import the database object definitions. There are several wizards to construct Select, Insert,
Delete, and Update statements.
In the following example, we used WDSC Version 5.1.2 to demonstrate how to connect to
the iSeries server and build a SELECT statement.
1. In the WDSC menu, we select File → New → Other, then the New window appears
(Figure 10-2). The left pane shows the available perspectives and the right pane shows
the corresponding wizards to the perspective. We select Data perspective and SQL
statement wizard and click Next.
2. Figure 10-3 on page 228 shows the pull-down menu to select the SQL statement type.
Since we need to connect to an iSeries server, we also select Create the new database
connection. Click Next.
3. In the Database Connection window (Figure 10-4 on page 229), we specify the value of
the connection name, database name, user ID, password, database vendor type, and host
name. We select DB2 Universal Database for iSeries, V5R1 for the database vendor type.
You can connect to the iSeries database now. However, by default, information about all
schemas and all the objects in them is gathered from the iSeries server. We can set filters to
include only the objects that we are interested in by clicking Filters.
Figure 10-4 Database connection
4. When the Connection Filters window displays (Figure 10-5), we deselect Exclude system
schema and specify a filter to include only ITSO4710 schema. Then click Connect to
Database to collect the database information.
5. We specify the folder in which to keep the database information and also specify the SQL
statement name, CUSTORDERSUMMARYBYNAME, then we click Next.
6. The Construct an SQL Statement window appears and, as shown in Figure 10-7 on
page 231, we use menu tabs to specify the following:
– Tables names (ORDERHDR and CUSTOMER)
– Join type (Inner join)
– Join condition (ORDERHDR.CUSTOMER_NUMBER =
CUSTOMER.CUSTOMER_NUMBER)
– Group by and Order by conditions (CUSTOMER_NAME)
Figure 10-7 Construct an SQL statement
7. Figure 10-8 shows the Create Join window, in which we select the source and target table,
column, and join type.
9. Figure 10-10 on page 233 shows the result of the SQL. Click Close to close the Execute
SQL statement window and then click Finish.
Figure 10-10 Executing SQL
10. The SQL Builder view in the Data perspective shows the constructed SQL statement
(Figure 10-11 on page 234), which can be saved and used in our SQL applications. You
can also use the created SQL statement in a query tool such as DB2 Query
Management Facility (QMF).
In addition, the DB Servers view in the Data perspective includes a reverse-engineering
tool. This tool can generate the SQL DDL from the database information that was collected
when the new database connection was created.
The pop-up menu on the database objects in the navigator tree provides the editing and
viewing of database objects.
SQL Assist has statement wizards that give programmers step-by-step assistance in coding
an SQL Select, Insert, Update, or Delete statement. This is especially useful for
programmers with a native record-level access background who are still learning SQL
syntax. You can test your SQL statements to see the result and launch Visual Explain to
understand the optimizer's implementation of the query.
You can find an example of using this feature in the next section.
Before we bring you to our Visual Explain example, we would like to introduce you to the
Statistics Manager. It is good to know what the Statistics Manager is and how it relates to the
column statistics.
Statistics Manager
OS/400 is an object-based operating system. Tables and indexes are objects. Like all
objects, information about the object’s structure, size, and attributes is contained within the
table and index objects. In addition, tables and indexes contain statistical information about
the number of distinct values in a column and the distribution of those values in the table. The
DB2 UDB for iSeries optimizer uses this information to determine how to best access the
requested data for a given query request.
Starting with V5R2 of OS/400, DB2 UDB for iSeries has a new SQL query engine (SQE). As
part of this new SQL query engine, a statistics manager component is responsible for
generating, maintaining, and providing statistics to the SQE optimizer. As mentioned earlier,
sources for statistics within DB2 UDB for iSeries come from default values and/or indexes.
With SQE, the optimizer has another source, namely column statistics stored within the table
object.
The column statistics will be generated by a low-priority background job, with the goal of
having the column statistics available for future executions of this query. This automatic
collection of statistics allows the SQE optimizer to benefit from columns statistics, without
requiring an administrator to be responsible for the collection and management of statistics,
as is true for other RDBMS products. Even though it is not required, statistics can also be
manually requested for iSeries users who want to take on the task of statistics collection and
management, without waiting for the statistics manager to recognize the need for a column
statistic. The column statistics that are generated are only used by the new SQL query
engine. The original Classic Query Engine (CQE) continues to use only default values and
indexes for statistics.
The automatic statistic collection is controlled by the system value QDBFSTCCOL. This
system value is set to *ALL by default. This value allows all requests for background statistics
collections, whether initiated by the system or initiated by the user.
The following example shows you how to execute the SQL request and use Visual Explain
in the Databases feature of iSeries Navigator to collect the column statistics.
1. Click Run an SQL script in the Databases tasks pane (Figure 10-12).
2. In the Run SQL Script window, as shown in Figure 10-13 on page 237, we enter the SQL
statement in the text pane. For our example, we select the rows from the customer table
that have a credit limit over 100000.
select * from itso4710.customer where cuscrd > 100000
Then select VisualExplain → Run and Explain.
Figure 10-13 Select VisualExplain menu
3. In the Visual Explain window (Figure 10-14), select Actions → Advisor to see the
Statistics and Index Advisor information.
For more information on Visual Explain refer to the redbook DB2 Universal Database for
iSeries Administration The Graphical Way on V5R3, SG24-6092.
Here is an example of the steps to run the graphical debugger for an SQL stored procedure.
1. Create the SQL procedure with the *SOURCE debug view, as shown in Figure 10-16 on
page 239. The SQL procedure can be created by:
a. An iSeries Navigator Run SQL Script session. The SET Option statement is used in
the procedure to cause the SQL source-level debug view to be created during creation
of the stored procedure.
b. The RUNSQLSTM command. The procedure source code can also be stored in a
source physical file member and used to create the SQL procedure with the
source-level debug view. Here is a sample RUNSQLSTM command:
RUNSQLSTM SRCFILE(MYLIB/MYSRC) SRCMBR(SHIP_IT) COMMIT(*NONE) NAMING(*SQL)
DBGVIEW(*SOURCE)
CREATE PROCEDURE myschema.ship_it(IN ordnum INTEGER, IN ordtype CHAR(1),
IN ordweight dec(3,2))
LANGUAGE SQL
SET OPTION DBGVIEW =*SOURCE
sp: BEGIN
DECLARE ratecalc DECIMAL(5,2);
/* Check for international order */
IF ordtype='I' THEN
SET ratecalc = ordweight * 5.50;
INSERT INTO myschema.wwshipments VALUES(ordnum,ordweight,ratecalc);
ELSE
SET ratecalc = ordweight * 1.75;
INSERT INTO myschema.shipments values(ordnum,ordweight,ratecalc);
END IF;
END
2. An iSeries Navigator Run SQL Script session needs to be active to start the debug mode.
A debug session is initiated by going to the Run pull-down menu and selecting the
Debugger task, as shown in Figure 10-17.
3. The Debugger task will cause the Start Debug dialog window to be started on the client,
as shown in Figure 10-18 on page 240.
iSeries Navigator will automatically fill in the job information for the server job that will be
used by the Run SQL Script session for execution and debug of the stored procedure.
The user will need to fill in the library and program name of the procedure. In this example,
the library name is MYSCHEMA and procedure name is SHIP_IT. When the procedure
name is 10 characters or less, the program and procedure name will be the same. If
the procedure name is longer than 10 characters (for example, testprocedure), you need
to look up the program name associated with the stored procedure. The SPECIFIC clause
can be used to control the program name for a stored procedure with a long name.
4. Clicking OK starts the iSeries Graphical Debugger on the client and loads the SQL
source-level debug view for the SHIP_IT stored procedure. You can set a breakpoint by
left-clicking Line 5 (IF ORDTYPE='I'). An enabled breakpoint is indicated by the red
arrow, as shown in Figure 10-19 on page 241. Now that a breakpoint has been set, click
the green resume arrow on the tool bar.
Figure 10-19 Set the breakpoint
5. Return to your iSeries Navigator Run SQL Script session and issue the following SQL
CALL statement:
CALL MYSCHEMA.SHIP_IT(33, 'I', 5.1)
The Debug Client will then take control at the breakpoint specified in the previous step.
Figure 10-20 on page 242 shows how the yellow highlighting is used to indicate where
execution was stopped for the breakpoint.
6. To view the contents of the ORDTYPE input parameter and determine which branch of the
IF statement will be executed, left-click the Console tab in the lower left-hand corner and
enter the following EVAL command in the command window:
EVAL *SHIP_IT.ORDTYPE
The content of this variable is then displayed in the Console window, as shown in
Figure 10-21. All of the procedure parameter values can be displayed by entering:
EVAL SHIP_IT
7. To view the calculated shipping rate before the stored procedure ends, right-click Line
11 and select the Run to Cursor task. This allows the debugger to execute all of the
code up to Line 11. When Line 11 is reached, the following command can be issued in the
Console window to display the computed shipping rate:
EVAL SP:RATECALC
8. To complete execution of the SHIP_IT stored procedure, click the green resume arrow on
the tool bar.
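To illustrate the SPECIFIC clause mentioned in step 3, here is a hedged sketch (the procedure name, specific name, and body are illustrative, not part of the SHIP_IT example): giving a long-named procedure a specific name of 10 characters or less makes the generated program object name predictable.

```sql
CREATE PROCEDURE myschema.calculate_shipping_rate
       (IN ordweight DEC(3,2), OUT rate DEC(5,2))
SPECIFIC myschema.CALCSHPRT
LANGUAGE SQL
BEGIN
   -- The program object is created as MYSCHEMA/CALCSHPRT, so that name
   -- can be entered directly in the Start Debug dialog.
   SET rate = ordweight * 1.75;
END
```

Without the SPECIFIC clause, the system generates a program name for a long procedure name, and you would have to look it up before debugging.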
For more information on the Graphical iSeries System Debugger, press F1 in the debugger
for the tool help, or search the iSeries Information Center Web site. Also see the white
paper, Graphical Debugger Makes Procedural SQL Debug Even Easier, at:
http://www.ibm.com/servers/enable/site/education/abstracts/sqldebug_abs.html
PRTSQLINF
PRTSQLINF is a CL command that prints information about the SQL statements embedded in
program and package objects, including the access plans used during execution of those
statements. The command extracts the optimizer access method information out of the
objects and places it in a spooled file. The spooled file contents can then be analyzed
to determine whether any changes are needed to improve performance.
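For example, to print the SQL information for the program object behind the SHIP_IT procedure from the earlier example, a command along these lines could be used (the object name assumes that example):

```
PRTSQLINF OBJ(MYSCHEMA/SHIP_IT) OBJTYPE(*PGM)
```

The OBJTYPE parameter also accepts *SRVPGM and *SQLPKG for service program and package objects.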
STRDBMON
STRDBMON is a CL command that collects database performance data for a specified job or
for all jobs on the iSeries server. The Database Monitor gathers query execution
statistics from the iSeries server jobs and records them in a database file. This file
can then be analyzed to provide performance information that helps you tune the query or
the database. You can use Visual Explain to analyze the collected performance data and
provide recommendations.
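As a sketch, the monitor can be started before the workload runs and ended afterward (the library and output file names are illustrative):

```
STRDBMON OUTFILE(MYLIB/DBMONOUT) JOB(*ALL)
/* Run the queries or application to be analyzed */
ENDDBMON JOB(*ALL)
```

The resulting output file can then be queried with SQL or loaded into Visual Explain for analysis.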
To access the DB2 Development Center, load the DB2 Application Development client, which
is part of the DB2 Personal Developer’s Edition and can be downloaded from IBM’s Web site:
http://www14.software.ibm.com/webapp/download/search.jsp?rs=db2pde
We have a *QMQRY object, MYQRY, and a *QMFORM object, MYFRM, on the iSeries server.
We use MYQRY and MYFRM to create a report that summarizes the order amount, sorted by
customer name, as shown in Figure 10-22.
The steps are:
1. We use the commands RTVQMQRY and RTVQMFORM to export our query objects,
MYQRY and MYFRM, respectively. We also specify the source file that receives
the query definitions as source members.
Here is an example of how to export our query manager object and its source
(Figure 10-23).
RTVQMQRY QMQRY(MYQRY) SRCFILE(QQMQRYSRC)
Since Query Manager/400 uses the system naming convention, which does not comply
with QMF, we have to change the SQL statements to use the SQL naming convention
instead. You can make this change either on the iSeries before transferring to the PC
or in QMF for Windows.
FROM "ITSO4710"."ORDERHDR" A,
"ITSO4710"."CUSTOMER" B
The following shows how to export a query form object and its source (Figure 10-24 on
page 246).
RTVQMFORM QMFORM(MYFRM) SRCFILE(QQMFORMSRC)
2. Transfer the Query/400 definition source members to the PC. You can use either File
Transfer Protocol (FTP) or the iSeries Access data transfer function. In our example, we
store the files, MYQRY.QRY and MYFRM.FRM, in the directory c:\temp.
Figure 10-25 Transfer query definition source member to PC
3. Use the QMF conversion utility to convert the Query/400 definition source members to
files that can be imported and saved as QMF objects by QMF for Windows. These files are
placed in the target directory QM400 and have the prefix QMF_ added to the name of the
original files.
C:\Program Files\QM400>qm4_qmf c:\temp\myqry.qry
C:\Program Files\QM400>qm4_qmf c:\temp\myfrm.frm
The converted files are QMF_MYQRY.QRY and QMF_MYFRM.FRM.
4. In QMF for Windows, select File → Open to open the converted files. Click Run
Query in the Query menu, and choose From open document as the data source in the
Form menu. Figure 10-26 on page 248 shows that the query result is the same as when
the query is run on the iSeries server.
The files can be saved as QMF objects, and end users can use the graphical QMF for
Windows to continue working with their queries.
Figure 10-27 QMF for Windows
2. Select Query → Add → Table. The Tables panel, as shown in Figure 10-28 on page 250,
appears; enter the schema/library in the Table owner field and the table name in the
Table name field. You can also add a table from a list by entering selection criteria
(in our example, OR%) and clicking Add From List.
3. We select the ORDER_HEADER file and click Add, as shown in Figure 10-29.
Figure 10-30 Join Tables
6. Select the join columns, as shown in Figure 10-31: select CUSTOMER_NUMBER and
click Add.
7. We select the result columns by selecting Query → Add → Column. In the Columns
window, as shown in Figure 10-32 on page 252, we can also add a summary function
and change the column name of the selected column. In our example, we select the
CUSTOMER_NAME and ORDER_TOTAL columns, specify the SUM function, and change the
column name of ORDER_TOTAL.
8. To sort by customer name, select Query → Add → Sort condition, and select the
CUSTOMER_NAME column, as shown in Figure 10-33.
9. Figure 10-34 on page 253 shows the complete SQL prompted for this query.
Figure 10-34 Complete SQL prompted
10. You can see the SQL statement, as shown in Figure 10-35, by selecting View → SQL.
From the query result, you can change the format, print the report, and store it as a
document file. End users can use QMF as their query tool instead of the traditional
Query/400.
You can find more information and download the QMF for Windows for evaluation at the Web
site:
http://www-306.ibm.com/software/data/qmf/
The Query/400 and Query Manager/400 conversion tools are written by IBM Business Partner
Rocket Software. The tools can be downloaded from the Web site:
http://www.rocketsoftware.com/qmf/qmf/cproducts.asp
For more information on the .NET technology, visit the Web site:
http://www.ibm.com/developerworks/db2/downloads/dotnetbeta/
Related publications
The publications listed in this section are considered particularly suitable for a more detailed
discussion of the topics covered in this redbook.
IBM Redbooks
For information on ordering these publications, see “How to get IBM Redbooks” on page 257.
Note that some of the documents referenced here may be available in softcopy only.
Advanced Functions and Administration on DB2 Universal Database for iSeries,
SG24-4249
Stored Procedures, Triggers and User Defined Functions on DB2 Universal Database for
iSeries, SG24-6503
Preparing for and Tuning the V5R2 SQL Query Engine on DB2 Universal Database for
iSeries, SG24-6598
IBM WebFacing Tool: Converting 5250 Applications to Browser-based GUIs, SG24-6801
EJB 2.0 Development with WebSphere Studio Application Developer, SG24-6819
DB2 Universal Database for iSeries Administration: The Graphical Way on V5R3,
SG24-6092
Striving for Optimal Journal Performance, SG24-6486
DB2 UDB for AS/400 Object Relational Support, SG24-5409
Striving for Optimal Journal Performance on DB2 Universal Database for iSeries,
SG24-6286
Other publications
These publications are also relevant as further information sources:
Database Programming, SC41-5701
SQL Reference, SC41-5612
DDS Reference, SC41-5712
Conte, Paul. Database Design and Programming for DB2/400. 29th Street Press, April
1997. ISBN 1-882419-06-5
Conte, Paul and Cravitz, Mike. SQL/400® Developer’s Guide. 29th Street Press,
September 2000. ISBN 1-882419-70-7
Fowler, Martin. UML Distilled: A Brief Guide to the Standard Object Modeling Language.
Addison-Wesley Professional; 3rd edition, September 19, 2003. ISBN 0-321-19368-7
How to get IBM Redbooks
You can search for, view, or download Redbooks, Redpapers, Hints and Tips, draft
publications and Additional materials, as well as order hardcopy Redbooks or CD-ROMs, at
this Web site:
ibm.com/redbooks
I
I/O modules 25
Identity column attribute 71
ILE procedure 59
Index 22
index maintenance 49
Initialization Subroutine 98
iSeries Access for Web 6
iSeries Developer Roadmap 1
iSeries Navigator 30, 71, 234
Isolation Level 22
ITERATE 166
Iterative Control 165

N
naming conventions 58
No Commit 103
Non keyed logical file 22
non-relational data 72
Normalization 66
Numeric data types 178

O
Object Management Group 8
Object-Oriented programming 2
ODBC 13, 51
OPEN statement 98, 103
OPNQRYF 64
OPNQRYF command 13
Overloading 148
overloading 148
OVRDBF 23

P
Packed numeric data type 179
Parameter-Style 146
Partitioned tables 53
Personal Digital Assistants 8
Physical File 22
physical file object 22
physical files 48
pointers 127
PRIMARY KEY 17
Program described files 30
program object 80
Programming Development Manager (PDM) 4
prototype 59
prototype call 63
PRTSQLINF 243

Q
QSYS2 50
Query Management Facility 244
Query Manager 13
query performance 68
Query/400 13

R
Range partitioning 53
Read Stability 103
Receive Program Message API 69
Record 22
Record format level check 39
Record format name 39
Redbooks Web site 257
   Contact us xi
Referential integrity 17, 67
Registering an external trigger 129
Registering external triggers 129
Registering Stored Procedures 140
Remote System Explorer (RSE) 6
RENAME INDEX 32
RENAME TABLE 32
Reorganize Physical File Member 37
REPEAT 165
Repeatable Read 103
Report Layout Utility (RLU) 4
RESIGNAL 169
RETURN 167
REUSEDLT 37
REVOKE 69
RGZPFM (Reorganize Physical File Member) 37
ROLLBACK 77
Row 22
ROWID data type 71
RTVQMFORM 245
RTVQMQRY 245
Run SQL Scripts 235
RUNSQLSTM 13
RUNSQLSTM command 35

S
S/36 data 72
SAVEPOINT 77
SAVLIB 47
Schema 22
Screen Design Aid (SDA) 4
Scroll Cursor 107
Second Normal Form 66
Select/Omit filtering 49
SEQUEL 12
Sequence object 71
Serial Cursor 107
service program 62
service program object 80
service programs 58
Set Option Statement 82
Set Statement 93
SIGNAL 169
Site Developer 5
Softcoding the trigger buffer 129
Source Entry Utility (SEU) 4
SQE Optimizer 14
SQL Alias 23
SQL Assist 235
SQL communication area 83
SQL data types 49, 172
SQL Descriptor Area 114
SQL GRANT 69
SQL index 17
SQL indexes 49
SQL Query Engine (SQE) 13
SQL SET TRANSACTION 126
SQL Stored procedures 139
SQL tables 48
SQL Triggers 119, 131
SQL Views 40
SQL views 26, 49
SQLCA 82
SQLCODE 83
SQLCOLPRIVILEGES 51
SQLCOLUMNS 51
SQLFOREIGNKEYS 51
SQLPRIMARYKEYS 51
SQLPROCEDURECOLS 51
SQLPROCEDURES 52
SQLSCHEMAS 52
SQLSPECIALCOLUMNS 52
SQLSTATE 83
SQLSTATISTICS 52
SQLTABLEPRIVILEGES 52
SQLTABLES 52
SQLTYPEINFO 52
SQLUDTS 52
Static 24
Static SQL 24
Statistics Manager 235
Stored Procedures 138
STRCMTCTL 122, 131
STRDBMON 243
String functions 91
Struts 8
Symmetric Multiprocessing 17
SYSCATALOGS 50
SYSCHKCST 50
SYSCOLUMNS 50
SYSCST 50
SYSCSTCOL 50
SYSCSTDEP 50
SYSFUNCS 50
SYSINDEXES 51
SYSJARCONTENTS 51
SYSJAROBJECTS 51
SYSKEYCST 51
SYSKEYS 51
SYSPACKAGE 51
SYSPARMS 51
SYSPROCS 51
SYSREFCST 51
SYSROUTINEDEP 51
SYSROUTINES 51
SYSSEQUENCES 51
SYSTABLEDEP 51
SYSTABLES 51
system catalog tables 31
system catalogs 32
SYSTRIGCOL 51, 138
SYSTRIGDEP 138
SYSTRIGGERS 51, 138
SYSTRIGUPD 51, 138
SYSTYPES 51
SYSVIEWDEP 51
SYSVIEWS 51

T
Table 22
Third Normal Form 66
time data types 185
Time Format 81
Time functions 92
Time Separator Character 81
TIMFMT 81
TIMSEP 81
traditional 5250 tools 4
Transaction files 30
Trigger 118
Trigger Body 135
Trigger Buffer 122–123
Trigger Event 132
Trigger event 119
Trigger events 121
Trigger Name 132
Trigger Time 132
Trigger time 119
Triggers 18
Trigonometric functions 92

U
UDTFs 72
Uncommitted Read 103
Unified Modeling Language 8
Uniform Resource Locator 74
unique identifiers 70
unique key constraints 67
Unsupported DDS keywords 38
User Defined Functions 152
User Defined Scalar Functions 159
User Defined Table Functions 160
User Defined Types 23
User-defined Distinct Type 152

V
validation list object 70
VARBINARY 75
VARCHAR 75
VARGRAPHIC 75
View 22
VisualAge Generator 5

W
Web service 4
WebFacing Tool 6–7
WebSphere Application Server 7
WebSphere Application Server - Express 6
WebSphere Development Studio Client for iSeries 226
WebSphere Host Access Transformation Services (HATS) 6
WebSphere Studio 5
WHILE 165
Wrapper program 59

Z
Zoned numeric data type 178