Firebird 5.0 Language Reference
Dmitry Filippov, Alexander Karpeykin, Alexey Kovyazin, Dmitry Kuzmenko,
Denis Simonov, Paul Vinkenoog, Dmitry Yemanov, Mark Rotteveel
If you notice any discrepancies, errors or anything missing, please report this on
https://github.com/FirebirdSQL/firebird-documentation/issues or submit a pull
request with the necessary changes.
Table of Contents
1. About the Firebird 5.0 Language Reference . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
1.1. Subject . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
1.2. Authorship . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
1.2.1. Contributors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
1.3. Reporting Errors or Missing Content . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
1.4. Acknowledgments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
1.5. Contributing. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
2. SQL Language Structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
2.1. Background to Firebird’s SQL Language . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
2.1.1. SQL Flavours . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
2.1.2. SQL Dialects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
2.1.3. Error Conditions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
2.2. Basic Elements: Statements, Clauses, Keywords . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
2.3. Identifiers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
2.3.1. Rules for Regular Identifiers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
2.3.2. Rules for Delimited Identifiers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
2.4. Literals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
2.5. Operators and Special Characters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
2.6. Comments. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
3. Data Types and Subtypes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
3.1. Integer Data Types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
3.1.1. SMALLINT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
3.1.2. INTEGER . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
3.1.3. BIGINT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
3.1.4. INT128 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
3.1.5. Hexadecimal Format for Integer Numbers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
3.2. Floating-Point Data Types. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
3.2.1. Approximate Floating-Point Data Types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
3.2.2. Decimal Floating-Point Types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
3.3. Fixed-Point Data Types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
3.3.1. NUMERIC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
3.3.2. DECIMAL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
3.4. Data Types for Dates and Times . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
3.4.1. DATE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
3.4.2. TIME . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
3.4.3. TIMESTAMP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
3.4.4. Session Time Zone . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
3.4.5. Time Zone Format. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
RDB$GENERATORS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 730
RDB$INDEX_SEGMENTS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 731
RDB$INDICES . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 732
RDB$KEYWORDS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 733
RDB$LOG_FILES . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 733
RDB$PACKAGES . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 734
RDB$PAGES . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 734
RDB$PROCEDURES . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 735
RDB$PROCEDURE_PARAMETERS. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 736
RDB$PUBLICATIONS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 738
RDB$PUBLICATION_TABLES. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 738
RDB$REF_CONSTRAINTS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 738
RDB$RELATIONS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 739
RDB$RELATION_CONSTRAINTS. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 740
RDB$RELATION_FIELDS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 741
RDB$ROLES . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 742
RDB$SECURITY_CLASSES. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 743
RDB$TIME_ZONES . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 744
RDB$TRANSACTIONS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 744
RDB$TRIGGERS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 745
RDB$TRIGGER_TYPE Value . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 746
RDB$TRIGGER_MESSAGES. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 748
RDB$TYPES . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 749
RDB$USER_PRIVILEGES . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 749
RDB$VIEW_RELATIONS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 751
Appendix E: Monitoring Tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 752
MON$ATTACHMENTS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 753
Using MON$ATTACHMENTS to Kill a Connection. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 755
MON$COMPILED_STATEMENTS. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 755
MON$CALL_STACK . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 756
MON$CONTEXT_VARIABLES. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 757
MON$DATABASE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 758
MON$IO_STATS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 760
MON$MEMORY_USAGE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 760
MON$RECORD_STATS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 761
MON$STATEMENTS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 762
Using MON$STATEMENTS to Cancel a Query . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 763
MON$TABLE_STATS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 764
MON$TRANSACTIONS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 765
Appendix F: Security tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 767
SEC$DB_CREATORS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 767
SEC$GLOBAL_AUTH_MAPPING . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 767
SEC$USERS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 768
SEC$USER_ATTRIBUTES . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 769
Appendix G: Plugin tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 770
PLG$PROF_CURSORS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 771
PLG$PROF_PSQL_STATS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 771
PLG$PROF_PSQL_STATS_VIEW. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 772
PLG$PROF_RECORD_SOURCES. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 772
PLG$PROF_RECORD_SOURCE_STATS. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 773
PLG$PROF_RECORD_SOURCE_STATS_VIEW . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 773
PLG$PROF_REQUESTS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 775
PLG$PROF_SESSIONS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 775
PLG$PROF_STATEMENTS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 775
PLG$PROF_STATEMENT_STATS_VIEW. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 776
PLG$SRP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 777
PLG$USERS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 777
Appendix H: Character Sets and Collations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 778
Appendix I: License notice . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 784
Appendix J: Document History . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 785
Chapter 1. About the Firebird 5.0 Language Reference
This Firebird 5.0 Language Reference is the fourth comprehensive manual to cover all aspects of
the query language used by developers to communicate, through their applications, with the
Firebird relational database management system.
1.1. Subject
The subject of this volume is Firebird’s implementation of the SQL (“Structured Query Language”)
relational database language. Firebird conforms closely to international standards for SQL, from
data type support, data storage structures and referential integrity mechanisms to data
manipulation capabilities and access privileges. Firebird also implements a robust procedural
language — procedural SQL (PSQL) — for stored procedures, stored functions, triggers, and
dynamically-executable code blocks. These areas are addressed in this volume.
This document does not cover configuration of Firebird, Firebird command-line tools, nor its
programming APIs. See Firebird RDBMS, and specifically Reference Manuals for more Firebird
documentation.
1.2. Authorship
For the Firebird 5.0 version, the Firebird 4.0 Language Reference was taken as the base, and Firebird
5.0 information was added based on the Firebird 5.0 Release Notes and feature documentation.
1.2.1. Contributors
Direct Content
Resource Content
• Alexander Peshkov
• Vladyslav Khorsun
• Claudio Valderrama
• Helen Borrie
• … and others
Pull requests with changes and fixes are also much appreciated.
1.4. Acknowledgments
Sponsors and Other Donors
See also the Acknowledgements in the Firebird 2.5 Language Reference for the sponsors of the
initial Russian version and its translation.
Moscow Exchange is the largest exchange holding in Russia and Eastern Europe, founded on
December 19, 2011, through the consolidation of the MICEX (founded in 1992) and RTS (founded in
1995) exchange groups. Moscow Exchange ranks among the world’s top 20 exchanges by trading
in bonds and by the total capitalization of shares traded, as well as among the 10 largest exchange
platforms for trading derivatives.
Technical support and developer of administrator tools for the Firebird DBMS.
1.5. Contributing
There are several ways you can contribute to the documentation of Firebird, or Firebird in general:
Chapter 2. SQL Language Structure
Distinct subsets of SQL apply to different areas of activity. The SQL subsets in Firebird’s language
implementation are:
Dynamic SQL is the major part of the language which corresponds to Part 2 (SQL/Foundation) of the
SQL specification. DSQL represents statements passed by client applications through the public
Firebird API and processed by the database engine.
Procedural SQL augments Dynamic SQL to allow compound statements containing local variables,
assignments, conditions, loops and other procedural constructs. PSQL corresponds to Part 4
(SQL/PSM) of the SQL specifications. PSQL extensions are available in persistent stored modules
(procedures, functions and triggers), and in Dynamic SQL as well (see EXECUTE BLOCK).
Embedded SQL is the SQL subset supported by Firebird gpre, the application which allows you to
embed SQL constructs into your host programming language (C, C++, Pascal, Cobol, etc.) and
preprocess those embedded constructs into the proper Firebird API calls.
Only a subset of the statements and expressions implemented in DSQL are supported in ESQL.
Interactive ISQL refers to the language that can be executed using Firebird isql, the command-line
application for accessing databases interactively. As a regular client application, its native language
is DSQL. It also offers a few additional commands that are not available outside its specific
environment.
Both DSQL and PSQL subsets are completely presented in this reference. Neither ESQL nor ISQL
flavours are described here unless mentioned explicitly.
SQL dialect is a term that defines the specific features of the SQL language that are available when
accessing a database. SQL dialects can be defined at the database level and specified at the
connection level.
• Dialect 1 is intended solely to allow backward compatibility with legacy databases from old
InterBase versions, version 5 and below. A “Dialect 1” database retains certain language
features that differ from Dialect 3, the default for Firebird databases.
◦ Date and time information are stored in a DATE data type. A TIMESTAMP data type is also
available, that is identical to this DATE implementation.
◦ Double quotes may be used as an alternative to apostrophes for delimiting string data. This
is contrary to the SQL standard — double quotes are reserved for a distinct syntactic
purpose both in standard SQL and in Dialect 3. Double-quoting strings is therefore to be
avoided.
◦ The precision for NUMERIC and DECIMAL data types is smaller than in Dialect 3 and, if the
precision of a fixed decimal number is greater than 9, Firebird stores it internally as a
double-precision floating point value.
◦ Identifiers are case-insensitive and must always comply with the rules for regular
identifiers — see the section Identifiers below.
◦ Although generator values are stored as 64-bit integers, a Dialect 1 client request, SELECT
GEN_ID (MyGen, 1), for example, will return the generator value truncated to 32 bits.
• Dialect 2 is available only on a Firebird client connection and cannot be set in a database. It is
intended to assist debugging of possible problems with legacy data when migrating a database
from dialect 1 to 3.
• In Dialect 3 databases,
◦ numbers (DECIMAL and NUMERIC data types) are stored as fixed-point values (scaled integers)
for all precisions; depending on the type and precision, they are stored as a SMALLINT,
INTEGER, BIGINT or INT128.
◦ Double quotes are reserved for delimiting non-regular identifiers, enabling object names
that are case-sensitive or that do not meet the requirements for regular identifiers in other
ways.
This reference describes the semantics of SQL Dialect 3 unless specified otherwise.
Processing of every SQL statement either completes successfully or fails due to a specific error
condition. Error handling can be done both client-side in the application, or server-side using PSQL.
Clauses
A clause defines a certain type of directive in a statement. For instance, the WHERE clause in a
SELECT statement and in other data manipulation statements (e.g. UPDATE, DELETE) specifies criteria
for searching one or more tables for the rows that are to be selected, updated or deleted. The
ORDER BY clause specifies how the output data — result set — should be sorted.
Options
Options, being the simplest constructs, are specified in association with specific keywords to
provide qualification for clause elements. Where alternative options are available, it is usual for
one of them to be the default, used if nothing is specified for that option. For instance, the SELECT
statement will return all rows that match the search criteria unless the DISTINCT option restricts
the output to non-duplicated rows.
Keywords
All words that are included in the SQL lexicon are keywords. Some keywords are reserved,
meaning their usage as identifiers for database objects, parameter names or variables is
prohibited in some or all contexts. Non-reserved keywords can be used as identifiers, although
this is not recommended. From time to time, non-reserved keywords may become reserved, or
new (reserved or non-reserved) keywords may be added when new language features are
introduced. Although unlikely, reserved words may also change to non-reserved keywords, or
keywords may be removed entirely, for example when parser rules change.
For example, the following statement will be executed without errors because, although ABS is a
keyword, it is not a reserved word.
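A minimal sketch of such a statement (the table name T is illustrative, not taken from the original example):

```sql
-- ABS is a non-reserved keyword, so it is accepted as a column name
CREATE TABLE T (ABS INT NOT NULL);
```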
On the contrary, the following statement will return an error because ADD is both a keyword and
a reserved word.
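A hypothetical counterexample (again with an assumed table name T):

```sql
-- ADD is a reserved word, so this statement fails with a syntax error
CREATE TABLE T (ADD INT NOT NULL);
```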
Refer to the list of reserved words and keywords in the chapter Reserved Words and Keywords.
2.3. Identifiers
All database objects have names, often called identifiers. The maximum identifier length is 63
characters in character set UTF8 (252 bytes).
Two types of names are valid as identifiers: regular identifiers, similar to variable names in regular
programming languages, and delimited identifiers that are specific to SQL. To be valid, each type of
identifier must conform to a set of rules, as follows:
• The name must start with an unaccented, 7-bit ASCII alphabetic character. It may be followed
by other 7-bit ASCII letters, digits, underscores or dollar signs. No other characters, including
spaces, are valid. The name is case-insensitive, meaning it can be declared and used in either
upper or lower case. Thus, from the system’s point of view, the following names are the same:
fullname
FULLNAME
FuLlNaMe
FullName
<name> ::=
<letter> | <name><letter> | <name><digit> | <name>_ | <name>$
<digit> ::= 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9
• Length cannot exceed 63 characters in character set UTF8 (252 bytes). Identifiers are stored in
character set UTF8, which means characters outside the ASCII range are stored using 2 to 4 bytes.
• It may contain any character from the UTF8 character set, including accented characters, spaces
and special characters
• Delimited identifiers are stored as-is and are case-sensitive in all contexts
• Trailing spaces in delimited identifiers are removed, as with any string constant
• Delimited identifiers are available in Dialect 3 only. For more details on dialects, see SQL
Dialects
A delimited identifier such as "FULLNAME" is the same as the regular identifiers FULLNAME, fullname,
FullName, and so on. The reason is that Firebird stores regular identifiers in upper case, regardless
of how they were defined or declared. Delimited identifiers are always stored according to the
exact case of their definition or declaration. Thus, "FullName" (quoted, or delimited) is different
from FullName (unquoted, or regular) which is stored as FULLNAME in the metadata.
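An illustrative sketch of this distinction (the object and column names are assumed, not from the original text):

```sql
-- Stored in the metadata as FULLNAME; all spellings of the
-- regular identifier, and the quoted "FULLNAME", refer to it
CREATE TABLE fullname (id INT);
SELECT id FROM FullName;
SELECT id FROM "FULLNAME";

-- Stored exactly as "FullName": a different object from FULLNAME
CREATE TABLE "FullName" (id INT);
SELECT id FROM "FullName";
```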
2.4. Literals
Literals are used to directly represent values in a statement. Examples of standard types of literals
are:
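As an illustrative sketch (the formats shown follow common SQL usage; exact rules are given in the section on literals):

```sql
SELECT 123,                -- integer literal
       12.345,             -- fixed-point literal
       'Hello, world!',    -- string literal
       DATE '2024-06-01'   -- date literal
FROM RDB$DATABASE;
```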
Details about literals for each data type are discussed in section Literals (Constants) of chapter
Common Language Elements.
Some of these characters, alone or in combination, may be used as operators (arithmetical, string,
logical), as SQL command separators, to quote identifiers, or to mark the limits of string literals or
comments.
Operator Syntax
<operator> ::=
<string concatenation operator>
| <arithmetic operator>
| <comparison operator>
| <logical operator>
2.6. Comments
Comments may be present in SQL scripts, SQL statements and PSQL modules. A comment can be
any text, usually used to document how particular parts of the code work. The parser ignores the
text of comments.
Firebird supports two types of comments: block (or bracketed) and in-line (or simple).
Syntax
Block comments start with the /* character pair and end with the */ character pair. Text in block
comments may be of any length and can occupy multiple lines.
In-line comments start with a pair of hyphen characters, -- and continue until the first linebreak
(end of line).
Example
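A sketch combining both comment styles (the query itself is an assumed illustration):

```sql
/* This is a block comment:
   it may occupy several lines */
SELECT 1
FROM RDB$DATABASE; -- this in-line comment runs to the end of the line
```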
Chapter 3. Data Types and Subtypes
• Define columns in a database table in the CREATE TABLE statement or change columns using ALTER
TABLE
• Declare or change a domain using the CREATE DOMAIN or ALTER DOMAIN statements
• Declare local variables, return values and parameters in PSQL modules and UDFs — user-
defined functions
• Provide arguments for the CAST() function when explicitly converting data from one type to
another
VARCHAR(n), CHAR VARYING(n), CHARACTER VARYING(n)
Size: n characters, from 1 to 32,765 bytes; the size in bytes depends on the encoding (the
number of bytes in a character).
Variable length string type. The total size of characters in bytes cannot be larger than
(32KB-3), taking into account their encoding. The two leading bytes store the declared length.
There is no default size: the n argument is mandatory. Leading and trailing spaces are stored,
and they are not trimmed, except for those trailing characters that are past the declared
length.
3.1.1. SMALLINT
The 16-bit SMALLINT data type is for compact data storage of integer data for which only a
narrow range of possible values is required. Numbers of the SMALLINT type are within the range
from -2^15 to 2^15 - 1, that is, from -32,768 to 32,767.
SMALLINT Examples
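A hypothetical sketch (the domain name and check constraint are illustrative):

```sql
CREATE DOMAIN DFLAG AS SMALLINT
  DEFAULT 0 NOT NULL
  CHECK (VALUE = -1 OR VALUE = 0 OR VALUE = 1);
```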
3.1.2. INTEGER
The INTEGER — or INT — data type is a 32-bit integer. Numbers of the INTEGER type are within the
range from -2^31 to 2^31 - 1, that is, from -2,147,483,648 to 2,147,483,647.
INTEGER Example
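An assumed illustration (table and column names are not from the original text):

```sql
CREATE TABLE CUSTOMER (
  CUST_NO INTEGER NOT NULL PRIMARY KEY,
  NAME    VARCHAR(60)
);
```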
3.1.3. BIGINT
BIGINT is a 64-bit integer data type. Numbers of the BIGINT type are within the range from -2^63
to 2^63 - 1, or from -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807.
3.1.4. INT128
INT128 is a 128-bit integer data type. This type is not defined in the SQL standard.
Numbers of the INT128 type are within the range from -2^127 to 2^127 - 1.
3.1.5. Hexadecimal Format for Integer Numbers
Constants of integer types can be specified in a hexadecimal format by means of 1 to 8 digits for
INTEGER, 9 to 16 hexadecimal digits for BIGINT, and 10 to 32 hexadecimal digits for INT128. Hex
representation for writing to SMALLINT is not explicitly supported, but Firebird will transparently
convert a hex number to SMALLINT if necessary, provided it falls within the ranges of negative and
positive SMALLINT.
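A hypothetical sketch of hexadecimal constants (the table and column names are assumed); note how the number of hex digits decides the type, and hence the sign of the resulting value:

```sql
CREATE TABLE T (BIGCOL BIGINT);

INSERT INTO T VALUES (0x80000000);   -- 8 hex digits: an INTEGER, value -2,147,483,648
INSERT INTO T VALUES (0x080000000);  -- 9 hex digits: a BIGINT, value 2,147,483,648
```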
The usage and numerical value ranges of hexadecimal notation are described in more detail in the
discussion of number constants in the chapter entitled Common Language Elements.
The hexadecimal INTEGERs in the above example are automatically cast to BIGINT before being
inserted into the table. However, this happens after the numerical value is determined, so
0x80000000 (8 digits) and 0x080000000 (9 digits) will be stored as different BIGINT values.
Approximate floating-point values are stored in an IEEE 754 binary format that comprises sign,
exponent and mantissa. Precision is dynamic, corresponding to the physical storage format of the
value, which is exactly 4 bytes for the FLOAT type and 8 bytes for DOUBLE PRECISION.
Considering the peculiarities of storing floating-point numbers in a database, these data types are
not recommended for storing monetary data. For the same reasons, columns with floating-point
data are not recommended for use as keys or to have uniqueness constraints applied to them.
For testing data in columns with floating-point data types, expressions should check using a range,
for instance, BETWEEN, rather than searching for exact matches.
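For example, such a range test might look like this (the table and column names are assumed):

```sql
-- Avoids an exact-match comparison on a DOUBLE PRECISION column
SELECT *
FROM MEASUREMENTS
WHERE READING BETWEEN 1.1249999 AND 1.1250001;
```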
When using these data types in expressions, extreme care is advised regarding the rounding of
evaluation results.
FLOAT
FLOAT [(bin_prec)]
Parameter Description
The FLOAT data type defaults to a 32-bit single precision floating-point type with an approximate
precision of 7 significant decimal digits (24 binary digits). To ensure the safety of storage,
rely on 6 decimal digits of precision.
• 1 <= bin_prec <= 24: 32-bit single precision (synonym for FLOAT)
• 25 <= bin_prec <= 53: 64-bit double precision (synonym for DOUBLE PRECISION)
FLOAT without an explicit precision behaves as the SQL standard type REAL.
Compatibility Notes
• Firebird 3.0 and earlier supported FLOAT(dec_prec) where dec_prec was the approximate
precision in decimal digits, with 0 <= dec_prec <= 7 mapped to 32-bit single precision and
dec_prec > 7 mapped to 64-bit double precision. This syntax was never documented.
• For bin_prec in FLOAT(bin_prec), the values 1 <= bin_prec <= 24 are all treated as bin_prec = 24,
values 25 <= bin_prec <= 53 are all handled as bin_prec = 53.
• Most Firebird tools will report FLOAT(1) — FLOAT(24) as FLOAT, and FLOAT(25) — FLOAT(53) as
DOUBLE PRECISION.
REAL
REAL
The data type REAL is a synonym for FLOAT, and is provided for syntax compatibility. When used to
define a column or parameter, it’s indistinguishable from using FLOAT or FLOAT(1) — FLOAT(24).
Compatibility Notes
• REAL has been available as a synonym for FLOAT since Firebird 1.0 and even earlier, but was
never documented.
DOUBLE PRECISION
DOUBLE PRECISION
The DOUBLE PRECISION data type is stored with an approximate precision of 15 digits.
Compatibility Notes
• Firebird also has the — previously undocumented — synonyms for DOUBLE PRECISION: LONG FLOAT
and LONG FLOAT(bin_prec), with 1 <= bin_prec <= 53.
These non-standard type names are deprecated and may be removed in a future Firebird
version.
• Firebird 3.0 and earlier supported LONG FLOAT(dec_prec) where dec_prec was the approximate
precision in decimal digits, where any value for dec_prec mapped to 64-bit double precision.
Decimal floating-point values are stored in an IEEE 754 decimal format that comprises sign,
exponent and coefficient. Contrary to the approximate floating-point data types, precision is either
16 or 34 decimal digits.
DECFLOAT
DECFLOAT [(dec_prec)]
Parameter Description
DECFLOAT is an SQL standard-compliant numeric type that stores floating-point numbers precisely
(decimal floating-point type), unlike FLOAT or DOUBLE PRECISION, which provide a binary
approximation of the purported precision.
The type is stored and transmitted as IEEE 754 standard types Decimal64 (DECFLOAT(16)) or
Decimal128 (DECFLOAT(34)).
The “16” and “34” refer to the maximum precision in Base-10 digits. See
https://en.wikipedia.org/wiki/IEEE_754#Basic_and_interchange_formats for a comprehensive
table.
Observe that although the smallest exponent for DECFLOAT(16) is -383, the smallest value has an
exponent of -398, but 15 fewer digits. Similarly, for DECFLOAT(34) the smallest exponent is -6143,
but the smallest value has an exponent of -6176, with 33 fewer digits. The reason is that precision
is “sacrificed” to be able to store a smaller value.
This is a result of how the value is stored: as a decimal value of 16 or 34 digits and an exponent. For
example, 1.234567890123456e-383 is stored as coefficient 1234567890123456 and exponent -398, while
1E-398 is stored as coefficient 1, exponent -398.
The behaviour of DECFLOAT operations in a session, specifically rounding and error behaviour, can
be configured using the SET DECFLOAT management statement, and the isc_dpb_decfloat_round and
isc_dpb_decfloat_traps DPB items.
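A minimal sketch of the management statement, using rounding modes and trap names as documented for SET DECFLOAT:

```sql
-- choose the rounding mode for subsequent DECFLOAT operations
SET DECFLOAT ROUND HALF_UP;

-- raise errors for these conditions instead of returning
-- special values such as NaN or Infinity
SET DECFLOAT TRAPS TO Division_by_zero, Overflow, Invalid_operation;
```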
It is possible to express DECFLOAT(34) values in approximate numeric literals, but only for values
with a mantissa of 20 or more digits, or an absolute exponent larger than 308. Scientific notation
literals with fewer digits or a smaller absolute exponent are DOUBLE PRECISION literals. Exact
numeric literals with 40 or more digits — actually 39 digits, when larger than the maximum INT128
value — are also handled as DECFLOAT(34).
Alternatively, use a string literal and explicitly cast to the desired DECFLOAT type.
The length of DECFLOAT literals cannot exceed 1024 characters. Scientific notation is required for
greater values. For example, 0.0<1020 zeroes>11 cannot be used as a literal, but the equivalent in
scientific notation, 1.1E-1022, is valid. Similarly, 10<1022 zeroes>0 can be presented as 1.0E1024.
Literals with more than 34 significant digits are rounded using the DECFLOAT rounding mode of the
session.
A number of standard scalar functions can be used with expressions and values of the DECFLOAT
type. They are:
The aggregate functions SUM, AVG, MAX and MIN work with DECFLOAT data, as do all the statistical
aggregates (including but not limited to STDDEV or CORR).
COMPARE_DECFLOAT
compares two DECFLOAT values to be equal, different or unordered
NORMALIZE_DECFLOAT
takes a single DECFLOAT argument and returns it in its simplest form
QUANTIZE
takes two DECFLOAT arguments and returns the first argument scaled using the second value as a
pattern
TOTALORDER
performs an exact comparison on two DECFLOAT values
Detailed descriptions are available in the Special Functions for DECFLOAT section of the Built-in
Scalar Functions chapter.
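A sketch of calling three of these functions; exact results depend on the session's DECFLOAT settings:

```sql
-- 0 = equal, 1 = first is less, 2 = first is greater, 3 = unordered
SELECT COMPARE_DECFLOAT(1.5, 2.5) FROM RDB$DATABASE;

-- returns the simplest form, e.g. stripping trailing zeroes
SELECT NORMALIZE_DECFLOAT(CAST('12.00' AS DECFLOAT(16))) FROM RDB$DATABASE;

-- rescales the first argument using the second as a pattern
SELECT QUANTIZE(1234.5678, 0.01) FROM RDB$DATABASE;
```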
According to the SQL standard, both types limit the stored number to the declared scale (the
number of digits after the decimal point). The standard defines different treatment of the precision
for each type: precision for NUMERIC columns is exactly “as declared”, while DECIMAL columns accept
numbers whose precision is at least equal to what was declared.
The behaviour of both NUMERIC and DECIMAL in Firebird is like the SQL-standard
DECIMAL; the precision is at least equal to what was declared.
For instance, NUMERIC(4, 2) defines a number consisting altogether of four digits, including two
digits after the decimal point; that is, it can have up to two digits before the point and no more
than two digits after the point. If the number 3.1415 is written to a column with this data type
definition, the value of 3.14 will be saved in the NUMERIC(4, 2) column.
The form of declaration for fixed-point data, for instance, NUMERIC(p, s), is common to both types.
The s argument in this template is scale. Understanding the mechanism for storing and retrieving
fixed-point data should help to visualise why: for storage, the number is multiplied by 10^s (10 to
the power of s), converting it to an integer; when read, the integer is converted back by
multiplying by 10^-s (or, dividing by 10^s).
The method of storing fixed-point data in the database depends on several factors: declared
precision, database dialect, declaration type.
Numerics with precision less than 19 digits use SMALLINT, INTEGER, BIGINT or DOUBLE PRECISION as the
base data type, depending on the number of digits and SQL dialect. When precision is between 19
and 38 digits an INT128 is used as the base data type, and the actual precision is always extended to
the full 38 digits.
For complex calculations, those digits are cast internally to DECFLOAT(34). The result of various
mathematical operations, such as LOG(), EXP() and so on, and aggregate functions using a high
precision numeric argument, will be DECFLOAT(34).
3.3.1. NUMERIC
Data Type Declaration Format
Parameter Description
Storage Examples
Further to the explanation above, Firebird will store NUMERIC data according to the declared precision
and scale. Some more examples are:
Always keep in mind that the storage format depends on the precision. For
instance, you define the column type as NUMERIC(2,2) presuming that its range of
values will be -0.99…0.99. However, the actual range of values for the column will
be -327.68…327.67, which is due to storing the NUMERIC(2,2) data type in the
SMALLINT format. In storage, the NUMERIC(4,2), NUMERIC(3,2) and NUMERIC(2,2) data
types are the same. This means that if you need to store data in a column with the
NUMERIC(2,2) data type and limit the range to -0.99…0.99, you will have to create a
CHECK constraint for it.
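A sketch of such a constraint (table and column names are illustrative):

```sql
CREATE TABLE rates (
  -- physically a SMALLINT, so without the CHECK the column
  -- would accept values from -327.68 to 327.67
  ratio NUMERIC(2,2) CHECK (ratio BETWEEN -0.99 AND 0.99)
);
```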
3.3.2. DECIMAL
Data Type Declaration Format
Parameter Description
Storage Examples
The storage format in the database for DECIMAL is similar to NUMERIC, with some differences that are
easier to observe with the help of some more examples:
Time zone support is available using the types TIME WITH TIME ZONE and TIMESTAMP WITH TIME ZONE.
In this language reference, we’ll use TIME and TIMESTAMP to refer both to the specific types without
time zone — TIME [WITHOUT TIME ZONE] and TIMESTAMP [WITHOUT TIME ZONE] — and to aspects of both the
without time zone and with time zone types; which one we mean is usually clear from the context.
The data types TIME WITHOUT TIME ZONE, TIMESTAMP WITHOUT TIME ZONE and DATE are
defined to use the session time zone when converting from or to a TIME WITH TIME
ZONE or TIMESTAMP WITH TIME ZONE. TIME and TIMESTAMP are synonymous to their
respective WITHOUT TIME ZONE data types.
Dialect 3 supports all five types, while Dialect 1 has only DATE. The DATE type in Dialect 3 is
“date-only”, whereas the Dialect 1 DATE type stores both date and time-of-day, equivalent to
TIMESTAMP in Dialect 3. Dialect 1 has no “date-only” type.
Fractions of Seconds
If fractions of seconds are stored in date and time data types, Firebird stores them to
ten-thousandths of a second (100 microseconds or deci-milliseconds). If a lower granularity is
preferred, the fraction can be specified explicitly as thousandths, hundredths or tenths of a second,
or seconds, in Dialect 3 databases of ODS 11 or higher.
The time-part of a TIME or TIMESTAMP is a 32-bit integer, with room for deci-millisecond (100
microseconds) precision; time values are stored as the number of deci-milliseconds elapsed since
midnight. The actual precision of values stored in or read from time(stamp) functions and
variables is:
• The EXTRACT() function returns up to deci-milliseconds precision with the SECOND and
MILLISECOND arguments
Deci-milliseconds precision is not supported by all drivers and access components. The best
assumption to make from all this is that, although Firebird stores TIME and the TIMESTAMP
time-part values as the number of deci-milliseconds (10^-4 seconds) elapsed since midnight, the
actual precision could vary from seconds to milliseconds.
The time zone types are stored as values at UTC (offset 0), using the structure of TIME or
TIMESTAMP + two extra bytes for time zone information (either an offset in minutes, or the id of
a named time zone). Storing as UTC allows Firebird to index and compare two values in
different time zones.
• When you use named zones, and the time zone rules for that zone change, the UTC time
stays the same, but the local time in the named zone may change.
• For TIME WITH TIME ZONE, calculating a time zone offset for a named zone to get the local
time in the zone applies the rules valid at the 1st of January 2020 to ensure a stable value.
This may result in unexpected or confusing results.
• When the rules of a named time zone change, a value in the affected date range may no
longer match the intended value if the actual offset in that named zone changes.
3.4.1. DATE
Syntax
DATE
The DATE data type in Dialect 3 stores only date without time. The available range for storing data is
from January 01, 1 to December 31, 9999.
In Dialect 1, date literals without a time part, as well as casts of date mnemonics
'TODAY', 'YESTERDAY' and 'TOMORROW' automatically get a zero time part.
If you need to store a Dialect 1 timestamp literal with an explicit zero time-part,
the engine will accept a literal like '2016-12-25 00:00:00.0000'. However, '2016-12-
25' would have the same effect.
Storage of Dates
Internally, Firebird stores dates in a 32-bit integer as a Modified Julian Date, or the number of
days since 1858-11-17. An additional restriction is imposed, limiting valid dates to the range
from 0001-01-01 AD to 9999-12-31 AD.
3.4.2. TIME
Syntax
The TIME data type is available in Dialect 3 only. It stores the time of day within the range from
00:00:00.0000 to 23:59:59.9999.
If you need to get the time-part from DATE in Dialect 1, you can use the EXTRACT function.
See also the EXTRACT() function in the chapter entitled Built-in Scalar Functions.
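A sketch of extracting time-of-day components (stamp_col and some_table are hypothetical names):

```sql
SELECT EXTRACT(HOUR   FROM stamp_col) AS h,
       EXTRACT(MINUTE FROM stamp_col) AS m,
       EXTRACT(SECOND FROM stamp_col) AS s
FROM some_table;
```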
The TIME (or synonym TIME WITHOUT TIME ZONE) represents a time without time zone information.
The TIME WITH TIME ZONE represents a time with time zone information (either an offset or a named
zone).
Firebird uses the ICU implementation of the IANA Time Zone Database for named zones.
3.4.3. TIMESTAMP
Syntax
The TIMESTAMP data type is available in Dialect 3 and Dialect 1. It comprises two 32-bit integers — a
date-part and a time-part — to form a structure that stores both date and time-of-day. In Dialect 1,
DATE is an alias for TIMESTAMP.
The EXTRACT function works equally well with TIMESTAMP as with the Dialect 1 DATE type.
The TIMESTAMP (or synonym TIMESTAMP WITHOUT TIME ZONE) represents a time and date without time
zone information.
The TIMESTAMP WITH TIME ZONE represents a time with time zone information (either an offset or a
named zone).
As the name implies, the session time zone can be different for each database attachment. It can be
set explicitly in the DPB with the item isc_dpb_session_time_zone; otherwise, by default, it uses the
same time zone as the operating system of the Firebird server process. This default can be
overridden in firebird.conf, setting DefaultTimeZone.
Drivers may apply different defaults, for example specifying the client time zone
as the default session time zone. Check your driver documentation for details.
Subsequently, the time zone can be changed to a given time zone using a SET TIME ZONE statement or
reset to its original value with SET TIME ZONE LOCAL or ALTER SESSION RESET.
A time zone is specified as a string, either a time zone region (for example, 'America/Sao_Paulo') or
a displacement from GMT in hours:minutes (for example, '-03:00').
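Both forms in use, as a sketch:

```sql
-- switch the session to a named region
SET TIME ZONE 'America/Sao_Paulo';

-- or to a fixed displacement from GMT
SET TIME ZONE '-03:00';

-- revert to the original session time zone
SET TIME ZONE LOCAL;
```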
Supported time zone region names can be found in the system table RDB$TIME_ZONES.
A time/timestamp with time zone is considered equal to another time/timestamp with time zone if
their conversions to UTC are equivalent. For example, time '10:00 -02:00' and time '09:00 -03:00'
are equivalent, since both are the same as time '12:00 GMT'.
The same equivalence applies in UNIQUE constraints and for sorting purposes.
The method of storing date and time values makes it possible to involve them as operands in some
arithmetic operations. In storage, a date value or date-part of a timestamp is represented as the
number of days elapsed since “date zero” — November 17, 1858 — whilst a time value or the time-
part of a timestamp is represented as the number of deci-milliseconds (100 microseconds) since
midnight.
An example is to subtract an earlier date, time or timestamp from a later one, resulting in an
interval of time, in days and fractions of days.
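A sketch of such arithmetic:

```sql
-- subtracting two dates yields the interval in days
SELECT DATE '2024-03-01' - DATE '2024-02-01'
FROM RDB$DATABASE;  -- 29 (2024 is a leap year)

-- subtracting timestamps yields days with a fractional part
SELECT TIMESTAMP '2024-01-02 06:00' - TIMESTAMP '2024-01-01 18:00'
FROM RDB$DATABASE;  -- 0.5 (12 hours)
```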
See also
DATEADD, DATEDIFF
Package RDB$TIME_ZONE_UTIL
Time zones are often changed: of course, when it happens, it is desirable to update the time zone
database as soon as possible.
Firebird stores WITH TIME ZONE values translated to UTC time. Suppose a value is created with one
time zone database, and a later update of that database changes the information in the range of our
stored value. When that value is read, it will be returned as different to the value that was stored
initially.
Firebird uses the IANA time zone database through the ICU library. The ICU library included in the
Firebird kit (Windows), or installed in a POSIX operating system, can sometimes have an outdated
time zone database.
An updated database can be found on this page on the FirebirdSQL GitHub. The file le.zip is for
little-endian architectures (the most common, including Intel/AMD-compatible x86 and x64), while
be.zip is for big-endian architectures, mostly RISC. The content of the zip file must be extracted
into the /tzdata sub-directory of the Firebird installation, overwriting the existing *.res files
belonging to the database.
./tzdata is the default directory where Firebird looks for the time zone database. It can be
overridden with the ICU_TIMEZONE_FILES_DIR environment variable.
If no character set is explicitly specified when defining a character object, the default character set
of the database — at time of defining the object — will be used. If the database does not have a
default character set defined, the object gets the character set NONE.
3.5.1. Unicode
Most current development tools support Unicode, implemented in Firebird with the character sets
UTF8 and UNICODE_FSS. UTF8 comes with collations for many languages. UNICODE_FSS is more limited
and was previously used mainly by Firebird internally for storing metadata. Keep in mind that one
UTF8 character occupies up to 4 bytes, thus limiting the size of CHAR fields to 8,191 characters
(32,767/4).
The actual “bytes per character” value depends on the range the character belongs to. Non-accented
Latin letters occupy 1 byte, Cyrillic letters from the WIN1251 encoding occupy 2 bytes in UTF8,
characters from other encodings may occupy up to 4 bytes.
The UTF8 character set implemented in Firebird supports the latest version of the Unicode standard,
which makes it the recommended choice for international databases.
While working with strings, it is essential to keep the character set of the client connection in mind.
If there is a mismatch between the character sets of the stored data and that of the client
connection, the output results for string columns are automatically re-encoded, both when data are
sent from the client to the server and when they are sent back from the server to the client. For
example, if the database was created in the WIN1251 encoding but KOI8R or UTF8 is specified in the
client’s connection parameters, the mismatch will be transparent.
3.5.4. Collation
Each character set has a default collation (COLLATE) that specifies the collation order (or, collation
sequence, or collating sequence). Usually, it provides nothing more than ordering based on the
numeric code of the characters and a basic mapping of upper- and lower-case characters. If some
behaviour is needed for strings that is not provided by the default collation and a suitable
alternative collation is supported for that character set, a COLLATE collation clause can be specified
in the column definition.
A COLLATE collation clause can be applied in other contexts besides the column definition. For
comparison operations, it can be added in the WHERE clause of a SELECT statement. If output needs to
be sorted in a special alphabetic sequence, or case-insensitively, and the appropriate collation
exists, then a COLLATE clause can be included with the ORDER BY clause when rows are being sorted
on a character field and with the GROUP BY clause in case of grouping operations.
Case-Insensitive Searching
For a case-insensitive search, the UPPER function could be used to convert both the search argument
and the searched strings to upper-case before attempting a match:
...
where upper(name) = upper(:flt_name)
For strings in a character set that has a case-insensitive collation available, you can apply that
collation to compare the search argument and the searched strings directly. For example, with the
WIN1251 character set, the collation PXW_CYRL is case-insensitive:
...
WHERE FIRST_NAME COLLATE PXW_CYRL >= :FLT_NAME
...
See also
CONTAINING
UTF8 Collations
The following table shows the possible collations for the UTF8 character set.
Collation Characteristics
UCS_BASIC Collation works according to the position of the character in the table
(binary).
UNICODE Collation works according to the UCA algorithm (Unicode Collation
Algorithm) (alphabetical).
UTF8 The default, binary collation, identical to UCS_BASIC, which was added for
SQL compatibility.
UNICODE_CI Case-insensitive collation, works without taking character case into
account.
UNICODE_CI_AI Case-insensitive, accent-insensitive collation, works alphabetically
without taking character case or accents into account.
Example
An example of collation for the UTF8 character set without taking into account the case or
accentuation of characters (similar to COLLATE PXW_CYRL in the earlier example).
...
ORDER BY NAME COLLATE UNICODE_CI_AI
The maximum length for an index key equals one quarter of the page size, i.e. from 1,024 — for
page size 4,096 — to 8,192 bytes — for page size 32,768. The maximum length of an indexed string is
9 bytes less than that quarter-page limit.
The following formula calculates the maximum length of an indexed string (in characters), where N
is the number of bytes per character:
max_char_length = FLOOR((page_size / 4 - 9) / N)
The table below shows the maximum length of an indexed string (in characters), according to page
size and character set, calculated using this formula.
Table 10. Maximum Index Lengths by Page Size and Character Size

Page size    Bytes per character
             1        2        3        4        6
4096         1015     507      338      253      169
8192         2039     1019     679      509      339
16384        4087     2043     1362     1021     681
32768        8183     4091     2727     2045     1363
With case-insensitive collations (“_CI”), one character in the index key will occupy not 4, but 6 (six)
bytes, so the maximum key length for a page of — for example — 4,096 bytes, will be 169 characters.
See also
CREATE DATABASE, Collation, SELECT, WHERE, GROUP BY, ORDER BY
BINARY
BINARY [(length)]
Parameter Description
BINARY is a fixed-length binary data type, and is an SQL standard-compliant alias for CHAR(length)
CHARACTER SET OCTETS. Values shorter than the declared length are padded with NUL (0x00) up to the
declared length.
Some tools may report the type as CHAR CHARACTER SET OCTETS instead of BINARY.
See also
CHAR, VARBINARY
CHAR
Parameter Description
CHAR is a fixed-length character data type. Values shorter than the declared length are padded with
spaces up to the declared length. The pad character does not have to be a space (0x20): it depends
on the character set. For example, the pad character for the OCTETS character set is NUL (0x00).
Fixed-length character data can be used to store codes whose length is standard and has a definite
“width”. An example of such a code is an EAN13 barcode — 13 characters, all filled.
Formally, the COLLATE clause is not part of the data type declaration, and its position depends on the
syntax of the statement.
See also
BINARY, VARCHAR
VARBINARY
Parameter Description
Some tools may report the type as VARCHAR CHARACTER SET OCTETS instead of
VARBINARY.
See also
VARCHAR, BINARY
VARCHAR
Parameter Description
VARCHAR is a variable-length character data type, up to a maximum of 32,765 bytes. The stored
structure is equal to the actual size of the data plus 2 bytes to record the length of the data.
All characters that are sent from the client application to the database are considered meaningful,
including leading and trailing spaces.
Formally, the COLLATE clause is not part of the data type declaration, and its position depends on the
syntax of the statement.
See also
VARBINARY, CHAR
NCHAR
NCHAR is a fixed-length character data type with the ISO8859_1 character set. In all other respects it is
the same as CHAR.
A similar data type is available for the variable-length string type: NATIONAL {CHAR | CHARACTER}
VARYING.
See also
CHAR, VARCHAR
BOOLEAN
The SQL-compliant BOOLEAN data type (8 bits) comprises the distinct truth values TRUE and FALSE.
Unless prohibited by a NOT NULL constraint, the BOOLEAN data type also supports the truth value
UNKNOWN as the null value. The specification does not make a distinction between the NULL value of
this data type, and the truth value UNKNOWN that is the result of an SQL predicate, search condition, or
Boolean value expression: they may be used interchangeably to mean the same thing.
As with many programming languages, the SQL BOOLEAN values can be tested with implicit truth
values. For example, field1 OR field2 and NOT field1 are valid expressions.
The IS Operator
Predicates can use the operator Boolean IS [NOT] for matching. For example, field1 IS FALSE, or
field1 IS NOT TRUE.
• Equivalence operators (“=”, “!=”, “<>” and so on) are valid in all comparisons.
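A sketch combining implicit truth testing and the IS operator (employees and is_active are hypothetical names):

```sql
-- implicit truth value: no comparison operator needed
SELECT * FROM employees WHERE is_active;

-- IS NOT TRUE matches both FALSE and NULL (UNKNOWN) rows
SELECT * FROM employees WHERE is_active IS NOT TRUE;
```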
BOOLEAN Examples
7. Valid syntax, but as with a comparison with NULL, will never return any record
Although BOOLEAN is not inherently convertible to any other data type, the strings 'true' and 'false'
(case-insensitive) will be implicitly cast to BOOLEAN in value expressions. For example:
The value 'false' is converted to BOOLEAN. However, any attempt to use the Boolean operators AND,
NOT, OR and IS directly on such string literals will fail: NOT 'False', for example, is invalid.
A BOOLEAN can be explicitly converted to and from string with CAST. UNKNOWN is not available for any
form of casting.
For ordering and comparison, the value TRUE is greater than the value FALSE.
BLOBs (Binary Large Objects) are complex structures used to store text and binary data of an
undefined length, often very large.
Syntax
If the SUB_TYPE and CHARACTER SET clauses are absent, then subtype BINARY (or 0) is used. If the
SUB_TYPE clause is absent and the CHARACTER SET clause is present, then subtype TEXT (or 1) is used.
Shortened syntax
Formally, the COLLATE clause is not part of the data type declaration, and its position depends on the
syntax of the statement.
Segment Size
Specifying the BLOB segment size is a throwback to times past, when applications for
working with BLOB data were written in C (Embedded SQL) with the help of the gpre pre-
compiler. Nowadays, it is effectively irrelevant. The segment size for BLOB data is determined
by the client side and is usually larger than the data page size, in any case.
The optional SUB_TYPE parameter specifies the nature of data written to the column. Firebird
provides two pre-defined subtypes for storing user data:
Subtype 0: BINARY
If a subtype is not specified, the specification is assumed to be for untyped data and the default
SUB_TYPE BINARY (or SUB_TYPE 0) is applied. This is the subtype to specify when the data are any
form of binary file or stream: images, audio, word-processor files, PDFs and so on.
Subtype 1: TEXT
Subtype 1 has an alias, TEXT, which can be used in declarations and definitions. For instance, BLOB
SUB_TYPE TEXT (or BLOB SUB_TYPE 1). It is a specialized subtype used to store plain text data that is
too large to fit into a string type. A CHARACTER SET may be specified, if the field is to store text with
a different encoding to that specified for the database. A COLLATE clause is also supported.
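A sketch of declaring both pre-defined subtypes (table and column names are illustrative):

```sql
CREATE TABLE documents (
  id         INTEGER NOT NULL PRIMARY KEY,
  body       BLOB SUB_TYPE TEXT CHARACTER SET UTF8,  -- large text
  attachment BLOB SUB_TYPE BINARY                    -- raw bytes
);
```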
Custom Subtypes
It is also possible to add custom data subtypes, for which the range of enumeration from -1 to
-32,768 is reserved. Custom subtypes enumerated with positive numbers are not allowed, as the
Firebird engine uses the numbers from 2 upward for some internal subtypes in metadata. Custom
subtype aliases can be inserted into the RDB$TYPES table by users with the system privilege
CREATE_USER_TYPES.
= (assignment)
As an efficient alternative to concatenation, you can also use BLOB_APPEND() or the functions and
procedures of system package RDB$BLOB_UTIL.
Partial support:
• An error occurs with these if the search argument is larger than or equal to 32 KB:
• Aggregation clauses work not on the contents of the field itself, but on the BLOB ID. Aside from
that, there are some quirks:
BLOB Storage
• By default, a regular record is created for each BLOB, and it is stored on a data page that is
allocated for it. If the entire BLOB fits onto this page, it is called a level 0 BLOB. The number of
this special record is stored in the table record and occupies 8 bytes.
• If a BLOB does not fit onto one data page, its contents are put onto separate pages allocated
exclusively to it (blob pages), while the numbers of these pages are stored into the BLOB record.
This is a level 1 BLOB.
• If the array of page numbers containing the BLOB data does not fit onto a data page, the array is
put on separate blob pages, while the numbers of these pages are put into the BLOB record. This
is a level 2 BLOB.
See also
FILTER, DECLARE FILTER, BLOB_APPEND(), RDB$BLOB_UTIL
The support of arrays in the Firebird DBMS is a departure from the traditional relational model.
Supporting arrays in the DBMS could make it easier to solve some data-processing tasks involving
large sets of similar data.
Arrays in Firebird are stored in a BLOB of a specialized type. Arrays can be one-dimensional or
multi-dimensional, and of any data type except BLOB and ARRAY.
Example
This example will create a table with a field of the array type consisting of four integers. The
subscripts of this array are from 1 to 4.
By default, dimensions are 1-based — subscripts are numbered from 1. To specify explicit upper
and lower bounds of the subscript values, use the following syntax:
A new dimension is added using a comma in the syntax. In this example we create a table with a
two-dimensional array, with the lower bound of subscripts in both dimensions starting from zero:
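A sketch of both declarations, using the lower:upper bounds syntax (table and column names are illustrative):

```sql
-- one-dimensional array with explicit bounds 0..3
CREATE TABLE sample1 (arr INTEGER[0:3]);

-- two-dimensional 4x4 array, both dimensions starting from zero
CREATE TABLE sample2 (arr2 INTEGER[0:3, 0:3]);
```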
The database employee.fdb, found in the ../examples/empbuild directory of any Firebird distribution
package, contains a sample stored procedure showing some simple work with arrays:
If the features described are enough for your tasks, you might consider using arrays in your
projects. Currently, no improvements are planned to enhance support for arrays in Firebird.
The SQL_NULL type holds no data, but only a state: NULL or NOT NULL. It is not available as a data type
for declaring table fields, PSQL variables or parameter descriptions. This data type exists to support
the use of untyped parameters in expressions involving the IS NULL predicate.
An evaluation problem occurs when optional filters are used to write queries of the following type:
After processing, at the API level, the query will look like this:
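A sketch of the two forms being described (table, column and parameter names are illustrative): the named-parameter query as the developer writes it, and its positional equivalent at the API level:

```sql
-- as written by the developer: one named parameter, used twice
SELECT *
FROM example_table
WHERE col1 = :param1 OR :param1 IS NULL;

-- as seen at the API level: two independent positional parameters
SELECT *
FROM example_table
WHERE col1 = ? OR ? IS NULL;
```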
This is a case where the developer writes an SQL query and considers :param1 as though it were a
variable that they can refer to twice. However, at the API level, the query contains two separate and
independent parameters. The server cannot determine the type of the second parameter since it
comes in association with IS NULL.
The SQL_NULL data type solves this problem. Whenever the engine encounters a “? IS NULL”
predicate in a query, it assigns the SQL_NULL type to the parameter, which indicates that the
parameter is only about “nullness”, and that the data type or the value need not be addressed.
The following example demonstrates its use in practice. It assumes two named parameters — say,
:size and :colour — which might, for example, get values from on-screen text fields or drop-down
lists. Each named parameter corresponds with two positional parameters in the query.
SELECT
SH.SIZE, SH.COLOUR, SH.PRICE
FROM SHIRTS SH
WHERE (SH.SIZE = ? OR ? IS NULL)
AND (SH.COLOUR = ? OR ? IS NULL)
Explaining what happens here assumes the reader is familiar with the Firebird API and the passing
of parameters in XSQLVAR structures — what happens under the surface will not be of interest to
those who are not writing drivers or applications that communicate using the “naked” API.
The application passes the parameterized query to the server in the usual positional ?-form. Pairs of
“identical” parameters cannot be merged into one, so for the two optional filters in the example,
four positional parameters are needed: one for each ? in our example.
After the call to isc_dsql_describe_bind(), the SQLTYPE of the second and fourth parameters will be
set to SQL_NULL. Firebird has no knowledge of their special relation with the first and third
parameters: that responsibility lies entirely on the application side.
Once the values for size and colour have been set (or left unset) by the user, and the query is about
to be executed, each pair of XSQLVARs must be filled as follows:
If the user has supplied a value:
• First parameter (value compare): set *sqldata to the supplied value and *sqlind to 0 (for NOT NULL)
• Second parameter (NULL test): set sqldata to null (null pointer, not SQL NULL) and *sqlind to 0 (for NOT NULL)
If the user has left the field blank:
• Both parameters: set sqldata to null (null pointer, not SQL NULL) and *sqlind to -1 (indicating NULL)
In other words: The value compare parameter is always set as usual. The SQL_NULL parameter is set
the same, except that sqldata remains null at all times.
The CAST function enables explicit conversion between many pairs of data types.
Syntax
CAST (<expression> AS <target_type>)
<target_type> ::=
<domain_or_non_array_type> | <array_datatype>
<domain_or_non_array_type> ::=
!! See Scalar Data Types Syntax !!
<array_datatype> ::=
!! See Array Data Types Syntax !!
Casting to a Domain
When you cast to a domain, any constraints declared for it are taken into account, i.e. NOT NULL or
CHECK constraints. If the value does not pass the check, the cast will fail.
If TYPE OF is additionally specified — casting to its base type — any domain constraints are ignored
during the cast. If TYPE OF is used with a character type (CHAR/VARCHAR), the character set and
collation are retained.
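The difference can be sketched with a hypothetical domain (the name D_WEEKDAY and its CHECK constraint are assumptions for illustration):

```sql
-- Assume: CREATE DOMAIN D_WEEKDAY AS SMALLINT CHECK (VALUE BETWEEN 1 AND 7);
SELECT CAST(5 AS D_WEEKDAY) FROM RDB$DATABASE;          -- passes the CHECK
SELECT CAST(9 AS D_WEEKDAY) FROM RDB$DATABASE;          -- violates the CHECK, raises an error
SELECT CAST(9 AS TYPE OF D_WEEKDAY) FROM RDB$DATABASE;  -- succeeds: only the base type SMALLINT is used
```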
When operands are cast to the type of a column, the specified column may be from a table or a
view.
Only the type of the column itself is used. For character types, the cast includes the character set,
but not the collation. The constraints and default values of the source column are not applied.
Example
SELECT
CAST ('I have many friends' AS TYPE OF COLUMN TTT.S)
FROM RDB$DATABASE;
To convert string data types to the BOOLEAN type, the value must be (case-insensitive) 'true' or
'false', or NULL.
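For example (a sketch; the letter case of the literals is irrelevant, and the column aliases are arbitrary):

```sql
SELECT CAST('true' AS BOOLEAN) AS B1,   -- TRUE
       CAST('FALSE' AS BOOLEAN) AS B2   -- FALSE
FROM RDB$DATABASE;
```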
Keep in mind that partial information loss is possible. For instance, when you cast
the TIMESTAMP data type to the DATE data type, the time-part is lost.
Datetime Formats
To cast string data types to the DATE, TIME or TIMESTAMP data types, you need the string argument to
be one of the predefined datetime mnemonics (see Table 16) or a representation of the date in one
of the allowed datetime formats (see Datetime Format Syntax).
Literal Description
'NOW' Current date and time
'TODAY' Current date
'TOMORROW' Current date + 1 (day)
'YESTERDAY' Current date - 1 (day)
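For instance, the mnemonics can be used through a full CAST (the column aliases are arbitrary):

```sql
SELECT CAST('NOW' AS TIMESTAMP) AS TS_NOW,
       CAST('TOMORROW' AS DATE) AS D_TOMORROW
FROM RDB$DATABASE;
```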
Casting the date mnemonics 'TODAY', 'TOMORROW' or 'YESTERDAY' to a TIMESTAMP WITH TIME ZONE will
produce a value at 00:00:00 UTC rebased to the session time zone.
For example, cast('TODAY' as timestamp with time zone) on 2021-05-02 20:00 - 2021-05-03 19:59
New York (or 2021-05-03 00:00 - 2021-05-03 23:59 UTC) with session time zone America/New_York,
will produce a value TIMESTAMP '2021-05-02 20:00:00.0000 America/New_York', while cast('TODAY' as
date) or CURRENT_DATE will produce either DATE '2021-05-02' or DATE '2021-05-03' depending on the
actual date.
select
cast('04.12.2014' as date) as d1, -- DD.MM.YYYY
cast('04 12 2014' as date) as d2, -- MM DD YYYY
cast('4-12-2014' as date) as d3, -- MM-DD-YYYY
cast('04/12/2014' as date) as d4, -- MM/DD/YYYY
cast('04.12.14' as date) as d5, -- DD.MM.YY
-- DD.MM with current year
cast('04.12' as date) as d6,
-- MM/DD with current year
cast('04/12' as date) as d7,
cast('2014/12/04' as date) as d8, -- YYYY/MM/DD
cast('2014 12 04' as date) as d9, -- YYYY MM DD
cast('2014.12.04' as date) as d10, -- YYYY.MM.DD
cast('2014-12-04' as date) as d11, -- YYYY-MM-DD
cast('4 Jan 2014' as date) as d12, -- DD MM YYYY
cast('2014 Jan 4' as date) as dt13, -- YYYY MM DD
cast('Jan 4 2014' as date) as dt14, -- MM DD YYYY
cast('11:37' as time) as t1, -- HH:mm
cast('11:37:12' as time) as t2, -- HH:mm:ss
cast('11:31:12.1234' as time) as t3, -- HH:mm:ss.nnnn
-- DD.MM.YYYY HH:mm
cast('04.12.2014 11:37' as timestamp) as dt1,
-- MM/DD/YYYY HH:mm:ss
cast('04/12/2014 11:37:12' as timestamp) as dt2,
-- DD.MM.YYYY HH:mm:ss.nnnn
cast('04.12.2014 11:37:12.1234' as timestamp) as dt3
from rdb$database;
Firebird allows the use of a shorthand “C-style” type syntax for casts from string to the types DATE,
TIME and TIMESTAMP. The SQL standard calls these “datetime literals”.
Syntax
<data_type> 'date_format_string'
These literal expressions are evaluated directly during parsing, as though the
statement were already prepared for execution. As this produced unexpected or
confusing results when using the datetime mnemonics like 'NOW', especially in
PSQL code, the datetime mnemonics are no longer allowed in datetime literals
since Firebird 4.0.
To use datetime mnemonics, use the full CAST syntax. An example of using such an
expression in a trigger:
NEW.CHANGE_DATE = CAST('NOW' AS TIMESTAMP);
Implicit data conversion is not possible in Dialect 3 — the CAST function is almost always required to
avoid data type clashes.
In Dialect 1, in many expressions, one type is implicitly cast to another without the need to use the
CAST function. For instance, the following statement in Dialect 1 is valid:
UPDATE ATABLE
SET ADATE = '25.12.2016' + 1
In Dialect 3, this statement will raise error 335544569, “Dynamic SQL Error: expression evaluation
not supported, Strings cannot be added or subtracted in dialect 3” — a cast will be needed:
UPDATE ATABLE
SET ADATE = DATE '25.12.2016' + 1
In Dialect 1, mixing integer data and numeric strings is usually possible because the parser will try
to cast the string implicitly. For example,
2 + '1'
In Dialect 3, an expression like this will raise an error, so you will need to write it as a CAST
expression:
2 + CAST('1' AS SMALLINT)
When multiple data elements are being concatenated, all non-string data will undergo implicit
conversion to string, if possible.
Example
SELECT 30||' days hath September, April, June and November' CONCAT$
FROM RDB$DATABASE;
CONCAT$
------------------------------------------------
30 days hath September, April, June and November
Domain usage is not limited to column definitions for tables and views. Domains can be used to
declare input and output parameters and variables in PSQL code.
A domain definition has required and optional attributes. The data type is a required attribute.
Optional attributes include:
• a default value
• CHECK constraints
• character set (for character data types and text BLOB fields)
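A sketch of a definition combining these attributes (the domain name is hypothetical):

```sql
CREATE DOMAIN D_SALARY AS
  NUMERIC(12,2)
  DEFAULT 0
  NOT NULL
  CHECK (VALUE >= 0);
```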
See also
Explicit Data Type Conversion for the description of differences in the data conversion mechanism
when domains are specified for the TYPE OF and TYPE OF COLUMN modifiers.
While defining a column using a domain, it is possible to override some attributes inherited from
the domain. Table 17 summarises the rules for domain override.
Attribute Override? Comments
Data type No
Text character set Yes It can also be used to restore the default
database values for the column
CHECK constraints Yes To add new conditions to the check, you can use
the corresponding CHECK clauses in the CREATE
and ALTER statements at the table level
NOT NULL No Often it is better to leave the domain nullable in its
definition and decide whether to make it NOT
NULL when using the domain to define columns
Short Syntax
See also
CREATE DOMAIN in the Data Definition (DDL) Statements chapter.
Altering a Domain
To change the attributes of a domain, use the DDL statement ALTER DOMAIN. With this statement you
can:
Short Syntax
Example
When changing a domain, its dependencies must be taken into account: whether there are table
columns, any variables, input and/or output parameters with the type of this domain declared in
the PSQL code. If you change domains in haste, without carefully checking them, your code may
stop working!
When you convert data types in a domain, you must not perform any conversions
that may result in data loss. For example, if you convert VARCHAR to INTEGER,
check carefully that all data using this domain can be successfully converted.
See also
ALTER DOMAIN in the Data Definition (DDL) Statements chapter.
The DDL statement DROP DOMAIN deletes a domain from the database, provided it is not in use by any
other database objects.
Syntax
Example
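A minimal sketch (the domain name is hypothetical; the statement fails if any table, procedure or other object still references the domain):

```sql
DROP DOMAIN D_SALARY;
```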
See also
DROP DOMAIN in the Data Definition (DDL) Statements chapter.
The syntax documented below is referenced from other parts of this language reference.
The scalar data types are simple data types that hold a single value. For reasons of organisation, the
syntax of the BLOB types is defined separately in BLOB Data Types Syntax.
<domain_or_non_array_type> ::=
<scalar_datatype>
| <blob_datatype>
| [TYPE OF] domain
| TYPE OF COLUMN rel.col
<scalar_datatype> ::=
SMALLINT | INT[EGER] | BIGINT | INT128
| REAL | FLOAT [(bin_prec)] | DOUBLE PRECISION
| DECFLOAT [(dec_prec)]
| BOOLEAN
| DATE
Argument Description
scale Scale, or number of decimals. From 0 to 38. It must be less than or equal
to precision
domain_or_non_array_type Non-array types that can be used in PSQL code and casts
A domain name can be specified as the type of a PSQL parameter or local variable. The parameter
or variable will inherit all domain attributes. If a default value is specified for the parameter or
variable, it overrides the default value specified in the domain definition.
If the TYPE OF clause is added before the domain name, only the data type of the domain is used:
any of the other attributes of the domain — NOT NULL constraint, CHECK constraints, default
value — are neither checked nor used. However, if the domain is of a text type, its character set and
collation are always used.
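A sketch of both forms in PSQL (the domain, procedure and parameter names are hypothetical):

```sql
-- Assume: CREATE DOMAIN D_NAME AS VARCHAR(40) NOT NULL;
SET TERM ^;
CREATE PROCEDURE GREET (P_NAME TYPE OF D_NAME)  -- inherits only VARCHAR(40); NOT NULL is not enforced
RETURNS (GREETING VARCHAR(50))
AS
BEGIN
  GREETING = 'Hello, ' || COALESCE(P_NAME, 'stranger');
  SUSPEND;
END^
SET TERM ;^
```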
Input and output parameters or local variables can also be declared using the data type of columns
in existing tables and views. The TYPE OF COLUMN clause is used for that, specifying relationname
.columnname as its argument.
When TYPE OF COLUMN is used, the parameter or variable inherits only the data type and — for string
types — the character set and collation. The constraints and default value of the column are
ignored.
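A sketch, assuming a table EMPLOYEE with columns EMP_NO and LAST_NAME:

```sql
SET TERM ^;
CREATE PROCEDURE FIND_EMP (P_NAME TYPE OF COLUMN EMPLOYEE.LAST_NAME)
RETURNS (EMP_NO INTEGER)
AS
BEGIN
  -- P_NAME inherits the column's type, character set and collation only
  FOR SELECT EMP_NO FROM EMPLOYEE
      WHERE LAST_NAME = :P_NAME
      INTO :EMP_NO
  DO SUSPEND;
END^
SET TERM ;^
```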
The BLOB data types hold binary, character or custom format data of unspecified size. For more
information, see Binary Data Types.
<blob_datatype> ::=
BLOB [SUB_TYPE {subtype_num | subtype_name}]
[SEGMENT SIZE seglen] [CHARACTER SET charset]
| BLOB [(seglen [, subtype_num])]
| BLOB [(, subtype_num)]
Argument Description
subtype_name BLOB subtype mnemonic name; this can be TEXT, BINARY, or one of the
(other) standard or custom names defined in RDB$TYPES for RDB$FIELD_NAME
= 'RDB$FIELD_SUB_TYPE'.
seglen Segment size, cannot be greater than 65,535, defaults to 80 when not
specified. See also Segment Size
If the SUB_TYPE and CHARACTER SET clauses are absent, then subtype BINARY (or 0) is used. If the
SUB_TYPE clause is absent and the CHARACTER SET clause is present, then subtype TEXT (or 1) is used.
The array data types hold multiple scalar values in a single or multi-dimensional array. For more
information, see Array Types.
<array_datatype> ::=
{SMALLINT | INT[EGER] | BIGINT | INT128} <array_dim>
| {REAL | FLOAT [(bin_prec)] | DOUBLE PRECISION} <array_dim>
| DECFLOAT [(dec_prec)] <array_dim>
| BOOLEAN <array_dim>
| DATE <array_dim>
| TIME [{WITHOUT | WITH} TIME ZONE] <array_dim>
Argument Description
scale Scale, or number of decimals. From 0 to 38. It must be less than or equal
to precision
[1] In practice, the actual range is determined by the backing type; for NUMERIC(4, s) that is SMALLINT, which means that, for example, NUMERIC(4, 2) can store [-327.68, 327.67].
Chapter 4. Common Language Elements
4.1. Expressions
SQL expressions provide formal methods for evaluating, transforming and comparing values. SQL
expressions may include table columns, variables, constants, literals, various statements and
predicates and also other expressions. The complete list of possible tokens in expressions follows.
Array element
An expression may contain a reference to an array member i.e., <array_name>[s], where s is the
subscript of the member in the array <array_name>
Arithmetic operators
The +, -, *, / characters used to calculate values
Concatenation operator
The || (“double-pipe”) operator used to concatenate strings
Logical operators
The reserved words NOT, AND and OR, used to combine simple search conditions to create complex
conditions
Comparison operators
The symbols =, <>, !=, ~=, ^=, <, <=, >, >=, !<, ~<, ^<, !>, ~> and ^>
Comparison predicates
LIKE, STARTING WITH, CONTAINING, SIMILAR TO, BETWEEN, IS [NOT] NULL, IS [NOT] {TRUE | FALSE |
UNKNOWN} and IS [NOT] DISTINCT FROM
Existential predicates
Predicates used to check the existence of values in a set. The IN predicate can be used both with
sets of comma-separated constants and with subqueries that return a single column. The EXISTS,
SINGULAR, ALL, ANY and SOME predicates can be used only with sub-queries.
Constant or Literal
Numbers, or string literals enclosed in apostrophes or Q-strings, the Boolean values TRUE, FALSE and
UNKNOWN, and NULL
Datetime literal
An expression, similar to a string literal enclosed in apostrophes, that can be interpreted as a
date, time or timestamp value. Datetime literals can be strings of characters and numerals, such
as TIMESTAMP '25.12.2016 15:30:35', that can be resolved as a datetime value.
Datetime mnemonics
A string literal with a description of a desired datetime value that can be cast to a datetime type.
For example 'TODAY', 'NOW'.
Context variable
An internally-defined context variable
Local variable
Declared local variable, input or output parameter of a PSQL module (stored procedure, stored
function, trigger, or unnamed PSQL block in DSQL)
Positional parameter
A member of an ordered group of one or more unnamed parameters passed to a stored
procedure or prepared query
Subquery
A SELECT statement enclosed in parentheses that returns a single (scalar) value or, when used in
existential predicates, a set of values
Function identifier
The identifier of an internal, packaged, stored or external function in a function expression
Type cast
An expression explicitly converting data of one data type to another using the CAST function
(CAST (<value> AS <datatype>)). For datetime literals only, the shorthand syntax <datatype>
<value> is also supported (DATE '2016-12-25').
Conditional expression
Expressions using CASE and related internal functions
Parentheses
Bracket pairs (…) used to group expressions. Operations inside the parentheses are performed
before operations outside them. When nested parentheses are used, the most deeply nested
expressions are evaluated first and then the evaluations move outward through the levels of
nesting.
COLLATE clause
Clause applied to CHAR and VARCHAR types to specify the character-set-specific collation to use
in string comparisons
AT expression
Expression to change the time zone of a datetime.
A literal — or constant — is a value that is supplied directly in an SQL statement, not derived from
an expression, a parameter, a column reference or a variable. It can be a string or a number.
String Literals
A string literal is a series of characters enclosed between a pair of apostrophes (“single quotes”).
The maximum length of a string literal is 32,765 for CHAR/VARCHAR, or 65,533 bytes for BLOB; the
maximum character count will be determined by the number of bytes used to encode each
character.
<char-literal> ::=
[<introducer> charset-name] <quote> [<char>...] <quote>
[{ <separator> <quote> [<char>...] <quote> }... ]
<separator> ::=
{ <comment> | <white space> }
• In Dialect 3, double quotes are not valid for quoting strings. The SQL standard reserves double
quotes for a different purpose: delimiting or quoting identifiers.
• Care should be taken with the string length if the value is to be written to a CHAR or VARCHAR
column. The maximum length for a CHAR or VARCHAR literal is 32,765 bytes.
The character set of a string constant is assumed to be the same as the character set of its destined
storage.
Examples
-- whitespace between literals
select 'ab'
'cd'
from RDB$DATABASE;
-- output: abcd
-- comment and whitespace between literal
select 'ab' /* comment */ 'cd'
from RDB$DATABASE;
-- output: abcd
String literals can also be entered in hexadecimal notation, so-called “binary strings”. Each pair of
hex digits defines one byte in the string. Strings entered this way will be type BINARY (a.k.a. CHAR
CHARACTER SET OCTETS) by default, unless the introducer syntax is used to force a string to be
interpreted as another character set.
<binary-literal> ::=
[<introducer> charsetname] X <quote> [<space>...]
[{ <hexit> [<space>...] <hexit> [<space>...] }...] <quote>
[{ <separator> <quote> [<space>...]
[{ <hexit> [<space>...] <hexit> [<space>...] }...] <quote> }...]
Examples
from RDB$DATABASE;
-- output: BINARY
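As a further sketch, each pair of hex digits below encodes one byte; the six bytes happen to be the ASCII codes of the word “Nerven” (the column alias is arbitrary):

```sql
SELECT X'4E657276656E' AS BIN_STR
FROM RDB$DATABASE;
```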
The client interface determines how binary strings are displayed to the user. The isql utility, for
example, uses upper case letters A-F, while FlameRobin uses lower case letters. Other client
programs may use other conventions, such as displaying spaces between the byte pairs: '4E 65 72
76 65 6E'.
The hexadecimal notation allows any byte value (including 00) to be inserted at any position in the
string. However, if you want to coerce it to anything other than OCTETS, it is your responsibility to
supply the bytes in a sequence that is valid for the target character set.
The usage of the _win1252 introducer in the above example is a non-standard extension, equivalent
to an explicit cast to a CHAR of appropriate length with character set WIN1252.
It is possible to use a character, or character pair, other than the doubled (escaped) apostrophe, to
embed a quoted string inside another string without the need to escape the quote. The keyword q or
Q preceding a quoted string informs the parser that certain left-right pairs or pairs of identical
characters within the string are the delimiters of the embedded string literal.
Syntax
Rules
• When <start char> is ‘(’, ‘{’, ‘[’ or ‘<’, <end char> is paired up with its respective “partner”, viz. ‘)’, ‘}’, ‘]’ and ‘>’.
• Inside the string, i.e. <char> items, single quotes can be used without escaping. Each quote will
be part of the result string.
Examples
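Two sketches (the delimiter characters after q are free choices):

```sql
SELECT Q'{abc{def}ghi}' FROM RDB$DATABASE;      -- result: abc{def}ghi
SELECT Q'!That's a string!' FROM RDB$DATABASE;  -- result: That's a string
```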
If necessary, a string literal may be preceded by a character set name, itself prefixed with an
underscore “_”. This is known as introducer syntax. Its purpose is to inform the engine about how to
interpret and store the incoming string.
Example
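A sketch (the target table and column are hypothetical):

```sql
-- The literal is stored as ISO8859_1 text, regardless of the connection character set
INSERT INTO PEOPLE (LASTNAME)
VALUES (_ISO8859_1 'Hallé');
```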
Number Literals
• In SQL, for numbers in the standard decimal notation, the decimal point is always represented
by period character (‘.’, full-stop, dot); thousands are not separated. Inclusion of commas,
blanks, etc. will cause errors.
The format of the literal decides the type (<d> for a decimal digit, <h> for a hexadecimal digit):
Format Type
<d>[<d> …] INTEGER, BIGINT, INT128 or DECFLOAT(34) (depending
on whether the value fits the type). DECFLOAT(34) is used
for values that do not fit in INT128.
0{x|X} <h>[<h> …] INTEGER for 1-8 <h>, BIGINT for 9-16 <h>, or INT128
for 17-32 <h>
<d>[<d> …] "." [<d> …] NUMERIC(18, n), NUMERIC(38, n) or DECFLOAT(34)
where n depends on the number of digits after
the decimal point, and precision on the total
number of digits.
Integer values can also be entered in hexadecimal notation. Numbers with 1-8 hex digits will be
interpreted as type INTEGER; numbers with 9-16 hex digits as type BIGINT; numbers with 17-32 hex
digits as type INT128.
Syntax
0{x|X}<hexdigits>
Examples
• Hex numbers in the range 0 … 7FFF FFFF are positive INTEGERs with values between 0 …
2147483647 decimal. To coerce a number to BIGINT, prepend enough zeroes to bring the total
number of hex digits to nine or above. That changes the type but not the value.
• Hex numbers between 8000 0000 … FFFF FFFF require some attention:
◦ When written with eight hex digits, as in 0x9E44F9A8, a value is interpreted as 32-bit INTEGER.
Since the leftmost bit (sign bit) is set, it maps to the negative range -2147483648 … -1 decimal.
◦ With one or more zeroes prepended, as in 0x09E44F9A8, a value is interpreted as 64-bit BIGINT
in the range 0000 0000 8000 0000 … 0000 0000 FFFF FFFF. The sign bit is not set now, so they
map to the positive range 2147483648 … 4294967295 decimal.
Thus, in this range, and for 16 vs 16+ digits, prepending a mathematically insignificant 0 results
in a different value. This is something to be aware of.
• Hex numbers between 0 0000 0001 … 7FFF FFFF FFFF FFFF are all positive BIGINT.
• Hex numbers between 8000 0000 0000 0000 … FFFF FFFF FFFF FFFF are all negative BIGINT.
• Hex numbers between 0 0000 0000 0000 0001 … 7FFF FFFF FFFF FFFF FFFF FFFF FFFF FFFF are
all positive INT128
• Hex numbers between 8000 0000 0000 0000 0000 0000 0000 0000 … FFFF FFFF FFFF FFFF FFFF
FFFF FFFF FFFF are all negative INT128
• A SMALLINT cannot be written in hex, strictly speaking, since even 0x0 and 0x1 are evaluated as
INTEGER. However, if you write a positive integer within the 16-bit range 0x0000 (decimal zero) to
0x7FFF (decimal 32767) it will be converted to SMALLINT transparently.
It is possible to write a negative SMALLINT in hex, using a 4-byte hex number within the range
0xFFFF8000 (decimal -32768) to 0xFFFFFFFF (decimal -1).
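The boundary cases described above can be sketched as follows (the column aliases are arbitrary):

```sql
SELECT 0x7FFFFFFF  AS MAX_INT,     -- 8 hex digits, INTEGER: 2147483647
       0x80000000  AS NEG_INT,     -- 8 hex digits, sign bit set: -2147483648
       0x080000000 AS POS_BIGINT   -- 9 hex digits, BIGINT: 2147483648
FROM RDB$DATABASE;
```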
Boolean Literals
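The Boolean literals are TRUE, FALSE and UNKNOWN, written without quotes. A sketch (the PEOPLE table and its columns are hypothetical):

```sql
SELECT TRUE AS T, FALSE AS F FROM RDB$DATABASE;

UPDATE PEOPLE SET CANVOTE = TRUE WHERE AGE >= 18;
```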
Datetime Literals
Formally, the SQL standard defines datetime literals as a prefix DATE, TIME and TIMESTAMP followed by
a string literal with a datetime format. Historically, Firebird documentation has referred to these
datetime literals as “shorthand casts”.
Since Firebird 4.0, the use of datetime mnemonics in datetime literals (e.g. DATE 'TODAY') is no
longer allowed.
The format of datetime literals and strings in Firebird 4.0 and higher is more strict
compared to earlier Firebird versions.
<datetime_literal> ::=
DATE '<date_format>'
| TIME { '<time_format>' | '<time_tz_format>' }
| TIMESTAMP { '<timestamp_format>' | '<timestamp_tz_format>' }
<date_format> ::=
[YYYY<p>]MM<p>DD
| MM<p>DD[<p>{ YYYY | YY }]
| DD<p>MM[<p>{ YYYY | YY }]
<time_zone> ::=
{ + | - }HH:MM
| time zone name (e.g. Europe/Berlin)
Argument Description
YY Two-digit year
Use of the complete specification of the year in the four-digit form — YYYY — is
strongly recommended, to avoid confusion in date calculations and aggregations.
Example
-- 1
UPDATE PEOPLE
SET AGECAT = 'SENIOR'
WHERE BIRTHDATE < DATE '1-Jan-1943';
-- 2
INSERT INTO APPOINTMENTS
(EMPLOYEE_ID, CLIENT_ID, APP_DATE, APP_TIME)
VALUES (973, 8804, DATE '1-Jan-2021' + 2, TIME '16:00');
-- 3
NEW.LASTMOD = TIMESTAMP '1-Jan-2021 16:00';
SQL operators comprise operators for comparing, calculating, evaluating and concatenating values.
Operator Precedence
SQL Operators are divided into four types. Each operator type has a precedence, a ranking that
determines the order in which operators and the values obtained with their help are evaluated in
an expression. The higher the precedence of the operator type is, the earlier it will be evaluated.
Each operator has its own precedence within its type, that determines the order in which they are
evaluated in an expression.
Operators with the same precedence are evaluated from left to right. To force a different evaluation
order, operations can be grouped by means of parentheses.
Concatenation Operator
The concatenation operator — two pipe characters known as “double pipe” or ‘||’ — concatenates
two character strings to form a single string. Character strings can be literals or values obtained
from columns or other expressions.
Example
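A sketch, assuming an EMPLOYEE table with name columns:

```sql
SELECT LAST_NAME || ', ' || FIRST_NAME AS FULL_NAME
FROM EMPLOYEE;
```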
See also
BLOB_APPEND()
Arithmetic Operators
Where operators have the same precedence, they are evaluated in left-to-right sequence.
Example
UPDATE T
SET A = 4 + 1/(B-C)*D
Comparison Operators
This group also includes comparison predicates BETWEEN, LIKE, CONTAINING, SIMILAR TO and others.
Example
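A sketch, assuming an EMPLOYEE table:

```sql
SELECT *
FROM EMPLOYEE
WHERE HIRE_DATE >= DATE '1992-01-01'
  AND SALARY <> 0;
```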
See also
Other Comparison Predicates.
Logical Operators
Example
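A sketch combining NOT, AND and OR (the table and columns are assumptions):

```sql
SELECT *
FROM EMPLOYEE
WHERE NOT (JOB_COUNTRY = 'USA')
  AND (SALARY > 50000 OR HIRE_DATE > DATE '1992-01-01');
```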
Result type
BIGINT — dialect 2 and 3
INTEGER — dialect 1
Syntax
NEXT VALUE FOR returns the next value of a sequence. Sequence is the SQL-standard term for what is
historically called a generator in Firebird and its ancestor, InterBase. The NEXT VALUE FOR operator is
equivalent to the legacy GEN_ID (…, increment) function, where increment is the increment stored in the
metadata of the sequence. It is the recommended syntax for retrieving the next sequence value.
Unlike the GEN_ID function, the NEXT VALUE FOR expression does not take any parameters and thus
provides no way to retrieve the current value of a sequence, nor to step the next value by a different
value than the increment configured for the sequence. GEN_ID (…, <step value>) is still needed for
these tasks. A step value of 0 returns the current sequence value.
The increment of a sequence can be configured with the INCREMENT clause of CREATE SEQUENCE or
ALTER SEQUENCE.
Example
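A sketch in a before-insert trigger body (the sequence name CUSTSEQ is an assumption):

```sql
NEW.CUST_NO = NEXT VALUE FOR CUSTSEQ;
```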
See also
SEQUENCE (GENERATOR), GEN_ID()
Syntax
<at expr> ::= <expr> AT { TIME ZONE <time zone string> | LOCAL }
The AT expression expresses a datetime value in a different time zone, while keeping the same UTC
instant.
AT translates a time/timestamp value to its corresponding value in another time zone. If LOCAL is
used, the value is converted to the session time zone.
When expr is a WITHOUT TIME ZONE type, expr is first converted to a WITH TIME ZONE in the session time
zone and then transformed to the specified time zone.
Examples
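Some sketches (the region-based names come from the IANA time zone database):

```sql
SELECT TIME '12:00 GMT' AT TIME ZONE '-03:00' FROM RDB$DATABASE;
SELECT CURRENT_TIMESTAMP AT TIME ZONE 'America/Sao_Paulo' FROM RDB$DATABASE;
SELECT TIMESTAMP '2018-01-01 12:00 GMT' AT LOCAL FROM RDB$DATABASE;
```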
A conditional expression is one that returns different values according to how a certain condition is
met. It is composed by applying a conditional function construct, of which Firebird supports
several. This section describes only one conditional expression construct: CASE. All other conditional
expressions apply internal functions derived from CASE and are described in Conditional Functions.
CASE
The CASE construct returns a single value from a number of possible values. Two syntactic variants
are supported:
• The simple CASE, comparable to a case construct in Pascal or a switch in C.
• The searched CASE, which works like a series of “if … else if … else if” clauses.
Simple CASE
Syntax
...
CASE <test-expr>
WHEN <expr> THEN <result>
[WHEN <expr> THEN <result> ...]
[ELSE <defaultresult>]
END
...
When this variant is used, test-expr is compared to the first expr, second expr and so on, until a
match is found, and the corresponding result is returned. If no match is found, defaultresult from
the optional ELSE clause is returned. If there are no matches and no ELSE clause, NULL is returned.
The matching works as the “=” operator. That is, if test-expr is NULL, it does not match any expr, not
even an expression that resolves to NULL.
The returned result does not have to be a literal value: it might be a field or variable name,
compound expression or NULL literal.
Example
SELECT
NAME,
AGE,
CASE UPPER(SEX)
WHEN 'M' THEN 'Male'
WHEN 'F' THEN 'Female'
ELSE 'Unknown'
END GENDER,
RELIGION
FROM PEOPLE
Searched CASE
Syntax
CASE
WHEN <bool_expr> THEN <result>
[WHEN <bool_expr> THEN <result> ...]
[ELSE <defaultresult>]
END
The bool_expr expression is one that gives a ternary logical result: TRUE, FALSE or NULL. The first
expression to return TRUE determines the result. If no expressions return TRUE, defaultresult from
the optional ELSE clause is returned as the result. If no expressions return TRUE and there is no ELSE
clause, the result will be NULL.
As with the simple CASE construct, the result need not be a literal value: it might be a field or
variable name, a compound expression, or be NULL.
Example
CANVOTE = CASE
WHEN AGE >= 18 THEN 'Yes'
WHEN AGE < 18 THEN 'No'
ELSE 'Unsure'
END
NULL is not a value in SQL, but a state indicating that the value of the element either is unknown or it
does not exist. It is not a zero, nor a void, nor an “empty string”, and it does not act like any value.
When you use NULL in numeric, string or date/time expressions, the result will always be NULL. When
you use NULL in logical (Boolean) expressions, the result will depend on the type of the operation
and on other participating values. When you compare a value to NULL, the result will be unknown.
Consult the Firebird Null Guide for more in-depth coverage of Firebird’s NULL behaviour.
1 + 2 + 3 + NULL
'Home ' || 'sweet ' || NULL
MyField = NULL
MyField <> NULL
NULL = NULL
not (NULL)
If it seems difficult to understand why, remember that NULL is a state that stands for “unknown”.
It has already been shown that NOT (NULL) results in NULL. The interaction is a bit more complicated
for the logical AND and logical OR operators:
Examples
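A sketch of the AND/OR interaction (the comments state the ternary-logic outcomes):

```sql
-- NULL OR TRUE   evaluates to TRUE  (one TRUE suffices for OR)
-- NULL OR FALSE  evaluates to NULL
-- NULL AND FALSE evaluates to FALSE (one FALSE suffices for AND)
-- NULL AND TRUE  evaluates to NULL
SELECT (NULL OR TRUE) AS O1, (NULL AND FALSE) AS A1
FROM RDB$DATABASE;
```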
4.1.5. Subqueries
A subquery is a special form of expression that is a query embedded within another query.
Subqueries are written in the same way as regular SELECT queries, but they must be enclosed in
parentheses. Subquery expressions can be used in the following ways:
• To obtain values or conditions for search predicates (the WHERE, HAVING clauses).
• To produce a set that the enclosing query can select from, as though it were a regular table or
view. Subqueries like this appear in the FROM clause (derived tables) or in a Common Table
Expression (CTE)
Correlated Subqueries
A subquery can be correlated. A query is correlated when the subquery and the main query are
interdependent. To process each record in the subquery, it is necessary to fetch a record in the main
query, i.e. the subquery fully depends on the main query.
SELECT *
FROM Customers C
WHERE EXISTS
(SELECT *
FROM Orders O
WHERE C.cnum = O.cnum
AND O.adate = DATE '10.03.1990');
When subqueries are used to get the values of the output column in the SELECT list, a subquery must
return a scalar result (see below).
Scalar Results
Subqueries used in search predicates, other than existential and quantified predicates, must return
a scalar result; that is, not more than one column from not more than one matching row or
aggregation. If the query returns more columns or rows, a run-time error will occur (“Multiple
rows in a singleton select…”).
A query that must not return more than one row is a “singleton” SELECT; a scalar subquery
additionally returns no more than one column. However, “singleton” and “scalar” are not
synonymous: not all singleton SELECTs are required to be scalar, and single-column selects can
return multiple rows for existential and quantified predicates.
Subquery Examples
1. A subquery as the output column in a SELECT list:
SELECT
e.first_name,
e.last_name,
(SELECT
sh.new_salary
FROM
salary_history sh
WHERE
sh.emp_no = e.emp_no
ORDER BY sh.change_date DESC ROWS 1) AS last_salary
FROM
employee e
2. A subquery in the WHERE clause for obtaining the employee’s maximum salary and filtering by it:
SELECT
e.first_name,
e.last_name,
e.salary
FROM employee e
WHERE
e.salary = (
SELECT MAX(ie.salary)
FROM employee ie
)
4.2. Predicates
A predicate is a simple expression asserting some fact, let’s call it P. If P resolves as TRUE, it
succeeds. If it resolves to FALSE or NULL (UNKNOWN), it fails. A trap lies here, though: suppose the
predicate, P, returns FALSE. In this case NOT(P) will return TRUE. On the other hand, if P returns
NULL (unknown), then NOT(P) returns NULL as well.
In SQL, predicates can appear in CHECK constraints, WHERE and HAVING clauses, CASE expressions, the
IIF() function and in the ON condition of JOIN clauses, and anywhere a normal expression can
occur.
4.2.1. Conditions
A condition — or Boolean expression — is a statement about the data that, like a predicate, can
resolve to TRUE, FALSE or NULL. Conditions consist of one or more predicates, possibly negated
using NOT and connected by AND and OR operators. Parentheses may be used for grouping predicates
and controlling evaluation order.
A predicate may embed other predicates. Evaluation sequence is in the outward direction, i.e. the
innermost predicates are evaluated first. Each “level” is evaluated in precedence order until the
truth value of the ultimate condition is resolved.
A comparison predicate consists of two expressions connected with a comparison operator. There
are six traditional comparison operators: =, >, <, >=, <=, <>.
For the complete list of comparison operators with their variant forms, see Comparison Operators.
If one of the sides (left or right) of a comparison predicate has NULL in it, the value of the predicate
will be UNKNOWN.
Examples
1. Retrieve information about computers with the CPU frequency not less than 500 MHz and the
price lower than $800:
SELECT *
FROM Pc
WHERE speed >= 500 AND price < 800;
2. Retrieve information about all dot matrix printers that cost less than $300:
SELECT *
FROM Printer
WHERE ptrtype = 'matrix' AND price < 300;
3. The following query will return no data, even if there are printers with no type specified for
them, because a predicate that compares NULL with NULL returns NULL:
SELECT *
FROM Printer
WHERE ptrtype = NULL AND price < 300;
On the other hand, ptrtype can be tested for NULL and return a result: it is just that it is not a
comparison test:
SELECT *
FROM Printer
WHERE ptrtype IS NULL AND price < 300;
When CHAR and VARCHAR fields are compared for equality, trailing spaces are
ignored in all cases.
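A quick way to observe this behaviour is a comparison of literals with and without trailing spaces, using the one-row system table RDB$DATABASE (the literals are illustrative):

```sql
-- Trailing spaces are ignored in CHAR/VARCHAR equality comparisons,
-- so this comparison evaluates to TRUE and the query returns one row.
SELECT 1 AS is_equal
FROM RDB$DATABASE
WHERE 'abc' = 'abc   ';
```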
BETWEEN
Syntax
<value> [NOT] BETWEEN <value_1> AND <value_2>
The BETWEEN predicate tests whether a value falls within a specified range of two values. (NOT
BETWEEN tests whether the value does not fall within that range.)
The operands of the BETWEEN predicate are two arguments of compatible data types. The BETWEEN
predicate in Firebird is asymmetrical — if the lower bound is not the first argument, the BETWEEN
predicate will return FALSE. The search is inclusive (the values represented by both arguments are
included in the search). In other words, the BETWEEN predicate could be rewritten as:

<value> >= <value_1> AND <value> <= <value_2>
When BETWEEN is used in the search conditions of DML queries, the Firebird optimizer can use an
index on the searched column, if it is available.
Example
SELECT *
FROM EMPLOYEE
WHERE HIRE_DATE BETWEEN date '1992-01-01' AND CURRENT_DATE
LIKE
Syntax
<match_value> [NOT] LIKE <pattern> [ESCAPE <escape_character>]
The LIKE predicate compares the character-type expression with the pattern defined in the second
expression. Case- or accent-sensitivity for the comparison is determined by the collation that is in
use. A collation can be specified for either operand, if required.
Wildcards
Two wildcard symbols are available for use in the search pattern:
• the percentage symbol (%) will match any sequence of zero or more characters in the tested
value
• the underscore character (_) will match any single character in the tested value
If the tested value matches the pattern, taking into account wildcard symbols, the predicate is
TRUE.
If the search string contains either of the wildcard symbols, the ESCAPE clause can be used to specify
an escape character. The escape character must precede the ‘%’ or ‘_’ symbol in the search string, to
indicate that the symbol is to be interpreted as a literal character.
1. Find the numbers of departments whose names start with the word “Software”:
SELECT DEPT_NO
FROM DEPT
WHERE DEPT_NAME LIKE 'Software%';
If you need to search for the beginning of a string, it is recommended to use
the STARTING WITH predicate instead of the LIKE predicate.
2. Search for employees whose names consist of 5 letters, start with the letters “Sm” and end with
“th”. The predicate will be true for such names as “Smith” and “Smyth”.
SELECT
first_name
FROM
employee
WHERE first_name LIKE 'Sm_th'
3. Search for all clients whose address contains the string “Rostov”:
SELECT *
FROM CUSTOMER
WHERE ADDRESS LIKE '%Rostov%'
4. Search for tables containing the underscore character in their names. The ‘#’ character is used
as the escape character:
SELECT
RDB$RELATION_NAME
FROM RDB$RELATIONS
WHERE RDB$RELATION_NAME LIKE '%#_%' ESCAPE '#'
See also
STARTING WITH, CONTAINING, SIMILAR TO
STARTING WITH
Syntax
<value> [NOT] STARTING [WITH] <value>
The STARTING WITH predicate searches for a string or a string-like type that starts with the characters
in its value argument. The case- and accent-sensitivity of STARTING WITH depends on the collation of
the first value.
When STARTING WITH is used in the search conditions of DML queries, the Firebird optimizer can use
an index on the searched column, if it exists.
Example
Search for employees whose last names start with “Jo”:

SELECT LAST_NAME, FIRST_NAME
FROM EMPLOYEE
WHERE LAST_NAME STARTING WITH 'Jo'
See also
LIKE
CONTAINING
Syntax
<value> [NOT] CONTAINING <value>
The CONTAINING predicate searches for a string or a string-like type looking for the sequence of
characters that matches its argument. It can be used for an alphanumeric (string-like) search on
numbers and dates. A CONTAINING search is not case-sensitive. However, if an accent-sensitive
collation is in use then the search will be accent-sensitive.
Examples
1. Search for projects whose names contain the substring “Map”:
SELECT *
FROM PROJECT
WHERE PROJ_NAME CONTAINING 'Map';
Two rows with the names “AutoMap” and “MapBrowser port” are returned.
2. Search for changes in salaries with the date containing number 84 (in this case, it means
changes that took place in 1984):
SELECT *
FROM SALARY_HISTORY
WHERE CHANGE_DATE CONTAINING 84;
See also
LIKE
SIMILAR TO
Syntax
<match_value> [NOT] SIMILAR TO <pattern> [ESCAPE <escape_character>]
SIMILAR TO matches a string against an SQL regular expression pattern. Unlike in some other
languages, the pattern must match the entire string to succeed — matching a substring is not
enough. If any operand is NULL, the result is NULL. Otherwise, the result is TRUE or FALSE.
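The full-match requirement can be sketched with literal values (illustrative only):

```sql
-- Returns one row: the pattern covers the entire string.
SELECT 1 FROM RDB$DATABASE WHERE 'abc' SIMILAR TO '_b_';

-- Returns no rows: 'a' matches only a substring, which is not enough.
SELECT 1 FROM RDB$DATABASE WHERE 'abc' SIMILAR TO 'a';
```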
If a literal pattern is used, and it doesn’t start with a wildcard or other special regex character,
SIMILAR TO can use an index.
The following syntax defines the SQL regular expression format. It is a complete and correct top-
down definition. It is also highly formal and long, and may be daunting to anyone who hasn’t
already some experience with regular expressions (or with highly formal, rather long top-down
definitions). Feel free to skip it and read the next section, Building Regular Expressions, which uses
a bottom-up approach, aimed at the rest of us.
<m>, <n> ::= unsigned int, with <m> <= <n> if both present
Building Regular Expressions
In this section are the elements and rules for building SQL regular expressions.
Characters
Within regular expressions, most characters represent themselves. The only exceptions are the
special characters below:
[ ] ( ) | ^ - + * % _ ? { }
A regular expression that contains no special characters or escape characters matches only strings
that are identical to itself (subject to the collation in use). That is, it functions just like the ‘=’
operator:

'Apple' SIMILAR TO 'Apple'  -- TRUE
'Apples' SIMILAR TO 'Apple' -- FALSE
Wildcards
The known SQL wildcards ‘_’ and ‘%’ match any single character and a string of any length,
respectively:

'Birne' SIMILAR TO 'B_rne'  -- TRUE
'Birne' SIMILAR TO 'B_ne'   -- FALSE
'Birne' SIMILAR TO 'B%ne'   -- TRUE
Character Classes
A set of characters enclosed in brackets defines a character class. A character in the string
matches a class in the pattern if the character is a member of the class:

'Citroen' SIMILAR TO 'Cit[arju]oen'  -- TRUE
'Citroen' SIMILAR TO 'Ci[tr]oen'     -- FALSE
'Citroen' SIMILAR TO 'Ci[tr][tr]oen' -- TRUE

As can be seen from the second line, the class only matches a single character, not a sequence.
Within a class definition, two characters connected by a hyphen define a range. A range comprises
the two endpoints and all the characters that lie between them in the active collation. Ranges can
be placed anywhere in the class definition without special delimiters to keep them apart from the
other elements.
The following predefined character classes can also be used in a class definition:
[:ALPHA:]
Latin letters a..z and A..Z. With an accent-insensitive collation, this class also matches accented
forms of these characters.
[:DIGIT:]
Decimal digits 0..9.
[:ALNUM:]
Union of [:ALPHA:] and [:DIGIT:].
[:UPPER:]
Uppercase Latin letters A..Z. Also matches lowercase with case-insensitive collation and accented
forms with accent-insensitive collation.
[:LOWER:]
Lowercase Latin letters a..z. Also matches uppercase with case-insensitive collation and accented
forms with accent-insensitive collation.
[:SPACE:]
Matches the space character (ASCII 32).
[:WHITESPACE:]
Matches horizontal tab (ASCII 9), linefeed (ASCII 10), vertical tab (ASCII 11), formfeed (ASCII 12),
carriage return (ASCII 13) and space (ASCII 32).
Including a predefined class has the same effect as including all its members. Predefined classes are
only allowed within class definitions. If you need to match against a predefined class and nothing
more, place an extra pair of brackets around it.
If a class definition starts with a caret, everything that follows is excluded from the class. All other
characters match:

'q' SIMILAR TO '[^abc]'  -- TRUE
'a' SIMILAR TO '[^abc]'  -- FALSE
If the caret is not placed at the start of the sequence, the class contains everything before the caret,
except for the elements that also occur after the caret:

'f' SIMILAR TO '[a-z^cd]'  -- TRUE
'c' SIMILAR TO '[a-z^cd]'  -- FALSE
Lastly, the already mentioned wildcard ‘_’ is a character class of its own, matching any single
character.
Quantifiers
A question mark (‘?’) immediately following a character or class indicates that the preceding item
may occur 0 or 1 times to match:
An asterisk (‘*’) immediately following a character or class indicates that the preceding item may
occur 0 or more times to match:
A plus sign (‘+’) immediately following a character or class indicates that the preceding item must
occur 1 or more times to match:
If a character or class is followed by a number enclosed in braces (‘{’ and ‘}’), it must be repeated
exactly that number of times to match:
If the number is followed by a comma (‘,’), the item must be repeated at least that number of times
to match:
If the braces contain two numbers separated by a comma, the second number not smaller than the
first, then the item must be repeated at least the first number and at most the second number of
times to match:
The quantifiers ‘?’, ‘*’ and ‘+’ are shorthand for {0,1}, {0,} and {1,}, respectively.
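Combining quantifiers with the predefined classes described earlier gives compact patterns; the following checks are illustrative:

```sql
-- One or more digits followed by an optional letter: '123a' matches.
SELECT 1 FROM RDB$DATABASE
WHERE '123a' SIMILAR TO '[[:DIGIT:]]+[[:ALPHA:]]?';

-- Exactly three letters: 'abc' matches, 'ab' or 'abcd' would not.
SELECT 1 FROM RDB$DATABASE
WHERE 'abc' SIMILAR TO '[[:ALPHA:]]{3}';
```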
OR-ing Terms
Regular expression terms can be OR’ed with the ‘|’ operator. A match is made when the argument
string matches at least one of the terms:
Subexpressions
One or more parts of the regular expression can be grouped into subexpressions (also called
subpatterns) by placing them between parentheses (‘(’ and ‘)’). A subexpression is a regular
expression in its own right. It can contain all the elements allowed in a regular expression, and can
also have quantifiers added to it.
To match against a character that is special in regular expressions, that character has to be escaped.
There is no default escape character; the user specifies one when needed:
The last line demonstrates that the escape character can also escape itself, if needed.
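For example, to match a literal percent sign, an escape character can be declared and placed before the ‘%’ (values are illustrative):

```sql
-- '#' is declared as the escape character, so '#%' matches a literal '%'.
-- '10%' matches: two digits followed by a literal percent sign.
SELECT 1 FROM RDB$DATABASE
WHERE '10%' SIMILAR TO '[[:DIGIT:]]{2}#%' ESCAPE '#';
```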
IS [NOT] DISTINCT FROM
Syntax
<operand1> IS [NOT] DISTINCT FROM <operand2>
Two operands are considered DISTINCT (different) if they have a different value or if one of them is
NULL and the other non-null. They are considered NOT DISTINCT (equal) if they have the same value
or if both of them are NULL.
IS [NOT] DISTINCT FROM always returns TRUE or FALSE and never UNKNOWN (NULL) (unknown value).
Operators ‘=’ and ‘<>’, conversely, will return UNKNOWN (NULL) if one or both operands are NULL.
Operand values     =      IS NOT DISTINCT FROM  <>     IS DISTINCT FROM
Same value         TRUE   TRUE                  FALSE  FALSE
Different values   FALSE  FALSE                 TRUE   TRUE
Both NULL          NULL   TRUE                  NULL   FALSE
One NULL           NULL   FALSE                 NULL   TRUE
Examples
-- PSQL fragment
IF (NEW.JOB IS DISTINCT FROM OLD.JOB)
THEN POST_EVENT 'JOB_CHANGED';
See also
IS [NOT] NULL, Boolean IS [NOT]
Boolean IS [NOT]
Syntax
<value> IS [NOT] {TRUE | FALSE | UNKNOWN | NULL}
The IS predicate with Boolean literal values checks if the expression on the left side matches the
Boolean value on the right side. The expression on the left side must be of type BOOLEAN, otherwise
an exception is raised.
The right side of the predicate only accepts the literals TRUE, FALSE, UNKNOWN, and NULL. It does not
accept expressions.
Examples, assuming a table TBOOL with columns ID and BVAL (BOOLEAN):

SELECT * FROM TBOOL WHERE BVAL IS FALSE;

ID BVAL
============= =======
2 <false>

SELECT * FROM TBOOL WHERE BVAL IS UNKNOWN;

ID BVAL
============= =======
3 <null>
See also
IS [NOT] NULL
IS [NOT] NULL
Syntax
<value> IS [NOT] NULL
Since NULL is not a value, these operators are not comparison operators. The IS [NOT] NULL predicate
tests that the expression on the left side has a value (IS NOT NULL) or has no value (IS NULL).
Example
Search for sales entries that have no shipment date set for them:

SELECT * FROM SALES
WHERE SHIP_DATE IS NULL;

Existential Predicates
This group of predicates includes those that use subqueries to submit values for all kinds of
assertions in search conditions. Existential predicates are so called because they use various
methods to test for the existence or non-existence of some condition, returning TRUE if the existence
or non-existence is confirmed or FALSE otherwise.
EXISTS
Syntax
[NOT] EXISTS (<select_stmt>)
The EXISTS predicate uses a subquery expression as its argument. It returns TRUE if the subquery
result contains at least one row, otherwise it returns FALSE.
NOT EXISTS returns FALSE if the subquery result contains at least one row, otherwise it returns TRUE.
The subquery can specify multiple columns, or SELECT *, because the evaluation is made on the
number of rows that match its criteria, not on the data.
Examples
1. Find those employees who have projects.
SELECT *
FROM employee
WHERE EXISTS(SELECT *
FROM employee_project ep
WHERE ep.emp_no = employee.emp_no)
2. Find those employees who do not have any projects.
SELECT *
FROM employee
WHERE NOT EXISTS(SELECT *
FROM employee_project ep
WHERE ep.emp_no = employee.emp_no)
IN
Syntax
<value> [NOT] IN ({<value_list> | <select_stmt>})
The IN predicate tests whether the value of the expression on the left side is present in the set of
values specified on the right side. The set of values cannot have more than 65535 items. The IN
predicate can be replaced with the following equivalent forms:

<value> = <value_1> [OR <value> = <value_2> ...]

<value> = {ANY | SOME} (<select_stmt>)
When the IN predicate is used in the search conditions of DML queries, the Firebird optimizer can
use an index on the searched column, if a suitable one exists. Lists that are known to be constant
are pre-evaluated as invariants and cached as a binary search tree, making comparisons faster if
the condition needs to be tested for many rows or if the value list is long.
In its second form, the IN predicate tests whether the value of the expression on the left side is
present — or not present, if NOT IN is used — in the result of the subquery on the right side.
The subquery must specify only one column, otherwise the error “count of column list and variable
list do not match” will occur.
Queries using an IN predicate with a subquery can be replaced with a similar query using the
EXISTS predicate. For example, the following query:
SELECT
model, speed, hd
FROM PC
WHERE
model IN (SELECT model
FROM product
WHERE maker = 'A');
can be rewritten to use the EXISTS predicate:
SELECT
model, speed, hd
FROM PC
WHERE
EXISTS (SELECT *
FROM product
WHERE maker = 'A'
AND product.model = PC.model);
However, a query using NOT IN with a subquery does not always give the same result as its NOT
EXISTS counterpart. The reason is that EXISTS always returns TRUE or FALSE, whereas IN returns
NULL in one of these two cases:
a. when the test value is NULL and the IN () list is not empty
b. when the test value has no match in the IN () list and at least one list element is NULL
It is in only these two cases that IN () will return NULL while the EXISTS predicate will return FALSE
(“no matching row found”). In a search or, for example, an IF (…) statement, both results mean
“failure”, and it makes no difference to the outcome.
For the same data, NOT IN () will return NULL, while NOT EXISTS will return TRUE, leading to opposite
results.
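The scenario discussed below can be sketched as follows; the tables Citizens and Celebrities and their columns are hypothetical:

```sql
-- Intended: find citizens who do not share a birthday with any NY celebrity.
-- If the celebrity list contains a NULL birthday, NOT IN yields NULL for
-- every non-matching citizen, and the query wrongly returns no rows.
SELECT P.name
FROM Citizens P
WHERE P.birthday NOT IN
  (SELECT C.birthday FROM Celebrities C WHERE C.city = 'New York');

-- The NOT EXISTS version is NULL-safe and returns the expected rows.
SELECT P.name
FROM Citizens P
WHERE NOT EXISTS
  (SELECT * FROM Celebrities C
   WHERE C.city = 'New York' AND C.birthday = P.birthday);
```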
Now, assume that the NY celebrities list is not empty and contains at least one NULL birthday. Then
for every citizen who does not share his birthday with a NY celebrity, NOT IN will return NULL,
because that is what IN does. The search condition is thereby not satisfied and the citizen will be left
out of the SELECT result, which is wrong.
For citizens whose birthday does match a celebrity’s birthday, NOT IN will correctly return
FALSE, so they will be left out too, and no rows will be returned.
If the query is rewritten with NOT EXISTS, NULLs in the celebrity list receive no such special
treatment: non-matches will have a NOT EXISTS result of TRUE and their records will be in the result set.
If there is any chance of NULLs being encountered when searching for a non-match,
you will want to use NOT EXISTS.
Examples of use
1. Find employees with the names “Pete”, “Ann” and “Roger”:
SELECT *
FROM EMPLOYEE
WHERE FIRST_NAME IN ('Pete', 'Ann', 'Roger');
2. Find all computers that have models whose manufacturer starts with the letter “A”:
SELECT
model, speed, hd
FROM PC
WHERE
model IN (SELECT model
FROM product
WHERE maker STARTING WITH 'A');
See also
EXISTS
SINGULAR
Syntax
[NOT] SINGULAR (<select_stmt>)
The SINGULAR predicate takes a subquery as its argument and evaluates it as TRUE if the subquery
returns exactly one row, otherwise the predicate is evaluated as FALSE. The subquery may list
several output columns since the rows are not returned anyway; they are only tested for (singular)
existence. For brevity, people usually specify ‘SELECT *’. The SINGULAR predicate can return only two
values: TRUE or FALSE.
Example
Find those employees who have only one project.
SELECT *
FROM employee
WHERE SINGULAR(SELECT *
FROM employee_project ep
WHERE ep.emp_no = employee.emp_no)
Quantified Subquery Predicates
A quantifier is a logical operator that sets the number of objects for which this condition is true. It is
not a numeric quantity, but a logical one that connects the condition with the full set of possible
objects. Such predicates are based on logical universal and existential quantifiers that are
recognised in formal logic.
In subquery expressions, quantified predicates make it possible to compare separate values with
the results of subqueries; they have the following common form:

<value> <comparison operator> {ALL | SOME | ANY} (<select_stmt>)
ALL
Syntax
<value> <comparison operator> ALL (<select_stmt>)
When the ALL quantifier is used, the predicate is TRUE if every value returned by the subquery
satisfies the condition in the predicate of the main query.
Example
Show only those clients whose ratings are higher than the rating of every client in Paris.
SELECT c1.*
FROM Customers c1
WHERE c1.rating > ALL
(SELECT c2.rating
FROM Customers c2
WHERE c2.city = 'Paris')
If the subquery returns an empty set, the predicate is TRUE for every left-side value,
regardless of the operator. This may appear to be contradictory, because every left-
side value will thus be considered both smaller and greater than, both equal to and
unequal to, every element of the right-side stream.
Nevertheless, it aligns perfectly with formal logic: if the set is empty, the predicate
is true for every row in the set.
ANY and SOME
Syntax
<value> <comparison operator> {SOME | ANY} (<select_stmt>)

The quantifiers ANY and SOME are identical in their behaviour. Both are specified in the SQL
standard, and they can be used interchangeably to improve the readability of queries. When the ANY
or the SOME quantifier is used, the predicate is TRUE if any of the values returned by the subquery
satisfies the condition in the predicate of the main query. If the subquery returns no rows at all, the
predicate is automatically considered as FALSE.
Example
Show only those clients whose ratings are higher than those of one or more clients in Rome.
SELECT *
FROM Customers
WHERE rating > ANY
(SELECT rating
FROM Customers
WHERE city = 'Rome')
Chapter 5. Data Definition (DDL) Statements
5.1. DATABASE
This section describes how to create a database, connect to an existing database, alter the file
structure of a database and how to drop a database. It also shows two methods to back up a
database and how to switch the database to the “copy-safe” mode for performing an external
backup safely.
CREATE DATABASE
Creates a new database
Available in
DSQL, ESQL
Syntax
CREATE {DATABASE | SCHEMA} '<filespec>'
  [<db_initial_option> [<db_initial_option> ...]]
  [<db_config_option> [<db_config_option> ...]]
<db_initial_option> ::=
USER username
| PASSWORD 'password'
| ROLE rolename
| PAGE_SIZE [=] size
| LENGTH [=] num [PAGE[S]]
| SET NAMES 'charset'
<db_config_option> ::=
DEFAULT CHARACTER SET default_charset
[COLLATION collation] -- not supported in ESQL
| <sec_file>
| DIFFERENCE FILE 'diff_file' -- not supported in ESQL
<server_spec> ::=
host[/{port | service}]:
| <protocol>://[host[:{port | service}]/]
<sec_file> ::=
FILE 'filepath'
[LENGTH [=] num [PAGE[S]]
[STARTING [AT [PAGE]] pagenum]
Each db_initial_option and db_config_option can occur at most once, except sec_file, which can occur
zero or more times.
Parameter Description
filepath Full path and file name including its extension. The file name must be
specified according to the rules of the platform file system being used.
host Host name or IP address of the server where the database is to be created
port The port number where the remote server is listening (parameter
RemoteServicePort in firebird.conf file)
username Username of the owner of the new database. The maximum length is 63
characters. The username can optionally be enclosed in single or double
quotes. When a username is enclosed in double quotes, it is case-sensitive
following the rules for quoted identifiers. When enclosed in single quotes,
it behaves as if the value was specified without quotes. The user must be
an administrator or have the CREATE DATABASE privilege.
password Password of the user as the database owner. When using the Legacy_Auth
authentication plugin, only the first 8 characters are used. Case-sensitive
rolename The name of the role whose rights should be taken into account when
creating a database. The role name can be enclosed in single or double
quotes. When the role name is enclosed in double quotes, it is case-
sensitive following the rules for quoted identifiers. When enclosed in
single quotes, it behaves as if the value was specified without quotes.
size Page size for the database, in bytes. Possible values are 4096, 8192, 16384
and 32768. The default page size is 8192.
num Maximum size of the primary database file, or a secondary file, in pages
default_charset Specifies the default character set for string data types
diff_file File path and name for DIFFERENCE files (.delta files) for backup mode
The CREATE DATABASE statement creates a new database. You can use CREATE DATABASE or CREATE
SCHEMA. They are synonymous, but we recommend always using CREATE DATABASE, as this may change
in a future version of Firebird.
A database consists of one or more files. The first (main) file is called the primary file, subsequent
files are called secondary file(s).
Multi-file Databases
Nowadays, multi-file databases are considered an anachronism. It made sense to
use multi-file databases on old file systems where the size of any file is limited. For
instance, you could not create a file larger than 4 GB on FAT32.
The primary file specification is the name of the database file and its extension with the full path to
it according to the rules of the OS platform file system being used. The database file must not exist
at the moment the database is being created. If it does exist, you will get an error message, and the
database will not be created.
If the full path to the database is not specified, the database will be created in one of the system
directories. The particular directory depends on the operating system. For this reason, unless you
have a strong reason to prefer that situation, always specify either the absolute path or an alias,
when creating a database.
You can use aliases instead of the full path to the primary database file. Aliases are defined in the
databases.conf file in the following format:
alias = filepath
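For example, an alias definition might look like this (the alias name and path are illustrative):

```ini
# in databases.conf
test = /var/db/firebird/test.fdb
```

With this entry in place, the database can be created or opened as 'server:test' instead of spelling out the full file path.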
If you create a database on a remote server, you need to specify the remote server specification.
The remote server specification depends on the protocol being used. If you use the TCP/IP protocol
to create a database, the primary file specification should look like this:
host[/{port|service}]:{filepath | db_alias}
Firebird also has a unified URL-like syntax for the remote server specification. In this syntax, the
first part specifies the name of the protocol, then a host name or IP address, port number, and path
of the primary database file, or an alias.
inet
TCP/IP (first tries to connect using the IPv6 protocol, if it fails, then IPv4)
inet4
TCP/IP v4
inet6
TCP/IP v6
xnet
local protocol (does not include a host, port and service name)
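Combining the pieces above, some illustrative connection strings (host names, ports, paths and credentials are hypothetical):

```sql
-- Traditional TCP/IP syntax: host, then path or alias
CREATE DATABASE 'baseserver:/db/test.fdb'
USER wizard PASSWORD 'player';

-- URL-like syntax with an explicit protocol and port
CREATE DATABASE 'inet://baseserver:3050/db/test.fdb'
USER wizard PASSWORD 'player';
```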
ROLE
The name of the role (usually RDB$ADMIN), which will be taken into account when creating the
database. The role must be assigned to the user in the applicable security database.
PAGE_SIZE
The desired database page size. This size will be set for the primary file and all secondary files of
the database. If you specify the database page size less than 4,096, it will be automatically
rounded up to 4,096. Other values not equal to either 4,096, 8,192, 16,384 or 32,768 will be
changed to the closest smaller supported value. If the database page size is not specified, the
default value of 8,192 is used.
Larger page sizes can fit more records on a single page, have wider indexes,
and more indexes, but they will also waste more space for blobs (compare the
wasted space of a 3KB blob on page size 4096 with one on 32768: +/- 1KB vs +/-
29KB), and increase memory consumption of the page cache.
LENGTH
The maximum size of the primary or secondary database file, in pages. When a database is
created, its primary and secondary files will occupy the minimum number of pages necessary to
store the system data, regardless of the value specified in the LENGTH clause. The LENGTH value
does not affect the size of the only (or last, in a multi-file database) file. The file will keep
increasing its size automatically when necessary.
SET NAMES
The character set of the connection available after the database is successfully created. The
character set NONE is used by default. Notice that the character set should be enclosed in a pair of
apostrophes (single quotes).
STARTING AT
The database page number at which the next secondary database file should start. When the
previous file is fully filled with data according to the specified page number, the system will start
adding new data to the next database file.
DIFFERENCE FILE
The path and name for the file delta that stores any mutations to the database file after it has
been switched to the “copy-safe” mode by the ALTER DATABASE BEGIN BACKUP statement. For the
detailed description of this clause, see ALTER DATABASE.
Databases are created in Dialect 3 by default. For the database to be created in Dialect 1, you will
need to execute the statement SET SQL DIALECT 1 from script or the client application, e.g. in isql,
before the CREATE DATABASE statement.
• Administrators
1. Creating a database in Windows, located on disk D with a page size of 4,096. The owner of the
database will be the user wizard. The database will be in Dialect 1, and will use WIN1251 as its
default character set.
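A statement matching this description might look like the following sketch (the password is illustrative; SET SQL DIALECT 1 must be issued first so the database is created in Dialect 1):

```sql
SET SQL DIALECT 1;

CREATE DATABASE 'D:\test.fdb'
USER wizard PASSWORD 'player'
PAGE_SIZE = 4096
DEFAULT CHARACTER SET WIN1251;
```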
2. Creating a database in the Linux operating system with a page size of 8,192 (default). The owner
of the database will be the user wizard. The database will be in Dialect 3 and will use UTF8 as its
default character set, with UNICODE_CI_AI as the default collation.
3. Creating a database on the remote server “baseserver” with the path specified in the alias “test”
that has been defined previously in the file databases.conf. The TCP/IP protocol is used. The
owner of the database will be the user wizard. The database will be in Dialect 3 and will use
UTF8 as its default character set.
4. Creating a database in Dialect 3 with UTF8 as its default character set. The primary file will
contain up to 10,000 pages with a page size of 8,192. As soon as the primary file has reached the
maximum number of pages, Firebird will start allocating pages to the secondary file test.fdb2.
If that file is filled up to its maximum as well, test.fdb3 becomes the recipient of all new page
allocations. As the last file, it has no page limit imposed on it by Firebird. New allocations will
continue for as long as the file system allows it or until the storage device runs out of free space.
If a LENGTH parameter were supplied for this last file, it would be ignored.
5. Creating a database in Dialect 3 with UTF8 as its default character set. The primary file will
contain up to 10,000 pages with a page size of 8,192. As far as file size and the use of secondary
files are concerned, this database will behave exactly like the one in the previous example.
See also
ALTER DATABASE, DROP DATABASE
ALTER DATABASE
Alters the file organisation of a database, toggles its “copy-safe” state, manages encryption, and
other database-wide configuration
Available in
DSQL, ESQL — limited feature set
Syntax
<alter_db_option> ::=
<add_sec_clause>
| {ADD DIFFERENCE FILE 'diff_file' | DROP DIFFERENCE FILE}
| {BEGIN | END} BACKUP
| SET DEFAULT CHARACTER SET charset
| {ENCRYPT WITH plugin_name [KEY key_name] | DECRYPT}
| SET LINGER TO linger_duration
| DROP LINGER
| SET DEFAULT SQL SECURITY {INVOKER | DEFINER}
| {ENABLE | DISABLE} PUBLICATION
| INCLUDE <pub_table_filter> TO PUBLICATION
| EXCLUDE <pub_table_filter> FROM PUBLICATION
<sec_file> ::=
FILE 'filepath'
[STARTING [AT [PAGE]] pagenum]
[LENGTH [=] num [PAGE[S]]
<pub_table_filter> ::=
ALL
| TABLE table_name [, table_name ...]
ALTER DATABASE
ADD FILE x LENGTH 8000
FILE y LENGTH 8000
FILE z
Multiple occurrences of add_sec_clause (ADD FILE clauses) are allowed; an ADD FILE
clause that adds multiple files (as in the example above) can be mixed with others
that add only one file.
Parameter Description
filepath Full path and file name of the delta file or secondary database file
pagenum Page number from which the secondary database file is to start
diff_file File path and name of the .delta file (difference file)
• switch a single-file database into and out of the “copy-safe” mode (DSQL only)
• set or unset the path and name of the delta file for physical backups (DSQL only)
SCHEMA is currently a synonym for DATABASE; this may change in a future version, so
we recommend to always use DATABASE
• Administrators
ADD (FILE)
Adds secondary files to the database. It is necessary to specify the full path to the file and the
name of the secondary file. The description for the secondary file is similar to the one given for
the CREATE DATABASE statement.
ADD DIFFERENCE FILE
Specifies the path and name of the delta file to which changes are written while the database is in
the “copy-safe” mode.
If only a filename is specified, the delta file will be created in the current
directory of the server. On Windows, this will be the system directory — a very
unwise location to store volatile user files and contrary to Windows file system
rules.
BEGIN BACKUP
Switches the database to the “copy-safe” mode. ALTER DATABASE with this clause freezes the main
database file, making it possible to back it up safely using file system tools, even if users are
connected and performing operations with data. Until the backup state of the database is
reverted to NORMAL, all changes made to the database will be written to the delta (difference)
file.
Despite its name, the ALTER DATABASE BEGIN BACKUP statement does not start a
backup process, but only freezes the database, to create the conditions for doing
a task that requires the database file to be read-only temporarily.
END BACKUP
Switches the database from the “copy-safe” mode to the normal mode. A statement with this
clause merges the difference file with the main database file and restores the normal operation
of the database. Once the END BACKUP process starts, the conditions no longer exist for creating
safe backups by means of file system tools.
Use of BEGIN BACKUP and END BACKUP and copying the database files with
filesystem tools, is not safe with multi-file databases! Use this method only on
single-file databases.
Making a safe backup with the gbak utility remains possible at all times,
although it is not recommended running gbak while the database is in LOCKED
or MERGE state.
ENCRYPT WITH
See Encrypting a Database in the Security chapter.
DECRYPT
See Decrypting a Database in the Security chapter.
SET LINGER TO
Sets the linger-delay. The linger-delay applies only to Firebird SuperServer, and is the number of
seconds the server keeps a database file (and its caches) open after the last connection to that
database was closed. This can help to improve performance at low cost, when the database is
opened and closed frequently, by keeping resources “warm” for the next connection.
The SET LINGER TO and DROP LINGER clauses can be combined in a single
statement, but the last clause “wins”. For example, ALTER DATABASE SET LINGER
TO 5 DROP LINGER will set the linger-delay to 0 (no linger), while ALTER DATABASE
DROP LINGER SET LINGER TO 5 will set the linger-delay to 5 seconds.
DROP LINGER
Drops the linger-delay (sets it to zero). Using DROP LINGER is equivalent to using SET LINGER TO 0.
Dropping LINGER is not an ideal solution for the occasional need to turn it off for
once-only operations where the server needs a forced shutdown. The gfix utility
now has the -NoLinger switch, which will close the specified database
immediately after the last attachment is gone, regardless of the LINGER setting in
the database. The LINGER setting is retained and works normally the next time.
The same one-off override is also available through the Services API, using the
tag isc_spb_prp_nolinger.
The DROP LINGER and SET LINGER TO clauses can be combined in a single
statement, but the last clause “wins”.
ENABLE PUBLICATION
Enables publication of this database for replication. Replication begins (or continues) with the
next transaction started after this transaction commits.
DISABLE PUBLICATION
Disables publication of this database for replication. Replication is disabled immediately after
commit.
INCLUDE … TO PUBLICATION
Includes tables to publication. If the INCLUDE ALL TO PUBLICATION clause is used, all tables created
afterward will also be replicated, unless overridden explicitly in the CREATE TABLE statement.
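A sketch of the clause in use (table names are illustrative; see the syntax diagram for the exact form):

ALTER DATABASE INCLUDE TABLE SALES, CUSTOMERS TO PUBLICATION;

ALTER DATABASE INCLUDE ALL TO PUBLICATION;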
Replication
• Other than the syntax, configuring Firebird for replication is not covered in
this Language Reference.
1. Adding a secondary file to the database. As soon as 30000 pages are filled in the previous
primary or secondary file, the Firebird engine will start adding data to the secondary file
test4.fdb.
ALTER DATABASE
ADD FILE 'D:\test4.fdb'
STARTING AT PAGE 30001;
2. Specifying the path and name of the delta file:
ALTER DATABASE
ADD DIFFERENCE FILE 'D:\test.diff';
3. Deleting the specification of the delta file:
ALTER DATABASE
DROP DIFFERENCE FILE;
4. Switching the database to the “copy-safe” mode:
ALTER DATABASE
BEGIN BACKUP;
5. Switching the database back from the “copy-safe” mode to the normal operation mode:
ALTER DATABASE
END BACKUP;
6. Changing the default character set of the database to WIN1252:
ALTER DATABASE
SET DEFAULT CHARACTER SET WIN1252;
7. Setting a linger-delay of 30 seconds:
ALTER DATABASE
SET LINGER TO 30;
8. Encrypting the database with the DbCrypt plugin:
ALTER DATABASE
ENCRYPT WITH DbCrypt;
9. Decrypting the database:
ALTER DATABASE
DECRYPT;
See also
DROP DATABASE
Deletes the current database
Available in
DSQL, ESQL
Syntax
DROP DATABASE
The DROP DATABASE statement deletes the current database. Before deleting a database, you have to
connect to it. The statement deletes the primary file, all secondary files and all shadow files.
Contrary to CREATE DATABASE and ALTER DATABASE, DROP SCHEMA is not a valid alias for
DROP DATABASE. This is intentional.
• Administrators
DROP DATABASE;
See also
CREATE DATABASE, ALTER DATABASE
5.2. SHADOW
A shadow is an exact, page-by-page copy of a database. Once a shadow is created, all changes made
in the database are immediately reflected in the shadow. If the primary database file becomes
unavailable for some reason, the DBMS will switch to the shadow.
CREATE SHADOW
Creates a shadow for the current database
Available in
DSQL, ESQL
Syntax
<secondary_file> ::=
FILE 'filepath'
[STARTING [AT [PAGE]] pagenum]
[LENGTH [=] num [PAGE[S]]]
Parameter Description
filepath The name of the shadow file and the path to it, in accord with the rules of
the operating system
pagenum The number of the page at which the secondary shadow file should start
The CREATE SHADOW statement creates a new shadow. The shadow starts duplicating the database
right at the moment it is created. It is not possible for a user to connect to a shadow.
Like a database, a shadow may be multi-file. The number and size of a shadow’s files are not
related to the number and size of the files of the shadowed database.
The page size for shadow files is set to be equal to the database page size and cannot be changed.
If a calamity occurs involving the original database, the system converts the shadow to a copy of
the database and switches to it. The shadow is then unavailable. What happens next depends on the
MODE option.
• If the AUTO mode is selected (the default value), shadowing ceases automatically, all references
to it are deleted from the database header, and the database continues functioning normally.
◦ If the CONDITIONAL option was set, the system will attempt to create a new shadow to
replace the lost one. It does not always succeed, however, and a new one may need to be
created manually.
• If the MANUAL mode attribute is set, then when the shadow becomes unavailable, all attempts
to connect to the database and to query it will produce error messages. The database will remain
inaccessible until either the shadow again becomes available, or the database administrator
deletes it using the DROP SHADOW statement. MANUAL should be selected if continuous shadowing is
more important than uninterrupted operation of the database.
LENGTH
Specifies the maximum size of the primary or secondary shadow file in pages. The LENGTH value
does not apply to the last (or only) file of a shadow set; the last (or only) file will
keep growing automatically as long as it is necessary.
STARTING AT
Specifies the shadow page number at which the next shadow file should start. The system will
start adding new data to the next shadow file when the previous file is filled with data up to the
specified page number.
You can verify the sizes, names and location of the shadow files by connecting to
the database using isql and running the command SHOW DATABASE;
• Administrators
2. Creating a multi-file shadow for the current database as “shadow number 2”:
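A sketch of such a statement (file names and sizes are illustrative):

CREATE SHADOW 2 'shadow.sh1'
  LENGTH 8000 PAGES
  FILE 'shadow.sh2';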
See also
CREATE DATABASE, DROP SHADOW
DROP SHADOW
Drops a shadow from the current database
Available in
DSQL, ESQL
Syntax
Parameter Description
sh_num Shadow number: a positive number identifying the shadow set
The DROP SHADOW statement deletes the specified shadow for the current database. When a shadow is
dropped, all files related to it are deleted and shadowing to the specified sh_num ceases. The
optional DELETE FILE clause makes this behaviour explicit. By contrast, the PRESERVE FILE clause
removes the shadow from the database, but the file itself is not deleted.
• Administrators
DROP SHADOW 1;
See also
CREATE SHADOW
5.3. DOMAIN
DOMAIN is one of the object types in a relational database. A domain is created as a specific data type
with attributes attached to it (think of attributes like length, precision or scale, nullability, check
constraints). Once a domain has been defined in the database, it can be reused repeatedly to define
table columns, PSQL arguments and PSQL local variables. Those objects inherit all attributes of the
domain. Some attributes can be overridden when the new object is defined, if required.
This section describes the syntax of statements used to create, alter and drop domains. A detailed
description of domains and their usage can be found in Custom Data Types — Domains.
CREATE DOMAIN
Creates a domain
Available in
DSQL, ESQL
Syntax
<datatype> ::=
<scalar_datatype> | <blob_datatype> | <array_datatype>
<scalar_datatype> ::=
!! See Scalar Data Types Syntax !!
<blob_datatype> ::=
!! See BLOB Data Types Syntax !!
<array_datatype> ::=
!! See Array Data Types Syntax !!
<dom_condition> ::=
<val> <operator> <val>
| <val> [NOT] BETWEEN <val> AND <val>
| <val> [NOT] IN ({<val> [, <val> ...] | <select_list>})
| <val> IS [NOT] NULL
| <val> IS [NOT] DISTINCT FROM <val>
| <val> [NOT] CONTAINING <val>
| <val> [NOT] STARTING [WITH] <val>
| <val> [NOT] LIKE <val> [ESCAPE <val>]
| <val> [NOT] SIMILAR TO <val> [ESCAPE <val>]
| <val> <operator> {ALL | SOME | ANY} (<select_list>)
| [NOT] EXISTS (<select_expr>)
| [NOT] SINGULAR (<select_expr>)
| (<dom_condition>)
| NOT <dom_condition>
| <dom_condition> OR <dom_condition>
| <dom_condition> AND <dom_condition>
<operator> ::=
<> | != | ^= | ~= | = | < | > | <= | >=
| !< | ^< | ~< | !> | ^> | ~>
<val> ::=
VALUE
| <literal>
| <context_var>
| <expression>
| NULL
| NEXT VALUE FOR genname
| GEN_ID(genname, <val>)
| CAST(<val> AS <cast_type>)
| (<select_one>)
| func([<val> [, <val> ...]])
<domain_or_non_array_type> ::=
!! See Scalar Data Types Syntax !!
Parameter Description
select_one A scalar SELECT statement — selecting one column and returning only one
row
select_list A SELECT statement selecting one column and returning zero or more
rows
select_expr A SELECT statement selecting one or more columns and returning zero or
more rows
Type-specific Details
Array Types
• If the domain is to be an array, the base type can be any SQL data type except BLOB and array.
• For each array dimension, one or two integer numbers define the lower and upper
boundaries of its index range:
◦ By default, arrays are 1-based. The lower boundary is implicit and only the upper
boundary need be specified. A single number smaller than 1 defines the range num..1
and a number greater than 1 defines the range 1..num.
◦ Two numbers separated by a colon (‘:’) and optional whitespace, the second greater than
the first, can be used to define the range explicitly. One or both boundaries can be less
than zero, as long as the upper boundary is greater than the lower.
• When the array has multiple dimensions, the range definitions for each dimension must be
separated by commas and optional whitespace.
• Subscripts are validated only if an array actually exists. It means that no error messages
regarding invalid subscripts will be returned if selecting a specific element returns nothing
or if an array field is NULL.
String Types
You can use the CHARACTER SET clause to specify the character set for the CHAR, VARCHAR and BLOB
(SUB_TYPE TEXT) types. If the character set is not specified, the character set specified as DEFAULT
CHARACTER SET of the database will be used. If the database has no default character set, the
character set NONE is applied by default when you create a character domain.
With character set NONE, character data are stored and retrieved the way they
were submitted. Data in any encoding can be added to a column based on such
a domain, but it is impossible to add this data to a column with a different
encoding. Because no transliteration is performed between the source and
destination encodings, errors may result.
DEFAULT Clause
The optional DEFAULT clause allows you to specify a default value for the domain. This value will
be added to the table column that inherits this domain when the INSERT statement is executed, if
no value is specified for it in the DML statement. Local variables and arguments in PSQL
modules that reference this domain will be initialized with the default value. For the default
value, use a literal of a compatible type or a context variable of a compatible type.
When creating a domain, take care to avoid specifying limitations that would
contradict one another. For instance, NOT NULL and DEFAULT NULL are
contradictory.
CHECK Constraint(s)
The optional CHECK clause specifies constraints for the domain. A domain constraint specifies
conditions that must be satisfied by the values of table columns or variables that inherit from
the domain. A condition must be enclosed in parentheses. A condition is a logical expression
(also called a predicate) that can return the Boolean results TRUE, FALSE and UNKNOWN. A condition
is considered satisfied if the predicate returns the value TRUE or “unknown value” (equivalent to
NULL). If the predicate returns FALSE, the condition for acceptance is not met.
VALUE Keyword
The keyword VALUE in a domain constraint substitutes for the table column that is based on this
domain or for a variable in a PSQL module. It contains the value assigned to the variable or the
table column. VALUE can be used anywhere in the CHECK constraint, though it is usually used in the
left part of the condition.
COLLATE
The optional COLLATE clause allows you to specify the collation if the domain is based on one of
the string data types, including BLOBs with text subtypes. If no collation is specified, the collation
will be the one that is default for the specified character set at the time the domain is created.
• Administrators
1. Creating a domain that can take values greater than 1,000, with a default value of 10,000.
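A minimal sketch of such a domain (the name CUSTNO is illustrative):

CREATE DOMAIN CUSTNO AS
  INTEGER DEFAULT 10000
  CHECK (VALUE > 1000);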
2. Creating a domain that can take the values 'Yes' and 'No' in the default character set specified
during the creation of the database.
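A sketch of such a domain (the name D_BOOLEAN is illustrative):

CREATE DOMAIN D_BOOLEAN AS
  CHAR(3)
  CHECK (VALUE IN ('Yes', 'No'));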
3. Creating a domain with the UTF8 character set and the UNICODE_CI_AI collation.
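A sketch of such a domain (the name FIRSTNAME is illustrative):

CREATE DOMAIN FIRSTNAME AS
  VARCHAR(30) CHARACTER SET UTF8
  COLLATE UNICODE_CI_AI;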
4. Creating a domain of the DATE type that will not accept NULL and uses the current date as the
default value.
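A sketch of such a domain (the name D_DATE is illustrative):

CREATE DOMAIN D_DATE AS
  DATE DEFAULT CURRENT_DATE
  NOT NULL;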
5. Creating a domain defined as an array of 2 elements of the NUMERIC(18, 3) type. The starting
array index is 1.
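A sketch of such a definition (the name D_POINT is illustrative):

CREATE DOMAIN D_POINT AS
  NUMERIC(18, 3) [2];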
Domains defined over an array type may be used only to define table columns.
You cannot use array domains to define local variables in PSQL modules.
6. Creating a domain whose elements can be only country codes defined in the COUNTRY table.
The example is given only to show the possibility of using predicates with
queries in the domain test condition. It is not recommended to create this style
of domain in practice unless the lookup table contains data that are never
deleted.
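A sketch of such a domain (the column name COUNTRYCODE in the lookup table is an assumption):

CREATE DOMAIN D_COUNTRYCODE AS CHAR(2)
  CHECK (EXISTS(SELECT * FROM COUNTRY
                WHERE COUNTRYCODE = VALUE));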
See also
ALTER DOMAIN, DROP DOMAIN
ALTER DOMAIN
Alters a domain
Available in
DSQL, ESQL
Syntax
<datatype> ::=
<scalar_datatype> | <blob_datatype>
<scalar_datatype> ::=
!! See Scalar Data Types Syntax !!
<blob_datatype> ::=
!! See BLOB Data Types Syntax !!
Parameter Description
The ALTER DOMAIN statement enables changes to the current attributes of a domain, including its
name. You can make any number of domain alterations in one ALTER DOMAIN statement.
TO name
Renames the domain, as long as there are no dependencies on the domain, i.e. table columns,
local variables or procedure arguments referencing it.
SET DEFAULT
Sets a new default value for the domain, replacing any existing default.
DROP DEFAULT
Deletes a previously specified default value and replaces it with NULL.
SET NOT NULL
Adds a NOT NULL constraint to the domain. Adding a NOT NULL constraint to an existing domain
will subject all columns using this domain to a full data validation, so ensure that the columns
have no nulls before attempting the change.
DROP NOT NULL
Drops the NOT NULL constraint from the domain. An explicit NOT NULL constraint on a column
that depends on a domain prevails over the domain. In this situation, the modification of the
domain to make it nullable does not propagate to the column.
TYPE
Changes the data type of the domain to a different, compatible one. The system will forbid any
change to the type that could result in data loss. An example would be if the number of
characters in the new type were smaller than in the existing type.
When you alter the attributes of a domain, existing PSQL code may become
invalid. For information on how to detect it, read the piece entitled The
RDB$VALID_BLR Field in Appendix A.
• If the domain was declared as an array, it is not possible to change its type or its dimensions;
nor can any other type be changed to an array type.
• The collation cannot be changed without dropping the domain and recreating it with the
desired attributes.
• Administrators
Domain alterations can be prevented by dependencies from objects to which the user does not have
sufficient privileges.
1. Changing the data type to INTEGER and setting or changing the default value to 2,000:
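A sketch (the domain name CUSTNO is illustrative):

ALTER DOMAIN CUSTNO
  TYPE INTEGER
  SET DEFAULT 2000;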
2. Renaming a domain.
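A sketch (both names are illustrative):

ALTER DOMAIN CUSTNO TO CUSTNUM;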
3. Deleting the default value and adding a constraint for the domain:
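A sketch (the domain name and the date limit are illustrative):

ALTER DOMAIN D_DATE
  DROP DEFAULT
  ADD CONSTRAINT CHECK (VALUE >= date '2000-01-01');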
See also
CREATE DOMAIN, DROP DOMAIN
DROP DOMAIN
Drops a domain
Available in
DSQL, ESQL
Syntax
The DROP DOMAIN statement deletes a domain that exists in the database. It is not possible to delete a
domain if it is referenced by any database table columns or used in any PSQL module. To delete a
domain that is in use, all columns in all tables that refer to the domain have to be dropped and all
references to the domain have to be removed from PSQL modules.
• Administrators
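A minimal usage sketch (the domain name is illustrative):

DROP DOMAIN COUNTRYNAME;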
See also
CREATE DOMAIN, ALTER DOMAIN
5.4. TABLE
As a relational DBMS, Firebird stores data in tables. A table is a flat, two-dimensional structure
containing any number of rows. Table rows are often called records.
All rows in a table have the same structure and consist of columns. Table columns are often called
fields. A table must have at least one column. Each column contains a single type of SQL data.
This section describes how to create, alter and drop tables in a database.
CREATE TABLE
Creates a table
Available in
DSQL, ESQL
Syntax
<col_def> ::=
<regular_col_def>
| <computed_col_def>
| <identity_col_def>
<regular_col_def> ::=
colname {<datatype> | domainname}
[DEFAULT {<literal> | NULL | <context_var>}]
[<col_constraint> ...]
[COLLATE collation_name]
<computed_col_def> ::=
colname [{<datatype> | domainname}]
{COMPUTED [BY] | GENERATED ALWAYS AS} (<expression>)
<identity_col_def> ::=
colname {<datatype> | domainname}
GENERATED {ALWAYS | BY DEFAULT} AS IDENTITY
[(<identity_col_option>...)]
[<col_constraint> ...]
<identity_col_option> ::=
START WITH start_value
| INCREMENT [BY] inc_value
<datatype> ::=
<scalar_datatype> | <blob_datatype> | <array_datatype>
<scalar_datatype> ::=
!! See Scalar Data Types Syntax !!
<blob_datatype> ::=
!! See BLOB Data Types Syntax !!
<array_datatype> ::=
!! See Array Data Types Syntax !!
<col_constraint> ::=
[CONSTRAINT constr_name]
{ PRIMARY KEY [<using_index>]
| UNIQUE [<using_index>]
| REFERENCES other_table [(colname)] [<using_index>]
[ON DELETE {NO ACTION | CASCADE | SET DEFAULT | SET NULL}]
[ON UPDATE {NO ACTION | CASCADE | SET DEFAULT | SET NULL}]
| CHECK (<check_condition>)
| NOT NULL }
<tconstraint> ::=
[CONSTRAINT constr_name]
{ PRIMARY KEY (<col_list>) [<using_index>]
| UNIQUE (<col_list>) [<using_index>]
| FOREIGN KEY (<col_list>)
REFERENCES other_table [(<col_list>)] [<using_index>]
[ON DELETE {NO ACTION | CASCADE | SET DEFAULT | SET NULL}]
[ON UPDATE {NO ACTION | CASCADE | SET DEFAULT | SET NULL}]
| CHECK (<check_condition>) }
<check_condition> ::=
<val> <operator> <val>
| <val> [NOT] BETWEEN <val> AND <val>
| <val> [NOT] IN (<val> [, <val> ...] | <select_list>)
| <val> IS [NOT] NULL
| <val> IS [NOT] DISTINCT FROM <val>
<operator> ::=
<> | != | ^= | ~= | = | < | > | <= | >=
| !< | ^< | ~< | !> | ^> | ~>
<val> ::=
colname ['['array_idx [, array_idx ...]']']
| <literal>
| <context_var>
| <expression>
| NULL
| NEXT VALUE FOR genname
| GEN_ID(genname, <val>)
| CAST(<val> AS <cast_type>)
| (<select_one>)
| func([<val> [, <val> ...]])
<domain_or_non_array_type> ::=
!! See Scalar Data Types Syntax !!
<table_attr> ::=
<sql_security>
| {ENABLE | DISABLE} PUBLICATION
<gtt_table_attr> ::=
<sql_security>
| ON COMMIT {DELETE | PRESERVE} ROWS
Parameter Description
tablename Name (identifier) for the table. The maximum length is 63 characters;
the name must be unique in the database.
filespec File specification (only for external tables). Full file name and path,
enclosed in single quotes, correct for the local file system and located on
a storage device that is physically connected to Firebird’s host computer.
colname Name (identifier) for a column in the table. The maximum length is 63
characters; the name must be unique in the table.
inc_value The increment (or step) value of the identity column, default is 1; zero (0)
is not allowed.
other_table The name of the table referenced by the foreign key constraint
other_col The name of the column in other_table that is referenced by the foreign
key
context_var Any context variable whose data type is allowed in the given context
check_condition The condition applied to a CHECK constraint, that will resolve as either
true, false or NULL
collation Collation
select_one A scalar SELECT statement — selecting one column and returning only one
row
select_list A SELECT statement selecting one column and returning zero or more
rows
select_expr A SELECT statement selecting one or more columns and returning zero or
more rows
The CREATE TABLE statement creates a new table. Its name must be unique among the names of all
tables, views and stored procedures in the database.
A table must contain at least one column that is not computed, and the names of columns must be
unique in the table.
A column must have either an explicit SQL data type, the name of a domain whose attributes will be
copied for the column, or be defined as COMPUTED BY an expression (a calculated field).
Character Columns
You can use the CHARACTER SET clause to specify the character set for the CHAR, VARCHAR and BLOB (text
subtype) types. If the character set is not specified, the default character set of the database — at
time of the creation of the column — will be used.
If the database has no default character set, the NONE character set is applied. Data in any encoding
can be added to such a column, but it is not possible to add this data to a column with a different
encoding. No transliteration is performed between the source and destination encodings, which
may result in errors.
The optional COLLATE clause allows you to specify the collation for character data types, including
BLOB SUB_TYPE TEXT. If no collation is specified, the default collation for the specified character
set — at time of the creation of the column — is applied.
The optional DEFAULT clause allows you to specify the default value for the table column. This value
will be added to the column when an INSERT statement is executed and that column was omitted
from the INSERT command or DEFAULT was used instead of a value expression. The default value will
also be used in UPDATE when DEFAULT is used instead of a value expression.
The default value can be a literal of a compatible type, a context variable that is type-compatible
with the data type of the column, or NULL, if the column allows it. If no default value is explicitly
specified, NULL is implied.
Domain-based Columns
To define a column, you can use a previously defined domain. If the definition of a column is based
on a domain, it may contain a new default value, additional CHECK constraints, and a COLLATE clause
that will override the values specified in the domain definition. The definition of such a column
may contain additional column constraints (for instance, NOT NULL), if the domain does not have it.
Identity columns are defined using the GENERATED {ALWAYS | BY DEFAULT} AS IDENTITY clause. The
identity column is a column associated with an internal sequence. Its value is set automatically
every time it is not specified in the INSERT statement, or when the column value is specified as
DEFAULT.
Rules
• The data type of an identity column must be an exact number type with zero scale. Allowed
types are SMALLINT, INTEGER, BIGINT, NUMERIC(p[,0]) and DECIMAL(p[,0]) with p <= 18.
◦ The INT128 type and numeric types with a precision higher than 18 are not supported.
• Identity columns are implicitly NOT NULL (non-nullable), and cannot be made nullable.
• The use of other methods of generating key values for identity columns, e.g. by trigger-
generator code or by allowing users to change or add them, is discouraged to avoid unexpected
key violations.
GENERATED ALWAYS
An identity column of type GENERATED ALWAYS will always generate a column value on insert.
Explicitly inserting a value into a column of this type is not allowed, unless:
1. the specified value is DEFAULT; this generates the identity value as normal.
2. the OVERRIDING SYSTEM VALUE clause is specified in the INSERT statement; this allows a user value
to be inserted;
3. the OVERRIDING USER VALUE clause is specified in the INSERT statement; this allows a user specified
value to be ignored (though in general it makes more sense to not include the column in the
INSERT).
GENERATED BY DEFAULT
An identity column of type GENERATED BY DEFAULT will generate a value on insert if no value — other
than DEFAULT — is specified on insert. When the OVERRIDING USER VALUE clause is specified in the
INSERT statement, the user-provided value is ignored, and an identity value is generated (as if the
column was not included in the insert, or the value DEFAULT was specified).
START WITH Option
The optional START WITH clause allows you to specify an initial value other than 1. This value is the
first value generated when using NEXT VALUE FOR sequence.
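For instance (table and column names are illustrative), an identity column starting at 100:

CREATE TABLE objects (
  ID INTEGER GENERATED BY DEFAULT AS IDENTITY (START WITH 100),
  NAME VARCHAR(30)
);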
INCREMENT Option
The optional INCREMENT clause allows you to specify another non-zero step value than 1.
The SQL standard specifies that if INCREMENT is specified with a negative value, and
START WITH is not specified, the first value generated should be the maximum value
of the column type (e.g. 2^31 - 1 for INTEGER). Instead, Firebird will start at 1.
Computed Columns
Computed columns can be defined with the COMPUTED [BY] or GENERATED ALWAYS AS clause (the SQL
standard alternative to COMPUTED [BY]). Specifying the data type is optional; if not specified, the
appropriate type will be derived from the expression.
If the data type is explicitly specified for a calculated field, the calculation result is converted to the
specified type. This means, for instance, that the result of a numeric expression could be converted
to a string.
In a query that selects a computed column, the expression is evaluated for each row of the selected
data.
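For instance (names are illustrative), a computed column using the SQL standard spelling of the clause:

CREATE TABLE persons (
  FIRST_NAME VARCHAR(30),
  LAST_NAME VARCHAR(30),
  FULL_NAME GENERATED ALWAYS AS (FIRST_NAME || ' ' || LAST_NAME)
);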
Array Columns
• If the column is to be an array, the base type can be any SQL data type except BLOB and array.
• For each array dimension, one or two integer numbers define the lower and upper boundaries
of its index range:
◦ By default, arrays are 1-based. The lower boundary is implicit and only the upper boundary
need be specified. A single number smaller than 1 defines the range num…1 and a number
greater than 1 defines the range 1…num.
◦ Two numbers separated by a colon (‘:’) and optional whitespace, the second greater than the
first, can be used to define the range explicitly. One or both boundaries can be less than
zero, as long as the upper boundary is greater than the lower.
• When the array has multiple dimensions, the range definitions for each dimension must be
separated by commas and optional whitespace.
• Subscripts are validated only if an array actually exists. It means that no error messages
regarding invalid subscripts will be returned if selecting a specific element returns nothing or if
an array field is NULL.
Constraints
Constraints can be specified at column level (“column constraints”) or at table level (“table
constraints”). Table-level constraints are required when keys (unique constraint, primary key,
foreign key) consist of multiple columns and when a CHECK constraint involves other columns in the
row besides the column being defined. The NOT NULL constraint can only be specified as a column
constraint. Syntax for some types of constraint may differ slightly according to whether the
constraint is defined at the column or table level.
• A column-level constraint is specified during a column definition, after all column attributes
except COLLATION are specified, and can involve only the column specified in that definition
• A table-level constraint can only be specified after the definitions of the columns used in the
constraint.
• Table-level constraints are a more flexible way to set constraints, since they can cater for
constraints involving multiple columns
• You can mix column-level and table-level constraints in the same CREATE TABLE statement
The system automatically creates the corresponding index for a primary key (PRIMARY KEY), a
unique key (UNIQUE), and a foreign key (REFERENCES for a column-level constraint, FOREIGN KEY
REFERENCES for table-level).
Constraints and their indexes are named automatically if no name was specified using the
CONSTRAINT clause:
• The constraint name has the form INTEG_n, where n represents one or more digits
• The index name has the form RDB$PRIMARYn (for a primary key index), RDB$FOREIGNn (for a foreign
key index) or RDB$n (for a unique key index).
Named Constraints
A constraint can be named explicitly if the CONSTRAINT clause is used for its definition. By default,
the constraint index will have the same name as the constraint. If a different name is wanted for
the constraint index, a USING clause can be included.
The USING clause allows you to specify a user-defined name for the index that is created
automatically and, optionally, to define the direction of the index — either ascending (the default)
or descending.
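For instance (names are illustrative), naming both the constraint and its index:

CREATE TABLE CUSTOMERS (
  ID INTEGER NOT NULL
    CONSTRAINT PK_CUSTOMERS PRIMARY KEY
    USING INDEX IDX_CUSTOMERS_ID
);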
PRIMARY KEY
The PRIMARY KEY constraint is built on one or more key columns, where each column has the NOT
NULL constraint specified. The values across the key columns in any row must be unique. A table can
have only one primary key.
UNIQUE
The UNIQUE constraint defines the requirement of content uniqueness for the values in a key
throughout the table. A table can contain any number of unique key constraints.
As with the primary key, the unique constraint can be multi-column. If so, it must be specified as a
table-level constraint.
Firebird’s SQL-compliant rules for UNIQUE constraints allow one or more NULLs in a column with a
UNIQUE constraint. This makes it possible to define a UNIQUE constraint on a column that does not
have the NOT NULL constraint.
For UNIQUE keys that span multiple columns, the logic is a little complicated:
• Multiple rows having null in all the columns of the key are allowed
• Multiple rows having keys with different combinations of nulls and non-null values are allowed
• Multiple rows having the same key columns null and the rest filled with non-null values are
allowed, provided the non-null values differ in at least one column
• Multiple rows having the same key columns null and the rest filled with non-null values that
are the same in every column will violate the constraint
In principle, all nulls are considered distinct. However, if two rows have
exactly the same key columns filled with non-null values, the NULL columns
are ignored and the uniqueness is determined on the non-null columns as
though they constituted the entire key.
Illustration
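The rules above can be illustrated with a sketch (the table name is illustrative):

RECREATE TABLE t (a INT, b INT,
  CONSTRAINT t_uq UNIQUE (a, b));
INSERT INTO t VALUES (NULL, NULL); -- allowed
INSERT INTO t VALUES (NULL, NULL); -- allowed: all-null keys may repeat
INSERT INTO t VALUES (NULL, 1);    -- allowed
INSERT INTO t VALUES (NULL, 2);    -- allowed: the non-null parts differ
INSERT INTO t VALUES (NULL, 1);    -- violates the constraint: same non-null part as an existing row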
FOREIGN KEY
A foreign key ensures that the participating column(s) can contain only values that also exist in the
referenced column(s) in the master table. These referenced columns are often called target
columns. They must be the primary key or a unique key in the target table. They need not have a
NOT NULL constraint defined on them although, if they are the primary key, they will, of course, have
that constraint.
The foreign key columns in the referencing table itself do not require a NOT NULL constraint.
A single-column foreign key can be defined in the column declaration, using the keyword
REFERENCES:
... ,
ARTIFACT_ID INTEGER REFERENCES COLLECTION (ARTIFACT_ID),
The column ARTIFACT_ID in the example references a column of the same name in the table
COLLECTION.
Both single-column and multi-column foreign keys can be defined at the table level. For a multi-
column foreign key, the table-level declaration is the only option.
...
CONSTRAINT FK_ARTSOURCE FOREIGN KEY(DEALER_ID, COUNTRY)
REFERENCES DEALER (DEALER_ID, COUNTRY),
Notice that the column names in the referenced (“master”) table may differ from those in the
foreign key.
If no target columns are specified, the foreign key automatically references the target table’s
primary key.
With the sub-clauses ON UPDATE and ON DELETE it is possible to specify an action to be taken on the
affected foreign key column(s) when referenced values in the master table are changed:
NO ACTION
(the default) — Nothing is done
CASCADE
The change in the master table is propagated to the corresponding row(s) in the child table. If a
key value changes, the corresponding key in the child records changes to the new value; if the
master row is deleted, the child records are deleted.
SET DEFAULT
The foreign key columns in the affected rows will be set to their default values as they were when
the foreign key constraint was defined.
SET NULL
The foreign key columns in the affected rows will be set to NULL.
The specified action, or the default NO ACTION, could cause a foreign key column to become invalid.
For example, it could get a value that is not present in the master table. Such a condition will cause
the operation on the master table to fail with an error message.
Example
...
CONSTRAINT FK_ORDERS_CUST
FOREIGN KEY (CUSTOMER) REFERENCES CUSTOMERS (ID)
ON UPDATE CASCADE ON DELETE SET NULL
CHECK Constraint
The CHECK constraint defines a condition that the values inserted into this column or row must
satisfy. A condition is a logical expression (also called a predicate) that can return TRUE, FALSE or
UNKNOWN. A condition is considered satisfied if the predicate returns TRUE or UNKNOWN (equivalent
to NULL). If the predicate returns FALSE, the value will not be accepted. The condition is checked
when a new row is inserted into the table (the INSERT statement), when an existing value of a table
column is updated (the UPDATE statement), and also by statements where one of these actions may
take place (UPDATE OR INSERT, MERGE).
CHECK constraints — whether defined at table level or column level — refer to table columns by their
names. The use of the keyword VALUE as a placeholder — as in domain CHECK constraints — is not
valid in the context of defining constraints in a table.
Example
with two column-level constraints and one at table-level:
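A sketch of such a definition, with illustrative table and column names:

CREATE TABLE PLACES (
  ...,
  LAT DECIMAL(9, 6) CHECK (ABS(LAT) <= 90),
  LON DECIMAL(9, 6) CHECK (ABS(LON) <= 180),
  ...,
  CONSTRAINT CHK_POLES CHECK (ABS(LAT) < 90 OR LON = 0)
);

Here the two column-level constraints restrict each coordinate separately, while the table-level
constraint expresses a condition that involves both columns.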
NOT NULL Constraint
In Firebird, columns are nullable by default. The NOT NULL constraint specifies that a column
cannot take NULL in place of a value.
A NOT NULL constraint can only be defined as a column constraint, not as a table constraint.
The SQL SECURITY clause specifies the security context for executing functions referenced in
computed columns and check constraints, and the default context used for triggers fired for this
table. When SQL SECURITY is not specified, the default value of the database is applied at runtime.
Replication Management
When the database has been configured using ALTER DATABASE INCLUDE ALL TO PUBLICATION, new
tables will automatically be added for publication, unless overridden using the DISABLE PUBLICATION
clause.
If the database has not been configured for INCLUDE ALL (or has later been reconfigured using ALTER
DATABASE EXCLUDE ALL FROM PUBLICATION), new tables will not automatically be added for publication.
To include tables for publication, the ENABLE PUBLICATION clause must be used.
• Administrators
The user executing the CREATE TABLE statement becomes the owner of the table.
1. Creating the COUNTRY table with the primary key specified as a column constraint.
2. Creating the STOCK table with the named primary key specified at the column level and the
named unique key specified at the table level.
3. Creating the JOB table with a primary key constraint spanning two columns, a foreign key
constraint for the COUNTRY table and a table-level CHECK constraint. The table also contains an
array of 5 elements.
4. Creating the PROJECT table with primary, foreign and unique key constraints with custom index
names specified with the USING clause.
ID NAME
============ ===============
1 Table
2 Book
10 Computer
6. Creating the SALARY_HISTORY table with two computed fields. The first one is declared according
to the SQL standard, while the second one is declared according to the traditional declaration of
computed fields in Firebird.
7. With DEFINER set for table t, user US needs only the SELECT privilege on t. If it were set for
INVOKER, the user would also need the EXECUTE privilege on function f.
set term ^;
create function f() returns int
as
begin
return 3;
end^
set term ;^
create table t (i integer, c computed by (i + f())) SQL SECURITY DEFINER;
insert into t values (2);
grant select on table t to user us;
commit;
8. With DEFINER set for table tr, user US needs only the INSERT privilege on tr. If it were set for
INVOKER, either the user or the trigger would also need the INSERT privilege on table t. The result
would be the same if SQL SECURITY DEFINER were specified for trigger tr_ins:
commit;
Global temporary tables have persistent metadata, but their contents are transaction-bound (the
default) or connection-bound. Every transaction or connection has its own private instance of a
GTT, isolated from all the others. Instances are only created if and when the GTT is referenced. They
are destroyed when the transaction ends or on disconnect. The metadata of a GTT can be modified
or removed using ALTER TABLE and DROP TABLE, respectively.
Syntax
<gtt_table_attr> ::=
<sql_security>
| ON COMMIT {DELETE | PRESERVE} ROWS
Syntax notes
• ON COMMIT DELETE ROWS creates a transaction-level GTT (the default), ON COMMIT PRESERVE ROWS a
connection-level GTT
• The EXTERNAL [FILE] clause is not allowed in the definition of a global temporary table
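For example, a connection-scoped GTT could be defined as follows (table and column names are
illustrative):

CREATE GLOBAL TEMPORARY TABLE SESSION_LOG (
  ID INTEGER NOT NULL PRIMARY KEY,
  TXT VARCHAR(32)
) ON COMMIT PRESERVE ROWS;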
Restrictions on GTTs
GTTs can be “dressed up” with all the features of ordinary tables (keys, references, indexes,
triggers and so on), but there are a few restrictions:
• The destruction of a GTT instance at the end of its lifecycle does not cause any BEFORE/AFTER
delete triggers to fire
select t.rdb$type_name
from rdb$relations r
join rdb$types t on r.rdb$relation_type = t.rdb$type
where t.rdb$field_name = 'RDB$RELATION_TYPE'
and r.rdb$relation_name = 'TABLENAME'
The RDB$TYPE_NAME field will show PERSISTENT for a regular table, VIEW for a view,
GLOBAL_TEMPORARY_PRESERVE for a connection-bound GTT and
GLOBAL_TEMPORARY_DELETE for a transaction-bound GTT.
2. Creating a transaction-scoped global temporary table that uses a foreign key to reference a
connection-scoped global temporary table. The ON COMMIT sub-clause is optional because DELETE
ROWS is the default.
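A sketch matching this description (names are illustrative):

CREATE GLOBAL TEMPORARY TABLE MYCONNGTT (
  ID INTEGER NOT NULL PRIMARY KEY,
  TXT VARCHAR(32)
) ON COMMIT PRESERVE ROWS;

CREATE GLOBAL TEMPORARY TABLE MYTXGTT (
  ID INTEGER NOT NULL PRIMARY KEY,
  PARENT_ID INTEGER NOT NULL REFERENCES MYCONNGTT (ID),
  TXT VARCHAR(32)
);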
External Tables
The optional EXTERNAL [FILE] clause specifies that the table is stored outside the database in an
external text file of fixed-length records. The columns of a table stored in an external file can be of
any type except BLOB or ARRAY, although for most purposes, only columns of CHAR types would be
useful.
All you can do with a table stored in an external file is insert new rows (INSERT) and query the data
(SELECT). Updating existing data (UPDATE) and deleting rows (DELETE) are not possible.
A file that is defined as an external table must be located on a storage device that is physically
present on the machine where the Firebird server runs and, if the parameter ExternalFileAccess in
the firebird.conf configuration file is Restrict, it must be in one of the directories listed there as
the argument for Restrict. If the file does not exist yet, Firebird will create it on first access.
The ability to use external files for a table depends on the value set for the
ExternalFileAccess parameter in firebird.conf:
• If it is set to None (the default), any attempt to access an external file will be
denied.
• If this parameter is set to Full, external files may be accessed anywhere on the
host file system. This creates a security vulnerability and is not recommended.
The “row” format of the external table is fixed length and binary. There are no field delimiters: both
field and row boundaries are determined by maximum sizes, in bytes, of the field definitions. Keep
this in mind, both when defining the structure of the external table and when designing an input
file for an external table that is to import (or export) data from another application. The ubiquitous
CSV format, for example, is of no use as an input file and cannot be generated directly into an
external file.
The most useful data type for the columns of external tables is the fixed-length CHAR type, of suitable
lengths for the data they are to carry. Date and number types are easily cast to and from strings
whereas the native data types — binary data — will appear to external applications as unparseable
“alphabetti”.
Of course, there are ways to manipulate typed data to generate output files from Firebird that can
be read directly as input files to other applications, using stored procedures, with or without
employing external tables. Such techniques are beyond the scope of a language reference. Here, we
provide guidelines and tips for producing and working with simple text files, since the external
table feature is often used as an easy way to produce or read transaction-independent logs that can
be studied off-line in a text editor or auditing application.
Row Delimiters
Generally, external files are more useful if rows are separated by a delimiter, in the form of a
“newline” sequence that is recognised by reader applications on the intended platform. For most
contexts on Windows, it is the two-byte 'CRLF' sequence, carriage return (ASCII code decimal 13)
and line feed (ASCII code decimal 10). On POSIX, LF on its own is usual. There are various ways to
populate this delimiter column. In our example below, it is done by using a BEFORE INSERT trigger
and the internal function ASCII_CHAR.
For our example, we will define an external log table that might be used by an exception handler in
a stored procedure or trigger. The external table is chosen because the messages from any handled
exceptions will be retained in the log, even if the transaction that launched the process is
eventually rolled back because of another, unhandled exception. For demonstration purposes, it
has two data columns, a timestamp and a message. The third column stores the row delimiter:
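One possible definition, assuming the file name and column sizes (the file must reside in a
location permitted by the ExternalFileAccess setting):

CREATE TABLE ext_log
  EXTERNAL FILE 'ext_log.txt' (
  stamp CHAR(24),
  message CHAR(100),
  crlf CHAR(2)  -- for a Windows-style newline
);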
Now, a trigger, to write the timestamp and the row delimiter each time a message is written to the
file:
SET TERM ^;
CREATE TRIGGER bi_ext_log FOR ext_log
ACTIVE BEFORE INSERT
AS
BEGIN
  IF (new.stamp IS NULL) THEN
    new.stamp = CAST (CURRENT_TIMESTAMP AS CHAR(24));
  -- the delimiter column is assumed to be named crlf
  new.crlf = ASCII_CHAR(13) || ASCII_CHAR(10);
END^
SET TERM ;^
Inserting some records (which could have been done by an exception handler or a fan of
Shakespeare):
The output:
ALTER TABLE
Alters a table
Available in
DSQL, ESQL
Syntax
<operation> ::=
ADD <col_def>
| ADD <tconstraint>
| DROP colname
| DROP CONSTRAINT constr_name
| ALTER [COLUMN] colname <col_mod>
| ALTER SQL SECURITY {INVOKER | DEFINER}
| DROP SQL SECURITY
| {ENABLE | DISABLE} PUBLICATION
<col_mod> ::=
TO newname
| POSITION newpos
| <regular_col_mod>
| <computed_col_mod>
| <identity_col_mod>
<regular_col_mod> ::=
TYPE {<datatype> | domainname}
| SET DEFAULT {<literal> | NULL | <context_var>}
| DROP DEFAULT
| {SET | DROP} NOT NULL
<computed_col_mod> ::=
[TYPE <datatype>] {COMPUTED [BY] | GENERATED ALWAYS AS} (<expression>)
<identity_col_mod> ::=
SET GENERATED {ALWAYS | BY DEFAULT} [<identity_mod_option>...]
| <identity_mod_options>...
| DROP IDENTITY
<identity_mod_options> ::=
RESTART [WITH restart_value]
| SET INCREMENT [BY] inc_value
Parameter Description
operation One of the available operations altering the structure of the table
colname Name (identifier) for a column in the table. The maximum length is 63
characters. Must be unique in the table.
newname New name (identifier) for the column. The maximum length is 63
characters. Must be unique in the table.
newpos The new column position (an integer between 1 and the number of
columns in the table)
other_table The name of the table referenced by the foreign key constraint
inc_value The increment (or step) value of the identity column; zero (0) is not
allowed.
The ALTER TABLE statement changes the structure of an existing table. With one ALTER TABLE
statement it is possible to perform multiple operations, adding/dropping columns and constraints
and also altering column specifications.
Some changes in the structure of a table increment the metadata change counter (“version count”)
assigned to every table. The number of metadata changes is limited to 255 for each table, or 32,000
for each view. Once the counter reaches this limit, you will not be able to make any further changes
to the structure of the table or view without resetting the counter. The only way to reset the
counter is to back up and restore the database using the gbak utility.
With the ADD clause you can add a new column or a new table constraint. The syntax for defining
the column and the syntax for defining the table constraint correspond with those described for the
CREATE TABLE statement.
• Adding a new table constraint does not increase the metadata change counter
Points to Be Aware of
1. Adding a column with a NOT NULL constraint without a DEFAULT value will fail if
the table has existing rows. When adding a non-nullable column, it is
recommended either to set a default value for it, or to create it as nullable,
update the column in existing rows with a non-null value, and then add a NOT
NULL constraint.
2. When a new CHECK constraint is added, existing data is not tested for
compliance. Prior testing of existing data against the new CHECK expression is
recommended.
3. Although adding an identity column is supported, this will only succeed if the
table is empty. Adding an identity column will fail if the table has one or more
rows.
The DROP colname clause deletes the specified column from the table. An attempt to drop a column
will fail if anything references it. Consider the following items as sources of potential dependencies:
• indexes
• views
The DROP CONSTRAINT clause deletes the specified column-level or table-level constraint.
A PRIMARY KEY or UNIQUE key constraint cannot be deleted if it is referenced by a FOREIGN KEY
constraint in another table. It will be necessary to drop that FOREIGN KEY constraint before
attempting to drop the PRIMARY KEY or UNIQUE key constraint it references.
With the ALTER [COLUMN] clause, attributes of existing columns can be modified without the need to
drop and re-add the column. Permitted modifications are:
• change the name (does not affect the metadata change counter)
• change the data type (increases the metadata change counter by one)
• change the column position in the column list of the table (does not affect the metadata change
counter)
• delete the default column value (does not affect the metadata change counter)
• set a default column value or change the existing default (does not affect the metadata change
counter)
• change the type and expression for a computed column (does not affect the metadata change
counter)
• set the NOT NULL constraint (does not affect the metadata change counter)
• drop the NOT NULL constraint (does not affect the metadata change counter)
• change the type of an identity column, or change an identity column to a regular column
The TO keyword with a new identifier renames an existing column. The table must not have an
existing column that has the same identifier.
It will not be possible to change the name of a column that is included in any constraint: primary
key, unique key, foreign key, or CHECK constraints of the table.
Renaming a column will also be disallowed if the column is used in any stored PSQL module or
view.
The keyword TYPE changes the data type of an existing column to another, allowable type. A type
change that might result in data loss will be disallowed. As an example, the number of characters in
the new type for a CHAR or VARCHAR column cannot be smaller than the existing specification for it.
If the column was declared as an array, no change to its type or its number of dimensions is
permitted.
The data type of a column that is involved in a foreign key, primary key or unique constraint cannot
be changed at all.
The POSITION keyword changes the position of an existing column in the notional “left-to-right”
layout of the record.
• If a position number is greater than the number of columns in the table, its new position will be
adjusted silently to match the number of columns.
The optional DROP DEFAULT clause deletes the current default value for the column.
• If the column is based on a domain with a default value, the default value will revert to the
domain default
• An error will be raised if an attempt is made to delete the default value of a column which has
no default value or whose default value is domain-based
The optional SET DEFAULT clause sets a default value for the column. If the column already has a
default value, it will be replaced with the new one. The default value applied to a column always
overrides one inherited from a domain.
The SET NOT NULL clause adds a NOT NULL constraint to an existing table column. Unlike in the
column definition of CREATE TABLE, it is not possible to specify a constraint name.
The successful addition of the NOT NULL constraint is subject to a full data validation on the table, so
ensure that the column has no nulls before attempting the change.
An explicit NOT NULL constraint on a domain-based column overrides the domain settings. In this
scenario, changing the domain to be nullable does not extend to the table column.
Dropping the NOT NULL constraint from a column whose type is a domain that also has a NOT NULL
constraint has no observable effect until the NOT NULL constraint is dropped from the domain as
well.
The data type and expression underlying a computed column can be modified using a COMPUTED
[BY] or GENERATED ALWAYS AS clause in the ALTER TABLE ALTER [COLUMN] statement. Conversion of a
regular column to a computed one and vice versa is not permitted.
For identity columns (SET GENERATED {ALWAYS | BY DEFAULT}) it is possible to modify several
properties using the following clauses.
Identity Type
The SET GENERATED {ALWAYS | BY DEFAULT} clause changes an identity column from ALWAYS to BY DEFAULT
and vice versa. It is not possible to use this clause to change a regular column to an identity
column.
RESTART
The RESTART clause restarts the sequence used for generating identity values. If only the RESTART
clause is specified, then the sequence resets to the initial value specified when the identity column
was defined. If the optional WITH restart_value clause is specified, the sequence will restart with the
specified value.
In Firebird 3.0, RESTART WITH restart_value would also change the configured
initial value to restart_value. This was not compliant with the SQL standard, so
since Firebird 4.0, RESTART WITH restart_value will only restart the sequence with
the specified value. Subsequent RESTARTs (without WITH) will use the START WITH
value specified when the identity column was defined.
SET INCREMENT
The SET INCREMENT clause changes the increment of the identity column.
DROP IDENTITY
The DROP IDENTITY clause will change an identity column to a regular column.
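For instance, several identity properties can be changed in one statement (table and column names
are illustrative):

ALTER TABLE objects
  ALTER COLUMN id
    RESTART WITH 100
    SET INCREMENT BY 5;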
Using the ALTER SQL SECURITY or DROP SQL SECURITY clauses, it is possible to change or drop the SQL
Security property of a table. After dropping SQL Security, the default value of the database is
applied at runtime.
If the SQL Security property is changed for a table, triggers that do not have an explicit SQL
Security property will not see the effect of the change until the next time the trigger is loaded into
the metadata cache.
Replication Management
To stop replicating a table, use the DISABLE PUBLICATION clause. To start replicating a table, use the
ENABLE PUBLICATION clause.
• Administrators
2. Adding the CAPITAL column with the NOT NULL and UNIQUE constraint and deleting the CURRENCY
column.
3. Adding the CHK_SALARY check constraint and a foreign key to the JOB table.
4. Setting default value for the MODEL field, changing the type of the ITEMID column and renaming
the MODELNAME column.
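A statement matching example 4 could look like this (the exact type and default value are
assumptions):

ALTER TABLE STOCK
  ALTER COLUMN MODEL SET DEFAULT 1,
  ALTER COLUMN ITEMID TYPE BIGINT,
  ALTER COLUMN MODELNAME TO NAME;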
See also
CREATE TABLE, DROP TABLE, CREATE DOMAIN
DROP TABLE
Drops a table
Available in
DSQL, ESQL
Syntax
DROP TABLE tablename
Parameter Description
tablename Name (identifier) of the table to be dropped
The DROP TABLE statement drops (deletes) an existing table. If the table has dependencies, the DROP
TABLE statement will fail with an error.
When a table is dropped, all its triggers and indexes will be deleted as well.
• Administrators
See also
CREATE TABLE, ALTER TABLE, RECREATE TABLE
RECREATE TABLE
Available in
DSQL
Syntax
See the CREATE TABLE section for the full syntax of CREATE TABLE and descriptions of defining tables,
columns and constraints.
RECREATE TABLE creates or recreates a table. If a table with this name already exists, the RECREATE
TABLE statement will try to drop it and create a new one. Existing dependencies will prevent the
statement from executing.
See also
CREATE TABLE, DROP TABLE
5.5. INDEX
An index is a database object used for faster data retrieval from a table or for speeding up the
sorting in a query. Indexes are also used to enforce the referential integrity constraints PRIMARY KEY,
FOREIGN KEY and UNIQUE.
This section describes how to create indexes, activate and deactivate them, drop them and collect
statistics (recalculate selectivity) for them.
CREATE INDEX
Creates an index
Available in
DSQL, ESQL
Syntax
Parameter Description
tablename The name of the table for which the index is to be built
col Name of a column in the table. Columns of the types BLOB and ARRAY and
computed fields cannot be used in an index.
expression The expression that will compute the values for a computed index, also
known as an “expression index”
The CREATE INDEX statement creates an index for a table that can be used to speed up searching,
sorting and grouping. Indexes are created automatically in the process of defining constraints, such
as primary key, foreign key or unique constraints.
An index can be built on the content of columns of any data type except for BLOB and arrays. The
name (identifier) of an index must be unique among all index names.
Key Indexes
When a primary key, foreign key or unique constraint is added to a table or column, an index
with the same name is created automatically, without an explicit directive from the designer.
For example, the PK_COUNTRY index will be created automatically when you execute and
commit the following statement:
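A sketch of such a statement (the column name is illustrative):

ALTER TABLE COUNTRY
  ADD CONSTRAINT PK_COUNTRY
  PRIMARY KEY (ID);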
• Administrators
Unique Indexes
Specifying the keyword UNIQUE in the index creation statement creates an index in which
uniqueness will be enforced throughout the table. The index is referred to as a “unique index”. A
unique index is not a constraint.
Unique indexes cannot contain duplicate key values (or duplicate key value combinations, in the
case of compound, or multi-column, or multi-segment, indexes). Duplicate NULLs are permitted, in
accordance with the SQL standard, in both single-segment and multi-segment indexes.
Partial Indexes
Specifying the WHERE clause in the index creation statement creates a partial index (also knows as
filtered index). A partial index contains only rows that match the search condition of the WHERE.
A partial index definition may include the UNIQUE clause. In this case, every key in the index is
required to be unique. This allows enforcing uniqueness for a subset of table rows.
The optimizer can use a partial index only in the following cases:
• The WHERE clause of the statement includes exactly the same boolean expression as the one
defined for the index;
• The search condition defined for the index contains ORed boolean expressions and one of them
is explicitly included in the WHERE clause of the statement;
• The search condition defined for the index specifies IS NOT NULL and the WHERE clause of the
statement includes an expression on the same field that is known to exclude NULLs.
Index Direction
All indexes in Firebird are uni-directional. An index may be constructed from the lowest value to
the highest (ascending order) or from the highest value to the lowest (descending order). The
keywords ASC[ENDING] and DESC[ENDING] are used to specify the direction of the index. The default
index order is ASC[ENDING]. It is valid to define both an ascending and a descending index on the
same column or key set.
Firebird uses B-tree indexes, which are bidirectional. However, due to technical limitations,
Firebird uses an index in one direction only.
See also Firebird for the Database Expert: Episode 3 - On disk consistency
In creating an index, you can use the COMPUTED BY clause to specify an expression instead of one or
more columns. Computed indexes are used in queries where the condition in a WHERE, ORDER BY or
GROUP BY clause exactly matches the expression in the index definition. The expression in a
computed index may involve several columns in the table.
Limits on Indexes
The maximum length of a key in an index is limited to a quarter of the page size.
The number of indexes that can be accommodated for each table is limited. The actual maximum
for a specific table depends on the page size and the number of columns in the indexes.
The maximum indexed string length is 9 bytes less than the maximum key length. The maximum
indexable string length depends on the page size, the character set, and the collation.
Depending on the collation, the maximum size can be further reduced as case-insensitive and
accent-insensitive collations require more bytes per character in an index. See also Character
Indexes in Chapter Data Types and Subtypes.
Since Firebird 5.0, index creation can be parallelized. Parallelization happens automatically if the
current connection has two or more parallel workers — configured through ParallelWorkers in
firebird.conf or isc_dpb_parallel_workers — and the server has parallel workers available.
2. Creating an index with keys sorted in the descending order for the CHANGE_DATE column in the
SALARY_HISTORY table
3. Creating a multi-segment index for the ORDER_STATUS, PAID columns in the SALES table
4. Creating an index that does not permit duplicate values for the NAME column in the COUNTRY table
SELECT *
FROM PERSONS
WHERE UPPER(NAME) STARTING WITH UPPER('Iv');
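The query above can make use of a computed (expression) index whose expression matches the
condition exactly; a possible definition (the index name is illustrative):

CREATE INDEX IDX_NAME_UPPER ON PERSONS
  COMPUTED BY (UPPER(NAME));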
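The inserts below presuppose a partial unique index on OFFER; a definition consistent with the
error message shown (a sketch, assuming ARCHIVED is a BOOLEAN column):

CREATE UNIQUE INDEX IDX_OFFER_UNIQUE_PRODUCT
  ON OFFER (PRODUCT_ID)
  WHERE NOT ARCHIVED;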
insert into OFFER (PRODUCT_ID, ARCHIVED, PRICE) values (1, false, 18.95);
insert into OFFER (PRODUCT_ID, ARCHIVED, PRICE) values (1, true, 17.95);
insert into OFFER (PRODUCT_ID, ARCHIVED, PRICE) values (1, true, 16.95);
-- Next fails due to second record for PRODUCT_ID=1 and ARCHIVED=false:
insert into OFFER (PRODUCT_ID, ARCHIVED, PRICE) values (1, false, 19.95);
-- Statement failed, SQLSTATE = 23000
-- attempt to store duplicate value (visible to active transactions) in unique
index "IDX_OFFER_UNIQUE_PRODUCT"
-- -Problematic key value is ("PRODUCT_ID" = 1)
See also
ALTER INDEX, DROP INDEX
ALTER INDEX
Available in
DSQL, ESQL
Syntax
ALTER INDEX indexname {ACTIVE | INACTIVE}
Parameter Description
indexname Name (identifier) of the index to be activated or deactivated
The ALTER INDEX statement activates or deactivates an index. There is no facility on this statement
for altering any attributes of the index.
INACTIVE
With the INACTIVE option, the index is switched from the active to inactive state. The effect is
similar to the DROP INDEX statement except that the index definition remains in the database.
Altering a constraint index to the inactive state is not permitted.
An active index can be deactivated if there are no queries prepared using that index; otherwise,
an “object in use” error is returned.
Activating an inactive index is also safe. However, if there are active transactions modifying the
table, the transaction containing the ALTER INDEX statement will fail if it has the NOWAIT attribute.
If the transaction is in WAIT mode, it will wait for completion of concurrent transactions.
On the other side of the coin, if our ALTER INDEX succeeds and starts to rebuild the index at
COMMIT, other transactions modifying that table will fail or wait, according to their WAIT/NO WAIT
attributes. The situation is the same for CREATE INDEX.
ACTIVE
Rebuilds the index (even if already active), and marks it as active.
How is it Useful?
Even if the index is active when ALTER INDEX … ACTIVE is executed, the index
will be rebuilt. Rebuilding indexes can be a useful piece of housekeeping to do,
occasionally, on the indexes of a large table in a database that has frequent
inserts, updates or deletes but is infrequently restored.
• Administrators
Altering the index of a PRIMARY KEY, FOREIGN KEY or UNIQUE constraint to INACTIVE is not permitted.
However, ALTER INDEX … ACTIVE works just as well with constraint indexes as it does with others, as
an index rebuilding tool.
2. Switching the IDX_UPDATER index back to the active state and rebuilding it
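A statement matching this description (the index name comes from the example text):

ALTER INDEX IDX_UPDATER ACTIVE;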
See also
CREATE INDEX, DROP INDEX, SET STATISTICS
DROP INDEX
Drops an index
Available in
DSQL, ESQL
Syntax
DROP INDEX indexname
Parameter Description
indexname Name (identifier) of the index to be dropped
The DROP INDEX statement drops (deletes) the named index from the database.
A constraint index cannot be dropped using DROP INDEX. Constraint indexes are dropped during the
process of executing the command ALTER TABLE … DROP CONSTRAINT ….
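For example, dropping an index (the index name is illustrative):

DROP INDEX IDX_UPDATER;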
• Administrators
See also
CREATE INDEX, ALTER INDEX
SET STATISTICS
Available in
DSQL, ESQL
Syntax
SET STATISTICS INDEX indexname
Parameter Description
indexname Name (identifier) of the index whose selectivity is to be recalculated
The SET STATISTICS statement recalculates the selectivity of the specified index.
• Administrators
Index Selectivity
The selectivity of an index is the result of evaluating the number of rows that can be selected in a
search on every index value. A unique index has the maximum selectivity because it is impossible
to select more than one row for each value of an index key if it is used. Keeping the selectivity of an
index up to date is important for the optimizer’s choices in seeking the best query plan.
Index statistics in Firebird are not automatically recalculated in response to large batches of
inserts, updates or deletions. It may be beneficial to recalculate the selectivity of an index after such
operations because the selectivity tends to become outdated.
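For example, to recalculate the selectivity of a single index after a bulk load (the index name is
illustrative):

SET STATISTICS INDEX IDX_UPDATER;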
The statements CREATE INDEX and ALTER INDEX ACTIVE both store index statistics that correspond to
the contents of the newly-[re]built index.
SET STATISTICS can be performed under concurrent load without risk of corruption. However,
under concurrent load, the newly calculated statistics could become outdated as soon as SET
STATISTICS finishes.
See also
CREATE INDEX, ALTER INDEX
5.6. VIEW
A view is a virtual table that is a stored and named SELECT query for retrieving data of any
complexity. Data can be retrieved from one or more tables, from other views and also from
selectable stored procedures.
Unlike regular tables in relational databases, a view is not an independent data set stored in the
database. The result is dynamically created as a data set when the view is selected.
The metadata of a view are available to the process that generates the binary code for stored
procedures and triggers, as though they were concrete tables storing persistent data.
CREATE VIEW
Creates a view
Available in
DSQL
Syntax
Parameter Description
colname View column name. Duplicate column names are not allowed.
The CREATE VIEW statement creates a new view. The identifier (name) of a view must be unique
among the names of all views, tables, and stored procedures in the database.
The name of the new view can be followed by the list of column names that should be returned to
the caller when the view is invoked. Names in the list do not have to be related to the names of the
columns in the base tables from which they derive.
If the view column list is omitted, the system will use the column names and/or aliases from the
SELECT statement. If duplicate names or non-aliased expression-derived columns make it impossible
to obtain a valid list, creation of the view fails with an error.
The number of columns in the view’s list must match the number of columns in the selection list of
the underlying SELECT statement in the view definition.
Additional Points
• If the full list of columns is specified, it makes no sense to specify aliases in the SELECT statement
because the names in the column list will override them
• The column list is optional if all the columns in the SELECT are explicitly named and are unique
in the selection list
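As an illustration, a sketch using an explicit column list (the view name is hypothetical; EMPLOYEE is assumed to resemble the employee example database):

CREATE VIEW V_STAFF (CODE, FULL_NAME) AS
SELECT EMP_NO, FIRST_NAME || ' ' || LAST_NAME
FROM EMPLOYEE;

The concatenation expression carries no alias, so without the explicit column list this view could not be created.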
Updatable Views
A view can be updatable or read-only. If a view is updatable, the data retrieved when this view is
called can be changed by the DML statements INSERT, UPDATE, DELETE, UPDATE OR INSERT or MERGE.
Changes made in an updatable view are applied to the underlying table(s).
A read-only view can be made updatable with the use of triggers. Once triggers have been defined
on a view, changes posted to it will never be written automatically to the underlying table, even if
the view was updatable to begin with. It is the responsibility of the programmer to ensure that the
triggers update (or delete from, or insert into) the base tables as needed.
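A minimal sketch of such a trigger (all names hypothetical): a read-only view made insertable by redirecting the insert to its base table.

CREATE TRIGGER TR_V_ORDERS_BI FOR V_ORDERS
ACTIVE BEFORE INSERT
AS
BEGIN
  -- Redirect the insert on the view to the underlying table
  INSERT INTO ORDERS (ID, AMOUNT)
  VALUES (NEW.ID, NEW.AMOUNT);
END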
A view will be automatically updatable if all the following conditions are met:
• the SELECT statement queries only one table or one updatable view
• each base table (or base view) column not present in the view definition meets one of the
following conditions:
◦ it is nullable
• the SELECT statement contains no fields derived from subqueries or other expressions
• the SELECT statement does not contain fields defined through aggregate functions (MIN, MAX, AVG,
SUM, COUNT, LIST, etc.), statistical functions (CORR, COVAR_POP, COVAR_SAMP, etc.), linear regression
functions (REGR_AVGX, REGR_AVGY, etc.) or any type of window function
• the SELECT statement does not include the keyword DISTINCT or row-restrictive keywords such as
ROWS, FIRST, SKIP, OFFSET or FETCH
To report the right values in RETURNING, the trigger will need to explicitly assign
those values to the columns of the NEW record.
The optional WITH CHECK OPTION clause requires an updatable view to check whether new or
updated data meet the condition specified in the WHERE clause of the SELECT statement. Every attempt
to insert a new record or to update an existing one is checked to determine whether the new or
updated record would meet the WHERE criteria. If it fails the check, the operation is not performed
and an error is raised.
WITH CHECK OPTION can be specified only in a CREATE VIEW statement in which a WHERE clause is
present to restrict the output of the main SELECT statement. An error message is returned otherwise.
Please note:
If WITH CHECK OPTION is used, the engine checks the input against the WHERE clause
before passing anything to the base relation. Therefore, if the check on the input
fails, any default clauses or triggers on the base relation that might have been
designed to correct the input will never come into action.
Furthermore, view fields omitted from the INSERT statement are passed as NULLs to
the base relation, regardless of their presence or absence in the WHERE clause. As a
result, base table defaults defined on such fields will not be applied. Triggers, on
the other hand, will fire and work as expected.
For views that do not have WITH CHECK OPTION, fields omitted from the INSERT
statement are not passed to the base relation at all, so any defaults will be applied.
• Administrators
To create a view, a non-admin user also needs at least SELECT access to the underlying table(s)
and/or view(s), and the EXECUTE privilege on any selectable stored procedures involved.
To enable insertions, updates and deletions through the view, the creator/owner must also possess
the corresponding INSERT, UPDATE and DELETE rights on the underlying object(s).
Granting other users privileges on the view is only possible if the view owner has these privileges
on the underlying objects WITH GRANT OPTION. This will always be the case if the view owner is also
the owner of the underlying objects.
1. Creating view returning the JOB_CODE and JOB_TITLE columns only for those jobs where
MAX_SALARY is less than $15,000.
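The SQL for this example did not survive extraction; a plausible sketch, assuming the JOB table from the employee example database and a hypothetical view name:

CREATE VIEW ENTRY_LEVEL_JOBS (JOB_CODE, JOB_TITLE) AS
SELECT JOB_CODE, JOB_TITLE
FROM JOB
WHERE MAX_SALARY < 15000;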
2. Creating a view returning the JOB_CODE and JOB_TITLE columns only for those jobs where
MAX_SALARY is less than $15,000. Whenever a new record is inserted or an existing record is
updated, the MAX_SALARY < 15000 condition will be checked. If the condition is not true, the
insert/update operation will be rejected.
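A sketch of such a view with the check added (view name hypothetical; the JOB table is assumed from the employee example database):

CREATE VIEW ENTRY_LEVEL_JOBS (JOB_CODE, JOB_TITLE) AS
SELECT JOB_CODE, JOB_TITLE
FROM JOB
WHERE MAX_SALARY < 15000
WITH CHECK OPTION;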
3. Creating a view with the full list of columns (the top of this example was lost in extraction and is reconstructed here):

CREATE VIEW PRICE_WITH_MARKUP (CODE_PRICE, COST, COST_WITH_MARKUP) AS
SELECT
  CODE_PRICE,
  COST,
  COST * 1.1
FROM PRICE;
4. Creating a view with the help of aliases for fields in the SELECT statement (the same result as in
Example 3).
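That example might look as follows (the alias name COST_WITH_MARKUP is an assumption):

CREATE VIEW PRICE_WITH_MARKUP AS
SELECT
  CODE_PRICE,
  COST,
  COST * 1.1 AS COST_WITH_MARKUP
FROM PRICE;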
See also
ALTER VIEW, CREATE OR ALTER VIEW, RECREATE VIEW, DROP VIEW
Alters a view
Available in
DSQL
Syntax
Parameter Description
colname View column name. Duplicate column names are not allowed.
Use the ALTER VIEW statement for changing the definition of an existing view. Privileges for views
remain intact and dependencies are not affected.
The syntax of the ALTER VIEW statement corresponds with that of CREATE VIEW.
Be careful when you change the number of columns in a view. Existing application
code and PSQL modules that access the view may become invalid. For information
on how to detect this kind of problem in stored procedures and triggers, see The
RDB$VALID_BLR Field in the Appendix.
• Administrators
See also
CREATE VIEW, CREATE OR ALTER VIEW, RECREATE VIEW
Available in
DSQL
Syntax
Parameter Description
colname View column name. Duplicate column names are not allowed.
Use the CREATE OR ALTER VIEW statement for changing the definition of an existing view or creating it
if it does not exist. Privileges for an existing view remain intact and dependencies are not affected.
The syntax of the CREATE OR ALTER VIEW statement corresponds with that of CREATE VIEW.
See also
CREATE VIEW, ALTER VIEW, RECREATE VIEW
Drops a view
Available in
DSQL
Syntax
Parameter Description
The DROP VIEW statement drops (deletes) an existing view. The statement will fail if the view has
dependencies.
• Administrators
Example
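The example itself was lost in extraction; a minimal sketch, reusing the PRICE_WITH_MARKUP view from the CREATE VIEW examples:

DROP VIEW PRICE_WITH_MARKUP;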
See also
CREATE VIEW, RECREATE VIEW, CREATE OR ALTER VIEW
Available in
DSQL
Syntax
Parameter Description
colname View column name. Duplicate column names are not allowed.
Creates or recreates a view. If there is a view with this name already, the engine will try to drop it
before creating the new instance. If the existing view cannot be dropped, because of dependencies
or insufficient rights, for example, RECREATE VIEW fails with an error.
Creating the new PRICE_WITH_MARKUP view, or recreating it if it already exists
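A sketch of such a statement (the alias name is an assumption):

RECREATE VIEW PRICE_WITH_MARKUP AS
SELECT
  CODE_PRICE,
  COST,
  COST * 1.1 AS COST_WITH_MARKUP
FROM PRICE;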
See also
CREATE VIEW, DROP VIEW, CREATE OR ALTER VIEW
5.7. TRIGGER
A trigger is a special type of stored procedure that is not called directly; instead, it is executed when
a specified event occurs. A DML trigger is specific to a single relation (table or view) and one phase
in the timing of the event (BEFORE or AFTER). A DML trigger can be specified to execute for one
specific event (insert, update, delete) or for a combination of those events.
1. a “database trigger” can be specified to fire at the start or end of a user session (connection) or a
user transaction.
2. a “DDL trigger” can be specified to fire before or after execution of one or more types of DDL
statements.
Creates a trigger
Available in
DSQL, ESQL
Syntax
<relation_trigger_legacy> ::=
FOR {tablename | viewname}
[ACTIVE | INACTIVE]
{BEFORE | AFTER} <mutation_list>
[POSITION number]
<relation_trigger_sql> ::=
[ACTIVE | INACTIVE]
{BEFORE | AFTER} <mutation_list>
ON {tablename | viewname}
[POSITION number]
<database_trigger> ::=
[ACTIVE | INACTIVE] ON <db_event>
[POSITION number]
<ddl_trigger> ::=
[ACTIVE | INACTIVE]
{BEFORE | AFTER} <ddl_event>
[POSITION number]
<mutation_list> ::=
<mutation> [OR <mutation> [OR <mutation>]]
<db_event> ::=
CONNECT | DISCONNECT
| TRANSACTION {START | COMMIT | ROLLBACK}
<ddl_event> ::=
ANY DDL STATEMENT
| <ddl_event_item> [{OR <ddl_event_item>} ...]
<ddl_event_item> ::=
{CREATE | ALTER | DROP} TABLE
| {CREATE | ALTER | DROP} PROCEDURE
| {CREATE | ALTER | DROP} FUNCTION
| {CREATE | ALTER | DROP} TRIGGER
| {CREATE | ALTER | DROP} EXCEPTION
| {CREATE | ALTER | DROP} VIEW
| {CREATE | ALTER | DROP} DOMAIN
<psql_trigger> ::=
[SQL SECURITY {INVOKER | DEFINER}]
<psql-module-body>
<psql-module-body> ::=
!! See Syntax of Module Body !!
<external-module-body> ::=
!! See Syntax of Module Body !!
Parameter Description
tablename Name of the table with which the relation trigger is associated
viewname Name of the view with which the relation trigger is associated
The CREATE TRIGGER statement is used for creating a new trigger. A trigger can be created either for a
relation (table | view) event (or a combination of relation events), for a database event, or for a DDL
event.
CREATE TRIGGER, along with its associates ALTER TRIGGER, CREATE OR ALTER TRIGGER and RECREATE
TRIGGER, is a compound statement, consisting of a header and a body. The header specifies the name
of the trigger, the name of the relation (for a DML trigger), the phase of the trigger, the event(s) it
applies to, and the position to determine an order between triggers.
The trigger body consists of optional declarations of local variables and named cursors followed by
one or more statements, or blocks of statements, all enclosed in an outer block that begins with the
keyword BEGIN and ends with the keyword END. Declarations and embedded statements are
terminated with semicolons (‘;’).
The name of the trigger must be unique among all trigger names.
Statement Terminators
Some SQL statement editors — specifically the isql utility that comes with Firebird, and possibly
some third-party editors — employ an internal convention that requires all statements to be
terminated with a semicolon. This creates a conflict with PSQL syntax when coding in these
environments. If you are unacquainted with this problem and its solution, please study the details
in the PSQL chapter in the section entitled Switching the Terminator in isql.
SQL Security
The SQL SECURITY clause specifies the security context for executing other routines or inserting into
other tables.
By default, a trigger applies the SQL Security property defined on its table (or — if the table doesn’t
have the SQL Security property set — the database default), but it can be overridden by specifying it
explicitly.
If the SQL Security property is changed for the table, triggers that do not have an explicit SQL
Security property will not see the effect of the change until the next time the trigger is loaded into
the metadata cache.
The trigger body is either a PSQL body, or an external UDR module body.
DML — or “relation” — triggers are executed at the row (record) level, every time a row is changed.
A trigger can be either ACTIVE or INACTIVE. Only active triggers are executed. Triggers are created
ACTIVE by default.
• Administrators
• Users with — for a table — the ALTER ANY TABLE, or — for a view — ALTER ANY VIEW privilege
Forms of Declaration
A relation trigger specifies — among other things — a phase and one or more events.
Phase
Phase concerns the timing of the trigger with regard to the change-of-state event in the row of data:
• A BEFORE trigger is fired before the specified database operation (insert, update or delete) is
carried out
• An AFTER trigger is fired after the database operation has been completed
Row Events
A relation trigger definition specifies at least one of the DML operations INSERT, UPDATE and DELETE,
to indicate one or more events on which the trigger should fire. If multiple operations are specified,
they must be separated by the keyword OR. No operation may occur more than once.
Within the statement block, the Boolean context variables INSERTING, UPDATING and DELETING can be
used to test which operation is currently executing.
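For instance, a trigger-body fragment for a multi-event trigger (table and column names are hypothetical):

IF (INSERTING OR UPDATING) THEN
  NEW.CHANGED_AT = CURRENT_TIMESTAMP;
IF (DELETING) THEN
  INSERT INTO AUDIT_LOG (TABLE_NAME, ACTION)
  VALUES ('CUSTOMER', 'DELETE');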
The keyword POSITION allows an optional execution order (“firing order”) to be specified for a series
of triggers that have the same phase and event as their target. The default position is 0. If multiple
triggers have the same position and phase, those triggers will be executed in an undefined order,
while respecting the total order by position and phase.
1. Creating a trigger in the “legacy” form, firing before the event of inserting a new record into the
CUSTOMER table occurs.
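A sketch of what this trigger might look like (the generator name is an assumption; the CUSTOMER table is assumed from the employee example database):

CREATE TRIGGER SET_CUST_NO FOR CUSTOMER
ACTIVE BEFORE INSERT POSITION 0
AS
BEGIN
  IF (NEW.CUST_NO IS NULL) THEN
    NEW.CUST_NO = GEN_ID(CUST_NO_GEN, 1);
END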
2. Creating a trigger firing before the event of inserting a new record into the CUSTOMER table in the
SQL standard-compliant form.
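A sketch in the SQL standard-compliant form, where ON replaces FOR and the relation name follows the event (names are assumptions; CUSTOMER is assumed from the employee example database):

CREATE TRIGGER SET_CUST_NO
ACTIVE BEFORE INSERT ON CUSTOMER POSITION 0
AS
BEGIN
  IF (NEW.CUST_NO IS NULL) THEN
    NEW.CUST_NO = GEN_ID(CUST_NO_GEN, 1);
END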
3. Creating a trigger that will fire after either inserting, updating or deleting a record in the
CUSTOMER table.
4. With DEFINER set for trigger tr_ins, user US needs only the INSERT privilege on tr. If it were set for
INVOKER, either the user or the trigger would also need the INSERT privilege on table t.
commit;
The result would be the same if SQL SECURITY DEFINER were specified for table TR:
commit;
Database Triggers
Triggers can be defined to fire upon “database events”; a mixture of events that act across the scope
of a session (connection), and events that act across the scope of an individual transaction:
• CONNECT
• DISCONNECT
• TRANSACTION START
• TRANSACTION COMMIT
• TRANSACTION ROLLBACK
• Administrators
CONNECT and DISCONNECT triggers are executed in a transaction created specifically for this purpose.
This transaction uses the default isolation level, i.e. snapshot (concurrency), write and wait. If all
goes well, the transaction is committed. Uncaught exceptions cause the transaction to roll back, and
• for a CONNECT trigger, the connection is then broken and the exception is returned to the client
• for a DISCONNECT trigger, exceptions are not reported. The connection is broken as intended
TRANSACTION triggers are executed within the transaction whose start, commit or rollback evokes
them. The action taken after an uncaught exception depends on the event:
• In a TRANSACTION START trigger, the exception is reported to the client and the transaction is
rolled back
• In a TRANSACTION COMMIT trigger, the exception is reported, the trigger’s actions so far are undone
and the commit is cancelled
• In a TRANSACTION ROLLBACK trigger, the exception is not reported and the transaction is rolled
back as intended.
Traps
Some Firebird command-line tools have been supplied with switches that an administrator can use
to suppress the automatic firing of database triggers. So far, they are:
gbak -nodbtriggers
isql -nodbtriggers
nbackup -T
Two-phase Commit
In a two-phase commit scenario, TRANSACTION COMMIT triggers fire in the prepare phase, not at the
commit.
Some Caveats
1. The use of the IN AUTONOMOUS TRANSACTION DO statement in the database event triggers related to
transactions (TRANSACTION START, TRANSACTION ROLLBACK, TRANSACTION COMMIT) may cause the
autonomous transaction to enter an infinite loop
2. The DISCONNECT and TRANSACTION ROLLBACK event triggers will not be executed when clients are
disconnected via monitoring tables (DELETE FROM MON$ATTACHMENTS)
1. Creating a trigger for the event of connecting to the database that logs users logging into the
system. The trigger is created as inactive.
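A sketch of such a trigger, assuming a LOG_CONNECT table and SEQ_LOG_CONNECT sequence like those referenced in the ALTER TRIGGER example later in this chapter:

CREATE TRIGGER TR_LOG_CONNECT
INACTIVE ON CONNECT POSITION 0
AS
BEGIN
  INSERT INTO LOG_CONNECT (ID, USERNAME, ATIME)
  VALUES (NEXT VALUE FOR SEQ_LOG_CONNECT, CURRENT_USER, CURRENT_TIMESTAMP);
END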
2. Creating a trigger for the event of connecting to the database that does not permit any users,
except for SYSDBA, to log in during off hours.
CREATE EXCEPTION E_INCORRECT_WORKTIME 'The working day has not started yet.';
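A sketch of the trigger that might accompany this exception (the trigger name and working hours are assumptions):

CREATE TRIGGER TR_LIMIT_WORKTIME
ACTIVE ON CONNECT POSITION 1
AS
BEGIN
  IF ((CURRENT_USER <> 'SYSDBA') AND
      (CURRENT_TIME NOT BETWEEN TIME '09:00' AND TIME '17:00')) THEN
    EXCEPTION E_INCORRECT_WORKTIME;
END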
DDL Triggers
DDL triggers allow restrictions to be placed on users who attempt to create, alter or drop a DDL
object. Their other purpose is to keep a metadata change log.
DDL triggers fire on specified metadata change events in a specified phase. BEFORE triggers run
before changes to the system tables; AFTER triggers run after changes to the system tables.
• Administrators
A DDL trigger is a type of database trigger. See Suppressing Database Triggers for details on how to suppress them.
1. Here is how you might use a DDL trigger to enforce a consistent naming scheme, in this case,
stored procedure names should begin with the prefix “SP_”:
set term !;
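The trigger body was lost in extraction; a sketch consistent with the error output below, using the DDL_TRIGGER context namespace to obtain the name of the object being created:

CREATE EXCEPTION E_INVALID_SP_NAME 'Invalid SP name (should start with SP_)'!

CREATE TRIGGER TRIG_DDL_SP BEFORE CREATE PROCEDURE
AS
BEGIN
  IF (RDB$GET_CONTEXT('DDL_TRIGGER', 'OBJECT_NAME') NOT STARTING WITH 'SP_') THEN
    EXCEPTION E_INVALID_SP_NAME;
END!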
Test
-- The last command raises this exception and procedure TEST is not created
-- Statement failed, SQLSTATE = 42000
-- exception 1
-- -E_INVALID_SP_NAME
-- -Invalid SP name (should start with SP_)
-- -At trigger 'TRIG_DDL_SP' line: 4, col: 5
set term ;!
2. Implement custom DDL security, in this case restricting the running of DDL commands to
certain users:
set term !;
-- Trigger header reconstructed (lost in extraction); the event
-- ANY DDL STATEMENT is an assumption consistent with the description.
create trigger trig_ddl before any ddl statement
as
begin
  if (current_user <> 'SUPER_USER') then
    exception e_access_denied;
end!
Test
-- The last command raises this exception and procedure SP_TEST is not created
-- Statement failed, SQLSTATE = 42000
-- exception 1
-- -E_ACCESS_DENIED
-- -Access denied
-- -At trigger 'TRIG_DDL' line: 4, col: 5
set term ;!
Firebird has privileges for executing DDL statements, so writing a DDL trigger
for this purpose should be a last resort, used only when the same effect cannot
be achieved using privileges.
set term !;
The above trigger will fire for this DDL command. It’s a good idea to use -nodbtriggers when
working with them!
commit!
set term ;!
Test
commit;
)
====================================================
See also
ALTER TRIGGER, CREATE OR ALTER TRIGGER, RECREATE TRIGGER, DROP TRIGGER, DDL Triggers in Chapter
Procedural SQL (PSQL) Statements
Alters a trigger
Available in
DSQL, ESQL
Syntax
<psql_trigger> ::=
[<sql_security>]
[<psql-module-body>]
<sql_security> ::=
SQL SECURITY {INVOKER | DEFINER}
| DROP SQL SECURITY
The ALTER TRIGGER statement only allows certain changes to the header and body of a trigger.
Reminders
The BEFORE keyword directs that the trigger be executed before the associated
event occurs; the AFTER keyword directs that it be executed after the event.
More than one DML event — INSERT, UPDATE, DELETE — can be covered in a single
trigger. The events should be separated with the keyword OR. No event should be
mentioned more than once.
• Administrators
• Users with — for a table — the ALTER ANY TABLE, or — for a view — ALTER ANY VIEW privilege
• Administrators
3. Switching the TR_CUST_LOG trigger to the inactive status and modifying the list of events.
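A sketch of what this might look like (the new event list is an assumption):

ALTER TRIGGER TR_CUST_LOG
INACTIVE AFTER INSERT OR UPDATE;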
4. Switching the tr_log_connect trigger to the active status, changing its position and body.
ALTER TRIGGER TR_LOG_CONNECT
ACTIVE POSITION 1
AS
BEGIN
INSERT INTO LOG_CONNECT (ID,
USERNAME,
ROLENAME,
ATIME)
VALUES (NEXT VALUE FOR SEQ_LOG_CONNECT,
CURRENT_USER,
CURRENT_ROLE,
CURRENT_TIMESTAMP);
END
See also
CREATE TRIGGER, CREATE OR ALTER TRIGGER, RECREATE TRIGGER, DROP TRIGGER
Available in
DSQL
Syntax
The CREATE OR ALTER TRIGGER statement creates a new trigger if it does not exist; otherwise it alters
and recompiles it with the privileges intact and dependencies unaffected.
See also
CREATE TRIGGER, ALTER TRIGGER, RECREATE TRIGGER
Drops a trigger
Available in
DSQL, ESQL
Syntax
Parameter Description
• Administrators
• Users with — for a table — the ALTER ANY TABLE, or — for a view — ALTER ANY VIEW privilege
• Administrators
See also
CREATE TRIGGER, RECREATE TRIGGER
Available in
DSQL
Syntax
The RECREATE TRIGGER statement creates a new trigger if no trigger with the specified name exists;
otherwise the RECREATE TRIGGER statement tries to drop the existing trigger and create a new one.
The operation will fail on COMMIT if the trigger is in use.
Be aware that dependency errors are not detected until the COMMIT phase of this
operation.
See also
CREATE TRIGGER, DROP TRIGGER, CREATE OR ALTER TRIGGER
5.8. PROCEDURE
A stored procedure is a software module that can be called from a client, another procedure,
function, executable block or trigger. Stored procedures are written in procedural SQL (PSQL) or
defined using a UDR (User-Defined Routine). Most SQL statements are available in PSQL as well,
sometimes with limitations or extensions. Notable limitations are the prohibition on DDL and
transaction control statements in PSQL.
Available in
DSQL, ESQL
Syntax
<type> ::=
<datatype>
| [TYPE OF] domain
| TYPE OF COLUMN rel.col
<domain_or_non_array_type> ::=
!! See Scalar Data Types Syntax !!
<psql_procedure> ::=
[SQL SECURITY {INVOKER | DEFINER}]
<psql-module-body>
<psql-module-body> ::=
!! See Syntax of Module Body !!
<external-module-body> ::=
!! See Syntax of Module Body !!
Parameter Description
literal A literal value that is assignment-compatible with the data type of the
parameter
context_var Any context variable whose type is compatible with the data type of the
parameter
collation Collation
The CREATE PROCEDURE statement creates a new stored procedure. The name of the procedure must
be unique among the names of all stored procedures, tables, and views in the database.
CREATE PROCEDURE is a compound statement, consisting of a header and a body. The header specifies
the name of the procedure and declares input parameters and the output parameters, if any, that
are to be returned by the procedure.
The procedure body consists of declarations for any local variables, named cursors, and
subroutines that will be used by the procedure, followed by one or more statements, or blocks of
statements, all enclosed in an outer block that begins with the keyword BEGIN and ends with the
keyword END. Declarations and embedded statements are terminated with semicolons (‘;’).
Statement Terminators
Some SQL statement editors — specifically the isql utility that comes with Firebird, and possibly
some third-party editors — employ an internal convention that requires all statements to be
terminated with a semicolon. This creates a conflict with PSQL syntax when coding in these
environments. If you are unacquainted with this problem and its solution, please study the details
in the PSQL chapter in the section entitled Switching the Terminator in isql.
Parameters
Each parameter has a data type. The NOT NULL constraint can also be specified for any parameter, to
prevent NULL being passed or assigned to it.
A collation can be specified for string-type parameters, using the COLLATE clause.
Input Parameters
Input parameters are presented as a parenthesized list following the name of the procedure. They
are passed by value into the procedure, so any changes inside the procedure have no effect on the
parameters in the caller. Input parameters may have default values. Parameters with default
values specified must be added at the end of the list of parameters.
Output Parameters
The optional RETURNS clause is for specifying a parenthesised list of output parameters for the
stored procedure.
SQL Security
The SQL SECURITY clause specifies the security context for executing other routines or inserting into
other tables. When SQL Security is not specified, the default value of the database is applied at
runtime.
The SQL SECURITY clause can only be specified for PSQL procedures, and is not valid for procedures
defined in a package.
The optional declarations section, located at the start of the body of the procedure definition,
defines variables (including cursors) and subroutines local to the procedure. Local variable
declarations follow the same rules as parameters regarding specification of the data type. See
details in the PSQL chapter for DECLARE VARIABLE, DECLARE CURSOR, DECLARE FUNCTION, and DECLARE
PROCEDURE.
A stored procedure can also be located in an external module. In this case, instead of a procedure
body, the CREATE PROCEDURE specifies the location of the procedure in the external module using the
EXTERNAL clause. The optional NAME clause specifies the name of the external module, the name of the
procedure inside the module, and — optionally — user-defined information. The required ENGINE
clause specifies the name of the UDR engine that handles communication between Firebird and the
external module. The optional AS clause accepts a string literal “body”, which can be used by the
engine or module for various purposes.
• Administrators
The user executing the CREATE PROCEDURE statement becomes the owner of the procedure.
Examples
1. Creating a stored procedure that inserts a record into the BREED table and returns the code of the
inserted record:
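A sketch (the BREED table's columns are not shown in the text, so NAME and ID are assumptions):

SET TERM ^;
CREATE PROCEDURE ADD_BREED (NAME VARCHAR(30))
RETURNS (ID INTEGER)
AS
BEGIN
  INSERT INTO BREED (NAME)
  VALUES (:NAME)
  RETURNING ID INTO :ID;
END^
SET TERM ;^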
2. Creating a selectable stored procedure that generates data for mailing labels (from
employee.fdb):
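An abbreviated sketch of such a selectable procedure (the full MAIL_LABEL example in employee.fdb is considerably longer; the column choices here are simplified):

SET TERM ^;
CREATE PROCEDURE MAIL_LABEL (CUST_NO INTEGER)
RETURNS (LINE1 CHAR(40), LINE2 CHAR(40))
AS
BEGIN
  FOR SELECT CONTACT_FIRST || ' ' || CONTACT_LAST, ADDRESS_LINE1
      FROM CUSTOMER
      WHERE CUST_NO = :CUST_NO
      INTO :LINE1, :LINE2
  DO
    SUSPEND;
END^
SET TERM ;^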
3. With DEFINER set for procedure p, user US needs only the EXECUTE privilege on p. If it were set for
INVOKER, either the user or the procedure would also need the INSERT privilege on table t.
set term ^;
create procedure p (i integer) SQL SECURITY DEFINER
as
begin
insert into t values (:i);
end^
set term ;^
See also
CREATE OR ALTER PROCEDURE, ALTER PROCEDURE, RECREATE PROCEDURE, DROP PROCEDURE
Available in
DSQL, ESQL
Syntax
The ALTER PROCEDURE statement allows the following changes to a stored procedure definition:
• local variables
After ALTER PROCEDURE executes, existing privileges remain intact and dependencies are not affected.
Altering a procedure without specifying the SQL SECURITY clause will remove the SQL Security
property if currently set for this procedure. This means the behaviour will revert to the database
default.
Take care about changing the number and type of input and output parameters in
stored procedures. Existing application code and procedures and triggers that call
it could become invalid because the new description of the parameters is
incompatible with the old calling format. For information on how to troubleshoot
such a situation, see the article The RDB$VALID_BLR Field in the Appendix.
• Administrators
-- Header partially reconstructed (lost in extraction); EMP_NO's type
-- SMALLINT is an assumption based on the employee example database.
ALTER PROCEDURE GET_EMP_PROJ (
  EMP_NO SMALLINT)
RETURNS (
  PROJ_ID VARCHAR(20))
AS
BEGIN
FOR SELECT
PROJ_ID
FROM
EMPLOYEE_PROJECT
WHERE
EMP_NO = :emp_no
INTO :proj_id
DO
SUSPEND;
END
See also
CREATE PROCEDURE, CREATE OR ALTER PROCEDURE, RECREATE PROCEDURE, DROP PROCEDURE
Available in
DSQL
Syntax
The CREATE OR ALTER PROCEDURE statement creates a new stored procedure or alters an existing one.
If the stored procedure does not exist, it will be created by invoking a CREATE PROCEDURE statement
transparently. If the procedure already exists, it will be altered and compiled without affecting its
existing privileges and dependencies.
-- Top of the statement reconstructed (lost in extraction); parameter
-- types are assumptions based on the employee example database.
CREATE OR ALTER PROCEDURE GET_EMP_PROJ (
  EMP_NO SMALLINT)
RETURNS (
  PROJ_ID VARCHAR(20))
AS
BEGIN
  FOR SELECT
    PROJ_ID
  FROM
EMPLOYEE_PROJECT
WHERE
EMP_NO = :emp_no
INTO :proj_id
DO
SUSPEND;
END
See also
CREATE PROCEDURE, ALTER PROCEDURE, RECREATE PROCEDURE
Available in
DSQL, ESQL
Syntax
Parameter Description
The DROP PROCEDURE statement deletes an existing stored procedure. If the stored procedure has any
dependencies, the attempt to delete it will fail and raise an error.
• Administrators
See also
CREATE PROCEDURE, RECREATE PROCEDURE
Available in
DSQL
Syntax
The RECREATE PROCEDURE statement creates a new stored procedure or recreates an existing one. If a
procedure with this name already exists, the engine will try to drop it and create a new one.
Recreating an existing procedure will fail at the COMMIT request if the procedure has dependencies.
Be aware that dependency errors are not detected until the COMMIT phase of this
operation.
After a procedure is successfully recreated, the privileges to execute the stored procedure and the
privileges of the stored procedure itself are dropped.
Creating the new GET_EMP_PROJ stored procedure or recreating the existing GET_EMP_PROJ stored procedure.
See also
CREATE PROCEDURE, DROP PROCEDURE, CREATE OR ALTER PROCEDURE
5.9. FUNCTION
A stored function is a user-defined function stored in the metadata of a database, and running on
the server. Stored functions can be called by stored procedures, stored functions (including the
function itself), triggers and DSQL. When a stored function calls itself, such a stored function is
called a recursive function.
Unlike stored procedures, stored functions always return a single scalar value. To return a value
from a stored function, use the RETURN statement, which immediately ends the function.
See also
EXTERNAL FUNCTION
Available in
DSQL
Syntax
<domain_or_non_array_type> ::=
!! See Scalar Data Types Syntax !!
<psql_function> ::=
[SQL SECURITY {INVOKER | DEFINER}]
<psql-module-body>
<psql-module-body> ::=
!! See Syntax of Module Body !!
<external-module-body> ::=
!! See Syntax of Module Body !!
Parameter Description
collation Collation
literal A literal value that is assignment-compatible with the data type of the
parameter
context-var Any context variable whose type is compatible with the data type of the
parameter
paramname The name of an input parameter of the function. The maximum length is
63 characters. The name of the parameter must be unique among input
parameters of the function and its local variables.
The CREATE FUNCTION statement creates a new stored function. The stored function name must be
unique among the names of all stored and external (legacy) functions, excluding sub-functions or
functions in packages. For sub-functions or functions in packages, the name must be unique within
its module (package, stored procedure, stored function, trigger).
It is advisable to not reuse function names between global stored functions and
stored functions in packages, although this is legal. At the moment, it is not
possible to call a function or procedure from the global namespace from inside a
package, if that package defines a function or procedure with the same name. In
that situation, the function or procedure of the package will be called.
CREATE FUNCTION is a compound statement with a header and a body. The header defines the name
of the stored function, and declares input parameters and return type.
The function body consists of optional declarations of local variables, named cursors, and
subroutines (sub-functions and sub-procedures), and one or more statements or statement blocks,
enclosed in an outer block that starts with the keyword BEGIN and ends with the keyword END.
Declarations and statements inside the function body must be terminated with a semicolon (‘;’).
Statement Terminators
Some SQL statement editors — specifically the isql utility that comes with Firebird, and possibly
some third-party editors — employ an internal convention that requires all statements to be
terminated with a semicolon. This creates a conflict with PSQL syntax when coding in these
environments. If you are unacquainted with this problem and its solution, please study the details
in the PSQL chapter in the section entitled Switching the Terminator in isql.
Parameters
A collation can be specified for string-type parameters, using the COLLATE clause.
198
Chapter 5. Data Definition (DDL) Statements
Input Parameters
Input parameters are presented as a parenthesized list following the name of the function. They
are passed by value into the function, so any changes inside the function have no effect on the
parameters in the caller. The NOT NULL constraint can also be specified for any input parameter,
to prevent NULL being passed or assigned to it. Input parameters may have default values.
Parameters with default values specified must be added at the end of the list of parameters.
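For illustration, a sketch of a function with a NOT NULL constraint and a default value on its input parameters (the exact definition of the ADD_INT function used in later examples is an assumption):

create function add_int (a int not null, b int = 0)
returns int
as
begin
  -- b falls back to 0 when the caller omits it
  return a + b;
end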
Output Parameter
The RETURNS clause specifies the return type of the stored function. If a function returns a string
value, then it is possible to specify the collation using the COLLATE clause. As a return type, you
can specify a data type, a domain, the type of a domain (using TYPE OF), or the type of a column
of a table or view (using TYPE OF COLUMN).
Deterministic functions
The optional DETERMINISTIC clause indicates that the function is deterministic. Deterministic
functions always return the same result for the same set of inputs. Non-deterministic functions can
return different results for each invocation, even for the same set of inputs. If a function is specified
as deterministic, then such a function might not be called again if it has already been called once
with the given set of inputs, and instead takes the result from a metadata cache.
An example of a function incorrectly marked as deterministic — rand() is not deterministic, so the cached result may be reused within a query:

create function fn_t()
returns double precision deterministic
as
begin
  return rand();
end

with t (n) as (
  select 1 from rdb$database
  union all
  select n + 1 from t where n < 3
)
select n, fn_t() from t;
SQL Security
The SQL SECURITY clause specifies the security context for executing other routines or inserting into
other tables. When SQL Security is not specified, the default value of the database is applied at
runtime.
The SQL SECURITY clause can only be specified for PSQL functions, and is not valid for functions
defined in a package.
The optional declarations section, located at the start of the body of the function definition, defines
variables (including cursors) and subroutines local to the function. Local variable declarations
follow the same rules as parameters regarding specification of the data type. See details in the PSQL
chapter for DECLARE VARIABLE, DECLARE CURSOR, DECLARE FUNCTION, and DECLARE PROCEDURE.
Function Body
The header section is followed by the function body, consisting of one or more PSQL statements
enclosed between the outer keywords BEGIN and END. Multiple BEGIN … END blocks of terminated
statements may be embedded inside the function body.
A stored function can also be located in an external module. In this case, instead of a function body,
the CREATE FUNCTION specifies the location of the function in the external module using the EXTERNAL
clause. The optional NAME clause specifies the name of the external module, the name of the function
inside the module, and — optionally — user-defined information. The required ENGINE clause
specifies the name of the UDR engine that handles communication between Firebird and the
external module. The optional AS clause accepts a string literal “body”, which can be used by the
engine or module for various purposes.
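As a sketch, an external UDR function declaration might look like this (the module name, function name and entry point here are hypothetical):

create function mul (a int, b int) returns int
  external name 'mymodule!mul'
  engine udr;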
External UDR (User Defined Routine) functions created using CREATE FUNCTION …
EXTERNAL … should not be confused with legacy UDFs (User Defined Functions)
declared using DECLARE EXTERNAL FUNCTION.
UDFs are deprecated, a legacy from previous Firebird versions. Their capabilities are significantly inferior to those of the new type of external UDR functions.
• Administrators
The user who created the stored function becomes its owner.
Calling in a select:
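A sketch, assuming an ADD_INT(a, b) function exists:

select add_int(5, 10) from rdb$database;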
Call inside PSQL code, the second optional parameter is not specified:
MY_VAR = ADD_INT(A);
BEGIN
  xMod = MOD(ANumber, 16);
  ANumber = ANumber / 16;
  xResult = TO_HEX(xMod);
  WHILE (ANumber > 0) DO
  BEGIN
    xMod = MOD(ANumber, 16);
    ANumber = ANumber / 16;
    xResult = TO_HEX(xMod) || xResult;
  END
  RETURN '0x' || LPAD(xResult, AByte_Per_Number * 2, '0');
END
6. With DEFINER set for function f, user US needs only the EXECUTE privilege on f. If it were set for
INVOKER, the user would also need the INSERT privilege on table t.
set term ^;
create function f (i integer) returns int SQL SECURITY DEFINER
as
begin
insert into t values (:i);
return i + 1;
end^
set term ;^
grant execute on function f to user us;
commit;
See also
CREATE OR ALTER FUNCTION, ALTER FUNCTION, RECREATE FUNCTION, DROP FUNCTION, DECLARE EXTERNAL
FUNCTION
Available in
DSQL
Syntax
The ALTER FUNCTION statement allows the following changes to a stored function definition:
For external functions (UDR), you can change the entry point and engine name. For legacy external functions declared using DECLARE EXTERNAL FUNCTION — also known as UDFs — it is not possible to convert them to PSQL functions, or vice versa.
After ALTER FUNCTION executes, existing privileges remain intact and dependencies are not affected.
Altering a function without specifying the SQL SECURITY clause will remove the SQL Security
property if currently set for this function. This means the behaviour will revert to the database
default.
Take care about changing the number and type of input parameters and the
output type of a stored function. Existing application code and procedures,
functions and triggers that call it could become invalid because the new
description of the parameters is incompatible with the old calling format. For
information on how to troubleshoot such a situation, see the article The
RDB$VALID_BLR Field in the Appendix.
• Administrators
See also
CREATE FUNCTION, CREATE OR ALTER FUNCTION, RECREATE FUNCTION, DROP FUNCTION
Available in
DSQL
Syntax
The CREATE OR ALTER FUNCTION statement creates a new stored function or alters an existing one. If
the stored function does not exist, it will be created by invoking a CREATE FUNCTION statement
transparently. If the function already exists, it will be altered and compiled (through ALTER
FUNCTION) without affecting its existing privileges and dependencies.
See also
CREATE FUNCTION, ALTER FUNCTION, DROP FUNCTION
Available in
DSQL
Syntax
Parameter Description
The DROP FUNCTION statement deletes an existing stored function. If the stored function has any
dependencies, the attempt to delete it will fail, and raise an error.
• Administrators
See also
CREATE FUNCTION, CREATE OR ALTER FUNCTION, RECREATE FUNCTION
Available in
DSQL
Syntax
The RECREATE FUNCTION statement creates a new stored function or recreates an existing one. If there
is a function with this name already, the engine will try to drop it and then create a new one.
Recreating an existing function will fail at COMMIT if the function has dependencies.
Be aware that dependency errors are not detected until the COMMIT phase of this
operation.
After a function is successfully recreated, existing privileges to execute the stored function and
the privileges of the stored function itself are dropped.
See also
CREATE FUNCTION, DROP FUNCTION
• The default setting for the configuration parameter UdfAccess is None. To use UDFs, an explicit Restrict path-list configuration is now required
• The UDF libraries (ib_udf, fbudf) are no longer distributed in the installation
kits
External functions, also known as “User-Defined Functions” (UDFs), are programs written in an
external programming language and stored in dynamically loaded libraries. Once declared in a
database, they become available in dynamic and procedural statements as though they were
implemented in the SQL language.
External functions extend the possibilities for processing data with SQL considerably. To make a
function available to a database, it is declared using the statement DECLARE EXTERNAL FUNCTION.
The library containing a function is loaded when any function included in it is called.
See also
FUNCTION
Available in
DSQL, ESQL
Syntax
<arg_desc_list> ::=
<arg_type_decl> [, <arg_type_decl> ...]
<arg_type_decl> ::=
<udf_data_type> [BY {DESCRIPTOR | SCALAR_ARRAY} | NULL]
<udf_data_type> ::=
<scalar_datatype>
| BLOB
| CSTRING(length) [ CHARACTER SET charset ]
<scalar_datatype> ::=
!! See Scalar Data Types Syntax !!
<return_value> ::=
{ <udf_data_type> | PARAMETER param_num }
[{ BY VALUE | BY DESCRIPTOR [FREE_IT] | FREE_IT }]
Parameter Description
library_name The name of the module (MODULE_NAME) from which the function is
exported. This will be the name of the file, without the “.dll” or “.so” file
extension.
param_num The number of the input parameter, numbered from 1 in the list of input
parameters in the declaration, describing the data type that will be
returned by the function
The DECLARE EXTERNAL FUNCTION statement makes a user-defined function available in the database.
UDF declarations must be made in each database that is going to use them. There is no need to
declare UDFs that will never be used.
The name of the external function must be unique among all function names. It may be different
from the exported name of the function, as specified in the ENTRY_POINT argument.
The input parameters of the function follow the name of the function and are separated with
commas. Each parameter has an SQL data type specified for it. Arrays cannot be used as function
parameters. In addition to the SQL types, the CSTRING type is available for specifying a null-
terminated string with a maximum length of LENGTH bytes. There are several mechanisms for
passing a parameter from the Firebird engine to an external function; each of these mechanisms is discussed below.
By default, input parameters are passed by reference. There is no separate clause to explicitly
indicate that parameters are passed by reference.
When passing a NULL value by reference, it is converted to the equivalent of zero, for example, a
number ‘0’ or an empty string (“''”). If the keyword NULL is specified after a parameter, then when a NULL value is passed, a null pointer will be passed to the external function instead.
Declaring a function with the NULL keyword does not guarantee that the function will correctly
handle a NULL input parameter. Any function must be written or rewritten to correctly handle NULL
values. Always use the function declaration as provided by its developer.
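As a sketch, a declaration using the NULL keyword might look like this (the function, entry point and module names are hypothetical):

declare external function my_func
  integer null
  returns integer by value
  entry_point 'myFunc' module_name 'mymodule';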
If BY DESCRIPTOR is specified, then the input parameter is passed by descriptor. In this case, the UDF
parameter will receive a pointer to an internal structure known as a descriptor. The descriptor
contains information about the data type, subtype, precision, character set and collation, scale, a
pointer to the data itself and some flags, including the NULL indicator. This declaration only works if
the external function is written to handle descriptors.
When passing a function parameter by descriptor, the passed value is not cast to
the declared data type.
The BY SCALAR_ARRAY clause is used when passing arrays as input parameters. Unlike other types,
you cannot return an array from a UDF.
RETURNS clause
(Required) specifies the output parameter returned by the function. A function is scalar: it
returns one value (output parameter). The output parameter can be of any SQL type (except an
array or an array element) or a null-terminated string (CSTRING). The output parameter can be
passed by reference (the default), by descriptor or by value. If the BY DESCRIPTOR clause is
specified, the output parameter is passed by descriptor. If the BY VALUE clause is specified, the
output parameter is passed by value.
PARAMETER keyword
specifies that the function returns the value from the parameter under number param_num. It is
necessary if you need to return a value of data type BLOB.
FREE_IT keyword
means that the memory allocated for storing the return value will be freed after the function is
executed. It is used only if the memory was allocated dynamically in the UDF. In such a UDF, the memory must be allocated with the ib_util_malloc function from the ib_util module, to remain compatible with the memory allocation and freeing functions used by the Firebird code and the shipped UDF modules.
ENTRY_POINT clause
specifies the name of the entry point (the name of the imported function), as exported from the
module.
MODULE_NAME clause
defines the name of the module where the exported function is located. The link to the module
should not be the full path and extension of the file, if that can be avoided. If the module is
located in the default location (in the ../UDF subdirectory of the Firebird server root) or in a
location explicitly configured in firebird.conf, it makes it easier to move the database between
different platforms. The UDFAccess parameter in the firebird.conf file allows access restrictions to
external functions modules to be configured.
Any user connected to the database can declare an external function (UDF).
• Administrators
1. Declaring the addDay external function located in the fbudf module. The input and output
parameters are passed by reference.
2. Declaring the invl external function located in the fbudf module. The input and output
parameters are passed by descriptor.
3. Declaring the isLeapYear external function located in the fbudf module. The input parameter is
passed by reference, while the output parameter is passed by value.
4. Declaring the i64Truncate external function located in the fbudf module. The input and output
parameters are passed by descriptor. The second parameter of the function is used as the return
value.
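Sketches of the corresponding declarations, based on the fbudf.sql script shipped with earlier Firebird versions (entry point names should be verified against your module):

declare external function addDay
  timestamp, int
  returns timestamp
  entry_point 'addDay' module_name 'fbudf';

declare external function invl
  int by descriptor, int by descriptor
  returns int by descriptor
  entry_point 'idNvl' module_name 'fbudf';

declare external function isLeapYear
  timestamp
  returns int by value
  entry_point 'isLeapYear' module_name 'fbudf';

declare external function i64Truncate
  numeric(18) by descriptor, numeric(18) by descriptor
  returns parameter 2
  entry_point 'fbtruncate' module_name 'fbudf';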
See also
ALTER EXTERNAL FUNCTION, DROP EXTERNAL FUNCTION, CREATE FUNCTION
Alters the entry point and/or the module name of a user-defined function (UDF)
Available in
DSQL
Syntax
Parameter Description
new_library_name The new name of the module (MODULE_NAME) from which the function is
exported). This will be the name of the file, without the “.dll” or “.so” file
extension.
The ALTER EXTERNAL FUNCTION statement changes the entry point and/or the module name for a user-
defined function (UDF). Existing dependencies remain intact after the statement containing the
change(s) is executed.
Any user connected to the database can change the entry point and the module name.
• Administrators
See also
DECLARE EXTERNAL FUNCTION, DROP EXTERNAL FUNCTION
Available in
DSQL, ESQL
Syntax
Parameter Description
The DROP EXTERNAL FUNCTION statement deletes the declaration of a user-defined function from the
database. If there are any dependencies on the external function, the statement will fail and raise
an error.
Any user connected to the database can delete the declaration of an external function.
• Administrators
See also
DECLARE EXTERNAL FUNCTION
5.11. PACKAGE
A package is a group of procedures and functions managed as one entity.
Available in
DSQL
Syntax
<package_item> ::=
<function_decl>;
| <procedure_decl>;
<function_decl> ::=
FUNCTION funcname [ ( [ <in_params> ] ) ]
RETURNS <domain_or_non_array_type> [COLLATE collation]
[DETERMINISTIC]
<procedure_decl> ::=
PROCEDURE procname [ ( [ <in_params> ] ) ]
[RETURNS (<out_params>)]
<domain_or_non_array_type> ::=
!! See Scalar Data Types Syntax !!
Parameter Description
package_name Package name. The maximum length is 63 characters. The package name
must be unique among all package names.
func_name Function name. The maximum length is 63 characters. The function name
must be unique within the package.
collation Collation
literal A literal value that is assignment-compatible with the data type of the
parameter
context_var Any context variable that is assignment-compatible with the data type of
the parameter
The CREATE PACKAGE statement creates a new package header. Routines (procedures and functions)
declared in the package header are available outside the package using the full identifier
(package_name.proc_name or package_name.func_name). Routines defined only in the package
body — but not in the package header — are not visible outside the package.
For this reason, it is recommended that the names of stored procedures and
functions in packages do not overlap with names of stored procedures and
functions in the global namespace.
Statement Terminators
Some SQL statement editors — specifically the isql utility that comes with Firebird, and possibly
some third-party editors — employ an internal convention that requires all statements to be
terminated with a semicolon. This creates a conflict with PSQL syntax when coding in these
environments. If you are unacquainted with this problem and its solution, please study the details
in the PSQL chapter in the section entitled Switching the Terminator in isql.
SQL Security
The SQL SECURITY clause specifies the security context for executing other routines or inserting into
other tables from functions or procedures defined in this package. When SQL Security is not
specified, the default value of the database is applied at runtime.
The SQL SECURITY clause can only be specified for the package, not for individual procedures and
functions of the package.
• Administrators
The user who created the package header becomes its owner.
1. With DEFINER set for package pk, user US needs only the EXECUTE privilege on pk. If it were set for
INVOKER, either the user or the package would also need the INSERT privilege on table t.
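A sketch of what the package might look like, mirroring the standalone function f example elsewhere in this chapter (the exact definition is an assumption):

create table t (i integer);

set term ^;
create package pk SQL SECURITY DEFINER
as
begin
  function f(i integer) returns int;
end^

create package body pk
as
begin
  function f(i integer) returns int
  as
  begin
    insert into t values (:i);
    return i + 1;
  end
end^
set term ;^

grant execute on package pk to user us;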
commit;
See also
CREATE PACKAGE BODY, RECREATE PACKAGE BODY, ALTER PACKAGE, DROP PACKAGE, RECREATE PACKAGE
Available in
DSQL
Syntax
The ALTER PACKAGE statement modifies the package header. It can be used to change the number and
definition of procedures and functions, including their input and output parameters. However, the
source and compiled form of the package body is retained, though the body might be incompatible
after the change to the package header. The validity of a package body for the defined header is
stored in the column RDB$PACKAGES.RDB$VALID_BODY_FLAG.
Altering a package without specifying the SQL SECURITY clause will remove the SQL Security
property if currently set for this package. This means the behaviour will revert to the database
default.
• Administrators
See also
CREATE PACKAGE, DROP PACKAGE, RECREATE PACKAGE BODY
Available in
DSQL
Syntax
The CREATE OR ALTER PACKAGE statement creates a new package or modifies an existing package
header. If the package header does not exist, it will be created using CREATE PACKAGE. If it already
exists, then it will be modified using ALTER PACKAGE while retaining existing privileges and
dependencies.
See also
CREATE PACKAGE, ALTER PACKAGE, RECREATE PACKAGE, RECREATE PACKAGE BODY
Available in
DSQL
Syntax
Parameter Description
The DROP PACKAGE statement deletes an existing package header. If a package body exists, it will be
dropped together with the package header. If there are still dependencies on the package, an error
will be raised.
• Administrators
See also
CREATE PACKAGE, DROP PACKAGE BODY
Available in
DSQL
Syntax
The RECREATE PACKAGE statement creates a new package or recreates an existing package header. If a
package header with the same name already exists, then this statement will first drop it and then
create a new package header. It is not possible to recreate the package header if there are still
dependencies on the existing package, or if the body of the package exists. Existing privileges of the
package itself are not preserved, nor are privileges to execute the procedures or functions of the
package.
See also
CREATE PACKAGE, DROP PACKAGE, CREATE PACKAGE BODY, RECREATE PACKAGE BODY
Available in
DSQL
Syntax
<package_item> ::=
!! See CREATE PACKAGE syntax !!
<package_body_item> ::=
<function_impl> |
<procedure_impl>
<function_impl> ::=
FUNCTION funcname [ ( [ <in_params> ] ) ]
RETURNS <domain_or_non_array_type> [COLLATE collation]
[DETERMINISTIC]
<module-body>
<procedure_impl> ::=
PROCEDURE procname [ ( [ <in_params> ] ) ]
[RETURNS (<out_params>)]
<module-body>
<module-body> ::=
!! See Syntax of Module Body !!
<in_params> ::=
!! See CREATE PACKAGE syntax !!
!! See also Rules below !!
<out_params> ::=
!! See CREATE PACKAGE syntax !!
<domain_or_non_array_type> ::=
!! See Scalar Data Types Syntax !!
Parameter Description
package_name Package name. The maximum length is 63 characters. The package name
must be unique among all package names.
func_name Function name. The maximum length is 63 characters. The function name
must be unique within the package.
collation Collation
The CREATE PACKAGE BODY statement creates a new package body. The package body can only be
created after the package header has been created. If there is no package header with name
package_name, an error is raised.
All procedures and functions declared in the package header must be implemented in the package
body. Additional procedures and functions may be defined and implemented in the package body
only. Procedures and functions defined in the package body, but not defined in the package header,
are not visible outside the package body.
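As a sketch (names hypothetical), a private function defined only in the package body is callable from the package's own routines but not from outside the package:

set term ^;
create package aux
as
begin
  function double_it (x int) returns int;
end^

create package body aux
as
begin
  -- not declared in the header: private to the package body
  function helper (x int) returns int
  as
  begin
    return x * 2;
  end

  -- declared in the header: visible outside as aux.double_it
  function double_it (x int) returns int
  as
  begin
    return helper(x);
  end
end^
set term ;^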
The names of procedures and functions defined in the package body must be unique among the
names of procedures and functions defined in the package header and implemented in the package
body.
For this reason, it is recommended that the names of stored procedures and
functions in packages do not overlap with names of stored procedures and
functions in the global namespace.
Rules
• In the package body, all procedures and functions must be implemented with the same
signature as declared in the header and at the beginning of the package body
• The default values for procedure or function parameters cannot be overridden (as specified in
the package header or in <package_item>). This means default values can only be defined in
<package_body_item> for procedures or functions that have not been defined in the package
header or earlier in the package body.
UDF declarations (DECLARE EXTERNAL FUNCTION) are not supported for packages. Use UDR instead.
• Administrators
See also
DROP PACKAGE BODY, RECREATE PACKAGE BODY, CREATE PACKAGE
Available in
DSQL
Syntax
Parameter Description
• Administrators
See also
CREATE PACKAGE BODY, RECREATE PACKAGE BODY, DROP PACKAGE
Available in
DSQL
Syntax
The RECREATE PACKAGE BODY statement creates a new package body or recreates an existing one. If a package body with the same name already exists, the statement will try to drop it and then create a new package body. After recreating the package body, privileges of the package and its routines are preserved.
See also
CREATE PACKAGE BODY, DROP PACKAGE BODY, RECREATE PACKAGE BODY, ALTER PACKAGE
5.13. FILTER
A BLOB FILTER is a database object that is a special type of external function, with the sole purpose
of taking a BLOB object in one format and converting it to a BLOB object in another format. The
formats of the BLOB objects are specified with user-defined BLOB subtypes.
External functions for converting BLOB types are stored in dynamic libraries and loaded when
necessary.
Available in
DSQL, ESQL
Syntax
<mnemonic> ::=
BINARY | TEXT | BLR | ACL | RANGES
| SUMMARY | FORMAT | TRANSACTION_DESCRIPTION
| EXTERNAL_FILE_DESCRIPTION | user_defined
Parameter Description
The DECLARE FILTER statement makes a BLOB filter available to the database. The name of the BLOB
filter must be unique among the names of BLOB filters.
The subtypes can be specified as the subtype number or as the subtype mnemonic name. Custom
subtypes must be represented by negative numbers (from -1 to -32,768), or their user-defined name
from the RDB$TYPES table. An attempt to declare more than one BLOB filter with the same
combination of the input and output types will fail with an error.
INPUT_TYPE
clause defining the BLOB subtype of the object to be converted
OUTPUT_TYPE
clause defining the BLOB subtype of the object to be created.
Mnemonic names can be defined for custom BLOB subtypes and inserted manually into the RDB$TYPES system table.
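For example (the subtype number and mnemonic name here are hypothetical):

insert into rdb$types (rdb$field_name, rdb$type, rdb$type_name)
values ('RDB$FIELD_SUB_TYPE', -33, 'MIDI');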
Warning
In general, the system tables are not writable by users. However, inserting custom
types into RDB$TYPES is still possible if the user is an administrator, or has the
system privilege CREATE_USER_TYPES.
Parameters
ENTRY_POINT
clause defining the name of the entry point (the name of the imported function) in the module.
MODULE_NAME
The clause defining the name of the module where the exported function is located. By default,
modules must be located in the UDF folder of the root directory on the server. The UDFAccess parameter in firebird.conf allows access restrictions to filter libraries to be configured.
• Administrators
The user executing the DECLARE FILTER statement becomes the owner of the filter.
See also
DROP FILTER
Available in
DSQL, ESQL
Syntax
Parameter Description
The DROP FILTER statement removes the declaration of a BLOB filter from the database. Removing a
BLOB filter from a database makes it unavailable for use from that database. The dynamic library
where the conversion function is located remains intact and the removal from one database does
not affect other databases in which the same BLOB filter is still declared.
• Administrators
See also
DECLARE FILTER
Sequences are stored as 64-bit integers, regardless of the SQL dialect of the database.
If a client is connected using Dialect 1, the server handles sequence values as 32-bit
integers. Passing a sequence value to a 32-bit field or variable will not cause errors
as long as the current value of the sequence does not exceed the limits of a 32-bit
number. However, as soon as the sequence value exceeds this limit, a database in
Dialect 3 will produce an error. A database in Dialect 1 will truncate (overflow) the
value, which could compromise the uniqueness of the series.
This section describes how to create, alter, set and drop sequences.
Creates a sequence
Available in
DSQL, ESQL
Syntax
Parameter Description
start_value First value produced by NEXT VALUE FOR seq_name. A 64-bit integer from
-2^63 to 2^63 - 1. Default is 1.
increment Increment of the sequence when using NEXT VALUE FOR seq_name; cannot
be 0. Default is 1.
When a sequence is created, its current value is set so that the next value produced by NEXT VALUE
FOR seq_name is equal to start_value. In other words, the current value of the sequence is set to
(start_value - increment).
The optional INCREMENT [BY] clause allows you to specify a non-zero increment for the NEXT VALUE
FOR seq_name expression.
The GEN_ID(seq_name, step) function can be called instead, to “step” the sequence by a different
increment. The increment specified through INCREMENT [BY] is not used by GEN_ID. Using both NEXT
VALUE FOR and GEN_ID, especially when the sequence has an increment other than 1, may result in
values you did not expect. For example, if you execute CREATE SEQUENCE x START WITH 10 INCREMENT
BY 10, and then use GEN_ID(x, 1), the value returned is 1, and if you then call NEXT VALUE FOR x, you
get 11.
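The worked example above as a script:

create sequence x start with 10 increment by 10;

select gen_id(x, 1) from rdb$database;     -- returns 1
select next value for x from rdb$database; -- returns 11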
The SQL standard specifies that sequences with a negative increment should start at the maximum value of the sequence (2^63 - 1) and count down. Firebird does not do that, and instead starts at 1 unless you specify a START WITH value.
The statements CREATE SEQUENCE and CREATE GENERATOR are synonymous — both create a new
sequence. Either can be used, but CREATE SEQUENCE is recommended as that is the syntax defined in
the SQL standard.
• Administrators
The user executing CREATE SEQUENCE (CREATE GENERATOR) becomes its owner.
4. Creating the EMP_NO_GEN sequence with an initial value of 1 and an increment of 10.
5. Creating the EMP_NO_GEN sequence with an initial value of 5 and an increment of 10.
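Sketches of statements matching these descriptions (each assumes EMP_NO_GEN does not already exist):

create sequence EMP_NO_GEN start with 1 increment by 10;
create sequence EMP_NO_GEN start with 5 increment by 10;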
See also
ALTER SEQUENCE, CREATE OR ALTER SEQUENCE, DROP SEQUENCE, RECREATE SEQUENCE, SET GENERATOR, NEXT
VALUE FOR, GEN_ID() function
Available in
DSQL
Syntax
Parameter Description
start_value Next value produced by NEXT VALUE FOR seq_name. A 64-bit integer from
-2^63 to 2^63 - 1. Default is the start value in the metadata.
increment Increment of the sequence (when using NEXT VALUE FOR seq_name); cannot
be 0.
The ALTER SEQUENCE statement sets the next value of the sequence, and/or changes the increment of
the sequence.
The RESTART WITH start_value clause sets the current value of the sequence so that the next value
obtained from NEXT VALUE FOR seq_name is equal to start_value. To achieve this, the current value of
the sequence is set to (start_value - increment) with increment either as specified in the statement,
or from the metadata of the sequence. The RESTART clause without WITH start_value behaves as if
WITH start_value is specified with the start value from the metadata of the sequence.
Contrary to Firebird 3.0, since Firebird 4.0 RESTART WITH start_value only restarts
the sequence with the specified value, and does not store start_value as the new
start value of the sequence. A subsequent ALTER SEQUENCE RESTART will use the start
value specified when the sequence was created, and not the start_value of this
statement. This behaviour is specified in the SQL standard.
It is currently not possible to change the start value stored in the metadata.
Incorrect use of ALTER SEQUENCE — changing the current value of the sequence — is
likely to break the logical integrity of data, or result in primary key or unique
constraint violations.
INCREMENT [BY] allows you to change the sequence increment for the NEXT VALUE FOR expression.
Changing the increment value takes effect for all queries that run after the transaction commits.
Procedures that are called for the first time after the change is committed will use the new increment if they use NEXT VALUE FOR. Procedures that were already cached in the metadata cache will continue
to use the old increment. You may need to close all connections to the database for the metadata
cache to clear, and the new increment to be used. Procedures using NEXT VALUE FOR do not need to
be recompiled to see the new increment. Procedures using GEN_ID(gen, expression) are not affected
when the increment is changed.
• Administrators
• Users with the ALTER ANY SEQUENCE (ALTER ANY GENERATOR) privilege
1. Setting the value of the EMP_NO_GEN sequence so the next value is 145.
2. Resetting the sequence EMP_NO_GEN to the start value stored in the metadata
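Sketches of the corresponding statements:

alter sequence EMP_NO_GEN restart with 145;
alter sequence EMP_NO_GEN restart;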
See also
SET GENERATOR, CREATE SEQUENCE, CREATE OR ALTER SEQUENCE, DROP SEQUENCE, RECREATE SEQUENCE, NEXT
VALUE FOR, GEN_ID() function
Available in
DSQL, ESQL
Syntax
Parameter Description
start_value First or next value produced by NEXT VALUE FOR seq_name. A 64-bit integer
from -2^63 to 2^63 - 1. Default is 1.
increment Increment of the sequence when using NEXT VALUE FOR seq_name; cannot
be 0. Default is 1.
If the sequence does not exist, it will be created as documented under CREATE SEQUENCE. An existing
sequence will be changed:
• If RESTART is specified, the sequence is restarted with the start value stored in the metadata
• If the START WITH clause is specified, the sequence is restarted with start_value, but the
start_value is not stored. In other words, it behaves as RESTART WITH in ALTER SEQUENCE.
• If the INCREMENT [BY] clause is specified, increment is stored as the increment in the metadata,
and used for subsequent calls to NEXT VALUE FOR
See also
CREATE SEQUENCE, ALTER SEQUENCE, DROP SEQUENCE, RECREATE SEQUENCE, SET GENERATOR, NEXT VALUE FOR,
GEN_ID() function
Drops a sequence
Available in
DSQL, ESQL
Syntax
The statements DROP SEQUENCE and DROP GENERATOR are equivalent: both drop (delete) an existing
sequence. Either is valid but DROP SEQUENCE, being defined in the SQL standard, is recommended.
• Administrators
• Users with the DROP ANY SEQUENCE (DROP ANY GENERATOR) privilege
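For example, either of these equivalent statements drops a sequence (the name EMP_NO_GEN is illustrative):

```sql
DROP SEQUENCE EMP_NO_GEN;
-- equivalent legacy form:
DROP GENERATOR EMP_NO_GEN;
```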
See also
CREATE SEQUENCE, CREATE OR ALTER SEQUENCE, RECREATE SEQUENCE
Available in
DSQL, ESQL
Syntax
Parameter Description
start_value First value produced by NEXT VALUE FOR seq_name. A 64-bit integer
from -2^63 to 2^63 - 1. Default is 1.
increment Increment of the sequence (when using NEXT VALUE FOR seq_name); cannot
be 0. Default is 1.
See CREATE SEQUENCE for the full syntax of CREATE SEQUENCE and descriptions of defining a sequence
and its options.
RECREATE SEQUENCE creates or recreates a sequence. If a sequence with this name already exists, the
RECREATE SEQUENCE statement will try to drop it and create a new one. Existing dependencies will
prevent the statement from executing.
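A minimal sketch, using a hypothetical sequence name:

```sql
RECREATE SEQUENCE SEQ_ORDER_ID START WITH 1000;
```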
See also
CREATE SEQUENCE, ALTER SEQUENCE, CREATE OR ALTER SEQUENCE, DROP SEQUENCE, SET GENERATOR, NEXT VALUE
FOR, GEN_ID() function
Available in
DSQL, ESQL
Syntax
The SET GENERATOR statement sets the current value of a sequence to the specified value.
• Administrators
• Users with the ALTER ANY SEQUENCE (ALTER ANY GENERATOR) privilege
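A minimal sketch, using a hypothetical sequence name; the statement sets the current value of the sequence directly:

```sql
SET GENERATOR EMP_NO_GEN TO 145;
```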
See also
ALTER SEQUENCE, CREATE SEQUENCE, CREATE OR ALTER SEQUENCE, DROP SEQUENCE, NEXT VALUE FOR, GEN_ID()
function
5.15. EXCEPTION
This section describes how to create, modify and delete custom exceptions for use in error handlers
in PSQL modules.
Available in
DSQL, ESQL
Syntax
<message-part> ::=
<text>
| @<slot>
Parameter Description
The statement CREATE EXCEPTION creates a new exception for use in PSQL modules. If an exception
with the same name exists, the statement will raise an error.
The default message is stored in character set NONE, i.e. in characters of any single-byte character set.
The text can be overridden in the PSQL code when the exception is thrown.
The error message may contain “parameter slots” that can be filled when raising the exception.
If the message contains a parameter slot number that is greater than 9, the second
and subsequent digits will be treated as literal text. For example @10 will be
interpreted as slot 1 followed by a literal ‘0’.
• Administrators
The user executing the CREATE EXCEPTION statement becomes the owner of the exception.
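A sketch of an exception with parameter slots in its default message (the exception name and message text are illustrative):

```sql
CREATE EXCEPTION E_INVALID_VALUE
  'Invalid value @1 for field @2';
```

In PSQL, the slots can then be filled when the exception is raised, for example with EXCEPTION E_INVALID_VALUE USING ('-1', 'AGE');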
See also
ALTER EXCEPTION, CREATE OR ALTER EXCEPTION, DROP EXCEPTION, RECREATE EXCEPTION
Available in
DSQL, ESQL
Syntax
• Administrators
See also
CREATE EXCEPTION, CREATE OR ALTER EXCEPTION, DROP EXCEPTION, RECREATE EXCEPTION
Available in
DSQL
Syntax
The statement CREATE OR ALTER EXCEPTION is used to create the specified exception if it does not
exist, or to modify the text of the error message returned from it if it exists already. If an existing
exception is altered by this statement, any existing dependencies will remain intact.
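A sketch, using a hypothetical exception name:

```sql
CREATE OR ALTER EXCEPTION E_LARGE_VALUE
  'The value exceeds the allowed maximum';
```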
See also
CREATE EXCEPTION, ALTER EXCEPTION, RECREATE EXCEPTION
Available in
DSQL, ESQL
Syntax
The statement DROP EXCEPTION is used to delete an exception. Any dependencies on the exception
will cause the statement to fail, and the exception will not be deleted.
• Administrators
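A minimal sketch (hypothetical name):

```sql
DROP EXCEPTION E_LARGE_VALUE;
```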
See also
CREATE EXCEPTION, RECREATE EXCEPTION
Available in
DSQL
Syntax
The statement RECREATE EXCEPTION creates a new exception for use in PSQL modules. If an exception
with the same name exists already, the RECREATE EXCEPTION statement will try to drop it and create a
new one. If there are any dependencies on the existing exception, the attempted deletion fails and
RECREATE EXCEPTION is not executed.
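For example (the exception name is illustrative):

```sql
RECREATE EXCEPTION E_LARGE_VALUE
  'The value exceeds its limit';
```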
See also
CREATE EXCEPTION, DROP EXCEPTION, CREATE OR ALTER EXCEPTION
5.16. COLLATION
In SQL, text strings are sortable objects. This means that they obey ordering rules, such as
alphabetical order. Comparison operations can be applied to such text strings (for example, “less
than” or “greater than”), where the comparison must apply a certain sort order or collation. For
example, the expression “'a' < 'b'” means that ‘'a'’ precedes ‘'b'’ in the collation. The expression
“'c' > 'b'” means that ‘'c'’ follows ‘'b'’ in the collation. Text strings of more than one character
are sorted using sequential character comparisons: first the first characters of the two strings are
compared, then the second characters, and so on, until a difference is found between the two
strings. This difference defines the sort order.
A COLLATION is the schema object that defines a collation (or sort order).
Available in
DSQL
Syntax
Parameter Description
collname The name to use for the new collation. The maximum length is 63
characters
The CREATE COLLATION statement does not “create” anything; its purpose is to make a collation
known to a database. The collation must already be present on the system, typically in a library file,
and must be properly registered in a .conf file in the intl subdirectory of the Firebird installation.
The collation may alternatively be based on one that is already present in the database.
The optional FROM clause specifies the base collation that is used to derive a new collation. This
collation must already be present in the database. If the keyword EXTERNAL is specified, then
Firebird will scan the .conf files in $fbroot/intl/, where extname must exactly match the name in
the configuration file (case-sensitive).
If no FROM clause is present, Firebird will scan the .conf file(s) in the intl subdirectory for a collation
with the collation name specified in CREATE COLLATION. In other words, omitting the FROM basecoll
clause is equivalent to specifying FROM EXTERNAL ('collname').
The — single-quoted — extname is case-sensitive and must correspond exactly with the collation
name in the .conf file. The collname, charset and basecoll parameters are case-insensitive unless
enclosed in double-quotes.
When creating a collation, you can specify whether trailing spaces are included in the comparison.
If the NO PAD clause is specified, trailing spaces are taken into account in the comparison. If the PAD
SPACE clause is specified, trailing spaces are ignored in the comparison.
The optional CASE clause allows you to specify whether the comparison is case-sensitive or case-
insensitive.
The optional ACCENT clause allows you to specify whether the comparison is accent-sensitive or
accent-insensitive (e.g. if ‘'e'’ and ‘'é'’ are considered equal or unequal).
Specific Attributes
The CREATE COLLATION statement can also include specific attributes to configure the collation. The
available specific attributes are listed in the table below. Not all specific attributes apply to every
collation. If the attribute is not applicable to the collation, but is specified when creating it, it will
not cause an error.
In the table, “1 bpc” indicates that an attribute is valid for collations of character sets using 1 byte
per character (so-called narrow character sets), and “UNI” for “Unicode collations”.
If you want to add a new character set with its default collation into your database,
declare and run the stored procedure sp_register_character_set(name,
max_bytes_per_character), found in misc/intl.sql under the Firebird installation
directory.
In order for this to work, the character set must be present on the system and
registered in a .conf file in the intl subdirectory.
• Administrators
The user executing the CREATE COLLATION statement becomes the owner of the collation.
1. Creating a collation using the name found in the fbintl.conf file (case-sensitive)
2. Creating a collation using a special (user-defined) name (the “external” name must match the
name in the fbintl.conf file)
4. Creating a case-insensitive collation based on one already existing in the database with specific
attributes
5. Creating a case-insensitive collation by the value of numbers (the so-called natural collation)
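Sketches matching the numbered captions above; the collation and external names must actually exist in your fbintl.conf, so treat the specific names as assumptions:

```sql
-- 1. Name as found in fbintl.conf (case-sensitive)
CREATE COLLATION ISO8859_1_UNICODE FOR ISO8859_1;

-- 2. User-defined name; the external name must match fbintl.conf
CREATE COLLATION LAT_UNI
  FOR ISO8859_1
  FROM EXTERNAL ('ISO8859_1_UNICODE');

-- 4. Case-insensitive collation based on one in the database,
--    with a specific attribute
CREATE COLLATION ES_ES_CI_COMPR
  FOR ISO8859_1
  FROM ES_ES
  CASE INSENSITIVE
  'DISABLE-COMPRESSIONS=0';

-- 5. Case-insensitive collation sorting digit sequences by
--    numeric value (the so-called natural collation)
CREATE COLLATION nums_coll FOR UTF8
  FROM UNICODE
  CASE INSENSITIVE 'NUMERIC-SORT=1';
```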
See also
DROP COLLATION
Available in
DSQL
Syntax
The DROP COLLATION statement removes the specified collation from the database, if it exists. An error is raised if the specified collation is not present.
If you want to remove an entire character set with all its collations from the
database, declare and execute the stored procedure
sp_unregister_character_set(name), found in misc/intl.sql under the
Firebird installation directory.
• Administrators
See also
CREATE COLLATION
Available in
DSQL
Syntax
The ALTER CHARACTER SET statement changes the default collation of the specified character set. This affects future usage of the character set, except for cases where the COLLATE clause is
explicitly overridden. In that case, the collation of existing domains, columns and PSQL variables
will remain intact after the change to the default collation of the underlying character set.
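A sketch; this sets PT_BR as the default collation of the ISO8859_1 character set, assuming that collation is available in the database:

```sql
ALTER CHARACTER SET ISO8859_1
  SET DEFAULT COLLATION PT_BR;
```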
If you change the default collation for the database character set (the one defined
when the database was created), it will change the default collation for the
database.
If you change the default collation for the character set that was specified during
the connection, string constants will be interpreted according to the new collation
value, except in those cases where the character set and/or the collation have been
overridden.
• Administrators
5.18. Comments
Database objects and a database itself may be annotated with comments. It is a convenient
mechanism for documenting the development and maintenance of a database. Comments created
with COMMENT ON will survive a gbak backup and restore.
5.18.1. COMMENT ON
Available in
DSQL
Syntax
<object> ::=
{DATABASE | SCHEMA}
| <basic-type> objectname
| USER username [USING PLUGIN pluginname]
| COLUMN relationname.fieldname
| [{PROCEDURE | FUNCTION}] PARAMETER
[package_name.]routinename.paramname
| {PROCEDURE | [EXTERNAL] FUNCTION}
[package_name.]routinename
| [GLOBAL] MAPPING mappingname
<basic-type> ::=
CHARACTER SET | COLLATION | DOMAIN
| EXCEPTION | FILTER | GENERATOR
| INDEX | PACKAGE | ROLE
| SEQUENCE | TABLE | TRIGGER
| VIEW
Parameter Description
username Username
The COMMENT ON statement adds comments for database objects (metadata). Comments are saved to
the RDB$DESCRIPTION column of the corresponding system tables. Client applications can view
comments from these fields.
1. If you add an empty comment (“''”), it will be saved as NULL in the database.
Comments on users are visible to that user through the SEC$USERS virtual table.
• Administrators
• Users with the ALTER ANY object_type privilege, where object_type is the type of object
commented on (e.g. PROCEDURE)
5. Adding a comment for a package, its procedures and functions, and their parameters
COMMENT ON
PROCEDURE PARAMETER APP_VAR.SET_DATERANGE.ADATEBEGIN
IS 'Start Date';
Chapter 6. Data Manipulation (DML) Statements
6.1. SELECT
Queries or retrieves data from the database
Global syntax
The above syntax is not the full SELECT syntax. For documentation reasons it is
simplified, and we attempt to build out the syntax in later sections. The full
SELECT syntax can be found below, in Full SELECT Syntax.
The SELECT statement retrieves data from the database and hands them to the application or the
enclosing SQL statement. Data is returned in zero or more rows, each containing one or more
columns or fields. The total of rows returned is the result set of the statement.
• The SELECT keyword, followed by a select list. This part specifies what you want to retrieve.
• The FROM keyword, followed by a selectable object. This tells the engine where you want to get it
from.
In its most basic form, SELECT retrieves a number of columns from a single table or view, like this:
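A sketch with hypothetical table and column names:

```sql
SELECT id, name, address
FROM contacts;
```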
In practice, a SELECT statement is usually executed with a WHERE clause, which limits the rows
returned. The result set may be sorted by an ORDER BY clause, and FIRST … SKIP, OFFSET … FETCH or
ROWS may further limit the number of returned rows, and can — for example — be used for
pagination.
The column list may contain all kinds of expressions, not only column names, and the source need
not be a table or view: it may also be a derived table, a common table expression (CTE) or a
selectable stored procedure. Multiple sources may be combined in a JOIN, and multiple result sets
may be combined in a UNION.
The following sections discuss the available SELECT subclauses and their usage in detail.
Syntax
SELECT
[FIRST <limit-expression>] [SKIP <limit-expression>]
FROM ...
...
<limit-expression> ::=
<integer-literal>
| <query-parameter>
| (<value-expression>)
FIRST and SKIP are Firebird-specific clauses. Use the SQL-standard OFFSET, FETCH
syntax wherever possible.
FIRST m limits the output of a query to the first m rows. SKIP n will skip the first n rows of the result
set before returning rows.
FIRST and SKIP are both optional. When used together as in “FIRST m SKIP n”, the n topmost rows of
the result set are discarded, and the first m rows of the rest of the set are returned.
• Any argument to FIRST and SKIP that is not an integer literal or an SQL parameter must be
enclosed in parentheses. This implies that a subquery expression must be enclosed in two pairs
of parentheses.
• If a SKIP lands past the end of the result set, an empty set is returned.
• If the number of rows in the result set (or the remainder left after a SKIP) is less than the value
of the m argument supplied for FIRST, that smaller number of rows is returned. These are valid
results, not error conditions.
Examples of FIRST/SKIP
1. The following query will return the first 10 names from the People table:
2. The following query will return everything but the first 10 names:
3. And this one returns the last 10 rows. Notice the double parentheses:
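Possible statements for the three captions above (a sketch; the People table is hypothetical):

```sql
-- 1. The first 10 names
SELECT FIRST 10 id, name
FROM People
ORDER BY name ASC;

-- 2. Everything but the first 10 names
SELECT SKIP 10 id, name
FROM People
ORDER BY name ASC;

-- 3. The last 10 rows; note the double parentheses
--    required around the subquery expression
SELECT SKIP ((SELECT COUNT(*) - 10 FROM People))
       id, name
FROM People
ORDER BY name ASC;
```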
See also
OFFSET, FETCH, ROWS
The columns list contains one or more comma-separated value expressions. Each expression
provides a value for one output column. Alternatively, * (“star” or “all”) can be used to stand for all
the columns of all relations in the FROM clause.
Syntax
SELECT
[...]
[{ ALL | DISTINCT }] <select-list>
[...]
FROM ...
<select-sublist> ::=
table-alias.*
| <value-expression> [[AS] column-alias]
<value-expression> ::=
[table-alias.]col_name
| [table-alias.]selectable_SP_outparm
| <literal>
| <context-variable>
| <function-call>
| <single-value-subselect>
| <CASE-construct>
| any other expression returning a single
value of a Firebird data type or NULL
<function-call> ::=
<normal-function>
| <aggregate-function>
| <window-function>
<normal-function> ::=
!! See Built-in Scalar Functions !!
<aggregate-function> ::=
!! See Aggregate Functions !!
<window-function> ::=
!! See Window Functions !!
Argument Description
table-alias Name of relation (view, stored procedure, derived table), or its alias
literal A literal
It is always valid to qualify a column name (or “*”) with the name or alias of the table, view or
selectable SP to which it belongs, followed by a dot (‘.’). For example, relationname.columnname,
relationname.*, alias.columnname, alias.*. Qualifying is required if the column name occurs in more
than one relation taking part in a join. Qualifying “*” is required if it is not the only item in the
column list.
Aliases hide the original relation name: once a table, view or procedure has been
aliased, only the alias can be used as its qualifier throughout the query. The
relation name itself becomes unavailable.
The column list may optionally be preceded by one of the keywords DISTINCT or ALL:
• DISTINCT filters out any duplicate rows. That is, if two or more rows have the same values in
every corresponding column, only one of them is included in the result set
• ALL is the default: it returns all rows, including duplicates. ALL is rarely used; it is allowed for
compliance with the SQL standard.
A COLLATE clause of a value-expression will not change the appearance of the column as such.
However, if the specified collation changes the case or accent sensitivity of the column, it may
influence:
• The ordering, if an ORDER BY clause is also present, and it involves that column
• The rows retrieved (and hence the total number of rows in the result set), if DISTINCT is used
A query with two subselects in the columns list:
select p.fullname,
(select name from classes c where c.id = p.class) as class,
(select name from mentors m where m.id = p.mentor) as mentor
from pupils p
The following query accomplishes the same as the previous one using joins instead of subselects:
select p.fullname,
       c.name as class,
       m.name as mentor
  from pupils p
  join classes c on c.id = p.class
  join mentors m on m.id = p.mentor
This query uses the DENSE_RANK window function to rank employees by salary:
SELECT
  id,
  salary,
  name,
  DENSE_RANK() OVER (ORDER BY salary) AS EMP_RANK
FROM employees
ORDER BY salary;
Selecting from columns of a derived table. A derived table is a parenthesized SELECT statement
whose result set is used in an enclosing query as if it were a regular table or view. The derived table
is shown in bold here:
select fieldcount,
count(relation) as num_tables
from (select r.rdb$relation_name as relation,
count(*) as fieldcount
from rdb$relations r
join rdb$relation_fields rf
on rf.rdb$relation_name = r.rdb$relation_name
group by relation)
group by fieldcount
For those not familiar with RDB$DATABASE: this is a system table that is present in all Firebird
databases and is guaranteed to contain exactly one row. Although it wasn’t created for this purpose,
it has become standard practice among Firebird programmers to select from this table if you want
to select “from nothing”, i.e. if you need data that are not bound to a table or view, but can be
derived from the expressions in the output columns alone. Another example is:
Finally, an example where you select some meaningful information from RDB$DATABASE itself:
As you may have guessed, this will give you the default character set of the database.
See also
Functions, Aggregate Functions, Window Functions, Context Variables, CASE, Subqueries
The FROM clause specifies the source(s) from which the data are to be retrieved. In its simplest form,
this is a single table or view. However, the source can also be a selectable stored procedure, a
derived table, or a common table expression. Multiple sources can be combined using various types
of joins.
This section focuses on single-source selects. Joins are discussed in a following section.
Syntax
SELECT
...
FROM <table-reference> [, <table-reference> ...]
[...]
<table-primary> ::=
<table-or-query-name> [[AS] correlation-name]
| [LATERAL] <derived-table> [<correlation-or-recognition>]
| <parenthesized-joined-table>
<table-or-query-name> ::=
table-name
| query-name
| [package-name.]procedure-name [(<procedure-args>)]
<correlation-or-recognition> ::=
[AS] correlation-name [(<column-name-list>)]
Argument Description
correlation-name The alias of a data source (table, view, procedure, CTE, derived table)
When selecting from a single table or view, the FROM clause requires nothing more than the name.
An alias may be useful or even necessary if there are subqueries that refer to the main select
statement (as they often do — subqueries like this are called correlated subqueries).
Examples
select firstname,
middlename,
lastname,
date_of_birth,
(select name from schools s where p.school = s.id) schoolname
from pupils p
where year_started = '2012'
order by schoolname, date_of_birth
Correct use:
SELECT PEARS
FROM FRUIT;
SELECT FRUIT.PEARS
FROM FRUIT;
SELECT PEARS
FROM FRUIT F;
SELECT F.PEARS
FROM FRUIT F;
Incorrect use:
SELECT FRUIT.PEARS
FROM FRUIT F;
A selectable stored procedure is a procedure that utilizes the SUSPEND keyword, so the caller can fetch the output rows one by one, like selecting from a table or view.
The output parameters of a selectable stored procedure correspond to the columns of a regular
table.
Selecting from a stored procedure without input parameters is like selecting from a table or view:
Any required input parameters must be specified after the procedure name, enclosed in
parentheses:
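Sketches for both cases (the procedure names and parameter values are hypothetical):

```sql
-- Without input parameters
SELECT * FROM interesting_transactions
ORDER BY amount;

-- With required input parameters
SELECT * FROM interesting_transactions(2023, 373, 'S')
ORDER BY amount;
```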
Values for optional parameters (that is, parameters for which default values have been defined)
may be omitted or provided. However, if you provide them only partly, the parameters you omit
must all be at the tail end.
Supposing that the procedure visible_stars from the previous example has two optional
parameters: min_magn numeric(3,1) and spectral_class varchar(12), the following queries are all
valid:
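Sketches, assuming the required parameters are a location and a date:

```sql
SELECT * FROM visible_stars('Brugge', current_date);
SELECT * FROM visible_stars('Brugge', current_date, 4.0);
SELECT * FROM visible_stars('Brugge', current_date, 4.0, 'K');
```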
But this one isn’t, because there’s a “hole” in the parameter list:
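A sketch of such an invalid call, skipping min_magn while providing spectral_class:

```sql
-- invalid: min_magn is omitted but a later parameter is supplied
SELECT * FROM visible_stars('Brugge', current_date, 'K');
```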
An alias for a selectable stored procedure is specified after the parameter list:
select
number,
(select name from contestants c where c.number = gw.number)
from get_winners('#34517', 'AMS') gw
If you refer to an output parameter (“column”) by qualifying it with the full procedure name, the
procedure alias should be omitted:
select
number,
(select name from contestants c where c.number = get_winners.number)
from get_winners('#34517', 'AMS')
See also
Stored Procedures, CREATE PROCEDURE
A derived table is a valid SELECT statement enclosed in parentheses, optionally followed by a table
alias and/or column aliases. The result set of the statement acts as a virtual table which the
enclosing statement can query.
Syntax
(<query-expression>) [<correlation-or-recognition>]
<correlation-or-recognition> ::=
[AS] correlation-name [(<column-name-list>)]
The SQL standard requires the <correlation-or-recognition>, and not providing one
makes it hard to reference the derived table or its columns. For maximum compatibility, we recommend always specifying an alias.
The result set returned by this “SELECT … FROM (SELECT FROM …)” style of statement is a virtual
table that can be queried within the enclosing statement, as if it were a regular table or view.
The keyword LATERAL marks a table as a lateral derived table. Lateral derived tables can reference
tables (including other derived tables) that occur earlier in the FROM clause. See Joins with LATERAL
Derived Tables for more information.
The derived table in the query below returns the list of table names in the database, and the
number of columns in each table. A “drill-down” query on the derived table returns the counts of
fields and the counts of tables having each field count:
SELECT
FIELDCOUNT,
COUNT(RELATION) AS NUM_TABLES
FROM (SELECT
R.RDB$RELATION_NAME RELATION,
COUNT(*) AS FIELDCOUNT
FROM RDB$RELATIONS R
JOIN RDB$RELATION_FIELDS RF
ON RF.RDB$RELATION_NAME = R.RDB$RELATION_NAME
GROUP BY RELATION)
GROUP BY FIELDCOUNT
A trivial example demonstrating how the alias of a derived table and the list of column aliases (both
optional) can be used:
SELECT
DBINFO.DESCR, DBINFO.DEF_CHARSET
FROM (SELECT *
FROM RDB$DATABASE) DBINFO
(DESCR, REL_ID, SEC_CLASS, DEF_CHARSET)
• be nested
• be unions, and can be used in unions
• have WHERE, ORDER BY and GROUP BY clauses, FIRST/SKIP or ROWS directives, et al.
Furthermore,
• Each column in a derived table must have a name. If it does not have a name,
such as when it is a constant or a run-time expression, it should be given an
alias, either in the regular way or by including it in the list of column aliases in
the derived table’s specification.
◦ The list of column aliases is optional but, if it exists, it must contain an alias
for every column in the derived table
Suppose we have a table COEFFS which contains the coefficients of a number of quadratic equations
we have to solve. It has been defined like this:
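A plausible definition (a sketch; the exact types may differ):

```sql
CREATE TABLE coeffs (
  a DOUBLE PRECISION NOT NULL,
  b DOUBLE PRECISION NOT NULL,
  c DOUBLE PRECISION NOT NULL
);
```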
Depending on the values of a, b and c, each equation may have zero, one or two solutions. It is
possible to find these solutions with a single-level query on table COEFFS, but the code will look
messy and several values (like the discriminant) will have to be calculated multiple times per row.
A derived table can help keep things clean here:
select
iif (D >= 0, (-b - sqrt(D)) / denom, null) sol_1,
iif (D > 0, (-b + sqrt(D)) / denom, null) sol_2
from
(select b, b*b - 4*a*c, 2*a from coeffs) (b, D, denom)
If we want to show the coefficients next to the solutions (which may not be a bad idea), we can alter
the query like this:
select
a, b, c,
iif (D >= 0, (-b - sqrt(D)) / denom, null) sol_1,
iif (D > 0, (-b + sqrt(D)) / denom, null) sol_2
from
  (select a, b, c, b*b - 4*a*c as D, 2*a as denom
   from coeffs)
Notice that whereas the first query used a column aliases list for the derived table, the second adds
aliases internally where needed. Both methods work, as long as every column is guaranteed to have
a name.
All columns in the derived table will be evaluated as many times as they are
specified in the main query. This is important, as it can lead to unexpected results
when using non-deterministic functions. The following shows an example of this.
SELECT
UUID_TO_CHAR(X) AS C1,
UUID_TO_CHAR(X) AS C2,
UUID_TO_CHAR(X) AS C3
FROM (SELECT GEN_UUID() AS X
FROM RDB$DATABASE) T;
C1 80AAECED-65CD-4C2F-90AB-5D548C3C7279
C2 C1214CD3-423C-406D-B5BD-95BF432ED3E3
C3 EB176C10-F754-4689-8B84-64B666381154
To ensure a single result of the GEN_UUID function, you can use the following
method:
SELECT
UUID_TO_CHAR(X) AS C1,
UUID_TO_CHAR(X) AS C2,
UUID_TO_CHAR(X) AS C3
FROM (SELECT GEN_UUID() AS X
FROM RDB$DATABASE
UNION ALL
SELECT NULL FROM RDB$DATABASE WHERE 1 = 0) T;
C1 80AAECED-65CD-4C2F-90AB-5D548C3C7279
C2 80AAECED-65CD-4C2F-90AB-5D548C3C7279
C3 80AAECED-65CD-4C2F-90AB-5D548C3C7279
SELECT
UUID_TO_CHAR(X) AS C1,
UUID_TO_CHAR(X) AS C2,
UUID_TO_CHAR(X) AS C3
FROM (SELECT
(SELECT GEN_UUID() FROM RDB$DATABASE) AS X
FROM RDB$DATABASE) T;
A common table expression — or CTE — is a more complex variant of the derived table, but it is also
more powerful. A preamble, starting with the keyword WITH, defines one or more named CTEs, each
with an optional column aliases list. The main query, which follows the preamble, can then access
these CTEs as if they were regular tables or views. The CTEs go out of scope once the main query
has run to completion.
For a full discussion of CTEs, please refer to the section Common Table Expressions (“WITH … AS …
SELECT”).
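The quadratic-equations query from the previous section could be rewritten with a CTE like this (a sketch, reusing the hypothetical coeffs table):

```sql
WITH vars (b, D, denom) AS (
  SELECT b, b*b - 4*a*c, 2*a FROM coeffs
)
SELECT
  IIF(D >= 0, (-b - SQRT(D)) / denom, NULL) sol_1,
  IIF(D >  0, (-b + SQRT(D)) / denom, NULL) sol_2
FROM vars;
```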
Except for the fact that the calculations that have to be made first are now at the beginning, this
isn’t a great improvement over the derived table version. However, we can now also eliminate the
double calculation of sqrt(D) for every row:
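A sketch of such a query, computing sqrt(D) once in a second CTE that builds on the first:

```sql
WITH vars (b, D, denom) AS (
  SELECT b, b*b - 4*a*c, 2*a FROM coeffs
),
vars2 (b, D, denom, sqrtD) AS (
  SELECT b, D, denom, IIF(D >= 0, SQRT(D), NULL) FROM vars
)
SELECT
  IIF(D >= 0, (-b - sqrtD) / denom, NULL) sol_1,
  IIF(D >  0, (-b + sqrtD) / denom, NULL) sol_2
FROM vars2;
```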
The code is a little more complicated now, but it might execute more efficiently (depending on what
takes more time: executing the SQRT function or passing the values of b, D and denom through an
extra CTE). Incidentally, we could have done the same with derived tables, but that would involve
nesting.
All columns in the CTE will be evaluated as many times as they are specified in the
main query. This is important, as it can lead to unexpected results when using non-
deterministic functions. The following shows an example of this.
WITH T (X) AS (
SELECT GEN_UUID()
FROM RDB$DATABASE)
SELECT
UUID_TO_CHAR(X) as c1,
UUID_TO_CHAR(X) as c2,
UUID_TO_CHAR(X) as c3
FROM T
C1 80AAECED-65CD-4C2F-90AB-5D548C3C7279
C2 C1214CD3-423C-406D-B5BD-95BF432ED3E3
C3 EB176C10-F754-4689-8B84-64B666381154
To ensure a single result of the GEN_UUID function, you can use the following
method:
WITH T (X) AS (
SELECT GEN_UUID()
FROM RDB$DATABASE
UNION ALL
SELECT NULL FROM RDB$DATABASE WHERE 1 = 0)
SELECT
UUID_TO_CHAR(X) as c1,
UUID_TO_CHAR(X) as c2,
UUID_TO_CHAR(X) as c3
FROM T;
C1 80AAECED-65CD-4C2F-90AB-5D548C3C7279
C2 80AAECED-65CD-4C2F-90AB-5D548C3C7279
C3 80AAECED-65CD-4C2F-90AB-5D548C3C7279
WITH T (X) AS (
SELECT (SELECT GEN_UUID() FROM RDB$DATABASE)
FROM RDB$DATABASE)
SELECT
UUID_TO_CHAR(X) as c1,
UUID_TO_CHAR(X) as c2,
UUID_TO_CHAR(X) as c3
FROM T;
See also
Common Table Expressions (“WITH … AS … SELECT”).
6.1.4. Joins
Joins combine data from two sources into a single set. This is done on a row-by-row basis and
usually involves checking a join condition to determine which rows should be merged and appear
in the resulting dataset. There are several types (INNER, OUTER) and classes (qualified, natural, etc.) of
joins, each with its own syntax and rules.
Since joins can be chained, the datasets involved in a join may themselves be joined sets.
Syntax
SELECT
...
FROM <table-reference> [, <table-reference> ...]
[...]
<table-primary> ::=
<table-or-query-name> [[AS] correlation-name]
| [LATERAL] <derived-table> [<correlation-or-recognition>]
| <parenthesized-joined-table>
<table-or-query-name> ::=
table-name
| query-name
| [package-name.]procedure-name [(<procedure-args>)]
<correlation-or-recognition> ::=
[AS] correlation-name [(<column-name-list>)]
<parenthesized-joined-table> ::=
(<parenthesized-joined-table>)
| (<joined-table>)
<joined-table> ::=
<cross-join>
| <natural-join>
| <qualified-join>
<cross-join> ::=
<table-reference> CROSS JOIN <table-primary>
<natural-join> ::=
<table-reference> NATURAL [<join-type>] JOIN <table-primary>
<qualified-join> ::=
<table-reference> [<join-type>] JOIN <table-primary>
{ ON <search-condition>
| USING (<column-name-list>) }
Argument Description
correlation-name The alias of a data source (table, view, procedure, CTE, derived table)
column-name-list List of aliases of the columns of a derived table, or the list of columns
used for an equi-join
A join combines data rows from two sets (usually referred to as the left set and the right set). By
default, only rows that meet the join condition (i.e. that match at least one row in the other set
when the join condition is applied) make it into the result set. This default type of join is called an
inner join. Suppose we have the following two tables:
Table A
ID S
235 Silence
Table B
CODE X
-23 56.7735
87 416.0
select *
from A
join B on A.id = B.code;
ID S CODE X
The first row of A has been joined with the second row of B because together they met the condition
“A.id = B.code”. The other rows from the source tables have no match in the opposite set and are
therefore not included in the join. Remember, this is an INNER join. We can make that fact explicit by
writing:
select *
from A
inner join B on A.id = B.code;
It is perfectly possible that a row in the left set matches several rows from the right set or vice
versa. In that case, all those combinations are included, and we can get results like:
ID S CODE X
Sometimes we want (or need) all the rows of one or both of the sources to appear in the joined set,
even if they don’t match a record in the other source. This is where outer joins come in. A LEFT
outer join includes all the records from the left set, but only matching records from the right set. In
a RIGHT outer join it’s the other way around. A FULL outer join includes all the records from both
sets. In all outer joins, the “holes” (the places where an included source record doesn’t have a
match in the other set) are filled up with NULLs.
To make an outer join, you must specify LEFT, RIGHT or FULL, optionally followed by the keyword
OUTER.
Below are the results of the various outer joins when applied to our original tables A and B:
select *
from A
left outer join B on A.id = B.code;
ID S CODE X
select *
from A
right outer join B on A.id = B.code
ID S CODE X
select *
from A
full outer join B on A.id = B.code
ID S CODE X
Qualified joins
Qualified joins specify conditions for the combining of rows. This happens either explicitly in an ON
clause or implicitly in a USING clause.
Syntax
<qualified-join> ::=
<table-reference> [<join-type>] JOIN <table-primary>
{ ON <search-condition>
| USING (<column-name-list>) }
Explicit-condition joins
Most qualified joins have an ON clause, with an explicit condition that can be any valid Boolean
expression, but usually involves a comparison between the two sources involved.
Often, the condition is an equality test (or a number of ANDed equality tests) using the “=” operator.
Joins like these are called equi-joins. (The examples in the section on inner and outer joins were all
equi-joins.)
/* For each man, select the women who are taller than he.
Men for whom no such woman exists are not included. */
select m.fullname as man, f.fullname as woman
from males m
join females f on f.height > m.height;
Equi-joins often compare columns that have the same name in both tables. If this is the case, we can
also use the second type of qualified join: the named columns join.
Named columns joins have a USING clause which states only the column names. So instead of this:

select * from flotsam f
join jetsam j
on f.sea = j.sea
and f.ship = j.ship;

we can write:

select * from flotsam
join jetsam using (sea, ship);

which is considerably shorter. The result set is a little different though — at least when using
“SELECT *”:
• The explicit-condition join — with the ON clause — will contain each of the columns SEA and SHIP
twice: once from table FLOTSAM, and once from table JETSAM. Obviously, they will have the same
values.
• The named columns join — with the USING clause — will contain these columns only once.
If you want all the columns in the result set of the named columns join, set up your query like this:

select f.*, j.*
from flotsam f
join jetsam j using (sea, ship);

This will give you the same result set as the explicit-condition join.
For an OUTER named columns join, there’s an additional twist when using “SELECT *” or an
unqualified column name from the USING list:
If a row from one source set doesn’t have a match in the other but must still be included because of
the LEFT, RIGHT or FULL directive, the merged column in the joined set gets the non-NULL value. That is
fair enough, but now you can’t tell whether this value came from the left set, the right set, or both.
This can be especially deceiving when the value came from the right hand set, because “*” always
shows combined columns in the left hand part — even in the case of a RIGHT join.
Whether this is a problem or not depends on the situation. If it is, use the “a.*, b.*” approach
shown above, with a and b the names or aliases of the two sources. Or better yet, avoid “*”
altogether in your serious queries and qualify all column names in joined sets. This has the
additional benefit that it forces you to think about which data you want to retrieve and where from.
It is your responsibility to make sure the column names in the USING list are of compatible types
between the two sources. If the types are compatible but not equal, the engine converts them to the
type with the broadest range of values before comparing the values. This will also be the data type
of the merged column that shows up in the result set if “SELECT *” or the unqualified column name
is used. Qualified columns on the other hand will always retain their original data type.
If, when joining by named columns, you are using a join column in the WHERE
clause, always use the qualified column name, otherwise an index on this column
will not be used. With the qualified name, the index is used:

SELECT 1 FROM t1 a JOIN t2 b USING (x) WHERE a.x = 0;
-- PLAN JOIN (A INDEX (RDB$1), B INDEX (RDB$2))

The fact is, the unqualified column in this case is implicitly replaced by
COALESCE(a.x, b.x). This trick is used to disambiguate column names, but it also
interferes with the use of the index.
Natural joins
Taking the idea of the named columns join a step further, a natural join performs an automatic
equi-join on all the columns that have the same name in the left and right table. The data types of
these columns must be compatible.
Syntax
<natural-join> ::=
<table-reference> NATURAL [<join-type>] JOIN <table-primary>
create table TA (
a bigint,
s varchar(12),
ins_date date
);
create table TB (
a bigint,
descr varchar(12),
x float,
ins_date date
);
A natural join on TA and TB would involve the columns a and ins_date, and the following two
statements would have the same effect:
select * from TA
natural join TB;
select * from TA
join TB using (a, ins_date);
Like all joins, natural joins are inner joins by default, but you can turn them into outer joins by
specifying LEFT, RIGHT or FULL before the JOIN keyword.
If there are no columns with the same name in the two source relations, a CROSS
JOIN is performed. We’ll get to this type of join next.
Cross joins
A cross join produces the full set product — or Cartesian product — of the two data sources. This
means that it successfully matches every row in the left source to every row in the right source.
Syntax
<cross-join> ::=
<table-reference> CROSS JOIN <table-primary>
Use of the comma syntax is discouraged, and we recommend using the explicit join syntax.
Cross-joining two sets is equivalent to joining them on a tautology (a condition that is always true).
The following two statements have the same effect:
select * from TA
cross join TB;
select * from TA
join TB on TRUE;
Cross joins are inner joins, because they only include matching records; it just so happens that
every record matches! An outer cross join, if it existed, wouldn’t add anything to the result, because
what outer joins add are non-matching records, and these don’t exist in cross joins.
Cross joins are seldom useful, except if you want to list all the possible combinations of two or more
variables. Suppose you are selling a product that comes in different sizes, different colors and
different materials. If these variables are each listed in a table of their own, this query would
return all the combinations:
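A sketch of such a query (the table and column names here are assumed for illustration, not taken from the original example):

```sql
select s.size, c.color, m.material
from sizes s
cross join colors c
cross join materials m;
```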
Implicit Joins
In the SQL:89 standard, the tables involved in a join were specified as a comma-delimited list in the
FROM clause (in other words, a cross join). The join conditions were then specified in the WHERE clause
among other search terms. This type of join is called an implicit join.
/*
* A sample of all Detroit customers who
* made a purchase.
*/
SELECT *
FROM customers c, sales s
WHERE s.cust_id = c.id AND c.city = 'Detroit'
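The same query written with an explicit join reads as follows (equivalent to the implicit form above):

```sql
SELECT *
FROM customers c
JOIN sales s ON s.cust_id = c.id
WHERE c.city = 'Detroit'
```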
Mixing explicit and implicit joins is not recommended, but is allowed. However, some types of mixing
are not supported by Firebird.
For example, the following query will raise the error “Column does not belong to referenced table”:
SELECT *
FROM TA, TB
JOIN TC ON TA.COL1 = TC.COL1
That is because the explicit join cannot see the TA table. However, the next query will complete
without error, since the restriction is not violated.
SELECT *
FROM TA, TB
JOIN TC ON TB.COL1 = TC.COL1
WHERE TA.COL2 = TB.COL2
A Note on Equality
This note about equality and inequality operators applies everywhere in Firebird’s
SQL language, not only in JOIN conditions.
The “=” operator, which is explicitly used in many conditional joins and implicitly in named column
joins and natural joins, only matches values to values. According to the SQL standard, NULL is not a
value and hence two NULLs are neither equal nor unequal to one another. If you need NULLs to match
each other in a join, use the IS NOT DISTINCT FROM operator. This operator returns true if the
operands have the same value or if they are both NULL.
select *
from A join B
on A.id is not distinct from B.code;
Likewise, when you want to join on inequality, use IS DISTINCT FROM, not “<>”, if you want NULL to be
considered different from any value and two NULLs considered equal:
select *
from A join B
on A.id is distinct from B.code;
Firebird rejects unqualified field names in a query if these field names exist in more than one
dataset involved in a join. This is even true for inner equi-joins where the field name figures in the
ON clause like this:
select a, b, c
from TA
join TB on TA.a = TB.a;
There is one exception to this rule: with named columns joins and natural joins, the unqualified
field name of a column taking part in the matching process may be used legally and refers to the
merged column of the same name. For named columns joins, these are the columns listed in the
USING clause. For natural joins, they are the columns that have the same name in both relations. But
please notice again that, especially in outer joins, plain colname isn’t always the same as
left.colname or right.colname. Types may differ, and one of the qualified columns may be NULL
while the other isn’t. In that case, the value in the merged, unqualified column may mask the fact
that one of the source values is absent.
A derived table defined with the LATERAL keyword is called a lateral derived table. If a derived table
is defined as lateral, then it is allowed to refer to other tables in the same FROM clause, but only those
declared before it in the FROM clause.
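A sketch of a lateral derived table, assuming hypothetical CUSTOMERS and ORDERS tables; the derived table refers to the customer row declared before it in the same FROM clause:

```sql
select c.name, o.order_date
from customers c
left join lateral (
  -- the subquery may reference "c" because it is declared earlier
  select first 1 ord.order_date
  from orders ord
  where ord.customer_id = c.id
  order by ord.order_date desc
) o on true;
```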
The WHERE clause serves to limit the rows returned to the ones that the caller is interested in. The
condition following the keyword WHERE can be as simple as a check like “AMOUNT = 3” or it can be a
multilayered, convoluted expression containing subselects, predicates, function calls, mathematical
and logical operators, context variables and more.
The condition in the WHERE clause is often called the search condition, the search expression or
simply the search.
In DSQL and ESQL, the search condition may contain parameters. This is useful if a query has to be
repeated a number of times with different input values. In the SQL string as it is passed to the
server, question marks are used as placeholders for the parameters. These question marks are
called positional parameters because they can only be told apart by their position in the string.
Connectivity libraries often support named parameters of the form :id, :amount, :a etc. These are
more user-friendly; the library takes care of translating the named parameters to positional
parameters before passing the statement to the server.
The search condition may also contain local (PSQL) or host (ESQL) variable names, preceded by a
colon.
Syntax
SELECT ...
FROM ...
[...]
WHERE <search-condition>
[...]
Only those rows for which the search condition evaluates to TRUE are included in the result set. Be
careful with possible NULL outcomes: if you negate a NULL expression with NOT, the result will still be
NULL and the row will not pass. This is demonstrated in one of the examples below.
Examples
The following example shows what can happen if the search condition evaluates to NULL.
Suppose you have a table listing children’s names and the number of marbles they possess. At a
certain moment, the table contains this data:
CHILD MARBLES
Anita 23
Bob E. 12
Chris <null>
Deirdre 1
Eve 17
Fritz 0
Gerry 21
Hadassah <null>
Isaac 6
First, please notice the difference between NULL and 0: Fritz is known to have no marbles at all,
Chris’s and Hadassah’s marble counts are unknown.
With the search condition “marbles > 10” you will get the names Anita, Bob E., Eve and Gerry. These children all have more than 10 marbles.
With the condition “not (marbles > 10)” it’s the turn of Deirdre, Fritz and Isaac to fill the list. Chris and Hadassah are not included, because
they aren’t known to have ten or fewer marbles. Should you change that last condition to “marbles <= 10”,
the result will still be the same, because the expression NULL <= 10 yields UNKNOWN. This is not the
same as TRUE, so Chris and Hadassah are not listed. If you want them listed with the “poor” children,
change the condition to “marbles <= 10 or marbles is null”.
Now the search condition becomes true for Chris and Hadassah, because “marbles is null”
obviously returns TRUE in their case. In fact, the search condition cannot be NULL for anybody now.
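Assuming the table is called KIDS (the name is hypothetical; the original does not name it), that last query would read:

```sql
select child, marbles
from kids
where marbles <= 10 or marbles is null;
```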
Lastly, two examples of SELECT queries with parameters in the search. It depends on the application
how you should define query parameters and even if it is possible at all. Notice that queries like
these cannot be executed immediately: they have to be prepared first. Once a parameterized query
has been prepared, the user (or calling code) can supply values for the parameters and have it
executed many times, entering new values before every call. How the values are entered and the
execution started is up to the application. In a GUI environment, the user typically types the
parameter values in one or more text boxes and then clicks an “Execute”, “Run” or “Refresh”
button.
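For illustration (the table and column names are assumptions), here is the same query first with positional and then with named parameters:

```sql
select name, address, phone
from stores
where city = ? and class = ?;

select name, address, phone
from stores
where city = :city and class = :class;
```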
The last query cannot be passed directly to the engine; the application must convert it to the other
format first, mapping named parameters to positional parameters.
GROUP BY merges output rows that have the same combination of values in its item list into a single
row. Aggregate functions in the select list are applied to each group individually instead of to the
dataset as a whole.
If the select list only contains aggregate columns or, more generally, columns whose values don’t
depend on individual rows in the underlying set, GROUP BY is optional. When omitted, the final
result set consists of a single row (provided that at least one aggregated column is present).
If the select list contains both aggregate columns and columns whose values may vary per row, the
GROUP BY clause becomes mandatory.
Syntax
<grouping-item> ::=
<non-aggr-select-item>
| <non-aggr-expression>
<non-aggr-select-item> ::=
column-copy
| column-alias
| column-position
Argument Description
<grouping-item> Expression to group on; in the rest of this chapter, we use <value-
expression> in GROUP BY syntax
non-aggr-expression Any non-aggregating expression that is not included in the SELECT list, i.e.
unselected columns from the source set or expressions that do not
depend on the data in the set at all
column-copy A literal copy, from the SELECT list, of an expression that contains no
aggregate function
column-alias The alias, from the SELECT list, of an expression (column) that contains no
aggregate function
column-position The position number, in the SELECT list, of an expression (column) that
contains no aggregate function
A general rule of thumb is that every non-aggregate item in the SELECT list must also be in the GROUP BY list. You can do this in one of three ways:
1. By copying the item verbatim from the select list, e.g. “class” or “'D:' || upper(doccode)”.
2. By specifying the column alias, if it exists.
3. By specifying the column position as an integer literal between 1 and the number of columns.
Integer values resulting from expressions or parameter substitutions are simple constant values
and not column positions, and will be used as such in the grouping. They will have no effect
though, as their value is the same for each row.
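For example, grouping by a column alias and by column position (an illustrative sketch, using the STUDENTS table from the examples below):

```sql
select class as c, count(*)
from students
group by c;   -- grouping by column alias

select class, count(*)
from students
group by 1;   -- grouping by column position
```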
In addition to the required items, the grouping list may also contain:
• Columns from the source table that are not in the select list, or non-aggregate expressions based
on such columns. Adding such columns may further subdivide the groups. However, since these
columns are not in the select list, you can’t tell which aggregated row corresponds to which
value in the column. So, in general, if you are interested in this information, you also include
the column or expression in the select list — which brings you back to the rule: “every non-
aggregate column in the select list must also be in the grouping list”.
• Expressions that aren’t dependent on the data in the underlying set, e.g. constants, context
variables, single-value non-correlated subselects etc. This is only mentioned for completeness,
as adding such items is utterly pointless: they don’t affect the grouping at all. “Harmless but
useless” items like these may also figure in the select list without being copied to the grouping
list.
Examples
When the select list contains only aggregate columns, GROUP BY is not mandatory:

select count(*), avg(age)
from students
where sex = 'M';
This will return a single row listing the number of male students and their average age. Adding
expressions that don’t depend on values in individual rows of table STUDENTS doesn’t change that:

select count(*), avg(age), current_date
from students
where sex = 'M';
The row will now have an extra column showing the current date, but other than that, nothing
fundamental has changed. A GROUP BY clause is still not required.
However, as soon as we add a GROUP BY clause, groups are formed:

select count(*), avg(age)
from students
where sex = 'M'
group by class;

This will return a row for each class that has boys in it, listing the number of boys and their average
age in that particular class. (If you also leave the current_date field in, this value will be repeated on
every row, which is not very exciting.)
The above query has a major drawback though: it gives you information about the different classes,
but it doesn’t tell you which row applies to which class. To get that extra bit of information, add the
non-aggregate column CLASS to the select list:

select class, count(*), avg(age)
from students
where sex = 'M'
group by class;
Now we have a useful query. Notice that the addition of column CLASS also makes the GROUP BY
clause mandatory. We can’t drop that clause anymore, unless we also remove CLASS from the
column list.
The output of our last query may look something like this:

CLASS  COUNT  AVG
2A     12     13.5
2B     9      13.9
3A     11     14.6
3B     12     14.4
…      …      …
The headings “COUNT” and “AVG” are not very informative. In a simple case like this, you might get
away with that, but in general you should give aggregate columns a meaningful name by aliasing
them:
select class,
count(*) as num_boys,
avg(age) as boys_avg_age
from students
where sex = 'M'
group by class;
Adding more non-aggregate (or, row-dependent) columns requires adding them to the GROUP BY
clause too. For instance, you might want to see the above information for girls as well; and you may
also want to differentiate between boarding and day students:
select class,
sex,
boarding_type,
count(*) as number,
avg(age) as avg_age
from students
group by class, sex, boarding_type;
CLASS  SEX  BOARDING_TYPE  NUMBER  AVG_AGE
2A     F    BOARDING       9       13.3
2A     F    DAY            6       13.5
2A     M    BOARDING       7       13.6
2A     M    DAY            5       13.4
2B     F    BOARDING       11      13.7
2B     F    DAY            5       13.7
2B     M    BOARDING       6       13.8
…      …    …              …       …
Each row in the result set corresponds to one particular combination of the columns CLASS, SEX and
BOARDING_TYPE. The aggregate results — number and average age — are given for each of these
groups individually. In a query like this, you don’t see a total for boys as a whole, or day students as
a whole. That’s the tradeoff: the more non-aggregate columns you add, the more you can pinpoint
specific groups, but the more you also lose sight of the general picture. Of course, you can still
obtain the “coarser” aggregates through separate queries.
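For instance, a separate query for the totals per sex alone might look like:

```sql
select sex,
       count(*) as number,
       avg(age) as avg_age
from students
group by sex;
```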
HAVING
Just as a WHERE clause limits the rows in a dataset to those that meet the search condition, so the
HAVING sub-clause imposes restrictions on the aggregated rows in a grouped set. HAVING is optional,
and can only be used in conjunction with GROUP BY. The condition(s) in the HAVING clause can refer to:
• Any aggregated column in the select list. This is the most widely used case.
• Any aggregated expression that is not in the select list, but allowed in the context of the query.
This is sometimes useful too.
• Any column in the GROUP BY list. While legal, it is more efficient to filter on these non-aggregated
data at an earlier stage: in the WHERE clause.
• Any expression whose value doesn’t depend on the contents of the dataset (like a constant or a
context variable). This is valid but not useful, because it will either suppress the entire set or
leave it untouched, based on conditions that have nothing to do with the set itself.
• Column positions. An integer in the HAVING clause is just an integer, not a column position.
Examples
Building on our earlier examples, this could be used to skip small groups of students:
select class,
count(*) as num_boys,
avg(age) as boys_avg_age
from students
where sex = 'M'
group by class
having count(*) >= 5;
The next example retrieves only the classes where the age difference between the oldest and youngest boy exceeds 1.2 years:
select class,
count(*) as num_boys,
avg(age) as boys_avg_age
from students
where sex = 'M'
group by class
having max(age) - min(age) > 1.2;
Notice that if you’re interested in this information, you’ll likely also include min(age) and
max(age) — or the expression “max(age) - min(age)”.
The next example filters on the non-aggregated column CLASS in the HAVING clause:
select class,
count(*) as num_boys,
avg(age) as boys_avg_age
from students
where sex = 'M'
group by class
having class starting with '3';
However, because CLASS is a non-aggregated column, the same restriction is more efficiently expressed in the WHERE clause:
select class,
count(*) as num_boys,
avg(age) as boys_avg_age
from students
where sex = 'M' and class starting with '3'
group by class;
The WINDOW clause defines one or more named windows that can be referenced by window
functions in the current query specification.
Syntax
<query_spec> ::=
SELECT
[<limit_clause>]
[<distinct_clause>]
<select_list>
<from_clause>
[<where_clause>]
[<group_clause>]
[<having_clause>]
[<named_windows_clause>]
[<plan_clause>]
<named_windows_clause> ::=
WINDOW <window_definition> [, <window_definition> ...]
<window-specification-details> ::=
!! See Window (Analytical) Functions !!
In a query with multiple SELECT and WINDOW clauses (for example, with subqueries), the scope of a
window name is confined to its query context. That means a window name from an inner
context cannot be used in an outer context, nor vice versa. However, the same window name can
be used independently in different contexts, though to avoid confusion it might be better to avoid
this.
select
id,
department,
salary,
count(*) over w1,
first_value(salary) over w2,
last_value(salary) over w2
from employee
window w1 as (partition by department),
w2 as (w1 order by salary)
order by department, salary;
The PLAN clause enables the user to submit a data retrieval plan, thus overriding the plan that the
optimizer would have generated automatically.
Syntax
PLAN <plan-expression>
<plan-expression> ::=
(<plan-item> [, <plan-item> ...])
| <sorted-item>
| <joined-item>
| <merged-item>
| <hash-item>
<joined-item> ::=
JOIN (<plan-item>, <plan-item> [, <plan-item> ...])
<merged-item> ::=
[SORT] MERGE (<sorted-item>, <sorted-item> [, <sorted-item> ...])
<hash-item> ::=
HASH (<plan-item>, <plan-item> [, <plan-item> ...])
<sorted-item> ::=
SORT (<plan-item>)
<plan-item> ::= <basic-item> | <plan-expression>
<basic-item> ::=
<relation> { NATURAL
| INDEX (<indexlist>)
| ORDER index [INDEX (<indexlist>)] }
Argument Description
Every time a user submits a query to the Firebird engine, the optimizer computes a data retrieval
strategy. Most Firebird clients can make this retrieval plan visible to the user. In Firebird’s own isql
utility, this is done with the command SET PLAN ON. If you are only interested in looking at query
plans, SET PLANONLY ON will show the plan without executing the query. Use SET PLANONLY OFF to
execute the query and show the plan.
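A sketch of an isql session using these commands (the query and index name are assumptions for illustration):

```sql
SET PLANONLY ON;
select * from students where class = '3C';
-- isql now prints only the plan, e.g. PLAN (STUDENTS INDEX (IX_STUD_CLASS))
SET PLANONLY OFF;
```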
A more detailed plan can be obtained when you enable an advanced plan. In isql this can be done
with SET EXPLAIN ON. The advanced plan displays more detailed information about the access
methods used by the optimizer, however it cannot be included in the PLAN clause of a statement. The
description of the advanced plan is beyond the scope of this Language Reference.
In most situations, you can trust that Firebird will select the optimal query plan for you. However,
if you have complicated queries that seem to be underperforming, it may be worth your while to
examine the plan and see if you can improve on it.
Simple Plans
The simplest plans consist of a relation name followed by a retrieval method. For example, for an
unsorted single-table select without a WHERE clause:

select * from students
plan (students natural);
Advanced plan:
Select Expression
-> Table "STUDENTS" Full Scan
If there’s a WHERE or a HAVING clause, you can specify the index to be used for finding matches:

select *
from students
where class = '3C'
plan (students index (ix_stud_class));
Advanced plan:
Select Expression
-> Filter
-> Table "STUDENTS" Access By ID
-> Bitmap
-> Index "IX_STUD_CLASS" Range Scan (full match)
The INDEX directive is also used for join conditions (to be discussed a little later). It can contain a list
of indexes, separated by commas.
ORDER specifies the index for sorting the set if an ORDER BY or GROUP BY clause is present:

select *
from students
plan (students order pk_students)
order by id;
Advanced plan:
Select Expression
-> Table "STUDENTS" Access By ID
-> Index "PK_STUDENTS" Full Scan
Advanced plan:
Select Expression
-> Filter
-> Table "STUDENTS" Access By ID
-> Index "PK_STUDENTS" Full Scan
-> Bitmap
-> Index "IX_STUD_CLASS" Range Scan (lower bound: 1/1)
Advanced plan:
Select Expression
-> Filter
-> Table "STUDENTS" Access By ID
-> Index "IX_STUD_CLASS" Range Scan (lower bound: 1/1)
-> Bitmap
-> Index "IX_STUD_CLASS" Range Scan (lower bound: 1/1)
For sorting sets when there’s no usable index available (or if you want to suppress its use), leave
out ORDER and prepend the plan expression with SORT:

select *
from students
plan sort (students natural)
order by name;
Advanced plan:
Select Expression
-> Sort (record length: 128, key length: 56)
-> Table "STUDENTS" Full Scan
Advanced plan:
Select Expression
-> Sort (record length: 136, key length: 56)
-> Filter
-> Table "STUDENTS" Access By ID
-> Bitmap
-> Index "IX_STUD_CLASS" Range Scan (lower bound: 1/1)
Notice that SORT, unlike ORDER, is outside the parentheses. This reflects the fact that the data rows are
retrieved unordered and sorted afterward by the engine.
When selecting from a view, specify the view and the table involved. For instance, if you have a
view FRESHMEN that selects the first-year students:
Advanced plan:
Select Expression
-> Sort (record length: 144, key length: 24)
-> Filter
-> Table "STUDENTS" as "FRESHMEN" Access By ID
-> Bitmap
-> Index "PK_STUDENTS" Range Scan (lower bound: 1/1)
If a table or view has been aliased, it is the alias, not the original name, that must
be used in the PLAN clause.
Composite Plans
When a join is made, you can specify the index which is to be used for matching. You must also use
the JOIN directive on the two streams in the plan:

select *
from students s
join classes c on c.name = s.class
plan join (s natural, c index (pk_classes));
Advanced plan:
Select Expression
-> Nested Loop Join (inner)
-> Table "STUDENTS" as "S" Full Scan
-> Filter
-> Table "CLASSES" as "C" Access By ID
-> Bitmap
-> Index "PK_CLASSES" Unique Scan
from students s
join classes c on c.name = s.class
plan join (s order pk_students, c index (pk_classes))
order by s.id;
Advanced plan:
Select Expression
-> Nested Loop Join (inner)
-> Table "STUDENTS" as "S" Access By ID
-> Index "PK_STUDENTS" Full Scan
-> Filter
-> Table "CLASSES" as "C" Access By ID
-> Bitmap
-> Index "PK_CLASSES" Unique Scan
Advanced plan:
Select Expression
-> Sort (record length: 152, key length: 12)
-> Nested Loop Join (inner)
-> Table "STUDENTS" as "S" Full Scan
-> Filter
-> Table "CLASSES" as "C" Access By ID
-> Bitmap
-> Index "PK_CLASSES" Unique Scan
Advanced plan:
Select Expression
-> Sort (record length: 152, key length: 12)
-> Nested Loop Join (inner)
-> Filter
-> Table "STUDENTS" as "S" Access By ID
-> Bitmap
-> Index "FK_STUDENT_CLASS" Range Scan (lower bound: 1/1)
-> Filter
-> Table "CLASSES" as "C" Access By ID
-> Bitmap
-> Index "PK_CLASSES" Unique Scan
Advanced plan:
Select Expression
-> Sort (record length: 192, key length: 56)
-> Filter
-> Nested Loop Join (outer)
-> Table "CLASSES" as "C" Full Scan
-> Filter
-> Table "STUDENTS" as "S" Access By ID
-> Bitmap
-> Index "FK_STUDENT_CLASS" Range Scan (full match)
If there are no indices available to match the join condition (or if you don’t want to use them), then
it is possible to connect the streams using the HASH or MERGE method.
To connect using the HASH method in the plan, the HASH directive is used instead of the JOIN directive.
In this case, the smaller (secondary) stream is materialized completely into an internal buffer.
While reading this secondary stream, a hash function is applied and a pair {hash, pointer to buffer}
is written to a hash table. Then the primary stream is read and its hash key is tested against the
hash table.
select *
from students s
join classes c on c.cookie = s.cookie
plan hash (c natural, s natural)
Advanced plan:
Select Expression
-> Filter
-> Hash Join (inner)
-> Table "STUDENTS" as "S" Full Scan
-> Record Buffer (record length: 145)
-> Table "CLASSES" as "C" Full Scan
For a MERGE join, the plan must first sort both streams on their join column(s) and then merge. This
is achieved with the SORT directive (which we’ve already seen) and MERGE instead of JOIN:

select *
from students s
join classes c on c.cookie = s.cookie
plan merge (sort (c natural), sort (s natural));

Adding an ORDER BY clause means the result of the merge must also be sorted:

select *
from students s
join classes c on c.cookie = s.cookie
plan sort (merge (sort (c natural), sort (s natural)))
order by s.id;
As follows from the formal syntax definition, JOINs and MERGEs in the plan may combine more than
two streams. Also, every plan expression may be used as a plan item in an encompassing plan. This
means that plans of certain complicated queries may have various nesting levels.
Finally, instead of MERGE you may also write SORT MERGE. As this makes no difference and may create
confusion with “real” SORT directives (the ones that do make a difference), it’s best to stick to plain
MERGE.
In addition to the plan for the main query, you can specify a plan for each subquery. For example,
the following query with multiple plans will work:
select *
from color
where exists (
select *
from horse
where horse.code_color = color.code_color
plan (horse index (fk_horse_color)))
plan (color natural)
Occasionally, the optimizer will accept a plan and then not follow it, even though it
does not reject it as invalid. One such example was
6.1.9. UNION
The UNION clause concatenates two or more datasets, thus increasing the number of rows but not
the number of columns. Datasets taking part in a UNION must have the same number of columns,
and columns at corresponding positions must be of the same type.
By default, a union suppresses duplicate rows. UNION ALL shows all rows, including any duplicates.
The optional DISTINCT keyword makes the default behaviour explicit.
Syntax
<query-expression> ::=
[<with-clause>] <query-expression-body> [<order-by-clause>]
[{ <rows-clause>
| [<result-offset-clause>] [<fetch-first-clause>] }]
<query-expression-body> ::=
<query-term>
| <query-expression-body> UNION [{ DISTINCT | ALL }] <query-term>
<query-term> ::= <query-primary>
<query-primary> ::=
<query-specification>
| (<query-expression-body> [<order-by-clause>]
[<result-offset-clause>] [<fetch-first-clause>])
<query-specification> ::=
SELECT [<limit-clause>] [{ ALL | DISTINCT }] <select-list>
FROM <table-reference> [, <table-reference> ...]
[WHERE <search-condition>]
[GROUP BY <value-expression> [, <value-expression> ...]]
[HAVING <search-condition>]
[WINDOW <window-definition> [, <window-definition> ...]]
[PLAN <plan-expression>]
Unions take their column names from the first select query. If you want to alias union columns, do
so in the column list of the topmost SELECT. Aliases in other participating selects are allowed and
may even be useful, but will not propagate to the union level.
If a union has an ORDER BY clause, the only allowed sort items are integer literals indicating 1-based
column positions, optionally followed by an ASC/DESC and/or a NULLS {FIRST | LAST} directive. This
also implies that you cannot order a union by anything that isn’t a column in the union. (You can,
however, wrap it in a derived table, which gives you back all the usual sort options.)
Unions are allowed in subqueries of any kind and can themselves contain subqueries. They can
also contain joins, and can take part in a join when wrapped in a derived table.
Examples
This query presents information from different music collections in one dataset using unions:
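The example query itself did not survive conversion; the following is a plausible sketch, assuming tables cds, records and cassettes, each with columns id, title, artist and length:

```sql
-- Table names and the medium labels are assumptions.
SELECT id, title, artist, length, 'CD' AS medium
FROM cds
UNION
SELECT id, title, artist, length, 'LP' AS medium
FROM records
UNION
SELECT id, title, artist, length, 'MC' AS medium
FROM cassettes
ORDER BY 3, 2  -- artist, then title
```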
If id, title, artist and length are the only fields in the tables involved, the query can also be
written as:
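A sketch of this shortened form, under the same assumed tables; each "qualified star" stands for all columns of its table:

```sql
-- Same assumed tables; each alias is local to its own select query.
SELECT c.*, 'CD' AS medium
FROM cds c
UNION
SELECT r.*, 'LP' AS medium
FROM records r
UNION
SELECT c.*, 'MC' AS medium
FROM cassettes c
ORDER BY 3, 2
```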
Qualifying the “stars” is necessary here because they are not the only item in the column list. Notice
how the “c” aliases in the first and third select do not conflict with each other: their scopes are not
union-wide but apply only to their respective select queries.
The next query retrieves names and phone numbers from translators and proofreaders.
Translators who also work as proofreaders will show up only once in the result set, provided their
phone number is the same in both tables. The same result can be obtained without DISTINCT. With
ALL, these people would appear twice.
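The query described above might be sketched as follows, assuming translators and proofreaders tables that each have name and phone columns:

```sql
-- Table and column names are assumptions.
SELECT name, phone
FROM translators
UNION DISTINCT
SELECT name, phone
FROM proofreaders
```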
Using parenthesized query expressions to show the employees with the highest and lowest salaries:
(
select emp_no, salary, 'lowest' as type
from employee
order by salary asc
fetch first row only
)
union all
(
select emp_no, salary, 'highest' as type
from employee
order by salary desc
fetch first row only
);
6.1.10. ORDER BY
When a SELECT statement is executed, the result set is not sorted in any way. It often happens that
rows appear to be sorted chronologically, simply because they are returned in the same order they
were added to the table by INSERT statements. This is not something you should rely on: the order
may change depending on the plan or updates to rows, etc. To specify an explicit sorting order for
the set specification, an ORDER BY clause is used.
Syntax
<sort-specification> ::=
<value-expression> [<ordering-specification>] [<null-ordering>]
<ordering-specification> ::=
ASC | ASCENDING
| DESC | DESCENDING
<null-ordering> ::=
NULLS FIRST
| NULLS LAST
The ORDER BY clause consists of a comma-separated list of the columns or expressions on which the
result data set should be sorted. The sort order can be specified by the name of a
column — but only if the column was not aliased in the SELECT column list; if an alias was used
there, the alias must be used. The ordinal position of the column in the SELECT list, or the alias
given to the column with the help of the AS keyword, can be used without restriction.
The three forms of expressing the columns for the sort order can be mixed in the same ORDER BY
clause. For instance, one column in the list can be specified by its name and another column can be
specified by its number.
If you sort by column position or alias, then the expression corresponding to this
position (alias) will be copied from the SELECT list. This also applies to subqueries,
thus, the subquery will be executed at least twice.
If you use the column position to specify the sort order for a query of the SELECT *
style, the server expands the asterisk to the full column list to determine the
columns for the sort. It is, however, considered “sloppy practice” to design ordered
sets this way.
Sorting Direction
The keyword ASC — short for ASCENDING — specifies a sort direction from lowest to highest. ASC is the
default sort direction.
The keyword DESC — short for DESCENDING — specifies a sort direction from highest to lowest.
Specifying ascending order for one column and descending order for another is allowed.
Collation Order
Using the keyword COLLATE in a <value-expression> specifies the collation order to apply for a string
column if you need a collation order that is different from the normal collation for this column. The
normal collation order is defined by either the default collation for the database character set, or
the collation set explicitly in the column’s definition.
NULLs Position
The keyword NULLS defines where NULL in the associated column will fall in the sort order: NULLS
FIRST places the rows with the NULL column above rows ordered by that column’s value; NULLS LAST
places those rows after the ordered rows.
Not-parenthesized query expressions contributing to a UNION cannot take an ORDER BY clause. You
can order the entire output, using one ORDER BY clause at the end of the overall query, or use
parenthesized query expressions, which do allow ORDER BY.
The simplest — and, in some cases, the only — method for specifying the sort order is by the ordinal
column position. However, it is also valid to use the column names or aliases, from the first
contributing query only.
The ASC/DESC and/or NULLS directives are available for this global set.
If discrete ordering within the contributing set is required, use parenthesized query expressions,
derived tables, or common table expressions for those sets.
Examples of ORDER BY
Sorting the result set in ascending order, ordering by the RDB$CHARACTER_SET_ID and
RDB$COLLATION_ID columns of the RDB$COLLATIONS table:
SELECT
RDB$CHARACTER_SET_ID AS CHARSET_ID,
RDB$COLLATION_ID AS COLL_ID,
RDB$COLLATION_NAME AS NAME
FROM RDB$COLLATIONS
ORDER BY RDB$CHARACTER_SET_ID, RDB$COLLATION_ID;
SELECT
RDB$CHARACTER_SET_ID AS CHARSET_ID,
RDB$COLLATION_ID AS COLL_ID,
RDB$COLLATION_NAME AS NAME
FROM RDB$COLLATIONS
ORDER BY CHARSET_ID, COLL_ID;
SELECT
RDB$CHARACTER_SET_ID AS CHARSET_ID,
RDB$COLLATION_ID AS COLL_ID,
RDB$COLLATION_NAME AS NAME
FROM RDB$COLLATIONS
ORDER BY 1, 2;
Sorting a SELECT * query by position numbers — possible, but nasty and not recommended:
SELECT *
FROM RDB$COLLATIONS
ORDER BY 3, 2;
Sorting by the second column in the BOOKS table, or — if BOOKS has only one column — the
FILMS.DIRECTOR column:
SELECT
BOOKS.*,
FILMS.DIRECTOR
FROM BOOKS, FILMS
ORDER BY 2;
Sorting in descending order by the values of column PROCESS_TIME, with NULLs placed at the
beginning of the set:
SELECT *
FROM MSG
ORDER BY PROCESS_TIME DESC NULLS FIRST;
Sorting the set obtained by a UNION of two queries. Results are sorted in descending order for the
values in the second column, with NULLs at the end of the set; and in ascending order for the values
of the first column with NULLs at the beginning.
SELECT
DOC_NUMBER, DOC_DATE
FROM PAYORDER
UNION ALL
SELECT
DOC_NUMBER, DOC_DATE
FROM BUDGORDER
ORDER BY 2 DESC NULLS LAST, 1 ASC NULLS FIRST;
6.1.11. ROWS
Syntax

SELECT <columns> FROM ...
[WHERE ...]
[ORDER BY ...]
ROWS m [TO n]

ROWS limits the amount of rows returned by the SELECT statement to a specified number or range.
The ROWS clause does the same job as the FIRST and SKIP clauses, but neither syntax is SQL-compliant.
Unlike FIRST and SKIP, and OFFSET and FETCH, the ROWS and TO clauses accept any type of integer
expression as their arguments, without parentheses. Of course, parentheses may still be needed for
nested evaluations inside the expression, and a subquery must always be enclosed in parentheses.
• Numbering of rows in the intermediate set — the overall set cached on disk
before the “slice” is extracted — starts at 1.
• OFFSET/FETCH, FIRST/SKIP, and ROWS can all be used without the ORDER BY clause,
although it rarely makes sense to do so — except perhaps when you want to
take a quick look at the table data and don’t care that rows will be in a non-
deterministic order. For this purpose, a query like “SELECT * FROM TABLE1 ROWS
20” would return the first 20 rows instead of a whole table that might be rather
big.
Calling ROWS m retrieves the first m records from the set specified.
• If m is greater than the total number of records in the intermediate data set, the entire set is
returned
Calling ROWS m TO n retrieves the rows from the set, starting at row m and ending after row n — the
set is inclusive.
• If m is greater than the total number of rows in the intermediate set and n >= m, an empty set is
returned
• If m is not greater than n and n is greater than the total number of rows in the intermediate set,
the result set will be limited to rows starting from m, up to the end of the set
• If m < 1 and n < 1, the SELECT statement call fails with an error
While ROWS is an alternative to the FIRST and SKIP syntax, there is one situation where the ROWS
syntax does not provide the same behaviour: specifying SKIP n on its own returns the entire
intermediate set, without the first n rows. The ROWS … TO syntax needs a little help to achieve this.
With the ROWS syntax, you need a ROWS clause in association with the TO clause and deliberately make
the second (n) argument greater than the size of the intermediate data set. This is achieved by
creating an expression for n that uses a subquery to retrieve the count of rows in the intermediate
set and adds 1 to it, or use a literal with a sufficiently large value.
The ROWS clause can be used instead of the SQL-standard OFFSET/FETCH or non-standard FIRST/SKIP
clauses, except for the case where only OFFSET or SKIP is used — that is, when the whole result set
is returned except for the specified number of rows skipped from the beginning.
To implement this behaviour using ROWS, you must specify the TO clause with a value larger than the
size of the returned result set.
ROWS syntax cannot be mixed with FIRST/SKIP or OFFSET/FETCH in the same SELECT expression. Using
the different syntaxes in different subqueries in the same statement is allowed.
When ROWS is used in a UNION query, the ROWS directive is applied to the unioned set and must be
placed after the last SELECT statement.
If a need arises to limit the subsets returned by one or more SELECT statements inside UNION, there
are a couple of options:
1. Use FIRST/SKIP syntax in these SELECT statements — bearing in mind that an ordering clause
(ORDER BY) cannot be applied locally to the discrete queries, but only to the combined output.
2. Convert the queries to derived tables with their own ROWS clauses.
Examples of ROWS
The following examples rewrite the examples used in the section about FIRST and SKIP, earlier in
this chapter.
Retrieve the first ten names from the output of a sorted query on the PEOPLE table:
or its equivalent
Return all records from the PEOPLE table except for the first 10 names:
And this query will return the last 10 records (pay attention to the parentheses):
This one will return rows 81-100 from the PEOPLE table:
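The example code was stripped during conversion. The following sketches are consistent with the descriptions above, assuming a PEOPLE table with columns id and name:

```sql
-- Retrieve the first ten names:
SELECT id, name FROM People
ORDER BY name ASC
ROWS 1 TO 10;

-- or its equivalent:
SELECT id, name FROM People
ORDER BY name ASC
ROWS 10;

-- All records except the first 10 names:
SELECT id, name FROM People
ORDER BY name ASC
ROWS 11 TO (SELECT COUNT(*) FROM People);

-- The last 10 records (note the parentheses around the subqueries):
SELECT id, name FROM People
ORDER BY name ASC
ROWS (SELECT COUNT(*) - 9 FROM People)
  TO (SELECT COUNT(*) FROM People);

-- Rows 81-100:
SELECT id, name FROM People
ORDER BY name ASC
ROWS 81 TO 100;
```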
ROWS can also be used with the UPDATE and DELETE statements.
See also
FIRST, SKIP, OFFSET, FETCH
6.1.12. OFFSET, FETCH

Syntax

<result-offset-clause> ::=
OFFSET <offset-fetch-expression> { ROW | ROWS }

<fetch-first-clause> ::=
FETCH { FIRST | NEXT }
[<offset-fetch-expression>] { ROW | ROWS } ONLY

<offset-fetch-expression> ::=
<integer-literal>
| <query-parameter>
The OFFSET and FETCH clauses are an SQL standard-compliant equivalent for FIRST/SKIP, and an
alternative for ROWS. The OFFSET clause specifies the number of rows to skip. The FETCH clause
specifies the number of rows to fetch.
When <offset-fetch-expression> is left out of the FETCH clause (e.g. FETCH FIRST ROW ONLY), one row
will be fetched.
The choice between ROW or ROWS, or FIRST or NEXT in the clauses is just for aesthetic purposes (e.g.
making the query more readable or grammatically correct). There is no difference between OFFSET
10 ROW or OFFSET 10 ROWS, or FETCH NEXT 10 ROWS ONLY or FETCH FIRST 10 ROWS ONLY.
As with SKIP and FIRST, OFFSET and FETCH clauses can be applied independently, in both top-level and
nested query expressions.
1. Firebird doesn’t support the percentage FETCH or the FETCH … WITH TIES
defined in the SQL standard.
2. The OFFSET and/or FETCH clauses cannot be combined with ROWS or FIRST/SKIP on
the same query expression.
3. Expressions, column references, etc. are not allowed within either clause.
4. Contrary to the ROWS clause, OFFSET and FETCH are only available on SELECT
statements.
Return all rows except the first 10, ordered by column COL1
SELECT *
FROM T1
ORDER BY COL1
OFFSET 10 ROWS
Return the first 10 rows, ordered by column COL1

SELECT *
FROM T1
ORDER BY COL1
FETCH FIRST 10 ROWS ONLY
Using OFFSET and FETCH clauses in a derived table and in the outer query
SELECT *
FROM (
SELECT *
FROM T1
ORDER BY COL1 DESC
OFFSET 1 ROW
FETCH NEXT 10 ROWS ONLY
) a
ORDER BY a.COL1
FETCH FIRST ROW ONLY
The following examples rewrite the FIRST/SKIP examples and ROWS examples earlier in this chapter.
Retrieve the first ten names from the output of a sorted query on the PEOPLE table:
Return all records from the PEOPLE table except for the first 10 names:
And this query will return the last 10 records. Contrary to FIRST/SKIP and ROWS, we cannot use
expressions (including subqueries) in these clauses. To retrieve the last 10 rows, reverse the
sort so that they are fetched first, then restore the desired order in an enclosing query.
This one will return rows 81-100 from the PEOPLE table:
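The example code was stripped during conversion. The following sketches match the descriptions above, assuming a PEOPLE table with columns id and name:

```sql
-- Retrieve the first ten names:
SELECT id, name FROM People
ORDER BY name ASC
FETCH FIRST 10 ROWS ONLY;

-- All records except the first 10 names:
SELECT id, name FROM People
ORDER BY name ASC
OFFSET 10 ROWS;

-- The last 10 records: reverse the sort in a derived table,
-- then restore the desired order in the outer query:
SELECT id, name FROM (
  SELECT id, name FROM People
  ORDER BY name DESC
  FETCH FIRST 10 ROWS ONLY
) a
ORDER BY a.name ASC;

-- Rows 81-100:
SELECT id, name FROM People
ORDER BY name ASC
OFFSET 80 ROWS
FETCH NEXT 20 ROWS ONLY;
```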
See also
FIRST, SKIP, ROWS
6.1.13. FOR UPDATE [OF]

FOR UPDATE does not do what its name suggests. Its only effect currently is to disable the pre-fetch
buffer.
The OF sub-clause does not do anything at all, and is only provided for syntax compatibility with
other database systems.
Syntax
SELECT ... FROM single_table
[WHERE ...]
[FOR UPDATE [OF <column_list>]]
[WITH LOCK [SKIP LOCKED]]
6.1.14. WITH LOCK

WITH LOCK provides a limited explicit pessimistic locking capability, for cautious use only in
conditions where the affected row set is small (ideally a singleton) and precisely controlled by
the application code.
If the WITH LOCK clause succeeds, it will secure a lock on the selected rows and prevent any other
transaction from obtaining write access to any of those rows, or their dependants, until your
transaction ends.
WITH LOCK can only be used with a top-level, single-table SELECT statement. It is not available:
• in a subquery specification
• with the DISTINCT operator, a GROUP BY clause or any other aggregating operation
• with a view
As the engine considers, in turn, each record falling under an explicit lock statement, it returns
either the most recently committed version of the record, regardless of the database state when
the statement was submitted, or an exception.
When the optional SKIP LOCKED clause is specified, records locked by a different transaction are
skipped.
If a statement has both SKIP LOCKED and OFFSET/SKIP/ROWS subclauses, locked rows may be skipped
before OFFSET/SKIP/ROWS subclause can account for them, thus skipping more rows than specified in
OFFSET/SKIP/ROWS.
Wait behaviour and conflict reporting depend on the transaction parameters specified in the TPB
block:
isc_tpb_consistency:
Explicit locks are overridden by implicit or explicit table-level locks and are ignored.

isc_tpb_concurrency + isc_tpb_nowait:
If a record is modified by any transaction that was committed since the transaction attempting to
get the explicit lock started, or if an active transaction has performed a modification of this
record, an update conflict exception is raised immediately.

isc_tpb_concurrency + isc_tpb_wait:
If the record is modified by any transaction that has committed since the transaction attempting
to get the explicit lock started, an update conflict exception is raised immediately.
If the FOR UPDATE sub-clause precedes the WITH LOCK sub-clause, buffered fetches are suppressed.
Thus, the lock will be applied to each row, one by one, at the moment it is fetched. It becomes
possible, then, that a lock which appeared to succeed when requested will nevertheless fail
subsequently, when an attempt is made to fetch a row which has become locked by another
transaction in the meantime. This can be avoided by also using SKIP LOCKED.
See also
FOR UPDATE [OF]
306
Chapter 6. Data Manipulation (DML) Statements
When an UPDATE statement tries to access a record that is locked by another transaction, it either
raises an update conflict exception or waits for the locking transaction to finish, depending on TPB
mode. Engine behaviour here is the same as if this record had already been modified by the locking
transaction.
No special error codes are returned from conflicts involving pessimistic locks.
The engine guarantees that all records returned by an explicit lock statement are locked and do
meet the search conditions specified in WHERE clause, as long as the search conditions do not depend
on any other tables, via joins, subqueries, etc. It also guarantees that rows not meeting the search
conditions will not be locked by the statement. It cannot guarantee that there are no rows which,
though meeting the search conditions, are not locked.
This situation can arise if other, parallel transactions commit their changes during
the course of the locking statement’s execution.
The engine locks rows at fetch time. This has important consequences if you lock several rows at
once. Many access methods for Firebird databases default to fetching output in packets of a few
hundred rows (“buffered fetches”). Most data access components cannot bring you the rows
contained in the last-fetched packet if an error occurred.
• Rolling back an implicit or explicit savepoint releases record locks that were taken under that
savepoint, but it doesn’t notify waiting transactions. Applications should not depend on this
behaviour as it may get changed in the future.
• While explicit locks can be used to prevent and/or handle unusual update conflict errors, the
volume of deadlock errors will grow unless you design your locking strategy carefully and
control it rigorously.
• Most applications do not need explicit locks at all. The main purposes of explicit locks are:
1. to prevent expensive handling of update conflict errors in heavily loaded applications, and
2. to maintain the integrity of objects mapped to a relational database in a clustered
environment.
If your use of explicit locking doesn’t fall in one of these two categories, then it’s probably the
wrong way to do the task in Firebird.
• Explicit locking is an advanced feature; do not misuse it! While solutions for these kinds of
problems may be important for websites handling thousands of concurrent writers, or for
ERP/CRM systems operating in large corporations, most application programs do not need to
work in such conditions.
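A minimal sketch of an explicit locking query; the table and column names are assumptions:

```sql
-- Lock the matching rows as they are fetched; SKIP LOCKED makes the
-- statement silently skip rows already locked by other transactions.
SELECT *
FROM document
WHERE parent_id = 123
FOR UPDATE WITH LOCK SKIP LOCKED;
```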
6.1.15. OPTIMIZE FOR

Syntax

SELECT ...
[WITH LOCK [SKIP LOCKED]]
OPTIMIZE FOR {FIRST | ALL} ROWS
This feature allows the optimizer to consider another (hopefully better) plan if only a subset of
rows is fetched initially by the user application (with the remaining rows being fetched on
demand), thus improving the response time.
It can also be specified at the session level using the SET OPTIMIZE management statement.
The default behaviour can be specified globally using the OptimizeForFirstRows setting in
firebird.conf or databases.conf.
6.1.16. INTO
Available in
PSQL
Syntax

SELECT ... [INTO <variable-list>]

<variable-list> ::= [:]psqlvar [, [:]psqlvar ...]
In PSQL the INTO clause is placed at the end of the SELECT statement.
The colon (‘:’) prefix for local variable names in PSQL is optional in the INTO clause.
In PSQL code (triggers, stored procedures and executable blocks), the results of a SELECT statement
can be loaded row-by-row into local variables. It is often the only way to do anything with the
returned values at all, unless an explicit or implicit cursor name is specified. The number, order
and types of the variables must match the columns in the output row.
A “plain” SELECT statement can only be used in PSQL if it returns at most one row, i.e. if it is a
singleton select. For multi-row selects, PSQL provides the FOR SELECT loop construct, discussed later
in the PSQL chapter. PSQL also supports the DECLARE CURSOR statement, which binds a named cursor
to a SELECT statement. The cursor can then be used to walk the result set.
Examples
1. Selecting aggregated values and passing them into previously declared variables min_amt,
avg_amt and max_amt:
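A sketch consistent with the description; the orders table, its columns and the article number are assumptions:

```sql
-- Table, column names and the filter value are assumptions.
select min(amount), avg(cast(amount as float)), max(amount)
from orders
where artno = 372218
into min_amt, avg_amt, max_amt;
```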
The CAST serves to make the average a floating point number; otherwise, since amount is
presumably an integer field, SQL rules would truncate it to the nearest lower integer.
2. A PSQL trigger that retrieves two values as a BLOB field (using the LIST() function) and assigns it
INTO a third field:
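The trigger example itself did not survive conversion; the following is a hypothetical sketch (table and column names are invented for illustration):

```sql
-- Hypothetical: aggregates related author names with LIST()
-- and assigns the result INTO a BLOB column of the new row.
create trigger bi_books for books
active before insert position 0
as
begin
  select list(a.name, ', ')
  from authors a
  where a.book_id = new.id
  into new.author_list;
end
```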
6.1.17. Common Table Expressions (“WITH … AS … SELECT”)

Syntax

<query-expression> ::=
[<with-clause>] <query-expression-body> [<order-by-clause>]
[{ <rows-clause>
| [<result-offset-clause>] [<fetch-first-clause>] }]
<with-clause> ::=
WITH [RECURSIVE] <with-list-element> [, <with-list-element> ...]
<with-list-element> ::=
query-name [(<column-name-list>)] AS (<query-expression>)
A common table expression or CTE can be described as a virtual table or view, defined in a
preamble to a main query, and going out of scope after the main query’s execution. The main query
can reference any CTEs defined in the preamble as if they were regular tables or views. CTEs can be
recursive, i.e. self-referencing, but they cannot be nested.
CTE Notes
• A CTE definition can contain any legal query-expression, as long as it doesn’t have a “WITH…”
preamble of its own (no nesting).
• CTEs defined for the same main query can reference each other, but care should be taken to
avoid loops.
• Each CTE can be referenced multiple times in the main query, using different aliases if
necessary.
• When enclosed in parentheses, CTE constructs can be used as subqueries in SELECT statements,
but also in UPDATEs, MERGEs etc.
• CTEs can also be used in PSQL, for example in a FOR SELECT loop:

for
with my_rivers as (select * from rivers where owner = 'me')
select name, length from my_rivers into :rname, :rlen
do
begin
..
end
Example
with dept_year_budget as (
select fiscal_year,
dept_no,
sum(projected_budget) as budget
from proj_dept_budget
group by fiscal_year, dept_no
)
select d.dept_no,
d.department,
dyb_2008.budget as budget_08,
dyb_2009.budget as budget_09
from department d
left join dept_year_budget dyb_2008
on d.dept_no = dyb_2008.dept_no
and dyb_2008.fiscal_year = 2008
left join dept_year_budget dyb_2009
on d.dept_no = dyb_2009.dept_no
and dyb_2009.fiscal_year = 2009
Recursive CTEs
A recursive (self-referencing) CTE is a UNION which must have at least one non-recursive member,
called the anchor. The non-recursive member(s) must be placed before the recursive member(s).
Recursive members are linked to each other and to their non-recursive neighbour by UNION ALL
operators. The unions between non-recursive members may be of any type.
Recursive CTEs require the RECURSIVE keyword to be present right after WITH. Each recursive union
member may reference itself only once, and it must do so in a FROM clause.
A great benefit of recursive CTEs is that they use far less memory and CPU cycles than an
equivalent recursive stored procedure.
Execution Pattern
• The engine begins execution from a non-recursive member.
• For each row evaluated, it starts executing each recursive member one by one, using the
current values from the outer row as parameters.
• If the currently executing instance of a recursive member produces no rows, execution loops
back one level and gets the next row from the outer result set.
The next example returns the pedigree of a horse. The main difference is that recursion occurs
simultaneously in two branches of the pedigree.
WITH RECURSIVE PEDIGREE AS (
SELECT
CODE_HORSE,
CODE_FATHER,
CODE_MOTHER,
NAME,
CAST('' AS VARCHAR(80)) AS MARK,
0 AS DEPTH
FROM
HORSE
WHERE
CODE_HORSE = :CODE_HORSE
UNION ALL
SELECT
HORSE.CODE_HORSE,
HORSE.CODE_FATHER,
HORSE.CODE_MOTHER,
HORSE.NAME,
'F' || PEDIGREE.MARK,
PEDIGREE.DEPTH + 1
FROM
HORSE
JOIN PEDIGREE
ON HORSE.CODE_HORSE = PEDIGREE.CODE_FATHER
WHERE
PEDIGREE.DEPTH < :MAX_DEPTH
UNION ALL
SELECT
HORSE.CODE_HORSE,
HORSE.CODE_FATHER,
HORSE.CODE_MOTHER,
HORSE.NAME,
'M' || PEDIGREE.MARK,
PEDIGREE.DEPTH + 1
FROM
HORSE
JOIN PEDIGREE
ON HORSE.CODE_HORSE = PEDIGREE.CODE_MOTHER
WHERE
PEDIGREE.DEPTH < :MAX_DEPTH
)
SELECT
CODE_HORSE,
NAME,
MARK,
DEPTH
FROM
PEDIGREE
6.1.18. Full SELECT Syntax

The previous sections used incomplete or simplified fragments of the SELECT syntax. The full
syntax follows.
Where possible, the syntax below uses syntax names from the SQL standard, which do not
necessarily match the syntax names in the Firebird source. In some cases, syntax productions have
been collapsed, because the productions in the SQL standard are verbose as they are also used to
add additional rules or definitions to a syntax element.
Although this is intended as the full syntax, some productions are not shown (e.g.
<value-expression>) and are assumed to be clear to the reader, and in some cases we take shortcuts like using
query-name or column-alias for identifiers in a syntax production.
If you come across situations where these shortcuts do result in lack of clarity or other issues, let us
know on https://github.com/FirebirdSQL/firebird-documentation or on firebird-devel.
The syntax below does not include the PSQL SELECT … INTO syntax, which is essentially <cursor-
specification> INTO <variable-list>.
<cursor-specification> ::=
<query-expression> [<updatability-clause>] [<lock-clause>]
<query-expression> ::=
[<with-clause>] <query-expression-body> [<order-by-clause>]
[{ <rows-clause>
| [<result-offset-clause>] [<fetch-first-clause>] }]
<with-clause> ::=
WITH [RECURSIVE] <with-list-element> [, <with-list-element> ...]
<with-list-element> ::=
query-name [(<column-name-list>)] AS (<query-expression>)
<query-expression-body> ::=
<query-term>
| <query-expression-body> UNION [{ DISTINCT | ALL }] <query-term>

<query-term> ::=
<query-primary>
<query-primary> ::=
<query-specification>
| (<query-expression-body> [<order-by-clause>]
[<result-offset-clause>] [<fetch-first-clause>])
<query-specification> ::=
SELECT <limit-clause> [{ ALL | DISTINCT }] <select-list>
FROM <table-reference> [, <table-reference> ...]
[WHERE <search-condition>]
[GROUP BY <value-expression> [, <value-expression> ...]]
[HAVING <search-condition>]
[WINDOW <window-definition> [, <window-definition> ...]]
[PLAN <plan-expression>]
<limit-clause> ::=
[FIRST <limit-expression>] [SKIP <limit-expression>]

<limit-expression> ::=
<integer-literal>
| <query-parameter>
| (<value-expression>)
<select-sublist> ::=
table-alias.*
| <value-expression> [[AS] column-alias]
<table-primary> ::=
<table-or-query-name> [[AS] correlation-name]
| [LATERAL] <derived-table> [<correlation-or-recognition>]
| <parenthesized-joined-table>
<table-or-query-name> ::=
table-name
| query-name
| [package-name.]procedure-name [(<procedure-args>)]
<correlation-or-recognition> ::=
[AS] correlation-name [(<column-name-list>)]
<parenthesized-joined-table> ::=
(<parenthesized-joined-table>)
| (<joined-table>)
<joined-table> ::=
<cross-join>
| <natural-join>
| <qualified-join>
<cross-join> ::=
<table-reference> CROSS JOIN <table-primary>
<natural-join> ::=
<table-reference> NATURAL [<join-type>] JOIN <table-primary>
<qualified-join> ::=
<table-reference> [<join-type>] JOIN <table-primary>
{ ON <search-condition>
| USING (<column-name-list>) }
<window-definition> ::=
new-window-name AS (<window-specification-details>)
<window-specification-details> ::=
[existing-window-name]
[<window-partition-clause>]
[<order-by-clause>]
[<window-frame-clause>]
<window-partition-clause> ::=
PARTITION BY <value-expression> [, <value-expression> ...]
<order-by-clause> ::=
ORDER BY <sort-specification> [, <sort-specification> ...]
<sort-specification> ::=
<value-expression> [<ordering-specification>] [<null-ordering>]
<ordering-specification> ::=
ASC | ASCENDING
| DESC | DESCENDING
<null-ordering> ::=
NULLS FIRST
| NULLS LAST
<window-frame-clause> ::=
{ RANGE | ROWS } <window-frame-extent>

<window-frame-extent> ::=
<window-frame-start>
| <window-frame-between>
<window-frame-start> ::=
UNBOUNDED PRECEDING
| <value-expression> PRECEDING
| CURRENT ROW
<window-frame-between> ::=
BETWEEN { UNBOUNDED PRECEDING | <value-expression> PRECEDING
| CURRENT ROW | <value-expression> FOLLOWING }
AND { <value-expression> PRECEDING | CURRENT ROW
| <value-expression> FOLLOWING | UNBOUNDED FOLLOWING }
<result-offset-clause> ::=
OFFSET <offset-fetch-expression> { ROW | ROWS }
<offset-fetch-expression> ::=
<integer-literal>
| <query-parameter>
<fetch-first-clause> ::=
[FETCH { FIRST | NEXT }
[<offset-fetch-expression>] { ROW | ROWS } ONLY]
6.2. INSERT
Inserts rows of data into a table or updatable view
Syntax

INSERT INTO target
  { DEFAULT VALUES
  | [(colname [, colname ...])] [<override_opt>]
    { VALUES (<value> [, <value> ...]) | <query-expression> } }
  [RETURNING <returning_list> [INTO <variables>]]
<override_opt> ::=
OVERRIDING {USER | SYSTEM} VALUE
<output_column> ::=
target.*
| <return_expression> [COLLATE collation] [[AS] alias]
<return_expression> ::=
<value-expression>
| [target.]col_name
<value-expression> ::=
<literal>
| <context-variable>
| any other expression returning a single
value of a Firebird data type or NULL
Argument Description

target:
The name of the table or view to which a new row, or batch of rows, should be added

value-expression:
An expression whose value is used for inserting into the table or for returning

literal:
A literal
The INSERT statement is used to add rows to a table or to one or more tables underlying a view:
• If the column values are supplied in a VALUES clause, exactly one row is inserted
• The values may be provided instead by a SELECT expression, in which case zero to many rows
may be inserted
• With the DEFAULT VALUES clause, no values are provided at all and exactly one row is inserted.
Restrictions
• Columns returned to the NEW.column_name context variables in DML triggers should not have a
colon (“:”) prefixed to their names
• Columns may not appear more than once in the column list.
6.2.1. INSERT … VALUES

The VALUES list must provide a value for every column in the column list, in the same order and of
the correct type. The column list need not specify every column in the target but, if the column list
is absent, the engine requires a value for every column in the table or view (computed columns
excluded).
The expression DEFAULT allows a column to be specified in the column list, but instructs Firebird to
use the default value (either NULL or the value specified in the DEFAULT clause of the column
definition). For identity columns, specifying DEFAULT will generate the identity value. It is possible to
include calculated columns in the column list and specify DEFAULT as the column value.
Introducer syntax provides a way to identify the character set of a value that is a
string constant (literal). Introducer syntax works only with literal strings: it cannot be applied
to string variables, parameters, column references, or values that are expressions.
Examples
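The example statements were stripped during conversion; the following sketches assume a hypothetical cars table with columns make, model, year, origin and price:

```sql
-- With an explicit column list:
INSERT INTO cars (make, model, year)
VALUES ('Ford', 'T', 1908);

-- Without a column list, a value is required for every column:
INSERT INTO cars
VALUES ('Ford', 'T', 1908, 'USA', 850);

-- Letting an identity or default-valued column generate its value:
INSERT INTO cars (make, model, year, origin, price)
VALUES ('Ford', 'T', 1908, DEFAULT, 850);
```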
6.2.2. INSERT … SELECT

For this method of inserting, the output columns of the SELECT statement (or <query-expression>)
must provide a value for every target column in the column list, in the same order and of the
correct type.
Literal values, context variables or expressions of compatible type can be substituted for any
column in the source row. In this case, a source column list and a corresponding VALUES list are
required.
If the INSERT’s column list is absent — as it is when SELECT * is used for the source
expression — the SELECT must supply a value, in order, for every column of the target table or view
(computed columns excluded).
Examples
INSERT INTO numbers(num)
WITH RECURSIVE r(n) AS (
SELECT 1 FROM rdb$database
UNION ALL
SELECT n + 1 FROM r WHERE n < 100
)
SELECT n FROM r
Of course, the column names in the source table need not be the same as those in the target table.
Any type of SELECT statement is permitted, as long as its output columns exactly match the insert
columns in number and order. The types need not be identical, but they must be
assignment-compatible.
Since Firebird 5.0, an INSERT … SELECT with a RETURNING clause produces zero or more rows, and the
statement is described as type isc_info_sql_stmt_select. In other words, an INSERT … SELECT …
RETURNING will no longer produce a “multiple rows in singleton select” error when the select
produces multiple rows.
For the time being, an INSERT … VALUES (…) or INSERT … DEFAULT VALUES with a RETURNING clause is
still described as isc_info_sql_stmt_exec_procedure. This behaviour may change in a future Firebird
version.
The DEFAULT VALUES clause allows insertion of a record without providing any values at all, either
directly or from a SELECT statement. This is only possible if every NOT NULL or CHECKed column in the
table either has a valid default declared or gets such a value from a BEFORE INSERT trigger.
Furthermore, triggers providing required field values must not depend on the presence of input
values.
Specifying DEFAULT VALUES is equivalent to specifying a values list with expression DEFAULT for all
columns.
Example
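A sketch of such an insert (table and column names are illustrative):

```sql
-- every column receives its declared default or a trigger-supplied value
INSERT INTO journal
DEFAULT VALUES
RETURNING entry_id;
```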
6.2.4. OVERRIDING
The OVERRIDING clause controls the behaviour of an identity column for this statement only.
OVERRIDING SYSTEM VALUE allows a user-provided value to be written to an identity column defined
as GENERATED ALWAYS. This can be useful when merging or importing data from another source.
After such an insert, it may be necessary to change the next value of the identity sequence using
ALTER TABLE to prevent subsequent inserts from generating colliding identity values.
OVERRIDING USER VALUE makes Firebird ignore a user-provided value for an identity column defined
as GENERATED BY DEFAULT, and generate a value instead.
It is usually simpler to leave out the identity column to achieve the same effect.
Examples of OVERRIDING
-- for ALWAYS
-- value 11 is used anyway
insert into objects_always (id, name)
OVERRIDING SYSTEM VALUE values (11, 'Laptop');
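By analogy, a sketch for an identity column defined as GENERATED BY DEFAULT (table name illustrative):

```sql
-- for BY DEFAULT
-- the supplied value 12 is ignored; a new identity value is generated instead
insert into objects_default (id, name)
OVERRIDING USER VALUE values (12, 'Tablet');
```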
An INSERT statement may optionally include a RETURNING clause to return values from the inserted
rows. The clause, if present, need not contain all columns referenced in the insert statement and
may also contain other columns or expressions. The returned values reflect any changes that may
have been made in BEFORE INSERT triggers.
The user executing the statement needs to have SELECT privileges on the columns specified in the
RETURNING clause.
The syntax of the returning_list is similar to the column list of a SELECT clause. It is possible to
reference all columns using * or table_name.*.
Multiple INSERTs
In DSQL, an INSERT … VALUES (…) RETURNING or INSERT … DEFAULT VALUES
RETURNING returns only one row, and an INSERT … SELECT … RETURNING can return
zero or more rows.
In PSQL, if the RETURNING clause is specified and more than one row is inserted by
the INSERT statement, the statement fails and a “multiple rows in singleton select”
error is returned. This behaviour may change in future Firebird versions.
Examples
• In DSQL, an INSERT … VALUES (…) RETURNING always returns exactly one row. This behaviour
may change in a future Firebird version.
• In DSQL, an INSERT … DEFAULT VALUES RETURNING always returns exactly one row.
• In PSQL, if multiple rows are returned, the statement fails with a “multiple rows in singleton
select” error. This behaviour may change in a future Firebird version.
• In PSQL, if no row was inserted, nothing is returned, and the target variables keep their existing
values.
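A sketch of RETURNING with a single-row insert (table and column names are illustrative):

```sql
INSERT INTO Scholars (firstname, lastname, address)
VALUES ('Henry', 'Higgins', '27A Wimpole Street')
RETURNING id, firstname, lastname;
```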
Inserting into BLOB columns is only possible under the following circumstances:
1. The client application has made special provisions for such inserts, using the Firebird API. In
this case, the modus operandi is application-specific and outside the scope of this manual.
2. The value inserted is a string literal of no more than 65,533 bytes (64KB - 3).
A limit, in characters, is calculated at run-time for strings that are in multibyte character sets, to
avoid overrunning the bytes limit. For example, for a UTF8 string (max. 4 bytes/character), the
run-time limit is likely to be about (floor(65533/4)) = 16383 characters.
3. You are using the “INSERT … SELECT” form and one or more columns in the result set are BLOBs.
6.3. UPDATE
Updates existing rows in tables and updatable views
Syntax

UPDATE target [[AS] alias]
  SET col_name = <value-expression> [, col_name = <value-expression> ...]
  [WHERE {<search-conditions> | CURRENT OF cursorname}]
  [PLAN <plan_items>]
  [ORDER BY <sort_items>]
  [ROWS m [TO n]]
  [SKIP LOCKED]
  [RETURNING <returning_list> [INTO <variables>]]
<output_column> ::=
target.* | NEW.* | OLD.*
| <return_expression> [COLLATE collation] [[AS] alias]
<return_expression> ::=
<value-expression>
| [target.]col_name
| NEW.col_name
| OLD.col_name
<value-expression> ::=
<literal>
| <context-variable>
| any other expression returning a single
value of a Firebird data type or NULL
Argument Description
target The name of the table or view where the records are updated
value-expression Expression for the new value for a column that is to be updated in the
table or view by the statement, or a value to be returned
cursorname The name of the cursor through which the row(s) to be updated are
positioned
literal A literal
The UPDATE statement changes values in a table or in one or more of the tables that underlie a view.
The columns affected are specified in the SET clause. The rows affected may be limited by the WHERE
and ROWS clauses. If neither WHERE nor ROWS is present, all records in the table will be updated.
If you assign an alias to a table or a view, the alias must be used when specifying columns and also
in any column references included in other clauses.
Example
Correct usage:
Not possible:
In the SET clause, the assignment expressions, containing the columns with the values to be set, are
separated by commas. In an assignment expression, column names are on the left and the values or
expressions to assign are on the right. A column may be assigned only once in the SET clause.
A column name can be used in expressions on the right. The old value of the column will always be
used in these right-side values, even if the column was already assigned a new value earlier in the
SET clause.
Using the expression DEFAULT will set the column to its default value (either NULL or the value
specified on the DEFAULT clause of the column definition). For an identity column, specifying DEFAULT
will generate a new identity value. It is possible to “update” calculated columns in the SET clause if
and only if the assigned value is DEFAULT.
Here is an example
Data in the TSET table:
A B
---
1 0
2 0
The statement:

UPDATE tset SET a = 5, b = a;

will change the values to:
A B
---
5 1
5 2
Notice that the old values (1 and 2) are used to update the b column even after the column was
assigned a new value (5).
The WHERE clause sets the conditions that limit the set of records for a searched update.
In PSQL, if a named cursor is being used for updating a set, using the WHERE CURRENT OF clause, the
action is limited to the row where the cursor is currently positioned. This is a positioned update.
To be able to use the WHERE CURRENT OF clause in DSQL, the cursor name needs to be set on the
statement handle before executing the statement.
Examples
UPDATE People
SET firstname = 'Boris'
WHERE lastname = 'Johnson';
UPDATE employee e
SET salary = salary * 1.05
WHERE EXISTS(
SELECT *
FROM employee_project ep
WHERE e.emp_no = ep.emp_no);
UPDATE addresses
SET city = 'Saint Petersburg', citycode = 'PET'
WHERE city = 'Leningrad'
UPDATE employees
SET salary = 2.5 * salary
WHERE title = 'CEO'
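In PSQL, a positioned update through a cursor might be sketched as follows (table and column names are illustrative):

```sql
-- inside a PSQL module body
FOR SELECT id, salary FROM employees
  WHERE title = 'Manager'
  FOR UPDATE AS CURSOR c
DO
  UPDATE employees
  SET salary = salary * 1.1
  WHERE CURRENT OF c;
```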
For string literals with which the parser needs help to interpret the character set of the data, the
introducer syntax may be used. The string literal is preceded by the character set name, prefixed
with an underscore character:
UPDATE People
SET name = _ISO8859_1 'Hans-Jörg Schäfer'
WHERE id = 53662;
The ORDER BY and ROWS clauses make sense only when used together; syntactically, however, each
may also be used separately.
If ROWS has one argument, m, the rows to be updated will be limited to the first m rows.
Points to note
• If m > the number of rows being processed, the entire set of rows is updated
If two arguments are used, m and n, ROWS limits the rows being updated to rows from m to n
inclusively. Both arguments are integers and start from 1.
Points to note
• If m > the number of rows being processed, no rows are updated
• If n > the number of rows, rows from m to the end of the set are updated
ROWS Example
UPDATE employees
SET salary = salary + 50
ORDER BY salary ASC
ROWS 20;
When the SKIP LOCKED clause is specified, records locked by a different transaction are skipped by
the statement and are not updated.
When a ROWS clause is specified, the “skip locked” check is performed after skipping the requested
number of rows specified, and before counting the number of rows to update.
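A queue-style sketch combining ROWS and SKIP LOCKED (table and column names are illustrative):

```sql
-- claim up to 10 unprocessed items, skipping rows locked by other workers
UPDATE emails_queue
SET status = 'PROCESSING'
WHERE status = 'PENDING'
ROWS 10
SKIP LOCKED
RETURNING id;
```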
An UPDATE statement may include RETURNING to return some values from the updated rows. RETURNING
may include data from any column of the row, not only the columns that are updated by the
statement. It can include literals or expressions not associated with columns, if there is a need for
that.
The user executing the statement needs to have SELECT privileges on the columns specified in the
RETURNING clause.
When the RETURNING set contains data from the current row, the returned values report changes
made in the BEFORE UPDATE triggers, but not those made in AFTER UPDATE triggers.
The context variables OLD.fieldname and NEW.fieldname can be used as column names. If OLD. or NEW.
is not specified, or if the table name (target) is specified instead, the column values returned are the
NEW. ones.
The syntax of the returning_list is similar to the column list of a SELECT clause. It is possible to
reference all columns using *, or table_name.*, NEW.* and/or OLD.*.
In DSQL, a positioned update statement (WHERE CURRENT OF …) with RETURNING always returns a
single row; a normal update statement can return zero or more rows. The update is executed to
completion before rows are returned. In PSQL, attempts to execute an UPDATE … RETURNING that
affects multiple rows will result in the error “multiple rows in singleton select”. This behaviour may
change in a future Firebird version.
In PSQL, the INTO clause can be used to pass the returning values to local variables. It is not
available in DSQL. If no records are updated, nothing is returned and variables specified in
RETURNING will keep their previous values.
UPDATE Scholars
SET firstname = 'Hugh', lastname = 'Pickering'
WHERE firstname = 'Henry' and lastname = 'Higgins'
RETURNING id, old.lastname, new.lastname;
Updating a BLOB column always replaces the entire contents. Even the BLOB ID, the “handle” that is
stored directly in the column, is changed. BLOBs can be updated if:
1. The client application has made special provisions for this operation, using the Firebird API. In
this case, the modus operandi is application-specific and outside the scope of this manual.
2. The new value is a string literal of no more than 65,533 bytes (64KB - 3).
A limit, in characters, is calculated at run-time for strings that are in multi-byte character sets,
to avoid overrunning the bytes limit. For example, for a UTF8 string (max. 4 bytes/character),
the run-time limit is likely to be about (floor(65533/4)) = 16383 characters.
3. The source is itself a BLOB column or, more generally, an expression that returns a BLOB.
6.4. UPDATE OR INSERT
Updates an existing record or inserts a new one, depending on whether a match is found
Syntax

UPDATE OR INSERT INTO target [(<column_list>)] [<override_opt>]
  VALUES (<value_list>)
  [MATCHING (<column_list>)]
  [RETURNING <returning_list> [INTO <variables>]]
<override_opt> ::=
OVERRIDING {USER | SYSTEM} VALUE
<output_column> ::=
target.* | NEW.* | OLD.*
| <return_expression> [COLLATE collation] [[AS] alias]
<return_expression> ::=
<value-expression>
| [target.]col_name
| NEW.col_name
| OLD.col_name
<value-expression> ::=
<literal>
| <context-variable>
| any other expression returning a single
value of a Firebird data type or NULL
Argument Description
target The name of the table or view where the record(s) is to be updated or a
new record inserted
UPDATE OR INSERT inserts a new record or updates one or more existing records. The action taken
depends on the values provided for the columns in the MATCHING clause (or, if the latter is absent, in
the primary key). If there are records found matching those values, they are updated. If not, a new
record is inserted. A match only counts if all the columns in the MATCHING clause or primary key
columns are equal. Matching is done with the IS NOT DISTINCT FROM operator, so one NULL matches
another.
Restrictions
• If the table has no primary key, the MATCHING clause is mandatory.
• In the MATCHING list as well as in the update/insert column list, each column name may occur
only once.
• When values are returned into the context variable NEW, this name must not be preceded by a
colon (“:”).
The optional RETURNING clause, if present, need not contain all the columns mentioned in the
statement and may also contain other columns or expressions. The returned values reflect any
changes that may have been made in BEFORE triggers, but not those in AFTER triggers. OLD.fieldname
and NEW.fieldname may both be used in the list of columns to return; for field names not preceded
by either of these, the new value is returned.
The user executing the statement needs to have SELECT privileges on the columns specified in the
RETURNING clause.
The syntax of the returning_list is similar to the column list of a SELECT clause. It is possible to
reference all columns using *, or table_name.*, NEW.* and/or OLD.*.
In DSQL, a statement with a RETURNING clause can return zero or more rows. The update or insert is
executed to completion before rows are returned. In PSQL, if a RETURNING clause is present and more
than one matching record is found, an error “multiple rows in singleton select” is raised. This
behaviour may change in a future Firebird version.
Modifying data in a table, using UPDATE OR INSERT in a PSQL module. The return value is passed to a
local variable, whose colon prefix is optional.
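A sketch of such a call (table, column and variable names are illustrative):

```sql
UPDATE OR INSERT INTO Cows (Name, Number, Location)
VALUES ('Suzy Creamcheese', 3278823, 'Green Pastures')
MATCHING (Number)
RETURNING rec_id INTO :id;
```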
6.5. DELETE
Deletes rows from a table or updatable view
Syntax
DELETE
FROM target [[AS] alias]
[WHERE {<search-conditions> | CURRENT OF cursorname}]
[PLAN <plan_items>]
[ORDER BY <sort_items>]
[ROWS m [TO n]]
[SKIP LOCKED]
[RETURNING <returning_list> [INTO <variables>]]
<output_column> ::=
target.*
| <return_expression> [COLLATE collation] [[AS] alias]
<return_expression> ::=
<value-expression>
| [target.]col_name
<value-expression> ::=
<literal>
| <context-variable>
| any other expression returning a single
value of a Firebird data type or NULL
<variables> ::=
[:]varname [, [:]varname ...]
Argument Description
target The name of the table or view from which the records are to be deleted
search-conditions Search condition limiting the set of rows being targeted for deletion
cursorname The name of the cursor in which current record is positioned for deletion
DELETE removes rows from a database table or from one or more of the tables that underlie a view.
WHERE and ROWS clauses can limit the number of rows deleted. If neither WHERE nor ROWS is present,
DELETE removes all the rows in the relation.
6.5.1. Aliases
If an alias is specified for the target table or view, it must be used to qualify all field name
references in the DELETE statement.
Examples
Supported usage:
Not possible:
6.5.2. WHERE
The WHERE clause sets the conditions that limit the set of records for a searched delete.
In PSQL, if a named cursor is being used for deleting a set, using the WHERE CURRENT OF clause, the
action is limited to the row where the cursor is currently positioned. This is a positioned delete.
To be able to use the WHERE CURRENT OF clause in DSQL, the cursor name needs to be set on the
statement handle before executing the statement.
Examples
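Two sketches, a searched delete and the shape of a positioned delete (table and column names are illustrative):

```sql
DELETE FROM People
WHERE firstname <> 'Boris' AND lastname <> 'Johnson';

DELETE FROM employee
WHERE CURRENT OF employee_cursor;
```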
6.5.3. PLAN
Example
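A sketch of a delete with an explicit plan (table and index names are illustrative):

```sql
DELETE FROM Submissions
WHERE date_entered < DATE '2023-01-01'
PLAN (Submissions INDEX (ix_subm_date));
```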
The ORDER BY clause orders the set before the actual deletion takes place. It only makes sense in
combination with ROWS, but is also valid without it.
The ROWS clause limits the number of rows being deleted. Integer literals or any integer expressions
can be used for the arguments m and n.
If ROWS has one argument, m, the rows to be deleted will be limited to the first m rows.
Points to note
• If m > the number of rows being processed, the entire set of rows is deleted
If two arguments are used, m and n, ROWS limits the rows being deleted to rows from m to n
inclusively. Both arguments are integers and start from 1.
Points to note
• If m > the number of rows being processed, no rows are deleted
• If m > 0 and <= the number of rows in the set and n is outside these values, rows from m to the
end of the set are deleted
Examples
Deleting one record starting from the end, i.e. from Z…:
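A statement matching that description might look like this (table and column names are illustrative):

```sql
DELETE FROM MyTable
ORDER BY name DESC
ROWS 1;
```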
No sorting (ORDER BY) is specified so 8 found records, starting from the fifth one, will be deleted:
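A statement matching that description might look like this (table and column names are illustrative):

```sql
DELETE FROM MyTable
WHERE name LIKE 'F%'
ROWS 5 TO 12;
```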
When the SKIP LOCKED clause is specified, records locked by a different transaction are skipped by
the statement and are not deleted.
When a ROWS clause is specified, the “skip locked” check is performed after skipping the requested
number of rows specified, and before counting the number of rows to delete.
6.5.6. RETURNING
A DELETE statement may optionally include a RETURNING clause to return values from the deleted
rows. The clause, if present, need not contain all the relation’s columns and may also contain other
columns or expressions.
The user executing the statement needs to have SELECT privileges on the columns specified in the
RETURNING clause.
The syntax of the returning_list is similar to the column list of a SELECT clause. It is possible to
reference all columns using *, or table_name.*.
• In DSQL, a positioned delete statement (WHERE CURRENT OF …) with RETURNING always returns a
singleton, never a multi-row set. If no record is deleted, the returned columns contain NULL.
• A normal DELETE statement can return zero or more rows; the deletion is executed to completion
before rows are returned.
• In PSQL, if a RETURNING clause is present and more than one matching record is found, an error
“multiple rows in singleton select” is raised. This behaviour may change in a future Firebird
version.
◦ If no row is deleted, nothing is returned and the target variables keep their values
Examples
6.6. MERGE
Merges data from a source set into a target table or updatable view
Syntax

MERGE INTO target [[AS] target_alias]
  USING <table-reference>
  ON <join_conditions>
  <merge_when> [<merge_when> ...]
  [PLAN <plan-expr>]
  [ORDER BY <ordering-fields>]
  [RETURNING <returning_list> [INTO <variables>]]
<merge_when> ::=
<merge_when_matched>
| <merge_when_not_matched_target>
| <merge_when_not_matched_source>
<merge_when_matched> ::=
WHEN MATCHED [AND <condition>] THEN
{ UPDATE SET <assignment-list>
| DELETE }
<merge_when_not_matched_target> ::=
  WHEN NOT MATCHED [BY TARGET] [AND <condition>] THEN
    INSERT [( <column_list> )] [<override_opt>]
    VALUES ( <value_list> )
<merge_when_not_matched_source> ::=
WHEN NOT MATCHED BY SOURCE [ AND <condition> ] THEN
{ UPDATE SET <assignment-list>
| DELETE }
<table-primary> ::=
<table-or-query-name> [[AS] correlation-name]
| [LATERAL] <derived-table> [<correlation-or-recognition>]
| <parenthesized-joined-table>
<assignment-list> ::=
  col_name = <m_value> [, col_name = <m_value> ...]
<override_opt> ::=
OVERRIDING {USER | SYSTEM} VALUE
<output_column> ::=
target.* | NEW.* | OLD.*
| <return_expression> [COLLATE collation] [[AS] alias]
<return_expression> ::=
<value-expression>
| [target.]col_name
| NEW.col_name
| OLD.col_name
<value-expression> ::=
<literal>
| <context-variable>
| any other expression returning a single
value of a Firebird data type or NULL
<variables> ::=
[:]varname [, [:]varname ...]
Argument Description
table-reference Data source. It can be a table, a view, a stored procedure, a derived table
or a parenthesized joined table
join_conditions The (ON) condition(s) for matching the source records with those in the
target
condition Additional test condition in WHEN MATCHED or WHEN NOT MATCHED clause
value-expression The value assigned to a column in the target table. This expression may
be a literal value, a PSQL variable, a column from the source, or a
compatible context variable
The MERGE statement merges records from a source <table-reference> into a target table or updatable
view. The source may be a table, view or “anything you can SELECT from” in general. Each source
record will be used to update one or more target records, insert a new record in the target table,
delete a record from the target table or do nothing.
The action taken depends on the supplied join condition, the WHEN clause(s), and
the — optional — condition in the WHEN clause. The join condition and condition in the WHEN will
typically contain a comparison of fields in the source and target relations.
Multiple WHEN MATCHED and WHEN NOT MATCHED clauses are allowed. For each row in the source, the
WHEN clauses are checked in the order they are specified in the statement. If the condition in the WHEN
clause does not evaluate to true, the clause is skipped, and the next clause will be checked. This will
be done until the condition for a WHEN clause evaluates to true, a WHEN clause without a condition
matches, or there are no more WHEN clauses. If a matching clause is found, the action associated with
the clause is executed. For each row in the source, at most one action is executed. If the WHEN
MATCHED clause is present, and several records match a single record in the target table, an error is
raised.
Contrary to the other WHEN clauses, a WHEN NOT MATCHED BY SOURCE clause evaluates records in the
target which match no record in the source.
WHEN NOT MATCHED is evaluated from the source viewpoint, that is, the table or set
specified in USING. It has to work this way because if the source record does not
match a target record, INSERT is executed. Of course, if there is a target record
which does not match a source record, nothing is done.
Currently, in PSQL, the ROW_COUNT variable returns the value 1, even if more than
one record is modified or inserted. For details and progress, refer to firebird#4722.
The ORDER BY clause can be used to influence the order in which rows are evaluated. The primary use case
is when combined with RETURNING, to influence the order rows are returned.
A MERGE statement can contain a RETURNING clause to return rows added, modified or removed. The
merge is executed to completion before rows are returned. The RETURNING clause can contain any
columns from the target table (or updatable view), as well as other columns (e.g. from the source)
and expressions.
The user executing the statement needs to have SELECT privileges on the columns specified in the
RETURNING clause.
In PSQL, if a RETURNING clause is present and more than one matching record is found, an error
“multiple rows in singleton select” is raised. This behaviour may change in a future Firebird
version.
Column names can be qualified by the OLD or NEW prefix to define exactly what value to return:
before or after modification. The returned values include the changes made by BEFORE triggers.
The syntax of the returning_list is similar to the column list of a SELECT clause. It is possible to
reference all columns using *, or table_name.*, NEW.* and/or OLD.*.
For the UPDATE or INSERT action, unqualified column names, or those qualified by the target table
name or alias will behave as if qualified by NEW, while for the DELETE action as if qualified by OLD.
The following example modifies the previous example to affect one line, and adds a RETURNING
clause to return the old and new quantity of goods, and the difference between those values.
5. The following example updates the PRODUCT_INVENTORY table daily based on orders processed in
the SALES_ORDER_LINE table. If the stock level of the product would drop to zero or lower, then the
row for that product is removed from the PRODUCT_INVENTORY table.
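A sketch of that statement (column names and the exact source query are illustrative, not necessarily those of the original example):

```sql
MERGE INTO product_inventory AS target
USING (
  -- quantities sold today, summed per product
  SELECT sl.id_product, SUM(sl.quantity) AS quantity
  FROM sales_order_line sl
  JOIN sales_order s ON s.id = sl.id_sales_order
  WHERE s.bydate = CURRENT_DATE
  GROUP BY sl.id_product
) AS src
ON target.id_product = src.id_product
WHEN MATCHED AND target.quantity - src.quantity <= 0 THEN
  -- stock would drop to zero or lower: remove the row
  DELETE
WHEN MATCHED THEN
  UPDATE SET
    target.quantity = target.quantity - src.quantity,
    target.bydate = CURRENT_DATE;
```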
See also
SELECT, INSERT, UPDATE, UPDATE OR INSERT, DELETE
6.7. EXECUTE PROCEDURE
Syntax
<inparam-list> ::=
<inparam> [, <inparam> ...]
<outvar-list> ::=
<outvar> [, <outvar> ...]
Argument Description
Executes an executable stored procedure, taking a list of one or more input parameters, if they are
defined for the procedure, and returning a one-row set of output values, if they are defined for the
procedure.
The EXECUTE PROCEDURE statement is most commonly used to invoke “executable” stored procedures
to perform some data-modifying task at the server side — those that do not contain any SUSPEND
statements in their code. They can be designed to return a result set, consisting of only one row,
which is usually passed, via a set of RETURNING_VALUES() variables, to another stored procedure that
calls it. Client interfaces usually have an API wrapper that can retrieve the output values into a
single-row buffer when calling EXECUTE PROCEDURE in DSQL.
Invoking “selectable” stored procedures is also possible with EXECUTE PROCEDURE, but it returns only
the first row of an output set which is almost surely designed to be multi-row. Selectable stored
procedures are designed to be invoked by a SELECT statement, producing output that behaves like a
virtual table.
• In PSQL and DSQL, input parameters may be any expression that resolves to
the expected type.
• Although parentheses are not required after the name of the stored procedure
to enclose the input parameters, their use is recommended for the sake of
readability.
• When DSQL applications call EXECUTE PROCEDURE using the Firebird API or some
form of wrapper for it, a buffer is prepared to receive the output row and the
RETURNING_VALUES clause is not used.
2. In Firebird’s command-line utility isql, with literal parameters and optional parentheses:
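For instance (procedure name and arguments are illustrative):

```sql
EXECUTE PROCEDURE MakeFullName ('J.', 'Edgar', 'Hoover');
```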
In DSQL (e.g. in isql), RETURNING_VALUES is not used. Any output values are captured by the
application and displayed automatically.
6.8. EXECUTE BLOCK
Available in
DSQL
Syntax
<param_decl> ::=
paramname <domain_or_non_array_type> [NOT NULL] [COLLATE collation]
<domain_or_non_array_type> ::=
!! See Scalar Data Types Syntax !!
<psql-module-body> ::=
!! See Syntax of a Module Body !!
Argument Description
collation Collation
Executes a block of PSQL code as if it were a stored procedure, optionally with input and output
parameters and variable declarations. This allows the user to perform “on-the-fly” PSQL within a
DSQL context.
6.8.1. Examples
1. This example injects the numbers 0 through 127 and their corresponding ASCII characters into
the table ASCIITABLE:
EXECUTE BLOCK
AS
declare i INT = 0;
BEGIN
WHILE (i < 128) DO
BEGIN
INSERT INTO AsciiTable VALUES (:i, ascii_char(:i));
i = i + 1;
END
END
2. The next example calculates the geometric mean of two numbers and returns it to the user:
Because this block has input parameters, it has to be prepared first. Then the parameters can be
set and the block executed. It depends on the client software how this must be done and even if
it is possible at all — see the notes below.
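The geometric-mean block described above might be sketched as follows (parameter and output names are illustrative):

```sql
EXECUTE BLOCK (x DOUBLE PRECISION = ?, y DOUBLE PRECISION = ?)
RETURNS (gmean DOUBLE PRECISION)
AS
BEGIN
  gmean = SQRT(x * y);
  SUSPEND;
END
```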
3. Our last example takes two integer values, smallest and largest. For all the numbers in the
range smallest…largest, the block outputs the number itself, its square, its cube and its fourth
power.
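A sketch of such a block (output column names are illustrative):

```sql
EXECUTE BLOCK (smallest INT = ?, largest INT = ?)
RETURNS (number INT, square BIGINT, cube BIGINT, fourth BIGINT)
AS
BEGIN
  number = smallest;
  WHILE (number <= largest) DO
  BEGIN
    square = number * number;
    cube   = number * square;
    fourth = number * cube;
    SUSPEND;  -- emit one result row per number
    number = number + 1;
  END
END
```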
Again, it depends on the client software if and how you can set the parameter values.
Executing a block without input parameters should be possible with every Firebird client that
allows the user to enter their own DSQL statements. If there are input parameters, things get
trickier: these parameters must get their values after the statement is prepared, but before it is
executed. This requires special provisions, which not every client application offers. (Firebird’s
own isql, for one, doesn’t.)
The server only accepts question marks (“?”) as placeholders for the input values, not “:a”,
“:MyParam” etc., or literal values. Client software may support the “:xxx” form though, and will
preprocess it before sending it to the server.
If the block has output parameters, you must use SUSPEND or nothing will be returned.
Output is always returned in the form of a result set, just as with a SELECT statement. You can’t use
RETURNING_VALUES or execute the block INTO some variables, even if there is only one result row.
PSQL Links
For more information about writing PSQL, consult Chapter Procedural SQL (PSQL)
Statements.
Some SQL statement editors — specifically the isql utility that comes with Firebird, and possibly
some third-party editors — employ an internal convention that requires all statements to be
terminated with a semicolon. This creates a conflict with PSQL syntax when coding in these
environments. If you are unacquainted with this problem and its solution, please study the details
in the PSQL chapter in the section entitled Switching the Terminator in isql.
Chapter 7. Procedural SQL (PSQL) Statements
PSQL provides all the basic constructs of traditional structured programming languages, and also
includes DML statements (SELECT, INSERT, UPDATE, DELETE, etc.), with a slightly modified syntax in
some cases.
If DML statements (SELECT, INSERT, UPDATE, DELETE, etc.) in the body of a module (procedure, function,
trigger or block) use parameters, only named parameters can be used. If DML statements contain
named parameters, then they must be previously declared as local variables using DECLARE
[VARIABLE] in the declaration section of the module, or as input or output variables in the module
header.
When a DML statement with parameters is included in PSQL code, the parameter name must be
prefixed by a colon (‘:’) in most situations. The colon is optional in statement syntax that is specific
to PSQL, such as assignments, conditionals, and the INTO clause. The colon prefix on parameters
is not required when calling stored procedures from within another PSQL module.
7.1.2. Transactions
Stored procedures and functions (including those defined in packages) are executed in the context
of the transaction in which they are called. Triggers are executed as an intrinsic part of the
operation of the DML statement: thus, their execution is within the same transaction context as the
statement itself. Individual transactions are launched for database event triggers fired on connect
or disconnect.
Statements that start and end transactions are not available in PSQL, but it is possible to run a
statement or a block of statements in an autonomous transaction.
PSQL code modules consist of a header and a body. The DDL statements for defining them are
complex statements; that is, they consist of a single statement that encloses blocks of multiple
statements. These statements begin with a verb (CREATE, ALTER, DROP, RECREATE, CREATE OR ALTER, or
EXECUTE BLOCK) and end with the last END statement of the body.
The header provides the module name and defines any input and output parameters or — for
functions — the return type. Stored procedures and PSQL blocks may have input and output
parameters. Functions may have input parameters and must have a scalar return type. Triggers do
not have either input or output parameters, but DML triggers do have the NEW and OLD “records”,
and INSERTING, UPDATING and DELETING variables.
The header of a trigger indicates the DML event (insert, update or delete, or a combination) or DDL
or database event and the phase of operation (BEFORE or AFTER that event) that will cause it to “fire”.
The module body is either a PSQL module body, or an external module body. PSQL blocks can only
have a PSQL module body.
<module-body> ::=
<psql-module-body> | <external-module-body>
<psql-module-body> ::=
AS
[<forward-declarations>]
[<declarations>]
BEGIN
[<PSQL_statements>]
END
<external-module-body> ::=
EXTERNAL [NAME <extname>] ENGINE engine
[AS '<extbody>']
<forward-declarations> ::=
<forward-declare-item> [<forward-declare-item> ...]
<declarations> ::=
<declare-item> [<declare-item> ...]
<forward-declare-item> ::=
<subfunc-forward>
| <subproc-forward>
<declare-item> ::=
<declare-var>
| <declare-cursor>
| <subfunc-def>
| <subproc-def>
<extname> ::=
'<module-name>!<routine-name>[!<misc-info>]'
<declare-var> ::=
!! See DECLARE VARIABLE !!
<declare-cursor> ::=
!! See DECLARE .. CURSOR !!
Parameter Description
declarations Section for declaring local variables, named cursors, and subroutines
PSQL_statements Procedural SQL statements. Some PSQL statements may not be valid in all
types of PSQL. For example, RETURN <value>; is only valid in functions.
extbody External procedure body. A string literal that can be used by UDRs for
various purposes.
routine-name The internal name of the procedure inside the external module
misc-info Optional string that is passed to the procedure in the external module
The PSQL module body starts with an optional section that declares variables and subroutines,
followed by a block of statements that run in a logical sequence, like a program. A block of
statements — or compound statement — is enclosed by the BEGIN and END keywords, and is executed
as a single unit of code. The main BEGIN…END block may contain any number of other BEGIN…END
blocks, both embedded and sequential. Blocks can be nested to a maximum depth of 512 blocks. All
statements except BEGIN and END are terminated by semicolons (‘;’). No other character is valid for this purpose.
Here we digress a little, to explain how to switch the terminator character in the isql utility to
make it possible to define PSQL modules in that environment without conflicting with isql
itself, which uses the same character, semicolon (‘;’), as its own statement terminator.
Sets the terminator character(s) to avoid conflict with the terminator character in PSQL
statements
Available in
ISQL only
Syntax

SET TERM new_terminator old_terminator

Argument Description
new_terminator New terminator string
old_terminator Old terminator string
When you write your triggers, stored procedures, stored functions or PSQL blocks in
isql — either in the interactive interface or in scripts — running a SET TERM statement is
needed to switch the normal isql statement terminator from the semicolon to another
character or short string, to avoid conflicts with the non-changeable semicolon terminator in
PSQL. The switch to an alternative terminator needs to be done before you begin defining
PSQL objects or running your scripts.
The alternative terminator can be any string of characters except for a space, an apostrophe
or the current terminator character(s). Any letter character(s) used will be case-sensitive.
Example
Changing the default semicolon to ‘^’ (caret) and using it to submit a stored procedure
definition:
SET TERM ^;

CREATE OR ALTER PROCEDURE MY_PROC
AS
BEGIN
  /* procedure body */
END^

SET TERM ;^
The external module body specifies the UDR engine used to execute the external module, and
optionally specifies the name of the UDR routine to call (<extname>) and/or a string (<extbody>)
with UDR-specific semantics.
Configuration of external modules and UDR engines is not covered further in this Language
Reference. Consult the documentation of a specific UDR engine for details.
Modularity
applications working with the database can use the same stored procedure, thereby reducing
the size of the application code and avoiding code duplication.
Enhanced Performance
since stored procedures are executed on a server instead of at the client, network traffic is
reduced, which improves performance.
Executable Procedures
Executable procedures usually modify data in a database. They can receive input parameters and
return a single set of output (RETURNS) parameters. They are called using the EXECUTE PROCEDURE
statement. See an example of an executable stored procedure at the end of the CREATE PROCEDURE
section of Chapter 5, Data Definition (DDL) Statements.
Selectable Procedures
Selectable stored procedures usually retrieve data from a database, returning an arbitrary number
of rows to the caller. The caller receives the output one row at a time from a row buffer that the
database engine prepares for it.
Selectable procedures can be useful for obtaining complex sets of data that are often impossible or
too difficult or too slow to retrieve using regular DSQL SELECT queries. Typically, this style of
procedure iterates through a looping process of extracting data, perhaps transforming it before
filling the output variables (parameters) with fresh data at each iteration of the loop. A SUSPEND
statement at the end of the iteration fills the buffer and waits for the caller to fetch the row.
Execution of the next iteration of the loop begins when the buffer has been cleared.
Selectable procedures may have input parameters, and the output set is specified by the RETURNS
clause in the header.
A selectable stored procedure is called with a SELECT statement. See an example of a selectable
stored procedure at the end of the CREATE PROCEDURE section of Chapter 5, Data Definition (DDL)
Statements.
The syntax for creating executable stored procedures and selectable stored procedures is the same.
The difference comes in the logic of the program code, specifically the absence or presence of a
SUSPEND statement.
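To illustrate the difference, here is a minimal sketch of a selectable procedure; the name and logic are illustrative, not taken from this reference. The presence of SUSPEND inside the loop is what makes it selectable.

```sql
SET TERM ^;

-- Illustrative sketch: SUSPEND makes this procedure selectable
CREATE PROCEDURE LIST_DIGITS
RETURNS (D INTEGER)
AS
BEGIN
  D = 0;
  WHILE (D < 10) DO
  BEGIN
    SUSPEND;      -- hands the current row to the caller
    D = D + 1;
  END
END^

SET TERM ;^
```

It would be called with, for example, SELECT D FROM LIST_DIGITS. Without the SUSPEND, the same header and loop would define an executable procedure returning only the final values of its output parameters.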
For information about creating stored procedures, see CREATE PROCEDURE in Chapter 5, Data
Definition (DDL) Statements.
For information about modifying existing stored procedures, see ALTER PROCEDURE, CREATE OR ALTER
PROCEDURE, RECREATE PROCEDURE.
For information about dropping (deleting) stored procedures, see DROP PROCEDURE.
Unlike stored procedures, stored functions always return one scalar value. To return a value from a
stored function, use the RETURN statement, which immediately terminates the function.
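A sketch of a stored function using RETURN (the function name and rate are illustrative):

```sql
SET TERM ^;

-- Illustrative stored function: RETURN immediately terminates it
CREATE FUNCTION ADD_VAT (AMOUNT NUMERIC(18,2))
RETURNS NUMERIC(18,2)
AS
BEGIN
  RETURN AMOUNT * 1.21;
END^

SET TERM ;^
```

It can then be called in any expression context, for example SELECT ADD_VAT(100) FROM RDB$DATABASE.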
For information about creating stored functions, see CREATE FUNCTION in Chapter 5, Data Definition
(DDL) Statements.
For information about modifying stored functions, see ALTER FUNCTION, CREATE OR ALTER FUNCTION,
RECREATE FUNCTION.
For information about dropping (deleting) stored functions, see DROP FUNCTION.
A PSQL block is not defined and stored as an object, unlike stored procedures and triggers. It is
executed at run time and cannot reference itself.
Like stored procedures, anonymous PSQL blocks can be used to process data and to retrieve data
from the database.
Syntax (incomplete)
EXECUTE BLOCK
[(<inparam> = ? [, <inparam> = ? ...])]
[RETURNS (<outparam> [, <outparam> ...])]
<psql-module-body>
<psql-module-body> ::=
!! See Syntax of Module Body !!
Argument Description
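A minimal anonymous block following this syntax; the parameter names are illustrative:

```sql
-- One input parameter (bound to a statement parameter) and one output parameter
EXECUTE BLOCK (X INTEGER = ?)
RETURNS (Y INTEGER)
AS
BEGIN
  Y = X * 2;
  SUSPEND;
END
```

When such a block is submitted through isql, an alternative terminator has to be set with SET TERM first, as for any other PSQL module.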
See also
7.5. Packages
A package is a group of stored procedures and functions defined as a single database object.
Firebird packages are made up of two parts: a header (PACKAGE keyword) and a body (PACKAGE BODY
keywords). This separation is similar to Delphi modules; the header corresponds to the interface
part, and the body corresponds to the implementation part.
The notion of “packaging” the code components of a database operation has several
advantages:
Modularisation
Blocks of interdependent code are grouped into logical modules, as done in other programming
languages.
In programming, it is well recognised that grouping code in various ways, in namespaces, units
or classes, for example, is a good thing. This is not possible with standard stored procedures and
functions in the database. Although they can be grouped in different script files, two problems
remain:
b. Scripted routines all participate in a flat namespace and are callable by everyone (we are not
referring to security permissions here).
Whenever a packaged routine determines that it uses a certain database object, a dependency
on that object is registered in Firebird’s system tables. Thereafter, to drop, or maybe alter that
object, you first need to remove what depends on it. Since the dependency on other objects only
exists for the package body, and not the package header, this package body can easily be
removed, even if another object depends on this package. When the body is dropped, the header
remains, allowing you to recreate its body once the changes related to the removed object are
done.
Packaged routines do not have individual privileges. The privileges apply to the package as a
whole. Privileges granted to packages are valid for all package body routines, including private
ones, but are stored for the package header. An EXECUTE privilege on a package granted to a user
(or other object), grants that user the privilege to execute all routines defined in the package
header.
For example
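A package-level grant might look like this (the package and user names are illustrative):

```sql
-- EXECUTE is granted on the package as a whole;
-- there are no per-routine privileges inside a package
GRANT EXECUTE ON PACKAGE APP_UTILS TO USER JOHN;
```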
Private scopes
Stored procedures and functions can be private; that is, they can be made available only for
internal use within the defining package.
All programming languages have the notion of routine scope, which is not possible without some
form of grouping. Firebird packages also work like Delphi units in this regard. If a routine is not
declared in the package header (interface) and is implemented in the body (implementation), it
becomes a private routine. A private routine can only be called from inside its package.
For information on creating packages, see CREATE PACKAGE, and CREATE PACKAGE BODY in Chapter 5,
Data Definition (DDL) Statements.
For information on modifying existing package header or bodies, see ALTER PACKAGE, CREATE OR ALTER
PACKAGE, RECREATE PACKAGE, and RECREATE PACKAGE BODY.
For information on dropping (deleting) a package, see DROP PACKAGE, and DROP PACKAGE BODY.
7.6. Triggers
A trigger is another form of executable code that is stored in the metadata of the database for
execution by the server. A trigger cannot be called directly. It is called automatically (“fired”) when
data-changing events involving one particular table or view occur, or on a specific database or DDL
event.
A trigger applies to exactly one table or view or database event, and only one phase in an event
(BEFORE or AFTER the event). A single DML trigger might be written to fire only when one specific
data-changing event occurs (INSERT, UPDATE or DELETE), or it might be written to apply to more than
one of those.
A DML trigger is executed in the context of the transaction in which the data-changing DML
statement is running. For triggers that respond to database events, the rule is different: for DDL
triggers and transaction triggers, the trigger runs in the same transaction that executed the DDL, for
other types, a new default transaction is started.
More than one trigger can be defined for each phase-event combination. The order in which they
are executed — also known as “firing order” — can be specified explicitly with the optional POSITION
argument in the trigger definition. You have 32,767 numbers to choose from. Triggers with the
lowest position numbers fire first.
If a POSITION clause is omitted, the position is 0. If multiple triggers have the same position and
phase, those triggers will be executed in an undefined order, while respecting the total order by
position and phase.
DML triggers are those that fire when a DML operation changes the state of data: updating rows in
tables, inserting new rows or deleting rows. They can be defined for both tables and views.
Trigger Options
Six base options are available for the event-phase combination for tables and views: BEFORE
INSERT, AFTER INSERT, BEFORE UPDATE, AFTER UPDATE, BEFORE DELETE and AFTER DELETE.
These base forms are for creating single phase/single-event triggers. Firebird also supports forms
for creating triggers for one phase and multiple-events, BEFORE INSERT OR UPDATE OR DELETE, for
example, or AFTER UPDATE OR DELETE: the combinations are your choice.
The Boolean context variables INSERTING, UPDATING and DELETING can be used in the body of a trigger
to determine the type of event that fired the trigger.
For DML triggers, the Firebird engine provides access to sets of OLD and NEW context variables (or,
“records”). Each is a record of the values of the entire row: one for the values as they are before the
data-changing event (the BEFORE phase) and one for the values as they will be after the event (the
AFTER phase). They are referenced in statements using the form NEW.column_name and
OLD.column_name, respectively. The column_name can be any column in the table’s definition, not just
those that are being updated.
• In BEFORE UPDATE and BEFORE INSERT code, the NEW value is read/write, unless it is a COMPUTED BY
column
• In INSERT triggers, references to OLD are invalid and will throw an exception
• In DELETE triggers, references to NEW are invalid and will throw an exception
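These rules can be seen in a sketch of a multi-event trigger; the table and column names are illustrative:

```sql
SET TERM ^;

CREATE TRIGGER TR_DOC_BIU FOR DOCUMENT
ACTIVE BEFORE INSERT OR UPDATE POSITION 0
AS
BEGIN
  -- NEW is read/write in BEFORE INSERT and BEFORE UPDATE triggers
  IF (INSERTING AND NEW.CREATED IS NULL) THEN
    NEW.CREATED = CURRENT_TIMESTAMP;
  -- OLD may only be referenced when a row already exists (UPDATE/DELETE)
  IF (UPDATING AND NEW.STATUS IS DISTINCT FROM OLD.STATUS) THEN
    NEW.CHANGED = CURRENT_TIMESTAMP;
END^

SET TERM ;^
```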
A trigger associated with a database or transaction event can be defined for the following events:
When a transaction is started (ON TRANSACTION START), the trigger is executed in the transaction
context of the started transaction, immediately after start.
When a transaction is committed (ON TRANSACTION COMMIT), the trigger is executed in the
transaction context of the committing transaction, immediately before commit.
When a transaction is rolled back (ON TRANSACTION ROLLBACK), the trigger is executed in the
transaction context of the rolling-back transaction, immediately before rollback.
DDL triggers fire on specified metadata change events in a specified phase. BEFORE triggers run
before changes to system tables. AFTER triggers run after changes to system tables.
DDL triggers are a specific type of database trigger, so most rules for and semantics of database
triggers also apply for DDL triggers.
Semantics
1. BEFORE triggers are fired before changes to the system tables. AFTER triggers are fired after
system table changes.
Important Rule
The event type [BEFORE | AFTER] of a DDL trigger cannot be changed.
2. When a DDL statement fires a trigger that raises an exception (BEFORE or AFTER, intentionally or
unintentionally) the statement will not be committed. That is, exceptions can be used to ensure
that a DDL operation will fail if the conditions are not precisely as intended.
3. DDL trigger actions are executed only when committing the transaction in which the affected
DDL command runs. Never overlook the fact that what is possible to do in an AFTER trigger is
exactly what is possible to do after a DDL command without autocommit. You cannot, for
example, create a table and use it in the trigger.
4. With “CREATE OR ALTER” statements, a trigger is fired one time at the CREATE event or the ALTER
event, according to the previous existence of the object. With RECREATE statements, a trigger is
fired for the DROP event if the object exists, and for the CREATE event.
5. ALTER and DROP events are generally not fired when the object name does not exist. For the
exception, see point 6.
6. The exception to rule 5 is that BEFORE ALTER/DROP USER triggers fire even when the username
does not exist. This is because, underneath, these commands perform DML on the security
database, and the verification is not done before the command on it is run. This is likely to be
different with embedded users, so do not write code that depends on this.
7. If an exception is raised after the DDL command starts its execution and before AFTER triggers
are fired, AFTER triggers will not be fired.
8. Packaged procedures and functions do not fire individual {CREATE | ALTER | DROP} {PROCEDURE |
FUNCTION} triggers.
When a DDL trigger is running, the DDL_TRIGGER namespace is available for use with
RDB$GET_CONTEXT. This namespace contains information on the currently firing trigger.
See also The DDL_TRIGGER Namespace in RDB$GET_CONTEXT in Chapter 8, Built-in Scalar Functions.
For information on creating triggers, see CREATE TRIGGER, CREATE OR ALTER TRIGGER, and RECREATE
TRIGGER in Chapter 5, Data Definition (DDL) Statements.
For information on modifying triggers, see ALTER TRIGGER, CREATE OR ALTER TRIGGER, and RECREATE
TRIGGER.
The colon marker prefix (‘:’) is used in PSQL to mark a reference to a variable in a DML
statement. The colon marker is not required before variable names in other PSQL code.
The colon prefix can also be used for the NEW and OLD contexts, and for cursor variables.
Syntax
varname = <value_expr>;
Argument Description
PSQL uses the equal symbol (‘=’) as its assignment operator. The assignment statement assigns a
SQL expression value on the right to the variable on the left of the operator. The expression can be
any valid SQL expression: it may contain literals, internal variable names, arithmetic, logical and
string operations, calls to internal functions, stored functions or external functions (UDFs).
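A few assignment statements as they might appear in a PSQL body; the variable names are illustrative and assumed to have been declared earlier:

```sql
AMOUNT = 100.00;                                -- literal
TAX = AMOUNT * 0.21;                            -- arithmetic expression
MSG = 'Tax due: ' || CAST(TAX AS VARCHAR(20));  -- string operation and function call
```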
See also
DECLARE VARIABLE
Management statements are allowed in PSQL modules (triggers, procedures, functions and PSQL
blocks), which is especially helpful for applications that need management statements to be
executed at the start of a session, specifically in ON CONNECT triggers.
SET BIND
SET DECFLOAT
SET ROLE
See also
Management Statements
Syntax

DECLARE [VARIABLE] varname
  <domain_or_non_array_type> [NOT NULL] [COLLATE collation]
  [{DEFAULT | = } <initvalue>];

<domain_or_non_array_type> ::=
  !! See Scalar Data Types Syntax !!

<initvalue> ::= <literal> | <context_var>
Argument Description
collation Collation
literal Literal of a type compatible with the type of the local variable
context_var Any context variable whose type is compatible with the type of the local
variable
The statement DECLARE [VARIABLE] is used for declaring a local variable. One DECLARE [VARIABLE]
statement is required for each local variable. Any number of DECLARE [VARIABLE] statements can be
included and in any order. The name of a local variable must be unique among the names of local
variables and input and output parameters declared for the module.
• A domain name can be specified as the type; the variable will inherit all of its attributes.
• If the TYPE OF domain clause is used instead, the variable will inherit only the domain’s data type,
and, if applicable, its character set and collation attributes. Any default value or constraints
such as NOT NULL or CHECK constraints are not inherited.
• If the TYPE OF COLUMN relation.column option is used to “borrow” from a column in a table or
view, the variable will inherit only the column’s data type, and, if applicable, its character set
and collation attributes. Any other attributes are ignored.
For local variables, you can specify the NOT NULL constraint, disallowing NULL values for the variable.
If a domain has been specified as the data type and the domain already has the NOT NULL constraint,
the declaration is unnecessary. For other forms, including use of a domain that is nullable, the NOT
NULL constraint can be included if needed.
Unless specified, the character set and collation of a string variable will be the database defaults. A
CHARACTER SET clause can be specified to handle string data that needs a different character set. A
valid collation (COLLATE clause) can also be included, with or without the character set clause.
Initializing a Variable
Local variables are NULL when execution of the module begins. They can be explicitly initialized so
that a starting or default value is available when they are first referenced. The initial value can be
specified in two ways, DEFAULT <initvalue> and = <initvalue>. The value can be any type-
compatible literal or context variable, including NULL.
Be sure to use the DEFAULT clause for any variable that has a NOT NULL constraint
and does not otherwise have a default value available (i.e. inherited from a
domain).
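The declaration forms described above, sketched together in one procedure header; the procedure, domain and table names are illustrative:

```sql
SET TERM ^;

CREATE PROCEDURE DEMO_DECL
AS
  DECLARE VARIABLE CNT INTEGER = 0;                       -- initialized with '='
  DECLARE VARIABLE NAME VARCHAR(40) NOT NULL DEFAULT '';  -- NOT NULL needs a default
  DECLARE VARIABLE CODE TYPE OF COLUMN COUNTRY.COUNTRY;   -- borrows only the column's type
BEGIN
  /* procedure body */
END^

SET TERM ;^
```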
See also
Data Types and Subtypes, Custom Data Types — Domains, CREATE DOMAIN
Syntax

DECLARE [VARIABLE] cursor_name
  [SCROLL | NO SCROLL] CURSOR FOR (<select>);
Argument Description
The DECLARE … CURSOR … FOR statement binds a named cursor to the result set obtained by the
SELECT statement specified in the FOR clause. In the body code, the cursor can be opened, used to
iterate row-by-row through the result set, and closed. While the cursor is open, the code can
perform positioned updates and deletes using the WHERE CURRENT OF in the UPDATE or DELETE
statement.
The cursor can be forward-only (unidirectional) or scrollable. The optional clause SCROLL makes the
cursor scrollable, the NO SCROLL clause, forward-only. By default, cursors are forward-only.
Forward-only cursors can — as the name implies — only move forward in the dataset. Forward-only
cursors only support the FETCH [NEXT FROM] statement, other fetch options raise an error. Scrollable
cursors allow you to move not only forward in the dataset, but also back, as well as N positions
relative to the current position.
Cursor Idiosyncrasies
• The optional FOR UPDATE clause can be included in the SELECT statement, but its absence does not
prevent successful execution of a positioned update or delete
• Care should be taken to ensure that the names of declared cursors do not conflict with any
names used subsequently in statements for AS CURSOR clauses
• If the cursor is needed only to walk the result set, it is nearly always easier and less error-prone
to use a FOR SELECT statement with the AS CURSOR clause. Declared cursors must be explicitly
opened, used to fetch data, and closed. The context variable ROW_COUNT has to be checked after
each fetch and, if its value is zero, the loop has to be terminated. A FOR SELECT statement does
this automatically.
Nevertheless, declared cursors provide a high level of control over sequential events and allow
several cursors to be managed in parallel.
Each parameter has to have been declared beforehand as a PSQL variable, or as input or output
parameters. When the cursor is opened, the parameter is assigned the current value of the
variable.
Note particularly that the behaviour may depend on the query plan, specifically on
the indexes being used. Currently, there are no strict rules for this behaviour, and
this may change in future versions of Firebird.
EXECUTE BLOCK
RETURNS (
N INT,
RNAME CHAR(63))
AS
-- Declaring a scrollable cursor
DECLARE C SCROLL CURSOR FOR (
SELECT
ROW_NUMBER() OVER (ORDER BY RDB$RELATION_NAME) AS N,
RDB$RELATION_NAME
FROM RDB$RELATIONS
ORDER BY RDB$RELATION_NAME);
BEGIN
/* PSQL statements */
END
3. A collection of scripts for creating views with a PSQL block using named cursors.
EXECUTE BLOCK
RETURNS (
SCRIPT BLOB SUB_TYPE TEXT)
AS
DECLARE VARIABLE FIELDS VARCHAR(8191);
DECLARE VARIABLE FIELD_NAME TYPE OF RDB$FIELD_NAME;
DECLARE VARIABLE RELATION RDB$RELATION_NAME;
DECLARE VARIABLE SOURCE TYPE OF COLUMN RDB$RELATIONS.RDB$VIEW_SOURCE;
DECLARE VARIABLE CUR_R CURSOR FOR (
SELECT
RDB$RELATION_NAME,
RDB$VIEW_SOURCE
FROM
RDB$RELATIONS
WHERE
RDB$VIEW_SOURCE IS NOT NULL);
-- Declaring a named cursor where
-- a local variable is used
DECLARE CUR_F CURSOR FOR (
SELECT
RDB$FIELD_NAME
FROM
RDB$RELATION_FIELDS
WHERE
-- the variable must be declared earlier
RDB$RELATION_NAME = :RELATION);
BEGIN
OPEN CUR_R;
WHILE (1 = 1) DO
BEGIN
FETCH CUR_R
INTO :RELATION, :SOURCE;
IF (ROW_COUNT = 0) THEN
LEAVE;
FIELDS = NULL;
-- The CUR_F cursor will use the value
-- of the RELATION variable initialized above
OPEN CUR_F;
WHILE (1 = 1) DO
BEGIN
FETCH CUR_F
INTO :FIELD_NAME;
IF (ROW_COUNT = 0) THEN
LEAVE;
IF (FIELDS IS NULL) THEN
FIELDS = TRIM(FIELD_NAME);
ELSE
FIELDS = FIELDS || ', ' || TRIM(FIELD_NAME);
END
CLOSE CUR_F;
SUSPEND;
END
CLOSE CUR_R;
END
See also
OPEN, FETCH, CLOSE
Declares a sub-function
Syntax
<subfunc-header> ::=
DECLARE FUNCTION subfuncname [ ( [ <in_params> ] ) ]
RETURNS <domain_or_non_array_type> [COLLATE collation]
[DETERMINISTIC]
<in_params> ::=
!! See CREATE FUNCTION Syntax !!
<domain_or_non_array_type> ::=
!! See Scalar Data Types Syntax !!
<psql-module-body> ::=
!! See Syntax of Module Body !!
Argument Description
The DECLARE FUNCTION statement declares a sub-function. A sub-function is only visible to the PSQL
module that defined the sub-function.
A sub-function can use variables, but not cursors, from its parent module. It can access other
routines from its parent modules, including recursive calls to itself.
• A sub-function cannot be nested in another subroutine. Subroutines are only supported in top-
level PSQL modules (stored procedures, stored functions, triggers and PSQL blocks). This
restriction is not enforced by the syntax, but attempts to create nested sub-functions will raise
an error “feature is not supported” with detail message “nested sub function”.
• Currently, a sub-function has no direct access to use cursors from its parent module.
A sub-function can be forward declared to resolve mutual dependencies between subroutines, and
must be followed by its actual definition. When a sub-function is forward declared and has
parameters with default values, the default values should only be specified in the forward
declaration, and should not be repeated in subfunc_def.
Declaring a sub-function with the same name as a stored function will hide that
stored function from your module. It will not be possible to call that stored
function.
Contrary to DECLARE [VARIABLE], a DECLARE FUNCTION is not terminated by a
semicolon. The END of its main BEGIN … END block is considered its terminator.
Examples of Sub-Functions
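A simple sketch of a sub-function inside an anonymous block; the names are illustrative:

```sql
EXECUTE BLOCK
RETURNS (N INTEGER)
AS
  -- Sub-function visible only inside this block;
  -- note: no semicolon after its final END
  DECLARE FUNCTION F_INC (X INTEGER)
    RETURNS INTEGER
  AS
  BEGIN
    RETURN X + 1;
  END
BEGIN
  N = F_INC(41);
  SUSPEND;
END
```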
See also
DECLARE PROCEDURE, CREATE FUNCTION
Declares a sub-procedure
Syntax
<subproc-header> ::=
DECLARE subprocname [ ( [ <in_params> ] ) ]
[RETURNS (<out_params>)]
<in_params> ::=
!! See CREATE PROCEDURE Syntax !!
<domain_or_non_array_type> ::=
!! See Scalar Data Types Syntax !!
<psql-module-body> ::=
!! See Syntax of Module Body !!
Argument Description
The DECLARE PROCEDURE statement declares a sub-procedure. A sub-procedure is only visible to the
PSQL module that defined the sub-procedure.
A sub-procedure can use variables, but not cursors, from its parent module. It can access other
routines from its parent modules.
• Currently, the sub-procedure has no direct access to use cursors from its parent module.
Examples of Sub-Procedures
EXECUTE BLOCK
RETURNS (name VARCHAR(63))
AS
-- Sub-procedure returning a list of tables
DECLARE PROCEDURE get_tables
RETURNS (table_name VARCHAR(63))
AS
  BEGIN
    FOR SELECT RDB$RELATION_NAME
      FROM RDB$RELATIONS
      WHERE RDB$VIEW_BLR IS NULL
      INTO :table_name
    DO SUSPEND;
  END
BEGIN
  FOR SELECT table_name
    FROM get_tables
    INTO :name
  DO SUSPEND;
END
See also
DECLARE FUNCTION, CREATE PROCEDURE
Syntax
<block> ::=
BEGIN
[<compound_statement> ...]
[<when_do> ...]
END
<when_do> ::=
!! See WHEN ... DO !!
The BEGIN … END construct is a two-part statement that wraps a block of statements that are
executed as one unit of code. Each block starts with the keyword BEGIN and ends with the keyword
END. Blocks can be nested a maximum depth of 512 nested blocks. A block can be empty, allowing
them to act as stubs, without the need to write dummy statements.
For error handling, you can add one or more WHEN … DO statements immediately before END. Other
statements are not allowed after WHEN … DO.
The BEGIN … END itself should not be followed by a statement terminator (semicolon). However,
when defining or altering a PSQL module in the isql utility, that application requires that the last
END statement be followed by its own terminator character, that was previously switched — using
SET TERM — to a string other than a semicolon. That terminator is not part of the PSQL syntax.
The final, or outermost, END statement in a trigger terminates the trigger. What the final END
statement does in a stored procedure depends on the type of procedure:
• In a selectable procedure, the final END statement returns control to the caller, returning
SQLCODE 100, indicating that there are no more rows to retrieve
• In an executable procedure, the final END statement returns control to the caller, along with the
current values of any output parameters defined.
A sample procedure from the employee.fdb database, showing simple usage of BEGIN … END blocks:
SET TERM ^;
CREATE OR ALTER PROCEDURE DEPT_BUDGET (
DNO CHAR(3))
RETURNS (
TOT DECIMAL(12,2))
AS
DECLARE VARIABLE SUMB DECIMAL(12,2);
DECLARE VARIABLE RDNO CHAR(3);
DECLARE VARIABLE CNT INTEGER;
BEGIN
TOT = 0;
SELECT BUDGET
FROM DEPARTMENT
WHERE DEPT_NO = :DNO
INTO :TOT;
SELECT COUNT(BUDGET)
FROM DEPARTMENT
WHERE HEAD_DEPT = :DNO
INTO :CNT;
IF (CNT = 0) THEN
  SUSPEND;
FOR SELECT DEPT_NO
  FROM DEPARTMENT
  WHERE HEAD_DEPT = :DNO
  INTO :RDNO
DO
BEGIN
  EXECUTE PROCEDURE DEPT_BUDGET(:RDNO)
    RETURNING_VALUES :SUMB;
  TOT = TOT + SUMB;
END
SUSPEND;
END^
SET TERM ;^
See also
EXIT, SET TERM, WHEN … DO
Conditional branching
Syntax
IF (<condition>)
THEN <compound_statement>
[ELSE <compound_statement>]
Argument Description
The conditional branch statement IF … THEN is used to branch the execution process in a PSQL
module. The condition is always enclosed in parentheses. If the condition returns the value TRUE,
execution branches to the statement or the block of statements after the keyword THEN. If an ELSE is
present, and the condition returns FALSE or UNKNOWN, execution branches to the statement or the
block of statements after it.
Multi-Branch Decisions
PSQL does not provide more advanced multi-branch jumps, such as CASE or SWITCH. However,
it is possible to chain IF … THEN … ELSE statements, see the example section below.
Alternatively, the CASE statement from DSQL is available in PSQL and is able to satisfy at least
some use cases in the manner of a switch:
CASE <test_expr>
WHEN <expr> THEN <result>
[WHEN <expr> THEN <result> ...]
[ELSE <defaultresult>]
END
CASE
WHEN <bool_expr> THEN <result>
[WHEN <bool_expr> THEN <result> ...]
[ELSE <defaultresult>]
END
Example in PSQL
...
C = CASE
WHEN A=2 THEN 1
WHEN A=1 THEN 3
ELSE 0
END;
...
IF Examples
1. An example using the IF statement. Assume that the variables FIRST, LINE2 and LAST were
declared earlier.
...
IF (FIRST IS NOT NULL) THEN
LINE2 = FIRST || ' ' || LAST;
ELSE
LINE2 = LAST;
...
2. Given IF … THEN … ELSE is a statement, it is possible to chain them together. Assume that the
INT_VALUE and STRING_VALUE variables were declared earlier.
IF (INT_VALUE = 1) THEN
STRING_VALUE = 'one';
ELSE IF (INT_VALUE = 2) THEN
STRING_VALUE = 'two';
ELSE IF (INT_VALUE = 3) THEN
STRING_VALUE = 'three';
ELSE
STRING_VALUE = 'too much';
This specific example can be replaced with a simple CASE or the DECODE function.
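For comparison, the same logic written with the built-in DECODE function, using the variables from the example above:

```sql
STRING_VALUE = DECODE(INT_VALUE,
  1, 'one',
  2, 'two',
  3, 'three',
  'too much');   -- default result when no value matches
```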
See also
WHILE … DO, CASE
7.7.9. WHILE … DO
Looping construct
Syntax
[label:]
WHILE (<condition>) DO
<compound_statement>
Argument Description
label Optional label for LEAVE and CONTINUE. Follows the rules for identifiers.
A WHILE statement implements the looping construct in PSQL. The statement or the block of
statements will be executed as long as the condition returns TRUE. Loops can be nested to any depth.
WHILE … DO Examples
A procedure calculating the sum of numbers from 1 to I shows how the looping construct is used.
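Such a procedure might be sketched as follows; the name SUM_INT is an assumption. EXECUTE PROCEDURE SUM_INT(4) then yields 10 (1 + 2 + 3 + 4), the output shown below.

```sql
SET TERM ^;

CREATE PROCEDURE SUM_INT (I INTEGER)
RETURNS (S INTEGER)
AS
BEGIN
  S = 0;
  WHILE (I > 0) DO
  BEGIN
    S = S + I;   -- accumulate the running total
    I = I - 1;   -- move the counter toward the exit condition
  END
END^

SET TERM ;^
```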
S
==========
10
See also
IF … THEN … ELSE, BREAK, LEAVE, CONTINUE, EXIT, FOR SELECT, FOR EXECUTE STATEMENT
7.7.10. BREAK
Exits a loop
Syntax
[label:]
<loop_stmt>
BEGIN
...
BREAK;
...
END
<loop_stmt> ::=
FOR <select_stmt> INTO <var_list> DO
| FOR EXECUTE STATEMENT ... INTO <var_list> DO
| WHILE (<condition>) DO
Argument Description
label Label
The BREAK statement immediately terminates the inner loop of a WHILE or FOR looping statement.
Code continues to be executed from the first statement after the terminated loop block. BREAK is
similar to LEAVE, except that it cannot take a label; as BREAK is deprecated, use of LEAVE is
recommended.
See also
LEAVE
7.7.11. LEAVE
Exits a loop
Syntax
[label:]
<loop_stmt>
BEGIN
...
LEAVE [label];
...
END
<loop_stmt> ::=
FOR <select_stmt> INTO <var_list> DO
| FOR EXECUTE STATEMENT ... INTO <var_list> DO
| WHILE (<condition>) DO
Argument Description
label Label
The LEAVE statement immediately terminates the inner loop of a WHILE or FOR looping statement.
Using the optional label parameter, LEAVE can also exit an outer loop, that is, the loop labelled with
label. Code continues to be executed from the first statement after the terminated loop block.
LEAVE Examples
1. Leaving a loop if an error occurs on an insert into the NUMBERS table. The code continues to be
executed from the line C = 0.
...
WHILE (B < 10) DO
BEGIN
INSERT INTO NUMBERS(B)
VALUES (:B);
B = B + 1;
WHEN ANY DO
BEGIN
EXECUTE PROCEDURE LOG_ERROR (
CURRENT_TIMESTAMP,
'ERROR IN B LOOP');
LEAVE;
END
END
C = 0;
...
2. An example using labels in the LEAVE statement. LEAVE LOOPA terminates the outer loop and LEAVE
LOOPB terminates the inner loop. Note that the plain LEAVE statement would be enough to
terminate the inner loop.
...
STMT1 = 'SELECT NAME FROM FARMS';
LOOPA:
FOR EXECUTE STATEMENT :STMT1
INTO :FARM DO
BEGIN
STMT2 = 'SELECT NAME ' || 'FROM ANIMALS WHERE FARM = ''';
LOOPB:
FOR EXECUTE STATEMENT :STMT2 || :FARM || ''''
INTO :ANIMAL DO
BEGIN
IF (ANIMAL = 'FLUFFY') THEN
LEAVE LOOPB;
ELSE IF (ANIMAL = FARM) THEN
LEAVE LOOPA;
SUSPEND;
END
END
...
See also
BREAK, CONTINUE, EXIT
7.7.12. CONTINUE
Syntax
[label:]
<loop_stmt>
BEGIN
...
CONTINUE [label];
...
END
<loop_stmt> ::=
FOR <select_stmt> INTO <var_list> DO
| FOR EXECUTE STATEMENT ... INTO <var_list> DO
| WHILE (<condition>) DO
Argument Description
label Label
The CONTINUE statement skips the remainder of the current block of a loop and starts the next
iteration of the current WHILE or FOR loop. Using the optional label parameter, CONTINUE can also start
the next iteration of an outer loop, that is, the loop labelled with label.
CONTINUE Examples
FOR SELECT A, D
FROM ATABLE INTO achar, ddate
DO
BEGIN
IF (ddate < current_date - 30) THEN
CONTINUE;
/* do stuff */
END
See also
BREAK, LEAVE, EXIT
7.7.13. EXIT
Syntax
EXIT;
The EXIT statement causes execution of the current PSQL module to jump to the final END statement
from any point in the code, thus terminating the program.
EXIT Examples
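A sketch of using EXIT in a selectable procedure (the procedure name is illustrative). The procedure returns the numbers 1 to 100, then terminates:

```sql
CREATE PROCEDURE GEN_100
RETURNS (I INTEGER)
AS
BEGIN
  I = 1;
  WHILE (1 = 1) DO
  BEGIN
    SUSPEND;
    IF (I = 100) THEN
      EXIT;  -- jump to the final END, terminating the procedure
    I = I + 1;
  END
END
```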
See also
BREAK, LEAVE, CONTINUE, SUSPEND
7.7.14. SUSPEND
Passes output to the buffer and suspends execution while waiting for caller to fetch it
Syntax
SUSPEND;
The SUSPEND statement is used in selectable stored procedures to pass the values of output
parameters to a buffer and suspend execution. Execution remains suspended until the calling
application fetches the contents of the buffer. Execution resumes from the statement directly after
the SUSPEND statement. In practice, this is likely to be a new iteration of a looping process.
Important Notes
3. Applications using interfaces that wrap the API perform the fetches from
selectable procedures transparently.
SUSPEND Examples
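A sketch of a selectable procedure using SUSPEND (names are illustrative):

```sql
CREATE PROCEDURE GEN_NUMBERS (MAXNUM INTEGER)
RETURNS (N INTEGER)
AS
BEGIN
  N = 1;
  WHILE (N <= MAXNUM) DO
  BEGIN
    SUSPEND;  -- pass N to the buffer and wait for the client to fetch it
    N = N + 1;
  END
END
```

Selecting from it with SELECT N FROM GEN_NUMBERS(3) returns the rows 1, 2 and 3.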
See also
EXIT
7.7.15. EXECUTE STATEMENT
Executes dynamically created SQL statements
Syntax
<option> ::=
WITH {AUTONOMOUS | COMMON} TRANSACTION
| WITH CALLER PRIVILEGES
| AS USER user
| PASSWORD password
| ROLE role
| ON EXTERNAL [DATA SOURCE] <connection_string>
<connection_string> ::=
!! See <filespec> in the CREATE DATABASE syntax !!
Argument Description
varname Variable
The statement EXECUTE STATEMENT takes a string parameter and executes it as if it were a DSQL
statement. If the statement returns data, it can be passed to local variables by way of an INTO clause.
EXECUTE STATEMENT can only produce a single row of data. Statements producing multiple rows of
data must be executed with FOR EXECUTE STATEMENT.
Parameterized Statements
You can use parameters — either named or positional — in the DSQL statement string. Each
parameter must be assigned a value.
To relax this rule, named parameters can be prefixed with the keyword EXCESS to indicate that
the parameter may be absent from the statement text. This option is useful for dynamically
generated statements that conditionally include or exclude certain parameters.
3. If the statement has parameters, they must be enclosed in parentheses when EXECUTE STATEMENT
is called, regardless of whether they come directly as strings, as variable names or as
expressions
4. Each named parameter must be prefixed by a colon (‘:’) in the statement string itself, but not
when the parameter is assigned a value
5. Positional parameters must be assigned their values in the same order as they appear in the
query text
6. The assignment operator for parameters is the special operator “:=”, similar to the assignment
operator in Pascal
7. Each named parameter can be used in the statement more than once, but its value must be
assigned only once
8. With positional parameters, the number of assigned values must match the number of
parameter placeholders (question marks) in the statement exactly
9. A named parameter in the statement text can only be a regular identifier (it cannot be a quoted
identifier)
...
DECLARE license_num VARCHAR(15);
DECLARE connect_string VARCHAR (100);
DECLARE stmt VARCHAR (100) =
  'SELECT license '
  || 'FROM cars '
  || 'WHERE driver = :driver AND location = :loc';
BEGIN
-- ...
EXECUTE STATEMENT (stmt)
(driver := current_driver,
loc := current_location)
ON EXTERNAL connect_string
INTO license_num;
3. Use of EXCESS to allow named parameters to be unused (note: this is a FOR EXECUTE STATEMENT):
CREATE PROCEDURE P_EXCESS (A_ID INT, A_TRAN INT = NULL, A_CONN INT = NULL)
RETURNS (ID INT, TRAN INT, CONN INT)
AS
DECLARE S VARCHAR(255) = 'SELECT * FROM TTT WHERE ID = :ID';
DECLARE W VARCHAR(255) = '';
BEGIN
IF (A_TRAN IS NOT NULL)
THEN W = W || ' AND TRAN = :a';
IF (W <> '')
THEN S = S || W;
-- OK in all cases
FOR EXECUTE STATEMENT (:S) (EXCESS a := :A_TRAN, EXCESS b := :A_CONN, id := :A_ID)
INTO :ID, :TRAN, :CONN
DO SUSPEND;
END
By default, the executed SQL statement runs within the current transaction. Using WITH AUTONOMOUS
TRANSACTION causes a separate transaction to be started, with the same parameters as the current
transaction. This separate transaction will be committed if the statement executed without
errors, and rolled back otherwise.
The clause WITH COMMON TRANSACTION uses the current transaction whenever possible; this is the
default behaviour. If the statement must run in a separate connection, an already started
transaction within that connection is used, if available. Otherwise, a new transaction is started with
the same parameters as the current transaction. Any new transactions started under the “COMMON”
regime are committed or rolled back with the current transaction.
By default, the SQL statement is executed with the privileges of the current user. Specifying WITH
CALLER PRIVILEGES combines the privileges of the calling procedure or trigger with those of the user,
as if the statement were executed directly by the routine. WITH CALLER PRIVILEGES has no effect if the
ON EXTERNAL clause is also present.
With ON EXTERNAL [DATA SOURCE], the SQL statement is executed in a separate connection to the same
or another database, possibly even on another server. If connection_string is NULL or “''” (empty
string), the entire ON EXTERNAL [DATA SOURCE] clause is considered absent, and the statement is
executed against the current database.
Connection Pooling
• External connections made by statements WITH COMMON TRANSACTION (the default) will remain
open until the current transaction ends. They can be reused by subsequent calls to EXECUTE
STATEMENT, but only if connection_string is identical, including case
• External connections made by statements WITH AUTONOMOUS TRANSACTION are closed as soon as the
statement has been executed
• Statements using WITH AUTONOMOUS TRANSACTION can and will re-use connections that were opened
earlier by statements WITH COMMON TRANSACTION. If this happens, the reused connection will be left
open after the statement has been executed. (It must be, because it has at least one active
transaction!)
Transaction Pooling
• If WITH COMMON TRANSACTION is in effect, transactions will be reused as much as possible. They will
be committed or rolled back together with the current transaction
• If WITH AUTONOMOUS TRANSACTION is specified, a fresh transaction will always be started for the
statement. This transaction will be committed or rolled back immediately after the statement’s
execution
Exception Handling
When ON EXTERNAL is used, the extra connection is always made via a so-called external provider,
even if the connection is to the current database. One of the consequences is that exceptions cannot
be caught in the usual way. Every exception caused by the statement is wrapped in either an
eds_connection or an eds_statement error. To catch them in your PSQL code, you have to use WHEN
GDSCODE eds_connection, WHEN GDSCODE eds_statement or WHEN ANY.
Without ON EXTERNAL, exceptions are caught in the usual way, even if an extra connection is made to
the current database.
Miscellaneous Notes
• The character set used for the external connection is the same as that for the current connection
The optional AS USER, PASSWORD and ROLE clauses allow specification of which user will execute the
SQL statement and with which role. The method of user login, and whether a separate connection is
opened, depends on the presence and values of the ON EXTERNAL [DATA SOURCE], AS USER, PASSWORD
and ROLE clauses:
• If ON EXTERNAL is present:
◦ If at least one of AS USER, PASSWORD and ROLE is present, native authentication is attempted
with the given parameter values (locally or remotely, depending on connection_string). No
defaults are used for missing parameters
◦ If all three are absent, and connection_string contains no hostname, then the new
connection is established on the local server with the same user and role as the current
connection. The term 'local' means “on the same machine as the server” here. This is not
necessarily the location of the client
◦ If all three are absent, and connection_string contains a hostname, then trusted
authentication is attempted on the remote host (again, 'remote' from the perspective of the
server). If this succeeds, the remote operating system will provide the username (usually the
operating system account under which the Firebird process runs)
• If ON EXTERNAL is absent:
◦ If at least one of AS USER, PASSWORD and ROLE is present, a new connection to the current
database is opened with the supplied parameter values. No defaults are used for missing
parameters
◦ If all three are absent, the statement is executed within the current connection
If a parameter value is NULL or “''” (empty string), the entire parameter is considered absent.
Additionally, AS USER is considered absent if its value is equal to CURRENT_USER, and ROLE if it is the
same as CURRENT_ROLE.
2. There are no dependency checks to discover whether tables or columns have been dropped
3. Execution is considerably slower than when the same statements are executed directly as PSQL
code
4. Return values are strictly checked for data type to avoid unpredictable type-casting exceptions.
For example, the string '1234' would convert to an integer, 1234, but 'abc' would give a
conversion error
All in all, this feature is meant to be used cautiously, and you should always take the caveats into
account. If you can achieve the same result with PSQL and/or DSQL, it will almost always be
preferable.
See also
FOR EXECUTE STATEMENT
7.7.16. FOR SELECT
Loops row-by-row through a query result set
Syntax
[label:]
FOR <select_stmt> [AS CURSOR cursor_name]
DO <compound_statement>
Argument Description
label Optional label for LEAVE and CONTINUE. Follows the rules for identifiers.
cursor_name Cursor name. It must be unique among cursor names in the PSQL module
(stored procedure, stored function, trigger or PSQL block)
The FOR SELECT statement:
• retrieves each row sequentially from the result set, and executes the statement or block of
statements for each row. In each iteration of the loop, the field values of the current row are
copied into pre-declared variables.
Including the AS CURSOR clause enables positioned deletes and updates to be performed — see
notes below
• can contain named parameters that must be previously declared in the DECLARE VARIABLE
statement or exist as input or output parameters of the procedure
• requires an INTO clause at the end of the SELECT … FROM … specification if AS CURSOR is absent. In
each iteration of the loop, the field values of the current row are copied to the list of variables
specified in the INTO clause. The loop repeats until all rows are retrieved, after which it
terminates
• can be terminated before all rows are retrieved by using a BREAK, LEAVE or EXIT statement
The optional AS CURSOR clause surfaces the result set of the FOR SELECT structure as an undeclared,
named cursor that can be operated on using the WHERE CURRENT OF clause inside the statement or
block following the DO command, to delete or update the current row before execution moves to the
next row. In addition, it is possible to use the cursor name as a record variable (similar to OLD and
NEW in triggers), allowing access to the columns of the result set (i.e. cursor_name.columnname).
The cursor variable can be referenced without colon prefix, but in that case, depending on the
scope of the contexts in the statement, the name may resolve in the statement context instead of
to the cursor (e.g. you select from a table with the same name as the cursor).
• In a FOR SELECT statement without an AS CURSOR clause, you must use the INTO clause. If an AS
CURSOR clause is specified, the INTO clause is allowed, but optional; you can access the fields
through the cursor instead.
• Reading from a cursor variable returns the current field values. This means that an UPDATE
statement (with a WHERE CURRENT OF clause) will update not only the table, but also the fields in
the cursor variable for subsequent reads. Executing a DELETE statement (with a WHERE CURRENT OF
clause) will set all fields in the cursor variable to NULL for subsequent reads
1. The OPEN, FETCH and CLOSE statements cannot be applied to a cursor surfaced by the AS CURSOR
clause
2. The cursor_name argument associated with an AS CURSOR clause must not clash with any names
created by DECLARE VARIABLE or DECLARE CURSOR statements at the top of the module body, nor
with any other cursors surfaced by an AS CURSOR clause
3. The optional FOR UPDATE clause in the SELECT statement is not required for a positioned update
SM = AA + BB;
DF = AA - BB;
SUSPEND;
END
END
SUSPEND;
END
END
END
3. Using the AS CURSOR clause to surface a cursor for the positioned delete of a record:
CREATE PROCEDURE DELTOWN (TOWNTODELETE VARCHAR(24))
RETURNS (TOWN VARCHAR(24),
POP INTEGER)
AS
BEGIN
FOR SELECT TOWN, POP
FROM TOWNS
INTO :TOWN, :POP AS CURSOR TCUR
DO
BEGIN
IF (:TOWN = :TOWNTODELETE) THEN
-- Positional delete
DELETE FROM TOWNS
WHERE CURRENT OF TCUR;
ELSE
SUSPEND;
END
END
EXECUTE BLOCK
RETURNS (o CHAR(63))
AS
BEGIN
FOR SELECT rdb$relation_name AS name
FROM rdb$relations AS CURSOR c
DO
BEGIN
o = c.name;
SUSPEND;
END
END
EXECUTE BLOCK
RETURNS (o1 CHAR(63), o2 CHAR(63))
AS
BEGIN
FOR SELECT rdb$relation_name
FROM rdb$relations
WHERE
rdb$relation_name = 'RDB$RELATIONS' AS CURSOR c
DO
BEGIN
FOR SELECT
-- with a colon prefix, resolves to the cursor c
:c.rdb$relation_name x1,
-- without a prefix, resolves to the alias c of the rdb$relations table
c.rdb$relation_name x2
FROM rdb$relations c
WHERE
rdb$relation_name = 'RDB$DATABASE' AS CURSOR d
DO
BEGIN
o1 = d.x1;
o2 = d.x2;
SUSPEND;
END
END
END
See also
DECLARE .. CURSOR, BREAK, LEAVE, CONTINUE, EXIT, SELECT, UPDATE, DELETE
7.7.17. FOR EXECUTE STATEMENT
Executes dynamically created SQL statements and loops over its result set
Syntax
[label:]
FOR <execute_statement> DO <compound_statement>
Argument Description
label Optional label for LEAVE and CONTINUE. Follows the rules for identifiers.
The statement FOR EXECUTE STATEMENT is used, in a manner analogous to FOR SELECT, to loop through
the result set of a dynamically executed query that returns multiple rows.
LINE = '';
FOR
EXECUTE STATEMENT
'SELECT T1.' || :Q_FIELD_NAME ||
' FROM ' || :Q_TABLE_NAME || ' T1 '
INTO :P_ONE_LINE
DO
IF (:P_ONE_LINE IS NOT NULL) THEN
LINE = :LINE || :P_ONE_LINE || ' ';
SUSPEND;
END
See also
EXECUTE STATEMENT, BREAK, LEAVE, CONTINUE
7.7.18. OPEN
Syntax
OPEN cursor_name;
Argument Description
cursor_name Cursor name. A cursor with this name must be previously declared with a
DECLARE CURSOR statement
An OPEN statement opens a previously declared cursor, executes its declared SELECT statement, and
makes the first record of the result data set ready to fetch. OPEN can be applied only to cursors
previously declared in a DECLARE .. CURSOR statement.
If the SELECT statement of the cursor has parameters, they must be declared as local variables, or
input or output parameters before the cursor is declared. When the cursor is opened, the
parameter is assigned the current value of the variable.
OPEN Examples
SET TERM ^;
DECLARE C CURSOR FOR (
SELECT RDB$RELATION_NAME
FROM RDB$RELATIONS);
BEGIN
OPEN C;
WHILE (1 = 1) DO
BEGIN
FETCH C INTO :RNAME;
IF (ROW_COUNT = 0) THEN
LEAVE;
SUSPEND;
END
CLOSE C;
END^
SET TERM ;^
2. A collection of scripts for creating views using a PSQL block with named cursors:
EXECUTE BLOCK
RETURNS (
SCRIPT BLOB SUB_TYPE TEXT)
AS
DECLARE VARIABLE FIELDS VARCHAR(8191);
DECLARE VARIABLE FIELD_NAME TYPE OF RDB$FIELD_NAME;
DECLARE VARIABLE RELATION RDB$RELATION_NAME;
DECLARE VARIABLE SOURCE TYPE OF COLUMN RDB$RELATIONS.RDB$VIEW_SOURCE;
-- named cursor
DECLARE VARIABLE CUR_R CURSOR FOR (
SELECT
RDB$RELATION_NAME,
RDB$VIEW_SOURCE
FROM
RDB$RELATIONS
WHERE
RDB$VIEW_SOURCE IS NOT NULL);
-- named cursor with local variable
DECLARE CUR_F CURSOR FOR (
SELECT
RDB$FIELD_NAME
FROM
RDB$RELATION_FIELDS
WHERE
-- Important! The variable has to be declared earlier
RDB$RELATION_NAME = :RELATION);
BEGIN
OPEN CUR_R;
WHILE (1 = 1) DO
BEGIN
FETCH CUR_R
INTO :RELATION, :SOURCE;
IF (ROW_COUNT = 0) THEN
LEAVE;
FIELDS = NULL;
-- The CUR_F cursor will use
-- variable value of RELATION initialized above
OPEN CUR_F;
WHILE (1 = 1) DO
BEGIN
FETCH CUR_F
INTO :FIELD_NAME;
IF (ROW_COUNT = 0) THEN
LEAVE;
IF (FIELDS IS NULL) THEN
FIELDS = TRIM(FIELD_NAME);
ELSE
FIELDS = FIELDS || ', ' || TRIM(FIELD_NAME);
END
CLOSE CUR_F;
SUSPEND;
END
CLOSE CUR_R;
END
See also
DECLARE .. CURSOR, FETCH, CLOSE
7.7.19. FETCH
Syntax
FETCH [<fetch_scroll> FROM] cursor_name
[INTO [:]varname [, [:]varname ...]];

<fetch_scroll> ::=
NEXT | PRIOR | FIRST | LAST
| RELATIVE n | ABSOLUTE n
Argument Description
cursor_name Cursor name. A cursor with this name must be previously declared with a
DECLARE … CURSOR statement and opened by an OPEN statement.
The FETCH statement fetches the next row from the result set of the cursor and assigns the column
values to PSQL variables. The FETCH statement can be used only with a cursor declared with the
DECLARE .. CURSOR statement.
Using the optional fetch_scroll part of the FETCH statement, you can specify in which direction and
how many rows to advance the cursor position. The NEXT fetch option can be used for scrollable and
forward-only cursors. Other fetch options are only supported for scrollable cursors.
PRIOR
moves the cursor one record back
FIRST
moves the cursor to the first record.
LAST
moves the cursor to the last record
RELATIVE n
moves the cursor n rows from the current position; positive numbers move forward, negative
numbers move backwards; using zero (0) will not move the cursor, and ROW_COUNT will be set to
zero as no new row was fetched.
ABSOLUTE n
moves the cursor to the specified row; n is an integer expression, where 1 indicates the first row.
For negative values, the absolute position is taken from the end of the result set, so -1 indicates
the last row, -2 the second to last row, etc. A value of zero (0) will position before the first row.
The optional INTO clause gets data from the current row of the cursor and loads them into PSQL
variables. If a fetch moves beyond the bounds of the result set, the variables will be set to NULL.
It is also possible to use the cursor name as a variable of a record type (similar to OLD and NEW in
triggers), allowing access to the columns of the result set (i.e. cursor_name.columnname).
The cursor variable can be referenced without colon prefix, but in that case, depending on the
scope of the contexts in the statement, the name may resolve in the statement context instead of
to the cursor (e.g. you select from a table with the same name as the cursor).
• In a FOR SELECT statement without an AS CURSOR clause, you must use the INTO clause. If an AS
CURSOR clause is specified, the INTO clause is allowed, but optional; you can access the fields
through the cursor instead.
• Reading from a cursor variable returns the current field values. This means that an UPDATE
statement (with a WHERE CURRENT OF clause) will update not only the table, but also the fields in
the cursor variable for subsequent reads. Executing a DELETE statement (with a WHERE CURRENT OF
clause) will set all fields in the cursor variable to NULL for subsequent reads
• When the cursor is not positioned on a row — it is positioned before the first row, or after the
last row — attempts to read from the cursor variable will result in error “Cursor cursor_name
is not positioned in a valid record”
For checking whether all the rows of the result set have been fetched, the context variable
ROW_COUNT returns the number of rows fetched by the statement. If a record was fetched, then
ROW_COUNT is one (1), otherwise zero (0).
FETCH Examples
EXECUTE BLOCK
RETURNS (SCRIPT BLOB SUB_TYPE TEXT)
AS
EXECUTE BLOCK
RETURNS (N INT, RNAME CHAR (63))
AS
DECLARE C SCROLL CURSOR FOR (
SELECT
ROW_NUMBER() OVER (ORDER BY RDB$RELATION_NAME) AS N,
RDB$RELATION_NAME
FROM RDB$RELATIONS
ORDER BY RDB$RELATION_NAME);
BEGIN
OPEN C;
-- move to the first record (N = 1)
FETCH FIRST FROM C;
RNAME = C.RDB$RELATION_NAME;
N = C.N;
SUSPEND;
-- move 1 record forward (N = 2)
FETCH NEXT FROM C;
RNAME = C.RDB$RELATION_NAME;
N = C.N;
SUSPEND;
-- move to the fifth record (N = 5)
FETCH ABSOLUTE 5 FROM C;
RNAME = C.RDB$RELATION_NAME;
N = C.N;
SUSPEND;
-- move 1 record backward (N = 4)
FETCH PRIOR FROM C;
RNAME = C.RDB$RELATION_NAME;
N = C.N;
SUSPEND;
-- move 3 records forward (N = 7)
FETCH RELATIVE 3 FROM C;
RNAME = C.RDB$RELATION_NAME;
N = C.N;
SUSPEND;
-- move back 5 records (N = 2)
FETCH RELATIVE -5 FROM C;
RNAME = C.RDB$RELATION_NAME;
N = C.N;
SUSPEND;
-- move to the first record (N = 1)
FETCH FIRST FROM C;
RNAME = C.RDB$RELATION_NAME;
N = C.N;
SUSPEND;
-- move to the last entry
FETCH LAST FROM C;
RNAME = C.RDB$RELATION_NAME;
N = C.N;
SUSPEND;
CLOSE C;
END
See also
DECLARE .. CURSOR, OPEN, CLOSE
7.7.20. CLOSE
Syntax
CLOSE cursor_name;
Argument Description
cursor_name Cursor name. A cursor with this name must be previously declared with a
DECLARE … CURSOR statement and opened by an OPEN statement
A CLOSE statement closes an open cursor. Only a cursor that was declared with DECLARE .. CURSOR
can be closed with a CLOSE statement. Any cursors that are still open will be automatically closed
after the module code completes execution.
CLOSE Examples
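A sketch of explicitly closing a declared cursor (names are illustrative):

```sql
EXECUTE BLOCK
RETURNS (RNAME CHAR(63))
AS
  DECLARE C CURSOR FOR (
    SELECT RDB$RELATION_NAME FROM RDB$RELATIONS);
BEGIN
  OPEN C;
  FETCH C INTO :RNAME;
  CLOSE C;  -- not strictly required: open cursors are closed when the module ends
  SUSPEND;
END
```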
See also
DECLARE .. CURSOR, OPEN, FETCH
7.7.21. IN AUTONOMOUS TRANSACTION
Executes a statement or a block of statements in an autonomous transaction
Syntax
IN AUTONOMOUS TRANSACTION DO <compound_statement>
An autonomous transaction has the same isolation level as its parent transaction. Any exception
that is thrown in the block of the autonomous transaction code will result in the autonomous
transaction being rolled back and all changes made will be undone. If the code executes
successfully, the autonomous transaction will be committed.
Using an autonomous transaction in a trigger for the database ON CONNECT event, to log all
connection attempts, including those that failed:
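A sketch of such a trigger, assuming a LOG table, a BLOCKED_USERS table and an EX_NO_CONNECT exception exist:

```sql
CREATE TRIGGER TR_CONNECT ON CONNECT
AS
BEGIN
  -- Log the connection in an autonomous transaction so the record
  -- survives even if the connection is subsequently refused
  IN AUTONOMOUS TRANSACTION DO
    INSERT INTO LOG (MSG)
    VALUES ('USER ' || CURRENT_USER || ' CONNECTS.');
  IF (CURRENT_USER IN (SELECT USERNAME FROM BLOCKED_USERS)) THEN
  BEGIN
    IN AUTONOMOUS TRANSACTION DO
      INSERT INTO LOG (MSG)
      VALUES ('USER ' || CURRENT_USER || ' REFUSED.');
    -- raising the exception refuses the connection and rolls back the
    -- main transaction, but the autonomously committed log rows remain
    EXCEPTION EX_NO_CONNECT;
  END
END
```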
See also
Transaction Control
7.7.22. POST_EVENT
Syntax
POST_EVENT event_name;
Argument Description
event_name Event name (message) limited to 127 bytes
The POST_EVENT statement notifies the event manager about the event, which saves it to an event
table. When the transaction is committed, the event manager notifies applications that have
registered their interest in the event.
The event name can be a code, or a short message: the choice is open as it is a string of up to 127
bytes. Keep in mind that the application listening for an event must use the exact event name when
registering.
The content of the string can be a string literal, a variable or any valid SQL expression that resolves
to a string.
POST_EVENT Examples
Notifying the listening applications about inserting a record into the SALES table:
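A sketch of such a trigger (the trigger and event names are illustrative):

```sql
CREATE TRIGGER POST_NEW_ORDER FOR SALES
ACTIVE AFTER INSERT POSITION 0
AS
BEGIN
  POST_EVENT 'new_order';
END
```

An application that has registered its interest in the event name 'new_order' is notified once the inserting transaction commits.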
7.7.23. RETURN
Syntax
RETURN value;
Argument Description
value Expression with the value to return; can be any expression type-compatible with the return type of the function
The RETURN statement ends the execution of a function and returns the value of the expression
value.
RETURN can only be used in PSQL functions (stored functions and local sub-functions).
RETURN Examples
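A sketch of a stored function using RETURN (the function name is illustrative):

```sql
CREATE FUNCTION F_ADD (A INT, B INT)
RETURNS INT
AS
BEGIN
  RETURN A + B;
END
```

SELECT F_ADD(2, 3) FROM RDB$DATABASE then returns 5.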
7.8. Trapping and Handling Errors
In PSQL code, exceptions are handled by means of the WHEN … DO statement. Handling an exception
in the code involves either fixing the problem in situ, or stepping past it; either solution allows
execution to continue without returning an exception message to the client.
An exception results in execution being terminated in the current block. Instead of passing the
execution to the END statement, the procedure moves outward through levels of nested blocks,
starting from the block where the exception is caught, searching for the code of the handler that
“knows” about this exception. It stops searching when it finds the first WHEN statement that can
handle this exception.
7.8.1. System Exceptions
All exceptions handled by Firebird have predefined numeric values for context variables (symbols)
and text messages associated with them. Error messages are output in English by default. Localized
Firebird builds are available, where error messages are translated into other languages.
Complete listings of the system exceptions can be found in Appendix B, Exception Codes and
Messages.
7.8.2. Custom Exceptions
Custom exceptions can be declared in the database as persistent objects and called in PSQL code to
signal specific errors; for example, to enforce certain business rules. A custom exception consists of
an identifier, and a default message of 1021 bytes. For details, see CREATE EXCEPTION.
7.8.3. EXCEPTION
Syntax
EXCEPTION [
exception_name
[ custom_message
| USING (<value_list>)]
]
Argument Description
val Value expression that replaces parameter slots in the exception message text
The EXCEPTION statement with exception_name throws the user-defined exception with the specified
name. An alternative message text of up to 1,021 bytes can optionally override the exception’s
default message text.
The default exception message can contain slots for parameters that can be filled when throwing
an exception. To pass parameter values to an exception, use the USING clause. Considering, in left-to-
right order, each parameter passed in the exception-raising statement as “the Nth”, with N starting
at 1:
• If a NULL parameter is passed, the slot will be replaced with the string “*** null ***”
• If more parameters are passed than are defined in the exception message, the surplus ones are
ignored
The status vector is generated as the code combination isc_except, <exception number>,
isc_formatted_exception, <formatted exception message>, <exception parameters>.
The error code used (isc_formatted_exception) was introduced in Firebird 3.0, so the client must be
at least version 3.0, or at least use the firebird.msg from version 3.0 or higher, to translate the status
vector to a string.
If the message contains a parameter slot number that is greater than 9, the second
and subsequent digits will be treated as literal text. For example @10 will be
interpreted as slot 1 followed by a literal ‘0’.
As an example:
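A sketch of the slot-number behaviour (the exception name and message are illustrative):

```sql
CREATE EXCEPTION EX_SLOTS 'slots @1 and @10';
-- EXCEPTION EX_SLOTS USING ('A');
-- raises the message "slots A and A0":
-- @10 is read as slot @1 followed by the literal '0'
```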
Within the exception-handling block — and only within it — the caught exception can be re-thrown
by executing the EXCEPTION statement without parameters. If located outside the block, the re-
thrown EXCEPTION call has no effect.
EXCEPTION Examples
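1. Throwing a user-defined exception upon a condition (a sketch; the procedure, table and exception names are illustrative):

```sql
CREATE OR ALTER PROCEDURE SHIP_ORDER (PO_NUM CHAR(8))
AS
  DECLARE ord_stat CHAR(7);
BEGIN
  SELECT order_status
  FROM sales
  WHERE po_number = :po_num
  INTO :ord_stat;
  -- raise the predefined exception if the order is already shipped
  IF (ord_stat = 'shipped') THEN
    EXCEPTION order_already_shipped;
  /* ... */
END
```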
2. Throwing an exception upon a condition and replacing the original message with an alternative
message:
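A sketch of such a statement (the exception name and message text are illustrative):

```sql
IF (ord_stat = 'shipped') THEN
  EXCEPTION order_already_shipped
    -- the alternative text overrides the exception's default message
    'Order status is "shipped"';
```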
END
See also
CREATE EXCEPTION, WHEN … DO
7.8.4. WHEN … DO
Syntax
<block> ::=
BEGIN
[<compound_statement> ...]
[<when_do> ...]
END
<when_do> ::=
WHEN {<error> [, <error> ...] | ANY}
DO <compound_statement>
<error> ::=
{ EXCEPTION exception_name
| SQLCODE number
| GDSCODE errcode
| SQLSTATE sqlstate_code }
Argument Description
The WHEN … DO statement handles Firebird errors and user-defined exceptions. The statement
catches all errors and user-defined exceptions listed after the WHEN keyword. If WHEN is
followed by the keyword ANY, the statement catches any error or user-defined exception, even if
they have already been handled in a WHEN block located higher up.
The WHEN … DO statements must be located at the end of a block of statements, before the block’s END
keyword, and after any other statement.
The keyword DO is followed by a single statement, or statements wrapped in a BEGIN … END block,
that handles the exception. The SQLCODE, GDSCODE, and SQLSTATE context variables are available in the
context of this statement or block. Use the RDB$ERROR function to obtain the SQLCODE, GDSCODE,
SQLSTATE, custom exception name and exception message. The EXCEPTION statement, without
parameters, can also be used in this context to re-throw the error or exception.
Targeting GDSCODE
The argument for the WHEN GDSCODE clause is the symbolic name associated with the internally-
defined exception, such as grant_obj_notfound for GDS error 335544551.
The WHEN … DO statement or block is only executed when one of the events targeted by its
conditions occurs at run-time. If the WHEN … DO statement is executed, even if it does nothing,
execution will continue as if no error occurred: the error or user-defined exception neither
terminates nor rolls back the operations of the trigger or stored procedure.
However, if the WHEN … DO statement or block does nothing to handle or resolve the error, the DML
statement (SELECT, INSERT, UPDATE, DELETE, MERGE) that caused the error will be rolled back and none of
the statements below it in the same block of statements are executed.
1. If the error is not caused by one of the DML statements (SELECT, INSERT, UPDATE,
DELETE, MERGE), the entire block of statements will be rolled back, not only the
one that caused an error. Any operations in the WHEN … DO statement will be
rolled back as well. The same limitation applies to the EXECUTE PROCEDURE
statement. Read an interesting discussion of the phenomenon in Firebird
Tracker ticket firebird#4803.
2. In selectable stored procedures, output rows that were already passed to the
client in previous iterations of a FOR SELECT … DO … SUSPEND loop remain
available to the client if an exception is thrown subsequently in the process of
retrieving rows.
A WHEN … DO statement catches errors and exceptions in the current block of statements. It also
catches exceptions from nested blocks, if those exceptions have not been handled in those blocks.
All changes made before the statement that caused the error are visible to a WHEN … DO statement.
However, if you try to log them in an autonomous transaction, those changes are unavailable,
because the transaction where the changes took place is not committed at the point when the
autonomous transaction is started. Example 4, below, demonstrates this behaviour.
...
WHEN GDSCODE GRANT_OBJ_NOTFOUND,
GDSCODE GRANT_FLD_NOTFOUND,
GDSCODE GRANT_NOPRIV,
GDSCODE GRANT_NOPRIV_ON_BASE
DO
BEGIN
EXECUTE PROCEDURE LOG_GRANT_ERROR(GDSCODE,
RDB$ERROR(MESSAGE));
EXIT;
END
...
EXECUTE BLOCK
AS
DECLARE VARIABLE I INT;
BEGIN
BEGIN
I = 1/0;
WHEN SQLSTATE '22003' DO
EXCEPTION E_CUSTOM_EXCEPTION
'Numeric value out of range.';
WHEN SQLSTATE '22012' DO
EXCEPTION E_CUSTOM_EXCEPTION
'Division by zero.';
WHEN SQLSTATE '23000' DO
EXCEPTION E_CUSTOM_EXCEPTION
'Integrity constraint violation.';
END
END
See also
EXCEPTION, CREATE EXCEPTION, SQLCODE and GDSCODE Error Codes and Message Texts and SQLSTATE
Codes and Message Texts, GDSCODE, SQLCODE, SQLSTATE, RDB$ERROR()
Chapter 8. Built-in Scalar Functions
8.1. Context Functions
8.1.1. RDB$GET_CONTEXT()
Result type
VARCHAR(255)
Syntax
RDB$GET_CONTEXT ('<namespace>', <varname>)
Parameter Description
namespace Namespace
The namespaces
The USER_SESSION and USER_TRANSACTION namespaces are initially empty. A user can create and set
variables with RDB$SET_CONTEXT() and retrieve them with RDB$GET_CONTEXT(). The SYSTEM namespace
is read-only. The DDL_TRIGGER namespace is only valid in DDL triggers, and is read-only. The SYSTEM
and DDL_TRIGGER namespaces contain a number of predefined variables, shown below.
CLIENT_HOST
The wire protocol host name of remote client. Value is returned for all supported protocols.
CLIENT_PID
Process ID of remote client application.
CLIENT_PROCESS
Process name of remote client application.
CURRENT_ROLE
Same as global CURRENT_ROLE variable.
CURRENT_USER
Same as global CURRENT_USER variable.
DB_FILE_ID
Unique filesystem-level ID of the current database.
DB_GUID
GUID of the current database.
DB_NAME
Canonical name of current database; either the full path to the database or — if connecting via
the path is disallowed — its alias.
DECFLOAT_ROUND
Rounding mode of the current connection used in operations with DECFLOAT values. See also SET
DECFLOAT.
DECFLOAT_TRAPS
Exceptional conditions for the current connection in operations with DECFLOAT values that cause
a trap. See also SET DECFLOAT.
EFFECTIVE_USER
Effective user at the point RDB$GET_CONTEXT is called; indicates which user’s privileges are
currently used to execute a function, procedure or trigger.
ENGINE_VERSION
The Firebird engine (server) version.
EXT_CONN_POOL_ACTIVE_COUNT
Count of active connections associated with the external connection pool.
EXT_CONN_POOL_IDLE_COUNT
Count of currently inactive connections available in the connection pool.
EXT_CONN_POOL_LIFETIME
External connection pool idle connection lifetime, in seconds.
EXT_CONN_POOL_SIZE
External connection pool size.
GLOBAL_CN
Most current value of global Commit Number counter.
ISOLATION_LEVEL
The isolation level of the current transaction: 'READ COMMITTED', 'SNAPSHOT' or 'CONSISTENCY'.
LOCK_TIMEOUT
Lock timeout of the current transaction.
NETWORK_PROTOCOL
The protocol used for the connection: 'TCPv4', 'TCPv6', 'XNET' or NULL.
PARALLEL_WORKERS
The maximum number of parallel workers of the connection.
READ_ONLY
Returns 'TRUE' if current transaction is read-only and 'FALSE' otherwise.
REPLICA_MODE
Replica mode of the database: 'READ-ONLY', 'READ-WRITE' or NULL.
REPLICATION_SEQUENCE
Current replication sequence (number of the latest segment written to the replication journal).
SESSION_ID
Same as global CURRENT_CONNECTION variable.
SESSION_IDLE_TIMEOUT
Connection-level idle timeout, or 0 if no timeout was set. When 0 is reported the database
ConnectionIdleTimeout from databases.conf or firebird.conf applies.
SESSION_TIMEZONE
Current session time zone.
SNAPSHOT_NUMBER
Current snapshot number for the transaction executing this statement. For SNAPSHOT and
SNAPSHOT TABLE STABILITY, this number is stable for the duration of the transaction; for READ
COMMITTED this number will change (increment) as concurrent transactions are committed.
STATEMENT_TIMEOUT
Connection-level statement timeout, or 0 if no timeout was set. When 0 is reported the database
StatementTimeout from databases.conf or firebird.conf applies.
TRANSACTION_ID
Same as global CURRENT_TRANSACTION variable.
WIRE_COMPRESSED
Compression status of the current connection. If the connection is compressed, returns TRUE; if it
is not compressed, returns FALSE. Returns NULL if the connection is embedded.
WIRE_CRYPT_PLUGIN
If the connection is encrypted, returns the name of the current encryption plugin, otherwise NULL.
WIRE_ENCRYPTED
Encryption status of the current connection. If the connection is encrypted, returns TRUE; if it is
not encrypted, returns FALSE. Returns NULL if the connection is embedded.
The DDL_TRIGGER namespace is valid only when a DDL trigger is running. Its use is also valid in
stored procedures and functions when called by DDL triggers.
The DDL_TRIGGER context works like a stack. Before a DDL trigger is fired, the values relative to the
executed command are pushed onto this stack. After the trigger finishes, the values are popped. So
in the case of cascade DDL statements, when a user DDL command fires a DDL trigger and this
trigger executes another DDL command with EXECUTE STATEMENT, the values of the DDL_TRIGGER
namespace are the ones relative to the command that fired the last DDL trigger on the call stack.
OBJECT_TYPE
object type (TABLE, VIEW, etc)
DDL_EVENT
event name (<ddl event item>), where <ddl event item> is EVENT_TYPE || ' ' || OBJECT_TYPE
OBJECT_NAME
metadata object name
OLD_OBJECT_NAME
for tracking the renaming of a domain (see note)
NEW_OBJECT_NAME
for tracking the renaming of a domain (see note)
SQL_TEXT
sql statement text
Examples
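For instance, reading a predefined variable from the SYSTEM namespace:

```sql
select rdb$get_context('SYSTEM', 'DB_NAME')
from rdb$database;
-- returns the name (path or alias) of the current database
```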
See also
RDB$SET_CONTEXT()
8.1.2. RDB$SET_CONTEXT()
Result type
INTEGER
Syntax
RDB$SET_CONTEXT ('<namespace>', <varname>, <value> | NULL)
Parameter Description
namespace Namespace
The namespaces
The USER_SESSION and USER_TRANSACTION namespaces are initially empty. A user can create and set
variables with RDB$SET_CONTEXT() and retrieve them with RDB$GET_CONTEXT(). The USER_SESSION
context is bound to the current connection, the USER_TRANSACTION context to the current transaction.
Lifecycle
• When a transaction ends, its USER_TRANSACTION context is cleared.
• When a connection is reset using ALTER SESSION RESET, the USER_TRANSACTION and USER_SESSION
contexts are cleared.
The function returns 1 when the variable already existed before the call and 0 when it didn’t. To
remove a variable from a context, set it to NULL. If the given namespace doesn’t exist, an error is
raised. Both namespace and variable names must be entered as single-quoted, case-sensitive, non-
NULL strings.
Examples
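A minimal sketch of setting and reading back a user variable; the variable name MyVar is arbitrary:

```sql
select rdb$set_context('USER_SESSION', 'MyVar', 493)
from rdb$database;
-- returns 0: the variable did not exist before this call

select rdb$get_context('USER_SESSION', 'MyVar')
from rdb$database;
-- returns '493': values are cast to VARCHAR(255)
```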
See also
RDB$GET_CONTEXT()
8.2. Mathematical Functions
8.2.1. ABS()
Absolute value
Result type
Numerical, matching input type
Syntax
ABS (number)
Parameter Description
8.2.2. ACOS()
Arc cosine
Result type
DOUBLE PRECISION
Syntax
ACOS (number)
Parameter Description
See also
COS(), ASIN(), ATAN()
8.2.3. ACOSH()
Result type
DOUBLE PRECISION
Syntax
ACOSH (number)
Parameter Description
See also
COSH(), ASINH(), ATANH()
8.2.4. ASIN()
Arc sine
Result type
DOUBLE PRECISION
Syntax
ASIN (number)
Parameter Description
See also
SIN(), ACOS(), ATAN()
8.2.5. ASINH()
Result type
DOUBLE PRECISION
Syntax
ASINH (number)
Parameter Description
See also
SINH(), ACOSH(), ATANH()
8.2.6. ATAN()
Arc tangent
Result type
DOUBLE PRECISION
Syntax
ATAN (number)
Parameter Description
See also
ATAN2(), TAN(), ACOS(), ASIN()
8.2.7. ATAN2()
Result type
DOUBLE PRECISION
Syntax
ATAN2 (y, x)
Parameter Description
Returns the angle whose sine-to-cosine ratio is given by the two arguments, and whose sine and
cosine signs correspond to the signs of the arguments. This allows results across the entire circle,
including the angles -pi/2 and pi/2.
• If both y and x are 0, the result is meaningless, and an error is raised.
• A fully equivalent description of this function is the following: ATAN2(y, x) is the angle
between the positive X-axis and the line from the origin to the point (x, y). This also makes
it obvious that ATAN2(0, 0) is undefined.
• If both sine and cosine of the angle are already known, ATAN2(sin, cos) gives the angle.
8.2.8. ATANH()
Result type
DOUBLE PRECISION
Syntax
ATANH (number)
Parameter Description
See also
TANH(), ACOSH(), ASINH()
8.2.9. CEIL(), CEILING()
Ceiling of a number
Result type
BIGINT or INT128 for exact numeric number, or DOUBLE PRECISION or DECFLOAT for floating point
number
Syntax
CEIL[ING] (number)
Parameter Description
Returns the smallest whole number greater than or equal to the argument.
See also
FLOOR(), ROUND(), TRUNC()
8.2.10. COS()
Cosine
Result type
DOUBLE PRECISION
Syntax
COS (angle)
Parameter Description
See also
ACOS(), COT(), SIN(), TAN()
8.2.11. COSH()
Hyperbolic cosine
Result type
DOUBLE PRECISION
Syntax
COSH (number)
Parameter Description
See also
ACOSH(), SINH(), TANH()
8.2.12. COT()
Cotangent
Result type
DOUBLE PRECISION
Syntax
COT (angle)
Parameter Description
See also
COS(), SIN(), TAN()
8.2.13. EXP()
Natural exponent
Result type
DOUBLE PRECISION
Syntax
EXP (number)
Parameter Description
See also
LN()
8.2.14. FLOOR()
Floor of a number
Result type
BIGINT or INT128 for exact numeric number, or DOUBLE PRECISION or DECFLOAT for floating point
number
Syntax
FLOOR (number)
Parameter Description
Returns the largest whole number smaller than or equal to the argument.
See also
CEIL(), CEILING(), ROUND(), TRUNC()
8.2.15. LN()
Natural logarithm
Result type
DOUBLE PRECISION
Syntax
LN (number)
Parameter Description
See also
EXP(), LOG(), LOG10()
8.2.16. LOG()
Result type
DOUBLE PRECISION
Syntax
LOG (x, y)
Parameter Description
See also
POWER(), LN(), LOG10()
8.2.17. LOG10()
Result type
DOUBLE PRECISION
Syntax
LOG10 (number)
Parameter Description
See also
POWER(), LN(), LOG()
8.2.18. MOD()
Remainder
Result type
SMALLINT, INTEGER or BIGINT depending on the type of a. If a is a floating-point type, the result is a
BIGINT.
Syntax
MOD (a, b)
Parameter Description
• Non-integer arguments are rounded before the division takes place. So, “mod(7.5, 2.5)” gives 2
(“mod(8, 3)”), not 0.
• Do not confuse MOD() with the mathematical modulus operator; e.g. mathematically, -21 mod 4 is
3, while Firebird’s MOD(-21, 4) is -1. In other words, MOD() behaves as % in languages like C and
Java.
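Both behaviours can be verified directly:

```sql
select mod(7.5, 2.5) from rdb$database; -- 2: operands are rounded to 8 and 3 first
select mod(-21, 4) from rdb$database;   -- -1: the result takes the sign of the dividend
```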
8.2.19. PI()
Approximation of pi.
Result type
DOUBLE PRECISION
Syntax
PI ()
8.2.20. POWER()
Power
Result type
DOUBLE PRECISION
Syntax
POWER (x, y)
Parameter Description
Returns x to the power of y (x^y).
See also
EXP(), LOG(), LOG10(), SQRT()
8.2.21. RAND()
Result type
DOUBLE PRECISION
Syntax
RAND ()
8.2.22. ROUND()
Result type
single argument: integer type, DOUBLE PRECISION or DECFLOAT;
two arguments: numerical, matching first argument
Syntax
ROUND (number [, scale])
Parameter Description
Rounds a number to the nearest integer. If the fractional part is exactly 0.5, rounding is upward for
positive numbers and downward for negative numbers. With the optional scale argument, the
number can be rounded to powers-of-ten multiples (tens, hundreds, tenths, hundredths, etc.).
If you are used to the behaviour of the external function ROUND, please notice that
the internal function always rounds halves away from zero, i.e. downward for
negative numbers.
ROUND Examples
If the scale argument is present, the result usually has the same scale as the first argument:
ROUND(123.654, 1) -- returns 123.7 (not 123.70)
ROUND(8341.7, -3) -- returns 8000 (not 8341.7)
Otherwise, the result scale is 0:
ROUND(45.1212) -- returns 45
See also
CEIL(), CEILING(), FLOOR(), TRUNC()
8.2.23. SIGN()
Sign or signum
Result type
SMALLINT
Syntax
SIGN (number)
Parameter Description
• number < 0 → -1
• number = 0 → 0
• number > 0 → 1
8.2.24. SIN()
Sine
Result type
DOUBLE PRECISION
Syntax
SIN (angle)
Parameter Description
See also
ASIN(), COS(), COT(), TAN()
8.2.25. SINH()
Hyperbolic sine
Result type
DOUBLE PRECISION
Syntax
SINH (number)
Parameter Description
See also
ASINH(), COSH(), TANH()
8.2.26. SQRT()
Square root
Result type
DOUBLE PRECISION
Syntax
SQRT (number)
Parameter Description
See also
POWER()
8.2.27. TAN()
Tangent
Result type
DOUBLE PRECISION
Syntax
TAN (angle)
Parameter Description
See also
ATAN(), ATAN2(), COS(), COT(), SIN()
8.2.28. TANH()
Hyperbolic tangent
Result type
DOUBLE PRECISION
Syntax
TANH (number)
Parameter Description
Due to rounding, the result is in the range [-1, 1] (mathematically, it’s <-1, 1>).
See also
ATANH(), COSH(), SINH()
8.2.29. TRUNC()
Truncate number
Result type
single argument: integer type, DOUBLE PRECISION or DECFLOAT;
two arguments: numerical, matching first argument
Syntax
TRUNC (number [, scale])
Parameter Description
The single argument variant returns the integer part of a number. With the optional scale
argument, the number can be truncated to powers-of-ten multiples (tens, hundreds, tenths,
hundredths, etc.).
• If the scale argument is present, the result usually has the same scale as the first argument,
e.g. TRUNC(789.2225, 2) returns 789.2200 (not 789.22), while TRUNC(789.2225) returns 789.
If you are used to the behaviour of the external function TRUNCATE, please notice
that the internal function TRUNC always truncates toward zero, i.e. upward for
negative numbers.
See also
CEIL(), CEILING(), FLOOR(), ROUND()
8.3. String and Binary Functions
8.3.1. ASCII_CHAR()
Result type
CHAR(1) CHARACTER SET NONE
Syntax
ASCII_CHAR (code)
Parameter Description
Returns the ASCII character corresponding to the number passed in the argument.
• If you are used to the behaviour of the ASCII_CHAR UDF, which returns an empty
string if the argument is 0, please notice that the internal function returns a
character with ASCII code 0 (character NUL) here.
See also
ASCII_VAL(), UNICODE_CHAR()
8.3.2. ASCII_VAL()
Result type
SMALLINT
Syntax
ASCII_VAL (ch)
Parameter Description
ch A string of the [VAR]CHAR data type or a text BLOB with the maximum size
of 32,767 bytes
• If the argument is a string with more than one character, the ASCII code of the first character is
returned.
See also
ASCII_CHAR(), UNICODE_VAL()
8.3.3. BASE64_DECODE()
Result type
VARBINARY or BLOB
Syntax
BASE64_DECODE (base64_data)
Parameter Description
BASE64_DECODE decodes a string with base64-encoded data, and returns the decoded value as
VARBINARY or BLOB as appropriate for the input. If the length of the type of base64_data is not a
multiple of 4, an error is raised at prepare time. If the length of the value of base64_data is not a
multiple of 4, an error is raised at execution time.
When the input is not BLOB, the length of the resulting type is calculated as type_length * 3 / 4,
where type_length is the maximum length in characters of the input type.
Example of BASE64_DECODE
select cast(base64_decode('VGVzdCBiYXNlNjQ=') as varchar(12))
from rdb$database;

CAST
============
Test base64
See also
BASE64_ENCODE(), HEX_DECODE()
8.3.4. BASE64_ENCODE()
Result type
VARCHAR CHARACTER SET ASCII or BLOB SUB_TYPE TEXT CHARACTER SET ASCII
Syntax
BASE64_ENCODE (binary_data)
Parameter Description
BASE64_ENCODE encodes binary_data with base64, and returns the encoded value as a VARCHAR
CHARACTER SET ASCII or BLOB SUB_TYPE TEXT CHARACTER SET ASCII as appropriate for the input. The
returned value is padded with ‘=’ so its length is a multiple of 4.
When the input is not BLOB, the length of the resulting type is calculated as type_length * 4 / 3
rounded up to a multiple of four, where type_length is the maximum length in bytes of the input
type. If this length exceeds the maximum length of VARCHAR, the function returns a BLOB.
Example of BASE64_ENCODE
select base64_encode('Test base64')
from rdb$database;

BASE64_ENCODE
================
VGVzdCBiYXNlNjQ=
See also
BASE64_DECODE(), HEX_ENCODE()
8.3.5. BIT_LENGTH()
Result type
INTEGER, or BIGINT for BLOB
Syntax
BIT_LENGTH (string)
Parameter Description
Gives the length in bits of the input string. For multibyte character sets, this may be less than the
number of characters times 8 times the “formal” number of bytes per character as found in
RDB$CHARACTER_SETS.
With arguments of type CHAR, this function takes the entire formal string length (i.e. the declared
length of a field or variable) into account. If you want to obtain the “logical” bit length, not counting
the trailing spaces, right-TRIM the argument before passing it to BIT_LENGTH.
BIT_LENGTH Examples
select bit_length
(cast (_iso8859_1 'Grüß di!' as varchar(24) character set utf8))
from rdb$database
-- returns 80: ü and ß take up two bytes each in UTF8
select bit_length
(cast (_iso8859_1 'Grüß di!' as char(24) character set utf8))
from rdb$database
-- returns 208: all 24 CHAR positions count, and two of them are 16-bit
See also
OCTET_LENGTH(), CHAR_LENGTH(), CHARACTER_LENGTH()
8.3.6. BLOB_APPEND()
Result type
BLOB
Syntax
BLOB_APPEND (expr1, expr2 [, exprN ...])
Parameter Description
The BLOB_APPEND function concatenates blobs without creating intermediate BLOBs, avoiding
excessive memory consumption and growth of the database file. The BLOB_APPEND function takes two
or more arguments and adds them to a BLOB which remains open for further modification by a
subsequent BLOB_APPEND call.
The resulting BLOB is left open for writing instead of being closed when the function returns. In
other words, the BLOB can be appended as many times as required. The engine marks the BLOB
returned by BLOB_APPEND with an internal flag, BLB_close_on_read, and closes it automatically when
needed.
The following behaviour is defined for the first argument, depending on its type and value:
1. NULL: a new, empty BLOB SUB_TYPE TEXT is created, using the connection character set as the
character set
2. permanent BLOB (from a table) or temporary BLOB which was already closed: new BLOB is created
with the same subtype and, if subtype is TEXT the same character set, populated with the content
of the original BLOB.
3. temporary unclosed BLOB with the BLB_close_on_read flag (e.g. created by another call to
BLOB_APPEND): used as-is, remaining arguments are appended to this BLOB
4. other data types: a new BLOB SUB_TYPE TEXT is created, populated with the original argument
converted to string. If the original value is a character type, its character set is used (for string
literals, the connection character set), otherwise the connection character set.
Other arguments can be of any type. The following behavior is defined for them:
1. NULLs are ignored
2. BLOBs, if necessary, are transliterated to the character set of the first argument and their
contents are appended to the result
3. other data types are converted to strings (as usual) and appended to the result
The BLOB_APPEND function returns a temporary unclosed BLOB with the BLB_close_on_read flag. If the
first argument is such a temporary unclosed BLOB (e.g. created by a previous call to BLOB_APPEND), it
will be used as-is, otherwise a new BLOB is created. Thus, a series of operations like blob =
BLOB_APPEND (blob, …) will result in the creation of at most one BLOB (unless you try to append a
BLOB to itself). This blob will be automatically closed by the engine when the client reads it, assigns
it to a table, or uses it in other expressions that require reading the content.
2. Testing a blob for NULL using the IS [NOT] NULL operator does not read it and
therefore a temporary blob with the BLB_close_on_read flag will not be closed
after such a test.
BLOB_APPEND Examples
execute block
returns (b blob sub_type text)
as
begin
-- creates a new temporary not closed BLOB
-- and writes the string from the 2nd argument into it
b = blob_append(null, 'Hello ');
-- appends two more strings to the temporary BLOB without closing it
b = blob_append(b, 'World', '!');
-- comparing a BLOB with a string will close it, because the BLOB needs to be read
if (b = 'Hello World!') then
begin
-- ...
end
suspend;
end
See also
Concatenation Operator, LIST(), RDB$BLOB_UTIL
8.3.7. CHAR_LENGTH(), CHARACTER_LENGTH()
Result type
INTEGER, or BIGINT for BLOB
Syntax
CHAR_LENGTH (string)
| CHARACTER_LENGTH (string)
Parameter Description
With arguments of type CHAR, this function returns the formal string length (i.e. the declared length
of a field or variable). If you want to obtain the “logical” length, not counting the trailing spaces,
right-TRIM the argument before passing it to CHAR[ACTER]_LENGTH.
This function fully supports text BLOBs of any length and character set.
CHAR_LENGTH Examples
select char_length
(cast (_iso8859_1 'Grüß di!' as varchar(24) character set utf8))
from rdb$database
-- returns 8; the fact that ü and ß take up two bytes each is irrelevant
select char_length
(cast (_iso8859_1 'Grüß di!' as char(24) character set utf8))
from rdb$database
-- returns 24: all 24 CHAR positions count
See also
BIT_LENGTH(), OCTET_LENGTH()
8.3.8. CRYPT_HASH()
Cryptographic hash
Result type
VARBINARY
Syntax
CRYPT_HASH (value USING <hash>)

<hash> ::= MD5 | SHA1 | SHA256 | SHA512
Parameter Description
CRYPT_HASH returns a cryptographic hash calculated from the input argument using the specified
algorithm. If the input argument is not a string or binary type, it is converted to string before
hashing.
This function returns a VARBINARY with the length depending on the specified algorithm.
• The MD5 and SHA1 algorithms are not recommended for security purposes due to
known attacks to generate hash collisions. These two algorithms are provided
for backward-compatibility only.
• When hashing string or binary values, take into account the effects of trailing
blanks (spaces or NULs). The value 'ab' in a CHAR(5) (3 trailing spaces) has a
different hash than if it is stored in a VARCHAR(5) (no trailing spaces) or CHAR(6)
(4 trailing spaces).
To avoid this, make sure you always use a variable length data type, or the
same fixed length data type, or normalize values before hashing, for example
using TRIM(TRAILING FROM value).
Examples of CRYPT_HASH
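For instance (the literal 'Firebird' is arbitrary):

```sql
select crypt_hash('Firebird' using sha256) from rdb$database;
-- returns a 32-byte VARBINARY value
select crypt_hash('Firebird' using sha512) from rdb$database;
-- returns a 64-byte VARBINARY value
```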
See also
HASH()
8.3.9. HASH()
Non-cryptographic hash
Result type
INTEGER, BIGINT
Syntax
HASH (value [USING <hash>])

<hash> ::= CRC32
Parameter Description
HASH returns a hash value for the input argument. If the input argument is not a string or binary
type, it is converted to string before hashing.
The optional USING clause specifies the non-cryptographic hash algorithm to apply. When the USING
clause is absent, the legacy PJW algorithm is applied; this is identical to its behaviour in previous
Firebird versions.
This function fully supports text BLOBs of any length and character set.
Supported algorithms
not specified
When no algorithm is specified, Firebird applies the 64-bit variant of the non-cryptographic PJW
hash function (also known as ELF64). This is a fast algorithm for general purposes (hash tables,
etc.), but its collision quality is suboptimal. Other hash functions — specified explicitly in the
USING clause, or cryptographic hashes through CRYPT_HASH() — should be used for more reliable
hashing.
CRC32
With CRC32, Firebird applies the CRC32 algorithm using the polynomial 0x04C11DB7.
Examples of HASH
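For instance, comparing the default PJW (ELF64) algorithm with CRC32; the literal 'Firebird' is arbitrary:

```sql
select hash('Firebird') from rdb$database;             -- 64-bit PJW hash, BIGINT
select hash('Firebird' using crc32) from rdb$database; -- CRC32 hash, INTEGER
```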
See also
CRYPT_HASH()
8.3.10. HEX_DECODE()
Result type
VARBINARY or BLOB
Syntax
HEX_DECODE (hex_data)
Parameter Description
HEX_DECODE decodes a string with hex-encoded data, and returns the decoded value as VARBINARY or
BLOB as appropriate for the input. If the length of the type of hex_data is not a multiple of 2, an error
is raised at prepare time. If the length of the value of hex_data is not a multiple of 2, an error is
raised at execution time.
When the input is not BLOB, the length of the resulting type is calculated as type_length / 2, where
type_length is the maximum length in characters of the input type.
Example of HEX_DECODE
select cast(hex_decode('48657861646563696D616C') as varchar(11))
from rdb$database;

CAST
============
Hexadecimal
See also
HEX_ENCODE(), BASE64_DECODE()
8.3.11. HEX_ENCODE()
Result type
VARCHAR CHARACTER SET ASCII or BLOB SUB_TYPE TEXT CHARACTER SET ASCII
Syntax
HEX_ENCODE (binary_data)
Parameter Description
HEX_ENCODE encodes binary_data with hex, and returns the encoded value as a VARCHAR CHARACTER SET
ASCII or BLOB SUB_TYPE TEXT CHARACTER SET ASCII as appropriate for the input.
When the input is not BLOB, the length of the resulting type is calculated as type_length * 2, where
type_length is the maximum length in bytes of the input type. If this length exceeds the maximum
length of VARCHAR, the function returns a BLOB.
Example of HEX_ENCODE
select hex_encode('Hexadecimal')
from rdb$database;
HEX_ENCODE
======================
48657861646563696D616C
See also
HEX_DECODE(), BASE64_ENCODE()
8.3.12. LEFT()
Result type
VARCHAR or BLOB
Syntax
LEFT (string, length)
Parameter Description
• This function fully supports text BLOBs of any length, including those with a multi-byte character
set.
• If string is a BLOB, the result is a BLOB. Otherwise, the result is a VARCHAR(n) with n the length of
the input string.
• If the length argument exceeds the string length, the input string is returned unchanged.
• If the length argument is not a whole number, bankers' rounding (round-to-even) is applied, i.e.
0.5 becomes 0, 1.5 becomes 2, 2.5 becomes 2, 3.5 becomes 4, etc.
See also
RIGHT()
8.3.13. LOWER()
Result type
(VAR)CHAR, (VAR)BINARY or BLOB
Syntax
LOWER (string)
Parameter Description
Returns the lowercase equivalent of the input string. The exact result depends on the character set.
With ASCII or NONE for instance, only ASCII characters are lowercased; with character set OCTETS
/(VAR)BINARY, the entire string is returned unchanged.
LOWER Examples
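For example:

```sql
select lower('Firebird') from rdb$database;
-- returns 'firebird'
```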
See also
UPPER()
8.3.14. LPAD()
Left-pads a string
Result type
VARCHAR or BLOB
Syntax
LPAD (str, endlen [, padstr])
Parameter Description
padstr The character or string to be used to pad the source string up to the
specified length. Default is space (' ')
Left-pads a string with spaces or with a user-supplied string until a given length is reached.
• This function fully supports text BLOBs of any length and character set.
• If padstr is given and equal to '' (empty string), no padding takes place.
• If endlen is less than the current string length, the string is truncated to endlen, even if padstr is
the empty string.
When used on a BLOB, this function may need to load the entire object into memory.
Although it does try to limit memory consumption, this may affect performance if
huge BLOBs are involved.
LPAD Examples
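For example:

```sql
select lpad('Hello', 12) from rdb$database;        -- returns '       Hello'
select lpad('Hello', 12, '-') from rdb$database;   -- returns '-------Hello'
select lpad('Hello', 12, 'abc') from rdb$database; -- returns 'abcabcaHello'
select lpad('Hello', 2) from rdb$database;         -- returns 'He'
```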
See also
RPAD()
8.3.15. OCTET_LENGTH()
Result type
INTEGER, or BIGINT for BLOB
Syntax
OCTET_LENGTH (string)
Parameter Description
Gives the length in bytes (octets) of the input string. For multibyte character sets, this may be less
than the number of characters times the “formal” number of bytes per character as found in
RDB$CHARACTER_SETS.
With arguments of type CHAR or BINARY, this function takes the entire formal string length (i.e. the
declared length of a field or variable) into account. If you want to obtain the “logical” byte length,
not counting the trailing spaces, right-TRIM the argument before passing it to OCTET_LENGTH.
OCTET_LENGTH Examples
select octet_length
(cast (_iso8859_1 'Grüß di!' as varchar(24) character set utf8))
from rdb$database
-- returns 10: ü and ß take up two bytes each in UTF8
select octet_length
(cast (_iso8859_1 'Grüß di!' as char(24) character set utf8))
from rdb$database
-- returns 26: all 24 CHAR positions count, and two of them are 2-byte
See also
BIT_LENGTH(), CHAR_LENGTH(), CHARACTER_LENGTH()
8.3.16. OVERLAY()
Result type
VARCHAR or BLOB
Syntax
OVERLAY (string PLACING replacement FROM pos [FOR length])
Parameter Description
pos The position from which replacement takes place (starting position)
By default, the number of characters removed from (overwritten in) the host string equals the
length of the replacement string. With the optional fourth argument, a different number of
characters can be specified for removal.
• If string or replacement is a BLOB, the result is a BLOB. Otherwise, the result is a VARCHAR(n) with n
the sum of the lengths of string and replacement.
• If pos is beyond the end of string, replacement is placed directly after string.
• If the number of characters from pos to the end of string is smaller than the length of
replacement (or than the length argument, if present), string is truncated at pos and replacement
placed after it.
• If pos or length is not a whole number, bankers' rounding (round-to-even) is applied, i.e. 0.5
becomes 0, 1.5 becomes 2, 2.5 becomes 2, 3.5 becomes 4, etc.
When used on a BLOB, this function may need to load the entire object into memory.
This may affect performance if huge BLOBs are involved.
OVERLAY Examples
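For example:

```sql
select overlay('Goodbye' placing 'Hello' from 1) from rdb$database;
-- returns 'Helloye': 5 characters are overwritten, matching the length of 'Hello'
select overlay('Goodbye' placing 'Hello' from 1 for 7) from rdb$database;
-- returns 'Hello': all 7 characters of 'Goodbye' are removed
```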
See also
REPLACE()
8.3.17. POSITION()
Result type
INTEGER
Syntax
POSITION (substr IN string)
| POSITION (substr, string [, startpos])
Parameter Description
Returns the (1-based) position of the first occurrence of a substring in a host string. With the
optional third argument, the search starts at a given offset, disregarding any matches that may
occur earlier in the string. If no match is found, the result is 0.
• The optional third argument is only supported in the second syntax (comma
syntax).
• This function fully supports text BLOBs of any size and character set.
When used on a BLOB, this function may need to load the entire object into memory.
This may affect performance if huge BLOBs are involved.
POSITION Examples
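For example:

```sql
select position('be' in 'To be or not to be') from rdb$database;
-- returns 4: the position of the first occurrence of 'be'
select position('be', 'To be or not to be', 8) from rdb$database;
-- returns 17: the search starts at offset 8
select position('it' in 'To be or not to be') from rdb$database;
-- returns 0: no match
```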
See also
SUBSTRING()
8.3.18. REPLACE()
Result type
VARCHAR or BLOB
Syntax
REPLACE (str, find, repl)
Parameter Description
• This function fully supports text BLOBs of any length and character set.
• If any argument is a BLOB, the result is a BLOB. Otherwise, the result is a VARCHAR(n) with n
calculated from the lengths of str, find and repl in such a way that even the maximum possible
number of replacements won’t overflow the field.
• If repl is the empty string, all occurrences of find are deleted from str.
• If any argument is NULL, the result is always NULL, even if nothing would have been replaced.
When used on a BLOB, this function may need to load the entire object into memory.
This may affect performance if huge BLOBs are involved.
REPLACE Examples
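For example:

```sql
select replace('Billy Wilder', 'il', 'oog') from rdb$database;
-- returns 'Boogly Woogder'
```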
See also
OVERLAY(), SUBSTRING(), POSITION(), CHAR_LENGTH(), CHARACTER_LENGTH()
8.3.19. REVERSE()
Reverses a string
Result type
VARCHAR
Syntax
REVERSE (string)
Parameter Description
REVERSE Examples
This function is useful if you want to group, search or order on string endings, e.g.
when dealing with domain names or email addresses:
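A sketch of that usage; the table mail_user and its column email are illustrative, not part of an example database:

```sql
select reverse('Firebird') from rdb$database;
-- returns 'driberiF'

select * from mail_user
order by reverse(email);
```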
8.3.20. RIGHT()
Result type
VARCHAR or BLOB
Syntax
RIGHT (string, length)
Parameter Description
• If string is a BLOB, the result is a BLOB. Otherwise, the result is a VARCHAR(n) with n the length of
the input string.
• If the length argument exceeds the string length, the input string is returned unchanged.
• If the length argument is not a whole number, bankers' rounding (round-to-even) is applied, i.e.
0.5 becomes 0, 1.5 becomes 2, 2.5 becomes 2, 3.5 becomes 4, etc.
When used on a BLOB, this function may need to load the entire object into memory.
This may affect performance if huge BLOBs are involved.
See also
LEFT(), SUBSTRING()
8.3.21. RPAD()
Right-pads a string
Result type
VARCHAR or BLOB
Syntax
RPAD (str, endlen [, padstr])
Parameter Description
padstr The character or string to be used to pad the source string up to the
specified length. Default is space (' ')
Right-pads a string with spaces or with a user-supplied string until a given length is reached.
• This function fully supports text BLOBs of any length and character set.
• If padstr is given and equals '' (empty string), no padding takes place.
• If endlen is less than the current string length, the string is truncated to endlen, even if padstr is
the empty string.
When used on a BLOB, this function may need to load the entire object into memory.
Although it does try to limit memory consumption, this may affect performance if
huge BLOBs are involved.
RPAD Examples
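For example:

```sql
select rpad('Hello', 12) from rdb$database;        -- returns 'Hello       '
select rpad('Hello', 12, '-') from rdb$database;   -- returns 'Hello-------'
select rpad('Hello', 12, 'abc') from rdb$database; -- returns 'Helloabcabca'
select rpad('Hello', 2) from rdb$database;         -- returns 'He'
```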
See also
LPAD()
8.3.22. SUBSTRING()
Result types
VARCHAR or BLOB
Syntax
SUBSTRING ( <substring-args> )
<substring-args> ::=
str FROM startpos [FOR length]
| str SIMILAR <similar-pattern> ESCAPE <escape>
<similar-pattern> ::=
  <similar-pattern-R1> <escape> " <similar-pattern-R2> <escape> " <similar-pattern-R3>
Parameter Description
startpos Integer expression, the position from which to start retrieving the
substring
Returns a string’s substring starting at the given position, either to the end of the string or with a
given length, or extracts a substring using an SQL regular expression pattern.
When used on a BLOB, this function may need to load the entire object into memory.
Although it does try to limit memory consumption, this may affect performance if
huge BLOBs are involved.
Positional SUBSTRING
In its simple, positional form (with FROM), this function returns the substring starting at character
position startpos (the first character being 1). Without the FOR argument, it returns all the
remaining characters in the string. With FOR, it returns length characters or the remainder of the
string, whichever is shorter.
When startpos is smaller than 1, substring behaves as if the string has 1 - startpos extra positions
before the actual first character at position 1. The length is considered from this imaginary start of
the string, so the resulting string could be shorter than the specified length, or even empty.
The function fully supports binary and text BLOBs of any length, and with any character set. If str is
a BLOB, the result is also a BLOB. For any other argument type, the result is a VARCHAR.
For non-BLOB arguments, the width of the result field is always equal to the length of str, regardless
of startpos and length. So, substring('pinhead' from 4 for 2) will return a VARCHAR(7) containing
the string 'he'.
Example
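For instance:

```sql
select substring('abcdef' from 3) from rdb$database       -- returns 'cdef'
select substring('abcdef' from 3 for 2) from rdb$database -- returns 'cd'
```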
Regular Expression SUBSTRING
In the regular expression form (with SIMILAR), the SUBSTRING function returns part of the string
matching an SQL regular expression pattern. If no match is found, NULL is returned.
The SIMILAR pattern is formed from three SQL regular expression patterns, R1, R2 and R3. The
entire pattern takes the form of R1 || '<escape>"' || R2 || '<escape>"' || R3, where <escape> is
the escape character defined in the ESCAPE clause. R2 is the pattern that matches the substring to
extract, and is enclosed between escaped double quotes (<escape>", e.g. '#"' with escape character
'#'). R1 matches the prefix of the string, and R3 the suffix of the string. Both R1 and R3 are optional
(they can be empty), but the pattern must match the entire string. In other words, it is not sufficient
to specify a pattern that only finds the substring to extract.
The escaped double quotes around R2 can be compared to defining a single capture group in more
common regular expression syntax like PCRE. That is, the full pattern is equivalent to R1(R2)R3,
which must match the entire input string, and the capture group is the substring to be returned.
If any one of R1, R2, or R3 is not a zero-length string and does not have the format of an SQL regular
expression, then an exception is raised.
The full SQL regular expression format is described in Syntax: SQL Regular Expressions
Examples
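For instance:

```sql
substring('abcabc' similar 'a#"bcab#"c' escape '#')  -- returns 'bcab'
substring('abcabc' similar 'a#"%#"c' escape '#')     -- returns 'bcab'
```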
See also
POSITION(), LEFT(), RIGHT(), CHAR_LENGTH(), CHARACTER_LENGTH(), SIMILAR TO
8.3.23. TRIM()
Result type
VARCHAR or BLOB
Syntax
Parameter Description
what The substring that should be removed (multiple times if there are several
matches) from the beginning, the end, or both sides of the input string str.
By default, it is space (' ')
Removes leading and/or trailing spaces (or optionally other strings) from the input string.
If str is a BLOB, the result is a BLOB. Otherwise, it is a VARCHAR(n) with n the formal length of str.
When used on a BLOB, this function may need to load the entire object into memory.
This may affect performance if huge BLOBs are involved.
TRIM Examples
select trim (leading from ' Waste no space ') from rdb$database
-- returns 'Waste no space '
select trim (leading '.' from ' Waste no space ') from rdb$database
-- returns ' Waste no space '
select trim ('la' from 'lalala I love you Ella') from rdb$database
-- returns ' I love you El'
select trim ('la' from 'Lalala I love you Ella') from rdb$database
-- returns 'Lalala I love you El'
8.3.24. UNICODE_CHAR()
Result type
CHAR(1) CHARACTER SET UTF8
Syntax
UNICODE_CHAR (code)
Parameter Description
Returns the character corresponding to the Unicode code point passed in the argument.
See also
UNICODE_VAL(), ASCII_CHAR()
8.3.25. UNICODE_VAL()
Result type
INTEGER
Syntax
UNICODE_VAL (ch)
Parameter Description
Returns the Unicode code point (range 0…0x10FFFF) of the character passed in.
• If the argument is a string with more than one character, the Unicode code point of the first
character is returned.
See also
UNICODE_CHAR(), ASCII_VAL()
8.3.26. UPPER()
Result type
(VAR)CHAR, (VAR)BINARY or BLOB
Syntax
UPPER (str)
Parameter Description
Returns the uppercase equivalent of the input string. The exact result depends on the character set.
With ASCII or NONE for instance, only ASCII characters are uppercased; with character set OCTETS
/(VAR)BINARY, the entire string is returned unchanged.
UPPER Examples
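A sketch (assuming a connection character set that supports accented characters):

```sql
select upper('Débâcle') from rdb$database
-- returns 'DÉBÂCLE'
```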
See also
LOWER()
8.4.1. DATEADD()
Result type
DATE, TIME or TIMESTAMP
Syntax
DATEADD (<args>)
<args> ::=
  <amount> <unit> TO <datetime>
  | <unit>, <amount>, <datetime>
<unit> ::=
  YEAR | MONTH | WEEK | DAY
  | HOUR | MINUTE | SECOND | MILLISECOND
Parameter Description
amount An integer expression of the SMALLINT, INTEGER or BIGINT type. For unit
MILLISECOND, the type is NUMERIC(18, 1). A negative value is subtracted.
Adds the specified number of years, months, weeks, days, hours, minutes, seconds or milliseconds
to a date/time value.
• With TIME arguments, only HOUR, MINUTE, SECOND and MILLISECOND can be used.
Examples of DATEADD
select
  cast(dateadd(-1 * extract(millisecond from ts) millisecond to ts) as varchar(30)) as t,
extract(millisecond from ts) as ms
from (
select timestamp '2014-06-09 13:50:17.4971' as ts
from rdb$database
) a
T MS
------------------------ ------
2014-06-09 13:50:17.0000 497.1
See also
DATEDIFF(), Operations Using Date and Time Values
8.4.2. DATEDIFF()
Result type
BIGINT, or NUMERIC(18,1) for MILLISECOND
Syntax
DATEDIFF (<args>)
<args> ::=
<unit> FROM <moment1> TO <moment2>
| <unit>, <moment1>, <moment2>
<unit> ::=
YEAR | MONTH | WEEK | DAY
| HOUR | MINUTE | SECOND | MILLISECOND
<momentN> ::= a DATE, TIME or TIMESTAMP expression
Parameter Description
Returns the number of years, months, weeks, days, hours, minutes, seconds or milliseconds elapsed
between two date/time values.
• DATE and TIMESTAMP arguments can be combined. No other mixes are allowed.
• With TIME arguments, only HOUR, MINUTE, SECOND and MILLISECOND can be used.
Computation
• DATEDIFF doesn’t look at any smaller units than the one specified in the first argument. As a
result,
  ◦ datediff (year from date '31-Dec-2009' to date '1-Jan-2010') returns 1, but
  ◦ datediff (year from date '1-Jan-2009' to date '31-Dec-2009') returns 0
DATEDIFF Examples
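A couple of illustrative calls:

```sql
select datediff (day from date '1-Jan-2025' to date '31-Dec-2025')
from rdb$database  -- returns 364

select datediff (year from date '31-Dec-2009' to date '1-Jan-2010')
from rdb$database  -- returns 1
```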
See also
DATEADD(), Operations Using Date and Time Values
8.4.3. EXTRACT()
Result type
SMALLINT or NUMERIC
Syntax
EXTRACT (<part> FROM <datetime>)
<part> ::=
YEAR | MONTH | QUARTER | WEEK
| DAY | WEEKDAY | YEARDAY
| HOUR | MINUTE | SECOND | MILLISECOND
| TIMEZONE_HOUR | TIMEZONE_MINUTE
<datetime> ::= a DATE, TIME or TIMESTAMP expression
Parameter Description
The returned data types and possible ranges are shown in the table below. If you try to extract a
part that isn’t present in the date/time argument (e.g. SECOND from a DATE or YEAR from a TIME), an
error occurs.
MILLISECOND
Extracts the millisecond value from a TIME or TIMESTAMP. The data type returned is NUMERIC(9,1).
If you extract the millisecond from CURRENT_TIME, be aware that this variable defaults to seconds
precision, so the result will always be 0. Extract from CURRENT_TIME(3) or CURRENT_TIMESTAMP to get
milliseconds precision.
WEEK
Extracts the ISO-8601 week number from a DATE or TIMESTAMP. ISO-8601 weeks start on a Monday
and always have the full seven days. Week 1 is the first week that has a majority (at least 4) of its
days in the new year. The first 1-3 days of the year may belong to the last week (52 or 53) of the
previous year. Likewise, a year’s final 1-3 days may belong to week 1 of the following year.
Be careful when combining WEEK and YEAR results. For instance, 30 December 2008
lies in week 1 of 2009, so extract(week from date '30 Dec 2008') returns 1.
However, extracting YEAR always gives the calendar year, which is 2008. In this
case, WEEK and YEAR are at odds with each other. The same happens when the first
days of January belong to the last week of the previous year.
Please also notice that WEEKDAY is not ISO-8601 compliant: it returns 0 for Sunday,
whereas ISO-8601 specifies 7.
See also
Data Types for Dates and Times
8.4.4. FIRST_DAY()
Result Type
DATE, TIMESTAMP (with or without time zone)
Syntax
Parameter Description
date_or_timestamp Expression of type DATE, TIMESTAMP WITHOUT TIME ZONE or TIMESTAMP WITH
TIME ZONE
FIRST_DAY returns a date or timestamp (same as the type of date_or_timestamp) with the first day of
the year, month, quarter or week of a given date or timestamp value.
• The first day of the week is considered as Sunday, following the same rules as for EXTRACT() with WEEKDAY.
• When a timestamp is passed, the return value preserves the time part.
Examples of FIRST_DAY
select
first_day(of month from current_date),
first_day(of year from current_timestamp),
first_day(of week from date '2017-11-01'),
first_day(of quarter from date '2017-11-01')
from rdb$database;
8.4.5. LAST_DAY()
Result Type
DATE, TIMESTAMP (with or without time zone)
Syntax
Parameter Description
date_or_timestamp Expression of type DATE, TIMESTAMP WITHOUT TIME ZONE or TIMESTAMP WITH
TIME ZONE
LAST_DAY returns a date or timestamp (same as the type of date_or_timestamp) with the last day of
the year, month, quarter or week of a given date or timestamp value.
• The last day of the week is considered as Saturday, following the same rules as for EXTRACT() with WEEKDAY.
• When a timestamp is passed, the return value preserves the time part.
Examples of LAST_DAY
select
last_day(of month from current_date),
last_day(of year from current_timestamp),
last_day(of week from date '2017-11-01'),
last_day(of quarter from date '2017-11-01')
from rdb$database;
8.5.1. CAST()
Result type
As specified by target_type
Syntax
<domain_or_non_array_type> ::=
!! See Scalar Data Types Syntax !!
<array_datatype> ::=
!! See Array Data Types Syntax !!
Parameter Description
CAST converts an expression to the desired data type or domain. If the conversion is not possible, an
error is raised.
“Shorthand” Syntax
Alternative syntax, supported only when casting a string literal to a DATE, TIME or TIMESTAMP:
datatype 'date/timestring'
This syntax was already available in InterBase, but was never properly documented. In the SQL
standard, this feature is called “datetime literals”.
Since Firebird 4.0, the use of 'NOW', 'YESTERDAY' and 'TOMORROW' in the shorthand
cast is no longer allowed; only literals defining a fixed moment in time are
supported.
The following table shows the type conversions possible with CAST.
From             To
[VAR]CHAR, BLOB  [VAR]CHAR, BLOB, Numeric types, DATE, TIME, TIMESTAMP
Numeric types    [VAR]CHAR, BLOB, Numeric types
DATE, TIME       [VAR]CHAR, BLOB, TIMESTAMP
TIMESTAMP        [VAR]CHAR, BLOB, DATE, TIME
Keep in mind that sometimes information is lost, for instance when you cast a TIMESTAMP to a DATE.
Also, the fact that types are CAST-compatible is in itself no guarantee that a conversion will succeed.
“CAST(123456789 as SMALLINT)” will definitely result in an error, as will “CAST('Judgement Day' as
DATE)”.
Casting Parameters
cast (? as integer)
This gives you control over the type of the parameter set up by the engine. Please notice that with
statement parameters, you always need a full-syntax cast — shorthand casts are not supported.
Casting to a domain or its base type are supported. When casting to a domain, any constraints (NOT
NULL and/or CHECK) declared for the domain must be satisfied, or the cast will fail. Please be aware
that a CHECK passes if it evaluates to TRUE or NULL! So, given the following statements:
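A sketch illustrating this, using a domain named quint (the same name referenced in the TYPE OF discussion below):

```sql
create domain quint as int check (value >= 5000);

select cast (2000 as quint) from rdb$database;  -- raises an error (CHECK fails)
select cast (8000 as quint) from rdb$database;  -- returns 8000
select cast (null as quint) from rdb$database;  -- returns NULL (CHECK evaluates to NULL)
```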
When the TYPE OF modifier is used, the expression is cast to the base type of the domain, ignoring
any constraints. With domain quint defined as above, the following two casts are equivalent and
will both succeed:
If TYPE OF is used with a (VAR)CHAR type, its character set and collation are retained:
If a domain’s definition is changed, existing CASTs to that domain or its type may
become invalid. If these CASTs occur in PSQL modules, their invalidation may be
detected. See the note The RDB$VALID_BLR field, in Appendix A.
It is also possible to cast expressions to the type of an existing table or view column. Only the type
itself is used; in the case of string types, this includes the character set but not the collation.
Constraints and default values of the source column are not applied.
If a column’s definition is altered, existing CASTs to that column’s type may become
invalid. If these CASTs occur in PSQL modules, their invalidation may be detected.
See the note The RDB$VALID_BLR field, in Appendix A.
Cast Examples
A full-syntax cast:
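For example (the People table and its columns in the second statement are illustrative):

```sql
select cast ('12' || '-June-' || '1959' as date) from rdb$database
```

A shorthand string-to-date cast:

```sql
update People set AgeCat = 'Old'
  where BirthDate < date '1-Jan-1943'
```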
Notice that you can drop even the shorthand cast from the example above, as the engine will
understand from the context (comparison to a DATE field) how to interpret the string:
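For instance, with an illustrative People table:

```sql
update People set AgeCat = 'Old'
  where BirthDate < '1-Jan-1943'
```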
However, this is not always possible. The cast below cannot be dropped, otherwise the engine
would find itself with an integer to be subtracted from a string:
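For example:

```sql
select cast('today' as date) - 7 from rdb$database
```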
8.6.1. BIN_AND()
Bitwise AND
Result type
integer type (the widest type of the arguments)
SMALLINT result is returned only if all the arguments are explicit SMALLINTs or NUMERIC(n, 0) with n
<= 4; otherwise small integers return an INTEGER result.
Syntax
Parameter Description
See also
BIN_OR(), BIN_XOR()
8.6.2. BIN_NOT()
Bitwise NOT
Result type
integer type matching the argument
SMALLINT result is returned only if all the arguments are explicit SMALLINTs or NUMERIC(n, 0) with n
<= 4; otherwise small integers return an INTEGER result.
Syntax
BIN_NOT (number)
Parameter Description
Returns the result of the bitwise NOT operation on the argument, i.e. one’s complement.
See also
BIN_AND(), BIN_OR(), BIN_XOR()
8.6.3. BIN_OR()
Bitwise OR
Result type
integer type (the widest type of the arguments)
SMALLINT result is returned only if all the arguments are explicit SMALLINTs or NUMERIC(n, 0) with n
<= 4; otherwise small integers return an INTEGER result.
Syntax
Parameter Description
See also
BIN_AND(), BIN_XOR()
8.6.4. BIN_SHL()
Bitwise left-shift
Result type
BIGINT or INT128 depending on the first argument
Syntax
Parameter Description
Returns the first argument bitwise left-shifted by the second argument, i.e. a << b or a·2^b.
See also
BIN_SHR()
8.6.5. BIN_SHR()
Bitwise right-shift
Result type
BIGINT or INT128 depending on the first argument
Syntax
Parameter Description
Returns the first argument bitwise right-shifted by the second argument, i.e. a >> b or a/2^b.
The operation performed is an arithmetic right shift (x86 SAR), meaning that the sign of the first
operand is always preserved.
See also
BIN_SHL()
8.6.6. BIN_XOR()
Bitwise XOR
Result type
integer type (the widest type of the arguments)
SMALLINT result is returned only if all the arguments are explicit SMALLINTs or NUMERIC(n, 0) with n
<= 4; otherwise small integers return an INTEGER result.
Syntax
Parameter Description
See also
BIN_AND(), BIN_OR()
8.7.1. CHAR_TO_UUID()
Result type
BINARY(16)
Syntax
CHAR_TO_UUID (ascii_uuid)
Parameter Description
CHAR_TO_UUID Examples
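A sketch (the UUID string is illustrative):

```sql
select char_to_uuid('A0bF4E45-3029-2a44-D493-4998c9b439A3') from rdb$database
-- returns the corresponding 16-byte BINARY(16) value
```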
See also
UUID_TO_CHAR(), GEN_UUID()
8.7.2. GEN_UUID()
Result type
BINARY(16)
Syntax
GEN_UUID ()
GEN_UUID Example
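A sketch; each call produces a new value, so no fixed result is shown:

```sql
select gen_uuid() from rdb$database
-- returns a newly generated 16-byte UUID (displayed as hex in tools like isql)
```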
See also
UUID_TO_CHAR(), CHAR_TO_UUID()
8.7.3. UUID_TO_CHAR()
Result type
CHAR(36)
Syntax
UUID_TO_CHAR (uuid)
Parameter Description
UUID_TO_CHAR Examples
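A sketch (the input value is illustrative):

```sql
select uuid_to_char(x'876C45F4569B320DBCB4735AC3509E5F') from rdb$database
-- returns '876C45F4-569B-320D-BCB4-735AC3509E5F'
```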
See also
CHAR_TO_UUID(), GEN_UUID()
8.8.1. GEN_ID()
Result type
BIGINT
Syntax
Parameter Description
If step equals 0, the function will leave the value of the generator unchanged and return its current
value.
The SQL-compliant NEXT VALUE FOR syntax is preferred, except when an increment other than the
configured increment of the sequence is needed.
If the value of the step parameter is less than zero, it will decrease the value of the
generator. You should be cautious with such manipulations in the database, as
they could compromise data integrity (subsequent INSERT statements could fail
due to the generation of duplicate ID values).
GEN_ID Example
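A sketch, e.g. in a BEFORE INSERT trigger (the generator name gen_recnum is illustrative):

```sql
new.rec_id = gen_id(gen_recnum, 1);
```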
See also
NEXT VALUE FOR, CREATE SEQUENCE (GENERATOR)
8.9.1. COALESCE()
Result type
Depends on input
Syntax
Parameter Description
The COALESCE function takes two or more arguments and returns the value of the first non-NULL
argument. If all the arguments evaluate to NULL, the result is NULL.
COALESCE Examples
This example picks the Nickname from the Persons table. If it happens to be NULL, it goes on to
FirstName. If that too is NULL, “'Mr./Mrs.'” is used. Finally, it adds the family name. All in all, it tries
to use the available data to compose a full name that is as informal as possible. This scheme only
works if absent nicknames and first names are NULL: if one of them is an empty string, COALESCE will
happily return that to the caller. That problem can be fixed by using NULLIF().
select
coalesce (Nickname, FirstName, 'Mr./Mrs.') || ' ' || LastName
as FullName
from Persons
See also
IIF(), NULLIF(), CASE
8.9.2. DECODE()
Result type
Depends on input
Syntax
DECODE(<testexpr>,
<expr1>, <result1>
[<expr2>, <result2> ...]
[, <defaultresult>])
Parameter Description
expr1, expr2, … exprN Expressions of any compatible types, to which the testexpr expression is
compared
DECODE is a shorthand for the so-called “simple CASE” construct, in which a given expression is
compared to a number of other expressions until a match is found. The result is determined by the
value listed after the matching expression. If no match is found, the default result is returned, if
present, otherwise NULL is returned.
CASE <testexpr>
WHEN <expr1> THEN <result1>
[WHEN <expr2> THEN <result2> ...]
[ELSE <defaultresult>]
END
Matching is done with the ‘=’ operator, so if testexpr is NULL, it won’t match any of
the exprs, not even those that are NULL.
DECODE Examples
select name,
age,
decode(upper(sex),
'M', 'Male',
'F', 'Female',
'Unknown'),
religion
from people
See also
CASE, Simple CASE
8.9.3. IIF()
Result type
Depends on input
Syntax
Parameter Description
IIF takes three arguments. If the first evaluates to true, the second argument is returned; otherwise
the third is returned.
IIF could be likened to the ternary “<condition> ? resultT : resultF” operator in C-like languages.
IIF Examples
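A sketch (table and column names are illustrative):

```sql
select iif( sex = 'M', 'Sir', 'Madam' ) from Customers
```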
See also
CASE, DECODE()
8.9.4. MAXVALUE()
Result type
Varies according to input — result will be of the same data type as the first expression in the list
(expr1).
Syntax
Parameter Description
Returns the maximum value from a list of numerical, string, or date/time expressions. This function
fully supports text BLOBs of any length and character set.
If one or more expressions resolve to NULL, MAXVALUE returns NULL. This behaviour differs from the
aggregate function MAX.
MAXVALUE Examples
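A sketch (table and column names are illustrative):

```sql
select maxvalue(price_1, price_2) as price
from pricelist
```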
See also
MINVALUE()
8.9.5. MINVALUE()
Result type
Varies according to input — result will be of the same data type as the first expression in the list
(expr1).
Syntax
Parameter Description
Returns the minimum value from a list of numerical, string, or date/time expressions. This function
fully supports text BLOBs of any length and character set.
If one or more expressions resolve to NULL, MINVALUE returns NULL. This behaviour differs from the
aggregate function MIN.
MINVALUE Examples
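A sketch (table and column names are illustrative):

```sql
select minvalue(price_1, price_2) as price
from pricelist
```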
See also
MAXVALUE()
8.9.6. NULLIF()
Result type
Depends on input
Syntax
Parameter Description
exp1 An expression
NULLIF returns the value of the first argument, unless it is equal to the second. In that case, NULL is
returned.
NULLIF Example
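A sketch, using the FatPeople table described below:

```sql
select avg( nullif(weight, -1) ) from FatPeople
```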
This will return the average weight of the persons listed in FatPeople, excluding those having a
weight of -1, since AVG skips NULL data. Presumably, -1 indicates “weight unknown” in this table. A
plain AVG(Weight) would include the -1 weights, thus skewing the result.
See also
COALESCE(), DECODE(), IIF(), CASE
8.10.1. COMPARE_DECFLOAT()
Result type
SMALLINT
Syntax
Parameter Description
Unlike the comparison operators (‘<’, ‘=’, ‘>’, etc.), comparison is exact: COMPARE_DECFLOAT(2.17,
2.170) returns 2 rather than 0.
See also
TOTALORDER()
8.10.2. NORMALIZE_DECFLOAT()
Result type
DECFLOAT
Syntax
NORMALIZE_DECFLOAT (decfloat_value)
Parameter Description
For any non-zero value, trailing zeroes are removed with appropriate correction of the exponent.
Examples of NORMALIZE_DECFLOAT
-- will return 12
select normalize_decfloat(12.00)
from rdb$database;
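And with an integer value whose coefficient carries a trailing zero:

```sql
-- will return 1.2E+2
select normalize_decfloat(120)
from rdb$database;
```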
8.10.3. QUANTIZE()
Returns a value that is equal in value — except for rounding — to the first argument, but with the
same exponent as the second argument
Result type
DECFLOAT
Syntax
Parameter Description
exp_value Value or expression to use for its exponent; needs to be of type DECFLOAT,
or cast-compatible with DECFLOAT
QUANTIZE returns a DECFLOAT value that is equal in value and sign (except for rounding) to
decfloat_value, and that has an exponent equal to the exponent of exp_value. The type of the return
value is DECFLOAT(16) if both arguments are DECFLOAT(16), otherwise the result type is DECFLOAT(34).
The target exponent is the exponent used in the Decimal64 or Decimal128 storage
format of DECFLOAT of exp_value. This is not necessarily the same as the exponent
displayed in tools like isql. For example, the value 1.23E+2 is coefficient 123 and
exponent 0, while 1.2 is coefficient 12 and exponent -1.
If the exponent of decfloat_value is greater than the one of exp_value, the coefficient of
decfloat_value is multiplied by a power of ten, and its exponent decreased. If the exponent is
smaller, then its coefficient is rounded using the current decfloat rounding mode, and its exponent
is increased.
When it is not possible to achieve the target exponent because the coefficient would exceed the
target precision (16 or 34 decimal digits), either a “Decfloat float invalid operation” error is raised or
NaN is returned (depending on the current decfloat traps configuration).
There are almost no restrictions on exp_value. However, as with most operations, NaN/sNaN/Infinity
arguments raise an exception (unless allowed by the current decfloat traps configuration), and a
NULL argument makes the function return NULL.
Examples of QUANTIZE
V PIC QUANTIZE
====== ====== ========
3.16 0.001 3.160
3.16 0.01 3.16
3.16 0.1 3.2
3.16 1 3
3.16 1E+1 0E+1
-0.1 1 -0
0 1E+5 0E+5
316 0.1 316.0
316 1 316
316 1E+1 3.2E+2
316 1E+2 3E+2
8.10.4. TOTALORDER()
Result type
SMALLINT
Syntax
Parameter Description
TOTALORDER compares two DECFLOAT values, including any special values. The comparison is exact,
and returns a SMALLINT, one of: -1 (first value is less than the second), 0 (values are equal), or
1 (first value is greater than the second). DECFLOAT values are ranked in the following order:
-NaN < -sNaN < -INF < -0.1 < -0.10 < -0 < 0 < 0.10 < 0.1 < INF < sNaN < NaN
See also
COMPARE_DECFLOAT()
8.11.1. DECRYPT()
Result type
VARBINARY or BLOB
Syntax
DECRYPT ( encrypted_input
USING <algorithm> [MODE <mode>]
KEY key
[IV iv] [<ctr_type>] [CTR_LENGTH ctr_length]
[COUNTER initial_counter] )
Parameter Description
• Sizes of data strings (like encrypted_input, key and iv) must meet the
requirements of the selected algorithm and mode.
• This function returns BLOB SUB_TYPE BINARY when the first argument is a BLOB,
and VARBINARY for all other text and binary types.
• When the encrypted data was text, it must be explicitly cast to a string type of
appropriate character set.
• The ins and outs of the various algorithms are considered beyond the scope of
this language reference. We recommend searching the internet for further
details on the algorithms.
DECRYPT Examples
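A sketch (the key and IV values are illustrative only; the encrypted input must have been produced with the same algorithm, key and IV):

```sql
select decrypt(x'0154090759DF' using sober128 key 'AbcdAbcdAbcdAbcd' iv '01234567')
from rdb$database;
```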
See also
ENCRYPT(), RSA_DECRYPT()
8.11.2. ENCRYPT()
Result type
VARBINARY or BLOB
Syntax
ENCRYPT ( input
USING <algorithm> [MODE <mode>]
KEY key
[IV iv] [<ctr_type>] [CTR_LENGTH ctr_length]
[COUNTER initial_counter] )
<block_cipher> ::=
  AES | ANUBIS | BLOWFISH | KHAZAD | RC5
  | RC6 | SAFER+ | TWOFISH | XTEA
Parameter Description
ctr_length Counter length; only for CTR mode. Default is size of iv.
• This function returns BLOB SUB_TYPE BINARY when the first argument is a BLOB,
and VARBINARY for all other text and binary types.
• Sizes of data strings (like key and iv) must meet the requirements of the
selected algorithm and mode, see table Encryption Algorithm Requirements.
◦ In general, the size of iv must match the block size of the algorithm
◦ For ECB and CBC mode, input must be multiples of the block size, you will
need to manually pad with zeroes or spaces as appropriate.
• The ins and outs of the various algorithms and modes are considered beyond
the scope of this language reference. We recommend searching the internet for
further details on the algorithms.
Block Ciphers

Algorithm  Key size (bytes)  Block size (bytes)
AES        16, 24, 32        16
ANUBIS     16 - 40           16
BLOWFISH   8 - 56            8
KHAZAD     16                8
RC5        8 - 128           8
RC6        8 - 128           16
SAFER+     16, 24, 32        16
TWOFISH    16, 24, 32        16
XTEA       16                8

Stream Ciphers

Algorithm  Key size (bytes)  Block size (bytes)  Notes
CHACHA20   16, 32            1                   Nonce size (IV) is 8 or 12 bytes. For nonce
                                                 size 8, initial_counter is a 64-bit integer;
                                                 for size 12, 32-bit.
RC4        5 - 256           1
SOBER128   4x                1                   Nonce size (IV) is 4y bytes; the length is
                                                 independent of the key size.
ENCRYPT Examples
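A sketch (key and IV values are illustrative; never hardcode real keys in queries):

```sql
select encrypt('897897' using sober128 key 'AbcdAbcdAbcdAbcd' iv '01234567')
from rdb$database;
```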
See also
DECRYPT(), RSA_ENCRYPT()
8.11.3. RSA_DECRYPT()
Decrypts data using an RSA private key and removes OAEP or PKCS 1.5 padding
Result type
VARBINARY
Syntax
Parameter Description
RSA_DECRYPT decrypts encrypted_input using the RSA private key and then removes padding from the
resulting data.
By default, OAEP padding is used. The PKCS_1_5 option will switch to the less secure PKCS 1.5
padding.
The PKCS_1_5 option is only for backward compatibility with systems applying
PKCS 1.5 padding. For security reasons, it should not be used in new projects.
• When the encrypted data was text, it must be explicitly cast to a string type of
appropriate character set.
RSA_DECRYPT Examples
Run the examples of the RSA_PRIVATE and RSA_PUBLIC, RSA_ENCRYPT functions first.
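A sketch, assuming those examples stored the private key and the encrypted message in the USER_SESSION context variables 'private_key' and 'msg':

```sql
select cast(rsa_decrypt(rdb$get_context('USER_SESSION', 'msg')
    key rdb$get_context('USER_SESSION', 'private_key')) as varchar(128))
from rdb$database;
```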
See also
RSA_ENCRYPT(), RSA_PRIVATE(), DECRYPT()
8.11.4. RSA_ENCRYPT()
Pads data using OAEP or PKCS 1.5 and then encrypts it with an RSA public key
Result type
VARBINARY
Syntax
Parameter Description
RSA_ENCRYPT pads input using the OAEP or PKCS 1.5 padding scheme and then encrypts it using the
specified RSA public key. This function is normally used to encrypt short symmetric keys which are
then used in block ciphers to encrypt a message.
By default, OAEP padding is used. The PKCS_1_5 option will switch to the less secure PKCS 1.5
padding.
The PKCS_1_5 option is only for backward compatibility with systems applying
PKCS 1.5 padding. For security reasons, it should not be used in new projects.
RSA_ENCRYPT Examples
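A sketch, assuming the RSA_PRIVATE and RSA_PUBLIC examples stored a key pair in the USER_SESSION context; the ciphertext is kept in the context variable 'msg' for later decryption:

```sql
select rdb$set_context('USER_SESSION', 'msg',
  rsa_encrypt('Some message' key rdb$get_context('USER_SESSION', 'public_key')))
from rdb$database;
```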
See also
RSA_DECRYPT(), RSA_PUBLIC(), ENCRYPT()
8.11.5. RSA_PRIVATE()
Result type
VARBINARY
Syntax
RSA_PRIVATE (key_length)
Parameter Description
key_length Key length in bytes; minimum 4, maximum 1024. A size of 256 bytes (2048
bits) or larger is recommended.
RSA_PRIVATE generates an RSA private key of the specified length (in bytes) in PKCS#1 format.
The larger the length specified, the longer it takes for the function to generate a private key.
RSA_PRIVATE Examples
Putting private keys in the context variables is not secure; we’re doing it here for
demonstration purposes. SYSDBA and users with the role RDB$ADMIN or the system
privilege MONITOR_ANY_ATTACHMENT can see all context variables from all
attachments.
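A sketch along those lines, storing the generated key in a context variable:

```sql
select rdb$set_context('USER_SESSION', 'private_key', rsa_private(256))
from rdb$database;
```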
See also
RSA_PUBLIC(), RSA_DECRYPT()
8.11.6. RSA_PUBLIC()
Result type
VARBINARY
Syntax
RSA_PUBLIC (private_key)
Parameter Description
RSA_PUBLIC returns the RSA public key in PKCS#1 format for the provided RSA private key (also
PKCS#1 format).
RSA_PUBLIC Examples
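A sketch, assuming the RSA_PRIVATE example stored a key in the USER_SESSION variable 'private_key':

```sql
select rdb$set_context('USER_SESSION', 'public_key',
  rsa_public(rdb$get_context('USER_SESSION', 'private_key')))
from rdb$database;
```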
See also
RSA_PRIVATE(), RSA_ENCRYPT()
8.11.7. RSA_SIGN_HASH()
PSS encodes a message hash and signs it with an RSA private key
Result type
VARBINARY
Syntax
RSA_SIGN_HASH (message_digest
KEY private_key
[HASH <hash>] [SALT_LENGTH salt_length]
[PKCS_1_5])
Parameter Description
message_digest Hash of message to sign. The hash algorithm used should match hash
hash Hash to generate PSS encoding; default is SHA256. This should be the same
hash as used to generate message_digest.
salt_length Length of the desired salt in bytes; default is 8; minimum 1, maximum 32.
The recommended value is between 8 and 16.
RSA_SIGN_HASH performs PSS encoding of the message_digest to be signed, and signs using the RSA
private key.
By default, OAEP padding is used. The PKCS_1_5 option will switch to the less secure PKCS 1.5
padding.
The PKCS_1_5 option is only for backward compatibility with systems applying
PKCS 1.5 padding. For security reasons, it should not be used in new projects.
This function expects the hash of a message (or message digest), not the actual
message. The hash argument should specify the algorithm that was used to
generate that hash.
A function that accepts the actual message to hash might be introduced in a future
version of Firebird.
PSS encoding
RSA_SIGN_HASH Examples
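A sketch, assuming the RSA_PRIVATE example stored a key in the USER_SESSION variable 'private_key'; the signature is kept in 'msg', which the RSA_VERIFY_HASH example below reads back:

```sql
select rdb$set_context('USER_SESSION', 'msg',
  rsa_sign_hash(crypt_hash('Test message' using sha256)
    key rdb$get_context('USER_SESSION', 'private_key')))
from rdb$database;
```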
See also
RSA_VERIFY_HASH(), RSA_PRIVATE(), CRYPT_HASH()
8.11.8. RSA_VERIFY_HASH()
Result type
BOOLEAN
Syntax
RSA_VERIFY_HASH (message_digest
SIGNATURE signature KEY public_key
[HASH <hash>] [SALT_LENGTH salt_length]
[PKCS_1_5])
Parameter Description
message_digest Hash of message to verify. The hash algorithm used should match hash
public_key RSA public key in PKCS#1 format matching the private key used to sign
hash Hash to use for the message digest; default is SHA256. This should be the
same hash as used to generate message_digest, and as used in
RSA_SIGN_HASH
salt_length Length of the salt in bytes; default is 8; minimum 1, maximum 32. Value
must match the length used in RSA_SIGN_HASH.
RSA_VERIFY_HASH performs PSS encoding of the message_digest to be verified, and verifies the digital
signature using the provided RSA public key.
By default, OAEP padding is used. The PKCS_1_5 option will switch to the less secure PKCS 1.5
padding.
The PKCS_1_5 option is only for backward compatibility with systems applying
PKCS 1.5 padding. For security reasons, it should not be used in new projects.
This function expects the hash of a message (or message digest), not the actual
message. The hash argument should specify the algorithm that was used to
generate that hash.
A function that accepts the actual message to hash might be introduced in a future
version of Firebird.
RSA_VERIFY_HASH Examples
Run the examples of the RSA_PRIVATE, RSA_PUBLIC and RSA_SIGN_HASH functions first.
select rsa_verify_hash(
crypt_hash('Test message' using sha256)
signature rdb$get_context('USER_SESSION', 'msg')
key rdb$get_context('USER_SESSION', 'public_key'))
from rdb$database
See also
RSA_SIGN_HASH(), RSA_PUBLIC(), CRYPT_HASH()
8.12.1. MAKE_DBKEY()
Result type
BINARY(8)
Syntax
Parameter Description
recnum Record number. Either absolute (if dpnum and ppnum are absent), or
relative (if dpnum present)
dpnum Data page number. Either absolute (if ppnum is absent) or relative (if
ppnum present)
MAKE_DBKEY creates a DBKEY value using a relation name or ID, record number, and (optionally)
logical numbers of data page and pointer page.
3. Argument recnum represents an absolute record number in the relation (if the
next arguments dpnum and ppnum are missing), or a record number relative to
the first record, specified by the next arguments.
4. Argument dpnum is a logical number of data page in the relation (if the next
argument ppnum is missing), or number of data pages relative to the first data
page addressed by the given ppnum.
6. All numbers are zero-based. The maximum allowed value for dpnum and ppnum is
2^32 (4294967296). If dpnum is specified, then recnum can be negative. If dpnum
is missing and recnum is negative, then NULL is returned. If ppnum is specified,
then dpnum can be negative. If ppnum is missing and dpnum is negative, then
NULL is returned.
Examples of MAKE_DBKEY
1. Select record using relation name (note that relation name is uppercase)
select *
from rdb$relations
where rdb$db_key = make_dbkey('RDB$RELATIONS', 0)
2. Select record using relation ID (relation RDB$RELATIONS has ID 6)
select *
from rdb$relations
where rdb$db_key = make_dbkey(6, 0)
3. Select all records physically residing on the first data page
select *
from rdb$relations
where rdb$db_key >= make_dbkey(6, 0, 0)
and rdb$db_key < make_dbkey(6, 0, 1)
4. Select all records physically residing on the first data page of 6th pointer page
select *
from SOMETABLE
where rdb$db_key >= make_dbkey('SOMETABLE', 0, 0, 5)
and rdb$db_key < make_dbkey('SOMETABLE', 0, 1, 5)
8.12.2. RDB$ERROR()
Available in
PSQL
Result type
Varies (see table below)
Syntax
RDB$ERROR (<context>)
<context> ::=
GDSCODE | SQLCODE | SQLSTATE | EXCEPTION | MESSAGE
RDB$ERROR returns data of the specified context about the active PSQL exception. Its scope is
confined to exception-handling blocks in PSQL (WHEN … DO). Outside the exception handling blocks,
RDB$ERROR always returns NULL. This function cannot be called from DSQL.
Example of RDB$ERROR
BEGIN
...
WHEN ANY DO
EXECUTE PROCEDURE P_LOG_EXCEPTION(RDB$ERROR(MESSAGE));
END
See also
Trapping and Handling Errors, GDSCODE, SQLCODE, SQLSTATE
8.12.3. RDB$GET_TRANSACTION_CN()
Result type
BIGINT
Syntax
RDB$GET_TRANSACTION_CN (transaction_id)
Parameter Description
transaction_id Transaction id
RDB$GET_TRANSACTION_CN returns the commit number (“CN”) of the supplied transaction. If the
return value is greater than 1, it is the actual CN of the transaction if it was committed after
the database was started.
The function can also return one of the following results, indicating the commit status of the
transaction:
-1 Transaction is in limbo
1 Transaction committed before the database started, or its number is less than the Oldest
Interesting Transaction for the database
NULL Transaction number supplied is NULL or greater than Next Transaction for the
database
For more information about CN, consult the Firebird 4.0 Release Notes.
RDB$GET_TRANSACTION_CN Examples
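A minimal sketch, retrieving the CN of the current transaction through the CURRENT_TRANSACTION context variable:

```sql
select rdb$get_transaction_cn(current_transaction)
from rdb$database;
```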
8.12.4. RDB$ROLE_IN_USE()
Result type
BOOLEAN
Syntax
RDB$ROLE_IN_USE (role_name)
Parameter Description
role_name String expression for the role to check. Case-sensitive, must match the
role name as stored in RDB$ROLES
RDB$ROLE_IN_USE returns TRUE if the specified role is active for the current connection, and FALSE
otherwise. Contrary to CURRENT_ROLE — which only returns the explicitly specified role — this
function can be used to check for roles that are active by default, or cumulative roles activated by
an explicitly specified role.
RDB$ROLE_IN_USE Examples
select rdb$role_name
from rdb$roles
where rdb$role_in_use(rdb$role_name);
See also
CURRENT_ROLE
8.12.5. RDB$SYSTEM_PRIVILEGE()
Result type
BOOLEAN
Syntax
RDB$SYSTEM_PRIVILEGE (<sys_privilege>)
<sys_privilege> ::=
!! See CREATE ROLE !!
Parameter Description
RDB$SYSTEM_PRIVILEGE accepts a system privilege name and returns TRUE if the current connection
has the given system privilege, and FALSE otherwise.
The authorization of the current connection is determined by privileges of the current user, the
user PUBLIC, and the currently active roles (explicitly set or activated by default).
RDB$SYSTEM_PRIVILEGE Examples
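A minimal sketch, checking a single system privilege (USER_MANAGEMENT is one of the system privilege names listed under CREATE ROLE):

```sql
select rdb$system_privilege(user_management)
from rdb$database;
```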
See also
Fine-grained System Privileges
Chapter 9. Aggregate Functions
Syntax
<aggregate_function> ::=
aggregate_function ([<expr> [, <expr> ...]])
[FILTER (WHERE <condition>)]
The aggregate functions can also be used as window functions with the OVER () clause. See Window
(Analytical) Functions for more information.
Aggregate functions are available in DSQL and PSQL. Availability in ESQL is not tracked by this
Language Reference.
The FILTER clause can be thought of as a more explicit form of using an aggregate function with a
condition (DECODE, CASE, IIF, NULLIF) to ignore some values that would otherwise be considered by the
aggregation.
The FILTER clause can be used with any aggregate functions in aggregate or windowed (OVER)
statements, but not with window-only functions like DENSE_RANK.
Example of FILTER
Suppose you need a query to count the rows with status = 'A' and the row with status = 'E' as
different columns. The old way to do it would be:
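A sketch of the conditional-aggregation approach, assuming a hypothetical table DATA with a STATUS column (COUNT skips the NULL produced when the CASE condition does not match):

```sql
select count(case when status = 'A' then 1 end) as a_count,
       count(case when status = 'E' then 1 end) as e_count
from data;
```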
The FILTER clause lets you express those conditions more explicitly:
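A sketch with FILTER, assuming the same hypothetical table DATA with a STATUS column:

```sql
select count(*) filter (where status = 'A') as a_count,
       count(*) filter (where status = 'E') as e_count
from data;
```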
You can use more than one FILTER modifier in an aggregate query. You could, for
example, use 12 filters on totals aggregating sales for a year to produce monthly
totals.
9.2.1. AVG()
Average
Result type
Depends on the input type
Syntax
Parameter Description
AVG returns the average argument value in the group. NULL is ignored.
• Parameter ALL (the default) applies the aggregate function to all values.
• Parameter DISTINCT directs the AVG function to consider only one instance of each unique value,
no matter how many times this value occurs.
• If the set of retrieved records is empty or contains only NULL, the result will be NULL.
AVG Examples
SELECT
dept_no,
AVG(salary)
FROM employee
GROUP BY dept_no
See also
SELECT
9.2.2. COUNT()
Result type
BIGINT
Syntax
Parameter Description
• ALL is the default: it counts all values in the set that are not NULL.
• If COUNT (*) is specified instead of the expression expr, all rows will be counted. COUNT (*) —
◦ does not take an expr argument, since its context is column-unspecific by definition
◦ counts each row separately and returns the number of rows in the specified table or group
without omitting duplicate rows
• If the result set is empty or contains only NULL in the specified column(s), the returned count is
zero.
COUNT Examples
SELECT
dept_no,
COUNT(*) AS cnt,
COUNT(DISTINCT name) AS cnt_name
FROM employee
GROUP BY dept_no
See also
SELECT
9.2.3. LIST()
Result type
BLOB
Syntax
Parameter Description
LIST returns a string consisting of the non-NULL argument values in the group, separated either by a
comma or by a user-supplied separator. If there are no non-NULL values (this includes the case
where the group is empty), NULL is returned.
• ALL (the default) results in all non-NULL values being listed. With DISTINCT, duplicates are
removed, except if expr is a BLOB.
• The optional separator argument may be any string expression. This makes it possible to specify
e.g. ascii_char(13) as a separator.
• The expr and separator arguments support BLOBs of any size and character set.
• Datetime and numeric arguments are implicitly converted to strings before concatenation.
• The result is a text BLOB, except when expr is a BLOB of another subtype.
• The ordering of the list values is undefined — the order in which the strings are concatenated is
determined by read order from the source set which, in tables, is not generally defined. If
ordering is important, the source data can be pre-sorted using a derived table or similar.
Some reports indicate this no longer works in Firebird 5.0, or only in more
limited circumstances.
LIST Examples
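A sketch retrieving a comma-separated list of names per department, assuming the EMPLOYEE table used in the other examples of this chapter has a NAME column:

```sql
select dept_no,
       list(name) as names
from employee
group by dept_no;
```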
See also
SELECT
9.2.4. MAX()
Maximum
Result type
Returns a result of the same data type as the input expression.
Syntax
Parameter Description
• If the input argument is a string, the function will return the value that will be sorted last if
COLLATE is used.
• This function fully supports text BLOBs of any size and character set.
MAX Examples
SELECT
dept_no,
MAX(salary)
FROM employee
GROUP BY dept_no
See also
MIN(), SELECT
9.2.5. MIN()
Minimum
Result type
Returns a result of the same data type as the input expression.
Syntax
Parameter Description
• If the input argument is a string, the function will return the value that will be sorted first if
COLLATE is used.
• This function fully supports text BLOBs of any size and character set.
MIN Examples
SELECT
dept_no,
MIN(salary)
FROM employee
GROUP BY dept_no
See also
MAX(), SELECT
9.2.6. SUM()
Sum
Result type
Depends on the input type
Syntax
Parameter Description
SUM calculates and returns the sum of non-NULL values in the group.
• ALL is the default option — all values in the set that are not NULL are processed. If DISTINCT is
specified, duplicates are removed from the set and the SUM evaluation is done afterward.
SUM Examples
SELECT
dept_no,
SUM(salary)
FROM employee
GROUP BY dept_no
See also
SELECT
9.3.1. CORR()
Correlation coefficient
Result type
DOUBLE PRECISION
Syntax
Parameter Description
The CORR function returns the correlation coefficient for a pair of numerical expressions.
In a statistical sense, correlation is the degree to which a pair of variables are linearly related. A
linear relation between variables means that the value of one variable can to a certain extent
predict the value of the other. The correlation coefficient represents the degree of correlation as a
number ranging from -1 (high inverse correlation) to 1 (high correlation). A value of 0 corresponds
to no correlation.
If the group or window is empty, or contains only NULL values, the result will be NULL.
CORR Examples
select
corr(alength, aheight) AS c_corr
from measure
See also
COVAR_POP(), STDDEV_POP()
9.3.2. COVAR_POP()
Population covariance
Result type
DOUBLE PRECISION
Syntax
Parameter Description
The function COVAR_POP returns the population covariance for a pair of numerical expressions.
If the group or window is empty, or contains only NULL values, the result will be NULL.
COVAR_POP Examples
select
covar_pop(alength, aheight) AS c_covar_pop
from measure
See also
COVAR_SAMP(), SUM(), COUNT()
9.3.3. COVAR_SAMP()
Sample covariance
Result type
DOUBLE PRECISION
Syntax
Parameter Description
The function COVAR_SAMP returns the sample covariance for a pair of numerical expressions.
If the group or window is empty, contains only 1 row, or contains only NULL values, the result will be
NULL.
COVAR_SAMP Examples
select
covar_samp(alength, aheight) AS c_covar_samp
from measure
See also
COVAR_POP(), SUM(), COUNT()
9.3.4. STDDEV_POP()
Result type
DOUBLE PRECISION or NUMERIC depending on the type of expr
Syntax
STDDEV_POP ( <expr> )
Parameter Description
The function STDDEV_POP returns the population standard deviation for a group or window. NULL
values are skipped.
SQRT(VAR_POP(<expr>))
If the group or window is empty, or contains only NULL values, the result will be NULL.
STDDEV_POP Examples
select
dept_no,
stddev_pop(salary)
from employee
group by dept_no
See also
STDDEV_SAMP(), VAR_POP(), SQRT
9.3.5. STDDEV_SAMP()
Result type
DOUBLE PRECISION or NUMERIC depending on the type of expr
Syntax
STDDEV_SAMP ( <expr> )
Parameter Description
The function STDDEV_SAMP returns the sample standard deviation for a group or window. NULL values
are skipped.
SQRT(VAR_SAMP(<expr>))
If the group or window is empty, contains only 1 row, or contains only NULL values, the result will be
NULL.
STDDEV_SAMP Examples
select
dept_no,
stddev_samp(salary)
from employee
group by dept_no
See also
STDDEV_POP(), VAR_SAMP(), SQRT
9.3.6. VAR_POP()
Population variance
Result type
DOUBLE PRECISION or NUMERIC depending on the type of expr
Syntax
VAR_POP ( <expr> )
Parameter Description
The function VAR_POP returns the population variance for a group or window. NULL values are
skipped.
If the group or window is empty, or contains only NULL values, the result will be NULL.
VAR_POP Examples
select
dept_no,
var_pop(salary)
from employee
group by dept_no
See also
VAR_SAMP(), SUM(), COUNT()
9.3.7. VAR_SAMP()
Sample variance
Result type
DOUBLE PRECISION or NUMERIC depending on the type of expr
Syntax
VAR_SAMP ( <expr> )
Parameter Description
The function VAR_SAMP returns the sample variance for a group or window. NULL values are skipped.
If the group or window is empty, contains only 1 row, or contains only NULL values, the result will be
NULL.
VAR_SAMP Examples
select
dept_no,
var_samp(salary)
from employee
group by dept_no
See also
VAR_POP(), SUM(), COUNT()
axis. A set of linear functions can be used for calculating these values.
The linear regression aggregate functions take a pair of arguments, the dependent variable
expression (y) and the independent variable expression (x), which are both numeric value
expressions. Any row in which either argument evaluates to NULL is removed from the rows that
qualify. If there are no rows that qualify, then the result of REGR_COUNT is 0 (zero), and the other
linear regression aggregate functions result in NULL.
9.4.1. REGR_AVGX()
Result type
DOUBLE PRECISION
Syntax
Parameter Description
The function REGR_AVGX calculates the average of the independent variable (x) of the regression line.
<exprX> ::=
CASE WHEN <x> IS NOT NULL AND <y> IS NOT NULL THEN <x> END
See also
REGR_AVGY(), REGR_COUNT(), SUM()
9.4.2. REGR_AVGY()
Result type
DOUBLE PRECISION
Syntax
Parameter Description
The function REGR_AVGY calculates the average of the dependent variable (y) of the regression line.
<exprY> ::=
CASE WHEN <x> IS NOT NULL AND <y> IS NOT NULL THEN <y> END
See also
REGR_AVGX(), REGR_COUNT(), SUM()
9.4.3. REGR_COUNT()
Result type
DOUBLE PRECISION
Syntax
Parameter Description
The function REGR_COUNT counts the number of non-empty pairs of the regression line.
COUNT(*) FILTER (WHERE <x> IS NOT NULL AND <y> IS NOT NULL)
See also
COUNT()
9.4.4. REGR_INTERCEPT()
Result type
DOUBLE PRECISION
Syntax
Parameter Description
The function REGR_INTERCEPT calculates the point of intersection of the regression line with the y-
axis.
REGR_INTERCEPT Examples
BYYEAR TOTAL_VALUE
------ ------------
1991 118377.35
1992 414557.62
1993 710737.89
1994 1006918.16
1995 1303098.43
1996 1599278.69
1997 1895458.96
1998 2191639.23
1999 2487819.50
2000 2783999.77
...
See also
REGR_AVGX(), REGR_AVGY(), REGR_SLOPE()
9.4.5. REGR_R2()
Result type
DOUBLE PRECISION
Syntax
Parameter Description
The REGR_R2 function calculates the coefficient of determination, or R-squared, of the regression
line.
POWER(CORR(<y>, <x>), 2)
See also
CORR(), POWER
9.4.6. REGR_SLOPE()
Result type
DOUBLE PRECISION
Syntax
Parameter Description
The function REGR_SLOPE calculates the slope of the regression line.
<exprX> :==
CASE WHEN <x> IS NOT NULL AND <y> IS NOT NULL THEN <x> END
See also
COVAR_POP(), VAR_POP()
9.4.7. REGR_SXX()
Result type
DOUBLE PRECISION
Syntax
Parameter Description
The function REGR_SXX calculates the sum of squares of the independent expression variable (x).
<exprX> ::=
CASE WHEN <x> IS NOT NULL AND <y> IS NOT NULL THEN <x> END
See also
REGR_COUNT(), VAR_POP()
9.4.8. REGR_SXY()
Result type
DOUBLE PRECISION
Syntax
Parameter Description
The function REGR_SXY calculates the sum of products of independent variable expression (x) times
dependent variable expression (y).
See also
COVAR_POP(), REGR_COUNT()
9.4.9. REGR_SYY()
Result type
DOUBLE PRECISION
Syntax
Parameter Description
The function REGR_SYY calculates the sum of squares of the dependent variable (y).
<exprY> ::=
CASE WHEN <x> IS NOT NULL AND <y> IS NOT NULL THEN <y> END
See also
REGR_COUNT(), VAR_POP()
Chapter 10. Window (Analytical) Functions
The window functions are used with the OVER clause. They may appear only in the SELECT list, or the
ORDER BY clause of a query.
Window functions are available in DSQL and PSQL. Availability in ESQL is not tracked by this
Language Reference.
Syntax
<window_function> ::=
<aggregate-function> OVER <window-name-or-spec>
| <window-function-name> ([<value-expression> [, <value-expression> ...]])
OVER <window-name-or-spec>
<aggregate-function> ::=
!! See Aggregate Functions !!
<window-name-or-spec> ::=
(<window-specification-details>) | existing_window_name
<window-function-name> ::=
<ranking-function>
| <navigational-function>
<ranking-function> ::=
RANK | DENSE_RANK | PERCENT_RANK | ROW_NUMBER
| CUME_DIST | NTILE
<navigational-function> ::=
LEAD | LAG | FIRST_VALUE | LAST_VALUE | NTH_VALUE
<window-specification-details> ::=
[existing-window-name]
[<window-partition-clause>]
[<order-by-clause>]
[<window-frame-clause>]
<window-partition-clause> ::=
PARTITION BY <value-expression> [, <value-expression> ...]
<order-by-clause> ::=
ORDER BY <sort-specification> [, <sort-specification> ...]
<sort-specification> ::=
<value-expression> [<ordering-specification>] [<null-ordering>]
<ordering-specification> ::=
ASC | ASCENDING
| DESC | DESCENDING
<null-ordering> ::=
NULLS FIRST
| NULLS LAST
<window-frame-clause> ::=
{ RANGE | ROWS } <window-frame-extent>
<window-frame-extent> ::=
<window-frame-start>
| <window-frame-between>
<window-frame-start> ::=
UNBOUNDED PRECEDING
| <value-expression> PRECEDING
| CURRENT ROW
<window-frame-between> ::=
BETWEEN { UNBOUNDED PRECEDING | <value-expression> PRECEDING
| CURRENT ROW | <value-expression> FOLLOWING }
AND { <value-expression> PRECEDING | CURRENT ROW
| <value-expression> FOLLOWING | UNBOUNDED FOLLOWING }
Argument Description
existing-window-name A named window defined using the WINDOW clause of the current query
specification.
Imagine a table EMPLOYEE with columns ID, DEPARTMENT, NAME and SALARY, and the need to show each
employee with their respective salary and the percentage of their salary over the payroll.
select
id,
department,
salary,
salary / (select sum(salary) from employee) portion
from employee
order by id;
Results
The query is repetitive and lengthy to run, especially if EMPLOYEE happens to be a complex view.
The same query could be specified in a much faster and more elegant way using a window
function:
select
id,
department,
salary,
salary / sum(salary) OVER () portion
from employee
order by id;
Here, sum(salary) over () is computed with the sum of all SALARY from the query (the EMPLOYEE
table).
10.2. Partitioning
Like aggregate functions, which may operate alone or in relation to a group, window functions may
also operate on a group, which is called a “partition”.
Syntax
Aggregation over a group could produce more than one row, so the result set generated by a
partition is joined with the main query using the same expression list as the partition.
Continuing the EMPLOYEE example, instead of getting the portion of each employee’s salary over the
all-employees total, we would like to get the portion based on the employees in the same
department:
select
id,
department,
salary,
salary / sum(salary) OVER (PARTITION BY department) portion
from employee
order by id;
Results
10.3. Ordering
The ORDER BY sub-clause can be used with or without partitions. The ORDER BY clause within OVER
specifies the order in which the window function will process rows. This order does not have to be
the same as the order rows appear in the output.
There is an important concept associated with window functions: for each row there is a set of rows
in its partition called the window frame. By default, when specifying ORDER BY, the frame consists of
all rows from the beginning of the partition to the current row and rows equal to the current ORDER
BY expression. Without ORDER BY, the default frame consists of all rows in the partition.
As a result, for standard aggregate functions, the ORDER BY clause produces partial aggregation
results as rows are processed.
Example
select
id,
salary,
sum(salary) over (order by salary) cumul_salary
from employee
order by salary;
Results
id salary cumul_salary
-- ------ ------------
3 8.00 8.00
4 9.00 17.00
1 10.00 37.00
5 10.00 37.00
2 12.00 49.00
Then cumul_salary returns the partial/accumulated (or running) aggregation (of the SUM function). It
may appear strange that 37.00 is repeated for the ids 1 and 5, but that is how it should work. The
ORDER BY keys are grouped together, and the aggregation is computed once (but summing the two
10.00). To avoid this, you can add the ID field to the end of the ORDER BY clause.
It’s possible to use multiple windows with different orders, and ORDER BY parts like ASC/DESC and
NULLS FIRST/LAST.
With a partition, ORDER BY works the same way, but at each partition boundary the aggregation is
reset.
All aggregation functions can use ORDER BY, except for LIST().
The frame comprises three pieces: unit, start bound, and end bound. The unit can be RANGE or ROWS,
which defines how the bounds will work.
UNBOUNDED PRECEDING
<expr> PRECEDING
CURRENT ROW
<expr> FOLLOWING
UNBOUNDED FOLLOWING
• With RANGE, the ORDER BY should specify exactly one expression, and that expression should be of
a numeric, date, time, or timestamp type. For <expr> PRECEDING, expr is subtracted from the ORDER
BY expression, and for <expr> FOLLOWING, expr is added. For CURRENT ROW, the expression is used
as-is.
All rows inside the current partition that are between the bounds are considered part of the
resulting window frame.
• With ROWS, ORDER BY expressions are not limited by number or type. For this unit, <expr>
PRECEDING and <expr> FOLLOWING relate to the row position within the current partition, and not
the values of the ordering keys.
Both UNBOUNDED PRECEDING and UNBOUNDED FOLLOWING work identically with RANGE and ROWS. UNBOUNDED
PRECEDING starts at the first row of the current partition, and UNBOUNDED FOLLOWING ends at the last row
of the current partition.
The frame syntax with <window-frame-start> specifies the start-frame, with the end-frame being
CURRENT ROW.
• ROW_NUMBER, LAG and LEAD always work as ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW
• DENSE_RANK, RANK, PERCENT_RANK and CUME_DIST always work as RANGE BETWEEN UNBOUNDED PRECEDING
AND CURRENT ROW
• FIRST_VALUE, LAST_VALUE and NTH_VALUE respect frames, but the RANGE unit behaviour is identical to
ROWS.
When the ORDER BY clause is used, but a frame clause is omitted, the default considers the partition
up to the current row. When combined with SUM, this results in a running total:
select
id,
salary,
sum(salary) over (order by salary) sum_salary
from employee
order by salary;
Result:
| id | salary | sum_salary |
|---:|-------:|-----------:|
| 3 | 8.00 | 8.00 |
| 4 | 9.00 | 17.00 |
| 1 | 10.00 | 37.00 |
| 5 | 10.00 | 37.00 |
| 2 | 12.00 | 49.00 |
On the other hand, if we apply a frame for the entire partition, we get the total for the entire
partition.
select
id,
salary,
sum(salary) over (
order by salary
ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING
) sum_salary
from employee
order by salary;
Result:
| id | salary | sum_salary |
|---:|-------:|-----------:|
| 3 | 8.00 | 49.00 |
| 4 | 9.00 | 49.00 |
| 1 | 10.00 | 49.00 |
| 5 | 10.00 | 49.00 |
| 2 | 12.00 | 49.00 |
This example is to demonstrate how this works; the result of this example would be simpler to
produce with sum(salary) over().
We can use a range frame to compute the count of employees with salaries between (an employee’s
salary - 1) and (their salary + 1) with this query:
select
id,
salary,
count(*) over (
order by salary
RANGE BETWEEN 1 PRECEDING AND 1 FOLLOWING
) range_count
from employee
order by salary;
Result:
| id | salary | range_count |
|---:|-------:|------------:|
| 3 | 8.00 | 2 |
| 4 | 9.00 | 4 |
| 1 | 10.00 | 3 |
| 5 | 10.00 | 3 |
| 2 | 12.00 | 1 |
b. as a base window of another named or inline (OVER) window, if it is not a window with a frame
(ROWS or RANGE clauses)
A window with a base window cannot have PARTITION BY, nor override the
ordering (ORDER BY) of the base window.
These functions can be used with or without partitioning and ordering. However, using them
without ordering almost never makes sense.
The ranking functions can be used to create different types of counters. Consider SUM(1) OVER (ORDER
BY SALARY) as an example of what they can do, each of them differently. Following is an example
query, also comparing with the SUM behaviour.
select
id,
salary,
dense_rank() over (order by salary),
rank() over (order by salary),
row_number() over (order by salary),
sum(1) over (order by salary)
from employee
order by salary;
Results
The difference between DENSE_RANK and RANK is that there is a gap related to duplicate rows (relative
to the window ordering) only in RANK. DENSE_RANK continues assigning sequential numbers after the
duplicate salary. On the other hand, ROW_NUMBER always assigns sequential numbers, even when
there are duplicate values.
10.6.1. CUME_DIST()
Result type
DOUBLE PRECISION
Syntax
CUME_DIST is calculated as the number of rows preceding or peer of the current row divided by the
number of rows in the partition.
CUME_DIST Examples
select
id,
salary,
cume_dist() over (order by salary)
from employee
order by salary;
Result
id salary cume_dist
-- ------ ---------
3 8.00 0.2
4 9.00 0.4
1 10.00 0.8
5 10.00 0.8
2 12.00 1
10.6.2. DENSE_RANK()
See also RANK(), PERCENT_RANK()
Rank of rows in a partition without gaps
Result type
BIGINT
Syntax
Rows with the same window_order values get the same rank within the partition window_partition,
if specified. The dense rank of a row is equal to the number of different rank values in the partition
preceding the current row, plus one.
DENSE_RANK Examples
select
id,
salary,
dense_rank() over (order by salary)
from employee
order by salary;
Result
id salary dense_rank
-- ------ ----------
3 8.00 1
4 9.00 2
1 10.00 3
5 10.00 3
2 12.00 4
10.6.3. NTILE()
See also RANK(), ROW_NUMBER()
Distributes the rows of the current window partition into the specified number of tiles (groups)
Result type
BIGINT
Syntax
Argument Description
NTILE Examples
select
id,
salary,
rank() over (order by salary),
ntile(3) over (order by salary)
from employee
order by salary;
Result
10.6.4. PERCENT_RANK()
Result type
DOUBLE PRECISION
Syntax
PERCENT_RANK is calculated as the RANK() minus 1 of the current row divided by the number of rows
in the partition minus 1.
PERCENT_RANK Examples
select
id,
salary,
rank() over (order by salary),
percent_rank() over (order by salary)
from employee
order by salary;
Result
10.6.5. RANK()
See also DENSE_RANK(), CUME_DIST()
Rank of each row in a partition
Result type
BIGINT
Syntax
Rows with the same values of window-order get the same rank within the partition window-
partition, if specified. The rank of a row is equal to the number of rank values in the partition
preceding the current row, plus one.
RANK Examples
select
id,
salary,
rank() over (order by salary)
from employee
order by salary;
Result
id salary rank
-- ------ ----
3 8.00 1
4 9.00 2
1 10.00 3
5 10.00 3
2 12.00 5
See also
DENSE_RANK(), ROW_NUMBER()
10.6.6. ROW_NUMBER()
Result type
BIGINT
Syntax
Returns the sequential row number in the partition, where 1 is the first row in each of the
partitions.
ROW_NUMBER Examples
select
id,
salary,
row_number() over (order by salary)
from employee
order by salary;
Result
id salary row_number
-- ------ ----------
3 8.00 1
4 9.00 2
1 10.00 3
5 10.00 4
2 12.00 5
See also
DENSE_RANK(), RANK()
This is likely to produce strange or unexpected results for NTH_VALUE and especially
LAST_VALUE, so make sure to specify an explicit frame with these functions.
select
id,
salary,
first_value(salary) over (order by salary),
last_value(salary) over (order by salary),
nth_value(salary, 2) over (order by salary),
lag(salary) over (order by salary),
lead(salary) over (order by salary)
from employee
order by salary;
Results
10.7.1. FIRST_VALUE()
Result type
The same type as expr
Syntax
Argument Description
See also
LAST_VALUE(), NTH_VALUE()
10.7.2. LAG()
Value from row in the current partition with a given offset before the current row
Result type
Syntax
Argument Description
offset The offset in rows before the current row to get the value identified by
expr. If offset is not specified, the default is 1. offset can be a column,
subquery or other expression that results in a positive integer value, or
another type that can be implicitly converted to BIGINT. offset cannot be
negative (use LEAD instead).
default The default value to return if offset points outside the partition. Default is
NULL.
The LAG function provides access to the row in the current partition with a given offset before the
current row.
If offset points outside the current partition, default will be returned, or NULL if no default was
specified.
LAG Examples
Suppose you have RATE table that stores the exchange rate for each day. To trace the change of the
exchange rate over the past five days you can use the following query.
select
bydate,
cost,
cost - lag(cost) over (order by bydate) as change,
100 * (cost - lag(cost) over (order by bydate)) /
lag(cost) over (order by bydate) as percent_change
from rate
where bydate between dateadd(-4 day to current_date)
and current_date
order by bydate
Result
See also
LEAD()
10.7.3. LAST_VALUE()
Result type
The same type as expr
Syntax
Argument Description
See also
FIRST_VALUE(), NTH_VALUE()
10.7.4. LEAD()
Value from a row in the current partition with a given offset after the current row
Result type
The same type as expr
Syntax
Argument Description
offset The offset in rows after the current row to get the value identified by
expr. If offset is not specified, the default is 1. offset can be a column,
subquery or other expression that results in a positive integer value, or
another type that can be implicitly converted to BIGINT. offset cannot be
negative (use LAG instead).
default The default value to return if offset points outside the partition. Default is
NULL.
The LEAD function provides access to the row in the current partition with a given offset after the
current row.
If offset points outside the current partition, default will be returned, or NULL if no default was
specified.
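A sketch of LEAD, reusing the RATE table from the LAG example; each row shows the next day's cost alongside the current one:

```sql
select bydate,
       cost,
       lead(cost) over (order by bydate) as next_cost
from rate
order by bydate
```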
See also
LAG()
10.7.5. NTH_VALUE()
The Nth value starting from the first or the last row of the current frame
Result type
The same type as expr
Syntax
Argument Description
offset The offset in rows from the start (FROM FIRST), or the last (FROM LAST) to get
the value identified by expr. offset can be a column, subquery or other
expression that results in a positive integer value, or another type that
can be implicitly converted to BIGINT. offset cannot be zero or negative.
The NTH_VALUE function returns the Nth value starting from the first (FROM FIRST) or the last (FROM
LAST) row of the current frame, see also note on frame for navigational functions. Offset 1 with FROM
FIRST is equivalent to FIRST_VALUE, and offset 1 with FROM LAST is equivalent to LAST_VALUE.
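A sketch of NTH_VALUE with an explicit frame over the whole partition, assuming the EMPLOYEE table used in earlier examples; without the explicit frame, the default frame would stop at the current row:

```sql
select id,
       salary,
       nth_value(salary, 2) over (
         order by salary
         rows between unbounded preceding and unbounded following
       ) as second_lowest
from employee
order by salary
```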
See also
FIRST_VALUE(), LAST_VALUE()
When using aggregate functions inside OVER, all columns not used in aggregate
functions must be specified in the GROUP BY clause of the SELECT.
select
code_employee_group,
avg(salary) as avg_salary,
rank() over (order by avg(salary)) as salary_rank
from employee
group by code_employee_group
Chapter 11. System Packages
RDB$PROFILER
Profiler
RDB$TIME_ZONE_UTIL
Time zone utilities
11.1. RDB$BLOB_UTIL
Package of functions and procedures for blob manipulation
RDB$BLOB_UTIL.IS_WRITABLE returns TRUE when a BLOB is suitable for data appending using
BLOB_APPEND without copying.
Input parameter
• BLOB type BLOB NOT NULL
RDB$BLOB_UTIL.NEW_BLOB creates a new BLOB SUB_TYPE BINARY. It returns a BLOB suitable for data
appending, similar to BLOB_APPEND.
The advantage over BLOB_APPEND is that it’s possible to set custom SEGMENTED and TEMP_STORAGE
options.
BLOB_APPEND always creates BLOBs in temporary storage, which may not always be the best
approach if the created BLOB is going to be stored in a permanent table, as this will require a copy
operation.
The BLOB returned from this function, even when TEMP_STORAGE = FALSE, may be used with
BLOB_APPEND for appending data.
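For example, a sketch creating a stream BLOB in permanent storage, so that storing it in a table column later does not require a copy:

```sql
execute block returns (b blob sub_type binary)
as
begin
  -- SEGMENTED = FALSE (stream blob), TEMP_STORAGE = FALSE (permanent storage)
  b = rdb$blob_util.new_blob(false, false);
  b = blob_append(b, 'some data');
  suspend;
end
```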
Input parameters
• SEGMENTED type BOOLEAN NOT NULL
RDB$BLOB_UTIL.OPEN_BLOB opens an existing BLOB for reading. It returns a handle (an integer bound
to the transaction) suitable for use with other functions of this package, like SEEK, READ_DATA and
CLOSE_HANDLE.
Handles which are not explicitly closed are closed automatically when the transaction ends.
Input parameter
• BLOB type BLOB NOT NULL
RDB$BLOB_UTIL.READ_DATA reads data from a BLOB handle. If LENGTH is passed with a positive number, it returns a VARBINARY with that maximum length.
If LENGTH is NULL, it returns a segment of the BLOB with a maximum length of 32765.
Input parameters
• HANDLE type INTEGER NOT NULL
RDB$BLOB_UTIL.SEEK sets the position for the next READ_DATA and returns the new position. The
MODE parameter determines how OFFSET is interpreted: 0 positions from the start, 1 from the current position, and 2 from the end.
Input parameters
• HANDLE type INTEGER NOT NULL
SEEK only works on stream blobs. Attempting to seek on a segmented blob results in an error.
RDB$BLOB_UTIL.CANCEL_BLOB discards a temporary BLOB. If the same BLOB is used after cancel, an “invalid blob id” error will be raised.
Input parameter
• BLOB type BLOB
RDB$BLOB_UTIL.CLOSE_HANDLE closes a BLOB handle opened with OPEN_BLOB. Handles which are not explicitly closed are closed automatically when the transaction ends.
Input parameter
• HANDLE type INTEGER NOT NULL
11.1.8. Examples
Create a BLOB in temporary space and return it in EXECUTE BLOCK
suspend;
end
s = rdb$blob_util.read_data(bhandle, 3);
suspend;
s = rdb$blob_util.read_data(bhandle, 3);
suspend;
s = rdb$blob_util.read_data(bhandle, 3);
suspend;
Seek in a blob
set term !;
-- Add data.
b = blob_append(b, '0123456789');
-- Advance 2.
rdb$blob_util.seek(bhandle, 1, 2);
s = rdb$blob_util.read_data(bhandle, 3);
suspend;
set term ;!
11.2. RDB$PROFILER
A package with functions and procedures to run and control the profiler.
These profiler controls are standard, but the actual profiler is a plugin. The profiler used depends
on the setting of DefaultProfilerPlugin in firebird.conf or databases.conf, or the PLUGIN_NAME
parameter of START_SESSION.
Users are allowed to profile their own connections. Profiling connections from other users requires
the PROFILE_ANY_ATTACHMENT system privilege.
RDB$PROFILER.START_SESSION starts a new profiler session, makes it the current session (of the given
ATTACHMENT_ID) and returns its identifier.
If FLUSH_INTERVAL is different from NULL, auto-flush is set up in the same way as manually calling
RDB$PROFILER.SET_FLUSH_INTERVAL.
PLUGIN_OPTIONS are plugin-specific options, and currently should be NULL for the Default_Profiler
plugin.
Input parameters
• DESCRIPTION type VARCHAR(255) CHARACTER SET UTF8 default NULL
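For example, the following sketch starts a profiler session for the current connection, leaving the remaining parameters at their defaults:

```sql
select rdb$profiler.start_session('Profile Session 1')
from rdb$database;
```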
RDB$PROFILER.CANCEL_SESSION cancels the current profiler session (of the given ATTACHMENT_ID).
All session data present in the profiler plugin is discarded and will not be flushed.
Input parameter
• ATTACHMENT_ID type BIGINT NOT NULL default CURRENT_CONNECTION
RDB$PROFILER.DISCARD removes all sessions (of the given ATTACHMENT_ID) from memory, without
flushing them.
Input parameter
• ATTACHMENT_ID type BIGINT NOT NULL default CURRENT_CONNECTION
RDB$PROFILER.FINISH_SESSION finishes the current profiler session (of the given ATTACHMENT_ID).
If FLUSH is TRUE, the snapshot tables are updated with data of the finished session (and old finished
sessions not yet present in the snapshot), otherwise data remains only in memory for later update.
Input parameters
• FLUSH type BOOLEAN NOT NULL default TRUE
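For example, finishing the current session and flushing its data to the snapshot tables:

```sql
execute procedure rdb$profiler.finish_session(true);
```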
RDB$PROFILER.FLUSH updates the snapshot tables with data from the profile sessions (of the given
ATTACHMENT_ID) in memory.
Input parameter
• ATTACHMENT_ID type BIGINT NOT NULL default CURRENT_CONNECTION
RDB$PROFILER.PAUSE_SESSION pauses the current profiler session (of the given ATTACHMENT_ID), so
statistics of subsequently executed statements are not collected.
If FLUSH is TRUE, the snapshot tables are updated with data up to the current moment, otherwise data
remains only in memory for later update.
Input parameters
• FLUSH type BOOLEAN NOT NULL default FALSE
RDB$PROFILER.RESUME_SESSION resumes the current profiler session (of the given ATTACHMENT_ID), if it
was paused, so statistics of subsequently executed statements are collected again.
Input parameter
• ATTACHMENT_ID type BIGINT NOT NULL default CURRENT_CONNECTION
Input parameters
• FLUSH_INTERVAL type INTEGER NOT NULL
11.2.9. Example
set term !;
set term ;!
2. Start profiling
set term !;
execute block
as
begin
execute procedure ins;
delete from tab;
end!
set term ;!
3. Data analysis
select preq.*
from plg$prof_requests preq
join plg$prof_sessions pses
on pses.profile_id = preq.profile_id and
pses.description = 'Profile Session 1';
select pstat.*
from plg$prof_psql_stats pstat
join plg$prof_sessions pses
on pses.profile_id = pstat.profile_id and
pses.description = 'Profile Session 1'
order by pstat.profile_id,
pstat.request_id,
pstat.line_num,
pstat.column_num;
select pstat.*
from plg$prof_record_source_stats pstat
join plg$prof_sessions pses
on pses.profile_id = pstat.profile_id and
pses.description = 'Profile Session 2'
order by pstat.profile_id,
pstat.request_id,
pstat.cursor_id,
pstat.record_source_id;
11.3. RDB$TIME_ZONE_UTIL
A package of time zone utility functions and procedures.
Example
select rdb$time_zone_util.database_version()
from rdb$database;
Returns:
DATABASE_VERSION
================
2023c
RDB$TIME_ZONE_UTIL.TRANSITIONS returns the set of rules between the start and end timestamps for a
named time zone.
Input parameters
• RDB$TIME_ZONE_NAME type CHAR(63)
Output parameters:
RDB$START_TIMESTAMP
type TIMESTAMP WITH TIME ZONE — The start timestamp of the transition
RDB$END_TIMESTAMP
type TIMESTAMP WITH TIME ZONE — The end timestamp of the transition
RDB$ZONE_OFFSET
type SMALLINT — The zone’s offset, in minutes
RDB$DST_OFFSET
type SMALLINT — The zone’s DST offset, in minutes
RDB$EFFECTIVE_OFFSET
type SMALLINT — Effective offset (ZONE_OFFSET + DST_OFFSET)
Example
select *
from rdb$time_zone_util.transitions(
'America/Sao_Paulo',
timestamp '2017-01-01',
timestamp '2019-01-01');
Chapter 12. Context Variables
12.1. CURRENT_CONNECTION
Unique identifier of the current connection.
Type
BIGINT
Syntax
CURRENT_CONNECTION
Its value is derived from a counter on the database header page, which is incremented for each
new connection. When a database is restored, this counter is reset to zero.
Examples
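A minimal illustration:

```sql
select current_connection from rdb$database;
```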
12.2. CURRENT_DATE
Current server date in the session time zone
Type
DATE
Syntax
CURRENT_DATE
Within a PSQL module (procedure, trigger or executable block), the value of CURRENT_DATE will
remain constant every time it is read. If multiple modules call or trigger each other, the value will
remain constant throughout the duration of the outermost module. If you need a progressing value
in PSQL (e.g. to measure time intervals), use 'TODAY'.
Examples
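A minimal illustration:

```sql
select current_date from rdb$database;
```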
12.3. CURRENT_ROLE
Current explicit role of the connection
Type
VARCHAR(63)
Syntax
CURRENT_ROLE
CURRENT_ROLE is a context variable containing the explicitly specified role of the currently connected
user. If there is no explicitly specified role, CURRENT_ROLE is 'NONE'.
CURRENT_ROLE always represents a valid role or 'NONE'. If a user connects with a non-existing role,
the engine silently resets it to 'NONE' without returning an error.
Roles that are active by default and not explicitly specified on connect or using SET ROLE are not
returned by CURRENT_ROLE. Use RDB$ROLE_IN_USE to check for all active roles.
Example
See also
RDB$ROLE_IN_USE
12.4. CURRENT_TIME
Current server time in the session time zone, with time zone information
Type
TIME WITH TIME ZONE
Data type changed in Firebird 4.0 from TIME WITHOUT TIME ZONE to TIME WITH TIME
ZONE. Use LOCALTIME to obtain TIME WITHOUT TIME ZONE.
Syntax
CURRENT_TIME [ (<precision>) ]
<precision> ::= 0 | 1 | 2 | 3
Parameter Description
CURRENT_TIME has a default precision of 0 decimals, where CURRENT_TIMESTAMP has a default precision
of 3 decimals. As a result, CURRENT_TIMESTAMP is not the exact sum of CURRENT_DATE and CURRENT_TIME,
unless you explicitly specify a precision (i.e. CURRENT_TIME(3) or CURRENT_TIMESTAMP(0)).
Within a PSQL module (procedure, trigger or executable block), the value of CURRENT_TIME will
remain constant every time it is read. If multiple modules call or trigger each other, the value will
remain constant throughout the duration of the outermost module. If you need a progressing value
in PSQL (e.g. to measure time intervals), use 'NOW'.
Since Firebird 4.0, CURRENT_TIME returns the TIME WITH TIME ZONE type. In order for
your queries to be compatible with database code of Firebird 4.0 and higher,
Firebird 3.0.4 and Firebird 2.5.9 introduced the LOCALTIME expression. In Firebird
3.0.4 and Firebird 2.5.9, LOCALTIME is a synonym for CURRENT_TIME.
In Firebird 5.0, LOCALTIME returns TIME [WITHOUT TIME ZONE], while CURRENT_TIME
returns TIME WITH TIME ZONE.
Examples
See also
CURRENT_TIMESTAMP, LOCALTIME, LOCALTIMESTAMP
12.5. CURRENT_TIMESTAMP
Current server date and time in the session time zone, with time zone information
Type
TIMESTAMP WITH TIME ZONE
Data type changed in Firebird 4.0 from TIMESTAMP WITHOUT TIME ZONE to TIMESTAMP
WITH TIME ZONE. Use LOCALTIMESTAMP to obtain TIMESTAMP WITHOUT TIME ZONE.
Syntax
CURRENT_TIMESTAMP [ (<precision>) ]
<precision> ::= 0 | 1 | 2 | 3
Parameter Description
The default precision of CURRENT_TIME is 0 decimals, so CURRENT_TIMESTAMP is not the exact sum of
CURRENT_DATE and CURRENT_TIME, unless you explicitly specify a precision (i.e. CURRENT_TIME(3) or
CURRENT_TIMESTAMP(0)).
Within a PSQL module (procedure, trigger or executable block), the value of CURRENT_TIMESTAMP will
remain constant every time it is read. If multiple modules call or trigger each other, the value will
remain constant throughout the duration of the outermost module. If you need a progressing value
in PSQL (e.g. to measure time intervals), use 'NOW'.
Since Firebird 4.0, CURRENT_TIMESTAMP returns the TIMESTAMP WITH TIME ZONE type. In
order for your queries to be compatible with database code of Firebird 4.0 and
higher, Firebird 3.0.4 and Firebird 2.5.9 introduced the LOCALTIMESTAMP expression.
In Firebird 3.0.4 and Firebird 2.5.9, LOCALTIMESTAMP is a synonym for
CURRENT_TIMESTAMP.
Examples
See also
12.6. CURRENT_TRANSACTION
Unique identifier of the current transaction
Type
BIGINT
Syntax
CURRENT_TRANSACTION
The transaction identifier is derived from a counter on the database header page, which is
incremented for each new transaction. When a database is restored, this counter is reset to zero.
Examples
New.Txn_ID = current_transaction;
12.7. CURRENT_USER
Name of the user of the current connection
Type
VARCHAR(63)
Syntax
CURRENT_USER
Example
12.8. DELETING
Indicates if the trigger fired for a DELETE operation
Available in
PSQL — DML triggers only
Type
BOOLEAN
Syntax
DELETING
Example
if (deleting) then
begin
insert into Removed_Cars (id, make, model, removed)
values (old.id, old.make, old.model, current_timestamp);
end
12.9. GDSCODE
Firebird error code of the error in a WHEN … DO block
Available in
PSQL
Type
INTEGER
Syntax
GDSCODE
In a “WHEN … DO” error handling block, the GDSCODE context variable contains the numeric value of
the current Firebird error code. GDSCODE is non-zero in WHEN … DO blocks if the current error has a
Firebird error code. Outside error handlers, GDSCODE is always 0. Outside PSQL, it doesn’t exist at all.
After WHEN GDSCODE, you must use symbolic names like grant_obj_notfound etc. But
the GDSCODE context variable is an INTEGER. If you want to compare it against a
specific error, the numeric value must be used, e.g. 335544551 for
grant_obj_notfound.
Example
begin
execute procedure log_grant_error(gdscode);
exit;
end
12.10. INSERTING
Indicates if the trigger fired for an INSERT operation
Available in
PSQL — triggers only
Type
BOOLEAN
Syntax
INSERTING
Example
12.11. LOCALTIME
Current server time in the session time zone, without time zone information
Type
TIME WITHOUT TIME ZONE
Syntax
LOCALTIME [ (<precision>) ]
<precision> ::= 0 | 1 | 2 | 3
Parameter Description
LOCALTIME returns the current server time in the session time zone. The default is 0 decimals, i.e.
seconds precision.
LOCALTIME was introduced in Firebird 3.0.4 and Firebird 2.5.9 as an alias of CURRENT_TIME. In Firebird
5.0, CURRENT_TIME returns a TIME WITH TIME ZONE instead of a TIME [WITHOUT TIME ZONE], while
LOCALTIME returns TIME [WITHOUT TIME ZONE]. It is recommended to use LOCALTIME when you do not
need time zone information.
LOCALTIME has a default precision of 0 decimals, where LOCALTIMESTAMP has a default precision of 3
decimals. As a result, LOCALTIMESTAMP is not the exact sum of CURRENT_DATE and LOCALTIME, unless you
explicitly specify a precision (i.e. LOCALTIME(3) or LOCALTIMESTAMP(0)).
Within a PSQL module (procedure, trigger or executable block), the value of LOCALTIME will remain
constant every time it is read. If multiple modules call or trigger each other, the value will remain
constant throughout the duration of the outermost module. If you need a progressing value in PSQL
(e.g. to measure time intervals), use 'NOW'.
Examples
See also
CURRENT_TIME, LOCALTIMESTAMP
12.12. LOCALTIMESTAMP
Current server time and date in the session time zone, without time zone information
Type
TIMESTAMP WITHOUT TIME ZONE
Syntax
LOCALTIMESTAMP [ (<precision>) ]
<precision> ::= 0 | 1 | 2 | 3
Parameter Description
LOCALTIMESTAMP returns the current server date and time in the session time zone. The default is 3
decimals, i.e. milliseconds precision.
The default precision of LOCALTIME is 0 decimals, so LOCALTIMESTAMP is not the exact sum of
CURRENT_DATE and LOCALTIME, unless you explicitly specify a precision (i.e. LOCALTIME(3) or
LOCALTIMESTAMP(0)).
Within a PSQL module (procedure, trigger or executable block), the value of LOCALTIMESTAMP will
remain constant every time it is read. If multiple modules call or trigger each other, the value will
remain constant throughout the duration of the outermost module. If you need a progressing value
in PSQL (e.g. to measure time intervals), use 'NOW'.
Examples
See also
CURRENT_TIMESTAMP, LOCALTIME
12.13. NEW
Record with the inserted or updated values of a row
Available in
PSQL — triggers only,
DSQL — RETURNING clause of UPDATE, UPDATE OR INSERT and MERGE
Type
Record type
Syntax
NEW.column_name
Parameter Description
NEW contains the new version of a database record that has just been inserted or updated. NEW is
read-only in AFTER triggers.
In multi-action triggers NEW is always available. However, if the trigger is fired by a DELETE, there will
be no new version of the record. In that situation, reading from NEW will always return NULL; writing
to it will cause a runtime exception.
12.14. 'NOW'
Current date and/or time in cast context
Type
CHAR(3), or depends on explicit CAST
'NOW' is not a variable, but a string literal or datetime mnemonic. It is, however, special in the sense
that when you CAST() it to a datetime type, you will get the current date and/or time. If the datetime
type has a time component, the precision is 3 decimals, i.e. milliseconds. 'NOW' is case-insensitive,
and the engine ignores leading or trailing spaces when casting.
'NOW' always returns the actual date/time, even in PSQL modules, where CURRENT_DATE, CURRENT_TIME
and CURRENT_TIMESTAMP return the same value throughout the duration of the outermost routine.
This makes 'NOW' useful for measuring time intervals in triggers, procedures and executable blocks.
Except in the situation mentioned above, reading CURRENT_DATE, CURRENT_TIME and CURRENT_TIMESTAMP
is generally preferable to casting 'NOW'. Be aware though that CURRENT_TIME defaults to seconds
precision; to get milliseconds precision, use CURRENT_TIME(3).
Firebird 3.0 and earlier allowed the use of 'NOW' in datetime literals (a.k.a.
“shorthand casts”); this is no longer allowed since Firebird 4.0.
Examples
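For example, a PSQL sketch measuring elapsed time with 'NOW' (the work between the two casts is a placeholder):

```sql
execute block returns (elapsed_ms bigint)
as
  declare t0 timestamp;
begin
  t0 = cast('NOW' as timestamp);
  -- ... some long-running work here ...
  elapsed_ms = datediff(millisecond from t0 to cast('NOW' as timestamp));
  suspend;
end
```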
12.15. OLD
Record with the initial values of a row before update or delete
Available in
PSQL — triggers only,
DSQL — RETURNING clause of UPDATE, UPDATE OR INSERT and MERGE
Type
Record type
Syntax
OLD.column_name
Parameter Description
OLD contains the existing version of a database record just before a deletion or update. The OLD
record is read-only.
In multi-action triggers OLD is always available. However, if the trigger is fired by an INSERT, there is
obviously no pre-existing version of the record. In that situation, reading from OLD will always
return NULL.
12.16. RESETTING
Indicates if the trigger fired during a session reset
Available in
PSQL — triggers only
Type
BOOLEAN
Syntax
RESETTING
Its value is TRUE if session reset is in progress and FALSE otherwise. Intended for use in ON DISCONNECT
and ON CONNECT database triggers to detect an ALTER SESSION RESET.
12.17. ROW_COUNT
Number of affected rows of the last executed statement
Available in
PSQL
Type
INTEGER
Syntax
ROW_COUNT
The ROW_COUNT context variable contains the number of rows affected by the most recent DML
statement (INSERT, UPDATE, DELETE, SELECT or FETCH) in the current PSQL module.
• In a FOR SELECT loop, ROW_COUNT is incremented with every iteration (starting at 0 before the
first).
• After a FETCH from a cursor, ROW_COUNT is 1 if a data row was retrieved and 0 otherwise. Fetching
more records from the same cursor does not increment ROW_COUNT beyond 1.
Example
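For example, a PSQL sketch (assuming a hypothetical Figures table) that falls back to an insert when the update touched no rows:

```sql
update Figures set Number = 0 where id = :id;
if (row_count = 0) then
  insert into Figures (id, Number) values (:id, 0);
```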
12.18. SQLCODE
SQLCODE of the Firebird error in a WHEN … DO block
Available in
PSQL
Deprecated in
2.5.1
Type
INTEGER
Syntax
SQLCODE
In a “WHEN … DO” error handling block, the SQLCODE context variable contains the numeric value of
the current SQL error code. SQLCODE is non-zero in WHEN … DO blocks if the current error has an SQL
error code. Outside error handlers, SQLCODE is always 0. Outside PSQL, it doesn’t exist at all.
Example
when any
do
begin
if (sqlcode <> 0) then
Msg = 'An SQL error occurred!';
else
Msg = 'Something bad happened!';
exception ex_custom Msg;
end
12.19. SQLSTATE
SQLSTATE code of the Firebird error in a WHEN … DO block
Available in
PSQL
Type
CHAR(5)
Syntax
SQLSTATE
In a “WHEN … DO” error handler, the SQLSTATE context variable contains the 5-character, SQL-
compliant status code of the current error. Outside error handlers, SQLSTATE is always '00000'.
Outside PSQL, it is not available at all.
SQLSTATE is destined to replace SQLCODE. The latter is now deprecated in Firebird and will disappear
in a future version.
Each SQLSTATE code is the concatenation of a 2-character class and a 3-character subclass. Classes 00
(successful completion), 01 (warning) and 02 (no data) represent completion conditions. Every status
code outside these classes is an exception. Because classes 00, 01 and 02 don’t raise an error, they
won’t ever show up in the SQLSTATE variable.
For a complete listing of SQLSTATE codes, consult the SQLSTATE Codes and Message Texts section in
Appendix B, Exception Codes and Messages.
Example
when any
do
begin
Msg = case sqlstate
when '22003' then 'Numeric value out of range.'
when '22012' then 'Division by zero.'
when '23000' then 'Integrity constraint violation.'
else 'Something bad happened! SQLSTATE = ' || sqlstate
end;
exception ex_custom Msg;
end
12.20. 'TODAY'
Current date in cast context
Type
CHAR(5), or depends on explicit CAST
'TODAY' is not a variable, but a string literal or date mnemonic. It is, however, special in the sense
that when you CAST() it to a date/time type, you will get the current date. If the target datetime type
has a time component, it will be set to zero. 'TODAY' is case-insensitive, and the engine ignores
leading or trailing spaces when casting.
'TODAY' always returns the actual date, even in PSQL modules, where CURRENT_DATE, CURRENT_TIME
and CURRENT_TIMESTAMP return the same value throughout the duration of the outermost routine.
This makes 'TODAY' useful for measuring time intervals in triggers, procedures and executable
blocks (at least if your procedures are running for days).
Except in the situation mentioned above, reading CURRENT_DATE, is generally preferable to casting
'TODAY'.
When cast to a TIMESTAMP WITH TIME ZONE, the time reflected will be 00:00:00 in UTC rebased to the
session time zone.
Firebird 3.0 and earlier allowed the use of 'TODAY' in datetime literals (a.k.a.
“shorthand casts”); this is no longer allowed since Firebird 4.0.
Examples
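A minimal illustration:

```sql
select cast('Today' as date) from rdb$database;
```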
12.21. 'TOMORROW'
Tomorrow’s date in cast context
Type
CHAR(8), or depends on explicit CAST
'TOMORROW' is not a variable, but a string literal. It is, however, special in the sense that when you
CAST() it to a date/time type, you will get the date of the next day. See also 'TODAY'.
Examples
12.22. UPDATING
Indicates if the trigger fired for an UPDATE operation
Available in
PSQL — triggers only
Type
BOOLEAN
Syntax
UPDATING
Example
12.23. 'YESTERDAY'
Yesterday’s date in cast context
Type
CHAR(9), or depends on explicit CAST
'YESTERDAY' is not a variable, but a string literal. It is, however, special in the sense that when you
CAST() it to a date/time type, you will get the date of the day before. See also 'TODAY'.
Examples
12.24. USER
Name of the user of the current connection
Type
VARCHAR(63)
Syntax
USER
Example
Chapter 13. Transaction Control
Unless explicitly mentioned otherwise in an “Available in” section, transaction control statements
are available in DSQL. Availability in ESQL is — bar some exceptions — not tracked by this
Language Reference. Transaction control statements are not available in PSQL.
SET TRANSACTION
configures and starts a transaction
COMMIT
signals the end of a unit of work and writes changes permanently to the database
ROLLBACK
undoes the changes performed in the transaction or to a savepoint
SAVEPOINT
marks a position in the log of work done, in case a partial rollback is needed
RELEASE SAVEPOINT
erases a savepoint
Available in
DSQL, ESQL
Syntax
SET TRANSACTION
[NAME tr_name]
[<tr_option> ...]
<tr_option> ::=
READ {ONLY | WRITE}
| [NO] WAIT
| [ISOLATION LEVEL] <isolation_level>
| NO AUTO UNDO
| RESTART REQUESTS
| AUTO COMMIT
| IGNORE LIMBO
| LOCK TIMEOUT seconds
| RESERVING <tables>
| USING <dbhandles>
<isolation_level> ::=
SNAPSHOT [AT NUMBER snapshot_number]
| SNAPSHOT TABLE [STABILITY]
| READ {UNCOMMITTED | COMMITTED} [<read-committed-opt>]
<read-committed-opt> ::=
[NO] RECORD_VERSION | READ CONSISTENCY
Parameter Description
seconds The time in seconds for the statement to wait in case a conflict occurs.
Has to be greater than or equal to 0.
dbhandles The list of databases the transaction can access. Available only in ESQL
dbhandle The handle of the database the transaction can access. Available only in
ESQL
Generally, only client applications start transactions. Exceptions are when the server starts an
autonomous transaction, and transactions for certain background system threads/processes, such
as sweeping.
A client application can start any number of concurrently running transactions. A single connection
can have multiple concurrent active transactions (though not all drivers or access components
support this). A limit does exist, for the total number of transactions in all client applications
working with one particular database from the moment the database was restored from its gbak
backup or from the moment the database was created originally. The limit is 2^48
(281,474,976,710,656).
All clauses in the SET TRANSACTION statement are optional. If the statement starting a transaction has
no clauses specified, the transaction will be started with default values for access mode, lock
resolution mode and isolation level, which are:
SET TRANSACTION
READ WRITE
WAIT
ISOLATION LEVEL SNAPSHOT;
Database drivers or access components may use different defaults for transactions
started through their API. Check their documentation for details.
The server assigns integer numbers to transactions sequentially. Whenever a client starts any
transaction, either explicitly defined or by default, the server sends the transaction ID to the client.
This number can be retrieved in SQL using the context variable CURRENT_TRANSACTION.
Transaction Name
The optional NAME attribute defines the name of a transaction. Use of this attribute is available only
in Embedded SQL (ESQL). In ESQL applications, named transactions make it possible to have
several transactions active simultaneously in one application. If named transactions are used, a
host-language variable with the same name must be declared and initialized for each named
transaction. This is a limitation that prevents dynamic specification of transaction names and thus
rules out transaction naming in DSQL.
Transaction Parameters
• lock resolution mode (WAIT, NO WAIT) with an optional LOCK TIMEOUT specification
The READ UNCOMMITTED isolation level is a synonym for READ COMMITTED, and is provided only for syntax compatibility.
Access Mode
The two database access modes for transactions are READ WRITE and READ ONLY.
• If the access mode is READ WRITE, operations in the context of this transaction can be both read
operations and data update operations. This is the default mode.
• If the access mode is READ ONLY, only SELECT operations can be executed in the context of this
transaction. Any attempt to change data in the context of such a transaction will result in
database exceptions. However, this does not apply to global temporary tables (GTT), which are
allowed to be changed in READ ONLY transactions, see Global Temporary Tables (GTT) in Chapter
5, Data Definition (DDL) Statements for details.
When several client processes work with the same database, locks may occur when one process
makes uncommitted changes in a table row, or deletes a row, and another process tries to update or
delete the same row. Such locks are called update conflicts.
Locks may occur in other situations when multiple transaction isolation levels are used.
WAIT Mode
In the WAIT mode (the default mode), if a conflict occurs between two parallel processes executing
concurrent data updates in the same database, a WAIT transaction will wait till the other transaction
has finished — by committing (COMMIT) or rolling back (ROLLBACK). The client application with the
WAIT transaction will be put on hold until the conflict is resolved.
If a LOCK TIMEOUT is specified for the WAIT transaction, waiting will continue only for the number of
seconds specified in this clause. If the lock is unresolved at the end of the specified interval, the
error message “Lock time-out on wait transaction” is returned to the client.
Lock resolution behaviour can vary a little, depending on the transaction isolation level.
NO WAIT Mode
In the NO WAIT mode, a transaction will immediately throw a database exception if a conflict occurs.
LOCK TIMEOUT is a separate transaction option, but can only be used for WAIT transactions. Specifying
LOCK TIMEOUT with a NO WAIT transaction will raise an error “invalid parameter in transaction
parameter block -Option isc_tpb_lock_timeout is not valid if isc_tpb_nowait was used previously in
TPB”
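For example, a sketch of a transaction that waits at most 5 seconds for conflicting locks:

```sql
set transaction
  read write
  wait lock timeout 5
  isolation level read committed;
```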
Isolation Level
Keeping the work of one database task separated from others is what isolation is about. Changes
made by one statement become visible to all remaining statements executing within the same
transaction, regardless of its isolation level. Changes that are in progress within other transactions
remain invisible to the current transaction as long as they remain uncommitted. The isolation level
and, sometimes, other attributes, determine how transactions will interact when another
transaction wants to commit work.
The ISOLATION LEVEL attribute defines the isolation level for the transaction being started. It is the
most significant transaction parameter for determining its behavior towards other concurrently
running transactions.
• SNAPSHOT
SNAPSHOT isolation level — the default level — allows the transaction to see only those changes that
were committed before it was started. Any committed changes made by concurrent transactions
will not be seen in a SNAPSHOT transaction while it is active. The changes will become visible to a
new transaction once the current transaction is either committed or rolled back, but not if it is only
a rollback to a savepoint.
Autonomous Transactions
Changes made by autonomous transactions are not seen in the context of the
SNAPSHOT transaction that launched it.
Using SNAPSHOT AT NUMBER snapshot_number, a SNAPSHOT transaction can be started sharing the
snapshot of another transaction. With this feature it’s possible to create parallel processes (using
different attachments) reading consistent data from a database. For example, a backup process may
create multiple threads reading data from the database in parallel, or a web service may dispatch
distributed sub-services doing processing in parallel.
Alternatively, this feature can also be used via the API, using Transaction Parameter Buffer item
isc_tpb_at_snapshot_number.
To share a stable view between transactions, the other transaction also needs to
have isolation level SNAPSHOT. With READ COMMITTED, the snapshot number will move
forward.
Example
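A sketch of sharing a snapshot between two attachments (123 stands in for the number actually returned):

```sql
-- Attachment 1: start a snapshot and obtain its number
set transaction snapshot;
select rdb$get_context('SYSTEM', 'SNAPSHOT_NUMBER') from rdb$database;

-- Attachment 2: share the same snapshot (assuming 123 was returned)
set transaction snapshot at number 123;
```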
The SNAPSHOT TABLE STABILITY — or SNAPSHOT TABLE — isolation level is the most restrictive. As in
SNAPSHOT, a transaction in SNAPSHOT TABLE STABILITY isolation sees only those changes that were
committed before the current transaction was started. After a SNAPSHOT TABLE STABILITY is started,
no other transactions can make any changes to any table in the database that has changes pending
for this transaction. Other transactions can read other data, but any attempt at inserting, updating
or deleting by a parallel process will cause conflict exceptions.
The RESERVING clause can be used to allow other transactions to change data in some tables.
If any other transaction has an uncommitted change pending in any (non-SHARED) table listed in the
RESERVING clause, trying to start a SNAPSHOT TABLE STABILITY transaction will result in an indefinite
wait (default or explicit WAIT), or an exception (NO WAIT or after expiration of the LOCK TIMEOUT).
The READ COMMITTED isolation level allows all data changes that other transactions have committed
since it started to be seen immediately by the uncommitted current transaction. Uncommitted
changes are not visible to a READ COMMITTED transaction.
To retrieve the updated list of rows in the table you are interested in — “refresh” — the SELECT
statement needs to be executed again, whilst still in the uncommitted READ COMMITTED transaction.
One of three modifying parameters can be specified for READ COMMITTED transactions, depending on
the kind of conflict resolution desired: READ CONSISTENCY, RECORD_VERSION or NO RECORD_VERSION. When
the ReadConsistency setting is set to 1 in firebird.conf (the default) or in databases.conf, these
variants are effectively ignored and behave as READ CONSISTENCY. Otherwise, these variants are
mutually exclusive.
• With NO RECORD_VERSION specified, a row with an update pending from another transaction
cannot be read until that update is either committed or rolled back:
◦ with NO WAIT specified, a lock conflict error is reported immediately;
◦ with WAIT specified, it will wait until the other transaction is either committed or rolled back.
If the other transaction is rolled back, or if it is committed and its transaction ID is older
than the current transaction’s ID, then the current transaction’s change is allowed. A lock
conflict error is returned if the other transaction was committed and its ID was newer than
that of the current transaction.
• With RECORD_VERSION specified, the transaction reads the latest committed version of the row,
regardless of other pending versions of the row. The lock resolution strategy (WAIT or NO WAIT)
does not affect the behaviour of the transaction at its start in any way.
• With READ CONSISTENCY specified (or ReadConsistency = 1), the execution of a statement obtains a
snapshot of the database to ensure a consistent read at the statement-level of the transactions
committed when execution started.
The other two variants can result in statement-level inconsistent reads as they may read some
but not all changes of a concurrent transaction if that transaction commits during statement
execution. For example, a SELECT COUNT(*) could read some, but not all inserted records of
another transaction if the commit of that transaction occurs while the statement is reading
records.
This statement-level snapshot is obtained for the execution of a top-level statement, nested
statements (triggers, stored procedures and functions, dynamic statements, etc.) use the
statement-level snapshot created for the top-level statement.
When an update conflict occurs, the behaviour of a READ COMMITTED READ CONSISTENCY
transaction is different from READ COMMITTED RECORD_VERSION. The following actions are
performed:
1. Transaction isolation mode is temporarily switched to READ COMMITTED READ CONSISTENCY.
2. A write-lock is taken for the conflicting record.
3. Remaining records of the current UPDATE/DELETE cursor are processed, and they are write-
locked too.
4. Once the cursor is fetched, all modifications performed since the top-level statement was
started are undone, already taken write-locks for every updated/deleted/locked record are
preserved, all inserted records are removed.
5. Execution of the top-level statement is then restarted, using a newly created statement-level
snapshot.
This algorithm ensures that records which were already updated remain locked after the restart,
are visible to the new snapshot, and can be updated again with no further conflicts. Also, due to
the nature of READ CONSISTENCY, the modified record set remains consistent.
• This restart algorithm is applied to UPDATE, DELETE, SELECT WITH LOCK and
MERGE statements, with or without the RETURNING clause, executed directly
by a client application or inside a PSQL object (stored
procedure/function, trigger, EXECUTE BLOCK, etc).
• Any error not handled at step (3) above aborts the restart algorithm and
statement execution continues normally.
• UPDATE/DELETE triggers fire multiple times for the same record if the
statement execution was restarted and the record is updated/deleted
again.
• Code with side effects that are not undone by a restart (autonomous transactions,
external interaction, and so on) requires care: such code could be executed more
than once if update conflicts happen.
NO AUTO UNDO
The NO AUTO UNDO option affects the handling of record versions (garbage) produced by the
transaction in the event of rollback. With NO AUTO UNDO flagged, the ROLLBACK statement marks the
transaction as rolled back without deleting the record versions created in the transaction. They are
left to be mopped up later by garbage collection.
NO AUTO UNDO might be useful when a lot of separate statements are executed that change data in
conditions where the transaction is likely to be committed successfully most of the time.
The NO AUTO UNDO option is ignored for transactions where no changes are made.
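For example (a sketch), a bulk-load transaction that is expected to commit could be started without the transaction-level undo log:

```sql
SET TRANSACTION WAIT READ COMMITTED NO AUTO UNDO;
```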
RESTART REQUESTS
The only documentation of this clause is a comment in the Firebird source file src/jrd/tra.cpp.
The exact semantics and effects of this clause are not clear, and we recommend you do not use it.
AUTO COMMIT
Specifying AUTO COMMIT enables auto-commit mode for the transaction. In auto-commit mode,
Firebird will internally execute the equivalent of COMMIT RETAIN after each statement execution.
This is not a generally useful auto-commit mode; the same transaction context is
retained until the transaction is ended through a commit or rollback. In other
words, when you use SNAPSHOT or SNAPSHOT TABLE STABILITY, this auto-commit will
not change record visibility (effects of transactions that were committed after this
transaction was started will not be visible).
For READ COMMITTED, the same warnings apply as for commit retaining: prolonged
use of a single transaction in auto-commit mode can inhibit garbage collection and
degrade performance.
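For example (a sketch):

```sql
SET TRANSACTION READ COMMITTED AUTO COMMIT;
```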
IGNORE LIMBO
This flag is used to signal that records created by limbo transactions are to be ignored. Transactions
are left “in limbo” if the second stage of a two-phase commit fails.
Historical Note
IGNORE LIMBO surfaces the TPB parameter isc_tpb_ignore_limbo, available in the API
since InterBase times and is mainly used by gfix.
RESERVING
The RESERVING clause in the SET TRANSACTION statement reserves tables specified in the table list.
Reserving a table prevents other transactions from making changes in them or even, with the
inclusion of certain parameters, from reading data from them while this transaction is running.
A RESERVING clause can also be used to specify a list of tables that can be changed by other
transactions, even if the transaction is started with the SNAPSHOT TABLE STABILITY isolation level.
If one of the keywords SHARED or PROTECTED is omitted, SHARED is assumed. If the whole FOR clause is
omitted, FOR SHARED READ is assumed. The names and compatibility of the four access options for
reserving tables are not obvious.
Compatibility of RESERVING access options (whether two concurrent reservations of the same
table are allowed):
                   SHARED READ   SHARED WRITE   PROTECTED READ   PROTECTED WRITE
SHARED READ        Yes           Yes            Yes              Yes
SHARED WRITE       Yes           Yes            No               No
PROTECTED READ     Yes           No             Yes              No
PROTECTED WRITE    Yes           No             No               No
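For example (table names are illustrative), a SNAPSHOT TABLE STABILITY transaction can leave one table open to concurrent writes while protecting another:

```sql
SET TRANSACTION SNAPSHOT TABLE STABILITY
  RESERVING CUSTOMERS FOR SHARED WRITE,
            ORDERS FOR PROTECTED READ;
```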
The combinations of these RESERVING clause flags for concurrent access depend on the isolation
levels of the concurrent transactions:
• SNAPSHOT isolation
◦ Concurrent SNAPSHOT transactions with SHARED READ do not affect one another’s access
◦ A concurrent mix of SNAPSHOT and READ COMMITTED transactions with SHARED WRITE do not
affect one another’s access, but they block transactions with SNAPSHOT TABLE STABILITY
isolation from either reading from or writing to the specified table(s)
◦ Concurrent transactions with any isolation level and PROTECTED READ can only read data from
the reserved tables. Any attempt to write to them will cause an exception
◦ With PROTECTED WRITE, concurrent transactions with SNAPSHOT and READ COMMITTED isolation
cannot write to the specified tables. Transactions with SNAPSHOT TABLE STABILITY isolation
cannot read from or write to the reserved tables at all.
• READ COMMITTED isolation
◦ All concurrent transactions with SHARED READ, regardless of their isolation levels, can read
from or write (if in READ WRITE mode) to the reserved tables
◦ Concurrent transactions with SNAPSHOT and READ COMMITTED isolation levels and SHARED WRITE
can read data from and write (if in READ WRITE mode) to the specified tables but concurrent
access to those tables from transactions with SNAPSHOT TABLE STABILITY is blocked whilst
these transactions are active
◦ Concurrent transactions with any isolation level and PROTECTED READ can only read from the
reserved tables
◦ With PROTECTED WRITE, concurrent SNAPSHOT and READ COMMITTED transactions can read from
but not write to the reserved tables. Access by transactions with the SNAPSHOT TABLE
STABILITY isolation level is blocked.
• SNAPSHOT TABLE STABILITY isolation
◦ With SHARED READ, all concurrent transactions with any isolation level can both read from
and write (if in READ WRITE mode) to the reserved tables
◦ SHARED WRITE allows all transactions in SNAPSHOT and READ COMMITTED isolation to read from
and write (if in READ WRITE mode) to the specified tables and blocks access from transactions
with SNAPSHOT TABLE STABILITY isolation
◦ With PROTECTED READ, concurrent transactions with any isolation level can only read from the
reserved tables
◦ With PROTECTED WRITE, concurrent transactions in SNAPSHOT and READ COMMITTED isolation can
read from but not write to the specified tables. Access from transactions in SNAPSHOT TABLE
STABILITY isolation is blocked.
In Embedded SQL, the USING clause can be used to conserve system resources by limiting the
number of databases a transaction can access. USING is mutually exclusive with RESERVING. A USING
clause in SET TRANSACTION syntax is not supported in DSQL.
See also
COMMIT, ROLLBACK
13.1.2. COMMIT
Commits a transaction
Available in
DSQL, ESQL
Syntax
COMMIT [TRANSACTION tr_name] [WORK] [RETAIN [SNAPSHOT] | RELEASE]
Parameter     Description
tr_name       Transaction name; available only in ESQL
The COMMIT statement commits all work carried out in the context of this transaction (inserts,
updates, deletes, selects, execution of procedures). New record versions become available to other
transactions and, unless the RETAIN clause is employed, all server resources allocated to its work are
released.
If any conflicts or other errors occur in the database during the process of committing the
transaction, the transaction is not committed, and the reasons are passed back to the user
application for handling, and the opportunity to attempt another commit or to roll the transaction
back.
COMMIT Options
• The optional TRANSACTION tr_name clause, available only in Embedded SQL, specifies the name of
the transaction to be committed. With no TRANSACTION clause, COMMIT is applied to the default
transaction.
In ESQL applications, named transactions make it possible to have several transactions active
simultaneously in one application. If named transactions are used, a host-language variable
with the same name must be declared and initialized for each named transaction. This is a
limitation that prevents dynamic specification of transaction names and thus, rules out
transaction naming in DSQL.
• The keyword RELEASE is available only in Embedded SQL and enables disconnection from all
databases after the transaction is committed. RELEASE is retained in Firebird only for
compatibility with legacy versions of InterBase. It has been superseded in ESQL by the
DISCONNECT statement.
• The RETAIN [SNAPSHOT] clause is used for the “soft” commit, variously referred to amongst host
languages and their practitioners as COMMIT WITH RETAIN, “CommitRetaining”, “warm commit”, et
al. The transaction is committed, but some server resources are retained and a new transaction
is restarted transparently with the same Transaction ID. The state of row caches and cursors
remains as it was before the soft commit.
For soft-committed transactions whose isolation level is SNAPSHOT or SNAPSHOT TABLE STABILITY,
the view of database state does not update to reflect changes by other transactions, and the user
of the application instance continues to have the same view as when the original transaction
started. Changes made during the life of the retained transaction are visible to that transaction,
of course.
See also
SET TRANSACTION, ROLLBACK
13.1.3. ROLLBACK
Rolls back a transaction or to a savepoint
Available in
DSQL, ESQL
Syntax
ROLLBACK [TRANSACTION tr_name] [WORK] [RETAIN [SNAPSHOT] | RELEASE]

ROLLBACK [WORK] TO [SAVEPOINT] sp_name
Parameter     Description
tr_name       Transaction name; available only in ESQL
sp_name       Savepoint name; available only in DSQL
The ROLLBACK statement rolls back all work carried out in the context of this transaction (inserts,
updates, deletes, selects, execution of procedures). ROLLBACK never fails and, thus, never causes
exceptions. Unless the RETAIN clause is employed, all server resources allocated to the work of the
transaction are released.
The TRANSACTION and RELEASE clauses are only valid in ESQL. The ROLLBACK TO SAVEPOINT statement is
not available in ESQL.
ROLLBACK Options
• The optional TRANSACTION tr_name clause, available only in Embedded SQL, specifies the name of
the transaction to be rolled back. With no TRANSACTION clause, ROLLBACK is applied to the default
transaction.
In ESQL applications, named transactions make it possible to have several transactions active
simultaneously in one application. If named transactions are used, a host-language variable
with the same name must be declared and initialized for each named transaction. This is a
limitation that prevents dynamic specification of transaction names and thus, rules out
transaction naming in DSQL.
• The RETAIN keyword specifies that, although all work of the transaction is to be rolled
back, the transaction context is to be retained. Some server resources are retained, and the
transaction is restarted transparently with the same Transaction ID. The state of row caches and
cursors is kept as it was before the “soft” rollback.
For transactions whose isolation level is SNAPSHOT or SNAPSHOT TABLE STABILITY, the view of
database state is not updated by the soft rollback to reflect changes by other transactions. The
user of the application instance continues to have the same view as when the transaction
started originally. Changes that were made and soft-committed during the life of the retained
transaction are visible to that transaction, of course.
See also
SET TRANSACTION, COMMIT
ROLLBACK TO SAVEPOINT
The ROLLBACK TO SAVEPOINT statement specifies the name of a savepoint to which changes are to be
rolled back. The effect is to roll back all changes made within the transaction, from the specified
savepoint forward until the point when ROLLBACK TO SAVEPOINT is requested.
• Any database mutations performed since the savepoint was created are undone. User variables
set with RDB$SET_CONTEXT() remain unchanged.
• Any savepoints that were created after the one named are destroyed. Savepoints earlier than
the one named are preserved, along with the named savepoint itself. Repeated rollbacks to the
same savepoint are thus allowed.
• All implicit and explicit record locks that were acquired since the savepoint are released. Other
transactions that have requested access to rows locked after the savepoint are not notified and
will continue to wait until the transaction is committed or rolled back. Other transactions that
have not already requested the rows can request and access the unlocked rows immediately.
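A sketch of savepoint behaviour (the table name is illustrative):

```sql
CREATE TABLE TEST (ID INTEGER);
COMMIT;
INSERT INTO TEST VALUES (1);
COMMIT;
INSERT INTO TEST VALUES (2);
SAVEPOINT Y;
DELETE FROM TEST;
SELECT * FROM TEST;        -- returns no rows
ROLLBACK TO SAVEPOINT Y;
SELECT * FROM TEST;        -- returns rows 1 and 2
ROLLBACK;
SELECT * FROM TEST;        -- returns only the committed row 1
```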
See also
SAVEPOINT, RELEASE SAVEPOINT
13.1.4. SAVEPOINT
Creates a savepoint
Syntax
SAVEPOINT sp_name
Parameter     Description
sp_name       Savepoint name; the rules for Firebird identifiers apply
The SAVEPOINT statement creates an SQL-compliant savepoint that acts as a marker in the “stack” of
data activities within a transaction. Subsequently, the tasks performed in the “stack” can be undone
back to this savepoint, leaving the earlier work and older savepoints untouched. Savepoints are
sometimes called “nested transactions”.
If a savepoint already exists with the same name as the name supplied for the new one, the existing
savepoint is released, and a new one is created using the supplied name.
To roll changes back to the savepoint, the statement ROLLBACK TO SAVEPOINT is used.
Memory Considerations
The internal mechanism beneath savepoints can consume large amounts of
memory, especially if the same rows receive multiple updates in one transaction.
When a savepoint is no longer needed, but the transaction still has work to do, a
RELEASE SAVEPOINT statement will erase it and thus free the resources.
See also
ROLLBACK TO SAVEPOINT, RELEASE SAVEPOINT
13.1.5. RELEASE SAVEPOINT
Releases a savepoint
Syntax
RELEASE SAVEPOINT sp_name [ONLY]
Parameter     Description
sp_name       Savepoint name; the rules for Firebird identifiers apply
The statement RELEASE SAVEPOINT erases a named savepoint, freeing up all the resources it
encompasses. By default, all the savepoints created after the named savepoint are released as well.
The qualifier ONLY directs the engine to release only the named savepoint.
See also
SAVEPOINT
By default, the engine uses an automatic transaction-level system savepoint to perform transaction
rollback. When a ROLLBACK statement is issued, all changes performed in this transaction are backed
out via a transaction-level savepoint, and the transaction is then committed. This logic reduces the
amount of garbage collection caused by rolled back transactions.
When the volume of changes performed under a transaction-level savepoint grows large (~50,000
records affected), the engine releases the transaction-level savepoint and uses the TIP
(Transaction Inventory Page) mechanism to roll back the transaction if needed.
If you expect the volume of changes in your transaction to be large, you can
specify the NO AUTO UNDO option in your SET TRANSACTION statement to block the
creation of the transaction-level savepoint. Using the API, you can set this with the
TPB flag isc_tpb_no_auto_undo.
Transaction control statements are not allowed in PSQL, as that would break the atomicity of the
statement that calls the procedure. However, Firebird does support the raising and handling of
exceptions in PSQL, so that actions performed in stored procedures and triggers can be selectively
undone without the entire procedure failing.
When execution terminates prematurely because of an uncaught error or exception, the engine
undoes all actions performed by the procedure or trigger or, in a selectable procedure, all
actions performed since the last SUSPEND.
Each PSQL exception handling block is also bounded by automatic system savepoints. A BEGIN…END
block does not itself create an automatic savepoint. A savepoint is created only in blocks that
contain a WHEN statement for handling exceptions.
Chapter 14. Security
There is also a fourth level of data security: wire protocol encryption, which
encrypts data in transit between client and server. Wire protocol encryption is out
of scope for this Language Reference.
The information about users authorised to access a specific Firebird server is stored in a special
security database named security5.fdb. Each record in security5.fdb is a user account for one user.
For each database, the security database can be overridden in the databases.conf file (parameter
SecurityDatabase). Any database can act as a security database, including for itself.
A username, with a maximum length of 63 characters, is an identifier, following the normal rules
for identifiers (unquoted case-insensitive, double-quoted case-sensitive). For backwards
compatibility, some statements (e.g. isql’s CONNECT) accept usernames enclosed in single quotes,
which will behave as normal, unquoted identifiers.
The maximum password length depends on the user manager plugin (parameter UserManager, in
firebird.conf or databases.conf). Passwords are case-sensitive. The default user manager is the first
plugin in the UserManager list, but this can be overridden in the SQL user management statements.
For the Srp plugin, the maximum password length is 255 characters, for an effective length of 20
bytes (see also Why is the effective password length of SRP 20 bytes?). For the Legacy_UserManager
plugin only the first eight bytes of a password are significant; whilst it is valid to enter a password
longer than eight bytes for Legacy_UserManager, any subsequent characters are ignored.
The SRP plugin does not actually have a 20 byte limit on password length, and longer
passwords can be used (with an implementation limit of 255 characters). Hashes of different
passwords longer than 20 bytes are also — usually — different. This effective limit comes
from the limited hash length in SHA1 (used inside Firebird’s SRP implementation), 20 bytes or
160 bits, and the “pigeonhole principle”. Sooner or later, there will be a shorter (or longer)
password that has the same hash (e.g. in a brute force attack). That is why the effective
password length of the Srp plugin is often said to be 20 bytes.
The embedded version of the server does not use authentication; for embedded, the filesystem
permissions to open the database file are used as authorization to access the database. However,
the username, and — if necessary — the role, must be specified in the connection parameters, as
they control access to database objects.
SYSDBA or the owner of the database has unrestricted access to all objects of the database. Users
with the RDB$ADMIN role have similar unrestricted access if they specify that role when connecting or
with SET ROLE.
In Firebird, the SYSDBA account is a “superuser” that exists beyond any security restrictions. It has
complete access to all objects in all regular databases on the server, and full read/write access to the
accounts in the security database security5.fdb. No user has remote access to the metadata of the
security database.
For Srp, the SYSDBA account does not exist by default; it will need to be created using an embedded
connection. For Legacy_Auth, the default SYSDBA password on Windows and macOS is
“masterkey” — or “masterke”, to be exact, because of the 8-character length limit.
Other users can acquire elevated privileges in several ways, some of which depend on the
operating system platform. These are discussed in the sections that follow and are summarised in
Administrators and Fine-grained System Privileges.
POSIX Hosts
On POSIX systems, including macOS, the POSIX username will be used as the Firebird Embedded
username if username is not explicitly specified.
On POSIX hosts, other than macOS, the SYSDBA user does not have a default password. If the full
installation is done using the standard scripts, a one-off password will be created and stored in a
text file in the same directory as security5.fdb, commonly /opt/firebird/. The name of the
password file is SYSDBA.password.
The root user can act directly as SYSDBA on Firebird Embedded. Firebird will treat root as though
it were SYSDBA, and it provides access to all databases on the server.
Windows Hosts
On the Windows Server operating systems, operating system accounts can be used. Windows
authentication (also known as “trusted authentication”) can be enabled by including the Win_Sspi
plugin in the AuthServer list in firebird.conf. The plugin must also be present in the AuthClient
setting at the client-side.
Windows operating system administrators are not automatically granted SYSDBA privileges when
connecting to a database. To make that happen, the internally-created role RDB$ADMIN must be
altered by SYSDBA or the database owner, to enable it. For details, refer to the later section entitled
AUTO ADMIN MAPPING.
Prior to Firebird 3.0, with trusted authentication enabled, users who passed the
default checks were automatically mapped to CURRENT_USER. In Firebird 3.0 and
later, the mapping must be done explicitly using CREATE MAPPING.
The “owner” of a database is either the user who was CURRENT_USER at the time of creation (or
restore) of the database or, if the USER parameter was supplied in the CREATE DATABASE statement, the
specified user.
“Owner” is not a username. The user who is the owner of a database has full administrator
privileges with respect to that database, including the right to drop it, to restore it from a backup
and to enable or disable the AUTO ADMIN MAPPING capability.
A user with the USER_MANAGEMENT system privilege in the security database can create, alter and drop
users. To receive the USER_MANAGEMENT privilege, the security database must have a role with that
privilege:
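A sketch, using an illustrative role and user name:

```sql
-- Create a role carrying the USER_MANAGEMENT system privilege
CREATE ROLE MANAGE_USERS
  SET SYSTEM PRIVILEGES TO USER_MANAGEMENT;

-- Option 1: grant it as a default role, so it is always active
GRANT DEFAULT MANAGE_USERS TO USER ALICE;
```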
There are two options for the user to exercise these privileges:
1. Grant the role as a default role. The user will always be able to create, alter or drop users.
2. Grant the role as a normal role. The user will only be able to create, alter or drop users when
the role is specified explicitly on login or using SET ROLE.
If the security database is a different database than the user connects to — which is usually the
case when using security5.fdb — then a role with the same name must also exist and be granted
to the user in that database for the user to be able to activate the role. The role in the other
database does not need any system privileges or other privileges.
The USER_MANAGEMENT system privilege does not allow a user to grant or revoke the admin role. This
requires the RDB$ADMIN role.
The internally-created role RDB$ADMIN is present in all databases. Assigning the RDB$ADMIN role to a
regular user in a database grants that user the privileges of the SYSDBA, in that database only.
The elevated privileges take effect when the user is logged in to that regular database under the
RDB$ADMIN role, and gives full control over all objects in that database.
Being granted the RDB$ADMIN role in the security database confers the authority to create, alter and
drop user accounts.
In both cases, the user with the elevated privileges can assign RDB$ADMIN role to any other user. In
other words, specifying WITH ADMIN OPTION is unnecessary because it is built into the role.
Since nobody — not even SYSDBA — can connect to the security database remotely, the GRANT and
REVOKE statements are of no use for this task. Instead, the RDB$ADMIN role is granted and revoked
using the SQL statements for user management:
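For example (the usernames are illustrative):

```sql
-- Create a new account with the RDB$ADMIN role in the security database
CREATE USER new_admin PASSWORD 'secret123' GRANT ADMIN ROLE;

-- Grant or revoke it for an existing account
ALTER USER existing_user GRANT ADMIN ROLE;
ALTER USER existing_user REVOKE ADMIN ROLE;
```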
Parameter Description
See also
CREATE USER, ALTER USER, GRANT, REVOKE
With Firebird 3.0, gsec was deprecated. It is recommended to use the SQL user
management statements instead.
An alternative is to use gsec with the -admin parameter to store the RDB$ADMIN attribute on the user’s
record:
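For example (the username is illustrative):

```
gsec -modify alex -admin yes
```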
Depending on the administrative status of the current user, more parameters may be needed when
invoking gsec, e.g. -user and -pass, -role, or -trusted.
To manage user accounts through SQL, the user must have the RDB$ADMIN role in the security
database. No user can connect to the security database remotely, so the solution is that the user
connects to a regular database. From there, they can submit any SQL user management command.
Contrary to Firebird 3.0 or earlier, the user does not need to specify the RDB$ADMIN role on connect,
nor do they need to have the RDB$ADMIN role in the database used to connect.
To perform user management with gsec, the user must provide the extra switch -role rdb$admin.
In a regular database, the RDB$ADMIN role is granted and revoked with the usual syntax for granting
and revoking roles:
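For example (the username is illustrative):

```sql
GRANT RDB$ADMIN TO ALEX;
REVOKE RDB$ADMIN FROM ALEX;
```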
Parameter Description
To grant and revoke the RDB$ADMIN role, the grantor must be logged in as an administrator.
See also
GRANT, REVOKE
To exercise RDB$ADMIN privileges, the grantee must either have been granted the role as a default
role, include the role in the connection attributes when connecting to the database, or specify it
later using SET ROLE.
Windows Administrators are not automatically granted RDB$ADMIN privileges when connecting to a
database (when Win_Sspi is enabled). The AUTO ADMIN MAPPING switch determines whether
Administrators have automatic RDB$ADMIN rights, on a database-by-database basis. By default, when
a database is created, it is disabled.
If AUTO ADMIN MAPPING is enabled in the database, it will take effect whenever a Windows
Administrator connects:
After a successful “auto admin” connection, the current role is set to RDB$ADMIN.
If an explicit role was specified on connect, the RDB$ADMIN role can be assumed later in the session
using SET TRUSTED ROLE.
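AUTO ADMIN MAPPING is switched on or off in a regular database with ALTER ROLE:

```sql
ALTER ROLE RDB$ADMIN SET AUTO ADMIN MAPPING;   -- enable
ALTER ROLE RDB$ADMIN DROP AUTO ADMIN MAPPING;  -- disable
```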
Either statement must be issued by a user with sufficient rights, that is:
• An administrator
• The owner of the database
In a regular database, the status of AUTO ADMIN MAPPING is checked only at connect time. If an
Administrator has the RDB$ADMIN role because auto-mapping was on when they logged in, they will
keep that role for the duration of the session, even if they or someone else turns off the mapping in
the meantime.
Likewise, switching on AUTO ADMIN MAPPING will not change the current role to RDB$ADMIN for
Administrators who were already connected.
The ALTER ROLE RDB$ADMIN statement cannot enable or disable AUTO ADMIN MAPPING in the security
database. However, you can create a global mapping for the predefined group
DOMAIN_ANY_RID_ADMINS to the role RDB$ADMIN in the following way:
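A sketch of such a mapping (the mapping name WIN_ADMINS is illustrative):

```sql
CREATE GLOBAL MAPPING WIN_ADMINS
  USING PLUGIN WIN_SSPI
  FROM PREDEFINED_GROUP DOMAIN_ANY_RID_ADMINS
  TO ROLE RDB$ADMIN;
```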
Only SYSDBA can enable AUTO ADMIN MAPPING if it is disabled, but any administrator can turn it off.
When turning off AUTO ADMIN MAPPING in gsec, the user turns off the mechanism itself which gave
them access, and thus they would not be able to re-enable AUTO ADMIN MAPPING. Even in an
interactive gsec session, the new flag setting takes effect immediately.
14.1.3. Administrators
An administrator is a user that has sufficient rights to read, write to, create, alter or delete any
object in a database to which that user’s administrator status applies. The table summarises how
“superuser” privileges are enabled in the various Firebird security contexts.
root user on POSIX
  RDB$ADMIN role: automatic
  Exactly like SYSDBA. Firebird Embedded only.

Windows Administrator
  RDB$ADMIN role: set as CURRENT_ROLE if login succeeds
  Exactly like SYSDBA if the following are all true:
  • In the firebird.conf file, AuthServer includes Win_Sspi, and Win_Sspi is present in the
    client-side plugins (AuthClient) configuration
  • AUTO ADMIN MAPPING is enabled in the database

Database owner
  RDB$ADMIN role: automatic
  Like SYSDBA, but only in the databases they own

Regular user
  RDB$ADMIN role: must be previously granted; must be supplied at login or have been granted
  as a default role
  Like SYSDBA, but only in the databases where the role is granted

POSIX OS user
  RDB$ADMIN role: must be previously granted; must be supplied at login or have been granted
  as a default role
  Like SYSDBA, but only in the databases where the role is granted. Firebird Embedded only.

Windows user
  RDB$ADMIN role: must be previously granted; must be supplied at login or have been granted
  as a default role
  Like SYSDBA, but only in the databases where the role is granted. Only available if, in the
  firebird.conf file, AuthServer includes Win_Sspi, and Win_Sspi is present in the client-side
  plugins (AuthClient) configuration.
In addition to granting users full administrative privileges, system privileges make it possible to
grant regular users a subset of administrative privileges that have historically been limited to
SYSDBA and administrators only. For example:
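A sketch, using an illustrative role and user name, and the two system privileges discussed later in this section:

```sql
-- Let a non-administrator run gstat against the database
CREATE ROLE GSTAT_ROLE
  SET SYSTEM PRIVILEGES TO USE_GSTAT_UTILITY, IGNORE_DB_TRIGGERS;
GRANT DEFAULT GSTAT_ROLE TO USER ALICE;
```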
The implementation defines a set of system privileges, analogous to object privileges, from which
lists of privileged tasks can be assigned to roles.
It is also possible to grant normal privileges to a system privilege, making the system privilege act
like a special role type.
The system privileges are assigned through CREATE ROLE and ALTER ROLE.
Be aware that each system privilege provides a very thin level of control. For some
tasks it may be necessary to give the user more than one privilege to perform some
task. For example, add IGNORE_DB_TRIGGERS to USE_GSTAT_UTILITY because gstat
needs to ignore database triggers.
The following table lists the names of the valid system privileges that can be granted to and revoked
from roles.
• SYSDBA
• When the AUTO ADMIN MAPPING flag is enabled in the security database (security5.fdb or the
security database configured for the current database in the databases.conf), any Windows
Administrator — assuming Win_Sspi was used to connect without specifying roles.
• Any user with the system privilege USER_MANAGEMENT in the security database
Non-privileged users can use only the ALTER USER statement, and then only to modify some data of
their own account.
Available in
DSQL
Syntax
CREATE USER username PASSWORD 'password'
  [<user_option> [<user_option> ...]]
  [TAGS (<user_var> [, <user_var> ...])]
<user_option> ::=
PASSWORD 'password'
| FIRSTNAME 'firstname'
| MIDDLENAME 'middlename'
| LASTNAME 'lastname'
| {GRANT | REVOKE} ADMIN ROLE
| {ACTIVE | INACTIVE}
| USING PLUGIN plugin_name
<user_var> ::=
tag_name = 'tag_value'
| DROP tag_name
Parameter Description
username Username. The maximum length is 63 characters, following the rules for
Firebird identifiers.
password User password. Valid or effective password length depends on the user
manager plugin. Case-sensitive.
Parameter Description
tag_value Value of the custom attribute. The maximum length is 255 characters.
If the user already exists in the Firebird security database for the specified user manager plugin, an
error is raised. It is possible to create multiple users with the same name: one per user manager
plugin.
The username argument must follow the rules for Firebird regular identifiers: see Identifiers in the
Structure chapter. Usernames are case-sensitive when double-quoted (in other words, they follow
the same rules as other delimited identifiers).
Usernames follow the general rules and syntax of identifiers. Thus, a user named
"Alex" is distinct from a user named "ALEX".
The PASSWORD clause specifies the user’s password, and is required. The valid or effective password
length depends on the user manager plugin, see also User Authentication.
The optional FIRSTNAME, MIDDLENAME and LASTNAME clauses can be used to specify additional user
properties, such as the person’s first name, middle name and last name, respectively. These are
VARCHAR(32) fields and can be used to store anything you prefer.
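For illustration, a minimal CREATE USER statement with these optional properties might look like this (the username and values are hypothetical):

CREATE USER john PASSWORD 'fYe_3Ksw'
FIRSTNAME 'John'
LASTNAME 'Doe';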
If the GRANT ADMIN ROLE clause is specified, the new user account is created with the privileges of the
RDB$ADMIN role in the security database (security5.fdb or database-specific). It allows the new user
to manage user accounts from any regular database they log into, but it does not grant the user any
special privileges on objects in those databases.
The REVOKE ADMIN ROLE clause is syntactically valid in a CREATE USER statement, but has no effect. It is
not possible to specify GRANT ADMIN ROLE and REVOKE ADMIN ROLE in one statement.
The ACTIVE clause specifies the user is active and can log in; this is the default.
The INACTIVE clause specifies the user is inactive and cannot log in. It is not possible to specify
ACTIVE and INACTIVE in one statement. The ACTIVE/INACTIVE option is not supported by the
Legacy_UserManager and will be ignored.
The USING PLUGIN clause explicitly specifies the user manager plugin to use for creating the user.
Only plugins listed in the UserManager configuration for this database (firebird.conf, or overridden
in databases.conf) are valid. The default user manager (first in the UserManager configuration) is
applied when this clause is not specified.
Users of the same name created using different user manager plugins are different
objects. Therefore, the user created with one user manager plugin can only be
altered or dropped by that same plugin.
From the perspective of ownership, and privileges and roles granted in a database,
different user objects with the same name are considered one and the same user.
The TAGS clause can be used to specify additional user attributes. Custom attributes are not
supported (silently ignored) by the Legacy_UserManager. Custom attribute names follow the rules of
Firebird identifiers, but are handled case-insensitively (for example, specifying both "A BC" and "a
bc" will raise an error). The value of a custom attribute can be a string of at most 255 characters.
The DROP tag_name option is syntactically valid in CREATE USER, but behaves as if the property is not
specified.
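For illustration, creating a user with custom attributes might look like this (the username, tag names and values are hypothetical):

CREATE USER john PASSWORD 'fYe_3Ksw'
TAGS (BIRTHYEAR = '1970', CITY = 'Prague');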
Users can view and alter their own custom attributes. Do not use this for sensitive
or security related information.
CREATE/ALTER/DROP USER are DDL statements, and only take effect at commit.
Remember to COMMIT your work. In isql, the command SET AUTO ON will enable
autocommit on DDL statements. In third-party tools and other user applications,
this may not be the case.
• the USER_MANAGEMENT system privilege in the security database. Users with the USER_MANAGEMENT
system privilege cannot grant or revoke the admin role.
See also
ALTER USER, CREATE OR ALTER USER, DROP USER
Available in
DSQL
Syntax
<user_option> ::=
PASSWORD 'password'
| FIRSTNAME 'firstname'
| MIDDLENAME 'middlename'
| LASTNAME 'lastname'
| {GRANT | REVOKE} ADMIN ROLE
| {ACTIVE | INACTIVE}
| USING PLUGIN plugin_name
<user_var> ::=
tag_name = 'tag_value'
| DROP tag_name
Any user can alter their own account, except that only an administrator may use GRANT/REVOKE
ADMIN ROLE and ACTIVE/INACTIVE.
All clauses are optional, but at least one other than USING PLUGIN must be present:
• The PASSWORD parameter is for changing the password for the user
• FIRSTNAME, MIDDLENAME and LASTNAME update these optional user properties, such as the person’s
first name, middle name and last name respectively
• GRANT ADMIN ROLE grants the user the privileges of the RDB$ADMIN role in the security database
(security5.fdb), enabling them to manage the accounts of other users. It does not grant the user
any special privileges in regular databases.
• REVOKE ADMIN ROLE removes the user’s administrator privileges in the security database; once the
transaction is committed, that user can no longer alter any user account except their
own
• INACTIVE will disable an account (not supported for Legacy_UserManager). This is convenient to
temporarily disable an account without deleting it.
• TAGS can be used to add, update or remove (DROP) additional custom attributes (not supported for
Legacy_UserManager). Attributes not listed will not be changed.
If you need to change your own account, then instead of specifying the name of the current user,
you can use the CURRENT USER clause.
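For example, a user changing their own password (the new password here is a placeholder):

ALTER CURRENT USER SET PASSWORD 'SomethingLongEnough';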
The ALTER CURRENT USER statement follows the normal rules for selecting the user
manager plugin. If the current user was created with a non-default user manager
plugin, they will need to explicitly specify the user manager plugin with USING
PLUGIN plugin_name, or they will receive an error that the user is not found. Or, if a
user with the same name exists for the default user manager, they will alter that
user instead.
Remember to commit your work if you are working in an application that does not
auto-commit DDL.
To modify the account of another user, the current user must have
• the USER_MANAGEMENT system privilege in the security database. Users with the USER_MANAGEMENT
system privilege cannot grant or revoke the admin role.
Anyone can modify their own account, except for the GRANT/REVOKE ADMIN ROLE and ACTIVE/INACTIVE
options, which require administrative privileges to change.
1. Changing the password for the user bobby and granting them user management privileges:
2. Editing the optional properties (the first and last names) of the user dan:
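These two examples might be written as follows (the password and name values are illustrative):

ALTER USER bobby PASSWORD '67-UiT_G8'
GRANT ADMIN ROLE;

ALTER USER dan
FIRSTNAME 'Dan'
LASTNAME 'Kennedy';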
See also
CREATE USER, DROP USER
Creates a Firebird user account if it doesn’t exist, or alters a Firebird user account
Available in
DSQL
Syntax
<user_option> ::=
PASSWORD 'password'
| FIRSTNAME 'firstname'
| MIDDLENAME 'middlename'
| LASTNAME 'lastname'
| {GRANT | REVOKE} ADMIN ROLE
| {ACTIVE | INACTIVE}
| USING PLUGIN plugin_name
<user_var> ::=
tag_name = 'tag_value'
| DROP tag_name
See CREATE USER and ALTER USER for details on the statement parameters.
If the user does not exist, it will be created as if executing a CREATE USER statement. If the user
already exists, it will be modified as if executing an ALTER USER statement. The CREATE OR ALTER USER
statement must contain at least one of the optional clauses other than USING PLUGIN. If the user does
not exist yet, the PASSWORD clause is required.
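A sketch (the username and password are hypothetical):

CREATE OR ALTER USER john PASSWORD 'fYe_3Ksw'
FIRSTNAME 'John'
INACTIVE;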
Remember to commit your work if you are working in an application that does not
auto-commit DDL.
See also
CREATE USER, ALTER USER, DROP USER
Available in
DSQL
Syntax
Parameter Description
username Username
The optional USING PLUGIN clause explicitly specifies the user manager plugin to use for dropping
the user. Only plugins listed in the UserManager configuration for this database (firebird.conf, or
overridden in databases.conf) are valid. The default user manager (first in the UserManager
configuration) is applied when this clause is not specified.
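For example, dropping a user bobby created with the Srp user manager plugin (assuming Srp is listed in the UserManager configuration):

DROP USER bobby USING PLUGIN Srp;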
Users of the same name created using different user manager plugins are different
objects. Therefore, the user created with one user manager plugin can only be
altered or dropped by that same plugin.
Remember to commit your work if you are working in an application that does not
auto-commit DDL.
See also
CREATE USER, ALTER USER
A privilege comprises a DML access type (SELECT, INSERT, UPDATE, DELETE, EXECUTE and REFERENCES), the
name of a database object (table, view, procedure, role) and the name of the grantee (user,
procedure, trigger, role). Various means are available to grant multiple types of access on an object
to multiple users in a single GRANT statement. Privileges may be revoked from a user with REVOKE
statements.
DDL privileges, an additional type of privilege, provide the right to create, alter or drop specific types
of metadata objects. System privileges provide a subset of administrator permissions to a role (and
indirectly, to a user).
Privileges are stored in the database to which they apply and are not applicable to any other
database, except the DATABASE DDL privileges, which are stored in the security database.
The user who created a database object becomes its owner. Only the owner of an object and users
with administrator privileges in the database, including the database owner, can alter or drop the
database object.
Administrators, the database owner or the object owner can grant privileges to and revoke them
from other users, including privileges to grant privileges to other users. The process of granting and
revoking SQL privileges is implemented with two statements, GRANT and REVOKE.
14.4. ROLE
A role is a database object that packages a set of privileges. Roles implement the concept of access
control at a group level. Multiple privileges are granted to the role and then that role can be
granted to or revoked from one or many users, or even other roles.
A role that has been granted as a “default” role will be activated automatically. Otherwise, a user
must supply that role in their login credentials — or with SET ROLE — to exercise the associated
privileges. Any other privileges granted to the user directly are not affected by their login with the
role.
Logging in with multiple explicit roles simultaneously is not supported, but a user can have
multiple default roles active at the same time.
In this section the tasks of creating and dropping roles are discussed.
Creates a role
Available in
DSQL, ESQL
Syntax
<sys_privileges> ::=
<sys_privilege> [, <sys_privilege> ...]
<sys_privilege> ::=
USER_MANAGEMENT | READ_RAW_PAGES
| CREATE_USER_TYPES | USE_NBACKUP_UTILITY
| CHANGE_SHUTDOWN_MODE | TRACE_ANY_ATTACHMENT
| MONITOR_ANY_ATTACHMENT | ACCESS_SHUTDOWN_DATABASE
| CREATE_DATABASE | DROP_DATABASE
| USE_GBAK_UTILITY | USE_GSTAT_UTILITY
| USE_GFIX_UTILITY | IGNORE_DB_TRIGGERS
| CHANGE_HEADER_SETTINGS
| SELECT_ANY_OBJECT_IN_DATABASE
| ACCESS_ANY_OBJECT_IN_DATABASE
| MODIFY_ANY_OBJECT_IN_DATABASE
| CHANGE_MAPPING_RULES | USE_GRANTED_BY_CLAUSE
| GRANT_REVOKE_ON_ANY_OBJECT
| GRANT_REVOKE_ANY_DDL_RIGHT
| CREATE_PRIVILEGED_ROLES | GET_DBCRYPT_INFO
| MODIFY_EXT_CONN_POOL | REPLICATE_INTO_DATABASE
| PROFILE_ANY_ATTACHMENT
Parameter Description
The statement CREATE ROLE creates a new role object, to which one or more privileges can be
granted subsequently. The name of a role must be unique among the names of roles in the current
database.
It is advisable to make the name of a role unique among usernames as well. The
system will not prevent the creation of a role whose name clashes with an existing
username, but if it happens, the user will be unable to connect to the database.
• Administrators
The user executing the CREATE ROLE statement becomes the owner of the role.
Creating a role SELECT_ALL with the system privilege to select from any selectable object
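Based on the CREATE ROLE syntax, this example can be written as:

CREATE ROLE SELECT_ALL
SET SYSTEM PRIVILEGES TO SELECT_ANY_OBJECT_IN_DATABASE;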
See also
ALTER ROLE, DROP ROLE, GRANT, REVOKE, Fine-grained System Privileges
Alters a role
Available in
DSQL
Syntax
<sys_privileges> ::=
!! See CREATE ROLE !!
Parameter Description
rolename Role name; specifying anything other than RDB$ADMIN will fail
ALTER ROLE can be used to grant or revoke system privileges from a role, or enable and disable the
capability for Windows Administrators to assume administrator privileges automatically when
logging in.
This last capability can affect only one role: the system-generated role RDB$ADMIN.
It is not possible to selectively grant or revoke system privileges. Only the privileges listed in the SET
SYSTEM PRIVILEGES clause will be available to the role after commit, and DROP SYSTEM PRIVILEGES will
remove all system privileges from this role.
• Administrators
• Users with the ALTER ANY ROLE privilege, with the following caveats
◦ Setting or dropping auto admin mapping also requires the system privilege
CHANGE_MAPPING_RULES
Grant a role SELECT_ALL the system privilege to select from any selectable object
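Based on the ALTER ROLE syntax, this example can be written as:

ALTER ROLE SELECT_ALL
SET SYSTEM PRIVILEGES TO SELECT_ANY_OBJECT_IN_DATABASE;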
See also
CREATE ROLE, GRANT, REVOKE, Fine-grained System Privileges
Drops a role
Available in
DSQL, ESQL
Syntax
The statement DROP ROLE deletes an existing role. It takes a single argument, the name of the role.
Once the role is deleted, the entire set of privileges is revoked from all users and objects that were
granted the role.
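For example (the role name SELLER is hypothetical):

DROP ROLE SELLER;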
• Administrators
See also
CREATE ROLE, GRANT, REVOKE
14.5.1. GRANT
Available in
DSQL, ESQL
GRANT <privileges>
TO <grantee_list>
[WITH GRANT OPTION]
[{GRANTED BY | AS} [USER] grantor]
<privileges> ::=
<table_privileges> | <execute_privilege>
| <usage_privilege> | <ddl_privileges>
| <db_ddl_privilege>
<table_privileges> ::=
{ALL [PRIVILEGES] | <table_privilege_list> }
ON [TABLE] {table_name | view_name}
<table_privilege_list> ::=
<table_privilege> [, <table_privilege> ...]
<table_privilege> ::=
SELECT | DELETE | INSERT
| UPDATE [(col [, col ...])]
| REFERENCES [(col [, col ...])]
<ddl_privileges> ::=
{ALL [PRIVILEGES] | <ddl_privilege_list>} <object_type>
<ddl_privilege_list> ::=
<ddl_privilege> [, <ddl_privilege> ...]
<ddl_privilege> ::=
CREATE | ALTER ANY | DROP ANY
<object_type> ::=
CHARACTER SET | COLLATION | DOMAIN | EXCEPTION
| FILTER | FUNCTION | GENERATOR | PACKAGE
| PROCEDURE | ROLE | SEQUENCE | TABLE | VIEW
<db_ddl_privileges> ::=
{ALL [PRIVILEGES] | <db_ddl_privilege_list>} {DATABASE | SCHEMA}
<db_ddl_privilege_list> ::=
<db_ddl_privilege> [, <db_ddl_privilege> ...]
<db_ddl_privilege> ::=
CREATE | ALTER | DROP
<grantee> ::=
PROCEDURE proc_name | FUNCTION func_name
| PACKAGE package_name | TRIGGER trig_name
| VIEW view_name | ROLE role_name
| [USER] username | GROUP Unix_group
| SYSTEM PRIVILEGE <sys_privilege>
<sys_privilege> ::=
!! See CREATE ROLE !!
GRANT <role_granted_list>
TO <role_grantee_list>
[WITH ADMIN OPTION]
[{GRANTED BY | AS} [USER] grantor]
<role_granted_list> ::=
<role_granted> [, <role_granted> ...]
<role_grantee_list> ::=
<role_grantee> [, <role_grantee> ...]
<role_grantee> ::=
user_or_role_name
| USER username
| ROLE role_name
Parameter Description
username The username to which the privileges are granted or the role is assigned. If
the USER keyword is absent, it can also be a role.
The GRANT statement grants one or more privileges on database objects to users, roles, or other
database objects.
A regular, authenticated user has no privileges on any database object until they are explicitly
granted to that individual user, to a role granted to the user as a default role, or to all users bundled
as the user PUBLIC. When an object is created, only its creator (the owner) and administrators have
privileges to it, and can grant privileges to other users, roles, or objects.
Different sets of privileges apply to different types of metadata objects. The different types of
privileges will be described separately later in this section.
SCHEMA is currently a synonym for DATABASE; this may change in a future version, so
we recommend always using DATABASE
The TO Clause
The TO clause specifies the users, roles, and other database objects that are to be granted the
privileges enumerated in privileges. The clause is mandatory.
The optional USER keyword in the TO clause allows you to specify exactly who or what is granted the
privilege. If a USER (or ROLE) keyword is not specified, the server first checks for a role with this
name and, if there is no such role, the privileges are granted to the user with that name without
further checking.
• When a GRANT statement is executed, the security database is not checked for
the existence of the grantee user. This is not a bug: SQL permissions are
concerned with controlling data access for authenticated users, both native
and trusted, and trusted operating system users are not stored in the security
database.
• When granting a privilege to a database object other than user or role, such as
a procedure, trigger or view, you must specify the object type.
• Privileges granted to a system privilege will be applied when the user is logged
in with a role that has that system privilege.
A role is a “container” object that can be used to package a collection of privileges. Use of the role is
then granted to each user or role that requires those privileges. A role can also be granted to a list
of users or roles.
The role must exist before privileges can be granted to it. See CREATE ROLE for the syntax and rules.
The role is maintained by granting privileges to it and, when required, revoking privileges from it.
When a role is dropped — see DROP ROLE — all users lose the privileges acquired through the role.
Any privileges that were granted additionally to an affected user by way of a different grant
statement are retained.
Unless the role is granted as a default role, a user that is granted a role must explicitly specify that
role, either with their login credentials or activating it using SET ROLE, to exercise the associated
privileges. Any other privileges granted to the user or received through default roles are not
affected by explicitly specifying a role.
More than one role can be granted to the same user. Although only one role can be explicitly
specified, multiple roles can be active for a user, either as default roles, or as roles granted to the
current role.
Cumulative Roles
The ability to grant roles to other roles and default roles results in so-called cumulative roles.
Multiple roles can be active for a user, and the user receives the cumulative privileges of all those
roles.
When a role is explicitly specified on connect or using SET ROLE, the user will assume all privileges
granted to that role, including those privileges granted to the secondary roles (including roles
granted on that secondary role, etc). Or in other words, when the primary role is explicitly
specified, the secondary roles are also activated. The function RDB$ROLE_IN_USE can be used to check
whether a role is currently active.
See also Default Roles for the effects of DEFAULT with cumulative roles, and The WITH ADMIN OPTION
Clause for effects on granting.
Default Roles
A role can be granted as a default role by prefixing the role with DEFAULT in the GRANT statement.
Granting roles as a default role to users simplifies management of privileges, as this makes it
possible to group privileges on a role and granting that group of privileges to a user without
requiring the user to explicitly specify the role. Users can receive multiple default roles, granting
them all privileges of those default roles.
The effects of a default role depend on whether the role is granted to a user or to another role:
• When a role is granted to a user as a default role, the role will be activated automatically, and its
privileges will be applied to the user without the need to explicitly specify the role.
Roles that are active by default are not returned from CURRENT_ROLE, but the function
RDB$ROLE_IN_USE can be used to check if a role is currently active.
• When a role is granted to another role as a default role, the rights of that role will only be
automatically applied to the user if the primary role is granted as a default role to the user,
otherwise the primary role needs to be specified explicitly (in other words, it behaves the same
as when the secondary role was granted without the DEFAULT clause).
For a linked list of granted roles, all roles need to be granted as a default role for them to be
applied automatically. That is, for GRANT DEFAULT ROLEA TO ROLE ROLEB, GRANT ROLEB TO ROLE
ROLEC, GRANT DEFAULT ROLEC TO USER USER1 only ROLEC is active by default for USER1. To assume the
privileges of ROLEA and ROLEB, ROLEC needs to be explicitly specified, or ROLEB needs to be granted
DEFAULT to ROLEC.
Firebird has a predefined user named PUBLIC, that represents all users. Privileges for operations on
a particular object that are granted to the user PUBLIC can be exercised by any authenticated user.
If privileges are granted to the user PUBLIC, they should be revoked from the user
PUBLIC as well.
The optional WITH GRANT OPTION clause allows the users specified in the user list to grant the
privileges specified in the privilege list to other users.
By default, when privileges are granted in a database, the current user is recorded as the grantor.
The GRANTED BY clause enables the current user to grant those privileges as another user.
When using the REVOKE statement, it will fail if the current user is not the user that was named in
the GRANTED BY clause.
The GRANTED BY (and AS) clause can be used only by the database owner and other administrators.
The object owner cannot use GRANTED BY unless they also have administrator privileges.
For tables and views, unlike other metadata objects, it is possible to grant several privileges at once.
INSERT
Permits the user or object to INSERT rows into the table or view
DELETE
Permits the user or object to DELETE rows from the table or view
UPDATE
Permits the user or object to UPDATE rows in the table or view, optionally restricted to specific
columns
REFERENCES
Permits the user or object to reference the table via a foreign key, optionally restricted to the
specified columns. If the primary or unique key referenced by the foreign key of the other table
is composite then all columns of the key must be specified.
ALL [PRIVILEGES]
Combines SELECT, INSERT, UPDATE, DELETE and REFERENCES privileges in a single package
2. The SELECT privilege to the MANAGER, ENGINEER roles and to the user IVAN:
3. All privileges to the ADMINISTRATOR role, together with the authority to grant the same privileges
to others:
4. The SELECT and REFERENCES privileges on the NAME column to all users and objects:
5. The SELECT privilege being granted to the user IVAN by the user ALEX:
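Sketches of the grants described in examples 2 to 5 (the table names SALES, COUNTRY and EMPLOYEE are assumptions for illustration; the descriptions do not name them):

GRANT SELECT ON TABLE SALES
TO ROLE MANAGER, ROLE ENGINEER, USER IVAN;

GRANT ALL ON TABLE SALES
TO ROLE ADMINISTRATOR
WITH GRANT OPTION;

GRANT SELECT, REFERENCES (NAME) ON TABLE COUNTRY
TO PUBLIC;

GRANT SELECT ON TABLE EMPLOYEE
TO USER IVAN
GRANTED BY ALEX;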
The EXECUTE privilege applies to stored procedures, stored functions (including UDFs), and packages.
It allows the grantee to execute the specified object, and, if applicable, to retrieve its output.
In the case of selectable stored procedures, it acts somewhat like a SELECT privilege, insofar as this
style of stored procedure is executed in response to a SELECT statement.
For packages, the EXECUTE privilege can only be granted for the package as a whole, not for
individual subroutines.
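For illustration, granting EXECUTE on a procedure and on a package (these object names also appear in the REVOKE examples later in this chapter):

GRANT EXECUTE ON PROCEDURE ADD_EMP_PROJ
TO ROLE MANAGER;

GRANT EXECUTE ON PACKAGE DATE_UTILS
TO USER ALEX;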
To be able to use metadata objects other than tables, views, stored procedures or functions, triggers
and packages, it is necessary to grant the user (or database object like trigger, procedure or
function) the USAGE privilege on these objects.
By default, Firebird executes PSQL modules with the privileges of the caller, so it is necessary that
either the user or otherwise the routine itself has been granted the USAGE privilege. This can be
changed with the SQL SECURITY clause of the DDL statements of those objects.
The USAGE privilege is currently only available for exceptions and sequences (in
gen_id(gen_name, n) or next value for gen_name). Support for the USAGE privilege
for other metadata objects may be added in future releases.
For sequences (generators), the USAGE privilege only grants the right to increment
the sequence using the GEN_ID function or NEXT VALUE FOR. The SET GENERATOR
statement is a synonym for ALTER SEQUENCE … RESTART WITH …, and is considered a
DDL statement. By default, only the owner of the sequence and administrators
have the rights to such operations. The right to set the initial value of any sequence
can be granted with GRANT ALTER ANY SEQUENCE, which is not recommended for
general users.
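For illustration, granting USAGE on a sequence and on an exception (these object names also appear in the REVOKE examples later in this chapter):

GRANT USAGE ON SEQUENCE GEN_AGE
TO ROLE MANAGER;

GRANT USAGE ON EXCEPTION E_ACCESS_DENIED
TO PACKAGE PKG_BILL;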
DDL Privileges
By default, only administrators can create new metadata objects. Altering or dropping these objects
is restricted to the owner of the object (its creator) and administrators. DDL privileges can be used
to grant privileges for these operations to other users.
ALTER ANY
Allows modification of any object of the specified type
DROP ANY
Allows deletion of any object of the specified type
ALL [PRIVILEGES]
Combines the CREATE, ALTER ANY and DROP ANY privileges for the specified type
There are no separate DDL privileges for triggers and indexes. The necessary privileges are
inherited from the table or view. Creating, altering or dropping a trigger or index requires the ALTER
ANY TABLE or ALTER ANY VIEW privilege.
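A sketch of granting DDL privileges (the username JOE is borrowed from the REVOKE examples later in this chapter):

GRANT CREATE TABLE TO USER JOE;

GRANT ALTER ANY PROCEDURE TO USER JOE;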
The syntax for granting privileges to create, alter or drop a database deviates from the normal
syntax of granting DDL privileges for other object types.
ALTER
Allows modification of the current database
DROP
Allows deletion of the current database
ALL [PRIVILEGES]
Combines the ALTER and DROP privileges. ALL does not include the CREATE privilege.
The ALTER DATABASE and DROP DATABASE privileges apply only to the current database, whereas DDL
privileges ALTER ANY and DROP ANY on other object types apply to all objects of the specified type in
the current database. The privilege to alter or drop the current database can only be granted by
administrators.
The CREATE DATABASE privilege is a special kind of privilege as it is saved in the security database. A
list of users with the CREATE DATABASE privilege is available from the virtual table SEC$DB_CREATORS.
Only administrators in the security database can grant the privilege to create a new database.
SCHEMA is currently a synonym for DATABASE; this may change in a future version, so
we recommend always using DATABASE
2. Granting JOE the privilege to execute ALTER DATABASE for the current database:
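Based on the syntax above, this grant can be written as:

GRANT ALTER DATABASE TO USER JOE;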
Assigning Roles
Assigning a role is similar to granting a privilege. One or more roles can be assigned to one or more
users, including the user PUBLIC, using one GRANT statement.
The optional WITH ADMIN OPTION clause allows the users specified in the user list to grant the role(s)
specified to other users or roles.
For cumulative roles, a user can only exercise the WITH ADMIN OPTION of a secondary role if all
intermediate roles are also granted WITH ADMIN OPTION. That is, GRANT ROLEA TO ROLE ROLEB WITH
ADMIN OPTION, GRANT ROLEB TO ROLE ROLEC, GRANT ROLEC TO USER USER1 WITH ADMIN OPTION only allows
USER1 to grant ROLEC to other users or roles, while using GRANT ROLEB TO ROLE ROLEC WITH ADMIN
OPTION allows USER1 to grant ROLEA, ROLEB and ROLEC to other users.
2. Assigning the MANAGER role to the user ALEX with the authority to assign this role to other users:
GRANT MANAGER
TO USER ALEX WITH ADMIN OPTION;
GRANT MANAGER
TO ROLE DIRECTOR;
See also
REVOKE
14.6.1. REVOKE
Available in
DSQL, ESQL
<privileges> ::=
!! See GRANT syntax !!
<role_granted_list> ::=
!! See GRANT syntax !!
<role_grantee_list> ::=
!! See GRANT syntax !!
<grantee_list> ::=
!! See GRANT syntax !!
Parameter Description
grantor The grantor user on whose behalf the privilege(s) are being revoked
The REVOKE statement revokes privileges that were granted using the GRANT statement from users,
roles, and other database objects. See GRANT for detailed descriptions of the various types of
privileges.
Only the user who granted the privilege can revoke it.
When the DEFAULT clause is specified, only the DEFAULT property of the role is removed; the role itself
remains granted.
The FROM clause specifies a list of users, roles and other database objects that will have the
enumerated privileges revoked. The optional USER keyword in the FROM clause allows you to specify
exactly which type is to have the privilege revoked. If a USER (or ROLE) keyword is not specified, the
server first checks for a role with this name and, if there is no such role, the privileges are revoked
from the user with that name without further checking.
• The REVOKE statement does not check for the existence of the user from which
the privileges are being revoked.
• When revoking a privilege from a database object other than USER or ROLE, you
must specify its object type
The optional GRANT OPTION FOR clause revokes the user’s privilege to grant the specified privileges to
other users, roles, or database objects (as previously granted with the WITH GRANT OPTION). It does
not revoke the specified privilege itself.
One usage of the REVOKE statement is to remove roles that were assigned to a user, or a group of
users, by a GRANT statement. In the case of multiple roles and/or multiple grantees, the REVOKE verb is
followed by the list of roles that will be removed from the list of users specified after the FROM
clause.
The optional ADMIN OPTION FOR clause provides the means to revoke the grantee’s “administrator”
privilege, the ability to assign the same role to other users, without revoking the grantee’s privilege
to the role.
A privilege that has been granted using the GRANTED BY clause is internally attributed explicitly to
the grantor designated by that original GRANT statement. Only that user can revoke the granted
privilege. Using the GRANTED BY clause you can revoke privileges as if you are the specified user. To
revoke a privilege with GRANTED BY, the current user must be logged in either with full
administrative privileges, or as the user designated as grantor by that GRANTED BY clause.
Not even the owner of a role can use GRANTED BY unless they have administrative
privileges.
The REVOKE ALL ON ALL statement allows a user to revoke all privileges (including roles) on all objects
from one or more users, roles or other database objects. It is a quick way to “clear” privileges when
access to the database must be blocked for a particular user or role.
When the current user is logged in with full administrator privileges in the database, the REVOKE ALL
ON ALL will remove all privileges, no matter who granted them. Otherwise, only the privileges
granted by the current user are removed.
1. Revoking the privileges for selecting and inserting into the table (or view) SALES
2. Revoking the privilege for selecting from the CUSTOMER table from the MANAGER and ENGINEER roles
and from the user IVAN:
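Based on the description, this statement might be:

REVOKE SELECT ON TABLE CUSTOMER
FROM ROLE MANAGER, ROLE ENGINEER, USER IVAN;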
3. Revoking from the ADMINISTRATOR role the privilege to grant any privileges on the CUSTOMER table
to other users or roles:
4. Revoking the privilege for selecting from the COUNTRY table and the privilege to reference the
NAME column of the COUNTRY table from any user, via the special user PUBLIC:
5. Revoking the privilege for selecting from the EMPLOYEE table from the user IVAN, which was granted
by the user ALEX:
6. Revoking the privilege for updating the FIRST_NAME and LAST_NAME columns of the EMPLOYEE table
from the user IVAN:
7. Revoking the privilege for inserting records into the EMPLOYEE_PROJECT table from the
ADD_EMP_PROJ procedure:
8. Revoking the privilege for executing the procedure ADD_EMP_PROJ from the MANAGER role:
9. Revoking the privilege to grant the EXECUTE privilege for the function GET_BEGIN_DATE to other
users from the role MANAGER:
10. Revoking the EXECUTE privilege on the package DATE_UTILS from user ALEX:
11. Revoking the USAGE privilege on the sequence GEN_AGE from the role MANAGER:
12. Revoking the USAGE privilege on the sequence GEN_AGE from the trigger TR_AGE_BI:
13. Revoking the USAGE privilege on the exception E_ACCESS_DENIED from the package PKG_BILL:
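One possible statement:

```sql
REVOKE USAGE ON EXCEPTION E_ACCESS_DENIED FROM PACKAGE PKG_BILL;
```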
15. Revoking the privilege to alter any procedure from user JOE:
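One possible statement:

```sql
REVOKE ALTER ANY PROCEDURE FROM USER JOE;
```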
17. Revoking the DIRECTOR and MANAGER roles from the user IVAN:
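One possible statement:

```sql
REVOKE DIRECTOR, MANAGER FROM USER IVAN;
```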
18. Revoke from the user ALEX the privilege to grant the MANAGER role to other users:
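One possible statement:

```sql
REVOKE ADMIN OPTION FOR MANAGER FROM USER ALEX;
```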
19. Revoking all privileges (including roles) on all objects from the user IVAN:
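One possible statement:

```sql
REVOKE ALL ON ALL FROM USER IVAN;
```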
After this statement is executed by an administrator, the user IVAN will have no privileges
whatsoever, except those granted through PUBLIC.
20. Revoking the DEFAULT property of the DIRECTOR role from user ALEX, while the role itself remains
granted:
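One possible statement:

```sql
REVOKE DEFAULT DIRECTOR FROM USER ALEX;
```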
See also
GRANT
• when EXECUTE STATEMENT ON EXTERNAL DATA SOURCE requires data exchange between clusters
• when server-wide SYSDBA access to databases is needed from other clusters, using services.
• On Windows, due to support for Trusted User authentication: to map Windows users to a
Firebird user and/or role. An example is the need for a ROLE granted to a Windows group to be
assigned automatically to members of that group.
The single solution for all such cases is mapping the login information assigned to a user when it
connects to a Firebird server to internal security objects in a database — CURRENT_USER and
CURRENT_ROLE.
1. mapping scope — whether the mapping is local to the current database or whether its effect is
to be global, affecting all databases in the cluster, including security databases
2. mapping name — an SQL identifier, since mappings are objects in a database, like any other
3. the object FROM which the mapping maps. It consists of four items:
▪ plugin name or
▪ any method
◦ The type of that name — username, role, or OS group — depending upon the plugin that
added that name during authentication.
Available in
DSQL
Syntax
Parameter Description
The CREATE MAPPING statement creates a mapping of security objects (e.g. users, groups, roles) of one
or more authentication plugins to internal security objects — CURRENT_USER and CURRENT_ROLE.
If the GLOBAL clause is present, then the mapping will be applied not only for the current database,
but for all databases in the same cluster, including security databases.
There can be global and local mappings with the same name. They are distinct
objects.
Global mapping works best if a Firebird 3.0 or higher version database is used as
the security database. If you plan to use another database for this purpose — using
your own provider, for example — then you should create a table in it named
RDB$MAP, with the same structure as RDB$MAP in a Firebird 3.0 or higher database
and with SYSDBA-only write access.
The USING clause describes the mapping source. It has a complex set of options:
• an explicit plugin name (PLUGIN plugin_name) means it applies only for that plugin
• it can use any available plugin (ANY PLUGIN); although not if the source is the product of a
previous mapping
• you can omit the use of a specific method by specifying the asterisk (*) argument
• it can specify the name of the database that defined the mapping for the FROM object (IN
database)
The FROM clause describes the object to map. The FROM clause has a mandatory argument, the type of
the object named. It has the following options:
• When mapping the product of a previous mapping, type can be only USER or ROLE
• Use the ANY keyword to work with any name of the given type.
The TO clause specifies the user or role that is the result of the mapping. The to_name is optional. If
it is not specified, then the original name of the mapped object will be used.
For roles, the role defined by a mapping rule is only applied when the user does not explicitly
specify a role on connect. The mapped role can be assumed later in the session using SET TRUSTED
ROLE, even when the mapped role is not explicitly granted to the user.
• Administrators
1. Enable use of Windows trusted authentication in all databases that use the current security database:
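A sketch of such a mapping, assuming the Win_Sspi plugin (the mapping name TRUSTED_AUTH is illustrative):

```sql
-- the mapping name is illustrative
CREATE GLOBAL MAPPING TRUSTED_AUTH
  USING PLUGIN WIN_SSPI
  FROM ANY USER
  TO USER;
```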
The group DOMAIN_ANY_RID_ADMINS does not exist in Windows, but such a name
would be added by the Win_Sspi plugin to provide exact backwards
compatibility.
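A mapping using that group name might look like this sketch (the mapping name WIN_ADMINS is illustrative):

```sql
-- maps members of the Windows administrators group to the RDB$ADMIN role;
-- the mapping name is illustrative
CREATE MAPPING WIN_ADMINS
  USING PLUGIN WIN_SSPI
  FROM Predefined_Group DOMAIN_ANY_RID_ADMINS
  TO ROLE RDB$ADMIN;
```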
3. Enable a particular user from another database to access the current database with another
name:
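A sketch (the mapping name, database alias and user names are illustrative):

```sql
-- all names are illustrative
CREATE MAPPING FROM_RT
  USING PLUGIN SRP IN "rt"
  FROM USER U1
  TO USER U2;
```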
4. Enable the server’s SYSDBA (from the main security database) to access the current database.
(Assume that the database is using a non-default security database):
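A sketch (the mapping name and security database alias are illustrative):

```sql
-- names are illustrative
CREATE MAPPING DEF_SYSDBA
  USING PLUGIN SRP IN "security.db"
  FROM USER SYSDBA
  TO USER;
```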
5. Ensure users who logged in using the legacy authentication plugin do not have too many
privileges:
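A sketch (the mapping name and target user are illustrative):

```sql
-- maps every legacy-authenticated login to a low-privilege user;
-- names are illustrative
CREATE MAPPING LEGACY_2_GUEST
  USING PLUGIN LEGACY_AUTH
  FROM ANY USER
  TO USER GUEST;
```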
See also
ALTER MAPPING, CREATE OR ALTER MAPPING, DROP MAPPING
Available in
DSQL
Syntax
The ALTER MAPPING statement allows you to modify any of the existing mapping options, but a local
mapping cannot be changed to GLOBAL or vice versa.
Global and local mappings of the same name are different objects.
• Administrators
Alter mapping
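A sketch of altering an existing mapping (all names are illustrative):

```sql
-- redirect an existing mapping to a different target user;
-- names are illustrative
ALTER MAPPING FROM_RT
  USING PLUGIN SRP IN "rt"
  FROM USER U1
  TO USER U3;
```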
See also
CREATE MAPPING, CREATE OR ALTER MAPPING, DROP MAPPING
Available in
DSQL
Syntax
The CREATE OR ALTER MAPPING statement creates a new or modifies an existing mapping.
Global and local mappings of the same name are different objects.
See also
CREATE MAPPING, ALTER MAPPING, DROP MAPPING
Available in
DSQL
Syntax
Parameter Description
The DROP MAPPING statement removes an existing mapping. If GLOBAL is specified, then a global
mapping will be removed.
Global and local mappings of the same name are different objects.
• Administrators
Drop mapping
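For example (the mapping name is illustrative):

```sql
DROP MAPPING TRUSTED_AUTH;
```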
See also
CREATE MAPPING
To make database encryption possible, you need to obtain or write a database encryption plugin.
Out of the box, Firebird does not include a database encryption plugin.
The main problem with database encryption is how to store the secret key. Firebird provides
support for transferring the key from the client, but this does not mean that storing the key on the
client is the best way; it is one of several alternatives. However, keeping encryption keys on the
same disk as the database is an insecure option.
For efficient separation of encryption and key access, the database encryption plugin data is
divided into two parts, the encryption itself and the holder of the secret key. This can be an efficient
approach when you want to use a good encryption algorithm, but you have your own custom
method of storing the keys.
Once you have decided on the plugin and key-holder, you can perform the encryption.
Syntax
Parameter Description
Encryption starts immediately after this statement completes, and will be performed in the
background. Normal operations of the database are not disturbed during encryption.
The optional KEY clause specifies the name of the key for the encryption plugin. The plugin decides
what to do with this key name.
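A sketch, assuming a plugin named DBCRYPT and a key named RED (both names are illustrative):

```sql
-- plugin and key names are illustrative
ALTER DATABASE ENCRYPT WITH DBCRYPT KEY RED;
```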
The encryption process can be monitored using the MON$CRYPT_PAGE field in the
MON$DATABASE virtual table, or viewed in the database header page using gstat -e.
gstat -h will also provide limited information about the encryption status.
For example, the following query will display the progress of the encryption
process as a percentage.
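A query of this kind could be:

```sql
SELECT MON$CRYPT_PAGE * 100 / MON$PAGES
FROM MON$DATABASE;
```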
SCHEMA is currently a synonym for DATABASE; this may change in a future version, so
we recommend always using DATABASE.
See also
Decrypting a Database, ALTER DATABASE
Syntax
Decryption starts immediately after this statement completes, and will be performed in the
background. Normal operations of the database are not disturbed during decryption.
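For example:

```sql
ALTER DATABASE DECRYPT;
```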
SCHEMA is currently a synonym for DATABASE; this may change in a future version, so
we recommend always using DATABASE.
See also
Encrypting a Database, ALTER DATABASE
The SQL Security feature has two contexts: INVOKER and DEFINER. The INVOKER context corresponds to
the privileges available to the current user or the calling object, while DEFINER corresponds to those
available to the owner of the object.
The SQL SECURITY property is an optional part of an object’s definition that can be applied to the
object with DDL statements. The property cannot be dropped, but it can be changed from INVOKER to
DEFINER and vice versa.
This is not the same thing as SQL privileges, which are applied to users and database objects to give
them various types of access to other database objects. When an executable object in Firebird needs
access to a table, view or another executable object, the target object is not accessible if the invoker
does not have the necessary privileges on that object. That is, by default all executable objects have
the SQL SECURITY INVOKER property, and any caller lacking the necessary privileges will be rejected.
The default SQL Security behaviour of a database can be overridden using ALTER DATABASE.
If a routine has the SQL SECURITY DEFINER property applied, the invoking user or routine will be able
to execute it if the required privileges have been granted to its owner, without the need for the
caller to be granted those privileges as well.
In summary:
• If INVOKER is set, the access rights for executing the call to an executable object are determined
by checking the current user’s active set of privileges
• If DEFINER is set, the access rights of the object owner will be applied instead, regardless of the
current user’s active set of privileges.
Chapter 15. Management Statements
The isql tool also has a collection of SET commands. Those commands are not part
of Firebird’s SQL lexicon. For information on isql’s SET commands, see Isql Set
Commands in Firebird Interactive SQL Utility.
Management statements can run anywhere DSQL can run, but typically, the developer will want to
run a management statement in a database trigger. A subset of management statements can be used
directly in PSQL modules without the need to wrap them in an EXECUTE STATEMENT block. For more
details of the current set, see Management Statements in PSQL in the PSQL chapter.
Most of the management statements affect only the current connection (attachment, or “session”),
and do not require any authorization beyond the login privileges of a current user without
elevated privileges.
Some management statements operate beyond the scope of the current session. Examples are the
ALTER DATABASE {BEGIN | END} BACKUP statements to control the “copy-safe” mode, or the ALTER
EXTERNAL CONNECTIONS POOL statements to manage connection pooling. A set of system privileges,
analogous with SQL privileges granted for database objects, is provided to enable the required
authority to run a specific management statement in this category.
Some management statements use the verb ALTER, but those statements should not
be confused with DDL ALTER statements that modify database objects like tables,
views, procedures, roles, et al.
Although some ALTER DATABASE clauses (e.g. BEGIN BACKUP) can be considered as
management statements, they are documented in the DDL chapter.
Unless explicitly mentioned otherwise in an “Available in” section, management statements are
available in DSQL and PSQL. Availability in ESQL is not tracked by this Language Reference.
Syntax
<type_from> ::=
<scalar_datatype>
| <blob_datatype>
| TIME ZONE
<type_to> ::=
<scalar_datatype>
| <blob_datatype>
| VARCHAR | {CHARACTER | CHAR} VARYING
| LEGACY | NATIVE | EXTENDED
| EXTENDED TIME WITH TIME ZONE
| EXTENDED TIMESTAMP WITH TIME ZONE
<scalar_datatype> ::=
!! See Scalar Data Types Syntax !!
<blob_datatype> ::=
!! See BLOB Data Types Syntax !!
This statement makes it possible to substitute one data type with another when performing client-
server interactions. In other words, type_from returned by the engine is represented as type_to in
the client API.
Only fields returned by the database engine in regular messages are substituted according to these
rules. Variables returned as an array slice are not affected by the SET BIND statement.
When a partial type definition is used (e.g. CHAR instead of CHAR(n)) in from_type, the coercion is
performed for all CHAR columns. The special partial type TIME ZONE stands for TIME WITH TIME ZONE
and TIMESTAMP WITH TIME ZONE. When a partial type definition is used in to_type, the engine defines
missing details about that type automatically based on source column.
Changing the binding of any NUMERIC or DECIMAL data type does not affect the underlying integer
type. In contrast, changing the binding of an integer data type also affects appropriate NUMERIC and
DECIMAL types. For example, SET BIND OF INT128 TO DOUBLE PRECISION will also map NUMERIC and
DECIMAL with precision 19 or higher, as these types use INT128 as their underlying type.
The special type LEGACY is used when a data type, missing in previous Firebird version, should be
represented in a way, understandable by old client software (possibly with data loss). The coercion
rules applied in this case are shown in the table below.
Native data type Legacy data type
BOOLEAN CHAR(5)
INT128 BIGINT
Using EXTENDED for type_to causes the engine to coerce to an extended form of the type_from data
type. Currently, this works only for TIME/TIMESTAMP WITH TIME ZONE, they are coerced to EXTENDED
TIME/TIMESTAMP WITH TIME ZONE. The EXTENDED type contains both the time zone name, and the
corresponding GMT offset, so it remains usable if the client application cannot process named time
zones properly (e.g. due to the missing ICU library).
Setting a binding to NATIVE resets the existing coercion rule for this data type and returns it in its
native format.
The initial bind rules of a connection can be configured through the DPB by providing a semicolon
separated list of <type_from> TO <type_to> options as the string value of isc_dpb_set_bind.
Execution of ALTER SESSION RESET will revert to the binding rules configured through the DPB, or
otherwise the system default.
It is also possible to configure a default set of data type coercion rules for all clients
through the DataTypeCompatibility configuration option, either as a global
configuration in firebird.conf or per database in databases.conf.
DataTypeCompatibility currently has two possible values: 3.0 and 2.5. The 3.0
option maps data types introduced after Firebird 3.0 — in particular DECIMAL/NUMERIC with
precision 19 or higher, INT128, DECFLOAT, and TIME/TIMESTAMP WITH
TIME ZONE — to data types supported in Firebird 3.0. The 2.5 option also converts
the BOOLEAN data type.
See the Native to LEGACY coercion rules for details. This setting allows legacy client
applications to work with Firebird 5.0 without recompiling or otherwise adjusting
them to understand the new data types.
-- native
SELECT CAST('123.45' AS DECFLOAT(16)) FROM RDB$DATABASE;
CAST
=======================
123.45
-- double
SET BIND OF DECFLOAT TO DOUBLE PRECISION;
SELECT CAST('123.45' AS DECFLOAT(16)) FROM RDB$DATABASE;
CAST
=======================
123.4500000000000
-- still double
SET BIND OF DECFLOAT(34) TO CHAR;
SELECT CAST('123.45' AS DECFLOAT(16)) FROM RDB$DATABASE;
CAST
=======================
123.4500000000000
-- text
SELECT CAST('123.45' AS DECFLOAT(34)) FROM RDB$DATABASE;
CAST
==========================================
123.45
CURRENT_TIMESTAMP
=========================================================
2020-02-21 16:26:48.0230 GMT*
CURRENT_TIMESTAMP
=========================================================
2020-02-21 19:26:55.6820 +03:00
Configures DECFLOAT rounding and error behaviour for the current session
Syntax
SET DECFLOAT
{ ROUND <round_mode>
| TRAPS TO [<trap_opt> [, <trap_opt> ...]] }
<round_mode> ::=
CEILING | UP | HALF_UP | HALF_EVEN
| HALF_DOWN | DOWN | FLOOR | REROUND
<trap_opt> ::=
DIVISION_BY_ZERO | INEXACT | INVALID_OPERATION
| OVERFLOW | UNDERFLOW
SET DECFLOAT ROUND changes the rounding behaviour of operations on DECFLOAT. The default
rounding mode is HALF_UP. The initial configuration of a connection can also be specified using the
DPB tag isc_dpb_decfloat_round with the desired round_mode as string value.
UP: away from 0
HALF_EVEN: to nearest; if equidistant, ensure the last digit of the result is even
DOWN: towards 0
The current value for the connection can be found using RDB$GET_CONTEXT('SYSTEM',
'DECFLOAT_ROUND').
Execution of ALTER SESSION RESET will revert to the value configured through the DPB, or otherwise
the system default.
SET DECFLOAT TRAPS changes the error behaviour of operations on DECFLOAT. The default traps are
DIVISION_BY_ZERO, INVALID_OPERATION, OVERFLOW; this default matches the behaviour specified in the
SQL standard for DECFLOAT. This statement controls whether certain exceptional conditions result in
an error (“trap”) or alternative handling (for example, an underflow returns 0 when not set, or an
overflow returns an infinity). The initial configuration of a connection can also be specified using
the DPB tag isc_dpb_decfloat_traps with the desired comma-separated trap_opt values as a string
value.
The trap options INEXACT and UNDERFLOW are not part of the default trap set.
The current value for the connection can be found using RDB$GET_CONTEXT('SYSTEM',
'DECFLOAT_TRAPS').
Execution of ALTER SESSION RESET will revert to the value configured through the DPB, or otherwise
the system default.
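For example, a sketch of both forms:

```sql
-- change the rounding mode for this session
SET DECFLOAT ROUND HALF_EVEN;

-- raise errors only on these exceptional conditions
SET DECFLOAT TRAPS TO DIVISION_BY_ZERO, OVERFLOW;
```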
This connection pool is part of the Firebird server and used for connections to other databases or
servers from the Firebird server itself.
Syntax
Parameter Description
size Maximum size of the connection pool. Range 0 - 1000. Setting to 0 disables
the external connections pool.
When prepared it is described like a DDL statement, but its effect is immediate — it is executed
immediately and to completion, without waiting for transaction commit.
This statement can be issued from any connection, and changes are applied to the in-memory
instance of the pool in the current Firebird process. If the process is Firebird Classic, execution only
affects the current process (current connection), and does not affect other Classic processes.
Changes made with ALTER EXTERNAL CONNECTIONS POOL are not persistent: after a restart, Firebird will
use the pool settings configured in firebird.conf by ExtConnPoolSize and ExtConnPoolLifeTime.
CLEAR ALL
Closes all idle connections and disassociates currently active connections; they are immediately
closed when unused.
CLEAR OLDEST
Closes expired connections
SET LIFETIME
Configures the maximum lifetime of an idle connection in the pool. The default value (in
seconds) is set using the parameter ExtConnPoolLifetime in firebird.conf.
SET SIZE
Configures the maximum number of idle connections in the pool. The default value is set using
the parameter ExtConnPoolSize in firebird.conf.
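A sketch of the clauses described above (the values are illustrative):

```sql
-- values are illustrative
ALTER EXTERNAL CONNECTIONS POOL SET SIZE 50;
ALTER EXTERNAL CONNECTIONS POOL SET LIFETIME 10 MINUTE;
ALTER EXTERNAL CONNECTIONS POOL CLEAR ALL;
```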
Every successful connection is associated with a pool, which maintains two lists — one for idle
connections and one for active connections. When a connection in the “active” list has no active
requests and no active transactions, it is assumed to be “unused”. A reset of the unused connection
is attempted using an ALTER SESSION RESET statement and,
• if the reset succeeds (no errors occur) the connection is moved into the “idle” list;
• if the pool has reached its maximum size, the oldest idle connection is closed.
• When the lifetime of an idle connection expires, it is deleted from the pool and closed.
New Connections
When the engine is asked to create a new external connection, the pool first looks for a candidate in
the “idle” list. The search, which is case-sensitive, involves four parameters:
1. connection string
2. username
3. password
4. role
• If it fails the check, it is deleted, and the search is repeated, without reporting any error to the
client
• Otherwise, the live connection is moved from the “idle” list to the “active” list and returned to
the caller
• If there are multiple suitable connections, the most recently used one is chosen
• If there is no suitable connection, a new one is created and added to the “active” list.
• Administrators
See also
RDB$GET_CONTEXT
Available in
DSQL
Syntax
Parameter Description
The SET ROLE statement allows a user to assume a different role; it sets the CURRENT_ROLE context
variable to role_name, if that role has been granted to the CURRENT_USER. For this session, the user
receives the privileges granted by that role. Any rights granted to the previous role are removed
from the session. Use NONE instead of role_name to clear the CURRENT_ROLE.
When the specified role does not exist or has not been explicitly granted to the user, the error “Role
role_name is invalid or unavailable” is raised.
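For example, a sketch:

```sql
SET ROLE MANAGER;
SELECT CURRENT_ROLE FROM RDB$DATABASE;
```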
ROLE
=======================
MANAGER
ROLE
=======================
NONE
See also
SET TRUSTED ROLE, GRANT
Sets the active role of the current session to the trusted role
Available in
DSQL
Syntax
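The statement takes no parameters:

```sql
SET TRUSTED ROLE;
```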
The SET TRUSTED ROLE statement makes it possible to assume the role assigned to the user through a
mapping rule (see Mapping of Users to Objects). The role assigned through a mapping rule is
assumed automatically on connect, if the user hasn’t specified an explicit role. The SET TRUSTED ROLE
statement makes it possible to assume the mapped (or “trusted”) role at a later time, or to assume it
again after the current role was changed using SET ROLE.
A trusted role is not a specific type of role, but can be any role that was created using CREATE ROLE,
or a predefined system role such as RDB$ADMIN. An attachment (session) has a trusted role when the
security objects mapping subsystem finds a match between the authentication result passed from
the plugin and a local or global mapping to a role for the current database. The role may be one
that is not granted explicitly to that user.
When a session has no trusted role, executing SET TRUSTED ROLE will raise error “Your attachment
has no trusted role”.
While the CURRENT_ROLE can be changed using SET ROLE, it is not always possible to revert to a trusted
role using the same command, because SET ROLE checks if the role has been granted to the user.
With SET TRUSTED ROLE, the trusted role can be assumed again even when SET ROLE fails.
1. Assuming a mapping rule that assigns the role ROLE1 to a user ALEX:
ROLE
===============================
ROLE1
ROLE
===============================
ROLE2
ROLE
===============================
ROLE1
See also
SET ROLE, Mapping of Users to Objects
Syntax
Parameter Description
The SET SESSION IDLE TIMEOUT sets an idle timeout at connection level and takes effect immediately.
The statement can run outside transaction control (without an active transaction).
Setting a value larger than configured for the database is allowed, but is effectively ignored, see also
Determining the Timeout that is In Effect.
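For example (a sketch; the unit may be HOUR, MINUTE or SECOND, with MINUTE the default):

```sql
-- close this connection after 30 minutes of inactivity
SET SESSION IDLE TIMEOUT 30 MINUTE;
```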
The current timeout set for the session can be retrieved through RDB$GET_CONTEXT, namespace SYSTEM
and variable SESSION_IDLE_TIMEOUT. Information is also available from MON$ATTACHMENTS:
MON$IDLE_TIMEOUT
Connection-level idle timeout in seconds; 0 if timeout is not set.
MON$IDLE_TIMER
Idle timer expiration time; contains NULL if an idle timeout was not set, or if a timer is not
running.
The session idle timeout is reset when ALTER SESSION RESET is executed.
An idle session timeout allows a user connection to close automatically after a specified period of
inactivity. A database administrator can use it to enforce closure of old connections that have
become inactive, to reduce unnecessary consumption of resources. It can also be used by
application and tools developers as an alternative to writing their own modules for controlling
connection lifetime.
By default, the idle timeout is not enabled. No minimum or maximum limit is imposed, but a
reasonably large period — such as a few hours — is recommended.
• When the user API call leaves the engine (returns to the calling connection) a special idle timer
associated with the current connection is started
• When another user API call from that connection enters the engine, the idle timer is stopped
and reset to zero
• If the maximum idle time is exceeded, the engine immediately closes the connection in the same
way as with asynchronous connection cancellation:
◦ The network connection remains open at this point, allowing the client application to get the
exact error code on the next API call. The network connection will be closed on the server
side, after an error is reported or in due course as a result of a network timeout from a
client-side disconnection.
Whenever a connection is cancelled, the next user API call returns the error isc_att_shutdown with
a secondary error specifying the exact reason:
isc_att_shut_idle
Idle timeout expired
isc_att_shut_killed
Killed by database administrator
isc_att_shut_db_down
Database is shut down
isc_att_shut_engine
Engine is shut down
The idle timer will not start if the timeout period is set to zero.
• At database level, the database administrator can set the configuration parameter
ConnectionIdleTimeout, an integer value in minutes. The default value of zero means no timeout
is set. It is configurable per-database, so it may be set globally in firebird.conf and overridden
for individual databases in databases.conf as required.
The scope of this method is all user connections, except system connections (garbage collector,
cache writer, etc.).
• at connection level, the idle session timeout is supported by both the SET SESSION IDLE TIMEOUT
statement and the API (setIdleTimeout). The scope of this method is specific to the supplied
connection (attachment). Its value in the API is in seconds. In the SQL syntax it can be hours,
minutes or seconds. Scope for this method is the connection to which it is applied.
For more information about the API calls, consult the Firebird 4.0 Release Notes.
The effective idle timeout value is determined whenever a user API call leaves the engine, checking
first at connection level and then at database level. A connection-level timeout can override the
value of a database-level setting, as long as the period of time for the connection-level setting is no
longer than any non-zero timeout that is applicable at database level.
Take note of the difference between the time units at each level. At database level,
in the configuration files, the unit for ConnectionIdleTimeout is minutes. In SQL, the default
unit is minutes but can also be expressed in hours or seconds explicitly. At the API
level, the unit is seconds.
Absolute precision is not guaranteed in any case, especially when the system load
is high, but timeouts are guaranteed not to expire earlier than the moment
specified.
Syntax
Parameter Description
The SET STATEMENT TIMEOUT sets a statement timeout at connection level and takes effect
immediately. The statement can run outside transaction control (without an active transaction).
Setting a value larger than configured for the database is allowed, but is effectively ignored, see also
Determining the Statement Timeout that is In Effect.
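For example (a sketch; the unit may be HOUR, MINUTE, SECOND or MILLISECOND):

```sql
-- cancel any statement in this connection that runs longer than 10 seconds
SET STATEMENT TIMEOUT 10 SECOND;
```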
The current statement timeout set for the session can be retrieved through RDB$GET_CONTEXT,
namespace SYSTEM and variable STATEMENT_TIMEOUT. Information is also available from
MON$ATTACHMENTS:
MON$STATEMENT_TIMEOUT
Connection-level statement timeout in milliseconds; 0 if timeout is not set.
In MON$STATEMENTS:
MON$STATEMENT_TIMEOUT
Statement-level statement timeout in milliseconds; 0 if timeout is not set.
MON$STATEMENT_TIMER
Timeout timer expiration time; contains NULL if an idle timeout was not set, or if a timer is not
running.
Statement Timeouts
The statement timeout feature allows execution of a statement to be stopped automatically when it
has been running longer than a given timeout period. It gives the database administrator an
instrument for limiting excessive resource consumption from heavy queries.
Statement timeouts can also be useful to application developers when creating and debugging
complex queries without advance knowledge of execution time. Testers and others could find them
handy for detecting long-running queries and establishing finite run times for test suites.
When the statement starts execution, or a cursor is opened, the engine starts a special timer. It is
stopped when the statement completes execution, or the last record has been fetched by the cursor.
• if the statement is not currently active (between fetches, for example), it is marked as cancelled,
and the next fetch will break execution and return an error
The timer will not start if the timeout period is set to zero.
• at connection level, using SET STATEMENT TIMEOUT or the API for setting a statement timeout
(setStatementTimeout). A connection-level setting (via SQL or the API) affects all statements for
the given connection; units for the timeout period at this level can be specified to any
granularity from hours to milliseconds.
The statement timeout value that is in effect is determined whenever a statement starts executing,
or a cursor is opened. In searching out the timeout in effect, the engine goes up through the levels,
from statement through to database and/or global levels until it finds a non-zero value. If the value
in effect turns out to be zero then no statement timer is running and no timeout applies.
Take note of the difference between the time units at each level. At database level,
in the conf file, the unit for StatementTimeout is seconds. In SQL, the default unit is
seconds but can be expressed in hours, minutes or milliseconds explicitly. At the
API level, the unit is milliseconds.
Absolute precision is not guaranteed in any case, especially when the system load
is high, but timeouts are guaranteed not to expire earlier than the moment
specified.
Whenever a statement times out and is cancelled, the next user API call returns the error
isc_cancelled with a secondary error specifying the exact reason, viz.,
isc_cfg_stmt_timeout
Config level timeout expired
isc_att_stmt_timeout
Attachment level timeout expired
isc_req_stmt_timeout
Statement level timeout expired
2. When the engine runs an EXECUTE STATEMENT statement, it passes the remainder of the currently
active timeout to the new statement. If the external (remote) engine does not support statement
timeouts, the local engine silently ignores any corresponding error.
3. When the engine acquires a lock from the lock manager, it tries to lower the value of the lock
timeout using the remainder of the currently active statement timeout, if possible. Due to lock
manager internals, any statement timeout remainder will be rounded up to whole seconds.
Syntax
Changes the session time zone to the specified time zone. Specifying LOCAL will revert to the
initial session time zone (either the default or as specified through the connection property
isc_dpb_session_time_zone).
Executing ALTER SESSION RESET has the same effect on the session time zone as SET TIME ZONE LOCAL,
but will also reset other session properties.
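For example:

```sql
SET TIME ZONE 'America/Sao_Paulo';

-- revert to the initial session time zone
SET TIME ZONE LOCAL;
```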
Configures whether the optimizer should optimize for fetching first or all rows.
Syntax
<optimize-mode> ::=
FOR {FIRST | ALL} ROWS
| TO DEFAULT
This feature allows the optimizer to consider another (hopefully better) plan if only a subset of
rows is fetched initially by the user application (with the remaining rows being fetched on
demand), thus improving the response time.
It can also be specified at the statement level using the OPTIMIZE FOR clause.
The default behaviour can be specified globally using the OptimizeForFirstRows setting in
firebird.conf or databases.conf.
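For example:

```sql
-- optimize subsequent statements for fast retrieval of the first rows
SET OPTIMIZE FOR FIRST ROWS;

-- revert to the database default
SET OPTIMIZE TO DEFAULT;
```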
Resets the session state of the current connection to its initial values
Syntax
Resetting the session can be useful for reusing the connection by a client application (for example,
by a client-side connection pool). When this statement is executed, all user context variables are
cleared, contents of global temporary tables are cleared, and all session-level settings are reset to
their initial values.
• Error isc_ses_reset_err (335545206) is raised if any transaction is active in the current session
other than the current transaction (the one executing ALTER SESSION RESET) and two-phase
transactions in the prepared state.
• ON DISCONNECT database triggers are fired, if present and if database triggers are not disabled for
the current connection.
• The current transaction (the one executing ALTER SESSION RESET), if present, is rolled back. A
warning is reported if this transaction modified data before resetting the session.
• Session configuration is reset to its initial values. This includes, but is not limited to:
◦ DECFLOAT parameters (TRAP and ROUND) are reset to the initial values defined using the DPB at
connect time, or otherwise the system default.
◦ The current role is restored to the initial value defined using DPB at connect time, and — if
the role changed — the security classes cache is cleared.
◦ The session time zone is reset to the initial value defined using the DPB at connect time, or
otherwise the system default.
◦ The bind configuration is reset to the initial value defined using the DPB at connect time, or
otherwise the database or system default.
◦ In general, configuration values should revert to the values configured using the DPB at
connect time, or otherwise the database or system default.
• Context variables defined for the USER_SESSION namespace are removed (USER_TRANSACTION was
cleared earlier by the transaction rollback).
• Global temporary tables defined as ON COMMIT PRESERVE ROWS are truncated (their contents are
cleared).
• ON CONNECT database triggers are fired, if present and if database triggers are not disabled for the
current connection.
• A new transaction is implicitly started with the same parameters as the transaction that was
rolled back (if there was a transaction).
• As isql starts multiple transactions for a single connection, ALTER SESSION RESET
cannot be executed in isql.
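The statement itself takes no further clauses. For example, a client-side connection pool might execute it before handing a pooled connection to a new client:

```sql
ALTER SESSION RESET;
```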
Error Handling
Any error raised by ON DISCONNECT triggers aborts the session reset and leaves the session state
unchanged. Such errors are reported using primary error code isc_ses_reset_err (335545206) and
error text "Cannot reset user session".
Any error raised after ON DISCONNECT triggers (including the ones raised by ON CONNECT triggers)
aborts both the session reset and the connection itself. Such errors are reported using primary
error code isc_ses_reset_failed (335545272) and error text "Reset of user session failed. Connection is
shut down.". Subsequent operations on the connection (except detach) will fail with error
isc_att_shutdown (335544856).
15.8. Debugging
15.8.1. SET DEBUG OPTION
Syntax
SET DEBUG OPTION configures debug information for the current connection.
Debug options are closely tied to engine internals and their usage is discouraged if
you do not understand how these internals are subject to change between
versions.
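As a sketch, the option DSQL_KEEP_BLR (which requests that the engine retain the BLR of a statement so it can be inspected through the statement information API) can be enabled as follows; treat the option name as version-specific:

```sql
SET DEBUG OPTION DSQL_KEEP_BLR = TRUE;
```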
Appendix A: Supplementary Information
In PSQL modules, dependencies arise on the definitions of table columns accessed and also on any
parameter or variable that has been defined in the module using the TYPE OF clause.
After the engine has altered any domain, including the implicit domains created internally behind
column definitions and output parameters, the engine internally recompiles all of its dependencies.
Any module that fails to recompile because of an incompatibility arising from a domain change is
marked as invalid (“invalidated”) by setting the RDB$VALID_BLR field in its system record (in
RDB$PROCEDURES, RDB$FUNCTIONS or RDB$TRIGGERS, as appropriate) to zero.
The module remains invalid until either:
1. the domain is altered again and the new definition is compatible with the previously
invalidated module definition, or
2. the previously invalidated module is altered to match the new domain definition
The following query will find the modules that depend on a specific domain and report the state of
their RDB$VALID_BLR fields:
SELECT * FROM (
SELECT
'Procedure',
rdb$procedure_name,
rdb$valid_blr
FROM rdb$procedures
UNION ALL
SELECT
'Function',
rdb$function_name,
rdb$valid_blr
FROM rdb$functions
UNION ALL
SELECT
'Trigger',
rdb$trigger_name,
rdb$valid_blr
FROM rdb$triggers
) (type, name, valid)
WHERE EXISTS
(SELECT * from rdb$dependencies
WHERE rdb$dependent_name = name
AND rdb$depended_on_name = 'MYDOMAIN')
The following query will find the modules that depend on a specific table column and report the
state of their RDB$VALID_BLR fields:
SELECT * FROM (
SELECT
'Procedure',
rdb$procedure_name,
rdb$valid_blr
FROM rdb$procedures
UNION ALL
SELECT
'Function',
rdb$function_name,
rdb$valid_blr
FROM rdb$functions
UNION ALL
SELECT
'Trigger',
rdb$trigger_name,
rdb$valid_blr
FROM rdb$triggers) (type, name, valid)
WHERE EXISTS
(SELECT *
FROM rdb$dependencies
WHERE rdb$dependent_name = name
AND rdb$depended_on_name = 'MYTABLE'
AND rdb$field_name = 'MYCOLUMN')
1. A procedure (B) is defined, that calls another procedure (A) and reads output
parameters from it. In this case, a dependency is registered in
RDB$DEPENDENCIES. Subsequently, the called procedure (A) is altered to change or
remove one or more of those output parameters. The ALTER PROCEDURE A
statement will fail with an error when commit is attempted.
2. A procedure (B) calls procedure A, supplying values for its input parameters.
No dependency is registered in RDB$DEPENDENCIES. Subsequent modification of
the input parameters in procedure A will be allowed. Failure will occur at run-time, when B calls A
with the mismatched input parameter set.
Other Notes
• For PSQL modules inherited from earlier Firebird versions (including a number of system
triggers, even if the database was created under Firebird 2.1 or higher), RDB$VALID_BLR is NULL.
This does not imply that their BLR is invalid.
• The isql commands SHOW PROCEDURES and SHOW TRIGGERS display an asterisk in the RDB$VALID_BLR
column for any module for which the value is zero (i.e. invalid). However, SHOW PROCEDURE
<procname> and SHOW TRIGGER <trigname>, which display individual PSQL modules, do not signal
invalid BLR.
A Note on Equality
This note about equality and inequality operators applies everywhere in Firebird’s
SQL language.
The “=” operator, which is explicitly used in many conditions, only matches values to values.
According to the SQL standard, NULL is not a value and hence two NULLs are neither equal nor
unequal to one another. If you need NULLs to match each other in a condition, use the IS NOT
DISTINCT FROM operator. This operator returns true if the operands have the same value or if they
are both NULL.
select *
from A join B
on A.id is not distinct from B.code
Likewise, in cases where you want to test against NULL for a condition of inequality, use IS DISTINCT
FROM, not “<>”. If you want NULL to be considered different from any value and two NULLs to be
considered equal:
select *
from A join B
on A.id is distinct from B.code
Appendix B: Exception Codes and Messages
Custom Exceptions
You can create custom exceptions for use in PSQL modules, with message text of
up to 1,021 characters. For more information, see CREATE EXCEPTION in Data
Definition (DDL) Statements and, for usage, the statement EXCEPTION in Procedural
SQL (PSQL) Statements.
The Firebird SQLCODE error codes do not correlate with the standards-compliant SQLSTATE codes.
SQLCODE has been used for many years and should now be considered deprecated. Support for
SQLCODE is likely to be dropped in a future version.
The structure of an SQLSTATE error code is five characters comprising the SQL error class (2
characters) and the SQL subclass (3 characters).
Although Firebird tries to use SQLSTATE codes defined in ISO/IEC 9075 (the SQL
standard), some are non-standard or derive from older standards like X/Open SQL
for historic reasons.
SQLCLASS 00 (Success)
00000 Success
SQLCLASS 01 (Warning)
24504 The cursor identified in the UPDATE, DELETE, SET, or GET statement is
not positioned on a row
SQLCODE has been used for many years and should now be considered deprecated.
Support for SQLCODE is likely to be dropped in a future version.
Table 274. SQLCODE and GDSCODE Error Codes and Message Texts
304 335545267 truncate_monitor Monitoring data does not fit into the
field
304 335545268 truncate_context Engine data does not fit into return
value of system function
-85 335544753 usrname_not_found The user name specified was not found
in the security database
-104 335545037 svc_no_switches All services except for getting server log
require switches
-204 335544759 bad_default_value can not define a not null column with
NULL as default value
-233 335544490 logh_open_flag Log file @1 not latest in the chain but
open flag still set
-242 335544499 wal_err_rollover Cannot roll over to the next log file @1
-247 335544514 wal_cant_expand Could not expand the WAL segment for
database @1
-249 335544522 wal_err_logwrite WAL I/O error. Please see Firebird log.
-282 335544660 view_alias view @1 has more than one base table;
use aliases to distinguish
-402 335544414 blobnotsup BLOB and array data types are not
supported for @1 operation
-552 335544553 grant_nopriv user does not have GRANT privileges for
operation
-607 336397212 dsql_no_array_computed Array and BLOB data types not allowed
in computed field
-663 335544672 key_field_err too few key columns found for index @1
(incorrect column name?)
-808 335544602 table_view_err Only one table allowed for VIEW WITH
CHECK OPTION
-833 335544810 date_range_exceeded value exceeds the range for valid dates
-833 335544912 time_range_exceeded value exceeds the range for a valid time
-901 335544706 host_unknown The specified name was not found in the
hosts file or Domain Name Services.
-901 335545176 batch_policy Invalid blob policy in the batch for @1()
call
-901 336068801 dyn_inv_sql_role_name user name @1 could not be used for SQL
role
-902 335544344 io_error I/O error during "@1" operation for file
"@2"
-902 335544472 login Your user name and password are not
defined. Ask your database
administrator to set up a Firebird login.
Appendix C: Reserved Words and Keywords
Non-reserved keywords are also part of the language. They have a special meaning when used in
the proper context, but they are not reserved for Firebird’s own and exclusive use. You can use
them as identifiers without double-quoting.
Since Firebird 5.0, the reserved words and keywords can be queried from the virtual table
RDB$KEYWORDS.
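For example, the reserved words can be listed with a query like the following (column names per the RDB$KEYWORDS description in Appendix D):

```sql
SELECT RDB$KEYWORD_NAME
FROM RDB$KEYWORDS
WHERE RDB$KEYWORD_RESERVED
ORDER BY RDB$KEYWORD_NAME;
```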
Reserved words
Full list of reserved words in Firebird 5.0:
Keywords
The following terms have a special meaning in Firebird 5.0 SQL. This lists all keywords, reserved
and non-reserved.
!< ^< ^=
^> , :=
!= !> (
) < <=
<> = >
>= || ~<
~= ~> ABS
ABSOLUTE ACCENT ACOS
ACOSH ACTION ACTIVE
ADD ADMIN AFTER
ALL ALTER ALWAYS
AND ANY AS
ASC ASCENDING ASCII_CHAR
ASCII_VAL ASIN ASINH
AT ATAN ATAN2
ATANH AUTO AUTONOMOUS
AVG BACKUP BASE64_DECODE
BASE64_ENCODE BEFORE BEGIN
BETWEEN BIGINT BINARY
BIND BIN_AND BIN_NOT
BIN_OR BIN_SHL BIN_SHR
BIN_XOR BIT_LENGTH BLOB
BLOB_APPEND BLOCK BODY
BOOLEAN BOTH BREAK
BY CALLER CASCADE
Appendix D: System Tables
RDB$BACKUP_HISTORY
History of backups performed using nBackup
RDB$CHARACTER_SETS
Names and describes the character sets available in the database
RDB$CHECK_CONSTRAINTS
Cross-references between the names of constraints (NOT NULL constraints, CHECK constraints and
ON UPDATE and ON DELETE clauses in foreign key constraints) and their associated system-
generated triggers
RDB$COLLATIONS
Collations for all character sets
RDB$CONFIG
Virtual table with configuration settings applied for the current database
RDB$DATABASE
Basic information about the database
RDB$DB_CREATORS
A list of users granted the CREATE DATABASE privilege when using the specified database as a
security database
RDB$DEPENDENCIES
Information about dependencies between database objects
RDB$EXCEPTIONS
Custom database exceptions
RDB$FIELDS
Column and domain definitions, both system and custom
RDB$FIELD_DIMENSIONS
Dimensions of array columns
RDB$FILES
Information about secondary files and shadow files
RDB$FILTERS
Information about BLOB filters
RDB$FORMATS
Information about changes in the formats of tables
RDB$FUNCTIONS
Information about external functions
RDB$FUNCTION_ARGUMENTS
Attributes of the parameters of external functions
RDB$GENERATORS
Information about generators (sequences)
RDB$INDEX_SEGMENTS
Segments and index positions
RDB$INDICES
Definitions of all indexes in the database (system- or user-defined)
RDB$LOG_FILES
Not used in the current version
RDB$PACKAGES
Stores the definition (header and body) of SQL packages
RDB$PAGES
Information about database pages
RDB$PROCEDURES
Definitions of stored procedures
RDB$PROCEDURE_PARAMETERS
Parameters of stored procedures
RDB$REF_CONSTRAINTS
Definitions of referential constraints (foreign keys)
RDB$RELATIONS
Headers of tables and views
RDB$RELATION_CONSTRAINTS
Definitions of all table-level constraints
RDB$RELATION_FIELDS
Top-level definitions of table columns
RDB$ROLES
Role definitions
RDB$SECURITY_CLASSES
Access control lists
RDB$TIME_ZONES
Time zones
RDB$TRANSACTIONS
State of multi-database transactions
RDB$TRIGGERS
Trigger definitions
RDB$TRIGGER_MESSAGES
Trigger messages
RDB$TYPES
Definitions of enumerated data types
RDB$USER_PRIVILEGES
SQL privileges granted to system users
RDB$VIEW_RELATIONS
Tables that are referred to in view definitions: one record for each table in a view
RDB$AUTH_MAPPING
RDB$AUTH_MAPPING stores authentication and other security mappings.
0 - USER
1 - ROLE
RDB$MAP_TO CHAR(63) The name to map to
RDB$SYSTEM_FLAG SMALLINT Flag:
0 - user-defined
1 or higher - system-defined
RDB$DESCRIPTION BLOB TEXT Optional description of the mapping
(comment)
RDB$BACKUP_HISTORY
RDB$BACKUP_HISTORY stores the history of backups performed using the nBackup utility.
RDB$CHARACTER_SETS
RDB$CHARACTER_SETS names and describes the character sets available in the database.
RDB$CHECK_CONSTRAINTS
RDB$CHECK_CONSTRAINTS provides the cross-references between the names of system-generated
triggers for constraints and the names of the associated constraints (NOT NULL constraints, CHECK
constraints and the ON UPDATE and ON DELETE clauses in foreign key constraints).
RDB$COLLATIONS
RDB$COLLATIONS stores collations for all character sets.
RDB$CONFIG
RDB$CONFIG is a virtual table showing the configuration settings of the current database for the
current connection.
Table RDB$CONFIG is populated from in-memory structures upon request and its instance is
preserved for the SQL query lifetime. For security reasons, access to this table is allowed for
administrators only. Non-privileged users see no rows in this table (and no error is raised).
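For example, an administrator can inspect which settings were explicitly configured (rather than left at their defaults) with a query like:

```sql
SELECT RDB$CONFIG_NAME, RDB$CONFIG_VALUE, RDB$CONFIG_SOURCE
FROM RDB$CONFIG
WHERE RDB$CONFIG_IS_SET;
```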
RDB$DATABASE
RDB$DATABASE stores basic information about the database. It contains only one record.
RDB$DB_CREATORS
RDB$DB_CREATORS contains a list of users granted the CREATE DATABASE privilege when using the
specified database as a security database.
8 - user
13 - role
RDB$DEPENDENCIES
RDB$DEPENDENCIES stores the dependencies between database objects.
0 - table
1 - view
2 - trigger
3 - computed column
4 - CHECK constraint
5 - procedure
6 - index expression
7 - exception
8 - user
9 - column
10 - index
15 - stored function
18 - package header
19 - package body
RDB$DEPENDED_ON_TYPE SMALLINT Identifies the type of the object
depended on:
RDB$EXCEPTIONS
RDB$EXCEPTIONS stores custom database exceptions.
0 - user-defined
1 or higher - system-defined
RDB$SECURITY_CLASS CHAR(63) May reference a security class defined
in the table RDB$SECURITY_CLASSES, to
apply access control limits to all users of
this exception
RDB$OWNER_NAME CHAR(63) The username of the user who created
the exception originally
RDB$FIELDS
RDB$FIELDS stores definitions of columns and domains, both system and custom. This is where the
detailed data attributes are stored for all columns.
7 - SMALLINT
8 - INTEGER
10 - FLOAT
12 - DATE
13 - TIME
14 - CHAR
16 - BIGINT
23 - BOOLEAN
24 - DECFLOAT(16)
25 - DECFLOAT(34)
26 - INT128
27 - DOUBLE PRECISION
28 - TIME WITH TIME ZONE
29 - TIMESTAMP WITH TIME ZONE
35 - TIMESTAMP
37 - VARCHAR
261 - BLOB
0 - untyped (binary)
1 - text
2 - BLR
3 - access control list
4 - reserved for future use
5 - encoded table metadata description
6 - for storing the details of a cross-database transaction that ends abnormally
7 - external file description
8 - debug information (for PSQL)
< 0 - user-defined
0 - untyped data
1 - fixed binary data
7 - SMALLINT
8 - INTEGER
10 - FLOAT
12 - DATE
13 - TIME
14 - CHAR
16 - BIGINT
23 - BOOLEAN
24 - DECFLOAT(16)
25 - DECFLOAT(34)
26 - INT128
27 - DOUBLE PRECISION
28 - TIME WITH TIME ZONE
29 - TIMESTAMP WITH TIME ZONE
35 - TIMESTAMP
37 - VARCHAR
261 - BLOB
RDB$DIMENSIONS SMALLINT Defines the number of dimensions in an
array if the column is defined as an
array. Always NULL for columns that are
not arrays
RDB$NULL_FLAG SMALLINT Specifies whether the column can take
an empty value (the field will contain
NULL) or not (the field will contain the
value of 1)
RDB$CHARACTER_LENGTH SMALLINT The length of CHAR or VARCHAR columns in
characters (not in bytes)
RDB$COLLATION_ID SMALLINT The identifier of the collation for a
character column or domain. If it is not
defined, the value of the field will be 0
RDB$CHARACTER_SET_ID SMALLINT The identifier of the character set for a
character column, BLOB TEXT column or
domain
RDB$FIELD_DIMENSIONS
RDB$FIELD_DIMENSIONS stores the dimensions of array columns.
RDB$FILES
RDB$FILES stores information about secondary files and shadow files.
RDB$FILTERS
RDB$FILTERS stores information about BLOB filters.
0 - user-defined
1 or greater - internally defined
RDB$SECURITY_CLASS CHAR(63) May reference a security class defined
in the table RDB$SECURITY_CLASSES, to
apply access control limits to all users of
this filter
RDB$OWNER_NAME CHAR(63) The username of the user who created
the filter originally
RDB$FORMATS
RDB$FORMATS stores information about changes in tables. Each time any metadata change to a table is
committed, it gets a new format number. When the format number of any table reaches 255, or any
view 32,000, the entire database becomes inoperable. To return to normal, the database must be
backed up with the gbak utility and restored from that backup copy.
RDB$FUNCTIONS
RDB$FUNCTIONS stores the information needed by the engine about stored functions and external
functions (user-defined functions, UDFs).
0 - user-defined
1 - internally defined
RDB$FUNCTION_ARGUMENTS
RDB$FUNCTION_ARGUMENTS stores the parameters of functions and their attributes.
0 - by value
1 - by reference
2 - by descriptor
3 - by BLOB descriptor
4 - by scalar array
5 - by reference with null
7 - SMALLINT
8 - INTEGER
10 - FLOAT
12 - DATE
13 - TIME
14 - CHAR
16 - BIGINT
23 - BOOLEAN
24 - DECFLOAT(16)
25 - DECFLOAT(34)
26 - INT128
27 - DOUBLE PRECISION
28 - TIME WITH TIME ZONE
29 - TIMESTAMP WITH TIME ZONE
35 - TIMESTAMP
37 - VARCHAR
40 - CSTRING (null-terminated text)
45 - BLOB_ID
261 - BLOB
BOOLEAN = 1
SMALLINT = 2
INTEGER = 4
DATE = 4
TIME = 4
BIGINT = 8
DECFLOAT(16) = 8
DOUBLE PRECISION = 8
TIMESTAMP = 8
TIME WITH TIME ZONE = 8
BLOB_ID = 8
TIMESTAMP WITH TIME ZONE = 12
INT128 = 16
DECFLOAT(34) = 16
0 - by value
1 - by reference
2 - through a descriptor
3 - via the BLOB descriptor
RDB$FIELD_NAME CHAR(63) The name of the column the parameter
references, if it was declared using TYPE
OF COLUMN instead of a regular data type.
Used in conjunction with
RDB$RELATION_NAME (see next).
RDB$RELATION_NAME CHAR(63) The name of the table the parameter
references, if it was declared using TYPE
OF COLUMN instead of a regular data type
RDB$SYSTEM_FLAG SMALLINT Flag:
0 - user-defined
1 or higher - system-defined
RDB$DESCRIPTION BLOB TEXT Optional description of the function
argument (comment)
RDB$GENERATORS
RDB$GENERATORS stores the metadata of sequences (generators).
0 - user-defined
1 or greater - system-defined
6 - internal sequence for identity column
RDB$DESCRIPTION BLOB TEXT Optional description of the sequence
(comment)
RDB$SECURITY_CLASS CHAR(63) May reference a security class defined
in the table RDB$SECURITY_CLASSES, to
apply access control limits to all users of
this sequence
RDB$OWNER_NAME CHAR(63) The username of the user who created
the sequence originally
RDB$INITIAL_VALUE BIGINT Stores the start value (START WITH value)
of the sequence. The start value is the
first value generated by NEXT VALUE FOR
after a restart of the sequence.
RDB$GENERATOR_INCREMENT INTEGER Stores the increment (INCREMENT BY
value) of the sequence. The increment is
used by NEXT VALUE FOR, but not by
GEN_ID.
RDB$INDEX_SEGMENTS
RDB$INDEX_SEGMENTS stores the segments (table columns) of indexes and their positions in the key. A
separate row is stored for each column in an index.
RDB$INDICES
RDB$INDICES stores definitions of both system- and user-defined indexes. The attributes of each
column belonging to an index are stored in one row of the table RDB$INDEX_SEGMENTS.
0 - not unique
1 - unique
RDB$DESCRIPTION BLOB TEXT Could store comments concerning the
index
RDB$SEGMENT_COUNT SMALLINT The number of segments (columns) in
the index
RDB$INDEX_INACTIVE SMALLINT Indicates whether the index is currently
active:
0 - active
1 - inactive
RDB$INDEX_TYPE SMALLINT Distinguishes between an ascending (0
or NULL) and descending index (1). Not
used in databases created before
Firebird 2.0; hence, indexes in upgraded
databases are more likely to store NULL
in this column
RDB$FOREIGN_KEY CHAR(63) The name of the primary or unique key
index referenced by the foreign key
backed by this index; NULL if this index is
not used by a foreign key.
0 - user-defined
1 or greater - system-defined
RDB$EXPRESSION_BLR BLOB BLR The binary language representation
(BLR) of the expression of an expression
index, used for calculating the values
for the index at runtime.
RDB$EXPRESSION_SOURCE BLOB TEXT The source code of the expression of an
expression index
RDB$STATISTICS DOUBLE PRECISION Stores the last known selectivity of the
entire index, calculated by execution of
a SET STATISTICS statement over the
index. It is also recalculated whenever
the database is first opened by the
server. The selectivity of each separate
segment of the index is stored in
RDB$INDEX_SEGMENTS.
RDB$CONDITION_BLR BLOB BLR The binary language representation
(BLR) of the WHERE condition of a partial
index, used for filtering the values for
the index at runtime, and optimizer
decisions to use the index.
RDB$CONDITION_SOURCE BLOB TEXT The source code of the WHERE condition
of a partial index
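The selectivity stored in RDB$STATISTICS and RDB$INDEX_SEGMENTS can be recalculated manually with a SET STATISTICS statement; the index name below is illustrative:

```sql
SET STATISTICS INDEX IDX_CUSTOMER_NAME;
```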
RDB$KEYWORDS
RDB$KEYWORDS is a virtual table listing the keywords used by the Firebird SQL parser. If a keyword is
reserved, it cannot be used as a regular identifier, but only as a delimited (quoted) identifier.
RDB$LOG_FILES
RDB$LOG_FILES is not currently used.
RDB$PACKAGES
RDB$PACKAGES stores the definition (header and body) of SQL packages.
0 - user-defined
1 or higher - system-defined
RDB$DESCRIPTION BLOB TEXT Optional description of the package
(comment)
RDB$SQL_SECURITY BOOLEAN The SQL SECURITY mode (DEFINER or
INVOKER):
RDB$PAGES
RDB$PAGES stores and maintains information about database pages and their usage.
RDB$PROCEDURES
RDB$PROCEDURES stores the definitions of stored procedures, including their PSQL source code and its
binary language representation (BLR). The next table, RDB$PROCEDURE_PARAMETERS, stores the
definitions of input and output parameters.
RDB$PROCEDURE_PARAMETERS
RDB$PROCEDURE_PARAMETERS stores the parameters of stored procedures and their attributes. It holds
one row for each parameter.
0 - by value
1 - by reference
2 - by descriptor
3 - by BLOB descriptor
RDB$FIELD_NAME CHAR(63) The name of the column the parameter
references, if it was declared using TYPE
OF COLUMN instead of a regular data type.
Used in conjunction with
RDB$RELATION_NAME (see next).
RDB$PUBLICATIONS
RDB$PUBLICATIONS stores the replication publications defined in the database.
0 - user-defined
1 or higher - system-defined
RDB$ACTIVE_FLAG SMALLINT Inactive (0) or active (1)
RDB$AUTO_ENABLE SMALLINT Automatically add new tables to
publication:
0 - disabled
1 - enabled (tables are automatically
added to this publication)
RDB$PUBLICATION_TABLES
RDB$PUBLICATION_TABLES stores the names of tables that are replicated as part of a publication.
RDB$REF_CONSTRAINTS
RDB$REF_CONSTRAINTS stores the attributes of the referential constraints — Foreign Key relationships
and referential actions.
RDB$RELATIONS
RDB$RELATIONS stores the top-level definitions and attributes of all tables and views in the system.
RDB$RELATION_CONSTRAINTS
RDB$RELATION_CONSTRAINTS stores the definitions of all table-level constraints: primary, unique,
foreign key, CHECK, NOT NULL constraints.
RDB$RELATION_FIELDS
RDB$RELATION_FIELDS stores the definitions of table and view columns.
RDB$ROLES
RDB$ROLES stores the roles that have been defined in this database.
0 - unused
1 - USER_MANAGEMENT
2 - READ_RAW_PAGES
3 - CREATE_USER_TYPES
4 - USE_NBACKUP_UTILITY
5 - CHANGE_SHUTDOWN_MODE
6 - TRACE_ANY_ATTACHMENT
7 - MONITOR_ANY_ATTACHMENT
8 - ACCESS_SHUTDOWN_DATABASE
9 - CREATE_DATABASE
10 - DROP_DATABASE
11 - USE_GBAK_UTILITY
12 - USE_GSTAT_UTILITY
13 - USE_GFIX_UTILITY
14 - IGNORE_DB_TRIGGERS
15 - CHANGE_HEADER_SETTINGS
16 - SELECT_ANY_OBJECT_IN_DATABASE
17 - ACCESS_ANY_OBJECT_IN_DATABASE
18 - MODIFY_ANY_OBJECT_IN_DATABASE
19 - CHANGE_MAPPING_RULES
20 - USE_GRANTED_BY_CLAUSE
21 - GRANT_REVOKE_ON_ANY_OBJECT
22 - GRANT_REVOKE_ANY_DDL_RIGHT
23 - CREATE_PRIVILEGED_ROLES
24 - GET_DBCRYPT_INFO
25 - MODIFY_EXT_CONN_POOL
26 - REPLICATE_INTO_DATABASE
27 - PROFILE_ANY_ATTACHMENT
RDB$SECURITY_CLASSES
RDB$SECURITY_CLASSES stores the access control lists.
RDB$TIME_ZONES
RDB$TIME_ZONES lists the named time zones supported by the engine. It is a virtual table that is
populated using the current time zone database of the Firebird engine.
RDB$TRANSACTIONS
RDB$TRANSACTIONS stores the states of distributed transactions and other transactions that were
prepared for two-phase commit with an explicit prepare message.
0 - in limbo
1 - committed
2 - rolled back
RDB$TIMESTAMP TIMESTAMP WITH TIME Not used
ZONE
RDB$TRANSACTION_DESCRIPTION BLOB Describes the prepared transaction and
could be a custom message supplied to
isc_prepare_transaction2, even if it is
not a distributed transaction. It may be
used when a lost connection cannot be
restored
RDB$TRIGGERS
RDB$TRIGGERS stores the trigger definitions for all tables and views.
RDB$TRIGGER_TYPE Value
1 before insert
2 after insert
3 before update
4 after update
5 before delete
6 after delete
8192 on connect
8193 on disconnect
Identification of the exact RDB$TRIGGER_TYPE code is a little more complicated, since it is a bitmap,
calculated according to which phase and events are covered and the order in which they are
defined. For the curious, the calculation is explained in this code comment by Mark Rotteveel.
For DDL triggers, the trigger type is obtained by the bitwise OR of the event phase (0 — BEFORE,
1 — AFTER) and all listed event types:
RDB$TRIGGER_MESSAGES
RDB$TRIGGER_MESSAGES stores the trigger messages.
RDB$TYPES
RDB$TYPES stores the defining sets of enumerated types used throughout the system.
0 - TABLE
1 - VIEW
2 - TRIGGER
…
RDB$TYPE_NAME CHAR(63) The name of a member of an
enumerated type, e.g., TABLE, VIEW,
TRIGGER, etc. in the example above. In
the RDB$CHARACTER_SET enumerated type,
RDB$TYPE_NAME stores the names of the
character sets.
RDB$DESCRIPTION BLOB TEXT Any text comments related to the
enumerated type
RDB$SYSTEM_FLAG SMALLINT Flag: indicates whether the type-
member is user-defined (value 0) or
system-defined (value 1 or greater)
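As an illustration, the object-type enumeration shown above can be listed with a query like this (a sketch; RDB$FIELD_NAME holds the name of the enumerated type):

```sql
SELECT RDB$TYPE, RDB$TYPE_NAME
FROM RDB$TYPES
WHERE RDB$FIELD_NAME = 'RDB$OBJECT_TYPE'
ORDER BY RDB$TYPE;
```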
RDB$USER_PRIVILEGES
RDB$USER_PRIVILEGES stores the SQL access privileges for Firebird users and privileged objects.
0 - not included
1 - included
RDB$RELATION_NAME CHAR(63) The name of the object (table, view,
procedure or role) the privilege is
granted ON
RDB$FIELD_NAME CHAR(63) The name of the column the privilege is
applicable to, for a column-level
privilege (an UPDATE or REFERENCES
privilege)
RDB$USER_TYPE SMALLINT Identifies the type of user the privilege
is granted TO (a user, a procedure, a
view, etc.)
0 - table
1 - view
2 - trigger
5 - procedure
7 - exception
8 - user
9 - domain
11 - character set
13 - role
14 - generator (sequence)
15 - function
16 - BLOB filter
17 - collation
18 - package
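For example, the privileges granted on a single object can be inspected like this (a sketch; EMPLOYEE is a placeholder table name):

```sql
SELECT RDB$USER, RDB$PRIVILEGE, RDB$GRANT_OPTION, RDB$FIELD_NAME
FROM RDB$USER_PRIVILEGES
WHERE RDB$RELATION_NAME = 'EMPLOYEE';
```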
RDB$VIEW_RELATIONS
RDB$VIEW_RELATIONS stores the tables that are referred to in view definitions. There is one record for
each table in a view.
0 - table
1 - view
2 - stored procedure
RDB$PACKAGE_NAME CHAR(63) Package name for a stored procedure in
a package
Appendix E: Monitoring Tables
The key notion in understanding the monitoring feature is an activity snapshot. The activity
snapshot represents the current state of the database at the start of the transaction in which the
monitoring table query runs. It delivers a lot of information about the database itself, active
connections, users, transactions, prepared and running queries, and more.
The snapshot is created when any monitoring table is queried for the first time. It is preserved until
the end of the current transaction to maintain a stable, consistent view for queries across multiple
tables, such as a master-detail query. In other words, monitoring tables always behave as though
they were in SNAPSHOT TABLE STABILITY (“consistency”) isolation, even if the current transaction is
started with a lower isolation level.
To refresh the snapshot, the current transaction must be completed and the monitoring tables must
be re-queried in a new transaction context.
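In isql, refreshing the snapshot could look like this (a sketch):

```sql
-- Finish the current transaction; the next monitoring query,
-- running in a new transaction, creates a fresh activity snapshot
COMMIT;
SELECT MON$ATTACHMENT_ID, MON$USER
FROM MON$ATTACHMENTS;
```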
Access Security
• SYSDBA and the database owner have full access to all information available from the
monitoring tables
• Regular users can see information about their own connections; other connections are not
visible to them
MON$CALL_STACK
Calls to the stack by active queries of stored procedures and triggers
MON$COMPILED_STATEMENTS
Virtual table listing compiled statements
MON$CONTEXT_VARIABLES
Information about custom context variables
MON$DATABASE
Information about the database to which the CURRENT_CONNECTION is attached
MON$IO_STATS
Input/output statistics
MON$MEMORY_USAGE
Memory usage statistics
MON$RECORD_STATS
Record-level statistics
MON$STATEMENTS
Statements prepared for execution
MON$TABLE_STATS
Table-level statistics
MON$TRANSACTIONS
Started transactions
MON$ATTACHMENTS
MON$ATTACHMENTS displays information about active attachments to the database.
0 - idle
1 - active
MON$ATTACHMENT_NAME VARCHAR(255) Connection string — the file name and
full path to the primary database file
MON$USER CHAR(63) The name of the user who is using this
connection
MON$ROLE CHAR(63) The role name specified when the
connection was established. If no role
was specified when the connection was
established, the field contains the text
NONE
MON$REMOTE_PROTOCOL VARCHAR(10) Remote protocol name
MON$REMOTE_ADDRESS VARCHAR(255) Remote address (address and server
name)
MON$REMOTE_PID INTEGER Remote client process identifier
0 - normal connection
1 - system connection
MON$IDLE_TIMEOUT INTEGER Connection-level idle timeout in
seconds. When 0 is reported the
database ConnectionIdleTimeout from
databases.conf or firebird.conf applies.
MON$IDLE_TIMER TIMESTAMP WITH TIME ZONE Idle timer expiration time
MON$STATEMENT_TIMEOUT INTEGER Connection-level statement timeout in
milliseconds. When 0 is reported the
database StatementTimeout from
databases.conf or firebird.conf applies.
MON$WIRE_COMPRESSED BOOLEAN Wire compression active (TRUE) or
inactive (FALSE)
MON$WIRE_ENCRYPTED BOOLEAN Wire encryption active (TRUE) or
inactive (FALSE)
MON$WIRE_CRYPT_PLUGIN VARCHAR(63) Name of the wire encryption plugin
used
MON$SESSION_TIMEZONE CHAR(63) Name of the session time zone
Monitoring tables are read-only. However, the server has a built-in mechanism for deleting (and
only deleting) records in the MON$ATTACHMENTS table, which makes it possible to close a connection to
the database.
• All the current activity in the connection being deleted is immediately stopped and all active
transactions are rolled back
• The closed connection will return an error with the isc_att_shutdown code to the application
• Subsequent attempts to use this connection (i.e. use its handle in API calls) will return errors
Termination of system connections (MON$SYSTEM_FLAG = 1) is not possible. The server will skip
system connections in a DELETE FROM MON$ATTACHMENTS.
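For example, all other attachments could be closed like this (a sketch of the deletion mechanism described above):

```sql
-- Disconnect every attachment except the current one;
-- system connections are skipped automatically
DELETE FROM MON$ATTACHMENTS
WHERE MON$ATTACHMENT_ID <> CURRENT_CONNECTION;
```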
MON$COMPILED_STATEMENTS
Virtual table listing compiled statements.
2 - trigger
5 - stored procedure
15 - stored function
MON$PACKAGE_NAME CHAR(63) PSQL object package name
MON$STAT_ID INTEGER Statistics identifier
MON$CALL_STACK
MON$CALL_STACK displays calls to the stack from queries executing in stored procedures and triggers.
2 - trigger
5 - stored procedure
15 - stored function
MON$TIMESTAMP TIMESTAMP WITH TIME ZONE The date and time when the call was started
MON$SOURCE_LINE INTEGER The number of the source line in the
SQL statement being executed at the
moment of the snapshot
MON$SOURCE_COLUMN INTEGER The number of the source column in the
SQL statement being executed at the
moment of the snapshot
MON$STAT_ID INTEGER Statistics identifier
MON$PACKAGE_NAME CHAR(63) Package name for stored procedures or
functions in a package
MON$COMPILED_STATEMENT_ID BIGINT Compiled statement id
Information about calls during the execution of an EXECUTE STATEMENT statement is not reported in the call stack.
Get the call stack for all connections except your own
WITH RECURSIVE
HEAD AS (
SELECT
CALL.MON$STATEMENT_ID, CALL.MON$CALL_ID,
CALL.MON$OBJECT_NAME, CALL.MON$OBJECT_TYPE
FROM MON$CALL_STACK CALL
WHERE CALL.MON$CALLER_ID IS NULL
UNION ALL
SELECT
CALL.MON$STATEMENT_ID, CALL.MON$CALL_ID,
CALL.MON$OBJECT_NAME, CALL.MON$OBJECT_TYPE
FROM MON$CALL_STACK CALL
JOIN HEAD ON CALL.MON$CALLER_ID = HEAD.MON$CALL_ID
)
SELECT MON$ATTACHMENT_ID, MON$OBJECT_NAME, MON$OBJECT_TYPE
FROM HEAD
JOIN MON$STATEMENTS STMT ON STMT.MON$STATEMENT_ID = HEAD.MON$STATEMENT_ID
WHERE STMT.MON$ATTACHMENT_ID <> CURRENT_CONNECTION
MON$CONTEXT_VARIABLES
MON$CONTEXT_VARIABLES displays information about custom context variables.
SELECT
VAR.MON$VARIABLE_NAME,
VAR.MON$VARIABLE_VALUE
FROM MON$CONTEXT_VARIABLES VAR
WHERE VAR.MON$ATTACHMENT_ID = CURRENT_CONNECTION
MON$DATABASE
MON$DATABASE displays the header information from the database the current user is connected to.
0 - normal
1 - stalled
2 - merge
MON$CRYPT_PAGE BIGINT Number of encrypted pages
MON$OWNER CHAR(63) Username of the database owner
MON$SEC_DATABASE CHAR(7) Displays what type of security database is used (Default, Self or Other)
MON$CRYPT_STATE SMALLINT Current state of database encryption:
0 - not encrypted
1 - encrypted
2 - decryption in progress
3 - encryption in progress
MON$GUID CHAR(38) Database GUID (persistent until
restore/fixup)
MON$FILE_ID VARCHAR(255) Unique ID of the database file at the
filesystem level
MON$NEXT_ATTACHMENT BIGINT Current value of the next attachment ID
counter
MON$NEXT_STATEMENT BIGINT Current value of the next statement ID
counter
0 - not a replica
1 - read-only replica
2 - read-write replica
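Since MON$DATABASE always contains exactly one row, a plain SELECT is enough (a sketch using only columns described above):

```sql
SELECT MON$OWNER, MON$SEC_DATABASE, MON$GUID, MON$CRYPT_PAGE
FROM MON$DATABASE;
```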
MON$IO_STATS
MON$IO_STATS displays input/output statistics. The counters are cumulative, by group, for each group
of statistics.
0 - database
1 - connection
2 - transaction
3 - statement
4 - call
MON$PAGE_READS BIGINT Count of database pages read
MON$PAGE_WRITES BIGINT Count of database pages written to
MON$PAGE_FETCHES BIGINT Count of database pages fetched
MON$PAGE_MARKS BIGINT Count of database pages marked
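For example, the page I/O counters of the current connection can be obtained by joining on the statistics identifier (a sketch; MON$STAT_ID links the connection-level rows of the two tables):

```sql
SELECT IO.MON$PAGE_READS, IO.MON$PAGE_WRITES,
       IO.MON$PAGE_FETCHES, IO.MON$PAGE_MARKS
FROM MON$IO_STATS IO
JOIN MON$ATTACHMENTS ATT ON ATT.MON$STAT_ID = IO.MON$STAT_ID
WHERE ATT.MON$ATTACHMENT_ID = CURRENT_CONNECTION;
```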
MON$MEMORY_USAGE
MON$MEMORY_USAGE displays memory usage statistics.
0 - database
1 - connection
2 - transaction
3 - operator
4 - call
Counters associated with the database-level record in MON$DATABASE (MON$STAT_GROUP = 0) display memory allocation for all connections. In Classic and SuperClassic, zero values of these counters indicate that these architectures have no common cache.
Minor memory allocations are not accrued here but are added to the database memory pool
instead.
SELECT
STMT.MON$ATTACHMENT_ID,
STMT.MON$SQL_TEXT,
MEM.MON$MEMORY_USED
FROM MON$MEMORY_USAGE MEM
NATURAL JOIN MON$STATEMENTS STMT
ORDER BY MEM.MON$MEMORY_USED DESC
FETCH FIRST 10 ROWS ONLY
MON$RECORD_STATS
MON$RECORD_STATS displays record-level statistics. The counters are cumulative, by group, for each
group of statistics.
0 - database
1 - connection
2 - transaction
3 - statement
4 - call
MON$RECORD_SEQ_READS BIGINT Count of records read sequentially
MON$RECORD_IDX_READS BIGINT Count of records read via an index
MON$RECORD_INSERTS BIGINT Count of inserted records
MON$RECORD_UPDATES BIGINT Count of updated records
MON$RECORD_DELETES BIGINT Count of deleted records
MON$RECORD_BACKOUTS BIGINT Count of records backed out
MON$RECORD_PURGES BIGINT Count of records purged
MON$RECORD_EXPUNGES BIGINT Count of records expunged
MON$RECORD_LOCKS BIGINT Number of records locked
MON$RECORD_WAITS BIGINT Number of update, delete or lock
attempts on records owned by other
active transactions. Transaction is in
WAIT mode.
MON$RECORD_CONFLICTS BIGINT Number of unsuccessful update, delete
or lock attempts on records owned by
other active transactions. These are
reported as update conflicts.
MON$BACKVERSION_READS BIGINT Number of back-versions read to find
visible records
MON$FRAGMENT_READS BIGINT Number of fragmented records read
MON$RECORD_RPT_READS BIGINT Number of repeated reads of records
MON$RECORD_IMGC BIGINT Number of records processed by the
intermediate garbage collector
MON$STATEMENTS
MON$STATEMENTS displays statements prepared for execution.
0 - idle
1 - active
2 - stalled
MON$TIMESTAMP TIMESTAMP WITH TIME ZONE The date and time when the statement was prepared
MON$SQL_TEXT BLOB TEXT Statement text in SQL
MON$STAT_ID INTEGER Statistics identifier
MON$EXPLAINED_PLAN BLOB TEXT Explained execution plan
MON$STATEMENT_TIMEOUT INTEGER Connection-level statement timeout in
milliseconds. When 0 is reported the
timeout of
MON$ATTACHMENT.MON$STATEMENT_TIMEOUT
for this connection applies.
MON$STATEMENT_TIMER TIMESTAMP WITH TIME ZONE Statement timer expiration time
MON$COMPILED_STATEMENT_ID BIGINT Compiled statement id
The STALLED state indicates that, at the time of the snapshot, the statement had an open cursor and
was waiting for the client to resume fetching rows.
SELECT
ATT.MON$USER,
ATT.MON$REMOTE_ADDRESS,
STMT.MON$SQL_TEXT,
STMT.MON$TIMESTAMP
FROM MON$ATTACHMENTS ATT
JOIN MON$STATEMENTS STMT ON ATT.MON$ATTACHMENT_ID = STMT.MON$ATTACHMENT_ID
WHERE ATT.MON$ATTACHMENT_ID <> CURRENT_CONNECTION
AND STMT.MON$STATE = 1
Monitoring tables are read-only. However, the server has a built-in mechanism for deleting (and
only deleting) records in the MON$STATEMENTS table, which makes it possible to cancel a running
query.
• If no statements are currently being executed in the connection, any attempt to cancel queries
will not proceed
• After a query is cancelled, calling execute/fetch API functions will return an error with the
isc_cancelled code
• Cancellation of the statement does not occur synchronously; it only marks the request for cancellation, and the cancellation itself is performed asynchronously by the server
Example
Cancelling all active queries for the specified connection:
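A minimal sketch (the attachment id 32 is a placeholder for the connection whose queries are to be cancelled):

```sql
DELETE FROM MON$STATEMENTS
WHERE MON$ATTACHMENT_ID = 32;
```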
MON$TABLE_STATS
MON$TABLE_STATS reports table-level statistics.
0 - database
1 - connection
2 - transaction
3 - statement
4 - call
MON$TABLE_NAME CHAR(63) Name of the table
MON$RECORD_STAT_ID INTEGER Link to MON$RECORD_STATS
Getting statistics at the record level for each table for the current connection
SELECT
t.mon$table_name,
r.mon$record_inserts,
r.mon$record_updates,
r.mon$record_deletes,
r.mon$record_backouts,
r.mon$record_purges,
r.mon$record_expunges,
------------------------
r.mon$record_seq_reads,
r.mon$record_idx_reads,
r.mon$record_rpt_reads,
r.mon$backversion_reads,
r.mon$fragment_reads,
------------------------
r.mon$record_locks,
r.mon$record_waits,
r.mon$record_conflicts,
------------------------
a.mon$stat_id
FROM mon$record_stats r
JOIN mon$table_stats t ON r.mon$stat_id = t.mon$record_stat_id
JOIN mon$attachments a ON t.mon$stat_id = a.mon$stat_id
WHERE a.mon$attachment_id = CURRENT_CONNECTION
MON$TRANSACTIONS
MON$TRANSACTIONS reports started transactions.
0 - idle
1 - active
MON$TIMESTAMP TIMESTAMP WITH TIME ZONE The date and time when the transaction was started
MON$TOP_TRANSACTION BIGINT Top-level transaction identifier
(number)
MON$OLDEST_TRANSACTION BIGINT Transaction ID of the oldest [interesting]
transaction (OIT)
MON$OLDEST_ACTIVE BIGINT Transaction ID of the oldest active
transaction (OAT)
MON$ISOLATION_MODE SMALLINT Isolation mode (level):
0 - consistency
1 - concurrency (snapshot)
2 - read committed (record version)
3 - read committed (no record version)
4 - read committed (read consistency)
MON$LOCK_TIMEOUT SMALLINT Lock timeout:
-1 - wait forever
0 - no waiting
1 or greater - lock timeout in seconds
MON$READ_ONLY SMALLINT Flag indicating whether the transaction
is read-only (value 1) or read-write
(value 0)
Getting all connections that started Read Write transactions with isolation level above Read Committed
SELECT DISTINCT a.*
FROM mon$attachments a
JOIN mon$transactions t ON a.mon$attachment_id = t.mon$attachment_id
WHERE NOT (t.mon$read_only = 1 AND t.mon$isolation_mode >= 2)
Appendix F: Security tables
Security
• SYSDBA, users with the RDB$ADMIN role in the security database and the current database, and
the owner of the security database have full access to all information provided by the security
tables.
• Regular users can only see information about themselves; other users are not visible.
These features are highly dependent on the user management plugin. Keep in
mind that some options are ignored when using a legacy user management plugin.
SEC$GLOBAL_AUTH_MAPPING
Information about global authentication mappings
SEC$USERS
Lists users in the current security database
SEC$USER_ATTRIBUTES
Additional attributes of users
SEC$DB_CREATORS
Lists users and roles granted the CREATE DATABASE privilege.
8 - user
13 - role
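The contents can be listed directly (a sketch; SEC$USER and SEC$USER_TYPE are the table's columns):

```sql
SELECT SEC$USER, SEC$USER_TYPE
FROM SEC$DB_CREATORS;
```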
SEC$GLOBAL_AUTH_MAPPING
Information about global authentication mappings.
0 - USER
1 - ROLE
SEC$MAP_TO CHAR(63) The name to map to
SEC$DESCRIPTION BLOB TEXT Comment on the mapping
SEC$USERS
Lists users in the current security database.
Multiple users can exist with the same username, each managed by a different
authentication plugin.
SEC$USER_ATTRIBUTES
Additional attributes of users
SELECT
U.SEC$USER_NAME AS LOGIN,
A.SEC$KEY AS TAG,
A.SEC$VALUE AS "VALUE",
U.SEC$PLUGIN AS "PLUGIN"
FROM SEC$USERS U
LEFT JOIN SEC$USER_ATTRIBUTES A
ON U.SEC$USER_NAME = A.SEC$USER_NAME
AND U.SEC$PLUGIN = A.SEC$PLUGIN;
Appendix G: Plugin tables
The plugin tables do not always exist. For example, some tables only exist in the
security database, and other tables will only be created on first use of a plugin.
This appendix only documents plugin tables which are created by plugins included
in a standard Firebird 5.0 deployment.
PLG$PROF_PSQL_STATS
Profiler PSQL statistics
PLG$PROF_PSQL_STATS_VIEW
Profiler aggregated view for PSQL statistics
PLG$PROF_RECORD_SOURCES
Profiler information on record sources
PLG$PROF_RECORD_SOURCE_STATS
Profiler record source statistics
PLG$PROF_RECORD_SOURCE_STATS_VIEW
Profiler aggregated view for record source statistics
PLG$PROF_REQUESTS
Profiler information on requests
PLG$PROF_SESSIONS
Profiler sessions
PLG$PROF_STATEMENTS
Profiler information on statements
PLG$PROF_STATEMENT_STATS_VIEW
Profiler aggregated view for statement statistics
PLG$SRP
Users and authentication information of the Srp user manager
PLG$USERS
User and authentication information of the Legacy_UserManager user manager
PLG$PROF_CURSORS
Profiler information on cursors.
PLG$PROF_PSQL_STATS
Profiler PSQL statistics.
PLG$PROF_PSQL_STATS_VIEW
Profiler aggregated view for PSQL statistics.
PLG$PROF_RECORD_SOURCES
Profiler information on record sources.
PLG$PROF_RECORD_SOURCE_STATS
Profiler record sources statistics.
PLG$PROF_RECORD_SOURCE_STATS_VIEW
Profiler aggregated view for record source statistics.
PLG$PROF_REQUESTS
Profiler information on requests.
PLG$PROF_SESSIONS
Profiler sessions.
PLG$PROF_STATEMENTS
Profiler information on statements.
PLG$PROF_STATEMENT_STATS_VIEW
Profiler aggregated view for statement statistics.
PLG$SRP
User and authentication information of the Srp user manager, used for authentication by the Srp
family of authentication plugins.
PLG$USERS
User and authentication information of the Legacy_UserManager user manager, used for
authentication by the Legacy_Auth authentication plugins.
Appendix H: Character Sets and Collations
〃 〃 〃 CP943C_UNICODE Japanese
CYRL 50 1 CYRL Russian
〃 〃 〃 DB_DEU850 German
〃 〃 〃 DB_ESP850 Spanish
〃 〃 〃 DB_FRA850 French
〃 〃 〃 DB_FRC850 French-Canada
〃 〃 〃 DB_ITA850 Italian
〃 〃 〃 DB_NLD850 Dutch
〃 〃 〃 DB_PTB850 Portuguese-Brazil
〃 〃 〃 DB_SVE850 Swedish
〃 〃 〃 GB18030_UNICODE Chinese
〃 〃 〃 GBK_UNICODE Chinese
GB_2312 57 2 GB_2312 Simplified Chinese (Hong Kong, PRC)
ISO8859_1 21 1 ISO8859_1 Latin I
〃 〃 〃 DA_DA Danish
〃 〃 〃 DE_DE German
〃 〃 〃 DU_NL Dutch
〃 〃 〃 ES_ES Spanish
〃 〃 〃 FI_FI Finnish
〃 〃 〃 FR_CA French-Canada
〃 〃 〃 FR_FR French
〃 〃 〃 IS_IS Icelandic
〃 〃 〃 IT_IT Italian
〃 〃 〃 NO_NO Norwegian
〃 〃 〃 PT_BR Portuguese-Brazil
〃 〃 〃 PT_PT Portuguese
〃 〃 〃 SV_SV Swedish
ISO8859_2 22 1 ISO8859_2 Latin 2 — Central Europe
(Croatian, Czech, Hungarian,
Polish, Romanian, Serbian, Slovak,
Slovenian)
〃 〃 〃 CS_CZ Czech
〃 〃 〃 ISO_PLK Polish
〃 〃 〃 LT_LT Lithuanian
KOI8R 63 1 KOI8R Russian — dictionary ordering
〃 〃 〃 KOI8R_RU Russian
KOI8U 64 1 KOI8U Ukrainian — dictionary ordering
〃 〃 〃 KOI8U_UA Ukrainian
KSC_5601 44 2 KSC_5601 Korean
〃 〃 〃 NXT_DEU German
〃 〃 〃 NXT_ESP Spanish
〃 〃 〃 NXT_FRA French
〃 〃 〃 NXT_ITA Italian
〃 〃 〃 TIS620_UNICODE Thai
UNICODE_FSS 3 3 UNICODE_FSS All English
〃 〃 〃 BS_BA Bosnian
〃 〃 〃 PXW_CSY Czech
〃 〃 〃 PXW_PLK Polish
〃 〃 〃 PXW_SLOV Slovenian
〃 〃 〃 WIN_CZ Czech
〃 〃 〃 WIN1251_UA Ukrainian
WIN1252 53 1 WIN1252 ANSI — Latin I
Appendix I: License notice
The Original Documentation is titled Firebird 5.0 Language Reference. This Documentation was
derived from Firebird 4.0 Language Reference.
The Initial Writers of the Original Documentation are: Paul Vinkenoog, Dmitry Yemanov, Thomas
Woinke and Mark Rotteveel. Writers of text originally in Russian are Denis Simonov, Dmitry
Filippov, Alexander Karpeykin, Alexey Kovyazin and Dmitry Kuzmenko.
Copyright © 2008-2024. All Rights Reserved. Initial Writers contact: paul at vinkenoog dot nl.
Writers and Editors of included PDL-licensed material are: J. Beesley, Helen Borrie, Arno Brinkman,
Frank Ingermann, Vlad Khorsun, Alex Peshkov, Nickolay Samofatov, Adriano dos Santos Fernandes,
Dmitry Yemanov.
Included portions are Copyright © 2001-2024 by their respective authors. All Rights Reserved.
Portions created by Mark Rotteveel are Copyright © 2018-2024. All Rights Reserved. (Contributor
contact(s): mrotteveel at users dot sourceforge dot net).
Appendix J: Document History
Revision History
0.7  17 Jan 2024  MR
• Changed note regarding SKIP LOCKED to (once again) match release notes
• Added columns RDB$CONDITION_BLR and RDB$CONDITION_SOURCE to RDB$INDICES (#198)
• Documented that OVERRIDING USER VALUE also works for GENERATED ALWAYS
identity columns
• Document OPTIMIZE FOR {FIRST | ALL} ROWS on SELECT and SET OPTIMIZE
• Updated SQLCODE and GDSCODE Error Codes and Message Texts with
error information from 5.0.0.1068
• Replaced mention that implicit join is deprecated and might get removed;
its use is merely discouraged.
• Removed “Available in” sections if they listed both DSQL and PSQL
• Miscellaneous copy-editing
0.2  10 May 2023  MR
• Documented “standard” plugin tables in new appendix Plugin tables
• Removed Upgraders: PLEASE READ! sidebar from Built-in Scalar Functions,
the Possible name conflict sections from function descriptions and the
Name Clash note on LOWER()
• Integrated (most) changes from the Firebird 5.0 beta 1 release notes
• Documented PLAN, ORDER BY and ROWS for UPDATE OR INSERT and PLAN and
ORDER BY for MERGE
0.1  05 May 2023  MR  Copied the Firebird 4.0 Language Reference as a starting point:
• renamed files and reference using fblangref40 to fblangref50