PostgreSQL 8.0.0 Documentation
Legal Notice
PostgreSQL is Copyright © 1996-2005 by the PostgreSQL Global Development Group and is distributed under the terms of the license of the
University of California below.
Postgres95 is Copyright © 1994-5 by the Regents of the University of California.
Permission to use, copy, modify, and distribute this software and its documentation for any purpose, without fee, and without a written agreement
is hereby granted, provided that the above copyright notice and this paragraph and the following two paragraphs appear in all copies.
IN NO EVENT SHALL THE UNIVERSITY OF CALIFORNIA BE LIABLE TO ANY PARTY FOR DIRECT, INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, INCLUDING LOST PROFITS, ARISING OUT OF THE USE OF THIS SOFTWARE AND ITS DOCUMENTATION, EVEN IF THE UNIVERSITY OF CALIFORNIA HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
THE UNIVERSITY OF CALIFORNIA SPECIFICALLY DISCLAIMS ANY WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE SOFTWARE PROVIDED HEREUNDER IS ON AN “AS-IS” BASIS, AND THE UNIVERSITY OF CALIFORNIA HAS NO OBLIGATIONS TO PROVIDE MAINTENANCE, SUPPORT, UPDATES, ENHANCEMENTS, OR MODIFICATIONS.
Table of Contents
Preface .........................................................................................................................................................i
1. What is PostgreSQL? ......................................................................................................................i
2. A Brief History of PostgreSQL..................................................................................................... ii
2.1. The Berkeley POSTGRES Project ................................................................................... ii
2.2. Postgres95........................................................................................................................ iii
2.3. PostgreSQL...................................................................................................................... iii
3. Conventions.................................................................................................................................. iii
4. Further Information.......................................................................................................................iv
5. Bug Reporting Guidelines.............................................................................................................iv
5.1. Identifying Bugs ................................................................................................................v
5.2. What to report....................................................................................................................v
5.3. Where to report bugs ...................................................................................................... vii
I. Tutorial....................................................................................................................................................1
1. Getting Started ...............................................................................................................................1
1.1. Installation .........................................................................................................................1
1.2. Architectural Fundamentals...............................................................................................1
1.3. Creating a Database...........................................................................................................2
1.4. Accessing a Database ........................................................................................................3
2. The SQL Language ........................................................................................................................6
2.1. Introduction .......................................................................................................................6
2.2. Concepts ............................................................................................................................6
2.3. Creating a New Table ........................................................................................................6
2.4. Populating a Table With Rows ..........................................................................................7
2.5. Querying a Table ...............................................................................................................8
2.6. Joins Between Tables.......................................................................................................10
2.7. Aggregate Functions........................................................................................................12
2.8. Updates ............................................................................................................................14
2.9. Deletions..........................................................................................................................14
3. Advanced Features .......................................................................................................................16
3.1. Introduction .....................................................................................................................16
3.2. Views ...............................................................................................................................16
3.3. Foreign Keys....................................................................................................................16
3.4. Transactions.....................................................................................................................17
3.5. Inheritance .......................................................................................................................19
3.6. Conclusion.......................................................................................................................21
II. The SQL Language.............................................................................................................................22
4. SQL Syntax ..................................................................................................................................24
4.1. Lexical Structure..............................................................................................................24
4.1.1. Identifiers and Key Words...................................................................................24
4.1.2. Constants.............................................................................................................25
4.1.2.1. String Constants .....................................................................................25
4.1.2.2. Dollar-Quoted String Constants .............................................................26
4.1.2.3. Bit-String Constants ...............................................................................27
4.1.2.4. Numeric Constants .................................................................................27
4.1.2.5. Constants of Other Types .......................................................................28
4.1.3. Operators.............................................................................................................28
4.1.4. Special Characters...............................................................................................29
4.1.5. Comments ...........................................................................................................30
4.1.6. Lexical Precedence .............................................................................................30
4.2. Value Expressions............................................................................................................31
4.2.1. Column References.............................................................................................32
4.2.2. Positional Parameters..........................................................................................32
4.2.3. Subscripts............................................................................................................33
4.2.4. Field Selection ....................................................................................................33
4.2.5. Operator Invocations...........................................................................................33
4.2.6. Function Calls .....................................................................................................34
4.2.7. Aggregate Expressions........................................................................................34
4.2.8. Type Casts ...........................................................................................................35
4.2.9. Scalar Subqueries................................................................................................36
4.2.10. Array Constructors............................................................................................36
4.2.11. Row Constructors..............................................................................................37
4.2.12. Expression Evaluation Rules ............................................................................38
5. Data Definition .............................................................................................................................40
5.1. Table Basics.....................................................................................................................40
5.2. Default Values .................................................................................................................41
5.3. Constraints.......................................................................................................................42
5.3.1. Check Constraints ...............................................................................................42
5.3.2. Not-Null Constraints...........................................................................................44
5.3.3. Unique Constraints..............................................................................................45
5.3.4. Primary Keys.......................................................................................................46
5.3.5. Foreign Keys .......................................................................................................47
5.4. System Columns..............................................................................................................49
5.5. Inheritance .......................................................................................................................50
5.6. Modifying Tables.............................................................................................................53
5.6.1. Adding a Column................................................................................................53
5.6.2. Removing a Column ...........................................................................................54
5.6.3. Adding a Constraint ............................................................................................54
5.6.4. Removing a Constraint .......................................................................................54
5.6.5. Changing a Column’s Default Value...................................................................55
5.6.6. Changing a Column’s Data Type ........................................................................55
5.6.7. Renaming a Column ...........................................................................................55
5.6.8. Renaming a Table ...............................................................................................55
5.7. Privileges .........................................................................................................................56
5.8. Schemas...........................................................................................................................56
5.8.1. Creating a Schema ..............................................................................................57
5.8.2. The Public Schema .............................................................................................58
5.8.3. The Schema Search Path.....................................................................................58
5.8.4. Schemas and Privileges.......................................................................................59
5.8.5. The System Catalog Schema ..............................................................................60
5.8.6. Usage Patterns.....................................................................................................60
5.8.7. Portability............................................................................................................61
5.9. Other Database Objects ...................................................................................................61
5.10. Dependency Tracking....................................................................................................61
6. Data Manipulation........................................................................................................................63
6.1. Inserting Data ..................................................................................................................63
6.2. Updating Data..................................................................................................................64
6.3. Deleting Data...................................................................................................................65
7. Queries .........................................................................................................................................66
7.1. Overview .........................................................................................................................66
7.2. Table Expressions ............................................................................................................66
7.2.1. The FROM Clause.................................................................................................67
7.2.1.1. Joined Tables ..........................................................................................67
7.2.1.2. Table and Column Aliases......................................................................70
7.2.1.3. Subqueries ..............................................................................................71
7.2.1.4. Table Functions ......................................................................................72
7.2.2. The WHERE Clause...............................................................................................73
7.2.3. The GROUP BY and HAVING Clauses..................................................................74
7.3. Select Lists.......................................................................................................................76
7.3.1. Select-List Items .................................................................................................76
7.3.2. Column Labels ....................................................................................................77
7.3.3. DISTINCT ...........................................................................................................77
7.4. Combining Queries..........................................................................................................77
7.5. Sorting Rows ...................................................................................................................78
7.6. LIMIT and OFFSET..........................................................................................................79
8. Data Types....................................................................................................................................81
8.1. Numeric Types.................................................................................................................82
8.1.1. Integer Types.......................................................................................................83
8.1.2. Arbitrary Precision Numbers ..............................................................................83
8.1.3. Floating-Point Types ...........................................................................................84
8.1.4. Serial Types.........................................................................................................85
8.2. Monetary Types ...............................................................................................................86
8.3. Character Types ...............................................................................................................86
8.4. Binary Data Types ...........................................................................................................88
8.5. Date/Time Types..............................................................................................................90
8.5.1. Date/Time Input ..................................................................................................91
8.5.1.1. Dates.......................................................................................................92
8.5.1.2. Times ......................................................................................................92
8.5.1.3. Time Stamps...........................................................................................93
8.5.1.4. Intervals ..................................................................................................94
8.5.1.5. Special Values ........................................................................................94
8.5.2. Date/Time Output ...............................................................................................95
8.5.3. Time Zones .........................................................................................................96
8.5.4. Internals...............................................................................................................97
8.6. Boolean Type...................................................................................................................97
8.7. Geometric Types..............................................................................................................98
8.7.1. Points ..................................................................................................................98
8.7.2. Line Segments.....................................................................................................99
8.7.3. Boxes...................................................................................................................99
8.7.4. Paths....................................................................................................................99
8.7.5. Polygons..............................................................................................................99
8.7.6. Circles ...............................................................................................................100
8.8. Network Address Types.................................................................................................100
8.8.1. inet ..................................................................................................................100
8.8.2. cidr ..................................................................................................................101
8.8.3. inet vs. cidr ...................................................................................................101
8.8.4. macaddr ...........................................................................................................102
8.9. Bit String Types .............................................................................................................102
8.10. Arrays ..........................................................................................................................103
8.10.1. Declaration of Array Types.............................................................................103
8.10.2. Array Value Input............................................................................................104
8.10.3. Accessing Arrays ............................................................................................105
8.10.4. Modifying Arrays............................................................................................107
8.10.5. Searching in Arrays.........................................................................................109
8.10.6. Array Input and Output Syntax.......................................................................110
8.11. Composite Types .........................................................................................................111
8.11.1. Declaration of Composite Types.....................................................................111
8.11.2. Composite Value Input....................................................................................112
8.11.3. Accessing Composite Types ...........................................................................113
8.11.4. Modifying Composite Types...........................................................................114
8.11.5. Composite Type Input and Output Syntax......................................................114
8.12. Object Identifier Types ................................................................................................115
8.13. Pseudo-Types...............................................................................................................117
9. Functions and Operators ............................................................................................................119
9.1. Logical Operators ..........................................................................................................119
9.2. Comparison Operators...................................................................................................119
9.3. Mathematical Functions and Operators.........................................................................121
9.4. String Functions and Operators .....................................................................................124
9.5. Binary String Functions and Operators .........................................................................132
9.6. Bit String Functions and Operators ...............................................................................134
9.7. Pattern Matching ...........................................................................................................135
9.7.1. LIKE ..................................................................................................................135
9.7.2. SIMILAR TO Regular Expressions ...................................................................136
9.7.3. POSIX Regular Expressions .............................................................................137
9.7.3.1. Regular Expression Details ..................................................................138
9.7.3.2. Bracket Expressions .............................................................................140
9.7.3.3. Regular Expression Escapes.................................................................141
9.7.3.4. Regular Expression Metasyntax...........................................................144
9.7.3.5. Regular Expression Matching Rules ....................................................145
9.7.3.6. Limits and Compatibility .....................................................................146
9.7.3.7. Basic Regular Expressions ...................................................................147
9.8. Data Type Formatting Functions ...................................................................................147
9.9. Date/Time Functions and Operators..............................................................................153
9.9.1. EXTRACT, date_part ......................................................................................155
9.9.2. date_trunc .....................................................................................................159
9.9.3. AT TIME ZONE.................................................................................................159
9.9.4. Current Date/Time ............................................................................................160
9.10. Geometric Functions and Operators............................................................................162
9.11. Network Address Functions and Operators.................................................................165
9.12. Sequence Manipulation Functions ..............................................................................167
9.13. Conditional Expressions..............................................................................................169
9.13.1. CASE ................................................................................................................169
9.13.2. COALESCE .......................................................................................................170
9.13.3. NULLIF............................................................................................................171
9.14. Array Functions and Operators ...................................................................................171
9.15. Aggregate Functions....................................................................................................172
9.16. Subquery Expressions .................................................................................................175
9.16.1. EXISTS............................................................................................................175
9.16.2. IN ....................................................................................................................175
9.16.3. NOT IN............................................................................................................176
9.16.4. ANY/SOME ........................................................................................................176
9.16.5. ALL ..................................................................................................................177
9.16.6. Row-wise Comparison....................................................................................178
9.17. Row and Array Comparisons ......................................................................................178
9.17.1. IN ....................................................................................................................178
9.17.2. NOT IN............................................................................................................179
9.17.3. ANY/SOME (array) ............................................................................................179
9.17.4. ALL (array) ......................................................................................................179
9.17.5. Row-wise Comparison....................................................................................180
9.18. Set Returning Functions ..............................................................................................180
9.19. System Information Functions ....................................................................................181
9.20. System Administration Functions ...............................................................................186
10. Type Conversion.......................................................................................................................189
10.1. Overview .....................................................................................................................189
10.2. Operators .....................................................................................................................190
10.3. Functions .....................................................................................................................193
10.4. Value Storage...............................................................................................................196
10.5. UNION, CASE, and ARRAY Constructs ..........................................................................196
11. Indexes .....................................................................................................................................199
11.1. Introduction .................................................................................................................199
11.2. Index Types..................................................................................................................200
11.3. Multicolumn Indexes...................................................................................................201
11.4. Unique Indexes ............................................................................................................202
11.5. Indexes on Expressions ...............................................................................................202
11.6. Operator Classes..........................................................................................................203
11.7. Partial Indexes .............................................................................................................204
11.8. Examining Index Usage...............................................................................................206
12. Concurrency Control................................................................................................................208
12.1. Introduction .................................................................................................................208
12.2. Transaction Isolation ...................................................................................................208
12.2.1. Read Committed Isolation Level ....................................................................209
12.2.2. Serializable Isolation Level.............................................................................210
12.2.2.1. Serializable Isolation versus True Serializability ...............................211
12.3. Explicit Locking ..........................................................................................................212
12.3.1. Table-Level Locks...........................................................................................212
12.3.2. Row-Level Locks ............................................................................................213
12.3.3. Deadlocks........................................................................................................214
12.4. Data Consistency Checks at the Application Level.....................................................215
12.5. Locking and Indexes....................................................................................................215
13. Performance Tips .....................................................................................................................217
13.1. Using EXPLAIN ...........................................................................................................217
13.2. Statistics Used by the Planner .....................................................................................220
13.3. Controlling the Planner with Explicit JOIN Clauses...................................................222
13.4. Populating a Database .................................................................................................223
13.4.1. Disable Autocommit .......................................................................................224
13.4.2. Use COPY .........................................................................................................224
13.4.3. Remove Indexes ..............................................................................................224
13.4.4. Increase maintenance_work_mem ...............................................................224
13.4.5. Increase checkpoint_segments .................................................................224
13.4.6. Run ANALYZE Afterwards...............................................................................225
III. Server Administration ....................................................................................................................226
14. Installation Instructions............................................................................................................228
14.1. Short Version ...............................................................................................................228
14.2. Requirements...............................................................................................................228
14.3. Getting The Source......................................................................................................230
14.4. If You Are Upgrading..................................................................................................230
14.5. Installation Procedure..................................................................................................232
14.6. Post-Installation Setup.................................................................................................237
14.6.1. Shared Libraries ..............................................................................................237
14.6.2. Environment Variables....................................................................................238
14.7. Supported Platforms ....................................................................................................239
15. Client-Only Installation on Windows.......................................................................................245
16. Server Run-time Environment .................................................................................................246
16.1. The PostgreSQL User Account ...................................................................................246
16.2. Creating a Database Cluster ........................................................................................246
16.3. Starting the Database Server........................................................................................247
16.3.1. Server Start-up Failures ..................................................................................248
16.3.2. Client Connection Problems ...........................................................................249
16.4. Run-time Configuration...............................................................................................250
16.4.1. File Locations..................................................................................................251
16.4.2. Connections and Authentication .....................................................................252
16.4.2.1. Connection Settings............................................................................252
16.4.2.2. Security and Authentication ...............................................................253
16.4.3. Resource Consumption ...................................................................................254
16.4.3.1. Memory ..............................................................................................254
16.4.3.2. Free Space Map ..................................................................................255
16.4.3.3. Kernel Resource Usage ......................................................................255
16.4.3.4. Cost-Based Vacuum Delay.................................................................256
16.4.3.5. Background Writer .............................................................................257
16.4.4. Write Ahead Log.............................................................................................258
16.4.4.1. Settings ...............................................................................................258
16.4.4.2. Checkpoints........................................................................................259
16.4.4.3. Archiving............................................................................................259
16.4.5. Query Planning ...............................................................................................260
16.4.5.1. Planner Method Configuration ...........................................................260
16.4.5.2. Planner Cost Constants.......................................................................261
16.4.5.3. Genetic Query Optimizer ...................................................................261
16.4.5.4. Other Planner Options ........................................................................262
16.4.6. Error Reporting and Logging..........................................................................263
16.4.6.1. Where to log .......................................................................................263
16.4.6.2. When To Log......................................................................................264
16.4.6.3. What To Log.......................................................................................266
16.4.7. Runtime Statistics ...........................................................................................268
16.4.7.1. Statistics Monitoring ..........................................................................268
16.4.7.2. Query and Index Statistics Collector..................................................268
16.4.8. Client Connection Defaults.............................................................................269
16.4.8.1. Statement Behavior ............................................................................269
16.4.8.2. Locale and Formatting........................................................................270
16.4.8.3. Other Defaults ....................................................................................271
16.4.9. Lock Management ..........................................................................................272
16.4.10. Version and Platform Compatibility .............................................................273
16.4.10.1. Previous PostgreSQL Versions.........................................................273
16.4.10.2. Platform and Client Compatibility ...................................................273
16.4.11. Preset Options ...............................................................................................274
16.4.12. Customized Options......................................................................................275
16.4.13. Developer Options ........................................................................................276
16.4.14. Short Options ................................................................................................277
16.5. Managing Kernel Resources........................................................................................277
16.5.1. Shared Memory and Semaphores ...................................................................277
16.5.2. Resource Limits ..............................................................................................282
16.5.3. Linux Memory Overcommit ...........................................................................283
16.6. Shutting Down the Server............................................................................................284
16.7. Secure TCP/IP Connections with SSL ........................................................................284
16.8. Secure TCP/IP Connections with SSH Tunnels ..........................................................285
17. Database Users and Privileges .................................................................................................287
17.1. Database Users ............................................................................................................287
17.2. User Attributes.............................................................................................................288
17.3. Groups .........................................................................................................................288
17.4. Privileges .....................................................................................................................289
17.5. Functions and Triggers ................................................................................................289
18. Managing Databases ................................................................................................................291
18.1. Overview .....................................................................................................................291
18.2. Creating a Database.....................................................................................................291
18.3. Template Databases .....................................................................................................292
18.4. Database Configuration ...............................................................................................293
18.5. Destroying a Database .................................................................................................294
18.6. Tablespaces..................................................................................................................294
19. Client Authentication ...............................................................................................................297
19.1. The pg_hba.conf file ................................................................................................297
19.2. Authentication methods...............................................................................................302
19.2.1. Trust authentication.........................................................................................302
19.2.2. Password authentication..................................................................................302
19.2.3. Kerberos authentication ..................................................................................302
19.2.4. Ident-based authentication ..............................................................................303
19.2.4.1. Ident Authentication over TCP/IP ......................................................303
19.2.4.2. Ident Authentication over Local Sockets ...........................................304
19.2.4.3. Ident Maps..........................................................................................304
19.2.5. PAM authentication.........................................................................................305
19.3. Authentication problems .............................................................................................305
20. Localization..............................................................................................................................307
20.1. Locale Support.............................................................................................................307
20.1.1. Overview.........................................................................................................307
20.1.2. Behavior ..........................................................................................................308
20.1.3. Problems .........................................................................................................309
20.2. Character Set Support..................................................................................................309
20.2.1. Supported Character Sets................................................................................309
20.2.2. Setting the Character Set.................................................................................310
20.2.3. Automatic Character Set Conversion Between Server and Client..................311
20.2.4. Further Reading ..............................................................................................314
21. Routine Database Maintenance Tasks......................................................................................315
21.1. Routine Vacuuming .....................................................................................................315
21.1.1. Recovering disk space.....................................................................................315
21.1.2. Updating planner statistics..............................................................................316
21.1.3. Preventing transaction ID wraparound failures ..............................................317
21.2. Routine Reindexing .....................................................................................................319
21.3. Log File Maintenance..................................................................................................319
22. Backup and Restore .................................................................................................................321
22.1. SQL Dump...................................................................................................................321
22.1.1. Restoring the dump .........................................................................................321
22.1.2. Using pg_dumpall...........................................................................................322
22.1.3. Handling large databases ................................................................................322
22.1.4. Caveats ............................................................................................................323
22.2. File system level backup..............................................................................................323
22.3. On-line backup and point-in-time recovery (PITR) ....................................................324
22.3.1. Setting up WAL archiving...............................................................................325
22.3.2. Making a Base Backup ...................................................................................327
22.3.3. Recovering with an On-line Backup...............................................................328
22.3.3.1. Recovery Settings...............................................................................330
22.3.4. Timelines.........................................................................................................331
22.3.5. Caveats ............................................................................................................332
22.4. Migration Between Releases .......................................................................................332
23. Monitoring Database Activity..................................................................................................334
23.1. Standard Unix Tools ....................................................................................................334
23.2. The Statistics Collector................................................................................................334
23.2.1. Statistics Collection Configuration .................................................................335
23.2.2. Viewing Collected Statistics ...........................................................................335
23.3. Viewing Locks.............................................................................................................339
24. Monitoring Disk Usage ............................................................................................................341
24.1. Determining Disk Usage .............................................................................................341
24.2. Disk Full Failure..........................................................................................................342
25. Write-Ahead Logging (WAL) ..................................................................................................343
25.1. Benefits of WAL ..........................................................................................................343
25.2. WAL Configuration .....................................................................................................343
25.3. Internals .......................................................................................................................345
26. Regression Tests.......................................................................................................................347
26.1. Running the Tests ........................................................................................................347
26.2. Test Evaluation ............................................................................................................348
26.2.1. Error message differences...............................................................................348
26.2.2. Locale differences ...........................................................................................349
26.2.3. Date and time differences ...............................................................................349
26.2.4. Floating-point differences ...............................................................................349
26.2.5. Row ordering differences................................................................................350
26.2.6. The “random” test ...........................................................................................350
26.3. Platform-specific comparison files ..............................................................................350
IV. Client Interfaces ..............................................................................................................................352
27. libpq - C Library ......................................................................................................................354
27.1. Database Connection Control Functions .....................................................................354
27.2. Connection Status Functions .......................................................................................360
27.3. Command Execution Functions ..................................................................................364
27.3.1. Main Functions ...............................................................................................364
27.3.2. Retrieving Query Result Information .............................................................370
27.3.3. Retrieving Result Information for Other Commands .....................................373
27.3.4. Escaping Strings for Inclusion in SQL Commands ........................................374
27.3.5. Escaping Binary Strings for Inclusion in SQL Commands ............................375
27.4. Asynchronous Command Processing ..........................................................................376
27.5. Cancelling Queries in Progress ...................................................................................380
27.6. The Fast-Path Interface................................................................................................381
27.7. Asynchronous Notification..........................................................................................382
27.8. Functions Associated with the COPY Command .........................................................383
27.8.1. Functions for Sending COPY Data...................................................................384
27.8.2. Functions for Receiving COPY Data................................................................385
27.8.3. Obsolete Functions for COPY ..........................................................................386
27.9. Control Functions ........................................................................................................388
27.10. Notice Processing ......................................................................................................388
27.11. Environment Variables ..............................................................................................390
27.12. The Password File .....................................................................................................391
27.13. SSL Support...............................................................................................................392
27.14. Behavior in Threaded Programs ................................................................................392
27.15. Building libpq Programs............................................................................................392
27.16. Example Programs.....................................................................................................394
28. Large Objects ...........................................................................................................................403
28.1. History .........................................................................................................................403
28.2. Implementation Features .............................................................................................403
28.3. Client Interfaces...........................................................................................................403
28.3.1. Creating a Large Object ..................................................................................403
28.3.2. Importing a Large Object................................................................................404
28.3.3. Exporting a Large Object................................................................................404
28.3.4. Opening an Existing Large Object..................................................................404
28.3.5. Writing Data to a Large Object.......................................................................405
28.3.6. Reading Data from a Large Object .................................................................405
28.3.7. Seeking in a Large Object...............................................................................405
28.3.8. Obtaining the Seek Position of a Large Object...............................................405
28.3.9. Closing a Large Object Descriptor .................................................................405
28.3.10. Removing a Large Object .............................................................................406
28.4. Server-Side Functions..................................................................................................406
28.5. Example Program ........................................................................................................406
29. ECPG - Embedded SQL in C...................................................................................................412
29.1. The Concept.................................................................................................................412
29.2. Connecting to the Database Server..............................................................................412
29.3. Closing a Connection ..................................................................................................413
29.4. Running SQL Commands............................................................................................414
29.5. Choosing a Connection................................................................................................415
29.6. Using Host Variables ...................................................................................................415
29.6.1. Overview.........................................................................................................415
29.6.2. Declare Sections..............................................................................................416
29.6.3. SELECT INTO and FETCH INTO ...................................................................416
29.6.4. Indicators.........................................................................................................417
29.7. Dynamic SQL..............................................................................................................418
29.8. Using SQL Descriptor Areas.......................................................................................419
29.9. Error Handling.............................................................................................................421
29.9.1. Setting Callbacks ............................................................................................421
29.9.2. sqlca ................................................................................................................423
29.9.3. SQLSTATE vs SQLCODE...................................................................................424
29.10. Including Files ...........................................................................................................426
29.11. Processing Embedded SQL Programs.......................................................................427
29.12. Library Functions ......................................................................................................428
29.13. Internals .....................................................................................................................428
30. The Information Schema..........................................................................................................431
30.1. The Schema .................................................................................................................431
30.2. Data Types ...................................................................................................................431
30.3. information_schema_catalog_name ..................................................................432
30.4. applicable_roles...................................................................................................432
30.5. check_constraints ................................................................................................432
30.6. column_domain_usage ............................................................................................433
30.7. column_privileges ................................................................................................433
30.8. column_udt_usage...................................................................................................434
30.9. columns ......................................................................................................................435
30.10. constraint_column_usage .................................................................................439
30.11. constraint_table_usage....................................................................................439
30.12. data_type_privileges ........................................................................................440
30.13. domain_constraints ............................................................................................441
30.14. domain_udt_usage.................................................................................................441
30.15. domains ....................................................................................................................442
30.16. element_types .......................................................................................................445
30.17. enabled_roles .......................................................................................................447
30.18. key_column_usage.................................................................................................448
30.19. parameters..............................................................................................................448
30.20. referential_constraints .................................................................................451
30.21. role_column_grants ............................................................................................452
30.22. role_routine_grants ..........................................................................................452
30.23. role_table_grants ..............................................................................................453
30.24. role_usage_grants ..............................................................................................454
30.25. routine_privileges ............................................................................................454
30.26. routines ..................................................................................................................455
30.27. schemata ..................................................................................................................459
30.28. sql_features .........................................................................................................460
30.29. sql_implementation_info .................................................................................460
30.30. sql_languages .......................................................................................................461
30.31. sql_packages .........................................................................................................462
30.32. sql_sizing..............................................................................................................462
30.33. sql_sizing_profiles ..........................................................................................463
30.34. table_constraints ..............................................................................................463
30.35. table_privileges.................................................................................................464
30.36. tables ......................................................................................................................465
30.37. triggers ..................................................................................................................465
30.38. usage_privileges.................................................................................................467
30.39. view_column_usage ..............................................................................................467
30.40. view_table_usage.................................................................................................468
30.41. views ........................................................................................................................469
V. Server Programming ........................................................................................................................470
31. Extending SQL.........................................................................................................................472
31.1. How Extensibility Works.............................................................................................472
31.2. The PostgreSQL Type System.....................................................................................472
31.2.1. Base Types ......................................................................................................472
31.2.2. Composite Types.............................................................................................472
31.2.3. Domains ..........................................................................................................473
31.2.4. Pseudo-Types ..................................................................................................473
31.2.5. Polymorphic Types .........................................................................................473
31.3. User-Defined Functions...............................................................................................474
31.4. Query Language (SQL) Functions ..............................................................................474
31.4.1. SQL Functions on Base Types ........................................................................475
31.4.2. SQL Functions on Composite Types ..............................................................476
31.4.3. SQL Functions as Table Sources ....................................................................479
31.4.4. SQL Functions Returning Sets .......................................................................480
31.4.5. Polymorphic SQL Functions ..........................................................................481
31.5. Function Overloading ..................................................................................................482
31.6. Function Volatility Categories .....................................................................................483
31.7. Procedural Language Functions ..................................................................................484
31.8. Internal Functions........................................................................................................484
31.9. C-Language Functions.................................................................................................485
31.9.1. Dynamic Loading............................................................................................485
31.9.2. Base Types in C-Language Functions.............................................................486
31.9.3. Calling Conventions Version 0 for C-Language Functions ............................489
31.9.4. Calling Conventions Version 1 for C-Language Functions ............................491
31.9.5. Writing Code...................................................................................................494
31.9.6. Compiling and Linking Dynamically-Loaded Functions ...............................494
31.9.7. Extension Building Infrastructure...................................................................497
31.9.8. Composite-Type Arguments in C-Language Functions..................................499
31.9.9. Returning Rows (Composite Types) from C-Language Functions.................500
31.9.10. Returning Sets from C-Language Functions.................................................501
31.9.11. Polymorphic Arguments and Return Types ..................................................506
31.10. User-Defined Aggregates ..........................................................................................507
31.11. User-Defined Types ...................................................................................................509
31.12. User-Defined Operators.............................................................................................512
31.13. Operator Optimization Information...........................................................................513
31.13.1. COMMUTATOR .................................................................................................514
31.13.2. NEGATOR .......................................................................................................514
31.13.3. RESTRICT .....................................................................................................515
31.13.4. JOIN ..............................................................................................................516
31.13.5. HASHES..........................................................................................................516
31.13.6. MERGES (SORT1, SORT2, LTCMP, GTCMP).....................................................517
31.14. Interfacing Extensions To Indexes.............................................................................518
31.14.1. Index Methods and Operator Classes ...........................................................518
31.14.2. Index Method Strategies ...............................................................................519
31.14.3. Index Method Support Routines ...................................................................520
31.14.4. An Example ..................................................................................................521
31.14.5. Cross-Data-Type Operator Classes ...............................................................524
31.14.6. System Dependencies on Operator Classes ..................................................525
31.14.7. Special Features of Operator Classes............................................................525
32. Triggers ....................................................................................................................................527
32.1. Overview of Trigger Behavior.....................................................................................527
32.2. Visibility of Data Changes...........................................................................................528
32.3. Writing Trigger Functions in C ...................................................................................529
32.4. A Complete Example ..................................................................................................531
33. The Rule System ......................................................................................................................535
33.1. The Query Tree............................................................................................................535
33.2. Views and the Rule System .........................................................................................537
33.2.1. How SELECT Rules Work ...............................................................................537
33.2.2. View Rules in Non-SELECT Statements .........................................................542
33.2.3. The Power of Views in PostgreSQL ...............................................................543
33.2.4. Updating a View..............................................................................................544
33.3. Rules on INSERT, UPDATE, and DELETE ....................................................................544
33.3.1. How Update Rules Work ................................................................................544
33.3.1.1. A First Rule Step by Step...................................................................545
33.3.2. Cooperation with Views..................................................................................549
33.4. Rules and Privileges ....................................................................................................554
33.5. Rules and Command Status.........................................................................................555
33.6. Rules versus Triggers ..................................................................................................556
34. Procedural Languages ..............................................................................................................559
34.1. Installing Procedural Languages .................................................................................559
35. PL/pgSQL - SQL Procedural Language ..................................................................................561
35.1. Overview .....................................................................................................................561
35.1.1. Advantages of Using PL/pgSQL ....................................................................562
35.1.2. Supported Argument and Result Data Types ..................................................562
35.2. Tips for Developing in PL/pgSQL...............................................................................563
35.2.1. Handling of Quotation Marks .........................................................................563
35.3. Structure of PL/pgSQL................................................................................................565
35.4. Declarations.................................................................................................................566
35.4.1. Aliases for Function Parameters .....................................................................567
35.4.2. Copying Types ................................................................................................568
35.4.3. Row Types.......................................................................................................569
35.4.4. Record Types ..................................................................................................569
35.4.5. RENAME............................................................................................................570
35.5. Expressions..................................................................................................................570
35.6. Basic Statements..........................................................................................................571
35.6.1. Assignment .....................................................................................................571
35.6.2. SELECT INTO .................................................................................................572
35.6.3. Executing an Expression or Query With No Result........................................573
35.6.4. Doing Nothing At All .....................................................................................573
35.6.5. Executing Dynamic Commands .....................................................................574
35.6.6. Obtaining the Result Status.............................................................................575
35.7. Control Structures........................................................................................................576
35.7.1. Returning From a Function.............................................................................576
35.7.1.1. RETURN ...............................................................................................576
35.7.1.2. RETURN NEXT ....................................................................................576
35.7.2. Conditionals ....................................................................................................577
35.7.2.1. IF-THEN .............................................................................................577
35.7.2.2. IF-THEN-ELSE ..................................................................................578
35.7.2.3. IF-THEN-ELSE IF............................................................................578
35.7.2.4. IF-THEN-ELSIF-ELSE .....................................................................579
35.7.2.5. IF-THEN-ELSEIF-ELSE ...................................................................579
35.7.3. Simple Loops ..................................................................................................579
35.7.3.1. LOOP ...................................................................................................579
35.7.3.2. EXIT ...................................................................................................580
35.7.3.3. WHILE .................................................................................................580
35.7.3.4. FOR (integer variant)...........................................................................581
35.7.4. Looping Through Query Results ....................................................................581
35.7.5. Trapping Errors ...............................................................................................582
35.8. Cursors.........................................................................................................................584
35.8.1. Declaring Cursor Variables .............................................................................584
35.8.2. Opening Cursors .............................................................................................584
35.8.2.1. OPEN FOR SELECT............................................................................585
35.8.2.2. OPEN FOR EXECUTE .........................................................................585
35.8.2.3. Opening a Bound Cursor....................................................................585
35.8.3. Using Cursors..................................................................................................586
35.8.3.1. FETCH .................................................................................................586
35.8.3.2. CLOSE .................................................................................................586
35.8.3.3. Returning Cursors ..............................................................................586
35.9. Errors and Messages....................................................................................................588
35.10. Trigger Procedures ....................................................................................................589
35.11. Porting from Oracle PL/SQL.....................................................................................594
35.11.1. Porting Examples ..........................................................................................594
35.11.2. Other Things to Watch For............................................................................600
35.11.2.1. Implicit Rollback after Exceptions...................................................601
35.11.2.2. EXECUTE ...........................................................................................601
35.11.2.3. Optimizing PL/pgSQL Functions.....................................................601
35.11.3. Appendix.......................................................................................................601
36. PL/Tcl - Tcl Procedural Language...........................................................................................605
36.1. Overview .....................................................................................................................605
36.2. PL/Tcl Functions and Arguments................................................................................605
36.3. Data Values in PL/Tcl..................................................................................................606
36.4. Global Data in PL/Tcl .................................................................................................607
36.5. Database Access from PL/Tcl .....................................................................................607
36.6. Trigger Procedures in PL/Tcl ......................................................................................609
36.7. Modules and the unknown command..........................................................................611
36.8. Tcl Procedure Names ..................................................................................................611
37. PL/Perl - Perl Procedural Language.........................................................................................612
37.1. PL/Perl Functions and Arguments...............................................................................612
37.2. Database Access from PL/Perl ....................................................................................614
37.3. Data Values in PL/Perl.................................................................................................615
37.4. Global Values in PL/Perl .............................................................................................616
37.5. Trusted and Untrusted PL/Perl ....................................................................................617
37.6. PL/Perl Triggers ..........................................................................................................617
37.7. Limitations and Missing Features ...............................................................................619
38. PL/Python - Python Procedural Language...............................................................................620
38.1. PL/Python Functions ...................................................................................................620
38.2. Trigger Functions ........................................................................................................621
38.3. Database Access ..........................................................................................................621
39. Server Programming Interface .................................................................................................623
39.1. Interface Functions ......................................................................................................623
SPI_connect ................................................................................................................623
SPI_finish....................................................................................................................625
SPI_push .....................................................................................................................626
SPI_pop.......................................................................................................................627
SPI_execute.................................................................................................................628
SPI_exec......................................................................................................................631
SPI_prepare.................................................................................................................632
SPI_getargcount ..........................................................................................................634
SPI_getargtypeid.........................................................................................................635
SPI_is_cursor_plan .....................................................................................................636
SPI_execute_plan........................................................................................................637
SPI_execp....................................................................................................................639
SPI_cursor_open .........................................................................................................640
SPI_cursor_find...........................................................................................................642
SPI_cursor_fetch.........................................................................................................643
SPI_cursor_move ........................................................................................................644
SPI_cursor_close.........................................................................................................645
SPI_saveplan...............................................................................................................646
39.2. Interface Support Functions ........................................................................................647
SPI_fname...................................................................................................................647
SPI_fnumber ...............................................................................................................648
SPI_getvalue ...............................................................................................................649
SPI_getbinval ..............................................................................................................650
SPI_gettype .................................................................................................................651
SPI_gettypeid..............................................................................................................652
SPI_getrelname ...........................................................................................................653
39.3. Memory Management .................................................................................................654
SPI_palloc ...................................................................................................................654
SPI_repalloc................................................................................................................656
SPI_pfree.....................................................................................................................657
SPI_copytuple .............................................................................................................658
SPI_returntuple ...........................................................................................................659
SPI_modifytuple .........................................................................................................660
SPI_freetuple...............................................................................................................662
SPI_freetuptable..........................................................................................................663
SPI_freeplan................................................................................................................664
39.4. Visibility of Data Changes...........................................................................................665
39.5. Examples .....................................................................................................................665
VI. Reference..........................................................................................................................................669
I. SQL Commands..........................................................................................................................671
ABORT.................................................................................................................................672
ALTER AGGREGATE.........................................................................................................674
ALTER CONVERSION.......................................................................................................676
ALTER DATABASE ............................................................................................................678
ALTER DOMAIN ................................................................................................................680
ALTER FUNCTION ............................................................................................................683
ALTER GROUP ...................................................................................................................685
ALTER INDEX ....................................................................................................................687
ALTER LANGUAGE...........................................................................................................689
ALTER OPERATOR ............................................................................................................690
ALTER OPERATOR CLASS...............................................................................................692
ALTER SCHEMA ................................................................................................................693
ALTER SEQUENCE............................................................................................................694
ALTER TABLE ....................................................................................................................696
ALTER TABLESPACE ........................................................................................................703
ALTER TRIGGER ...............................................................................................................705
ALTER TYPE.......................................................................................................................706
ALTER USER ......................................................................................................................707
ANALYZE............................................................................................................................710
BEGIN..................................................................................................................................712
CHECKPOINT.....................................................................................................................714
CLOSE .................................................................................................................................715
CLUSTER ............................................................................................................................717
COMMENT..........................................................................................................................720
COMMIT..............................................................................................................................723
COPY ...................................................................................................................................725
CREATE AGGREGATE ......................................................................................................733
CREATE CAST....................................................................................................................736
CREATE CONSTRAINT TRIGGER ..................................................................................740
CREATE CONVERSION ....................................................................................................741
CREATE DATABASE..........................................................................................................743
CREATE DOMAIN..............................................................................................................746
CREATE FUNCTION..........................................................................................................749
CREATE GROUP.................................................................................................................754
CREATE INDEX..................................................................................................................756
CREATE LANGUAGE ........................................................................................................759
CREATE OPERATOR .........................................................................................................762
CREATE OPERATOR CLASS ............................................................................................766
CREATE RULE....................................................................................................................769
CREATE SCHEMA .............................................................................................................772
CREATE SEQUENCE .........................................................................................................775
CREATE TABLE .................................................................................................................779
CREATE TABLE AS ...........................................................................................................789
CREATE TABLESPACE......................................................................................................791
CREATE TRIGGER.............................................................................................................793
CREATE TYPE ....................................................................................................................796
CREATE USER....................................................................................................................802
CREATE VIEW....................................................................................................................805
DEALLOCATE ....................................................................................................................808
DECLARE............................................................................................................................809
DELETE ...............................................................................................................................812
DROP AGGREGATE...........................................................................................................814
DROP CAST ........................................................................................................................816
DROP CONVERSION.........................................................................................................818
DROP DATABASE ..............................................................................................................819
DROP DOMAIN ..................................................................................................................820
DROP FUNCTION ..............................................................................................................821
DROP GROUP .....................................................................................................................823
DROP INDEX ......................................................................................................................824
DROP LANGUAGE.............................................................................................................825
DROP OPERATOR ..............................................................................................................826
DROP OPERATOR CLASS.................................................................................................828
DROP RULE ........................................................................................................................830
DROP SCHEMA ..................................................................................................................832
DROP SEQUENCE..............................................................................................................834
DROP TABLE ......................................................................................................................835
DROP TABLESPACE ..........................................................................................................837
DROP TRIGGER .................................................................................................................838
DROP TYPE.........................................................................................................................840
DROP USER ........................................................................................................................841
DROP VIEW ........................................................................................................................843
END......................................................................................................................................844
EXECUTE............................................................................................................................846
EXPLAIN .............................................................................................................................848
FETCH .................................................................................................................................851
GRANT ................................................................................................................................855
INSERT ................................................................................................................................860
LISTEN ................................................................................................................................863
LOAD ...................................................................................................................................865
LOCK ...................................................................................................................................866
MOVE...................................................................................................................................869
NOTIFY................................................................................................................................871
PREPARE .............................................................................................................................873
REINDEX.............................................................................................................................875
RELEASE SAVEPOINT......................................................................................................878
RESET..................................................................................................................................880
REVOKE ..............................................................................................................................881
ROLLBACK .........................................................................................................................884
ROLLBACK TO SAVEPOINT ............................................................................................886
SAVEPOINT ........................................................................................................................888
SELECT ...............................................................................................................................890
SELECT INTO .....................................................................................................................902
SET .......................................................................................................................................904
SET CONSTRAINTS ..........................................................................................................907
SET SESSION AUTHORIZATION.....................................................................................908
SET TRANSACTION ..........................................................................................................910
SHOW ..................................................................................................................................912
START TRANSACTION .....................................................................................................915
TRUNCATE .........................................................................................................................916
UNLISTEN...........................................................................................................................917
UPDATE ...............................................................................................................................919
VACUUM .............................................................................................................................922
II. PostgreSQL Client Applications ...............................................................................................925
clusterdb ...............................................................................................................................926
createdb.................................................................................................................................929
createlang..............................................................................................................................932
createuser..............................................................................................................................935
dropdb...................................................................................................................................938
droplang................................................................................................................................941
dropuser ................................................................................................................................944
ecpg.......................................................................................................................................947
pg_config ..............................................................................................................................949
pg_dump ...............................................................................................................................951
pg_dumpall ...........................................................................................................................958
pg_restore .............................................................................................................................962
psql .......................................................................................................................................969
vacuumdb..............................................................................................................................993
III. PostgreSQL Server Applications .............................................................................................996
initdb.....................................................................................................................................997
ipcclean...............................................................................................................................1000
pg_controldata ....................................................................................................................1001
pg_ctl ..................................................................................................................................1002
pg_resetxlog .......................................................................................................................1006
postgres...............................................................................................................................1008
postmaster...........................................................................................................................1012
VII. Internals........................................................................................................................................1018
40. Overview of PostgreSQL Internals ........................................................................................1020
40.1. The Path of a Query...................................................................................................1020
40.2. How Connections are Established .............................................................................1020
40.3. The Parser Stage ........................................................................................................1021
40.3.1. Parser.............................................................................................................1021
40.3.2. Transformation Process.................................................................................1022
40.4. The PostgreSQL Rule System ...................................................................................1022
40.5. Planner/Optimizer......................................................................................................1023
40.5.1. Generating Possible Plans.............................................................................1023
40.6. Executor.....................................................................................................................1024
41. System Catalogs .....................................................................................................................1026
41.1. Overview ...................................................................................................................1026
41.2. pg_aggregate .........................................................................................................1027
41.3. pg_am ........................................................................................................................1028
41.4. pg_amop ....................................................................................................................1029
41.5. pg_amproc ................................................................................................................1029
41.6. pg_attrdef..............................................................................................................1030
41.7. pg_attribute .........................................................................................................1030
41.8. pg_cast ....................................................................................................................1033
41.9. pg_class ..................................................................................................................1034
41.10. pg_constraint .....................................................................................................1037
41.11. pg_conversion .....................................................................................................1038
41.12. pg_database .........................................................................................................1039
41.13. pg_depend ..............................................................................................................1040
41.14. pg_description ...................................................................................................1042
41.15. pg_group ................................................................................................................1042
41.16. pg_index ................................................................................................................1043
41.17. pg_inherits .........................................................................................................1045
41.18. pg_language .........................................................................................................1045
41.19. pg_largeobject ...................................................................................................1046
41.20. pg_listener .........................................................................................................1047
41.21. pg_namespace .......................................................................................................1047
41.22. pg_opclass............................................................................................................1048
41.23. pg_operator .........................................................................................................1049
41.24. pg_proc ..................................................................................................................1050
41.25. pg_rewrite............................................................................................................1052
41.26. pg_shadow ..............................................................................................................1053
41.27. pg_statistic .......................................................................................................1053
41.28. pg_tablespace .....................................................................................................1055
41.29. pg_trigger............................................................................................................1056
41.30. pg_type ..................................................................................................................1057
41.31. System Views ..........................................................................................................1063
41.32. pg_indexes............................................................................................................1064
41.33. pg_locks ................................................................................................................1065
41.34. pg_rules ................................................................................................................1066
41.35. pg_settings .........................................................................................................1066
41.36. pg_stats ................................................................................................................1067
41.37. pg_tables ..............................................................................................................1069
41.38. pg_user ..................................................................................................................1070
41.39. pg_views ................................................................................................................1071
42. Frontend/Backend Protocol....................................................................................................1072
42.1. Overview ...................................................................................................................1072
42.1.1. Messaging Overview.....................................................................................1072
42.1.2. Extended Query Overview............................................................................1073
42.1.3. Formats and Format Codes ...........................................................................1073
42.2. Message Flow ............................................................................................................1074
42.2.1. Start-Up.........................................................................................................1074
42.2.2. Simple Query ................................................................................................1076
42.2.3. Extended Query ............................................................................................1077
42.2.4. Function Call.................................................................................................1080
42.2.5. COPY Operations .........................................................................................1081
42.2.6. Asynchronous Operations.............................................................................1082
42.2.7. Cancelling Requests in Progress...................................................................1082
42.2.8. Termination ...................................................................................................1083
42.2.9. SSL Session Encryption................................................................................1083
42.3. Message Data Types ..................................................................................................1084
42.4. Message Formats .......................................................................................................1085
42.5. Error and Notice Message Fields ..............................................................................1101
42.6. Summary of Changes since Protocol 2.0...................................................................1103
43. PostgreSQL Coding Conventions ..........................................................................................1105
43.1. Formatting .................................................................................................................1105
43.2. Reporting Errors Within the Server...........................................................................1105
43.3. Error Message Style Guide........................................................................................1107
43.3.1. What goes where...........................................................................................1108
43.3.2. Formatting.....................................................................................................1108
43.3.3. Quotation marks............................................................................................1108
43.3.4. Use of quotes.................................................................................................1109
43.3.5. Grammar and punctuation.............................................................................1109
43.3.6. Upper case vs. lower case .............................................................................1109
43.3.7. Avoid passive voice.......................................................................................1109
43.3.8. Present vs past tense......................................................................................1109
43.3.9. Type of the object..........................................................................................1110
43.3.10. Brackets.......................................................................................................1110
43.3.11. Assembling error messages.........................................................................1110
43.3.12. Reasons for errors .......................................................................................1110
43.3.13. Function names ...........................................................................................1111
43.3.14. Tricky words to avoid .................................................................................1111
43.3.15. Proper spelling ............................................................................................1111
43.3.16. Localization.................................................................................................1112
44. Native Language Support.......................................................................................................1113
44.1. For the Translator ......................................................................................................1113
44.1.1. Requirements ................................................................................................1113
44.1.2. Concepts........................................................................................................1113
44.1.3. Creating and maintaining message catalogs .................................................1114
44.1.4. Editing the PO files .......................................................................................1115
44.2. For the Programmer...................................................................................................1116
44.2.1. Mechanics .....................................................................................................1116
44.2.2. Message-writing guidelines ..........................................................................1117
45. Writing A Procedural Language Handler ..............................................................................1119
46. Genetic Query Optimizer .......................................................................................................1121
46.1. Query Handling as a Complex Optimization Problem..............................................1121
46.2. Genetic Algorithms ...................................................................................................1121
46.3. Genetic Query Optimization (GEQO) in PostgreSQL ..............................................1122
46.3.1. Future Implementation Tasks for PostgreSQL GEQO .................................1123
46.4. Further Reading .........................................................................................................1123
47. Index Cost Estimation Functions ...........................................................................................1125
48. GiST Indexes..........................................................................................................................1128
48.1. Introduction ...............................................................................................................1128
48.2. Extensibility...............................................................................................................1128
48.3. Implementation..........................................................................................................1128
48.4. Limitations.................................................................................................................1129
48.5. Examples ...................................................................................................................1129
49. Database Physical Storage .....................................................................................................1131
49.1. Database File Layout.................................................................................................1131
49.2. TOAST ......................................................................................................................1132
49.3. Database Page Layout ...............................................................................................1134
50. BKI Backend Interface...........................................................................................................1137
50.1. BKI File Format ........................................................................................................1137
50.2. BKI Commands .........................................................................................................1137
50.3. Example.....................................................................................................................1138
VIII. Appendixes..................................................................................................................................1139
A. PostgreSQL Error Codes.........................................................................................................1140
B. Date/Time Support ..................................................................................................................1147
B.1. Date/Time Input Interpretation ...................................................................................1147
B.2. Date/Time Key Words.................................................................................................1148
B.3. History of Units ..........................................................................................................1164
C. SQL Key Words.......................................................................................................................1166
D. SQL Conformance ..................................................................................................................1186
D.1. Supported Features .....................................................................................................1187
D.2. Unsupported Features .................................................................................................1198
E. Release Notes ..........................................................................................................................1206
E.1. Release 8.0 ..................................................................................................................1206
E.1.1. Overview ........................................................................................................1206
E.1.2. Migration to version 8.0 .................................................................................1207
E.1.3. Deprecated Features .......................................................................................1208
E.1.4. Changes ..........................................................................................................1209
E.1.4.1. Performance Improvements ...............................................................1209
E.1.4.2. Server Changes ..................................................................................1211
E.1.4.3. Query Changes...................................................................................1213
E.1.4.4. Object Manipulation Changes ...........................................................1214
E.1.4.5. Utility Command Changes.................................................................1215
E.1.4.6. Data Type and Function Changes ......................................................1216
E.1.4.7. Server-Side Language Changes .........................................................1218
E.1.4.8. psql Changes ......................................................................................1219
E.1.4.9. pg_dump Changes..............................................................................1220
E.1.4.10. libpq Changes ..................................................................................1221
E.1.4.11. Source Code Changes ......................................................................1221
E.1.4.12. Contrib Changes ..............................................................................1222
E.2. Release 7.4.6 ...............................................................................................................1223
E.2.1. Migration to version 7.4.6 ..............................................................................1223
E.2.2. Changes ..........................................................................................................1223
E.3. Release 7.4.5 ...............................................................................................................1224
E.3.1. Migration to version 7.4.5 ..............................................................................1224
E.3.2. Changes ..........................................................................................................1224
E.4. Release 7.4.4 ...............................................................................................................1225
E.4.1. Migration to version 7.4.4 ..............................................................................1225
E.4.2. Changes ..........................................................................................................1225
E.5. Release 7.4.3 ...............................................................................................................1225
E.5.1. Migration to version 7.4.3 ..............................................................................1226
E.5.2. Changes ..........................................................................................................1226
E.6. Release 7.4.2 ...............................................................................................................1226
E.6.1. Migration to version 7.4.2 ..............................................................................1227
E.6.2. Changes ..........................................................................................................1228
E.7. Release 7.4.1 ...............................................................................................................1229
E.7.1. Migration to version 7.4.1 ..............................................................................1229
E.7.2. Changes ..........................................................................................................1229
E.8. Release 7.4 ..................................................................................................................1230
E.8.1. Overview ........................................................................................................1230
E.8.2. Migration to version 7.4 .................................................................................1232
E.8.3. Changes ..........................................................................................................1233
E.8.3.1. Server Operation Changes .................................................................1233
E.8.3.2. Performance Improvements ...............................................................1235
E.8.3.3. Server Configuration Changes ...........................................................1236
E.8.3.4. Query Changes...................................................................................1238
E.8.3.5. Object Manipulation Changes ...........................................................1238
E.8.3.6. Utility Command Changes.................................................................1240
E.8.3.7. Data Type and Function Changes ......................................................1241
E.8.3.8. Server-Side Language Changes .........................................................1243
E.8.3.9. psql Changes ......................................................................................1244
E.8.3.10. pg_dump Changes............................................................................1244
E.8.3.11. libpq Changes ..................................................................................1245
E.8.3.12. JDBC Changes.................................................................................1246
E.8.3.13. Miscellaneous Interface Changes ....................................................1246
E.8.3.14. Source Code Changes ......................................................................1246
E.8.3.15. Contrib Changes ..............................................................................1247
E.9. Release 7.3.8 ...............................................................................................................1248
E.9.1. Migration to version 7.3.8 ..............................................................................1248
E.9.2. Changes ..........................................................................................................1248
E.10. Release 7.3.7 .............................................................................................................1249
E.10.1. Migration to version 7.3.7 ............................................................................1249
E.10.2. Changes ........................................................................................................1249
E.11. Release 7.3.6 .............................................................................................................1249
E.11.1. Migration to version 7.3.6 ............................................................................1249
E.11.2. Changes ........................................................................................................1250
E.12. Release 7.3.5 .............................................................................................................1250
E.12.1. Migration to version 7.3.5 ............................................................................1250
E.12.2. Changes ........................................................................................................1251
E.13. Release 7.3.4 .............................................................................................................1251
E.13.1. Migration to version 7.3.4 ............................................................................1251
E.13.2. Changes ........................................................................................................1252
E.14. Release 7.3.3 .............................................................................................................1252
E.14.1. Migration to version 7.3.3 ............................................................................1252
E.14.2. Changes ........................................................................................................1252
E.15. Release 7.3.2 .............................................................................................................1254
E.15.1. Migration to version 7.3.2 ............................................................................1254
E.15.2. Changes ........................................................................................................1255
E.16. Release 7.3.1 .............................................................................................................1256
E.16.1. Migration to version 7.3.1 ............................................................................1256
E.16.2. Changes ........................................................................................................1256
E.17. Release 7.3 ................................................................................................................1256
E.17.1. Overview ......................................................................................................1256
E.17.2. Migration to version 7.3 ...............................................................................1257
E.17.3. Changes ........................................................................................................1258
E.17.3.1. Server Operation ..............................................................................1258
E.17.3.2. Performance .....................................................................................1258
E.17.3.3. Privileges..........................................................................................1259
E.17.3.4. Server Configuration........................................................................1259
E.17.3.5. Queries .............................................................................................1260
E.17.3.6. Object Manipulation ........................................................................1261
E.17.3.7. Utility Commands............................................................................1261
E.17.3.8. Data Types and Functions................................................................1263
E.17.3.9. Internationalization ..........................................................................1264
E.17.3.10. Server-side Languages ...................................................................1264
E.17.3.11. psql.................................................................................................1265
E.17.3.12. libpq ...............................................................................................1265
E.17.3.13. JDBC..............................................................................................1265
E.17.3.14. Miscellaneous Interfaces................................................................1266
E.17.3.15. Source Code ...................................................................................1266
E.17.3.16. Contrib ...........................................................................................1267
E.18. Release 7.2.6 .............................................................................................................1268
E.18.1. Migration to version 7.2.6 ............................................................................1268
E.18.2. Changes ........................................................................................................1268
E.19. Release 7.2.5 .............................................................................................................1269
E.19.1. Migration to version 7.2.5 ............................................................................1269
E.19.2. Changes ........................................................................................................1269
E.20. Release 7.2.4 .............................................................................................................1270
E.20.1. Migration to version 7.2.4 ............................................................................1270
E.20.2. Changes ........................................................................................................1270
E.21. Release 7.2.3 .............................................................................................................1270
E.21.1. Migration to version 7.2.3 ............................................................................1270
E.21.2. Changes ........................................................................................................1271
E.22. Release 7.2.2 .............................................................................................................1271
E.22.1. Migration to version 7.2.2 ............................................................................1271
E.22.2. Changes ........................................................................................................1271
E.23. Release 7.2.1 .............................................................................................................1272
E.23.1. Migration to version 7.2.1 ............................................................................1272
E.23.2. Changes ........................................................................................................1272
E.24. Release 7.2 ................................................................................................................1272
E.24.1. Overview ......................................................................................................1273
E.24.2. Migration to version 7.2 ...............................................................................1273
E.24.3. Changes ........................................................................................................1274
E.24.3.1. Server Operation ..............................................................................1274
E.24.3.2. Performance .....................................................................................1275
E.24.3.3. Privileges..........................................................................................1275
E.24.3.4. Client Authentication .......................................................................1275
E.24.3.5. Server Configuration........................................................................1275
E.24.3.6. Queries .............................................................................................1276
E.24.3.7. Schema Manipulation ......................................................................1276
E.24.3.8. Utility Commands............................................................................1277
E.24.3.9. Data Types and Functions................................................................1277
E.24.3.10. Internationalization ........................................................................1278
E.24.3.11. PL/pgSQL ......................................................................................1279
E.24.3.12. PL/Perl ...........................................................................................1279
E.24.3.13. PL/Tcl ............................................................................................1279
E.24.3.14. PL/Python ......................................................................................1279
E.24.3.15. psql.................................................................................................1279
E.24.3.16. libpq ...............................................................................................1280
E.24.3.17. JDBC..............................................................................................1280
E.24.3.18. ODBC ............................................................................................1281
E.24.3.19. ECPG .............................................................................................1281
E.24.3.20. Misc. Interfaces..............................................................................1281
E.24.3.21. Build and Install.............................................................................1282
E.24.3.22. Source Code ...................................................................................1282
E.24.3.23. Contrib ...........................................................................................1283
E.25. Release 7.1.3 .............................................................................................................1283
E.25.1. Migration to version 7.1.3 ............................................................................1283
E.25.2. Changes ........................................................................................................1283
E.26. Release 7.1.2 .............................................................................................................1284
E.26.1. Migration to version 7.1.2 ............................................................................1284
E.26.2. Changes ........................................................................................................1284
E.27. Release 7.1.1 .............................................................................................................1284
E.27.1. Migration to version 7.1.1 ............................................................................1284
E.27.2. Changes ........................................................................................................1284
E.28. Release 7.1 ................................................................................................................1285
E.28.1. Migration to version 7.1 ...............................................................................1285
E.28.2. Changes ........................................................................................................1286
E.29. Release 7.0.3 .............................................................................................................1289
E.29.1. Migration to version 7.0.3 ............................................................................1289
E.29.2. Changes ........................................................................................................1289
E.30. Release 7.0.2 .............................................................................................................1290
E.30.1. Migration to version 7.0.2 ............................................................................1291
E.30.2. Changes ........................................................................................................1291
E.31. Release 7.0.1 .............................................................................................................1291
E.31.1. Migration to version 7.0.1 ............................................................................1291
E.31.2. Changes ........................................................................................................1291
E.32. Release 7.0 ................................................................................................................1292
E.32.1. Migration to version 7.0 ...............................................................................1292
E.32.2. Changes ........................................................................................................1293
E.33. Release 6.5.3 .............................................................................................................1299
E.33.1. Migration to version 6.5.3 ............................................................................1299
E.33.2. Changes ........................................................................................................1299
E.34. Release 6.5.2 .............................................................................................................1299
E.34.1. Migration to version 6.5.2 ............................................................................1300
E.34.2. Changes ........................................................................................................1300
E.35. Release 6.5.1 .............................................................................................................1300
E.35.1. Migration to version 6.5.1 ............................................................................1301
E.35.2. Changes ........................................................................................................1301
E.36. Release 6.5 ................................................................................................................1301
E.36.1. Migration to version 6.5 ...............................................................................1302
E.36.1.1. Multiversion Concurrency Control ..................................................1303
E.36.2. Changes ........................................................................................................1303
E.37. Release 6.4.2 .............................................................................................................1306
E.37.1. Migration to version 6.4.2 ............................................................................1307
E.37.2. Changes ........................................................................................................1307
E.38. Release 6.4.1 .............................................................................................................1307
E.38.1. Migration to version 6.4.1 ............................................................................1307
E.38.2. Changes ........................................................................................................1307
E.39. Release 6.4 ................................................................................................................1308
E.39.1. Migration to version 6.4 ...............................................................................1309
E.39.2. Changes ........................................................................................................1309
E.40. Release 6.3.2 .............................................................................................................1312
E.40.1. Changes ........................................................................................................1313
E.41. Release 6.3.1 .............................................................................................................1313
E.41.1. Changes ........................................................................................................1314
E.42. Release 6.3 ................................................................................................................1315
E.42.1. Migration to version 6.3 ...............................................................................1316
E.42.2. Changes ........................................................................................................1316
E.43. Release 6.2.1 .............................................................................................................1319
E.43.1. Migration from version 6.2 to version 6.2.1.................................................1320
E.43.2. Changes ........................................................................................................1320
E.44. Release 6.2 ................................................................................................................1320
E.44.1. Migration from version 6.1 to version 6.2....................................................1320
E.44.2. Migration from version 1.x to version 6.2 ...................................................1321
E.44.3. Changes ........................................................................................................1321
E.45. Release 6.1.1 .............................................................................................................1323
E.45.1. Migration from version 6.1 to version 6.1.1.................................................1323
E.45.2. Changes ........................................................................................................1323
E.46. Release 6.1 ................................................................................................................1324
E.46.1. Migration to version 6.1 ...............................................................................1324
E.46.2. Changes ........................................................................................................1324
E.47. Release 6.0 ................................................................................................................1326
E.47.1. Migration from version 1.09 to version 6.0..................................................1326
E.47.2. Migration from pre-1.09 to version 6.0 ........................................................1327
E.47.3. Changes ........................................................................................................1327
E.48. Release 1.09 ..............................................................................................................1329
E.49. Release 1.02 ..............................................................................................................1329
E.49.1. Migration from version 1.02 to version 1.02.1.............................................1329
E.49.2. Dump/Reload Procedure ..............................................................................1330
E.49.3. Changes ........................................................................................................1330
E.50. Release 1.01 ..............................................................................................................1331
E.50.1. Migration from version 1.0 to version 1.01..................................................1331
E.50.2. Changes ........................................................................................................1332
E.51. Release 1.0 ................................................................................................................1333
E.51.1. Changes ........................................................................................................1333
E.52. Postgres95 Release 0.03............................................................................................1334
E.52.1. Changes ........................................................................................................1334
E.53. Postgres95 Release 0.02............................................................................................1336
E.53.1. Changes ........................................................................................................1337
E.54. Postgres95 Release 0.01............................................................................................1337
F. The CVS Repository ................................................................................................................1339
F.1. Getting The Source Via Anonymous CVS ..................................................................1339
F.2. CVS Tree Organization ...............................................................................................1340
F.3. Getting The Source Via CVSup...................................................................................1342
F.3.1. Preparing A CVSup Client System.................................................................1342
F.3.2. Running a CVSup Client ................................................................................1342
F.3.3. Installing CVSup.............................................................................................1344
F.3.4. Installation from Sources ................................................................................1345
G. Documentation ........................................................................................................................1348
G.1. DocBook.....................................................................................................................1348
G.2. Tool Sets .....................................................................................................................1348
G.2.1. Linux RPM Installation..................................................................................1349
G.2.2. FreeBSD Installation......................................................................................1349
G.2.3. Debian Packages ............................................................................................1350
G.2.4. Manual Installation from Source....................................................................1350
G.2.4.1. Installing OpenJade ...........................................................................1350
G.2.4.2. Installing the DocBook DTD Kit ......................................................1351
G.2.4.3. Installing the DocBook DSSSL Style Sheets ....................................1352
G.2.4.4. Installing JadeTeX .............................................................................1352
G.2.5. Detection by configure ...............................................................................1352
G.3. Building The Documentation .....................................................................................1353
G.3.1. HTML ............................................................................................................1353
G.3.2. Manpages .......................................................................................................1353
G.3.3. Print Output via JadeTex................................................................................1354
G.3.4. Print Output via RTF......................................................................................1354
G.3.5. Plain Text Files...............................................................................................1356
G.3.6. Syntax Check .................................................................................................1356
G.4. Documentation Authoring ..........................................................................................1356
G.4.1. Emacs/PSGML...............................................................................................1356
G.4.2. Other Emacs modes .......................................................................................1358
G.5. Style Guide .................................................................................................................1358
G.5.1. Reference Pages .............................................................................................1358
H. External Projects .....................................................................................................................1361
H.1. Externally Developed Interfaces.................................................................................1361
H.2. Extensions...................................................................................................................1362
Bibliography .........................................................................................................................................1363
Index......................................................................................................................................................1365
List of Tables
4-1. Operator Precedence (decreasing)......................................................................................................30
8-1. Data Types ..........................................................................................................................................81
8-2. Numeric Types....................................................................................................................................82
8-3. Monetary Types ..................................................................................................................................86
8-4. Character Types ..................................................................................................................................86
8-5. Special Character Types .....................................................................................................................88
8-6. Binary Data Types ..............................................................................................................................88
8-7. bytea Literal Escaped Octets ............................................................................................................89
8-8. bytea Output Escaped Octets............................................................................................................90
8-9. Date/Time Types.................................................................................................................................90
8-10. Date Input .........................................................................................................................................92
8-11. Time Input ........................................................................................................................................92
8-12. Time Zone Input ...............................................................................................................................93
8-13. Special Date/Time Inputs .................................................................................................................94
8-14. Date/Time Output Styles ..................................................................................................................95
8-15. Date Order Conventions ...................................................................................................................95
8-16. Geometric Types...............................................................................................................................98
8-17. Network Address Types .................................................................................................................100
8-18. cidr Type Input Examples ............................................................................................................101
8-19. Object Identifier Types ...................................................................................................................116
8-20. Pseudo-Types..................................................................................................................................117
9-1. Comparison Operators......................................................................................................................119
9-2. Mathematical Operators ...................................................................................................................121
9-3. Mathematical Functions ...................................................................................................................122
9-4. Trigonometric Functions ..................................................................................................................123
9-5. SQL String Functions and Operators ...............................................................................................124
9-6. Other String Functions .....................................................................................................................125
9-7. Built-in Conversions.........................................................................................................................129
9-8. SQL Binary String Functions and Operators ...................................................................................132
9-9. Other Binary String Functions .........................................................................................................133
9-10. Bit String Operators........................................................................................................................134
9-11. Regular Expression Match Operators.............................................................................................137
9-12. Regular Expression Atoms .............................................................................................................138
9-13. Regular Expression Quantifiers......................................................................................................139
9-14. Regular Expression Constraints .....................................................................................................140
9-15. Regular Expression Character-Entry Escapes ................................................................................142
9-16. Regular Expression Class-Shorthand Escapes ...............................................................................143
9-17. Regular Expression Constraint Escapes .........................................................................................143
9-18. Regular Expression Back References.............................................................................................143
9-19. ARE Embedded-Option Letters .....................................................................................................144
9-20. Formatting Functions .....................................................................................................................147
9-21. Template Patterns for Date/Time Formatting .................................................................................148
9-22. Template Pattern Modifiers for Date/Time Formatting ..................................................................150
9-23. Template Patterns for Numeric Formatting ....................................................................................151
9-24. to_char Examples ........................................................................................................................152
9-25. Date/Time Operators ......................................................................................................................153
9-26. Date/Time Functions ......................................................................................................................154
9-27. AT TIME ZONE Variants ................................................................................................................160
9-28. Geometric Operators ......................................................................................................................162
9-29. Geometric Functions ......................................................................................................................163
9-30. Geometric Type Conversion Functions ..........................................................................................164
9-31. cidr and inet Operators ..............................................................................................................166
9-32. cidr and inet Functions ..............................................................................................................166
9-33. macaddr Functions ........................................................................................................................167
9-34. Sequence Functions........................................................................................................................167
9-35. array Operators ............................................................................................................................171
9-36. array Functions ............................................................................................................................172
9-37. Aggregate Functions.......................................................................................................................173
9-38. Series Generating Functions...........................................................................................................180
9-39. Session Information Functions.......................................................................................................181
9-40. Access Privilege Inquiry Functions................................................................................................182
9-41. Schema Visibility Inquiry Functions..............................................................................................184
9-42. System Catalog Information Functions..........................................................................................184
9-43. Comment Information Functions ...................................................................................................186
9-44. Configuration Settings Functions ...................................................................................................186
9-45. Backend Signalling Functions........................................................................................................187
9-46. Backup Control Functions..............................................................................................................187
12-1. SQL Transaction Isolation Levels ..................................................................................................208
16-1. Short option key .............................................................................................................................277
16-2. System V IPC parameters...............................................................................................................278
20-1. Server Character Sets .....................................................................................................................309
20-2. Client/Server Character Set Conversions .......................................................................................312
23-1. Standard Statistics Views ...............................................................................................................336
23-2. Statistics Access Functions ............................................................................................................337
30-1. information_schema_catalog_name Columns......................................................................432
30-2. applicable_roles Columns ......................................................................................................432
30-3. check_constraints Columns....................................................................................................432
30-4. column_domain_usage Columns ...............................................................................................433
30-5. column_privileges Columns....................................................................................................433
30-6. column_udt_usage Columns ......................................................................................................434
30-7. columns Columns .........................................................................................................................435
30-8. constraint_column_usage Columns.......................................................................................439
30-9. constraint_table_usage Columns .........................................................................................439
30-10. data_type_privileges Columns ...........................................................................................440
30-11. domain_constraints Columns................................................................................................441
30-12. domain_udt_usage Columns ....................................................................................................442
30-13. domains Columns .......................................................................................................................442
30-14. element_types Columns ..........................................................................................................445
30-15. enabled_roles Columns ..........................................................................................................447
30-16. key_column_usage Columns ....................................................................................................448
30-17. parameters Columns .................................................................................................................448
30-18. referential_constraints Columns.....................................................................................451
30-19. role_column_grants Columns................................................................................................452
30-20. role_routine_grants Columns .............................................................................................452
30-21. role_table_grants Columns..................................................................................................453
30-22. role_usage_grants Columns..................................................................................................454
30-23. routine_privileges Columns................................................................................................454
30-24. routines Columns .....................................................................................................................455
30-25. schemata Columns .....................................................................................................................459
30-26. sql_features Columns.............................................................................................................460
30-27. sql_implementation_info Columns.....................................................................................461
30-28. sql_languages Columns ..........................................................................................................461
30-29. sql_packages Columns.............................................................................................................462
30-30. sql_sizing Columns .................................................................................................................462
30-31. sql_sizing_profiles Columns .............................................................................................463
30-32. table_constraints Columns..................................................................................................463
30-33. table_privileges Columns ....................................................................................................464
30-34. tables Columns..........................................................................................................................465
30-35. triggers Columns .....................................................................................................................466
30-36. usage_privileges Columns ....................................................................................................467
30-37. view_column_usage Columns..................................................................................................467
30-38. view_table_usage Columns ....................................................................................................468
30-39. views Columns............................................................................................................................469
31-1. Equivalent C Types for Built-In SQL Types ..................................................................................488
31-2. B-tree Strategies .............................................................................................................................519
31-3. Hash Strategies ...............................................................................................................................519
31-4. R-tree Strategies .............................................................................................................................520
31-5. B-tree Support Functions................................................................................................................520
31-6. Hash Support Functions .................................................................................................................521
31-7. R-tree Support Functions................................................................................................................521
31-8. GiST Support Functions.................................................................................................................521
41-1. System Catalogs ...........................................................................................................................1026
41-2. pg_aggregate Columns.............................................................................................................1027
41-3. pg_am Columns............................................................................................................................1028
41-4. pg_amop Columns .......................................................................................................................1029
41-5. pg_amproc Columns ...................................................................................................................1029
41-6. pg_attrdef Columns .................................................................................................................1030
41-7. pg_attribute Columns.............................................................................................................1030
41-8. pg_cast Columns .......................................................................................................................1033
41-9. pg_class Columns .....................................................................................................................1034
41-10. pg_constraint Columns ........................................................................................................1037
41-11. pg_conversion Columns ........................................................................................................1038
41-12. pg_database Columns.............................................................................................................1039
41-13. pg_depend Columns .................................................................................................................1041
41-14. pg_description Columns ......................................................................................................1042
41-15. pg_group Columns ...................................................................................................................1043
41-16. pg_index Columns ...................................................................................................................1043
41-17. pg_inherits Columns.............................................................................................................1045
41-18. pg_language Columns.............................................................................................................1045
41-19. pg_largeobject Columns ......................................................................................................1046
41-20. pg_listener Columns.............................................................................................................1047
41-21. pg_namespace Columns...........................................................................................................1048
41-22. pg_opclass Columns ...............................................................................................................1048
41-23. pg_operator Columns.............................................................................................................1049
41-24. pg_proc Columns .....................................................................................................................1050
41-25. pg_rewrite Columns ...............................................................................................................1052
41-26. pg_shadow Columns .................................................................................................................1053
41-27. pg_statistic Columns...........................................................................................................1054
41-28. pg_tablespace Columns ........................................................................................................1055
41-29. pg_trigger Columns ...............................................................................................................1056
41-30. pg_type Columns .....................................................................................................................1057
41-31. System Views .............................................................................................................................1064
41-32. pg_indexes Columns ...............................................................................................................1064
41-33. pg_locks Columns ...................................................................................................................1065
41-34. pg_rules Columns ...................................................................................................................1066
41-35. pg_settings Columns.............................................................................................................1066
41-36. pg_stats Columns ...................................................................................................................1067
41-37. pg_tables Columns .................................................................................................................1070
41-38. pg_user Columns .....................................................................................................................1070
41-39. pg_views Columns ...................................................................................................................1071
49-1. Contents of PGDATA .....................................................................................................................1131
49-2. Overall Page Layout .....................................................................................................................1134
49-3. PageHeaderData Layout...............................................................................................................1135
49-4. HeapTupleHeaderData Layout .....................................................................................................1136
A-1. PostgreSQL Error Codes ...............................................................................................................1140
B-1. Month Names.................................................................................................................................1148
B-2. Day of the Week Names ................................................................................................................1148
B-3. Date/Time Field Modifiers.............................................................................................................1149
B-4. Time Zone Abbreviations for Input ...............................................................................................1149
B-5. Australian Time Zone Abbreviations for Input .............................................................................1152
B-6. Time Zone Names for Setting timezone .....................................................................................1153
C-1. SQL Key Words.............................................................................................................................1166
List of Figures
46-1. Structured Diagram of a Genetic Algorithm ................................................................................1122
List of Examples
8-1. Using the character types ...................................................................................................................88
8-2. Using the boolean type.....................................................................................................................97
8-3. Using the bit string types..................................................................................................................103
10-1. Exponentiation Operator Type Resolution .....................................................................................192
10-2. String Concatenation Operator Type Resolution............................................................................192
10-3. Absolute-Value and Negation Operator Type Resolution ..............................................................192
10-4. Rounding Function Argument Type Resolution.............................................................................194
10-5. Substring Function Type Resolution ..............................................................................................195
10-6. character Storage Type Conversion ...........................................................................................196
10-7. Type Resolution with Underspecified Types in a Union ................................................................197
10-8. Type Resolution in a Simple Union................................................................................................197
10-9. Type Resolution in a Transposed Union.........................................................................................197
11-1. Setting up a Partial Index to Exclude Common Values..................................................................204
11-2. Setting up a Partial Index to Exclude Uninteresting Values...........................................................205
11-3. Setting up a Partial Unique Index...................................................................................................206
19-1. Example pg_hba.conf entries .....................................................................................................300
19-2. An example pg_ident.conf file .................................................................................................305
27-1. libpq Example Program 1...............................................................................................................394
27-2. libpq Example Program 2...............................................................................................................396
27-3. libpq Example Program 3...............................................................................................................399
28-1. Large Objects with libpq Example Program ..................................................................................407
34-1. Manual Installation of PL/pgSQL ..................................................................................................560
35-1. A PL/pgSQL Trigger Procedure.....................................................................................................590
35-2. A PL/pgSQL Trigger Procedure For Auditing ...............................................................................591
35-3. A PL/pgSQL Trigger Procedure For Maintaining A Summary Table ...........................................592
35-4. Porting a Simple Function from PL/SQL to PL/pgSQL ................................................................595
35-5. Porting a Function that Creates Another Function from PL/SQL to PL/pgSQL ...........................595
35-6. Porting a Procedure With String Manipulation and OUT Parameters from PL/SQL to PL/pgSQL ..........597
35-7. Porting a Procedure from PL/SQL to PL/pgSQL...........................................................................599
Preface
This book is the official documentation of PostgreSQL. It is being written by the PostgreSQL develop-
ers and other volunteers in parallel to the development of the PostgreSQL software. It describes all the
functionality that the current version of PostgreSQL officially supports.
To make the large amount of information about PostgreSQL manageable, this book has been organized
in several parts. Each part is targeted at a different class of users, or at users in different stages of their
PostgreSQL experience:
1. What is PostgreSQL?
PostgreSQL is an object-relational database management system (ORDBMS) based on POSTGRES,
Version 4.2 (http://s2k-ftp.CS.Berkeley.EDU:8000/postgres/postgres.html), developed at the University of California at Berkeley Computer Science Department. POST-
GRES pioneered many concepts that only became available in some commercial database systems much
later.
PostgreSQL is an open-source descendant of this original Berkeley code. It supports a large part of the
SQL:2003 standard and offers many modern features:
• complex queries
• foreign keys
• triggers
• views
• transactional integrity
• multiversion concurrency control
Also, PostgreSQL can be extended by the user in many ways, for example by adding new
• data types
• functions
• operators
• aggregate functions
• index methods
• procedural languages
And because of the liberal license, PostgreSQL can be used, modified, and distributed by everyone free
of charge for any purpose, be it private, commercial, or academic.
2.2. Postgres95
In 1994, Andrew Yu and Jolly Chen added a SQL language interpreter to POSTGRES. Under a new
name, Postgres95 was subsequently released to the web to find its own way in the world as an open-
source descendant of the original POSTGRES Berkeley code.
The Postgres95 code was written completely in ANSI C and trimmed in size by 25%. Many internal changes improved
performance and maintainability. Postgres95 release 1.0.x ran about 30-50% faster on the Wisconsin
Benchmark compared to POSTGRES, Version 4.2. Apart from bug fixes, the following were the major
enhancements:
• The query language PostQUEL was replaced with SQL (implemented in the server). Subqueries were
not supported until PostgreSQL (see below), but they could be imitated in Postgres95 with user-defined
SQL functions. Aggregate functions were re-implemented. Support for the GROUP BY query clause was
also added.
• A new program (psql) was provided for interactive SQL queries, which used GNU Readline. This
largely superseded the old monitor program.
• A new front-end library, libpgtcl, supported Tcl-based clients. A sample shell, pgtclsh, provided
new Tcl commands to interface Tcl programs with the Postgres95 server.
• The large-object interface was overhauled. The inversion large objects were the only mechanism for
storing large objects. (The inversion file system was removed.)
• The instance-level rule system was removed. Rules were still available as rewrite rules.
• A short tutorial introducing regular SQL features as well as those of Postgres95 was distributed with
the source code.
• GNU make (instead of BSD make) was used for the build. Also, Postgres95 could be compiled with an
unpatched GCC (data alignment of doubles was fixed).
2.3. PostgreSQL
By 1996, it became clear that the name “Postgres95” would not stand the test of time. We chose a new
name, PostgreSQL, to reflect the relationship between the original POSTGRES and the more recent ver-
sions with SQL capability. At the same time, we set the version numbering to start at 6.0, putting the
numbers back into the sequence originally begun by the Berkeley POSTGRES project.
The emphasis during development of Postgres95 was on identifying and understanding existing problems
in the server code. With PostgreSQL, the emphasis has shifted to augmenting features and capabilities,
although work continues in all areas.
Details about what has happened in PostgreSQL since then can be found in Appendix E.
3. Conventions
This book uses the following typographical conventions to mark certain portions of text: new terms,
foreign phrases, and other important passages are emphasized in italics. Everything that represents in-
put or output of the computer, in particular commands, program code, and screen output, is shown in a
monospaced font (example). Within such passages, italics (example) indicate placeholders; you must
insert an actual value instead of the placeholder. On occasion, parts of program code are emphasized in
bold face (example), if they have been added or changed since the preceding example.
The following conventions are used in the synopsis of a command: brackets ([ and ]) indicate optional
parts. (In the synopsis of a Tcl command, question marks (?) are used instead, as is usual in Tcl.) Braces
({ and }) and vertical lines (|) indicate that you must choose one alternative. Dots (...) mean that the
preceding element can be repeated.
Where it enhances the clarity, SQL commands are preceded by the prompt =>, and shell commands are
preceded by the prompt $. Normally, prompts are not shown, though.
An administrator is generally a person who is in charge of installing and running the server. A user
could be anyone who is using, or wants to use, any part of the PostgreSQL system. These terms should
not be interpreted too narrowly; this book does not have fixed presumptions about system administration
procedures.
4. Further Information
Besides the documentation, that is, this book, there are other resources about PostgreSQL:
FAQs
The FAQ list contains continuously updated answers to frequently asked questions.
READMEs
README files are available for most contributed packages.
Web Site
The PostgreSQL web site (http://www.postgresql.org) carries details on the latest release and other information to make your
work or play with PostgreSQL more productive.
Mailing Lists
The mailing lists are a good place to have your questions answered, to share experiences with other
users, and to contact the developers. Consult the PostgreSQL web site for details.
Yourself!
PostgreSQL is an open-source project. As such, it depends on the user community for ongoing sup-
port. As you begin to use PostgreSQL, you will rely on others for help, either through the documen-
tation or through the mailing lists. Consider contributing your knowledge back. Read the mailing
lists and answer questions. If you learn something which is not in the documentation, write it up and
contribute it. If you add features to the code, contribute them.
• A program terminates with a fatal signal or an operating system error message that would point to a
problem in the program. (A counterexample might be a “disk full” message, since you have to fix that
yourself.)
• A program produces the wrong output for any given input.
• A program refuses to accept valid input (as defined in the documentation).
• A program accepts invalid input without a notice or error message. But keep in mind that your idea of
invalid input might be our idea of an extension or compatibility with traditional practice.
• PostgreSQL fails to compile, build, or install according to the instructions on supported platforms.
Here “program” refers to any executable, not only the backend server.
Being slow or resource-hogging is not necessarily a bug. Read the documentation or ask on one of the
mailing lists for help in tuning your applications. Failing to comply with the SQL standard is not necessarily
a bug either, unless compliance for the specific feature is explicitly claimed.
Before you continue, check on the TODO list and in the FAQ to see if your bug is already known. If you
cannot decode the information on the TODO list, report your problem. The least we can do is make the
TODO list clearer.
Reporting the bare facts is relatively straightforward (you can probably copy and paste them from the screen), but all too often important details
are left out because someone thought it does not matter or the report would be understood anyway.
The following items should be contained in every bug report:
• The exact sequence of steps from program start-up necessary to reproduce the problem. This should
be self-contained; it is not enough to send in a bare SELECT statement without the preceding CREATE
TABLE and INSERT statements, if the output should depend on the data in the tables. We do not have
the time to reverse-engineer your database schema, and if we are supposed to make up our own data we
would probably miss the problem.
The best format for a test case for SQL-related problems is a file that can be run through the psql
frontend that shows the problem. (Be sure to not have anything in your ~/.psqlrc start-up file.) An
easy start at this file is to use pg_dump to dump out the table declarations and data needed to set the
scene, then add the problem query. You are encouraged to minimize the size of your example, but this
is not absolutely necessary. If the bug is reproducible, we will find it either way.
If your application uses some other client interface, such as PHP, then please try to isolate the offending
queries. We will probably not set up a web server to reproduce your problem. In any case remember
to provide the exact input files; do not guess that the problem happens for “large files” or “midsize
databases”, etc. since this information is too inexact to be of use.
• The output you got. Please do not say that it “didn’t work” or “crashed”. If there is an error message,
show it, even if you do not understand it. If the program terminates with an operating system error,
say which. If nothing at all happens, say so. Even if the result of your test case is a program crash or
otherwise obvious it might not happen on our platform. The easiest thing is to copy the output from the
terminal, if possible.
Note: If you are reporting an error message, please obtain the most verbose form of the message.
In psql, say \set VERBOSITY verbose beforehand. If you are extracting the message from the
server log, set the run-time parameter log_error_verbosity to verbose so that all details are logged.
Note: In case of fatal errors, the error message reported by the client might not contain all the
information available. Please also look at the log output of the database server. If you do not keep
your server’s log output, this would be a good time to start doing so.
• The output you expected is very important to state. If you just write “This command gives me that
output.” or “This is not what I expected.”, we might run it ourselves, scan the output, and think it
looks OK and is exactly what we expected. We should not have to spend the time to decode the exact
semantics behind your commands. Especially refrain from merely saying that “This is not what SQL
says/Oracle does.” Digging out the correct behavior from SQL is not a fun undertaking, nor do we all
know how all the other relational databases out there behave. (If your problem is a program crash, you
can obviously omit this item.)
• Any command line options and other start-up options, including any relevant environment variables or
configuration files that you changed from the default. Again, please provide exact information. If you
are using a prepackaged distribution that starts the database server at boot time, you should try to find
out how that is done.
• Anything you did at all differently from the installation instructions.
• The PostgreSQL version. You can run the command SELECT version(); to find out the version of
the server you are connected to. Most executable programs also support a --version option; at least
postmaster --version and psql --version should work. If the function or the options do not
exist then your version is more than old enough to warrant an upgrade. If you run a prepackaged version,
such as RPMs, say so, including any subversion the package may have. If you are talking about a CVS
snapshot, mention that, including its date and time.
If your version is older than 8.0.0 we will almost certainly tell you to upgrade. There are many bug
fixes and improvements in each new release, so it is quite possible that a bug you have encountered in
an older release of PostgreSQL has already been fixed. We can only provide limited support for sites
using older releases of PostgreSQL; if you require more than we can provide, consider acquiring a
commercial support contract.
• Platform information. This includes the kernel name and version, C library, processor, memory infor-
mation, and so on. In most cases it is sufficient to report the vendor and version, but do not assume
everyone knows what exactly “Debian” contains or that everyone runs on Pentiums. If you have instal-
lation problems then information about the toolchain on your machine (compiler, make, and so on) is
also necessary.
Do not be afraid if your bug report becomes rather lengthy. That is a fact of life. It is better to report
everything the first time than to have us squeeze the facts out of you later. On the other hand, if your input
files are huge, it is fair to ask first whether somebody is interested in looking into it.
Do not spend all your time figuring out which changes in the input make the problem go away. This will
probably not help solve it. If it turns out that the bug cannot be fixed right away, you will still have time
to find and share your work-around. Also, once again, do not waste your time guessing why the bug exists.
We will find that out soon enough.
When writing a bug report, please avoid confusing terminology. The software package in total is called
“PostgreSQL”, sometimes “Postgres” for short. If you are specifically talking about the backend server,
mention that, do not just say “PostgreSQL crashes”. A crash of a single backend server process is quite
different from a crash of the parent “postmaster” process; please don’t say “the postmaster crashed” when
you mean a single backend process went down, nor vice versa. Also, client programs such as the interactive
frontend “psql” are completely separate from the backend. Please try to be specific about whether the
problem is on the client or server side.
Note: Due to the unfortunate amount of spam going around, all of the above email addresses are
closed mailing lists. That is, you need to be subscribed to a list to be allowed to post on it. (You need
not be subscribed to use the bug-report web form, however.) If you would like to send mail but do not
want to receive list traffic, you can subscribe and set your subscription option to nomail. For more
information send mail to <[email protected]> with the single word help in the body of the
message.
I. Tutorial
Welcome to the PostgreSQL Tutorial. The following few chapters are intended to give a simple introduc-
tion to PostgreSQL, relational database concepts, and the SQL language to those who are new to any one
of these aspects. We only assume some general knowledge about how to use computers. No particular
Unix or programming experience is required. This part is mainly intended to give you some hands-on
experience with important aspects of the PostgreSQL system. It makes no attempt to be a complete or
thorough treatment of the topics it covers.
After you have worked through this tutorial you might want to move on to reading Part II to gain a more
formal knowledge of the SQL language, or Part IV for information about developing applications for
PostgreSQL. Those who set up and manage their own server should also read Part III.
Chapter 1. Getting Started
1.1. Installation
Before you can use PostgreSQL you need to install it, of course. It is possible that PostgreSQL is already
installed at your site, either because it was included in your operating system distribution or because
the system administrator already installed it. If that is the case, you should obtain information from the
operating system documentation or your system administrator about how to access PostgreSQL.
If you are not sure whether PostgreSQL is already available or whether you can use it for your experimen-
tation then you can install it yourself. Doing so is not hard and it can be a good exercise. PostgreSQL can
be installed by any unprivileged user; no superuser (root) access is required.
If you are installing PostgreSQL yourself, then refer to Chapter 14 for instructions on installation, and
return to this guide when the installation is complete. Be sure to follow closely the section about setting
up the appropriate environment variables.
If your site administrator has not set things up in the default way, you may have some more work to do. For
example, if the database server machine is a remote machine, you will need to set the PGHOST environment
variable to the name of the database server machine. The environment variable PGPORT may also have to
be set. The bottom line is this: if you try to start an application program and it complains that it cannot
connect to the database, you should consult your site administrator or, if that is you, the documentation
to make sure that your environment is properly set up. If you did not understand the preceding paragraph
then read the next section.
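For example, a hypothetical setup in a Bourne-style shell might look like this (the host name is a placeholder; use whatever values your site actually requires):

$ export PGHOST=db.example.com
$ export PGPORT=5432

With these variables set, the client programs described below will contact the server on that host and port.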
1.2. Architectural Fundamentals
Before we proceed, you should understand the basic PostgreSQL system architecture. In database jargon, PostgreSQL uses a client/server model. A PostgreSQL session consists of the following cooperating processes (programs):
• A server process, which manages the database files, accepts connections to the database from client
applications, and performs actions on the database on behalf of the clients. The database server program
is called postmaster.
• The user’s client (frontend) application that wants to perform database operations. Client applications
can be very diverse in nature: a client could be a text-oriented tool, a graphical application, a web server
that accesses the database to display web pages, or a specialized database maintenance tool. Some client
applications are supplied with the PostgreSQL distribution; most are developed by users.
As is typical of client/server applications, the client and the server can be on different hosts. In that case
they communicate over a TCP/IP network connection. You should keep this in mind, because the files that
can be accessed on a client machine might not be accessible (or might only be accessible using a different
file name) on the database server machine.
The PostgreSQL server can handle multiple concurrent connections from clients. For that purpose it starts
(“forks”) a new process for each connection. From that point on, the client and the new server process
communicate without intervention by the original postmaster process. Thus, the postmaster is always
running, waiting for client connections, whereas client and associated server processes come and go. (All
of this is of course invisible to the user. We only mention it here for completeness.)
1.3. Creating a Database
The first test to see whether you can access the database server is to try to create a database. To create a new database, in this example named mydb, use the following command:
$ createdb mydb
CREATE DATABASE
If so, this step was successful and you can skip over the remainder of this section.
If you see a message similar to

createdb: command not found

then PostgreSQL was not installed properly. Either it was not installed at all or the search path was not set
correctly. Try calling the command with an absolute path instead:
$ /usr/local/pgsql/bin/createdb mydb
The path at your site might be different. Contact your site administrator or check back in the installation
instructions to correct the situation.
Another response could be this:
createdb: could not connect to database template1: could not connect to server:
No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/tmp/.s.PGSQL.5432"?
This means that the server was not started, or it was not started where createdb expected it. Again, check
the installation instructions or consult the administrator.
Another response could be this:
createdb: could not connect to database template1: FATAL: user "joe" does not
exist
where your own login name is mentioned. This will happen if the administrator has not created a Post-
greSQL user account for you. (PostgreSQL user accounts are distinct from operating system user ac-
counts.) If you are the administrator, see Chapter 17 for help creating accounts. You will need to become
the operating system user under which PostgreSQL was installed (usually postgres) to create the first
user account. It could also be that you were assigned a PostgreSQL user name that is different from your
operating system user name; in that case you need to use the -U switch or set the PGUSER environment
variable to specify your PostgreSQL user name.
If you have a user account but it does not have the privileges required to create a database, you will see
the following:

createdb: database creation failed: ERROR:  permission denied to create database

Not every user has authorization to create new databases. If PostgreSQL refuses to create databases for
you then the site administrator needs to grant you permission to create databases. Consult your site ad-
ministrator if this occurs. If you installed PostgreSQL yourself then you should log in for the purposes of
this tutorial under the user account that you started the server as. 1
You can also create databases with other names. PostgreSQL allows you to create any number of databases
at a given site. Database names must have an alphabetic first character and are limited to 63 characters in
length. A convenient choice is to create a database with the same name as your current user name. Many
tools assume that database name as the default, so it can save you some typing. To create that database,
simply type
$ createdb
If you do not want to use your database anymore you can remove it. For example, if you are the owner
(creator) of the database mydb, you can destroy it using the following command:
$ dropdb mydb
(For this command, the database name does not default to the user account name. You always need to
specify it.) This action physically removes all files associated with the database and cannot be undone, so
this should only be done with a great deal of forethought.
More about createdb and dropdb may be found in createdb and dropdb respectively.
• Running the PostgreSQL interactive terminal program, called psql, which allows you to interactively
enter, edit, and execute SQL commands.
• Using an existing graphical frontend tool like PgAccess or an office suite with ODBC support to create
and manipulate a database. These possibilities are not covered in this tutorial.
1. As an explanation for why this works: PostgreSQL user names are separate from operating system user accounts. If you
connect to a database, you can choose what PostgreSQL user name to connect as; if you don’t, it will default to the same name
as your current operating system account. As it happens, there will always be a PostgreSQL user account that has the same name
as the operating system user that started the server, and it also happens that that user always has permission to create databases.
Instead of logging in as that user you can also specify the -U option everywhere to select a PostgreSQL user name to connect as.
• Writing a custom application, using one of the several available language bindings. These possibilities
are discussed further in Part IV.
You probably want to start up psql, to try out the examples in this tutorial. It can be activated for the
mydb database by typing the command:
$ psql mydb
If you leave off the database name then it will default to your user account name. You already discovered
this scheme in the previous section.
In psql, you will be greeted with the following message:
mydb=>

The last line could also be:

mydb=#
That would mean you are a database superuser, which is most likely the case if you installed PostgreSQL
yourself. Being a superuser means that you are not subject to access controls. For the purpose of this
tutorial this is not of importance.
If you encounter problems starting psql then go back to the previous section. The diagnostics of
createdb and psql are similar, and if the former worked the latter should work as well.
The last line printed out by psql is the prompt, and it indicates that psql is listening to you and that you
can type SQL queries into a work space maintained by psql. Try out these commands:
mydb=> SELECT 2 + 2;
?column?
----------
4
(1 row)
The psql program has a number of internal commands that are not SQL commands. They begin with
the backslash character, “\”. Some of these commands were listed in the welcome message. For example,
you can get help on the syntax of various PostgreSQL SQL commands by typing:
mydb=> \h
To get out of psql, type:

mydb=> \q
and psql will quit and return you to your command shell. (For more internal commands, type \? at the
psql prompt.) The full capabilities of psql are documented in psql. If PostgreSQL is installed correctly
you can also type man psql at the operating system shell prompt to see the documentation. In this tutorial
we will not use these features explicitly, but you can use them yourself when you see fit.
Chapter 2. The SQL Language
2.1. Introduction
This chapter provides an overview of how to use SQL to perform simple operations. This tutorial is only
intended to give you an introduction and is in no way a complete tutorial on SQL. Numerous books have
been written on SQL, including Understanding the New SQL and A Guide to the SQL Standard. You
should be aware that some PostgreSQL language features are extensions to the standard.
In the examples that follow, we assume that you have created a database named mydb, as described in the
previous chapter, and have started psql.
Examples in this manual can also be found in the PostgreSQL source distribution in the directory
src/tutorial/. To use those files, first change to that directory and run make:
$ cd ..../src/tutorial
$ make
This creates the scripts and compiles the C files containing user-defined functions and types. (You must
use GNU make for this — it may be named something different on your system, often gmake.) Then, to
start the tutorial, do the following:
$ cd ..../src/tutorial
$ psql -s mydb
...
mydb=> \i basics.sql
The \i command reads in commands from the specified file. The -s option puts you in single step mode
which pauses before sending each statement to the server. The commands used in this section are in the
file basics.sql.
2.2. Concepts
PostgreSQL is a relational database management system (RDBMS). That means it is a system for man-
aging data stored in relations. Relation is essentially a mathematical term for table. The notion of storing
data in tables is so commonplace today that it might seem inherently obvious, but there are a number of
other ways of organizing databases. Files and directories on Unix-like operating systems form an example
of a hierarchical database. A more modern development is the object-oriented database.
Each table is a named collection of rows. Each row of a given table has the same set of named columns,
and each column is of a specific data type. Whereas columns have a fixed order in each row, it is important
to remember that SQL does not guarantee the order of the rows within the table in any way (although they
can be explicitly sorted for display).
Tables are grouped into databases, and a collection of databases managed by a single PostgreSQL server
instance constitutes a database cluster.
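At this point the tutorial creates a table to hold weather records. A reconstruction of that definition, taken from the standard tutorial files (the comments are only descriptive), is:

CREATE TABLE weather (
    city            varchar(80),
    temp_lo         int,           -- low temperature
    temp_hi         int,           -- high temperature
    prcp            real,          -- precipitation
    date            date
);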
You can enter this into psql with the line breaks. psql will recognize that the command is not terminated
until the semicolon.
White space (i.e., spaces, tabs, and newlines) may be used freely in SQL commands. That means you can
type the command aligned differently than above, or even all on one line. Two dashes (“--”) introduce
comments. Whatever follows them is ignored up to the end of the line. SQL is case insensitive about key
words and identifiers, except when identifiers are double-quoted to preserve the case (not done above).
varchar(80) specifies a data type that can store arbitrary character strings up to 80 characters in length.
int is the normal integer type. real is a type for storing single precision floating-point numbers. date
should be self-explanatory. (Yes, the column of type date is also named date. This may be convenient
or confusing — you choose.)
PostgreSQL supports the standard SQL types int, smallint, real, double precision, char(N),
varchar(N), date, time, timestamp, and interval, as well as other types of general utility and a
rich set of geometric types. PostgreSQL can be customized with an arbitrary number of user-defined data
types. Consequently, type names are not syntactical key words, except where required to support special
cases in the SQL standard.
The second example will store cities and their associated geographical location:
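A sketch of that table, reconstructed from the same tutorial files; it pairs a city name with a value of the PostgreSQL-specific type point:

CREATE TABLE cities (
    name            varchar(80),
    location        point
);

Tables are populated with the INSERT statement; the first example inserts a complete row into weather: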
INSERT INTO weather VALUES ('San Francisco', 46, 50, 0.25, '1994-11-27');
Note that all data types use rather obvious input formats. Constants that are not simple numeric values
usually must be surrounded by single quotes (’), as in the example. The date type is actually quite flexible
in what it accepts, but for this tutorial we will stick to the unambiguous format shown here.
The point type requires a coordinate pair as input, as shown here:
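A reconstruction of the statement meant here (the coordinates are the tutorial's sample values):

INSERT INTO cities VALUES ('San Francisco', '(-194.0, 53.0)');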
The syntax used so far requires you to remember the order of the columns. An alternative syntax allows
you to list the columns explicitly:
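Reconstructed, that form looks like:

INSERT INTO weather (city, temp_lo, temp_hi, prcp, date)
    VALUES ('San Francisco', 43, 57, 0.0, '1994-11-29');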
You can list the columns in a different order if you wish or even omit some columns, e.g., if the precipi-
tation is unknown:
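For instance (again a reconstruction of the tutorial's sample statement):

INSERT INTO weather (date, city, temp_hi, temp_lo)
    VALUES ('1994-11-29', 'Hayward', 54, 37);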
Many developers consider explicitly listing the columns better style than relying on the order implicitly.
Please enter all the commands shown above so you have some data to work with in the following sections.
You could also have used COPY to load large amounts of data from flat-text files. This is usually faster
because the COPY command is optimized for this application while allowing less flexibility than INSERT.
An example would be:
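A sketch of such a command; the path is a hypothetical file on the server machine:

COPY weather FROM '/home/user/weather.txt';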
where the file name for the source file must be available to the backend server machine, not the client,
since the backend server reads the file directly. You can read more about the COPY command in COPY.
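To retrieve data from a table, the table is queried with an SQL SELECT statement. The simplest form, and the query whose output appears below, retrieves all columns of all rows of weather:

SELECT * FROM weather;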
Here * is a shorthand for “all columns”. 1 So the same result would be had with:
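That is, a query along these lines:

SELECT city, temp_lo, temp_hi, prcp, date FROM weather;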
1. While SELECT * is useful for off-the-cuff queries, it is widely considered bad style in production code, since adding a column
to the table would change the results.
     city      | temp_lo | temp_hi | prcp |    date
---------------+---------+---------+------+------------
 San Francisco |      46 |      50 | 0.25 | 1994-11-27
 San Francisco |      43 |      57 |    0 | 1994-11-29
 Hayward       |      37 |      54 |      | 1994-11-29
(3 rows)
You can write expressions, not just simple column references, in the select list. For example, you can do:
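One example in the spirit of the original (temp_avg is simply a label chosen for the computed column):

SELECT city, (temp_hi+temp_lo)/2 AS temp_avg, date FROM weather;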
Notice how the AS clause is used to relabel the output column. (The AS clause is optional.)
A query can be “qualified” by adding a WHERE clause that specifies which rows are wanted. The WHERE
clause contains a Boolean (truth value) expression, and only rows for which the Boolean expression is
true are returned. The usual Boolean operators (AND, OR, and NOT) are allowed in the qualification. For
example, the following retrieves the weather of San Francisco on rainy days:
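A reconstruction of that query:

SELECT * FROM weather
    WHERE city = 'San Francisco' AND prcp > 0.0;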
Result:

     city      | temp_lo | temp_hi | prcp |    date
---------------+---------+---------+------+------------
 San Francisco |      46 |      50 | 0.25 | 1994-11-27
(1 row)
You can request that the results of a query be returned in sorted order:
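Reconstructed, the query and a plausible output (using the rows entered earlier) are:

SELECT * FROM weather
    ORDER BY city;

     city      | temp_lo | temp_hi | prcp |    date
---------------+---------+---------+------+------------
 Hayward       |      37 |      54 |      | 1994-11-29
 San Francisco |      43 |      57 |    0 | 1994-11-29
 San Francisco |      46 |      50 | 0.25 | 1994-11-27
(3 rows)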
In this example, the sort order isn’t fully specified, and so you might get the San Francisco rows in either
order. But you’d always get the results shown above if you do
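That fully specified ordering, reconstructed:

SELECT * FROM weather
    ORDER BY city, temp_lo;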
You can request that duplicate rows be removed from the result of a query:
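The query producing the output below, reconstructed:

SELECT DISTINCT city
    FROM weather;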
city
---------------
Hayward
San Francisco
(2 rows)
Here again, the result row ordering might vary. You can ensure consistent results by using DISTINCT and
ORDER BY together: 2
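A sketch of that combination:

SELECT DISTINCT city
    FROM weather
    ORDER BY city;

The tutorial then turns to queries that access several tables at once, or joins; the basic example below pairs each weather row with the cities row naming the same city.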
Note: This is only a conceptual model. The join is usually performed in a more efficient manner than
actually comparing each possible pair of rows, but this is invisible to the user.
SELECT *
FROM weather, cities
WHERE city = name;
2. In some database systems, including older versions of PostgreSQL, the implementation of DISTINCT automatically orders the
rows and so ORDER BY is redundant. But this is not required by the SQL standard, and current PostgreSQL doesn’t guarantee that
DISTINCT causes the rows to be ordered.
• There is no result row for the city of Hayward. This is because there is no matching entry in the cities
table for Hayward, so the join ignores the unmatched rows in the weather table. We will see shortly how
this can be fixed.
• There are two columns containing the city name. This is correct because the lists of columns of the
weather and the cities table are concatenated. In practice this is undesirable, though, so you will
probably want to list the output columns explicitly rather than using *:
SELECT city, temp_lo, temp_hi, prcp, date, location
FROM weather, cities
WHERE city = name;
Exercise: Attempt to find out the semantics of this query when the WHERE clause is omitted.
Since the columns all had different names, the parser automatically found out which table they belong to,
but it is good style to fully qualify column names in join queries:
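A fully qualified version of the join, reconstructed:

SELECT weather.city, weather.temp_lo, weather.temp_hi,
       weather.prcp, weather.date, cities.location
    FROM weather, cities
    WHERE cities.name = weather.city;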
Join queries of the kind seen thus far can also be written in this alternative form:
SELECT *
FROM weather INNER JOIN cities ON (weather.city = cities.name);
This syntax is not as commonly used as the one above, but we show it here to help you understand the
following topics.
Now we will figure out how we can get the Hayward records back in. What we want the query to do is to
scan the weather table and for each row to find the matching cities row. If no matching row is found
we want some “empty values” to be substituted for the cities table’s columns. This kind of query is
called an outer join. (The joins we have seen so far are inner joins.) The command looks like this:
SELECT *
FROM weather LEFT OUTER JOIN cities ON (weather.city = cities.name);
This query is called a left outer join because the table mentioned on the left of the join operator will have
each of its rows in the output at least once, whereas the table on the right will only have those rows output
that match some row of the left table. When outputting a left-table row for which there is no right-table
match, empty (null) values are substituted for the right-table columns.
Exercise: There are also right outer joins and full outer joins. Try to find out what those do.
We can also join a table against itself. This is called a self join. As an example, suppose we wish to find
all the weather records that are in the temperature range of other weather records. So we need to compare
the temp_lo and temp_hi columns of each weather row to the temp_lo and temp_hi columns of all
other weather rows. We can do this with the following query:
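A reconstruction of that self join:

SELECT W1.city, W1.temp_lo AS low, W1.temp_hi AS high,
       W2.city, W2.temp_lo AS low, W2.temp_hi AS high
    FROM weather W1, weather W2
    WHERE W1.temp_lo < W2.temp_lo
      AND W1.temp_hi > W2.temp_hi;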
Here we have relabeled the weather table as W1 and W2 to be able to distinguish the left and right side of
the join. You can also use these kinds of aliases in other queries to save some typing, e.g.:
SELECT *
FROM weather w, cities c
WHERE w.city = c.name;
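The tutorial next covers aggregate functions, which compute a single result from multiple input rows (count, sum, avg, max, min, and so on). The query whose output appears below finds the highest low-temperature reading anywhere; reconstructed:

SELECT max(temp_lo) FROM weather;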
max
-----
46
(1 row)
If we wanted to know what city (or cities) that reading occurred in, we might try
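That naive attempt, reconstructed (the original marks it as wrong):

SELECT city FROM weather WHERE temp_lo = max(temp_lo);     -- WRONG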
but this will not work since the aggregate max cannot be used in the WHERE clause. (This restriction
exists because the WHERE clause determines the rows that will go into the aggregation stage; so it has to
be evaluated before aggregate functions are computed.) However, as is often the case the query can be
restated to accomplish the intended result, here by using a subquery:
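Reconstructed, the subquery form is:

SELECT city FROM weather
    WHERE temp_lo = (SELECT max(temp_lo) FROM weather);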
city
---------------
San Francisco
(1 row)
This is OK because the subquery is an independent computation that computes its own aggregate sepa-
rately from what is happening in the outer query.
Aggregates are also very useful in combination with GROUP BY clauses. For example, we can get the
maximum low temperature observed in each city with
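a query along these lines (reconstructed):

SELECT city, max(temp_lo)
    FROM weather
    GROUP BY city;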
city | max
---------------+-----
Hayward | 37
San Francisco | 46
(2 rows)
which gives us one output row per city. Each aggregate result is computed over the table rows matching
that city. We can filter these grouped rows using HAVING:
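For example (reconstructed):

SELECT city, max(temp_lo)
    FROM weather
    GROUP BY city
    HAVING max(temp_lo) < 40;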
city | max
---------+-----
Hayward | 37
(1 row)
which gives us the same results for only the cities that have all temp_lo values below 40. Finally, if we
only care about cities whose names begin with “S”, we might do
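something like the following; the callout ➊ marks the line that the note below refers to (reconstructed):

SELECT city, max(temp_lo)
    FROM weather
    WHERE city LIKE 'S%'          -- ➊
    GROUP BY city
    HAVING max(temp_lo) < 40;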
➊ The LIKE operator does pattern matching and is explained in Section 9.7.
It is important to understand the interaction between aggregates and SQL’s WHERE and HAVING clauses.
The fundamental difference between WHERE and HAVING is this: WHERE selects input rows before groups
and aggregates are computed (thus, it controls which rows go into the aggregate computation), whereas
HAVING selects group rows after groups and aggregates are computed. Thus, the WHERE clause must not
contain aggregate functions; it makes no sense to try to use an aggregate to determine which rows will
be inputs to the aggregates. On the other hand, the HAVING clause always contains aggregate functions.
(Strictly speaking, you are allowed to write a HAVING clause that doesn’t use aggregates, but it’s wasteful.
The same condition could be used more efficiently at the WHERE stage.)
In the previous example, we can apply the city name restriction in WHERE, since it needs no aggregate.
This is more efficient than adding the restriction to HAVING, because we avoid doing the grouping and
aggregate calculations for all rows that fail the WHERE check.
2.8. Updates
You can update existing rows using the UPDATE command. Suppose you discover the temperature readings
are all off by 2 degrees as of November 28. You may update the data as follows:
UPDATE weather
SET temp_hi = temp_hi - 2, temp_lo = temp_lo - 2
WHERE date > '1994-11-28';
2.9. Deletions
Rows can be removed from a table using the DELETE command. Suppose you are no longer interested in
the weather of Hayward. Then you can do the following to delete those rows from the table:
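Reconstructed:

DELETE FROM weather WHERE city = 'Hayward';

All weather records belonging to Hayward are removed.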
Without a qualification, DELETE will remove all rows from the given table, leaving it empty. The system
will not request confirmation before doing this!
Chapter 3. Advanced Features
3.1. Introduction
In the previous chapter we have covered the basics of using SQL to store and access your data in Post-
greSQL. We will now discuss some more advanced features of SQL that simplify management and prevent
loss or corruption of your data. Finally, we will look at some PostgreSQL extensions.
This chapter will on occasion refer to examples found in Chapter 2 to change or improve them, so it will
be of advantage if you have read that chapter. Some examples from this chapter can also be found in
advanced.sql in the tutorial directory. This file also contains some example data to load, which is not
repeated here. (Refer to Section 2.1 for how to use the file.)
3.2. Views
Refer back to the queries in Section 2.6. Suppose the combined listing of weather records and city location
is of particular interest to your application, but you do not want to type the query each time you need it.
You can create a view over the query, which gives a name to the query that you can refer to like an ordinary
table.
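A sketch of that view over the join used earlier:

CREATE VIEW myview AS
    SELECT city, temp_lo, temp_hi, prcp, date, location
        FROM weather, cities
        WHERE city = name;

SELECT * FROM myview;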
Making liberal use of views is a key aspect of good SQL database design. Views allow you to encapsulate
the details of the structure of your tables, which may change as your application evolves, behind consistent
interfaces.
Views can be used in almost any place a real table can be used. Building views upon other views is not
uncommon.
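The error shown next comes from the tutorial's foreign-key example; a reconstruction of the setup follows. The two tables are redefined so that weather may only reference cities that exist in cities, and a row for an unknown city is then inserted:

CREATE TABLE cities (
        city     varchar(80) primary key,
        location point
);

CREATE TABLE weather (
        city      varchar(80) references cities(city),
        temp_lo   int,
        temp_hi   int,
        prcp      real,
        date      date
);

INSERT INTO weather VALUES ('Berkeley', 45, 53, 0.0, '1994-11-28');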
ERROR: insert or update on table "weather" violates foreign key constraint "weather_cit
DETAIL: Key (city)=(Berkeley) is not present in table "cities".
The behavior of foreign keys can be finely tuned to your application. We will not go beyond this simple
example in this tutorial, but just refer you to Chapter 5 for more information. Making correct use of foreign
keys will definitely improve the quality of your database applications, so you are strongly encouraged to
learn about them.
3.4. Transactions
Transactions are a fundamental concept of all database systems. The essential point of a transaction is that
it bundles multiple steps into a single, all-or-nothing operation. The intermediate states between the steps
are not visible to other concurrent transactions, and if some failure occurs that prevents the transaction
from completing, then none of the steps affect the database at all.
For example, consider a bank database that contains balances for various customer accounts, as well as
total deposit balances for branches. Suppose that we want to record a payment of $100.00 from Alice’s
account to Bob’s account. Simplifying outrageously, the SQL commands for this might look like
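A sketch of those commands; accounts and branches are hypothetical tables holding per-account and per-branch balances:

UPDATE accounts SET balance = balance - 100.00
    WHERE name = 'Alice';
UPDATE branches SET balance = balance - 100.00
    WHERE name = (SELECT branch_name FROM accounts WHERE name = 'Alice');
UPDATE accounts SET balance = balance + 100.00
    WHERE name = 'Bob';
UPDATE branches SET balance = balance + 100.00
    WHERE name = (SELECT branch_name FROM accounts WHERE name = 'Bob');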
The details of these commands are not important here; the important point is that there are several separate
updates involved to accomplish this rather simple operation. Our bank’s officers will want to be assured
that either all these updates happen, or none of them happen. It would certainly not do for a system failure
to result in Bob receiving $100.00 that was not debited from Alice. Nor would Alice long remain a happy
customer if she was debited without Bob being credited. We need a guarantee that if something goes
wrong partway through the operation, none of the steps executed so far will take effect. Grouping the
updates into a transaction gives us this guarantee. A transaction is said to be atomic: from the point of
view of other transactions, it either happens completely or not at all.
We also want a guarantee that once a transaction is completed and acknowledged by the database system,
it has indeed been permanently recorded and won’t be lost even if a crash ensues shortly thereafter. For
example, if we are recording a cash withdrawal by Bob, we do not want any chance that the debit to
his account will disappear in a crash just after he walks out the bank door. A transactional database
guarantees that all the updates made by a transaction are logged in permanent storage (i.e., on disk) before
the transaction is reported complete.
Another important property of transactional databases is closely related to the notion of atomic updates:
when multiple transactions are running concurrently, each one should not be able to see the incomplete
changes made by others. For example, if one transaction is busy totalling all the branch balances, it would
not do for it to include the debit from Alice’s branch but not the credit to Bob’s branch, nor vice versa.
So transactions must be all-or-nothing not only in terms of their permanent effect on the database, but
also in terms of their visibility as they happen. The updates made so far by an open transaction are in-
visible to other transactions until the transaction completes, whereupon all the updates become visible
simultaneously.
In PostgreSQL, a transaction is set up by surrounding the SQL commands of the transaction with BEGIN
and COMMIT commands. So our banking transaction would actually look like
BEGIN;
UPDATE accounts SET balance = balance - 100.00
WHERE name = 'Alice';
-- etc etc
COMMIT;
If, partway through the transaction, we decide we do not want to commit (perhaps we just noticed that
Alice’s balance went negative), we can issue the command ROLLBACK instead of COMMIT, and all our
updates so far will be canceled.
PostgreSQL actually treats every SQL statement as being executed within a transaction. If you do not is-
sue a BEGIN command, then each individual statement has an implicit BEGIN and (if successful) COMMIT
wrapped around it. A group of statements surrounded by BEGIN and COMMIT is sometimes called a trans-
action block.
Note: Some client libraries issue BEGIN and COMMIT commands automatically, so that you may get the
effect of transaction blocks without asking. Check the documentation for the interface you are using.
It’s possible to control the statements in a transaction in a more granular fashion through the use of save-
points. Savepoints allow you to selectively discard parts of the transaction, while committing the rest.
After defining a savepoint with SAVEPOINT, you can if needed roll back to the savepoint with ROLLBACK
TO. All the transaction’s database changes between defining the savepoint and rolling back to it are dis-
carded, but changes earlier than the savepoint are kept.
After rolling back to a savepoint, it continues to be defined, so you can roll back to it several times.
Conversely, if you are sure you won’t need to roll back to a particular savepoint again, it can be released,
so the system can free some resources. Keep in mind that either releasing or rolling back to a savepoint
will automatically release all savepoints that were defined after it.
All this is happening within the transaction block, so none of it is visible to other database sessions. When
and if you commit the transaction block, the committed actions become visible as a unit to other sessions,
while the rolled-back actions never become visible at all.
Remembering the bank database, suppose we debit $100.00 from Alice’s account, and credit Bob’s ac-
count, only to find later that we should have credited Wally’s account. We could do it using savepoints
like this:
BEGIN;
UPDATE accounts SET balance = balance - 100.00
WHERE name = 'Alice';
SAVEPOINT my_savepoint;
UPDATE accounts SET balance = balance + 100.00
WHERE name = 'Bob';
-- oops ... forget that and use Wally's account
ROLLBACK TO my_savepoint;
UPDATE accounts SET balance = balance + 100.00
WHERE name = 'Wally';
COMMIT;
This example is, of course, oversimplified, but there’s a lot of control to be had over a transaction block
through the use of savepoints. Moreover, ROLLBACK TO is the only way to regain control of a transaction
block that was put in aborted state by the system due to an error, short of rolling it back completely and
starting again.
3.5. Inheritance
Inheritance is a concept from object-oriented databases. It opens up interesting new possibilities of
database design.
Let’s create two tables: A table cities and a table capitals. Naturally, capitals are also cities, so you
want some way to show the capitals implicitly when you list all cities. If you’re really clever you might
invent some scheme like this:
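A reconstruction of that scheme: two separate tables plus a view that glues them back together for queries:

CREATE TABLE capitals (
  name       text,
  population real,
  altitude   int,    -- (in ft)
  state      char(2)
);

CREATE TABLE non_capitals (
  name       text,
  population real,
  altitude   int     -- (in ft)
);

CREATE VIEW cities AS
  SELECT name, population, altitude FROM capitals
    UNION
  SELECT name, population, altitude FROM non_capitals;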
This works OK as far as querying goes, but it gets ugly when you need to update several rows, for one
thing.
A better solution is this:
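Reconstructed:

CREATE TABLE cities (
  name       text,
  population real,
  altitude   int     -- (in ft)
);

CREATE TABLE capitals (
  state      char(2)
) INHERITS (cities);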
In this case, a row of capitals inherits all columns (name, population, and altitude) from its
parent, cities. The type of the column name is text, a native PostgreSQL type for variable length
character strings. State capitals have an extra column, state, that shows their state. In PostgreSQL, a table
can inherit from zero or more other tables.
For example, the following query finds the names of all cities, including state capitals, that are located at
an altitude over 500 ft.:
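A sketch of that query:

SELECT name, altitude
    FROM cities
    WHERE altitude > 500;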
which returns:
name | altitude
-----------+----------
Las Vegas | 2174
Mariposa | 1953
Madison | 845
(3 rows)
On the other hand, the following query finds all the cities that are not state capitals and are situated at an
altitude of 500 ft. or higher:
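SELECT name, altitude
    FROM ONLY cities
    WHERE altitude > 500;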
name | altitude
-----------+----------
Las Vegas | 2174
Mariposa | 1953
(2 rows)
Here the ONLY before cities indicates that the query should be run over only the cities table, and not
tables below cities in the inheritance hierarchy. Many of the commands that we have already discussed
— SELECT, UPDATE, and DELETE — support this ONLY notation.
Note: Although inheritance is frequently useful, it has not been integrated with unique constraints or
foreign keys, which limits its usefulness. See Section 5.5 for more detail.
3.6. Conclusion
PostgreSQL has many features not touched upon in this tutorial introduction, which has been oriented
toward newer users of SQL. These features are discussed in more detail in the remainder of this book.
If you feel you need more introductory material, please visit the PostgreSQL web site (http://www.postgresql.org) for links to more
resources.
II. The SQL Language
This part describes the use of the SQL language in PostgreSQL. We start with describing the general
syntax of SQL, then explain how to create the structures to hold data, how to populate the database, and
how to query it. The middle part lists the available data types and functions for use in SQL commands.
The rest treats several aspects that are important for tuning a database for optimal performance.
The information in this part is arranged so that a novice user can follow it start to end to gain a full un-
derstanding of the topics without having to refer forward too many times. The chapters are intended to be
self-contained, so that advanced users can read the chapters individually as they choose. The information
in this part is presented in a narrative fashion in topical units. Readers looking for a complete description
of a particular command should look into Part VI.
Readers of this part should know how to connect to a PostgreSQL database and issue SQL commands.
Readers that are unfamiliar with these issues are encouraged to read Part I first. SQL commands are
typically entered using the PostgreSQL interactive terminal psql, but other programs that have similar
functionality can be used as well.
Chapter 4. SQL Syntax
This chapter describes the syntax of SQL. It forms the foundation for understanding the following chapters
which will go into detail about how the SQL commands are applied to define and modify data.
We also advise users who are already familiar with SQL to read this chapter carefully because there are
several rules and concepts that are implemented inconsistently among SQL databases or that are specific
to PostgreSQL.
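For example, the following is (syntactically) valid SQL input; the table and column names used here are only illustrative:

SELECT * FROM MY_TABLE;
UPDATE MY_TABLE SET A = 5;
INSERT INTO MY_TABLE VALUES (3, 'hi there');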
This is a sequence of three commands, one per line (although this is not required; more than one command
can be on a line, and commands can usefully be split across lines).
The SQL syntax is not very consistent regarding what tokens identify commands and which are operands
or parameters. The first few tokens are generally the command name, so in the above example we would
usually speak of a “SELECT”, an “UPDATE”, and an “INSERT” command. But for instance the UPDATE
command always requires a SET token to appear in a certain position, and this particular variation of
INSERT also requires a VALUES in order to be complete. The precise syntax rules for each command are
described in Part VI.
The SQL standard will not define a key word that contains digits or starts or ends with an underscore, so identifiers of this
form are safe against possible conflict with future extensions of the standard.
The system uses no more than NAMEDATALEN-1 characters of an identifier; longer names can be written
in commands, but they will be truncated. By default, NAMEDATALEN is 64 so the maximum identifier
length is 63. If this limit is problematic, it can be raised by changing the NAMEDATALEN constant in
src/include/postgres_ext.h.
A convention often used is to write key words in upper case and names in lower case, e.g.,
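UPDATE my_table SET a = 5;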
There is a second kind of identifier: the delimited identifier or quoted identifier. It is formed by enclosing
an arbitrary sequence of characters in double-quotes ("). A delimited identifier is always an identifier,
never a key word. So "select" could be used to refer to a column or table named “select”, whereas an
unquoted select would be taken as a key word and would therefore provoke a parse error when used
where a table or column name is expected. The example can be written with quoted identifiers like this:
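UPDATE "my_table" SET "a" = 5;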
Quoted identifiers can contain any character other than a double quote itself. (To include a double quote,
write two double quotes.) This allows constructing table or column names that would otherwise not be
possible, such as ones containing spaces or ampersands. The length limitation still applies.
Quoting an identifier also makes it case-sensitive, whereas unquoted names are always folded to lower
case. For example, the identifiers FOO, foo, and "foo" are considered the same by PostgreSQL, but
"Foo" and "FOO" are different from these three and each other. (The folding of unquoted names to lower
case in PostgreSQL is incompatible with the SQL standard, which says that unquoted names should be
folded to upper case. Thus, foo should be equivalent to "FOO" not "foo" according to the standard. If
you want to write portable applications you are advised to always quote a particular name or never quote
it.)
4.1.2. Constants
There are three kinds of implicitly-typed constants in PostgreSQL: strings, bit strings, and numbers. Con-
stants can also be specified with explicit types, which can enable more accurate representation and more
efficient handling by the system. These alternatives are discussed in the following subsections.
Another PostgreSQL extension is that C-style backslash escapes are available: \b is a backspace, \f is a
form feed, \n is a newline, \r is a carriage return, \t is a tab, and \xxx , where xxx is an octal number,
is a byte with the corresponding code. (It is your responsibility that the byte sequences you create are
valid characters in the server character set encoding.) Any other character following a backslash is taken
literally. Thus, to include a backslash in a string constant, write two backslashes.
The character with the code zero cannot be in a string constant.
Two string constants that are only separated by whitespace with at least one newline are concatenated and
effectively treated as if the string had been written in one constant. For example:
SELECT 'foo'
'bar';
is equivalent to
SELECT 'foobar';
but
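SELECT 'foo'      'bar';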
is not valid syntax. (This slightly bizarre behavior is specified by SQL; PostgreSQL is following the
standard.)
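PostgreSQL also accepts "dollar-quoted" string constants, in which the string content is written between two matching delimiters, each consisting of a dollar sign, an optional tag of zero or more characters, and another dollar sign. For example, here are two ways to write the string Dianne's horse using dollar quoting: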
$$Dianne’s horse$$
$SomeTag$Dianne’s horse$SomeTag$
Notice that inside the dollar-quoted string, single quotes can be used without needing to be escaped.
Indeed, no characters inside a dollar-quoted string are ever escaped: the string content is always written
literally. Backslashes are not special, and neither are dollar signs, unless they are part of a sequence
matching the opening tag.
It is possible to nest dollar-quoted string constants by choosing different tags at each nesting level. This is
most commonly used in writing function definitions. For example:
$function$
BEGIN
RETURN ($1 ~ $q$[\t\r\n\v\\]$q$);
END;
$function$
A dollar-quoted string that follows a keyword or identifier must be separated from it by whitespace;
otherwise the dollar quoting delimiter would be taken as part of the preceding identifier.
Dollar quoting is not part of the SQL standard, but it is often a more convenient way to write complicated
string literals than the standard-compliant single quote syntax. It is particularly useful when representing
string constants inside other constants, as is often needed in procedural function definitions. With single-
quote syntax, each backslash in the above example would have to be written as four backslashes, which
would be reduced to two backslashes in parsing the original string constant, and then to one when the
inner string constant is re-parsed during function execution.
Numeric constants are accepted in these general forms:
digits
digits.[digits][e[+-]digits]
[digits].digits[e[+-]digits]
digitse[+-]digits
where digits is one or more decimal digits (0 through 9). At least one digit must be before or after the
decimal point, if one is used. At least one digit must follow the exponent marker (e), if one is present.
There may not be any spaces or other characters embedded in the constant. Note that any leading plus or
minus sign is not actually considered part of the constant; it is an operator applied to the constant.
These are some examples of valid numeric constants:
42
3.5
4.
.001
5e2
1.925e-3
A numeric constant that contains neither a decimal point nor an exponent is initially presumed to be type
integer if its value fits in type integer (32 bits); otherwise it is presumed to be type bigint if its value
fits in type bigint (64 bits); otherwise it is taken to be type numeric. Constants that contain decimal
points and/or exponents are always initially presumed to be type numeric.
The initially assigned data type of a numeric constant is just a starting point for the type resolution algo-
rithms. In most cases the constant will be automatically coerced to the most appropriate type depending
on context. When necessary, you can force a numeric value to be interpreted as a specific data type by
casting it. For example, you can force a numeric value to be treated as type real (float4) by writing
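REAL '1.23'   -- string style
1.23::REAL    -- PostgreSQL (historical) style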
These are actually just special cases of the general casting notations discussed next.
A constant of an arbitrary type can be entered using any one of the following notations:
type 'string'
'string'::type
CAST ( 'string' AS type )
The string constant’s text is passed to the input conversion routine for the type called type. The result is
a constant of the indicated type. The explicit type cast may be omitted if there is no ambiguity as to the
type the constant must be (for example, when it is assigned directly to a table column), in which case it is
automatically coerced.
The string constant can be written using either regular SQL notation or dollar-quoting.
It is also possible to specify a type coercion using a function-like syntax:
typename ( ’string’ )
but not all type names may be used in this way; see Section 4.2.8 for details.
The ::, CAST(), and function-call syntaxes can also be used to specify run-time type conversions of
arbitrary expressions, as discussed in Section 4.2.8. But the form type ’string’ can only be used to
specify the type of a literal constant. Another restriction on type ’string’ is that it does not work for
array types; use :: or CAST() to specify the type of an array constant.
4.1.3. Operators
An operator name is a sequence of up to NAMEDATALEN-1 (63 by default) characters from the following
list:
+ - * / < > = ~ ! @ # % ^ & | ` ?
• -- and /* cannot appear anywhere in an operator name, since they will be taken as the start of a
comment.
• A multiple-character operator name cannot end in + or -, unless the name also contains at least one of
these characters:
~ ! @ # % ^ & | ` ?
For example, @- is an allowed operator name, but *- is not. This restriction allows PostgreSQL to parse
SQL-compliant queries without requiring spaces between tokens.
When working with non-SQL-standard operator names, you will usually need to separate adjacent opera-
tors with spaces to avoid ambiguity. For example, if you have defined a left unary operator named @, you
cannot write X*@Y; you must write X* @Y to ensure that PostgreSQL reads it as two operator names not
one.
• A dollar sign ($) followed by digits is used to represent a positional parameter in the body of a function
definition or a prepared statement. In other contexts the dollar sign may be part of an identifier or a
dollar-quoted string constant.
• Parentheses (()) have their usual meaning to group expressions and enforce precedence. In some cases
parentheses are required as part of the fixed syntax of a particular SQL command.
• Brackets ([]) are used to select the elements of an array. See Section 8.10 for more information on
arrays.
• Commas (,) are used in some syntactical constructs to separate the elements of a list.
• The semicolon (;) terminates an SQL command. It cannot appear anywhere within a command, except
within a string constant or quoted identifier.
• The colon (:) is used to select “slices” from arrays. (See Section 8.10.) In certain SQL dialects (such
as Embedded SQL), the colon is used to prefix variable names.
• The asterisk (*) is used in some contexts to denote all the fields of a table row or composite value. It
also has a special meaning when used as the argument of the COUNT aggregate function.
• The period (.) is used in numeric constants, and to separate schema, table, and column names.
4.1.5. Comments
A comment is an arbitrary sequence of characters beginning with double dashes and extending to the end
of the line, e.g.:
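-- This is a standard SQL comment

Alternatively, C-style block comments can be used: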
/* multiline comment
* with nesting: /* nested block comment */
*/
where the comment begins with /* and extends to the matching occurrence of */. These block comments
nest, as specified in the SQL standard but unlike C, so that one can comment out larger blocks of code
that may contain existing block comments.
A comment is removed from the input stream before further syntax analysis and is effectively replaced by
whitespace.
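The precedence and associativity of the operators is hard-wired into the parser, and this occasionally forces you to add parentheses when combining binary and unary operators. For instance,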
SELECT 5 ! - 6;
will be parsed as
SELECT 5 ! (- 6);
because the parser has no idea — until it is too late — that ! is defined as a postfix operator, not an infix
one. To get the desired behavior in this case, you must write
SELECT (5 !) - 6;
Note that the operator precedence rules also apply to user-defined operators that have the same names as
the built-in operators mentioned above. For example, if you define a “+” operator for some custom data
type it will have the same precedence as the built-in “+” operator, no matter what yours does.
When a schema-qualified operator name is used in the OPERATOR syntax, as for example in
SELECT 3 OPERATOR(pg_catalog.+) 4;
the OPERATOR construct is taken to have the default precedence shown in Table 4-1 for “any other” oper-
ator. This is true no matter which specific operator name appears inside OPERATOR().
In addition to this list, there are a number of constructs that can be classified as an expression but do
not follow any general syntax rules. These generally have the semantics of a function or operator and are
explained in the appropriate location in Chapter 9. An example is the IS NULL clause.
We have already discussed constants in Section 4.1.2. The following sections discuss the remaining op-
tions.
A column can be referenced in the form
correlation.columnname
correlation is the name of a table (possibly qualified with a schema name), or an alias for a table
defined by means of a FROM clause, or one of the key words NEW or OLD. (NEW and OLD can only appear in
rewrite rules, while other correlation names can be used in any SQL statement.) The correlation name and
separating dot may be omitted if the column name is unique across all the tables being used in the current
query. (See also Chapter 7.)
A positional parameter reference is used to indicate a value that is supplied externally to an SQL statement, for example in an SQL function definition or a prepared statement. The form of a parameter reference is:
$number
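For example, consider the definition of a function along these lines (the function and table names here are illustrative):

CREATE FUNCTION dept(text) RETURNS dept
    AS $$ SELECT * FROM dept WHERE name = $1 $$
    LANGUAGE SQL;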
Here the $1 will be replaced by the first function argument when the function is invoked.
4.2.3. Subscripts
If an expression yields a value of an array type, then a specific element of the array value can be extracted
by writing
expression[subscript]
expression[lower_subscript:upper_subscript]
(Here, the brackets [ ] are meant to appear literally.) Each subscript is itself an expression, which
must yield an integer value.
In general the array expression must be parenthesized, but the parentheses may be omitted when the
expression to be subscripted is just a column reference or positional parameter. Also, multiple subscripts
can be concatenated when the original array is multi-dimensional. For example,
mytable.arraycolumn[4]
mytable.two_d_column[17][34]
$1[10:42]
(arrayfunction(a,b))[42]
The parentheses in the last example are required. See Section 8.10 for more about arrays.
If an expression yields a value of a composite type (row type), then a specific field of the row can be extracted by writing
expression.fieldname
In general the row expression must be parenthesized, but the parentheses may be omitted when the
expression to be selected from is just a table reference or positional parameter. For example,
mytable.mycolumn
$1.somecolumn
(rowfunction(a,b)).col3
(Thus, a qualified column reference is actually just a special case of the field selection syntax.)
An operator invocation is written as expression operator expression (binary infix operator), operator expression (unary prefix operator), or expression operator (unary postfix operator), where the operator token follows the syntax rules of Section 4.1.3, or is one of the key words AND, OR, and NOT, or is a qualified operator name in the form
OPERATOR(schema.operatorname)
Which particular operators exist and whether they are unary or binary depends on what operators have
been defined by the system or the user. Chapter 9 describes the built-in operators.
For example, the following computes the square root of 2:
sqrt(2)
The list of built-in functions is in Chapter 9. Other functions may be added by the user.
An aggregate expression represents the application of an aggregate function across the rows selected by a query. The syntax of an aggregate expression is one of the following:
aggregate_name (expression)
aggregate_name (ALL expression)
aggregate_name (DISTINCT expression)
aggregate_name ( * )
where aggregate_name is a previously defined aggregate (possibly qualified with a schema name),
and expression is any value expression that does not itself contain an aggregate expression.
The first form of aggregate expression invokes the aggregate across all input rows for which the given
expression yields a non-null value. (Actually, it is up to the aggregate function whether to ignore null
values or not — but all the standard ones do.) The second form is the same as the first, since ALL is the
default. The third form invokes the aggregate for all distinct non-null values of the expression found in
the input rows. The last form invokes the aggregate once for each input row regardless of null or non-null
values; since no particular input value is specified, it is generally only useful for the count() aggregate
function.
For example, count(*) yields the total number of input rows; count(f1) yields the number of input
rows in which f1 is non-null; count(distinct f1) yields the number of distinct non-null values of
f1.
The predefined aggregate functions are described in Section 9.15. Other aggregate functions may be added
by the user.
An aggregate expression may only appear in the result list or HAVING clause of a SELECT command. It is
forbidden in other clauses, such as WHERE, because those clauses are logically evaluated before the results
of aggregates are formed.
When an aggregate expression appears in a subquery (see Section 4.2.9 and Section 9.16), the aggregate
is normally evaluated over the rows of the subquery. But an exception occurs if the aggregate’s argument
contains only outer-level variables: the aggregate then belongs to the nearest such outer level, and is
evaluated over the rows of that query. The aggregate expression as a whole is then an outer reference for
the subquery it appears in, and acts as a constant over any one evaluation of that subquery. The restriction
about appearing only in the result list or HAVING clause applies with respect to the query level that the
aggregate belongs to.
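A type cast specifies a conversion from one data type to another. PostgreSQL accepts two equivalent syntaxes for type casts:

CAST ( expression AS type )
expression::type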
The CAST syntax conforms to SQL; the syntax with :: is historical PostgreSQL usage.
When a cast is applied to a value expression of a known type, it represents a run-time type conversion. The
cast will succeed only if a suitable type conversion operation has been defined. Notice that this is subtly
different from the use of casts with constants, as shown in Section 4.1.2.5. A cast applied to an unadorned
string literal represents the initial assignment of a type to a literal constant value, and so it will succeed
for any type (if the contents of the string literal are acceptable input syntax for the data type).
An explicit type cast may usually be omitted if there is no ambiguity as to the type that a value expression
must produce (for example, when it is assigned to a table column); the system will automatically apply
a type cast in such cases. However, automatic casting is only done for casts that are marked “OK to
apply implicitly” in the system catalogs. Other casts must be invoked with explicit casting syntax. This
restriction is intended to prevent surprising conversions from being applied silently.
It is also possible to specify a type cast using a function-like syntax:
typename ( expression )
However, this only works for types whose names are also valid as function names. For example, double
precision can’t be used this way, but the equivalent float8 can. Also, the names interval, time,
and timestamp can only be used in this fashion if they are double-quoted, because of syntactic conflicts.
Therefore, the use of the function-like cast syntax leads to inconsistencies and should probably be avoided
in new applications. (The function-like syntax is in fact just a function call. When one of the two standard
cast syntaxes is used to do a run-time conversion, it will internally invoke a registered function to perform
the conversion. By convention, these conversion functions have the same name as their output type, and
thus the “function-like syntax” is nothing more than a direct invocation of the underlying conversion
function. Obviously, this is not something that a portable application should rely on.)
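An array constructor is an expression that builds an array value from values for its member elements. A simple array constructor consists of the key word ARRAY, a left square bracket, a list of expressions for the element values separated by commas, and a right square bracket. For example: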
SELECT ARRAY[1,2,3+4];
array
---------
{1,2,7}
(1 row)
The array element type is the common type of the member expressions, determined using the same rules
as for UNION or CASE constructs (see Section 10.5).
Multidimensional array values can be built by nesting array constructors. In the inner constructors, the
key word ARRAY may be omitted. For example, these produce the same result:
SELECT ARRAY[[1,2],[3,4]];
array
---------------
{{1,2},{3,4}}
(1 row)
Since multidimensional arrays must be rectangular, inner constructors at the same level must produce
sub-arrays of identical dimensions.
Multidimensional array constructor elements can be anything yielding an array of the proper kind, not
only a sub-ARRAY construct. For example:
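-- assuming a table arr with integer-array columns f1 and f2 (the names are illustrative)
SELECT ARRAY[f1, f2, '{{9,10},{11,12}}'::int[]] FROM arr;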
It is also possible to construct an array from the results of a subquery. In this form, the array constructor
is written with the key word ARRAY followed by a parenthesized (not bracketed) subquery. For example:
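-- for instance, collecting values from the pg_proc system catalog into an array
SELECT ARRAY(SELECT oid FROM pg_proc WHERE proname LIKE 'bytea%');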
The subquery must return a single column. The resulting one-dimensional array will have an element for
each row in the subquery result, with an element type matching that of the subquery’s output column.
The subscripts of an array value built with ARRAY always begin with one. For more information about
arrays, see Section 8.10.
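A row constructor is an expression that builds a row value (also called a composite value) from values for its member fields. It consists of the key word ROW, a left parenthesis, zero or more expressions separated by commas for the row field values, and a right parenthesis. For example:

SELECT ROW(1, 2.5, 'this is a test');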
The key word ROW is optional when there is more than one expression in the list.
By default, the value created by a ROW expression is of an anonymous record type. If necessary, it can be
cast to a named composite type — either the row type of a table, or a composite type created with CREATE
TYPE AS. An explicit cast may be needed to avoid ambiguity. For example:
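-- a sketch of such a cast; the table, function, and values are illustrative
CREATE TABLE mytable(f1 int, f2 float, f3 text);
CREATE FUNCTION getf1(mytable) RETURNS int AS 'SELECT $1.f1' LANGUAGE SQL;
SELECT getf1(ROW(1, 2.5, 'this is a test'));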
getf1
-------
1
(1 row)
Row constructors can be used to build composite values to be stored in a composite-type table column,
or to be passed to a function that accepts a composite parameter. Also, it is possible to compare two row
values or test a row with IS NULL or IS NOT NULL, for example
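SELECT ROW(1, 2.5, 'this is a test') = ROW(1, 3, 'not the same');
SELECT ROW(f1, f2, f3) IS NOT NULL FROM mytable;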
For more detail see Section 9.17. Row constructors can also be used in connection with subqueries, as
discussed in Section 9.16.
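The order of evaluation of subexpressions is not defined. Moreover, if the result of an expression can be determined by evaluating only some of its parts, then other subexpressions might not be evaluated at all. For instance, if one wrote

SELECT true OR somefunc();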
then somefunc() would (probably) not be called at all. The same would be the case if one wrote
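SELECT somefunc() OR true;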
Note that this is not the same as the left-to-right “short-circuiting” of Boolean operators that is found in
some programming languages.
As a consequence, it is unwise to use functions with side effects as part of complex expressions. It is
particularly dangerous to rely on side effects or evaluation order in WHERE and HAVING clauses, since
those clauses are extensively reprocessed as part of developing an execution plan. Boolean expressions
(AND/OR/NOT combinations) in those clauses may be reorganized in any manner allowed by the laws of
Boolean algebra.
When it is essential to force evaluation order, a CASE construct (see Section 9.13) may be used. For
example, this is an untrustworthy way of trying to avoid division by zero in a WHERE clause:
SELECT ... WHERE CASE WHEN x <> 0 THEN y/x > 1.5 ELSE false END;
A CASE construct used in this fashion will defeat optimization attempts, so it should only be done when
necessary. (In this particular example, it would doubtless be best to sidestep the problem by writing y >
1.5*x instead.)
Chapter 5. Data Definition
This chapter covers how one creates the database structures that will hold one’s data. In a relational
database, the raw data is stored in tables, so the majority of this chapter is devoted to explaining how
tables are created and modified and what features are available to control what data is stored in the tables.
Subsequently, we discuss how tables can be organized into schemas, and how privileges can be assigned to
tables. Finally, we will briefly look at other features that affect the data storage, such as views, functions,
and triggers.
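To create a table, you use the aptly named CREATE TABLE command, giving it a name for the new table and the names and data types of its columns. For example:

CREATE TABLE my_first_table (
    first_column text,
    second_column integer
);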
This creates a table named my_first_table with two columns. The first column is named
first_column and has a data type of text; the second column has the name second_column and the
type integer. The table and column names follow the identifier syntax explained in Section 4.1.1.
The type names are usually also identifiers, but there are some exceptions. Note that the column list is
comma-separated and surrounded by parentheses.
Of course, the previous example was heavily contrived. Normally, you would give names to your tables
and columns that convey what kind of data they store. So let’s look at a more realistic example:
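CREATE TABLE products (
    product_no integer,
    name text,
    price numeric
);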
(The numeric type can store fractional components, as would be typical of monetary amounts.)
Tip: When you create many interrelated tables it is wise to choose a consistent naming pattern for the
tables and columns. For instance, there is a choice of using singular or plural nouns for table names,
both of which are favored by some theorist or other.
There is a limit on how many columns a table can contain. Depending on the column types, it is between
250 and 1600. However, defining a table with anywhere near this many columns is highly unusual and
often a questionable design.
If you no longer need a table, you can remove it using the DROP TABLE command. For example:
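DROP TABLE my_first_table;
DROP TABLE products;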
Attempting to drop a table that does not exist is an error. Nevertheless, it is common in SQL script files to
unconditionally try to drop each table before creating it, ignoring the error messages.
If you need to modify a table that already exists look into Section 5.6 later in this chapter.
With the tools discussed so far you can create fully functional tables. The remainder of this chapter is
concerned with adding features to the table definition to ensure data integrity, security, or convenience. If
you are eager to fill your tables with data now you can skip ahead to Chapter 6 and read the rest of this
chapter later.
The default value may be an expression, which will be evaluated whenever the default value is inserted
(not when the table is created). A common example is that a timestamp column may have a default of
now(), so that it gets set to the time of row insertion. Another common example is generating a “serial
number” for each row. In PostgreSQL this is typically done by something like
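CREATE TABLE products (
    product_no integer DEFAULT nextval('products_product_no_seq'),   -- sequence name is illustrative
    ...
);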
where the nextval() function supplies successive values from a sequence object (see Section 9.12). This
arrangement is sufficiently common that there’s a special shorthand for it:
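CREATE TABLE products (
    product_no SERIAL,
    ...
);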
5.3. Constraints
Data types are a way to limit the kind of data that can be stored in a table. For many applications, however,
the constraint they provide is too coarse. For example, a column containing a product price should prob-
ably only accept positive values. But there is no data type that accepts only positive numbers. Another
issue is that you might want to constrain column data with respect to other columns or rows. For example,
in a table containing product information, there should only be one row for each product number.
To that end, SQL allows you to define constraints on columns and tables. Constraints give you as much
control over the data in your tables as you wish. If a user attempts to store data in a column that would
violate a constraint, an error is raised. This applies even if the value came from the default value definition.
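For instance, to require positive product prices, you could declare the price column like this:

CREATE TABLE products (
    product_no integer,
    name text,
    price numeric CHECK (price > 0)
);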
As you see, the constraint definition comes after the data type, just like default value definitions. Default
values and constraints can be listed in any order. A check constraint consists of the key word CHECK
followed by an expression in parentheses. The check constraint expression should involve the column
thus constrained, otherwise the constraint would not make too much sense.
You can also give the constraint a separate name. This clarifies error messages and allows you to refer to
the constraint when you need to change it. The syntax is:
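CREATE TABLE products (
    product_no integer,
    name text,
    price numeric CONSTRAINT positive_price CHECK (price > 0)   -- the constraint name is your choice
);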
So, to specify a named constraint, use the key word CONSTRAINT followed by an identifier followed by
the constraint definition. (If you don’t specify a constraint name in this way, the system chooses a name
for you.)
A check constraint can also refer to several columns. Say you store a regular price and a discounted price,
and you want to ensure that the discounted price is lower than the regular price.
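CREATE TABLE products (
    product_no integer,
    name text,
    price numeric CHECK (price > 0),
    discounted_price numeric CHECK (discounted_price > 0),
    CHECK (price > discounted_price)
);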
The first two constraints should look familiar. The third one uses a new syntax. It is not attached to a
particular column, instead it appears as a separate item in the comma-separated column list. Column
definitions and these constraint definitions can be listed in mixed order.
We say that the first two constraints are column constraints, whereas the third one is a table constraint
because it is written separately from any one column definition. Column constraints can also be written
as table constraints, while the reverse is not necessarily possible, since a column constraint is supposed to
refer to only the column it is attached to. (PostgreSQL doesn’t enforce that rule, but you should follow it
if you want your table definitions to work with other database systems.) The above example could also be
written as
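CREATE TABLE products (
    product_no integer,
    name text,
    price numeric,
    CHECK (price > 0),
    discounted_price numeric,
    CHECK (discounted_price > 0),
    CHECK (price > discounted_price)
);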
or even
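CREATE TABLE products (
    product_no integer,
    name text,
    price numeric CHECK (price > 0),
    discounted_price numeric,
    CHECK (discounted_price > 0 AND price > discounted_price)
);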
It should be noted that a check constraint is satisfied if the check expression evaluates to true or the null
value. Since most expressions will evaluate to the null value if any operand is null, they will not prevent
null values in the constrained columns. To ensure that a column does not contain null values, the not-null
constraint described in the next section can be used.
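A not-null constraint simply specifies that a column must not assume the null value. A syntax example:

CREATE TABLE products (
    product_no integer NOT NULL,
    name text NOT NULL,
    price numeric
);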
A not-null constraint is always written as a column constraint. A not-null constraint is functionally equiv-
alent to creating a check constraint CHECK (column_name IS NOT NULL), but in PostgreSQL creating
an explicit not-null constraint is more efficient. The drawback is that you cannot give explicit names to
not-null constraints created that way.
Of course, a column can have more than one constraint. Just write the constraints one after another:
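CREATE TABLE products (
    product_no integer NOT NULL,
    name text NOT NULL,
    price numeric NOT NULL CHECK (price > 0)
);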
The order doesn’t matter. It does not necessarily determine in which order the constraints are checked.
The NOT NULL constraint has an inverse: the NULL constraint. This does not mean that the column must
be null, which would surely be useless. Instead, this simply selects the default behavior that the column
may be null. The NULL constraint is not defined in the SQL standard and should not be used in portable
applications. (It was only added to PostgreSQL to be compatible with some other database systems.) Some
users, however, like it because it makes it easy to toggle the constraint in a script file. For example, you
could start with
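CREATE TABLE products (
    product_no integer NULL,
    name text NULL,
    price numeric NULL
);

and then insert the NOT key word where desired.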
Tip: In most database designs the majority of columns should be marked not null.
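A unique constraint ensures that the data contained in a column or a group of columns is unique with respect to all the rows in the table. It can be written as a column constraint (the key word UNIQUE after the column definition) or, for a group of columns, as a table constraint, for example:

CREATE TABLE example (
    a integer,
    b integer,
    c integer,
    UNIQUE (a, c)
);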
This specifies that the combination of values in the indicated columns is unique across the whole table,
though any one of the columns need not be (and ordinarily isn’t) unique.
You can assign your own name for a unique constraint, in the usual way:
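CREATE TABLE products (
    product_no integer CONSTRAINT must_be_different UNIQUE,
    name text,
    price numeric
);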
In general, a unique constraint is violated when there are two or more rows in the table where the values
of all of the columns included in the constraint are equal. However, null values are not considered equal in
this comparison. That means even in the presence of a unique constraint it is possible to store an unlimited
number of rows that contain a null value in at least one of the constrained columns. This behavior conforms
to the SQL standard, but we have heard that other SQL databases may not follow this rule. So be careful
when developing applications that are intended to be portable.
Primary keys can also constrain more than one column; the syntax is similar to unique constraints:
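CREATE TABLE example (
    a integer,
    b integer,
    c integer,
    PRIMARY KEY (a, c)
);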
A primary key indicates that a column or group of columns can be used as a unique identifier for rows in
the table. (This is a direct consequence of the definition of a primary key. Note that a unique constraint
does not, by itself, provide a unique identifier because it does not exclude null values.) This is useful
both for documentation purposes and for client applications. For example, a GUI application that allows
modifying row values probably needs to know the primary key of a table to be able to identify rows
uniquely.
A table can have at most one primary key (while it can have many unique and not-null constraints).
Relational database theory dictates that every table must have a primary key. This rule is not enforced by
PostgreSQL, but it is usually best to follow it.
Let’s also assume you have a table storing orders of those products. We want to ensure that the orders table
only contains orders of products that actually exist. So we define a foreign key constraint in the orders
table that references the products table:
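CREATE TABLE orders (
    order_id integer PRIMARY KEY,    -- the order columns shown here are illustrative
    product_no integer REFERENCES products (product_no),
    quantity integer
);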
Now it is impossible to create orders with product_no entries that do not appear in the products table.
We say that in this situation the orders table is the referencing table and the products table is the referenced
table. Similarly, there are referencing and referenced columns.
You can also shorten the above command to
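CREATE TABLE orders (
    order_id integer PRIMARY KEY,
    product_no integer REFERENCES products,
    quantity integer
);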
because in absence of a column list the primary key of the referenced table is used as the referenced
column(s).
A foreign key can also constrain and reference a group of columns. As usual, it then needs to be written
in table constraint form. Here is a contrived syntax example:
CREATE TABLE t1 (
a integer PRIMARY KEY,
b integer,
c integer,
FOREIGN KEY (b, c) REFERENCES other_table (c1, c2)
);
Of course, the number and type of the constrained columns need to match the number and type of the
referenced columns.
You can assign your own name for a foreign key constraint, in the usual way.
A table can contain more than one foreign key constraint. This is used to implement many-to-many rela-
tionships between tables. Say you have tables about products and orders, but now you want to allow one
order to contain possibly many products (which the structure above did not allow). You could use this
table structure:
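CREATE TABLE products (
    product_no integer PRIMARY KEY,
    name text,
    price numeric
);

CREATE TABLE orders (
    order_id integer PRIMARY KEY,
    shipping_address text,
    ...
);

CREATE TABLE order_items (
    product_no integer REFERENCES products,
    order_id integer REFERENCES orders,
    quantity integer,
    PRIMARY KEY (product_no, order_id)
);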
Notice that the primary key overlaps with the foreign keys in the last table.
We know that the foreign keys disallow creation of orders that do not relate to any products. But what if
a product is removed after an order is created that references it? SQL allows you to handle that as well.
Intuitively, we have a few options:
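• Disallow deleting a referenced product
• Delete the orders that reference it as well
• Something else?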
To illustrate this, let’s implement the following policy on the many-to-many relationship example above:
when someone wants to remove a product that is still referenced by an order (via order_items), we
disallow it. If someone removes an order, the order items are removed as well.
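CREATE TABLE order_items (
    product_no integer REFERENCES products ON DELETE RESTRICT,
    order_id integer REFERENCES orders ON DELETE CASCADE,
    quantity integer,
    PRIMARY KEY (product_no, order_id)
);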
Restricting and cascading deletes are the two most common options. RESTRICT prevents deletion of a
referenced row. NO ACTION means that if any referencing rows still exist when the constraint is checked,
an error is raised; this is the default behavior if you do not specify anything. (The essential difference
between these two choices is that NO ACTION allows the check to be deferred until later in the transaction,
whereas RESTRICT does not.) CASCADE specifies that when a referenced row is deleted, row(s) referencing
it should be automatically deleted as well. There are two other options: SET NULL and SET DEFAULT.
These cause the referencing columns to be set to nulls or default values, respectively, when the referenced
row is deleted. Note that these do not excuse you from observing any constraints. For example, if an action
specifies SET DEFAULT but the default value would not satisfy the foreign key, the operation will fail.
Analogous to ON DELETE there is also ON UPDATE which is invoked when a referenced column is
changed (updated). The possible actions are the same.
More information about updating and deleting data is in Chapter 6.
Finally, we should mention that a foreign key must reference columns that either are a primary key or
form a unique constraint. If the foreign key references a unique constraint, there are some additional
possibilities regarding how null values are matched. These are explained in the reference documentation
for CREATE TABLE.
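5.4. System Columns
Every table has several columns that are implicitly defined by the system, so these names cannot be used as names of user-defined columns. The system columns are the following: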
oid
The object identifier (object ID) of a row. This is a serial number that is automatically added by
PostgreSQL to all table rows (unless the table was created using WITHOUT OIDS, in which case this
column is not present). This column is of type oid (same name as the column); see Section 8.12 for
more information about the type.
tableoid
The OID of the table containing this row. This column is particularly handy for queries that select
from inheritance hierarchies, since without it, it’s difficult to tell which individual table a row came
from. The tableoid can be joined against the oid column of pg_class to obtain the table name.
xmin
The identity (transaction ID) of the inserting transaction for this row version. (A row version is an
individual state of a row; each update of a row creates a new row version for the same logical row.)
cmin
The command identifier (starting at zero) within the inserting transaction.
xmax
The identity (transaction ID) of the deleting transaction, or zero for an undeleted row version. It
is possible for this column to be nonzero in a visible row version. That usually indicates that the
deleting transaction hasn’t committed yet, or that an attempted deletion was rolled back.
cmax
The command identifier within the deleting transaction, or zero.
ctid
The physical location of the row version within its table. Note that although the ctid can be used to
locate the row version very quickly, a row’s ctid will change each time it is updated or moved by
VACUUM FULL. Therefore ctid is useless as a long-term row identifier. The OID, or even better a
user-defined serial number, should be used to identify logical rows.
OIDs are 32-bit quantities and are assigned from a single cluster-wide counter. In a large or long-lived
database, it is possible for the counter to wrap around. Hence, it is bad practice to assume that OIDs are
unique, unless you take steps to ensure that this is the case. If you need to identify the rows in a table,
using a sequence generator is strongly recommended. However, OIDs can be used as well, provided that
a few additional precautions are taken:
• A unique constraint should be created on the OID column of each table for which the OID will be used
to identify rows.
• OIDs should never be assumed to be unique across tables; use the combination of tableoid and row
OID if you need a database-wide identifier.
• The tables in question should be created using WITH OIDS to ensure forward compatibility with future
releases of PostgreSQL. It is planned that WITHOUT OIDS will become the default.
Transaction identifiers are also 32-bit quantities. In a long-lived database it is possible for transaction IDs
to wrap around. This is not a fatal problem given appropriate maintenance procedures; see Chapter 21 for
details. It is unwise, however, to depend on the uniqueness of transaction IDs over the long term (more
than one billion transactions).
Command identifiers are also 32-bit quantities. This creates a hard limit of 2^32 (4 billion) SQL commands
within a single transaction. In practice this limit is not a problem — note that the limit is on number of
SQL commands, not number of rows processed.
5.5. Inheritance
Let’s create two tables. The capitals table contains state capitals which are also cities. Naturally, the
capitals table should inherit from cities.
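CREATE TABLE cities (
    name        text,
    population  real,
    altitude    int     -- (in ft)
);

CREATE TABLE capitals (
    state       char(2)
) INHERITS (cities);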
In this case, a row of capitals inherits all attributes (name, population, and altitude) from its parent, cities.
State capitals have an extra attribute, state, that shows their state. In PostgreSQL, a table can inherit from
zero or more other tables, and a query can reference either all rows of a table or all rows of a table plus all
of its descendants.
For example, the following query finds the names of all cities, including state capitals, that are located at
an altitude over 500ft:
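SELECT name, altitude
    FROM cities
    WHERE altitude > 500;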
which returns:
name | altitude
-----------+----------
Las Vegas | 2174
Mariposa | 1953
Madison | 845
On the other hand, the following query finds all the cities that are not state capitals and are situated at an
altitude over 500ft:
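SELECT name, altitude
    FROM ONLY cities
    WHERE altitude > 500;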
name | altitude
-----------+----------
Las Vegas | 2174
Mariposa | 1953
Here the “ONLY” before cities indicates that the query should be run over only cities and not tables below
cities in the inheritance hierarchy. Many of the commands that we have already discussed -- SELECT,
UPDATE and DELETE -- support this “ONLY” notation.
Deprecated: In previous versions of PostgreSQL, the default behavior was not to include child tables
in queries. This was found to be error prone and is also in violation of the SQL:1999 standard. Under
the old syntax, to get the sub-tables you append * to the table name. For example
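SELECT * FROM cities*;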
You can still explicitly specify scanning child tables by appending *, as well as explicitly specify not
scanning child tables by writing “ONLY”. But beginning in version 7.1, the default behavior for an
undecorated table name is to scan its child tables too, whereas before the default was not to do so.
To get the old default behavior, set the configuration option SQL_Inheritance to off, e.g.,
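SET SQL_Inheritance TO OFF;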
In some cases you may wish to know which table a particular row originated from. There is a system
column called tableoid in each table which can tell you the originating table:
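SELECT c.tableoid, c.name, c.altitude
FROM cities c
WHERE c.altitude > 500;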
which returns:
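tableoid |   name    | altitude
----------+-----------+----------
   139793 | Las Vegas |     2174
   139793 | Mariposa  |     1953
   139798 | Madison   |      845
(3 rows)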
(If you try to reproduce this example, you will probably get different numeric OIDs.) By doing a join with
pg_class you can see the actual table names:
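SELECT p.relname, c.name, c.altitude
FROM cities c, pg_class p
WHERE c.altitude > 500 AND c.tableoid = p.oid;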
which returns:
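relname  |   name    | altitude
----------+-----------+----------
 cities   | Las Vegas |     2174
 cities   | Mariposa  |     1953
 capitals | Madison   |      845
(3 rows)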
A table can inherit from more than one parent table, in which case it has the union of the columns defined
by the parent tables (plus any columns declared specifically for the child table).
A serious limitation of the inheritance feature is that indexes (including unique constraints) and foreign
key constraints only apply to single tables, not to their inheritance children. This is true on both the
referencing and referenced sides of a foreign key constraint. Thus, in the terms of the above example:
• If we declared cities.name to be UNIQUE or a PRIMARY KEY, this would not stop the capitals
table from having rows with names duplicating rows in cities. And those duplicate rows would by
default show up in queries from cities. In fact, by default capitals would have no unique constraint
at all, and so could contain multiple rows with the same name. You could add a unique constraint to
capitals, but this would not prevent duplication compared to cities.
• Similarly, if we were to specify that cities.name REFERENCES some other table, this constraint would
not automatically propagate to capitals. In this case you could work around it by manually adding
the same REFERENCES constraint to capitals.
• Specifying that another table’s column REFERENCES cities(name) would allow the other table to
contain city names, but not capital names. There is no good workaround for this case.
These deficiencies will probably be fixed in some future release, but in the meantime considerable care is
needed in deciding whether inheritance is useful for your problem.
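5.6. Modifying Tables
When you make a mistake in a table definition, or when the requirements of the application change, you can modify an existing table rather than dropping and re-creating it. You can: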
• Add columns,
• Remove columns,
• Add constraints,
• Remove constraints,
• Change default values,
• Change column data types,
• Rename columns,
• Rename tables.
All these actions are performed using the ALTER TABLE command.
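To add a column, use a command like this:

ALTER TABLE products ADD COLUMN description text;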
The new column is initially filled with whatever default value is given (null if you don’t specify a DEFAULT
clause).
You can also define constraints on the column at the same time, using the usual syntax:
ALTER TABLE products ADD COLUMN description text CHECK (description <> '');
In fact all the options that can be applied to a column description in CREATE TABLE can be used here.
Keep in mind however that the default value must satisfy the given constraints, or the ADD will fail.
Alternatively, you can add constraints later (see below) after you’ve filled in the new column correctly.
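To remove a column, use a command like this:

ALTER TABLE products DROP COLUMN description;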
Whatever data was in the column disappears. Table constraints involving the column are dropped, too.
However, if the column is referenced by a foreign key constraint of another table, PostgreSQL will not
silently drop that constraint. You can authorize dropping everything that depends on the column by adding
CASCADE:
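ALTER TABLE products DROP COLUMN description CASCADE;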
See Section 5.10 for a description of the general mechanism behind this.
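To add a constraint, the table constraint syntax is used. For example:

ALTER TABLE products ADD CHECK (name <> '');
ALTER TABLE products ADD CONSTRAINT some_name UNIQUE (product_no);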
To add a not-null constraint, which cannot be written as a table constraint, use this syntax:
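ALTER TABLE products ALTER COLUMN product_no SET NOT NULL;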
The constraint will be checked immediately, so the table data must satisfy the constraint before it can be
added.
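To remove a constraint you need to know its name. If you gave it a name, that is easy; otherwise the system assigned a generated name, which you need to find out (the psql command \d tablename can be helpful here). Then use:

ALTER TABLE products DROP CONSTRAINT some_name;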
(If you are dealing with a generated constraint name like $2, don’t forget that you’ll need to double-quote
it to make it a valid identifier.)
As with dropping a column, you need to add CASCADE if you want to drop a constraint that something else
depends on. An example is that a foreign key constraint depends on a unique or primary key constraint on
the referenced column(s).
This works the same for all constraint types except not-null constraints. To drop a not null constraint use
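ALTER TABLE products ALTER COLUMN product_no DROP NOT NULL;

To set a new default for a column, use a command like this:

ALTER TABLE products ALTER COLUMN price SET DEFAULT 7.77;   -- the value is illustrative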
Note that this doesn’t affect any existing rows in the table, it just changes the default for future INSERT
commands.
To remove any default value, use
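ALTER TABLE products ALTER COLUMN price DROP DEFAULT;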
This is effectively the same as setting the default to null. As a consequence, it is not an error to drop a
default where one hadn’t been defined, because the default is implicitly the null value.
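To convert a column to a different data type, use a command like this:

ALTER TABLE products ALTER COLUMN price TYPE numeric(10,2);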
This will succeed only if each existing entry in the column can be converted to the new type by an implicit
cast. If a more complex conversion is needed, you can add a USING clause that specifies how to compute
the new values from the old.
PostgreSQL will attempt to convert the column’s default value (if any) to the new type, as well as any
constraints that involve the column. But these conversions may fail, or may produce surprising results.
It’s often best to drop any constraints on the column before altering its type, and then add back suitably
modified constraints afterwards.
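To rename a column:

ALTER TABLE products RENAME COLUMN product_no TO product_number;

To rename a table:

ALTER TABLE products RENAME TO items;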
5.7. Privileges
When you create a database object, you become its owner. By default, only the owner of an object can
do anything with the object. In order to allow other users to use it, privileges must be granted. (However,
users that have the superuser attribute can always access any object.)
There are several different privileges: SELECT, INSERT, UPDATE, DELETE, RULE, REFERENCES,
TRIGGER, CREATE, TEMPORARY, EXECUTE, and USAGE. The privileges applicable to a particular object
vary depending on the object’s type (table, function, etc). For complete information on the different types
of privileges supported by PostgreSQL, refer to the GRANT reference page. The following sections and
chapters will also show you how those privileges are used.
The right to modify or destroy an object is always the privilege of the owner only.
Note: To change the owner of a table, index, sequence, or view, use the ALTER TABLE command.
There are corresponding ALTER commands for other object types.
To assign privileges, the GRANT command is used. For example, if joe is an existing user, and accounts
is an existing table, the privilege to update the table can be granted with
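GRANT UPDATE ON accounts TO joe;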
The special “user” name PUBLIC can be used to grant a privilege to every user on the system. Writing
ALL in place of a specific privilege grants all privileges that are relevant for the object type.
The special privileges of the object owner (i.e., the right to do DROP, GRANT, REVOKE, etc.) are always
implicit in being the owner, and cannot be granted or revoked. But the object owner can choose to revoke
his own ordinary privileges, for example to make a table read-only for himself as well as others.
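To revoke a privilege, use the fittingly named REVOKE command:

REVOKE ALL ON accounts FROM PUBLIC;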
Ordinarily, only the object’s owner (or a superuser) can grant or revoke privileges on an object. However,
it is possible to grant a privilege “with grant option”, which gives the recipient the right to grant it in
turn to others. If the grant option is subsequently revoked then all who received the privilege from that
recipient (directly or through a chain of grants) will lose the privilege. For details see the GRANT and
REVOKE reference pages.
5.8. Schemas
A PostgreSQL database cluster contains one or more named databases. Users and groups of users are
shared across the entire cluster, but no other data is shared across databases. Any given client connection
to the server can access only the data in a single database, the one specified in the connection request.
Note: Users of a cluster do not necessarily have the privilege to access every database in the cluster.
Sharing of user names means that there cannot be different users named, say, joe in two databases in
the same cluster; but the system can be configured to allow joe access to only some of the databases.
A database contains one or more named schemas, which in turn contain tables. Schemas also contain
other kinds of named objects, including data types, functions, and operators. The same object name can
be used in different schemas without conflict; for example, both schema1 and myschema may contain
tables named mytable. Unlike databases, schemas are not rigidly separated: a user may access objects in
any of the schemas in the database he is connected to, if he has privileges to do so.
There are several reasons why one might want to use schemas:
• To allow many users to use one database without interfering with each other.
• To organize database objects into logical groups to make them more manageable.
• Third-party applications can be put into separate schemas so they cannot collide with the names of
other objects.
Schemas are analogous to directories at the operating system level, except that schemas cannot be nested.
To create or access objects in a schema, write a qualified name consisting of the schema name and table
name separated by a dot:
schema.table
This works anywhere a table name is expected, including the table modification commands and the data
access commands discussed in the following chapters. (For brevity we will speak of tables only, but the
same ideas apply to other kinds of named objects, such as types and functions.)
Actually, the even more general syntax
database.schema.table
can be used too, but at present this is just for pro forma compliance with the SQL standard. If you write a
database name, it must be the same as the database you are connected to.
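To create a new schema, use the CREATE SCHEMA command, giving the schema a name of your choice, for example:

CREATE SCHEMA myschema;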
So to create a table in the new schema, use
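CREATE TABLE myschema.mytable (
    ...
);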
To drop a schema if it’s empty (all objects in it have been dropped), use
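DROP SCHEMA myschema;

To drop a schema including all contained objects, use:

DROP SCHEMA myschema CASCADE;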
See Section 5.10 for a description of the general mechanism behind this.
Often you will want to create a schema owned by someone else (since this is one of the ways to restrict
the activities of your users to well-defined namespaces). The syntax for that is:
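CREATE SCHEMA schemaname AUTHORIZATION username;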
You can even omit the schema name, in which case the schema name will be the same as the user name.
See Section 5.8.6 for how this can be useful.
Schema names beginning with pg_ are reserved for system purposes and may not be created by users.
By default, tables (and other objects) created without specifying a schema name are put into a schema named “public”; every new database contains such a schema. Thus, creating a table products without qualification is the same as creating public.products.
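Qualified names are tedious to write, so tables are usually referred to by unqualified names, which the system resolves by following a schema search path. To show the current search path, use the following command: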
SHOW search_path;
search_path
--------------
$user,public
The first element specifies that a schema with the same name as the current user is to be searched. If no
such schema exists, the entry is ignored. The second element refers to the public schema that we have
seen already.
The first schema in the search path that exists is the default location for creating new objects. That is
the reason that by default objects are created in the public schema. When objects are referenced in any
other context without schema qualification (table modification, data modification, or query commands)
the search path is traversed until a matching object is found. Therefore, in the default configuration, any
unqualified access again can only refer to the public schema.
To put our new schema in the path, we use
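SET search_path TO myschema, public;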
(We omit the $user here because we have no immediate need for it.) And then we can access the table
without schema qualification:
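DROP TABLE mytable;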
Also, since myschema is the first element in the path, new objects would by default be created in it.
We could also have written
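SET search_path TO myschema;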
Then we no longer have access to the public schema without explicit qualification. There is nothing special
about the public schema except that it exists by default. It can be dropped, too.
See also Section 9.19 for other ways to manipulate the schema search path.
The search path works in the same way for data type names, function names, and operator names as it
does for table names. Data type and function names can be qualified in exactly the same way as table
names. If you need to write a qualified operator name in an expression, there is a special provision: you
must write
OPERATOR(schema.operator )
SELECT 3 OPERATOR(pg_catalog.+) 4;
In practice one usually relies on the search path for operators, so as not to have to write anything so ugly
as that.
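By default, users cannot create objects in a schema they do not own. To allow that, the owner of the schema must grant the CREATE privilege on it; for example, to let every user create objects in the public schema:

GRANT CREATE ON SCHEMA public TO PUBLIC;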
(The first “public” is the schema, the second “public” means “every user”. In the first sense it is an
identifier, in the second sense it is a key word, hence the different capitalization; recall the guidelines
from Section 4.1.1.)
• If you do not create any schemas then all users access the public schema implicitly. This simulates the
situation where schemas are not available at all. This setup is mainly recommended when there is only
a single user or a few cooperating users in a database. This setup also allows smooth transition from the
non-schema-aware world.
• You can create a schema for each user with the same name as that user. Recall that the default search
path starts with $user, which resolves to the user name. Therefore, if each user has a separate schema,
they access their own schemas by default.
If you use this setup then you might also want to revoke access to the public schema (or drop it alto-
gether), so users are truly constrained to their own schemas.
• To install shared applications (tables to be used by everyone, additional functions provided by third
parties, etc.), put them into separate schemas. Remember to grant appropriate privileges to allow the
other users to access them. Users can then refer to these additional objects by qualifying the names with
a schema name, or they can put the additional schemas into their search path, as they choose.
5.8.7. Portability
In the SQL standard, the notion of objects in the same schema being owned by different users does not
exist. Moreover, some implementations do not allow you to create schemas that have a different name
than their owner. In fact, the concepts of schema and user are nearly equivalent in a database system
that implements only the basic schema support specified in the standard. Therefore, many users consider
qualified names to really consist of username.tablename. This is how PostgreSQL will effectively
behave if you create a per-user schema for every user.
Also, there is no concept of a public schema in the SQL standard. For maximum conformance to the
standard, you should not use (perhaps even remove) the public schema.
Of course, some SQL database systems might not implement schemas at all, or provide namespace sup-
port by allowing (possibly limited) cross-database access. If you need to work with those systems, then
maximum portability would be achieved by not using schemas at all.
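5.9. Other Database Objects
Tables are the central objects in a relational database structure, because they hold your data. But they are not the only objects that exist in a database. Many other kinds of objects can be created to make the use and management of the data more efficient or convenient, among them: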
• Views
• Functions and operators
• Data types and domains
• Triggers and rewrite rules
Detailed information on these topics appears in Part V.
5.10. Dependency Tracking
When you create complex database structures involving many tables with foreign key constraints, views, triggers, functions, and so on, you implicitly create a net of dependencies between the objects. To ensure the integrity of the entire database structure, PostgreSQL makes sure that you cannot drop objects that other objects still depend on. For example, attempting to drop the products table we considered in Section 5.3.5, with the orders table depending on it, would result in an error message telling you that the foreign key constraint of the orders table depends on the products table.
The error message contains a useful hint: if you do not want to bother deleting all the dependent objects individually, you can add CASCADE to the DROP command, and all the dependent objects will be removed. In this case, it doesn't remove the orders table, it only
removes the foreign key constraint. (If you want to check what DROP ... CASCADE will do, run DROP
removes the foreign key constraint. (If you want to check what DROP ... CASCADE will do, run DROP
without CASCADE and read the NOTICE messages.)
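For the products example above, the cascading drop would be written like this (a sketch):
DROP TABLE products CASCADE;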
All drop commands in PostgreSQL support specifying CASCADE. Of course, the nature of the possible
dependencies varies with the type of the object. You can also write RESTRICT instead of CASCADE to get
the default behavior, which is to prevent drops of objects that other objects depend on.
Note: According to the SQL standard, specifying either RESTRICT or CASCADE is required. No database
system actually enforces that rule, but whether the default behavior is RESTRICT or CASCADE varies
across systems.
Note: Foreign key constraint dependencies and serial column dependencies from PostgreSQL ver-
sions prior to 7.3 are not maintained or created during the upgrade process. All other dependency
types will be properly created during an upgrade from a pre-7.3 database.
Chapter 6. Data Manipulation
The previous chapter discussed how to create tables and other structures to hold your data. Now it is
time to fill the tables with data. This chapter covers how to insert, update, and delete table data. We also
introduce ways to effect automatic data changes when certain events occur: triggers and rewrite rules. The
chapter after this will finally explain how to extract your long-lost data back out of the database.
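For instance, here is a sketch of a table and a first insertion consistent with the examples below (the products table is the one used in Chapter 5):
CREATE TABLE products (
    product_no integer,
    name text,
    price numeric
);
INSERT INTO products VALUES (1, 'Cheese', 9.99);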
The data values are listed in the order in which the columns appear in the table, separated by commas.
Usually, the data values will be literals (constants), but scalar expressions are also allowed.
The above syntax has the drawback that you need to know the order of the columns in the table. To avoid
that you can also list the columns explicitly. For example, both of the following commands have the same
effect as the one above:
INSERT INTO products (product_no, name, price) VALUES (1, ’Cheese’, 9.99);
INSERT INTO products (name, price, product_no) VALUES (’Cheese’, 9.99, 1);
Many users consider it good practice to always list the column names.
If you don’t have values for all the columns, you can omit some of them. In that case, the columns will be
filled with their default values. For example,
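-- assuming the products table used above
INSERT INTO products (product_no, name) VALUES (1, 'Cheese');
INSERT INTO products VALUES (1, 'Cheese');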
The second form is a PostgreSQL extension. It fills the columns from the left with as many values as are
given, and the rest will be defaulted.
For clarity, you can also request default values explicitly, for individual columns or for the entire row:
INSERT INTO products (product_no, name, price) VALUES (1, ’Cheese’, DEFAULT);
INSERT INTO products DEFAULT VALUES;
Tip: To do “bulk loads”, that is, inserting a lot of data, take a look at the COPY command. It is not as
flexible as the INSERT command, but is more efficient.
Recall from Chapter 5 that SQL does not, in general, provide a unique identifier for rows. Therefore it is
not necessarily possible to directly specify which row to update. Instead, you specify which conditions
a row must meet in order to be updated. Only if you have a primary key in the table (no matter whether
you declared it or not) can you reliably address individual rows, by choosing a condition that matches the
primary key. Graphical database access tools rely on this fact to allow you to update rows individually.
For example, this command updates all products that have a price of 5 to have a price of 10:
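-- assuming the products table used in the INSERT examples
UPDATE products SET price = 10 WHERE price = 5;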
This may cause zero, one, or many rows to be updated. It is not an error to attempt an update that does not
match any rows.
Let’s look at that command in detail. First is the key word UPDATE followed by the table name. As usual,
the table name may be schema-qualified, otherwise it is looked up in the path. Next is the key word SET
followed by the column name, an equals sign and the new column value. The new column value can be
any scalar expression, not just a constant. For example, if you want to raise the price of all products by
10% you could use:
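-- a possible form, assuming the products table from above
UPDATE products SET price = price * 1.10;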
As you see, the expression for the new value can refer to the existing value(s) in the row. We also left
out the WHERE clause. If it is omitted, it means that all rows in the table are updated. If it is present, only
those rows that match the WHERE condition are updated. Note that the equals sign in the SET clause is an
assignment while the one in the WHERE clause is a comparison, but this does not create any ambiguity. Of
course, the WHERE condition does not have to be an equality test. Many other operators are available (see
Chapter 9). But the expression needs to evaluate to a Boolean result.
You can update more than one column in an UPDATE command by listing more than one assignment in
the SET clause. For example:
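-- a hypothetical example; mytable and its columns a, b, c are assumptions
UPDATE mytable SET a = 5, b = 3, c = 1 WHERE a > 0;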
Chapter 7. Queries
The previous chapters explained how to create tables, how to fill them with data, and how to manipulate
that data. Now we finally discuss how to retrieve the data out of the database.
7.1. Overview
The process of retrieving or the command to retrieve data from a database is called a query. In SQL the
SELECT command is used to specify queries. The general syntax of the SELECT command is
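SELECT select_list FROM table_expression [ sort_specification ]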
The following sections describe the details of the select list, the table expression, and the sort specification.
The simplest kind of query has the form
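SELECT * FROM table1;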
Assuming that there is a table called table1, this command would retrieve all rows and all columns from
table1. (The method of retrieval depends on the client application. For example, the psql program will
display an ASCII-art table on the screen, while client libraries will offer functions to extract individual
values from the query result.) The select list specification * means all columns that the table expression
happens to provide. A select list can also select a subset of the available columns or make calculations
using the columns. For example, if table1 has columns named a, b, and c (and perhaps others) you can
make the following query:
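SELECT a, b + c FROM table1;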
(assuming that b and c are of a numerical data type). See Section 7.3 for more details.
FROM table1 is a particularly simple kind of table expression: it reads just one table. In general, table
expressions can be complex constructs of base tables, joins, and subqueries. But you can also omit the
table expression entirely and use the SELECT command as a calculator:
SELECT 3 * 4;
This is more useful if the expressions in the select list return varying results. For example, you could call
a function this way:
SELECT random();
The optional WHERE, GROUP BY, and HAVING clauses in the table expression specify a pipeline of succes-
sive transformations performed on the table derived in the FROM clause. All these transformations produce
a virtual table that provides the rows that are passed to the select list to compute the output rows of the
query.
A table reference may be a table name (possibly schema-qualified), or a derived table such as a subquery,
a table join, or complex combinations of these. If more than one table reference is listed in the FROM
clause they are cross-joined (see below) to form the intermediate virtual table that may then be subject to
transformations by the WHERE, GROUP BY, and HAVING clauses and is finally the result of the overall table
expression.
When a table reference names a table that is the supertable of a table inheritance hierarchy, the table
reference produces rows of not only that table but all of its subtable successors, unless the key word ONLY
precedes the table name. However, the reference produces only the columns that appear in the named table
— any columns added in subtables are ignored.
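For instance, in a hypothetical inheritance hierarchy in which capitals is a subtable of cities, this query reads rows of cities only:
SELECT name FROM ONLY cities;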
Join Types
Cross join
T1 CROSS JOIN T2
For each combination of rows from T1 and T2, the derived table will contain a row consisting of
all columns in T1 followed by all columns in T2. If the tables have N and M rows respectively, the
joined table will have N * M rows.
FROM T1 CROSS JOIN T2 is equivalent to FROM T1, T2. It is also equivalent to FROM T1 INNER
JOIN T2 ON TRUE (see below).
Qualified joins
T1 { [INNER] | { LEFT | RIGHT | FULL } [OUTER] } JOIN T2 ON boolean_expression
T1 { [INNER] | { LEFT | RIGHT | FULL } [OUTER] } JOIN T2 USING ( join column list )
T1 NATURAL { [INNER] | { LEFT | RIGHT | FULL } [OUTER] } JOIN T2
The words INNER and OUTER are optional in all forms. INNER is the default; LEFT, RIGHT, and FULL
imply an outer join.
The join condition is specified in the ON or USING clause, or implicitly by the word NATURAL. The
join condition determines which rows from the two source tables are considered to “match”, as
explained in detail below.
The ON clause is the most general kind of join condition: it takes a Boolean value expression of the
same kind as is used in a WHERE clause. A pair of rows from T1 and T2 match if the ON expression
evaluates to true for them.
USING is a shorthand notation: it takes a comma-separated list of column names, which the joined
tables must have in common, and forms a join condition specifying equality of each of these pairs
of columns. Furthermore, the output of a JOIN USING has one column for each of the equated pairs
of input columns, followed by all of the other columns from each table. Thus, USING (a, b, c)
is equivalent to ON (t1.a = t2.a AND t1.b = t2.b AND t1.c = t2.c) with the exception
that if ON is used there will be two columns a, b, and c in the result, whereas with USING there will
be only one of each.
Finally, NATURAL is a shorthand form of USING: it forms a USING list consisting of exactly those
column names that appear in both input tables. As with USING, these columns appear only once in
the output table.
The possible types of qualified join are:
INNER JOIN
For each row R1 of T1, the joined table has a row for each row in T2 that satisfies the join
condition with R1.
LEFT OUTER JOIN
First, an inner join is performed. Then, for each row in T1 that does not satisfy the join condition
with any row in T2, a joined row is added with null values in columns of T2. Thus, the joined
table unconditionally has at least one row for each row in T1.
RIGHT OUTER JOIN
First, an inner join is performed. Then, for each row in T2 that does not satisfy the join condition
with any row in T1, a joined row is added with null values in columns of T1. This is the converse
of a left join: the result table will unconditionally have a row for each row in T2.
FULL OUTER JOIN
First, an inner join is performed. Then, for each row in T1 that does not satisfy the join condition
with any row in T2, a joined row is added with null values in columns of T2. Also, for each row
of T2 that does not satisfy the join condition with any row in T1, a joined row with null values
in the columns of T1 is added.
Joins of all types can be chained together or nested: either or both of T1 and T2 may be joined tables.
Parentheses may be used around JOIN clauses to control the join order. In the absence of parentheses,
JOIN clauses nest left-to-right.
To put this together, assume we have tables t1
num | name
-----+------
1 | a
2 | b
3 | c
and t2
num | value
-----+-------
1 | xxx
3 | yyy
5 | zzz
(3 rows)
The join condition specified with ON can also contain conditions that do not relate directly to the join. This
can prove useful for some queries but needs to be thought out carefully. For example:
=> SELECT * FROM t1 LEFT JOIN t2 ON t1.num = t2.num AND t2.value = ’xxx’;
num | name | num | value
-----+------+-----+-------
1 | a | 1 | xxx
2 | b | |
3 | c | |
(3 rows)
or
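-- the same left join, but with the extra restriction placed in the WHERE clause instead
SELECT * FROM t1 LEFT JOIN t2 ON t1.num = t2.num WHERE t2.value = 'xxx';
Because the WHERE clause is applied after the outer join has been formed, this version removes the unmatched rows of t1 as well, so only the single row whose value is xxx remains.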
7.2.1.2. Table and Column Aliases
A temporary name can be given to tables and complex table references, to be used for references to the derived table in the rest of the query. This is called a table alias.
The alias becomes the new name of the table reference for the current query — it is no longer possible to refer to the table by the original name. Thus
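-- a hypothetical example of the mistake described; my_table and its column a are assumptions
SELECT * FROM my_table AS m WHERE my_table.a > 5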
is not valid SQL syntax. What will actually happen (this is a PostgreSQL extension to the standard) is that
an implicit table reference is added to the FROM clause, so the query is processed as if it were written as
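-- the implicit rewriting (a sketch)
SELECT * FROM my_table AS m, my_table WHERE my_table.a > 5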
which will result in a cross join, which is usually not what you want.
Table aliases are mainly for notational convenience, but it is necessary to use them when joining a table
to itself, e.g.,
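-- a hypothetical self-join; the people table and its columns are assumptions
SELECT * FROM people AS mother JOIN people AS child ON mother.id = child.mother_id;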
Additionally, an alias is required if the table reference is a subquery (see Section 7.2.1.3).
Parentheses are used to resolve ambiguities. The following statement will assign the alias b to the result
of the join, unlike the previous example:
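-- a sketch; my_table, your_table, and the join columns are hypothetical
SELECT * FROM (my_table AS a JOIN your_table ON a.num = your_table.num) AS b;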
Another form of table aliasing gives temporary names to the columns of the table, as well as the table
itself:
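FROM table_reference [ AS ] alias ( column1 [, column2 [, ...]] )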
If fewer column aliases are specified than the actual table has columns, the remaining columns are not
renamed. This syntax is especially useful for self-joins or subqueries.
When an alias is applied to the output of a JOIN clause, using any of these forms, the alias hides the
original names within the JOIN. For example,
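-- a sketch of the invalid query described: the inner alias a is hidden by the outer alias c
SELECT a.* FROM (my_table AS a JOIN your_table AS b ON a.num = b.num) AS c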
is not valid: the table alias a is not visible outside the alias c.
7.2.1.3. Subqueries
Subqueries specifying a derived table must be enclosed in parentheses and must be assigned a table alias
name. (See Section 7.2.1.2.) For example:
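FROM (SELECT * FROM table1) AS alias_name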
This example is equivalent to FROM table1 AS alias_name. More interesting cases, which can’t be
reduced to a plain join, arise when the subquery involves grouping or aggregation.
In some cases it is useful to define table functions that can return different column sets depending on how
they are invoked. To support this, the table function can be declared as returning the pseudotype record.
When such a function is used in a query, the expected row structure must be specified in the query itself,
so that the system can know how to parse and plan the query. Consider this example:
SELECT *
FROM dblink(’dbname=mydb’, ’select proname, prosrc from pg_proc’)
AS t1(proname name, prosrc text)
The dblink function executes a remote query (see contrib/dblink). It is declared to return record
since it might be used for any kind of query. The actual column set must be specified in the calling query
so that the parser knows, for example, what * should expand to.
WHERE search_condition
where search_condition is any value expression (see Section 4.2) that returns a value of type
boolean.
After the processing of the FROM clause is done, each row of the derived virtual table is checked against
the search condition. If the result of the condition is true, the row is kept in the output table, otherwise
(that is, if the result is false or null) it is discarded. The search condition typically references at least some
column of the table generated in the FROM clause; this is not required, but otherwise the WHERE clause will
be fairly useless.
Note: The join condition of an inner join can be written either in the WHERE clause or in the JOIN clause.
For example, these table expressions are equivalent:
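-- hypothetical tables a(id, ...) and b(id, val); id is their only common column
FROM a, b WHERE a.id = b.id AND b.val > 5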
and
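FROM a INNER JOIN b ON (a.id = b.id) WHERE b.val > 5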
or perhaps even
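FROM a NATURAL JOIN b WHERE b.val > 5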
Which one of these you use is mainly a matter of style. The JOIN syntax in the FROM clause is probably
not as portable to other SQL database management systems. For outer joins there is no choice in any
case: they must be done in the FROM clause. An ON/USING clause of an outer join is not equivalent to
a WHERE condition, because it determines the addition of rows (for unmatched input rows) as well as
the removal of rows from the final result.
SELECT ... FROM fdt WHERE c1 IN (SELECT c3 FROM t2 WHERE c2 = fdt.c1 + 10)
SELECT ... FROM fdt WHERE c1 BETWEEN (SELECT c3 FROM t2 WHERE c2 = fdt.c1 + 10) AND 100
SELECT ... FROM fdt WHERE EXISTS (SELECT c1 FROM t2 WHERE c2 > fdt.c1)
fdt is the table derived in the FROM clause. Rows that do not meet the search condition of the WHERE
clause are eliminated from fdt. Notice the use of scalar subqueries as value expressions. Just like any
other query, the subqueries can employ complex table expressions. Notice also how fdt is referenced in
the subqueries. Qualifying c1 as fdt.c1 is only necessary if c1 is also the name of a column in the derived
input table of the subquery. But qualifying the column name adds clarity even when it is not needed. This
example shows how the column naming scope of an outer query extends into its inner queries.
SELECT select_list
FROM ...
[WHERE ...]
GROUP BY grouping_column_reference [, grouping_column_reference]...
The GROUP BY Clause is used to group together those rows in a table that share the same values in all
the columns listed. The order in which the columns are listed does not matter. The effect is to combine
each set of rows sharing common values into one group row that is representative of all rows in the group.
This is done to eliminate redundancy in the output and/or compute aggregates that apply to these groups.
For instance:
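-- a sketch, assuming a small table test1 with columns x and y
SELECT * FROM test1;
SELECT x FROM test1 GROUP BY x;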
In the second query, we could not have written SELECT * FROM test1 GROUP BY x, because there is
no single value for the column y that could be associated with each group. The grouped-by columns can
be referenced in the select list since they have a single value in each group.
In general, if a table is grouped, columns that are not used in the grouping cannot be referenced except in
aggregate expressions. An example with aggregate expressions is:
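-- again using the hypothetical test1 table
SELECT x, sum(y) FROM test1 GROUP BY x;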
Here sum is an aggregate function that computes a single value over the entire group. More information
about the available aggregate functions can be found in Section 9.15.
Tip: Grouping without aggregate expressions effectively calculates the set of distinct values in a col-
umn. This can also be achieved using the DISTINCT clause (see Section 7.3.3).
Here is another example: it calculates the total sales for each product (rather than the total sales on all
products).
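A plausible form of such a query, assuming products and sales tables like those used elsewhere in this documentation:
SELECT product_id, p.name, (sum(s.units) * p.price) AS sales
  FROM products p LEFT JOIN sales s USING (product_id)
  GROUP BY product_id, p.name, p.price;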
In this example, the columns product_id, p.name, and p.price must be in the GROUP BY clause since
they are referenced in the query select list. (Depending on how exactly the products table is set up, name
and price may be fully dependent on the product ID, so the additional groupings could theoretically be
unnecessary, but this is not implemented yet.) The column s.units does not have to be in the GROUP BY
list since it is only used in an aggregate expression (sum(...)), which represents the sales of a product.
For each product, the query returns a summary row about all sales of the product.
In strict SQL, GROUP BY can only group by columns of the source table but PostgreSQL extends this to
also allow GROUP BY to group by columns in the select list. Grouping by value expressions instead of
simple column names is also allowed.
If a table has been grouped using a GROUP BY clause, but then only certain groups are of interest, the
HAVING clause can be used, much like a WHERE clause, to eliminate groups from a grouped table. The
syntax is:
SELECT select_list FROM ... [WHERE ...] GROUP BY ... HAVING boolean_expression
Expressions in the HAVING clause can refer both to grouped expressions and to ungrouped expressions
(which necessarily involve an aggregate function).
Example:
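-- using the hypothetical test1 table sketched earlier
SELECT x, sum(y) FROM test1 GROUP BY x HAVING sum(y) > 3;
 x | sum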
---+-----
a | 4
b | 5
(2 rows)
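A more realistic example, sketched along the lines of the description below (the products and sales tables, and their date and cost columns, are assumptions):
SELECT product_id, p.name, (sum(s.units) * (p.price - p.cost)) AS profit
  FROM products p LEFT JOIN sales s USING (product_id)
  WHERE s.date > CURRENT_DATE - INTERVAL '4 weeks'
  GROUP BY product_id, p.name, p.price, p.cost
  HAVING sum(p.price * s.units) > 5000;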
In the example above, the WHERE clause is selecting rows by a column that is not grouped (the expression
is only true for sales during the last four weeks), while the HAVING clause restricts the output to groups
with total gross sales over 5000. Note that the aggregate expressions do not necessarily need to be the
same in all parts of the query.
The column names a, b, and c are either the actual names of the columns of tables referenced in the
FROM clause, or the aliases given to them as explained in Section 7.2.1.2. The name space available in the
select list is the same as in the WHERE clause, unless grouping is used, in which case it is the same as in
the HAVING clause.
If more than one table has a column of the same name, the table name must also be given, as in
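-- hypothetical tables tbl1 and tbl2
SELECT tbl1.a, tbl2.a, tbl1.b FROM ...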
When working with multiple tables, it can also be useful to ask for all the columns of a particular table:
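SELECT tbl1.*, tbl2.a FROM ...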
If an arbitrary value expression is used in the select list, it conceptually adds a new virtual column to the
returned table. The value expression is evaluated once for each result row, with the row’s values substituted
for any column references. But the expressions in the select list do not have to reference any columns in the
table expression of the FROM clause; they could be constant arithmetic expressions as well, for instance.
If no output column name is specified using AS, the system assigns a default name. For simple column
references, this is the name of the referenced column. For function calls, this is the name of the function.
For complex expressions, the system will generate a generic name.
Note: The naming of output columns here is different from that done in the FROM clause (see Section
7.2.1.2). This pipeline will in fact allow you to rename the same column twice, but the name chosen in
the select list is the one that will be passed on.
7.3.3. DISTINCT
After the select list has been processed, the result table may optionally be subject to the elimination of
duplicate rows. The DISTINCT key word is written directly after SELECT to specify this:
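SELECT DISTINCT select_list ...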
(Instead of DISTINCT the key word ALL can be used to specify the default behavior of retaining all rows.)
Obviously, two rows are considered distinct if they differ in at least one column value. Null values are
considered equal in this comparison.
Alternatively, an arbitrary expression can determine what rows are to be considered distinct:
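SELECT DISTINCT ON (expression [, expression ...]) select_list ...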
Here expression is an arbitrary value expression that is evaluated for all rows. A set of rows for which
all the expressions are equal are considered duplicates, and only the first row of the set is kept in the
output. Note that the “first row” of a set is unpredictable unless the query is sorted on enough columns
to guarantee a unique ordering of the rows arriving at the DISTINCT filter. (DISTINCT ON processing
occurs after ORDER BY sorting.)
The DISTINCT ON clause is not part of the SQL standard and is sometimes considered bad style because
of the potentially indeterminate nature of its results. With judicious use of GROUP BY and subqueries in
FROM the construct can be avoided, but it is often the most convenient alternative.
7.4. Combining Queries
The results of two queries can be combined using the set operations union, intersection, and difference, written query1 UNION [ALL] query2, query1 INTERSECT [ALL] query2, and query1 EXCEPT [ALL] query2 respectively.
query1 and query2 are queries that can use any of the features discussed up to this point. Set operations
can also be nested and chained, for example
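query1 UNION query2 UNION query3
-- which, since UNION associates left-to-right, is executed as
(query1 UNION query2) UNION query3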
UNION effectively appends the result of query2 to the result of query1 (although there is no guarantee
that this is the order in which the rows are actually returned). Furthermore, it eliminates duplicate rows
from its result, in the same way as DISTINCT, unless UNION ALL is used.
INTERSECT returns all rows that are both in the result of query1 and in the result of query2. Duplicate
rows are eliminated unless INTERSECT ALL is used.
EXCEPT returns all rows that are in the result of query1 but not in the result of query2. (This is
sometimes called the difference between two queries.) Again, duplicates are eliminated unless EXCEPT
ALL is used.
In order to calculate the union, intersection, or difference of two queries, the two queries must be “union
compatible”, which means that they return the same number of columns and the corresponding columns
have compatible data types, as described in Section 10.5.
7.5. Sorting Rows
After a query has produced an output table (after the select list has been processed) it can optionally be sorted. Sorting is requested with an ORDER BY clause:
SELECT select_list
FROM table_expression
ORDER BY column1 [ASC | DESC] [, column2 [ASC | DESC] ...]
column1, etc., refer to select list columns. These can be either the output name of a column (see Section
7.3.2) or the number of a column. Some examples:
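-- a sketch, reusing the hypothetical table1
SELECT a, b FROM table1 ORDER BY a;
SELECT a + b AS sum, c FROM table1 ORDER BY sum;
SELECT a, sum(b) FROM table1 GROUP BY a ORDER BY 1;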
As an extension to the SQL standard, PostgreSQL also allows ordering by arbitrary expressions:
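SELECT a, b FROM table1 ORDER BY a + b;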
References to column names of the FROM clause that are not present in the select list are also allowed:
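SELECT a FROM table1 ORDER BY b;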
But these extensions do not work in queries involving UNION, INTERSECT, or EXCEPT, and are not
portable to other SQL databases.
Each column specification may be followed by an optional ASC or DESC to set the sort direction to ascend-
ing or descending. ASC order is the default. Ascending order puts smaller values first, where “smaller” is
defined in terms of the < operator. Similarly, descending order is determined with the > operator. 1
If more than one sort column is specified, the later entries are used to sort rows that are equal under the
order imposed by the earlier sort columns.
7.6. LIMIT and OFFSET
LIMIT and OFFSET allow you to retrieve just a portion of the rows that are generated by the rest of the query:
SELECT select_list
FROM table_expression
[LIMIT { number | ALL }] [OFFSET number]
If a limit count is given, no more than that many rows will be returned (but possibly fewer, if the query itself
yields fewer rows). LIMIT ALL is the same as omitting the LIMIT clause.
OFFSET says to skip that many rows before beginning to return rows. OFFSET 0 is the same as omitting
the OFFSET clause. If both OFFSET and LIMIT appear, then OFFSET rows are skipped before starting to
count the LIMIT rows that are returned.
When using LIMIT, it is important to use an ORDER BY clause that constrains the result rows into a unique
order. Otherwise you will get an unpredictable subset of the query’s rows. You may be asking for the tenth
through twentieth rows, but tenth through twentieth in what ordering? The ordering is unknown, unless
you specified ORDER BY.
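A minimal sketch, reusing the hypothetical table1 and assuming column a gives a deterministic ordering:
SELECT * FROM table1 ORDER BY a LIMIT 10 OFFSET 40;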
The query optimizer takes LIMIT into account when generating a query plan, so you are very likely to get
different plans (yielding different row orders) depending on what you give for LIMIT and OFFSET. Thus,
using different LIMIT/OFFSET values to select different subsets of a query result will give inconsistent
results unless you enforce a predictable result ordering with ORDER BY. This is not a bug; it is an inherent
1. Actually, PostgreSQL uses the default B-tree operator class for the column’s data type to determine the sort ordering for ASC
and DESC. Conventionally, data types will be set up so that the < and > operators correspond to this sort ordering, but a user-defined
data type’s designer could choose to do something different.
consequence of the fact that SQL does not promise to deliver the results of a query in any particular order
unless ORDER BY is used to constrain the order.
The rows skipped by an OFFSET clause still have to be computed inside the server; therefore a large
OFFSET can be inefficient.
Chapter 8. Data Types
PostgreSQL has a rich set of native data types available to users. Users may add new types to PostgreSQL
using the CREATE TYPE command.
Table 8-1 shows all the built-in general-purpose data types. Most of the alternative names listed in the
“Aliases” column are the names used internally by PostgreSQL for historical reasons. In addition, some
internally used or deprecated types are available, but they are not listed here.
Compatibility: The following types (or spellings thereof) are specified by SQL: bit, bit varying,
boolean, char, character varying, character, varchar, date, double precision, integer,
interval, numeric, decimal, real, smallint, time (with or without time zone), timestamp (with or
without time zone).
Each data type has an external representation determined by its input and output functions. Many of the
built-in types have obvious external formats. However, several types are either unique to PostgreSQL,
such as geometric paths, or have several possibilities for formats, such as the date and time types. Some
of the input and output functions are not invertible. That is, the result of an output function may lose
accuracy when compared to the original input.
The syntax of constants for the numeric types is described in Section 4.1.2. The numeric types have a full
set of corresponding arithmetic operators and functions. Refer to Chapter 9 for more information. The
following sections describe the types in detail.
To declare a column of type numeric with explicit precision and scale, use the syntax
NUMERIC(precision, scale)
The precision must be positive, the scale zero or positive. Alternatively,
NUMERIC(precision)
selects a scale of 0. Specifying
NUMERIC
without any precision or scale creates a column in which numeric values of any precision and scale can
be stored, up to the implementation limit on precision. A column of this kind will not coerce input values
to any particular scale, whereas numeric columns with a declared scale will coerce input values to that
scale. (The SQL standard requires a default scale of 0, i.e., coercion to integer precision. We find this a bit
useless. If you’re concerned about portability, always specify the precision and scale explicitly.)
If the scale of a value to be stored is greater than the declared scale of the column, the system will round
the value to the specified number of fractional digits. Then, if the number of digits to the left of the decimal
point exceeds the declared precision minus the declared scale, an error is raised.
Numeric values are physically stored without any extra leading or trailing zeroes. Thus, the declared
precision and scale of a column are maximums, not fixed allocations. (In this sense the numeric type is
more akin to varchar(n) than to char(n).)
In addition to ordinary numeric values, the numeric type allows the special value NaN, meaning “not-
a-number”. Any operation on NaN yields another NaN. When writing this value as a constant in a SQL
command, you must put quotes around it, for example UPDATE table SET x = ’NaN’. On input, the
string NaN is recognized in a case-insensitive manner.
The types decimal and numeric are equivalent. Both types are part of the SQL standard.
The data types real and double precision are inexact, variable-precision numeric types. If you rely on exact behavior, note the following:
• If you require exact storage and calculations (such as for monetary amounts), use the numeric type
instead.
• If you want to do complicated calculations with these types for anything important, especially if you
rely on certain behavior in boundary cases (infinity, underflow), you should evaluate the implementation
carefully.
• Comparing two floating-point values for equality may or may not work as expected.
On most platforms, the real type has a range of at least 1E-37 to 1E+37 with a precision of at least 6
decimal digits. The double precision type typically has a range of around 1E-307 to 1E+308 with a
precision of at least 15 digits. Values that are too large or too small will cause an error. Rounding may take
place if the precision of an input number is too high. Numbers too close to zero that are not representable
as distinct from zero will cause an underflow error.
In addition to ordinary numeric values, the floating-point types have several special values:
Infinity
-Infinity
NaN
These represent the IEEE 754 special values “infinity”, “negative infinity”, and “not-a-number”, respec-
tively. (On a machine whose floating-point arithmetic does not follow IEEE 754, these values will prob-
ably not work as expected.) When writing these values as constants in a SQL command, you must put
quotes around them, for example UPDATE table SET x = ’Infinity’. On input, these strings are
recognized in a case-insensitive manner.
PostgreSQL also supports the SQL-standard notations float and float(p) for specifying inexact nu-
meric types. Here, p specifies the minimum acceptable precision in binary digits. PostgreSQL accepts
float(1) to float(24) as selecting the real type, while float(25) to float(53) select double
precision. Values of p outside the allowed range draw an error. float with no precision specified is
taken to mean double precision.
Note: Prior to PostgreSQL 7.4, the precision in float(p) was taken to mean so many decimal digits.
This has been corrected to match the SQL standard, which specifies that the precision is measured
in binary digits. The assumption that real and double precision have exactly 24 and 53 bits in
the mantissa respectively is correct for IEEE-standard floating point implementations. On non-IEEE
platforms it may be off a little, but for simplicity the same ranges of p are used on all platforms.
The data types serial and bigserial are not true types, but merely a notational convenience for setting up identifier columns. In the current implementation, declaring a column as colname SERIAL in a CREATE TABLE command is equivalent to specifying:
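-- a sketch of the equivalent declarations; tablename and colname are placeholders
CREATE SEQUENCE tablename_colname_seq;
CREATE TABLE tablename (
    colname integer NOT NULL DEFAULT nextval('tablename_colname_seq')
);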
Thus, we have created an integer column and arranged for its default values to be assigned from a sequence
generator. A NOT NULL constraint is applied to ensure that a null value cannot be explicitly inserted, either.
In most cases you would also want to attach a UNIQUE or PRIMARY KEY constraint to prevent duplicate
values from being inserted by accident, but this is not automatic.
Note: Prior to PostgreSQL 7.3, serial implied UNIQUE. This is no longer automatic. If you wish a
serial column to be in a unique constraint or a primary key, it must now be specified, same as with any
other data type.
To insert the next value of the sequence into the serial column, specify that the serial column should
be assigned its default value. This can be done either by excluding the column from the list of columns in
the INSERT statement, or through the use of the DEFAULT key word.
The type names serial and serial4 are equivalent: both create integer columns. The type names
bigserial and serial8 work just the same way, except that they create a bigint column. bigserial
should be used if you anticipate the use of more than 2^31 identifiers over the lifetime of the table.
The sequence created for a serial column is automatically dropped when the owning column is dropped,
and cannot be dropped otherwise. (This was not true in PostgreSQL releases before 7.3. Note that this
automatic drop linkage will not occur for a sequence created by reloading a dump from a pre-7.3 database;
the dump file does not contain the information needed to establish the dependency link.) Furthermore,
this dependency between sequence and column is made only for the serial column itself. If any other
columns reference the sequence (perhaps by manually calling the nextval function), they will be broken
if the sequence is removed. Using a serial column’s sequence in such a fashion is considered bad
form; if you wish to feed several columns from the same sequence generator, create the sequence as an
independent object.
Note: The money type is deprecated. Use numeric or decimal instead, in combination with the
to_char function.
The money type stores a currency amount with a fixed fractional precision; see Table 8-3. Input is ac-
cepted in a variety of formats, including integer and floating-point literals, as well as “typical” currency
formatting, such as ’$1,000.00’. Output is generally in the latter form but depends on the locale.
Name Description
character varying(n), varchar(n) variable-length with limit
character(n), char(n) fixed-length, blank padded
text variable unlimited length
Note: Prior to PostgreSQL 7.2, strings that were too long were always truncated without raising an
error, in either explicit or implicit casting contexts.
The notations varchar(n) and char(n) are aliases for character varying(n) and
character(n), respectively. character without length specifier is equivalent to character(1). If
character varying is used without length specifier, the type accepts strings of any size. The latter is
a PostgreSQL extension.
In addition, PostgreSQL provides the text type, which stores strings of any length. Although the type
text is not in the SQL standard, several other SQL database management systems have it as well.
Values of type character are physically padded with spaces to the specified width n, and are stored
and displayed that way. However, the padding spaces are treated as semantically insignificant. Trailing
spaces are disregarded when comparing two values of type character, and they will be removed when
converting a character value to one of the other string types. Note that trailing spaces are semantically
significant in character varying and text values.
The storage requirement for data of these types is 4 bytes plus the actual string, and in case of character
plus the padding. Long strings are compressed by the system automatically, so the physical requirement
on disk may be less. Long values are also stored in background tables so they do not interfere with rapid
access to the shorter column values. In any case, the longest possible character string that can be stored
is about 1 GB. (The maximum value that will be allowed for n in the data type declaration is less than
that. It wouldn’t be very useful to change this because with multibyte character encodings the number of
characters and bytes can be quite different anyway. If you desire to store long strings with no specific upper
limit, use text or character varying without a length specifier, rather than making up an arbitrary
length limit.)
Tip: There are no performance differences between these three types, apart from the increased stor-
age size when using the blank-padded type. While character(n) has performance advantages in
some other database systems, it has no such advantages in PostgreSQL. In most situations text or
character varying should be used instead.
Refer to Section 4.1.2.1 for information about the syntax of string literals, and to Chapter 9 for information
about available operators and functions. The database character set determines the character set used to
store textual values; for more information on character set support, refer to Section 20.2.
There are two other fixed-length character types in PostgreSQL, shown in Table 8-5. The name type exists
only for storage of identifiers in the internal system catalogs and is not intended for use by the general user.
Its length is currently defined as 64 bytes (63 usable characters plus terminator) but should be referenced
using the constant NAMEDATALEN. The length is set at compile time (and is therefore adjustable for special
uses); the default maximum length may change in a future release. The type "char" (note the quotes) is
different from char(1) in that it only uses one byte of storage. It is internally used in the system catalogs
as a poor-man’s enumeration type.
A binary string is a sequence of octets (or bytes). Binary strings are distinguished from character strings
by two characteristics: First, binary strings specifically allow storing octets of value zero and other “non-
printable” octets (usually, octets outside the range 32 to 126). Character strings disallow zero octets,
and also disallow any other octet values and sequences of octet values that are invalid according to the
database’s selected character set encoding. Second, operations on binary strings process the actual bytes,
whereas the processing of character strings depends on locale settings. In short, binary strings are ap-
propriate for storing data that the programmer thinks of as “raw bytes”, whereas character strings are
appropriate for storing text.
When entering bytea values, octets of certain values must be escaped (but all octet values may be escaped)
when used as part of a string literal in an SQL statement. In general, to escape an octet, it is converted into
the three-digit octal number equivalent of its decimal octet value, and preceded by two backslashes. Table
8-7 shows the characters that must be escaped, and gives the alternate escape sequences where applicable.
The requirement to escape “non-printable” octets actually varies depending on locale settings. In some
instances you can get away with leaving them unescaped. Note that the result in each of the examples
in Table 8-7 was exactly one octet in length, even though the output representation of the zero octet and
backslash are more than one character.
The reason that you have to write so many backslashes, as shown in Table 8-7, is that an input string written
as a string literal must pass through two parse phases in the PostgreSQL server. The first backslash of each
pair is interpreted as an escape character by the string-literal parser and is therefore consumed, leaving the
second backslash of the pair. The remaining backslash is then recognized by the bytea input function as
starting either a three digit octal value or escaping another backslash. For example, a string literal passed
to the server as ’\\001’ becomes \001 after passing through the string-literal parser. The \001 is then
sent to the bytea input function, where it is converted to a single octet with a decimal value of 1. Note
that the apostrophe character is not treated specially by bytea, so it follows the normal rules for string
literals. (See also Section 4.1.2.1.)
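For instance, a minimal sketch of the two-level escaping just described:
SELECT '\\001'::bytea;  -- the literal \\001 is reduced to \001 by the string parser, then read by bytea as a single octet with value 1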
Bytea octets are also escaped in the output. In general, each “non-printable” octet is converted into its
equivalent three-digit octal value and preceded by one backslash. Most “printable” octets are represented
by their standard representation in the client character set. The octet with decimal value 92 (backslash)
has a special alternative output representation. Details are in Table 8-8.
Depending on the front end to PostgreSQL you use, you may have additional work to do in terms of
escaping and unescaping bytea strings. For example, you may also have to escape line feeds and carriage
returns if your interface automatically translates these.
The SQL standard defines a different binary string type, called BLOB or BINARY LARGE OBJECT. The
input format is different from bytea, but the provided functions and operators are mostly the same.
Note: Prior to PostgreSQL 7.3, writing just timestamp was equivalent to timestamp with time
zone. This was changed for SQL compliance.
time, timestamp, and interval accept an optional precision value p which specifies the number of
fractional digits retained in the seconds field. By default, there is no explicit bound on precision. The
allowed range of p is from 0 to 6 for the timestamp and interval types.
Note: When timestamp values are stored as double precision floating-point numbers (currently the
default), the effective limit of precision may be less than 6. timestamp values are stored as seconds
before or after midnight 2000-01-01. Microsecond precision is achieved for dates within a few years of
2000-01-01, but the precision degrades for dates further away. When timestamp values are stored as
eight-byte integers (a compile-time option), microsecond precision is available over the full range of
values. However eight-byte integer timestamps have a more limited range of dates than shown above:
from 4713 BC up to 294276 AD. The same compile-time option also determines whether time and
interval values are stored as floating-point or eight-byte integers. In the floating-point case, large
interval values degrade in precision as the size of the interval increases.
For the time types, the allowed range of p is from 0 to 6 when eight-byte integer storage is used, or from
0 to 10 when floating-point storage is used.
The type time with time zone is defined by the SQL standard, but the definition exhibits properties
which lead to questionable usefulness. In most cases, a combination of date, time, timestamp
without time zone, and timestamp with time zone should provide a complete range of
date/time functionality required by any application.
The types abstime and reltime are lower precision types which are used internally. You are discour-
aged from using these types in new applications and are encouraged to move any old ones over when
appropriate. Any or all of these internal types might disappear in a future release.
PostgreSQL is more flexible in handling date/time input than the SQL standard requires. See Appendix B
for the exact parsing rules of date/time input and for the recognized text fields including months, days of
the week, and time zones.
Remember that any date or time literal input needs to be enclosed in single quotes, like text strings. Refer
to Section 4.1.2.5 for more information. SQL requires the following syntax
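type [ (p) ] 'value'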
where p in the optional precision specification is an integer corresponding to the number of fractional
digits in the seconds field. Precision can be specified for time, timestamp, and interval types. The
allowed values are mentioned above. If no precision is specified in a constant specification, it defaults to
the precision of the literal value.
8.5.1.1. Dates
Table 8-10 shows some possible inputs for the date type.
Example Description
January 8, 1999 unambiguous in any datestyle input mode
1999-01-08 ISO 8601; January 8 in any mode (recommended
format)
1/8/1999 January 8 in MDY mode; August 1 in DMY mode
1/18/1999 January 18 in MDY mode; rejected in other modes
01/02/03 January 2, 2003 in MDY mode; February 1, 2003 in
DMY mode; February 3, 2001 in YMD mode
1999-Jan-08 January 8 in any mode
Jan-08-1999 January 8 in any mode
08-Jan-1999 January 8 in any mode
99-Jan-08 January 8 in YMD mode, else error
08-Jan-99 January 8, except error in YMD mode
Jan-08-99 January 8, except error in YMD mode
19990108 ISO 8601; January 8, 1999 in any mode
990108 ISO 8601; January 8, 1999 in any mode
1999.008 year and day of year
J2451187 Julian day
January 8, 99 BC year 99 before the Common Era
8.5.1.2. Times
The time-of-day types are time [ (p) ] without time zone and time [ (p) ] with time
zone. Writing just time is equivalent to time without time zone.
Valid input for these types consists of a time of day followed by an optional time zone. (See Table 8-11
and Table 8-12.) If a time zone is specified in the input for time without time zone, it is silently
ignored.
Example Description
04:05:06.789 ISO 8601
04:05:06 ISO 8601
04:05 ISO 8601
040506 ISO 8601
04:05 AM same as 04:05; AM does not affect value
04:05 PM same as 16:05; input hour must be <= 12
04:05:06.789-8 ISO 8601
04:05:06-08:00 ISO 8601
04:05-08:00 ISO 8601
040506-08 ISO 8601
04:05:06 PST time zone specified by name
Example Description
PST Pacific Standard Time
-8:00 ISO-8601 offset for PST
-800 ISO-8601 offset for PST
-8 ISO-8601 offset for PST
zulu Military abbreviation for UTC
z Short form of zulu
Refer to Appendix B for a list of time zone names that are recognized for input.
1999-01-08 04:05:06
and
1999-01-08 04:05:06 -8:00
are valid values, which follow the ISO 8601 standard. In addition, the widespread format
January 8 04:05:06 1999 PST
is supported.
The SQL standard differentiates timestamp without time zone and timestamp with time zone literals by the presence of a “+” or “-” symbol and time zone offset after the time. Hence, according to the standard, a literal such as
TIMESTAMP ’2004-10-19 10:23:54+02’
is a timestamp with time zone. PostgreSQL differs from the standard by requiring that timestamp
with time zone literals be explicitly typed:
TIMESTAMP WITH TIME ZONE ’2004-10-19 10:23:54+02’
If a literal is not explicitly indicated as being of timestamp with time zone, PostgreSQL will silently
ignore any time zone indication in the literal. That is, the resulting date/time value is derived from the
date/time fields in the input value, and is not adjusted for time zone.
For timestamp with time zone, the internally stored value is always in UTC (Universal Coordinated
Time, traditionally known as Greenwich Mean Time, GMT). An input value that has an explicit time zone
specified is converted to UTC using the appropriate offset for that time zone. If no time zone is stated in
the input string, then it is assumed to be in the time zone indicated by the system’s timezone parameter,
and is converted to UTC using the offset for the timezone zone.
When a timestamp with time zone value is output, it is always converted from UTC to the current
timezone zone, and displayed as local time in that zone. To see the time in another time zone, either
change timezone or use the AT TIME ZONE construct (see Section 9.9.3).
Conversions between timestamp without time zone and timestamp with time zone normally
assume that the timestamp without time zone value should be taken or given as timezone local
time. A different zone reference can be specified for the conversion using AT TIME ZONE.
8.5.1.4. Intervals
interval values can be written with the following syntax:
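[@] quantity unit [ quantity unit ... ] [ direction ]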
Where: quantity is a number (possibly signed); unit is second, minute, hour, day, week, month,
year, decade, century, millennium, or abbreviations or plurals of these units; direction can be
ago or empty. The at sign (@) is optional noise. The amounts of different units are implicitly added up
with appropriate sign accounting.
Quantities of days, hours, minutes, and seconds can be specified without explicit unit markings. For ex-
ample, ’1 12:59:10’ is read the same as ’1 day 12 hours 59 min 10 sec’.
The optional precision p should be between 0 and 6, and defaults to the precision of the input literal.
The following SQL-compatible functions can also be used to obtain the current time value for the
corresponding data type: CURRENT_DATE, CURRENT_TIME, CURRENT_TIMESTAMP, LOCALTIME,
LOCALTIMESTAMP. The latter four accept an optional precision specification. (See Section 9.9.4.) Note
however that these are SQL functions and are not recognized as data input strings.
In the SQL and POSTGRES styles, day appears before month if DMY field ordering has been specified,
otherwise month appears before day. (See Section 8.5.1 for how this setting also affects interpretation of
input values.) Table 8-15 shows an example.
interval output looks like the input format, except that units like century or week are converted to years and days, and ago is converted to an appropriate sign. (In ISO mode the output is similar, but the time portion is shown as a single hours:minutes:seconds field.)
The date/time styles can be selected by the user using the SET datestyle command, the DateStyle
parameter in the postgresql.conf configuration file, or the PGDATESTYLE environment variable on
the server or client. The formatting function to_char (see Section 9.8) is also available as a more flexible
way to format the date/time output.
• Although the date type does not have an associated time zone, the time type can. Time zones in the
real world have little meaning unless associated with a date as well as a time, since the offset may vary
through the year with daylight-saving time boundaries.
• The default time zone is specified as a constant numeric offset from UTC. It is therefore not possible to
adapt to daylight-saving time when doing date/time arithmetic across DST boundaries.
To address these difficulties, we recommend using date/time types that contain both date and time when
using time zones. We recommend not using the type time with time zone (though it is supported
by PostgreSQL for legacy applications and for compliance with the SQL standard). PostgreSQL assumes
your local time zone for any type containing only date or time.
All timezone-aware dates and times are stored internally in UTC. They are converted to local time in the
zone specified by the timezone configuration parameter before being displayed to the client.
The timezone configuration parameter can be set in the file postgresql.conf, or in any of the other
standard ways described in Section 16.4. There are also several special ways to set it:
• The PGTZ environment variable, if set at the client, is used by libpq applications to send a SET TIME
ZONE command to the server upon connection.
8.5.4. Internals
PostgreSQL uses Julian dates for all date/time calculations. They have the nice property of correctly
predicting/calculating any date more recent than 4713 BC to far into the future, using the assumption that
the length of the year is 365.2425 days.
Date conventions before the 19th century make for interesting reading, but are not consistent enough to
warrant coding into a date/time handler.
Valid literal values for the “true” state are:
TRUE
’t’
’true’
’y’
’yes’
’1’
For the “false” state, the following values can be used:
FALSE
’f’
’false’
’n’
’no’
’0’
Using the key words TRUE and FALSE is preferred (and SQL-compliant).
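A minimal sketch in the spirit of Example 8-2 (the table and sample data are hypothetical):
CREATE TABLE test1 (a boolean, b text);
INSERT INTO test1 VALUES (TRUE, 'sic est');
INSERT INTO test1 VALUES (FALSE, 'non est');
SELECT * FROM test1;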
Example 8-2 shows that boolean values are output using the letters t and f.
Tip: Values of the boolean type cannot be cast directly to other types (e.g., CAST (boolval AS
integer) does not work). This can be accomplished using the CASE expression: CASE WHEN boolval
THEN ’value if true’ ELSE ’value if false’ END. See Section 9.13.
A rich set of functions and operators is available to perform various geometric operations such as scaling,
translation, rotation, and determining intersections. They are explained in Section 9.10.
8.7.1. Points
Points are the fundamental two-dimensional building block for geometric types. Values of type point are
specified using the following syntax:
( x , y )
x , y
where x and y are the respective coordinates, as floating-point numbers.
Points are output using the first syntax.
8.7.2. Line Segments
Line segments are represented by pairs of points. Values of type lseg are specified using the following syntax:
( ( x1 , y1 ) , ( x2 , y2 ) )
( x1 , y1 ) , ( x2 , y2 )
x1 , y1 , x2 , y2
where (x1,y1) and (x2,y2) are the end points of the line segment.
8.7.3. Boxes
Boxes are represented by pairs of points that are opposite corners of the box. Values of type box are
specified using the following syntax:
( ( x1 , y1 ) , ( x2 , y2 ) )
( x1 , y1 ) , ( x2 , y2 )
x1 , y1 , x2 , y2
where (x1,y1) and (x2,y2) are any two opposite corners of the box.
Boxes are output using the first syntax. The corners are reordered on input to store the upper right corner,
then the lower left corner. Other corners of the box can be entered, but the lower left and upper right
corners are determined from the input and stored.
8.7.4. Paths
Paths are represented by lists of connected points. Paths can be open, where the first and last points in the
list are not considered connected, or closed, where the first and last points are considered connected.
Values of type path are specified using the following syntax:
( ( x1 , y1 ) , ... , ( xn , yn ) )
[ ( x1 , y1 ) , ... , ( xn , yn ) ]
( x1 , y1 ) , ... , ( xn , yn )
( x1 , y1 , ... , xn , yn )
x1 , y1 , ... , xn , yn
where the points are the end points of the line segments comprising the path. Square brackets ([]) indicate
an open path, while parentheses (()) indicate a closed path.
Paths are output using the first syntax.
8.7.5. Polygons
Polygons are represented by lists of points (the vertexes of the polygon). Polygons should probably be
considered equivalent to closed paths, but are stored differently and have their own set of support routines.
Values of type polygon are specified using the following syntax:
( ( x1 , y1 ) , ... , ( xn , yn ) )
( x1 , y1 ) , ... , ( xn , yn )
( x1 , y1 , ... , xn , yn )
x1 , y1 , ... , xn , yn
where the points are the end points of the line segments comprising the boundary of the polygon.
Polygons are output using the first syntax.
8.7.6. Circles
Circles are represented by a center point and a radius. Values of type circle are specified using the
following syntax:
< ( x , y ) , r >
( ( x , y ) , r )
( x , y ) , r
x , y , r
where (x,y) is the center point and r is the radius of the circle.
Circles are output using the first syntax.
8.8. Network Address Types
PostgreSQL offers data types to store IPv4, IPv6, and MAC addresses. It is preferable to use these types instead of plain text types, because they offer input error checking and specialized operators and functions.
When sorting inet or cidr data types, IPv4 addresses will always sort before IPv6 addresses, including IPv4 addresses encapsulated or mapped into IPv6 addresses, such as ::10.2.3.4 or ::ffff:10.4.3.2.
8.8.1. inet
The inet type holds an IPv4 or IPv6 host address, and optionally the identity of the subnet it is in, all
in one field. The subnet identity is represented by stating how many bits of the host address represent the
network address (the “netmask”). If the netmask is 32 and the address is IPv4, then the value does not
indicate a subnet, only a single host. In IPv6, the address length is 128 bits, so 128 bits specify a unique
host address. Note that if you want to accept networks only, you should use the cidr type rather than
inet.
The input format for this type is address/y where address is an IPv4 or IPv6 address and y is the
number of bits in the netmask. If the /y part is left off, then the netmask is 32 for IPv4 and 128 for IPv6,
so the value represents just a single host. On display, the /y portion is suppressed if the netmask specifies
a single host.
8.8.2. cidr
The cidr type holds an IPv4 or IPv6 network specification. Input and output formats follow Classless Inter-Domain Routing (CIDR) conventions. The format for specifying networks is address/y where
address is the network represented as an IPv4 or IPv6 address, and y is the number of bits in the
netmask. If y is omitted, it is calculated using assumptions from the older classful network numbering
system, except that it will be at least large enough to include all of the octets written in the input. It is an
error to specify a network address that has bits set to the right of the specified netmask.
Table 8-18 shows some examples.
Tip: If you do not like the output format for inet or cidr values, try the functions host, text, and
abbrev.
8.8.4. macaddr
The macaddr type stores MAC addresses, i.e., Ethernet card hardware addresses (although MAC ad-
dresses are used for other purposes as well). Input is accepted in various customary formats, including
’08002b:010203’
’08002b-010203’
’0800.2b01.0203’
’08-00-2b-01-02-03’
’08:00:2b:01:02:03’
which would all specify the same address. Upper and lower case is accepted for the digits a through f.
Output is always in the last of the forms shown.
The directory contrib/mac in the PostgreSQL source distribution contains tools that can be used to map
MAC addresses to hardware manufacturer names.
Note: If one explicitly casts a bit-string value to bit(n), it will be truncated or zero-padded on the
right to be exactly n bits, without raising an error. Similarly, if one explicitly casts a bit-string value to
bit varying(n), it will be truncated on the right if it is more than n bits.
Note: Prior to PostgreSQL 7.2, bit data was always silently truncated or zero-padded on the right,
with or without an explicit cast. This was changed to comply with the SQL standard.
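A minimal sketch of this behavior:

SELECT B'101'::bit(5);            -- zero-padded on the right: 10100
SELECT B'10101'::bit(3);          -- truncated on the right: 101
SELECT B'10101'::bit varying(3);  -- truncated on the right: 101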
Refer to Section 4.1.2.3 for information about the syntax of bit string constants. Bit-logical operators and
string manipulation functions are available; see Section 9.6.
8.10. Arrays
PostgreSQL allows columns of a table to be defined as variable-length multidimensional arrays. Arrays of
any built-in or user-defined base type can be created. (Arrays of composite types or domains are not yet
supported, however.)
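For instance, such a table could be created as follows (a sketch, consistent with the description in the
next paragraph):

CREATE TABLE sal_emp (
    name            text,
    pay_by_quarter  integer[],
    schedule        text[][]
);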
As shown, an array data type is named by appending square brackets ([]) to the data type name of the
array elements. The above command will create a table named sal_emp with a column of type text
(name), a one-dimensional array of type integer (pay_by_quarter), which represents the employee’s
salary by quarter, and a two-dimensional array of text (schedule), which represents the employee’s
weekly schedule.
The syntax for CREATE TABLE allows the exact size of arrays to be specified, for example:
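(A sketch; the table and sizes here are illustrative:)

CREATE TABLE tictactoe (
    squares   integer[3][3]
);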
However, the current implementation does not enforce the array size limits — the behavior is the same as
for arrays of unspecified length.
Actually, the current implementation does not enforce the declared number of dimensions either. Arrays
of a particular element type are all considered to be of the same type, regardless of size or number of
dimensions. So, declaring the number of dimensions or sizes in CREATE TABLE is simply documentation;
it does not affect run-time behavior.
An alternative syntax, which conforms to the SQL:1999 standard, may be used for one-dimensional arrays.
pay_by_quarter could have been defined as:
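(For example, a sketch:)

pay_by_quarter  integer ARRAY[4],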
This syntax requires an integer constant to denote the array size. As before, however, PostgreSQL does
not enforce the size restriction.
To write an array value as a literal constant, enclose the element values within curly braces and separate
them by the delimiter character. The general format of an array constant is the following:
’{ val1 delim val2 delim ... }’
where delim is the delimiter character for the type, as recorded in its pg_type entry. Among the standard
data types provided in the PostgreSQL distribution, type box uses a semicolon (;) but all the others use
comma (,). Each val is either a constant of the array element type, or a subarray. An example of an array
constant is
’{{1,2,3},{4,5,6},{7,8,9}}’
Note that multidimensional arrays must have matching extents for each dimension. A mismatch causes an
error report.
A limitation of the present array implementation is that individual elements of an array cannot be SQL
null values. The entire array can be set to null, but you can’t have an array with some elements null and
some not.
For example, two rows might be inserted and then displayed like this:
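(A sketch; the names and values here are illustrative, chosen to be consistent with the query results shown
later in this section.)

INSERT INTO sal_emp VALUES ('Bill',
    ARRAY[10000, 10000, 10000, 10000],
    ARRAY[['meeting', 'lunch'], ['training', 'presentation']]);

INSERT INTO sal_emp VALUES ('Carol',
    ARRAY[20000, 25000, 25000, 25000],
    ARRAY[['breakfast', 'consulting'], ['meeting', 'lunch']]);

SELECT * FROM sal_emp;
 name  |      pay_by_quarter       |                 schedule
-------+---------------------------+-------------------------------------------
 Bill  | {10000,10000,10000,10000} | {{meeting,lunch},{training,presentation}}
 Carol | {20000,25000,25000,25000} | {{breakfast,consulting},{meeting,lunch}}
(2 rows)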
Notice that the array elements are ordinary SQL constants or expressions; for instance, string literals are
single quoted, instead of double quoted as they would be in an array literal. The ARRAY constructor syntax
is discussed in more detail in Section 4.2.10.
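For instance, a query along these lines (a sketch, assuming the sample data above) retrieves the names of
employees whose pay changed between the first and second quarter, producing the output shown next:

SELECT name FROM sal_emp WHERE pay_by_quarter[1] <> pay_by_quarter[2];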
name
-------
Carol
(1 row)
The array subscript numbers are written within square brackets. By default PostgreSQL uses the one-
based numbering convention for arrays, that is, an array of n elements starts with array[1] and ends
with array[n].
This query retrieves the third quarter pay of all employees:
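(A sketch, assuming the table above:)

SELECT pay_by_quarter[3] FROM sal_emp;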
pay_by_quarter
----------------
10000
25000
(2 rows)
We can also access arbitrary rectangular slices of an array, or subarrays. An array slice is denoted by writ-
ing lower-bound:upper-bound for one or more array dimensions. For example, this query retrieves
the first item on Bill’s schedule for the first two days of the week:
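(A sketch:)

SELECT schedule[1:2][1:1] FROM sal_emp WHERE name = 'Bill';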
schedule
------------------------
{{meeting},{training}}
(1 row)
with the same result. An array subscripting operation is always taken to represent an array slice if any of
the subscripts are written in the form lower:upper . A lower bound of 1 is assumed for any subscript
where only one value is specified, as in this example:
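(A sketch: the second subscript is written as a single value, so it is read as 1:2.)

SELECT schedule[1:2][2] FROM sal_emp WHERE name = 'Bill';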
schedule
-------------------------------------------
{{meeting,lunch},{training,presentation}}
(1 row)
The current dimensions of any array value can be retrieved with the array_dims function:
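(A sketch; which row is queried here is illustrative:)

SELECT array_dims(schedule) FROM sal_emp WHERE name = 'Carol';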
array_dims
------------
[1:2][1:1]
(1 row)
array_dims produces a text result, which is convenient for people to read but perhaps not so convenient
for programs. Dimensions can also be retrieved with array_upper and array_lower, which return the
upper and lower bound of a specified array dimension, respectively.
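(A sketch:)

SELECT array_upper(schedule, 1) FROM sal_emp WHERE name = 'Carol';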
array_upper
-------------
2
(1 row)
An array value can be replaced completely, updated at a single element, or updated in a slice:
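(Sketched examples, using the sal_emp table from above with illustrative values:)

UPDATE sal_emp SET pay_by_quarter = '{25000,25000,27000,27000}' WHERE name = 'Carol';  -- whole array
UPDATE sal_emp SET pay_by_quarter[4] = 15000 WHERE name = 'Bill';                      -- one element
UPDATE sal_emp SET pay_by_quarter[1:2] = '{27000,27000}' WHERE name = 'Carol';         -- a slice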
A stored array value can be enlarged by assigning to an element adjacent to those already present, or by
assigning to a slice that is adjacent to or overlaps the data already present. For example, if array myarray
currently has 4 elements, it will have five elements after an update that assigns to myarray[5]. Currently,
enlargement in this fashion is only allowed for one-dimensional arrays, not multidimensional arrays.
Array slice assignment allows creation of arrays that do not use one-based subscripts. For example one
might assign to myarray[-2:7] to create an array with subscript values running from -2 to 7.
New array values can also be constructed by using the concatenation operator, ||.
The concatenation operator allows a single element to be pushed onto the beginning or end of a
one-dimensional array. It also accepts two N-dimensional arrays, or an N-dimensional and an
N+1-dimensional array.
When a single element is pushed on to the beginning of a one-dimensional array, the result is an array
with a lower bound subscript equal to the right-hand operand’s lower bound subscript, minus one. When
a single element is pushed on to the end of a one-dimensional array, the result is an array retaining the
lower bound of the left-hand operand. For example:
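(Sketches illustrating the rule; array_dims shows the resulting bounds:)

SELECT array_dims(1 || ARRAY[2,3]);     -- [0:2]
SELECT array_dims(ARRAY[1,2] || 3);     -- [1:3]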
When two arrays with an equal number of dimensions are concatenated, the result retains the lower bound
subscript of the left-hand operand’s outer dimension. The result is an array comprising every element of
the left-hand operand followed by every element of the right-hand operand. For example:
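(A sketch:)

SELECT array_dims(ARRAY[1,2] || ARRAY[3,4,5]);                      -- [1:5]
SELECT array_dims(ARRAY[[1,2],[3,4]] || ARRAY[[5,6],[7,8],[9,0]]);  -- [1:5][1:2]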
When an N-dimensional array is pushed onto the beginning or end of an N+1-dimensional array, the result
is analogous to the element-array case above. Each N-dimensional sub-array is essentially an element of
the N+1-dimensional array’s outer dimension. For example:
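(A sketch:)

SELECT array_dims(ARRAY[1,2] || ARRAY[[3,4],[5,6]]);   -- [0:2][1:2]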
The concatenation functions array_prepend, array_append, and array_cat are primarily for use in
implementing the concatenation operator. However, they may be directly useful in the creation of
user-defined aggregates. Some examples:
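(Sketches; the values are illustrative:)

SELECT array_append(ARRAY[1,2], 3);         -- {1,2,3}
SELECT array_cat(ARRAY[1,2], ARRAY[3,4]);   -- {1,2,3,4}
SELECT array_prepend(0, ARRAY[1,2]);        -- an array containing 0,1,2 (its lower bound is 0)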
To search for a value in an array, each element must be examined in turn; this can be done by hand if you
know the size of the array, by testing pay_by_quarter[1], pay_by_quarter[2], and so on individually.
However, this quickly becomes tedious for large arrays, and is not helpful if the size of the array is
uncertain. An alternative method is described in Section 9.17. A query written with explicit subscripts
could be replaced by:
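(A sketch:)

SELECT * FROM sal_emp WHERE 10000 = ANY (pay_by_quarter);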
In addition, you could find rows where the array had all values equal to 10000 with:
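(A sketch:)

SELECT * FROM sal_emp WHERE 10000 = ALL (pay_by_quarter);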
Tip: Arrays are not sets; searching for specific array elements may be a sign of database misdesign.
Consider using a separate table with a row for each item that would be an array element. This will be
easier to search, and is likely to scale up better to large numbers of elements.
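When an array has non-default lower bounds, its output includes a dimension decoration of the form
[lower:upper] for each dimension, followed by an equal sign, before the array contents. For example
(a sketch producing the output shown next):

SELECT 1 || ARRAY[2,3] AS "array";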
array
---------------
[0:2]={1,2,3}
(1 row)
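(Similarly, a sketch producing the two-dimensional output shown next:)

SELECT ARRAY[1,2] || ARRAY[[3,4]] AS "array";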
array
--------------------------
[0:1][1:2]={{1,2},{3,4}}
(1 row)
This syntax can also be used to specify non-default array subscripts in an array literal. For example:
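(A sketch, using an illustrative literal whose dimensions are [1:1][-2:-1][3:5]; the two elements selected
are the first and the last of its six values, matching the output below.)

SELECT f1[1][-2][3] AS e1, f1[1][-1][5] AS e2
  FROM (SELECT '[1:1][-2:-1][3:5]={{{1,2,3},{4,5,6}}}'::int[] AS f1) AS ss;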
e1 | e2
----+----
1 | 6
(1 row)
As shown previously, when writing an array value you may write double quotes around any individual ar-
ray element. You must do so if the element value would otherwise confuse the array-value parser. For ex-
ample, elements containing curly braces, commas (or whatever the delimiter character is), double quotes,
backslashes, or leading or trailing whitespace must be double-quoted. To put a double quote or backslash
in a quoted array element value, precede it with a backslash. Alternatively, you can use backslash-escaping
to protect all data characters that would otherwise be taken as array syntax.
You may write whitespace before a left brace or after a right brace. You may also write whitespace be-
fore or after any individual item string. In all of these cases the whitespace will be ignored. However,
whitespace within double-quoted elements, or surrounded on both sides by non-whitespace characters of
an element, is not ignored.
Note: Remember that what you write in an SQL command will first be interpreted as a string literal,
and then as an array. This doubles the number of backslashes you need. For example, to insert a
text array value containing a backslash and a double quote, you’d need to write
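(A sketch; the table and column names are hypothetical:)

INSERT INTO mytab (textarr) VALUES ('{"\\\\","\\""}');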
The string-literal processor removes one level of backslashes, so that what arrives at the array-value
parser looks like {"\\","\""}. In turn, the strings fed to the text data type’s input routine become \
and " respectively. (If we were working with a data type whose input routine also treated backslashes
specially, bytea for example, we might need as many as eight backslashes in the command to get
one backslash into the stored array element.) Dollar quoting (see Section 4.1.2.2) may be used to
avoid the need to double backslashes.
Tip: The ARRAY constructor syntax (see Section 4.2.10) is often easier to work with than the array-
literal syntax when writing array values in SQL commands. In ARRAY, individual element values are
written the same way they would be written when not members of an array.
8.11. Composite Types
A composite type describes the structure of a row or record; it is essentially just a list of field names and
their data types. Composite types are created with the CREATE TYPE ... AS command, as sketched below.
The syntax is comparable to CREATE TABLE, except that only field names and types can be specified; no
constraints (such as NOT NULL) can presently be included. Note that the AS keyword is essential; without
it, the system will think a quite different kind of CREATE TYPE command is meant, and you’ll get odd
syntax errors.
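(A sketch; the field names of the second type are illustrative, chosen to be consistent with the values
shown later in this section:)

CREATE TYPE complex AS (
    r       double precision,
    i       double precision
);

CREATE TYPE inventory_item AS (
    name            text,
    supplier_id     integer,
    price           numeric
);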
Having defined the types, we can use them to create tables:
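(A sketch; the table name and inserted values are illustrative:)

CREATE TABLE on_hand (
    item    inventory_item,
    count   integer
);

INSERT INTO on_hand VALUES (ROW('fuzzy dice', 42, 1.99), 1000);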
or functions:
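(A sketch; the function name and body are illustrative:)

CREATE FUNCTION price_extension(inventory_item, integer) RETURNS numeric
    AS 'SELECT $1.price * $2' LANGUAGE SQL;

SELECT price_extension(item, 10) FROM on_hand;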
Whenever you create a table, a composite type is also automatically created, with the same name as the
table, to represent the table’s row type. For example, had we said
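(A sketch mirroring the composite type above; the constraint is illustrative:)

CREATE TABLE inventory_item (
    name            text,
    supplier_id     integer,
    price           numeric CHECK (price > 0)
);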
then the same inventory_item composite type shown above would come into being as a byproduct, and
could be used just as above. Note however an important restriction of the current implementation: since
no constraints are associated with a composite type, the constraints shown in the table definition do not
apply to values of the composite type outside the table. (A partial workaround is to use domain types as
members of composite types.)
An example is
’("fuzzy dice",42,1.99)’
which would be a valid value of the inventory_item type defined above. To make a field be NULL,
write no characters at all in its position in the list. For example, this constant specifies a NULL third field:
’("fuzzy dice",42,)’
If you want an empty string rather than NULL, write double quotes:
’("",42,)’
Here the first field is a non-NULL empty string, the third is NULL.
(These constants are actually only a special case of the generic type constants discussed in Section 4.1.2.5.
The constant is initially treated as a string and passed to the composite-type input conversion routine. An
explicit type specification might be necessary.)
The ROW expression syntax may also be used to construct composite values. In most cases this is consid-
erably simpler to use than the string-literal syntax, since you don’t have to worry about multiple layers of
quoting. We already used this method above:
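(For instance:)

ROW('fuzzy dice', 42, 1.99)
ROW('', 42, NULL)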
The ROW keyword is actually optional as long as you have more than one field in the expression, so these
can simplify to
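(That is, for the values above:)

('fuzzy dice', 42, 1.99)
('', 42, NULL)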
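To access a field of a composite column, one writes a dot and the field name, much as when selecting a
field from a table name. For example, one might try to select a subfield from the on_hand table sketched
earlier with something like:

SELECT item.name FROM on_hand WHERE item.price > 9.99;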
This will not work since the name item is taken to be a table name, not a field name, per SQL syntax
rules. You must write it like this:
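(A sketch:)

SELECT (item).name FROM on_hand WHERE (item).price > 9.99;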
or if you need to use the table name as well (for instance in a multi-table query), like this:
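(A sketch:)

SELECT (on_hand.item).name FROM on_hand WHERE (on_hand.item).price > 9.99;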
Now the parenthesized object is correctly interpreted as a reference to the item column, and then the
subfield can be selected from it.
Similar syntactic issues apply whenever you select a field from a composite value. For instance, to select
just one field from the result of a function that returns a composite value, you’d need to write something
like
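(A sketch; some_func here is a hypothetical function returning a composite value:)

SELECT (some_func(c)).name FROM some_table;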
As noted above, such values can be written with or without the ROW keyword; either way works.
We can update an individual subfield of a composite column:
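(A sketch, again using the on_hand table:)

UPDATE on_hand SET item.price = (item).price + 1 WHERE count > 100;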
Notice here that we don’t need to (and indeed cannot) put parentheses around the column name appearing
just after SET, but we do need parentheses when referencing the same column in the expression to the
right of the equal sign.
And we can specify subfields as targets for INSERT, too:
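(A sketch:)

INSERT INTO on_hand (item.name, item.price) VALUES ('brass gear', 4.99);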
Had we not supplied values for all the subfields of the column, the remaining subfields would have been
filled with null values.
part of the field value, and may or may not be significant depending on the input conversion rules for the
field data type. For example, in
’( 42)’
the whitespace will be ignored if the field type is integer, but not if it is text.
As shown previously, when writing a composite value you may write double quotes around any individual
field value. You must do so if the field value would otherwise confuse the composite-value parser. In
particular, fields containing parentheses, commas, double quotes, or backslashes must be double-quoted.
To put a double quote or backslash in a quoted composite field value, precede it with a backslash. (Also,
a pair of double quotes within a double-quoted field value is taken to represent a double quote character,
analogously to the rules for single quotes in SQL literal strings.) Alternatively, you can use backslash-
escaping to protect all data characters that would otherwise be taken as composite syntax.
A completely empty field value (no characters at all between the commas or parentheses) represents a
NULL. To write a value that is an empty string rather than NULL, write "".
The composite output routine will put double quotes around field values if they are empty strings or
contain parentheses, commas, double quotes, backslashes, or white space. (Doing so for white space is
not essential, but aids legibility.) Double quotes and backslashes embedded in field values will be doubled.
Note: Remember that what you write in an SQL command will first be interpreted as a string literal,
and then as a composite. This doubles the number of backslashes you need. For example, to insert a
text field containing a double quote and a backslash in a composite value, you’d need to write
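(A sketch; mytab and its single composite-typed column are hypothetical:)

INSERT INTO mytab (complex_col) VALUES ('("\\"\\\\")');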
The string-literal processor removes one level of backslashes, so that what arrives at the composite-
value parser looks like ("\"\\"). In turn, the string fed to the text data type’s input routine becomes
"\. (If we were working with a data type whose input routine also treated backslashes specially, bytea
for example, we might need as many as eight backslashes in the command to get one backslash into
the stored composite field.) Dollar quoting (see Section 4.1.2.2) may be used to avoid the need to
double backslashes.
Tip: The ROW constructor syntax is usually easier to work with than the composite-literal syntax when
writing composite values in SQL commands. In ROW, individual field values are written the same way
they would be written when not members of a composite.
8.12. Object Identifier Types
Object identifiers (OIDs) are used internally by PostgreSQL as primary keys for various system tables.
Type oid represents an object identifier; there are also several alias types for oid, named regproc,
regprocedure, regoper, regoperator, regclass, and regtype.
The oid type is currently implemented as an unsigned four-byte integer. Therefore, it is not large enough
to provide database-wide uniqueness in large databases, or even in large individual tables. So, using a
user-created table’s OID column as a primary key is discouraged. OIDs are best used only for references
to system tables.
Note: OIDs are included by default in user-created tables in PostgreSQL 8.0.0. However, this be-
havior is likely to change in a future version of PostgreSQL. Eventually, user-created tables will not
include an OID system column unless WITH OIDS is specified when the table is created, or the
default_with_oids configuration variable is set to true. If your application requires the presence
of an OID system column in a table, it should specify WITH OIDS when that table is created to ensure
compatibility with future releases of PostgreSQL.
The oid type itself has few operations beyond comparison. It can be cast to integer, however, and then
manipulated using the standard integer operators. (Beware of possible signed-versus-unsigned confusion
if you do this.)
The OID alias types have no operations of their own except for specialized input and output routines.
These routines are able to accept and display symbolic names for system objects, rather than the raw
numeric value that type oid would use. The alias types allow simplified lookup of OID values for objects.
For example, to examine the pg_attribute rows related to a table mytable, one could write
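(A sketch:)

SELECT * FROM pg_attribute WHERE attrelid = 'mytable'::regclass;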
rather than
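(A sketch:)

SELECT * FROM pg_attribute
 WHERE attrelid = (SELECT oid FROM pg_class WHERE relname = 'mytable');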
While that doesn’t look all that bad by itself, it’s still oversimplified. A far more complicated sub-select
would be needed to select the right OID if there are multiple tables named mytable in different schemas.
The regclass input converter handles the table lookup according to the schema path setting, and so it
does the “right thing” automatically. Similarly, casting a table’s OID to regclass is handy for symbolic
display of a numeric OID.
All of the OID alias types accept schema-qualified names, and will display schema-qualified names on
output if the object would not be found in the current search path without being qualified. The regproc
and regoper alias types will only accept input names that are unique (not overloaded), so they are of
limited use; for most uses regprocedure or regoperator is more appropriate. For regoperator,
unary operators are identified by writing NONE for the unused operand.
Another identifier type used by the system is xid, or transaction (abbreviated xact) identifier. This is the
data type of the system columns xmin and xmax. Transaction identifiers are 32-bit quantities.
A third identifier type used by the system is cid, or command identifier. This is the data type of the system
columns cmin and cmax. Command identifiers are also 32-bit quantities.
A final identifier type used by the system is tid, or tuple identifier (row identifier). This is the data type
of the system column ctid. A tuple ID is a pair (block number, tuple index within block) that identifies
the physical location of the row within its table.
(The system columns are further explained in Section 5.4.)
8.13. Pseudo-Types
The PostgreSQL type system contains a number of special-purpose entries that are collectively called
pseudo-types. A pseudo-type cannot be used as a column data type, but it can be used to declare a func-
tion’s argument or result type. Each of the available pseudo-types is useful in situations where a function’s
behavior does not correspond to simply taking or returning a value of a specific SQL data type. Table 8-20
lists the existing pseudo-types.
Name Description
any Indicates that a function accepts any input data type
whatever.
anyarray Indicates that a function accepts any array data type
(see Section 31.2.5).
anyelement Indicates that a function accepts any data type (see
Section 31.2.5).
cstring Indicates that a function accepts or returns a
null-terminated C string.
internal Indicates that a function accepts or returns a
server-internal data type.
language_handler A procedural language call handler is declared to
return language_handler.
record Identifies a function returning an unspecified row
type.
trigger A trigger function is declared to return trigger.
void Indicates that a function returns no value.
opaque An obsolete type name that formerly served all the
above purposes.
Functions coded in C (whether built-in or dynamically loaded) may be declared to accept or return any of
these pseudo data types. It is up to the function author to ensure that the function will behave safely when
a pseudo-type is used as an argument type.
Functions coded in procedural languages may use pseudo-types only as allowed by their implementation
languages. At present the procedural languages all forbid use of a pseudo-type as argument type, and allow
only void and record as a result type (plus trigger when the function is used as a trigger). Some also
support polymorphic functions using the types anyarray and anyelement.
The internal pseudo-type is used to declare functions that are meant only to be called internally by the
database system, and not by direct invocation in a SQL query. If a function has at least one internal-type
argument then it cannot be called from SQL. To preserve the type safety of this restriction it is important
to follow this coding rule: do not create any function that is declared to return internal unless it has at
least one internal argument.
Chapter 9. Functions and Operators
PostgreSQL provides a large number of functions and operators for the built-in data types. Users can also
define their own functions and operators, as described in Part V. The psql commands \df and \do can be
used to show the list of all actually available functions and operators, respectively.
If you are concerned about portability then take note that most of the functions and operators described
in this chapter, with the exception of the most trivial arithmetic and comparison operators and some
explicitly marked functions, are not specified by the SQL standard. Some of the extended functionality is
present in other SQL database management systems, and in many cases this functionality is compatible
and consistent between the various implementations.
9.1. Logical Operators
The usual logical operators are available:
AND
OR
NOT
SQL uses a three-valued Boolean logic where the null value represents “unknown”. Observe the following
truth tables:
a b a AND b a OR b
TRUE TRUE TRUE TRUE
TRUE FALSE FALSE TRUE
TRUE NULL NULL TRUE
FALSE FALSE FALSE FALSE
FALSE NULL FALSE NULL
NULL NULL NULL NULL
a NOT a
TRUE FALSE
FALSE TRUE
NULL NULL
The operators AND and OR are commutative, that is, you can switch the left and right operand without
affecting the result. But see Section 4.2.12 for more information about the order of evaluation of subex-
pressions.
9.2. Comparison Operators
The usual comparison operators are available, as listed below.
Operator Description
< less than
> greater than
<= less than or equal to
>= greater than or equal to
= equal
<> or != not equal
Note: The != operator is converted to <> in the parser stage. It is not possible to implement != and
<> operators that do different things.
Comparison operators are available for all data types where this makes sense. All comparison operators are
binary operators that return values of type boolean; expressions like 1 < 2 < 3 are not valid (because
there is no < operator to compare a Boolean value with 3).
In addition to the comparison operators, the special BETWEEN construct is available.
a BETWEEN x AND y
is equivalent to
a >= x AND a <= y
Similarly,
a NOT BETWEEN x AND y
is equivalent to
a < x OR a > y
There is no difference between the two respective forms apart from the CPU cycles required to rewrite the
first one into the second one internally.
To check whether a value is or is not null, use the constructs
expression IS NULL
expression IS NOT NULL
or the equivalent, but nonstandard, constructs
expression ISNULL
expression NOTNULL
Do not write expression = NULL because NULL is not “equal to” NULL. (The null value represents an
unknown value, and it is not known whether two unknown values are equal.) This behavior conforms to
the SQL standard.
Tip: Some applications may expect that expression = NULL returns true if expression evaluates
to the null value. It is highly recommended that these applications be modified to comply with the
SQL standard. However, if that cannot be done the transform_null_equals configuration variable is
available. If it is enabled, PostgreSQL will convert x = NULL clauses to x IS NULL. This was the
default behavior in PostgreSQL releases 6.5 through 7.1.
The ordinary comparison operators yield null (signifying “unknown”) when either input is null. Another
way to do comparisons is with the IS DISTINCT FROM construct:
expression IS DISTINCT FROM expression
For non-null inputs this is the same as the <> operator. However, when both inputs are null it will return
false, and when just one input is null it will return true. Thus it effectively acts as though null were a
normal data value, rather than “unknown”.
Boolean values can also be tested using the constructs
expression IS TRUE
expression IS NOT TRUE
expression IS FALSE
expression IS NOT FALSE
expression IS UNKNOWN
expression IS NOT UNKNOWN
These will always return true or false, never a null value, even when the operand is null. A null input
is treated as the logical value “unknown”. Notice that IS UNKNOWN and IS NOT UNKNOWN are effec-
tively the same as IS NULL and IS NOT NULL, respectively, except that the input expression must be of
Boolean type.
9.3. Mathematical Functions and Operators
Mathematical operators are provided for many PostgreSQL types; Table 9-2 shows the available
mathematical operators.
The bitwise operators work only on integral data types, whereas the others are available for all numeric
data types. The bitwise operators are also available for the bit string types bit and bit varying, as
shown in Table 9-10.
Table 9-3 shows the available mathematical functions. In the table, dp indicates double precision.
Many of these functions are provided in multiple forms with different argument types. Except where
noted, any given form of a function returns the same data type as its argument. The functions working
with double precision data are mostly implemented on top of the host system’s C library; accuracy
and behavior in boundary cases may therefore vary depending on the host system.
Finally, Table 9-4 shows the available trigonometric functions. All trigonometric functions take arguments
and return values of type double precision.
Function Description
acos(x) inverse cosine
asin(x) inverse sine
atan(x) inverse tangent
atan2(x, y) inverse tangent of x/y
cos(x) cosine
cot(x) cotangent
sin(x) sine
tan(x) tangent
9.4. String Functions and Operators
This section describes functions and operators for examining and manipulating string values.
Additional string manipulation functions are available and are listed in Table 9-6. Some of them are used
internally to implement the SQL-standard string functions listed in Table 9-5.
9.5. Binary String Functions and Operators
This section describes functions and operators for examining and manipulating binary strings (values of
type bytea). For example, the SQL-standard function octet_length(string) returns the number of
bytes in a binary string: octet_length(’jo\\000se’::bytea) is 5.
Additional binary string manipulation functions are available and are listed in Table 9-9. Some of them
are used internally to implement the SQL-standard string functions listed in Table 9-8.
9.6. Bit String Functions and Operators
This section describes functions and operators for examining and manipulating bit strings, that is, values
of the types bit and bit varying.
The following SQL-standard functions work on bit strings as well as character strings: length,
bit_length, octet_length, position, substring.
In addition, it is possible to cast integral values to and from type bit. Some examples:
44::bit(10) 0000101100
44::bit(3) 100
cast(-44 as bit(12)) 111111010100
’1110’::bit(4)::integer 14
Note that casting to just “bit” means casting to bit(1), and so it will deliver only the least significant bit
of the integer.
Note: Prior to PostgreSQL 8.0, casting an integer to bit(n) would copy the leftmost n bits of the
integer, whereas now it copies the rightmost n bits. Also, casting an integer to a bit string width wider
than the integer itself will sign-extend on the left.
9.7. Pattern Matching
There are three separate approaches to pattern matching provided by PostgreSQL: the traditional SQL
LIKE operator, the more recent SIMILAR TO operator (added in SQL:1999), and POSIX-style regular
expressions.
Tip: If you have pattern matching needs that go beyond this, consider writing a user-defined function
in Perl or Tcl.
9.7.1. LIKE
string LIKE pattern [ESCAPE escape-character]
string NOT LIKE pattern [ESCAPE escape-character]
Every pattern defines a set of strings. The LIKE expression returns true if the string is contained in
the set of strings represented by pattern. (As expected, the NOT LIKE expression returns false if LIKE
returns true, and vice versa. An equivalent expression is NOT (string LIKE pattern).)
If pattern does not contain percent signs or underscore, then the pattern only represents the string itself;
in that case LIKE acts like the equals operator. An underscore (_) in pattern stands for (matches) any
single character; a percent sign (%) matches any string of zero or more characters.
Some examples:
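(Sketched examples; each expression evaluates as noted:)

'abc' LIKE 'abc'    -- true
'abc' LIKE 'a%'     -- true
'abc' LIKE '_b_'    -- true
'abc' LIKE 'c'      -- false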
LIKE pattern matches always cover the entire string. To match a sequence anywhere within a string, the
pattern must therefore start and end with a percent sign.
To match a literal underscore or percent sign without matching other characters, the respective character
in pattern must be preceded by the escape character. The default escape character is the backslash but
a different one may be selected by using the ESCAPE clause. To match the escape character itself, write
two escape characters.
Note that the backslash already has a special meaning in string literals, so to write a pattern constant that
contains a backslash you must write two backslashes in an SQL statement. Thus, writing a pattern that
actually matches a literal backslash means writing four backslashes in the statement. You can avoid this
by selecting a different escape character with ESCAPE; then a backslash is not special to LIKE anymore.
(But it is still special to the string literal parser, so you still need two of them.)
It’s also possible to select no escape character by writing ESCAPE ’’. This effectively disables the escape
mechanism, which makes it impossible to turn off the special meaning of underscore and percent signs in
the pattern.
The key word ILIKE can be used instead of LIKE to make the match case-insensitive according to the
active locale. This is not in the SQL standard but is a PostgreSQL extension.
The operator ~~ is equivalent to LIKE, and ~~* corresponds to ILIKE. There are also !~~ and !~~*
operators that represent NOT LIKE and NOT ILIKE, respectively. All of these operators are PostgreSQL-
specific.
9.7.2. SIMILAR TO Regular Expressions
string SIMILAR TO pattern [ESCAPE escape-character]
string NOT SIMILAR TO pattern [ESCAPE escape-character]
The SIMILAR TO operator returns true or false depending on whether its pattern matches the given string.
It is much like LIKE, except that it interprets the pattern using the SQL standard’s definition of a regular
expression. SQL regular expressions are a curious cross between LIKE notation and common regular
expression notation.
Like LIKE, the SIMILAR TO operator succeeds only if its pattern matches the entire string; this is unlike
common regular expression practice, wherein the pattern may match any part of the string. Also like
LIKE, SIMILAR TO uses _ and % as wildcard characters denoting any single character and any string,
respectively (these are comparable to . and .* in POSIX regular expressions).
In addition to these facilities borrowed from LIKE, SIMILAR TO supports these pattern-matching
metacharacters borrowed from POSIX regular expressions:
The substring function with three parameters, substring(string from pattern for
escape-character), provides extraction of a substring that matches an SQL regular expression
pattern. As with SIMILAR TO, the specified pattern must match to the entire data string, else the function
fails and returns null. To indicate the part of the pattern that should be returned on success, the pattern
must contain two occurrences of the escape character followed by a double quote ("). The text matching
the portion of the pattern between these markers is returned.
Some examples:
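(Sketched examples, using # as the escape character:)

substring('foobar' from '%#"o_b#"%' for '#')    -- oob
substring('foobar' from '#"o_b#"%' for '#')     -- NULL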
9.7.3. POSIX Regular Expressions
POSIX regular expressions provide a more powerful means for pattern matching than the LIKE and
SIMILAR TO operators. Many Unix tools such as egrep, sed, or awk use a pattern matching language
that is similar to the one described here.
A regular expression is a character sequence that is an abbreviated definition of a set of strings (a regular
set). A string is said to match a regular expression if it is a member of the regular set described by the
regular expression. As with LIKE, pattern characters match string characters exactly unless they are special
characters in the regular expression language — but regular expressions use different special characters
than LIKE does. Unlike LIKE patterns, a regular expression is allowed to match anywhere within a string,
unless the regular expression is explicitly anchored to the beginning or end of the string.
Some examples:
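(Sketched examples:)

'abc' ~ 'abc'      -- true
'abc' ~ '^a'       -- true
'abc' ~ '(b|d)'    -- true
'abc' ~ '^(b|c)'   -- false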
The substring function with two parameters, substring(string from pattern), provides extrac-
tion of a substring that matches a POSIX regular expression pattern. It returns null if there is no match,
otherwise the portion of the text that matched the pattern. But if the pattern contains any parentheses,
the portion of the text that matched the first parenthesized subexpression (the one whose left parenthe-
sis comes first) is returned. You can put parentheses around the whole expression if you want to use
parentheses within it without triggering this exception. If you need parentheses in the pattern before the
subexpression you want to extract, see the non-capturing parentheses described below.
Some examples:
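(Sketched examples:)

substring('foobar' from 'o.b')     -- oob
substring('foobar' from 'o(.)b')   -- o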
PostgreSQL’s regular expressions are implemented using a package written by Henry Spencer. Much of
the description of regular expressions below is copied verbatim from his manual entry.
Note: The form of regular expressions accepted by PostgreSQL can be chosen by setting the
regex_flavor run-time parameter. The usual setting is advanced, but one might choose extended for
maximum backwards compatibility with pre-7.4 releases of PostgreSQL.
A regular expression is defined as one or more branches, separated by |. It matches anything that matches
one of the branches.
A branch is zero or more quantified atoms or constraints, concatenated. It matches a match for the first,
followed by a match for the second, etc; an empty branch matches the empty string.
A quantified atom is an atom possibly followed by a single quantifier. Without a quantifier, it matches a
match for the atom. With a quantifier, it can match some number of matches of the atom. An atom can
be any of the possibilities shown in Table 9-12. The possible quantifiers and their meanings are shown in
Table 9-13.
A constraint matches an empty string, but matches only when specific conditions are met. A constraint
can be used where an atom could be used, except it may not be followed by a quantifier. The simple
constraints are shown in Table 9-14; some more constraints are described later.
Atom Description
(re) (where re is any regular expression) matches a
match for re, with the match noted for possible
reporting
(?:re) as above, but the match is not noted for reporting (a
“non-capturing” set of parentheses) (AREs only)
. matches any single character
[chars] a bracket expression, matching any one of the
chars (see Section 9.7.3.2 for more detail)
\k (where k is a non-alphanumeric character) matches
that character taken as an ordinary character, e.g. \\
matches a backslash character
\c where c is alphanumeric (possibly followed by
other characters) is an escape, see Section 9.7.3.3
(AREs only; in EREs and BREs, this matches c)
{ when followed by a character other than a digit,
matches the left-brace character {; when followed
by a digit, it is the beginning of a bound (see
below)
x where x is a single character with no other
significance, matches that character
Note: Remember that the backslash (\) already has a special meaning in PostgreSQL string literals.
To write a pattern constant that contains a backslash, you must write two backslashes in the statement.
Quantifier Matches
* a sequence of 0 or more matches of the atom
+ a sequence of 1 or more matches of the atom
? a sequence of 0 or 1 matches of the atom
{m} a sequence of exactly m matches of the atom
{m,} a sequence of m or more matches of the atom
{m,n} a sequence of m through n (inclusive) matches of
the atom; m may not exceed n
*? non-greedy version of *
+? non-greedy version of +
?? non-greedy version of ?
{m}? non-greedy version of {m}
{m,}? non-greedy version of {m,}
{m,n}? non-greedy version of {m,n}
The forms using {...} are known as bounds. The numbers m and n within a bound are unsigned decimal
integers with permissible values from 0 to 255 inclusive.
Non-greedy quantifiers (available in AREs only) match the same possibilities as their corresponding nor-
mal (greedy) counterparts, but prefer the smallest number rather than the largest number of matches. See
Section 9.7.3.5 for more detail.
Note: A quantifier cannot immediately follow another quantifier. A quantifier cannot begin an expres-
sion or subexpression or follow ^ or |.
Constraint Description
^ matches at the beginning of the string
$ matches at the end of the string
(?=re) positive lookahead matches at any point where a
substring matching re begins (AREs only)
(?!re) negative lookahead matches at any point where no
substring matching re begins (AREs only)
Lookahead constraints may not contain back references (see Section 9.7.3.3), and all parentheses within
them are considered non-capturing.
Within a bracket expression, a collating element (a character, a multiple-character sequence that collates
as if it were a single character, or a collating-sequence name for either) enclosed in [. and .] stands for the
sequence of characters of that collating element. The sequence is a single element of the bracket expres-
sion’s list. A bracket expression containing a multiple-character collating element can thus match more
than one character, e.g. if the collating sequence includes a ch collating element, then the RE [[.ch.]]*c
matches the first five characters of chchcc.
Note: PostgreSQL currently has no multi-character collating elements. This information describes
possible future behavior.
Within a bracket expression, a collating element enclosed in [= and =] is an equivalence class, standing
for the sequences of characters of all collating elements equivalent to that one, including itself. (If there
are no other equivalent collating elements, the treatment is as if the enclosing delimiters were [. and .].)
For example, if o and ^ are the members of an equivalence class, then [[=o=]], [[=^=]], and [o^] are
all synonymous. An equivalence class may not be an endpoint of a range.
Within a bracket expression, the name of a character class enclosed in [: and :] stands for the list of
all characters belonging to that class. Standard character class names are: alnum, alpha, blank, cntrl,
digit, graph, lower, print, punct, space, upper, xdigit. These stand for the character classes
defined in ctype. A locale may provide others. A character class may not be used as an endpoint of a
range.
There are two special cases of bracket expressions: the bracket expressions [[:<:]] and [[:>:]] are
constraints, matching empty strings at the beginning and end of a word respectively. A word is defined as
a sequence of word characters that is neither preceded nor followed by word characters. A word character
is an alnum character (as defined by ctype) or an underscore. This is an extension, compatible with but
not specified by POSIX 1003.2, and should be used with caution in software intended to be portable to
other systems. The constraint escapes described below are usually preferable (they are no more standard,
but are certainly easier to type).
Note: Keep in mind that an escape’s leading \ will need to be doubled when entering the pattern as
an SQL string constant. For example:
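(A sketch:)

'123' ~ '^\\d{3}'    -- true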
Escape Description
\a alert (bell) character, as in C
\b backspace, as in C
\B synonym for \ to help reduce the need for
backslash doubling
\cX (where X is any character) the character whose
low-order 5 bits are the same as those of X, and
whose other bits are all zero
\e the character whose collating-sequence name is
ESC, or failing that, the character with octal value
033
\f form feed, as in C
\n newline, as in C
\r carriage return, as in C
\t horizontal tab, as in C
\uwxyz (where wxyz is exactly four hexadecimal digits)
the Unicode character U+wxyz in the local byte
ordering
\Ustuvwxyz (where stuvwxyz is exactly eight hexadecimal
digits) reserved for a somewhat-hypothetical
Unicode extension to 32 bits
\v vertical tab, as in C
\xhhh (where hhh is any sequence of hexadecimal digits)
the character whose hexadecimal value is 0xhhh (a
single character no matter how many hexadecimal
digits are used)
\0 the character whose value is 0
\xy (where xy is exactly two octal digits, and is not a
back reference) the character whose octal value is
0xy
\xyz (where xyz is exactly three octal digits, and is not
a back reference) the character whose octal value is
0xyz
Hexadecimal digits are 0-9, a-f, and A-F. Octal digits are 0-7.
The character-entry escapes are always taken as ordinary characters. For example, \135 is ] in ASCII,
but \135 does not terminate a bracket expression.
Escape Description
\d [[:digit:]]
\s [[:space:]]
\w [[:alnum:]_] (note underscore is included)
\D [^[:digit:]]
\S [^[:space:]]
\W [^[:alnum:]_] (note underscore is included)
Within bracket expressions, \d, \s, and \w lose their outer brackets, and \D, \S, and \W are illegal.
(So, for example, [a-c\d] is equivalent to [a-c[:digit:]]. Also, [a-c\D], which is equivalent to
[a-c^[:digit:]], is illegal.)
Escape Description
\A matches only at the beginning of the string (see
Section 9.7.3.5 for how this differs from ^)
\m matches only at the beginning of a word
\M matches only at the end of a word
\y matches only at the beginning or end of a word
\Y matches only at a point that is not the beginning or
end of a word
\Z matches only at the end of the string (see Section
9.7.3.5 for how this differs from $)
A word is defined as in the specification of [[:<:]] and [[:>:]] above. Constraint escapes are illegal
within bracket expressions.
Escape Description
\m (where m is a nonzero digit) a back reference to the
m’th subexpression
\mnn (where m is a nonzero digit, and nn is some more
digits, and the decimal value mnn is not greater than
the number of closing capturing parentheses seen so
far) a back reference to the mnn’th subexpression
Note: There is an inherent historical ambiguity between octal character-entry escapes and back ref-
erences, which is resolved by heuristics, as hinted at above. A leading zero always indicates an octal
escape. A single non-zero digit, not followed by another digit, is always taken as a back reference. A
multi-digit sequence not starting with a zero is taken as a back reference if it comes after a suitable
subexpression (i.e. the number is in the legal range for a back reference), and otherwise is taken as
octal.
Option Description
b rest of RE is a BRE
c case-sensitive matching (overrides operator type)
e rest of RE is an ERE
i case-insensitive matching (see Section 9.7.3.5)
(overrides operator type)
m historical synonym for n
n newline-sensitive matching (see Section 9.7.3.5)
p partial newline-sensitive matching (see Section
9.7.3.5)
q rest of RE is a literal (“quoted”) string, all ordinary
characters
s non-newline-sensitive matching (default)
t tight syntax (default; see below)
w inverse partial newline-sensitive (“weird”)
matching (see Section 9.7.3.5)
x expanded syntax (see below)
Embedded options take effect at the ) terminating the sequence. They may appear only at the start of an
ARE (after the ***: director if any).
In addition to the usual (tight) RE syntax, in which all characters are significant, there is an expanded
syntax, available by specifying the embedded x option. In the expanded syntax, white-space characters in
the RE are ignored, as are all characters between a # and the following newline (or the end of the RE).
This permits paragraphing and commenting a complex RE. There are three exceptions to that basic rule: a
white-space character or # preceded by \ is retained; white space or # within a bracket expression is
retained; and white space and comments cannot appear within multi-character symbols, such as (?:.
Whether an RE (or a part of one) is greedy, preferring the longest possible match, or non-greedy,
preferring the shortest, is determined by the following rules:
• Most atoms, and all constraints, have no greediness attribute (because they cannot match variable
amounts of text anyway).
• Adding parentheses around an RE does not change its greediness.
• A quantified atom with a fixed-repetition quantifier ({m} or {m}?) has the same greediness (possibly
none) as the atom itself.
• A quantified atom with other normal quantifiers (including {m,n} with m equal to n) is greedy (prefers
longest match).
• A quantified atom with a non-greedy quantifier (including {m,n}? with m equal to n) is non-greedy
(prefers shortest match).
• A branch — that is, an RE that has no top-level | operator — has the same greediness as the first
quantified atom in it that has a greediness attribute.
• An RE consisting of two or more branches connected by the | operator is always greedy.
The above rules associate greediness attributes not only with individual quantified atoms, but with
branches and entire REs that contain quantified atoms. What that means is that the matching is done in
such a way that the branch, or whole RE, matches the longest or shortest possible substring as a whole.
Once the length of the entire match is determined, the part of it that matches any particular subexpression
is determined on the basis of the greediness attribute of that subexpression, with subexpressions starting
earlier in the RE taking priority over ones starting later.
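(Sketched examples, with an illustrative input string consistent with the discussion that follows:)

SELECT SUBSTRING('XY1234Z', 'Y*([0-9]{1,3})');    -- 123
SELECT SUBSTRING('XY1234Z', 'Y*?([0-9]{1,3})');   -- 1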
In the first case, the RE as a whole is greedy because Y* is greedy. It can match beginning at the Y, and it
matches the longest possible string starting there, i.e., Y123. The output is the parenthesized part of that,
or 123. In the second case, the RE as a whole is non-greedy because Y*? is non-greedy. It can match
beginning at the Y, and it matches the shortest possible string starting there, i.e., Y1. The subexpression
[0-9]{1,3} is greedy but it cannot change the decision as to the overall match length; so it is forced to
match just 1.
In short, when an RE contains both greedy and non-greedy subexpressions, the total match length is
either as long as possible or as short as possible, according to the attribute assigned to the whole RE. The
attributes assigned to the subexpressions only affect how much of that match they are allowed to “eat”
relative to each other.
The quantifiers {1,1} and {1,1}? can be used to force greediness or non-greediness, respectively, on a
subexpression or a whole RE.
Match lengths are measured in characters, not collating elements. An empty string is considered
longer than no match at all. For example: bb* matches the three middle characters of abbbc;
(week|wee)(night|knights) matches all ten characters of weeknights; when (.*).* is matched
against abc the parenthesized subexpression matches all three characters; and when (a*)* is matched
against bc both the whole RE and the parenthesized subexpression match an empty string.
If case-independent matching is specified, the effect is much as if all case distinctions had vanished from
the alphabet. When an alphabetic that exists in multiple cases appears as an ordinary character outside
a bracket expression, it is effectively transformed into a bracket expression containing both cases, e.g. x
becomes [xX]. When it appears inside a bracket expression, all case counterparts of it are added to the
bracket expression, e.g. [x] becomes [xX] and [^x] becomes [^xX].
If newline-sensitive matching is specified, . and bracket expressions using ^ will never match the newline
character (so that matches will never cross newlines unless the RE explicitly arranges it) and ^ and $ will
match the empty string after and before a newline respectively, in addition to matching at beginning and
end of string respectively. But the ARE escapes \A and \Z continue to match beginning or end of string
only.
If partial newline-sensitive matching is specified, this affects . and bracket expressions as with newline-
sensitive matching, but not ^ and $.
If inverse partial newline-sensitive matching is specified, this affects ^ and $ as with newline-sensitive
matching, but not . and bracket expressions. This isn’t very useful but is provided for symmetry.
The only feature of AREs that is actually incompatible with POSIX EREs is that \ does not lose its
special significance inside bracket expressions. All other ARE features use syntax which is illegal or
has undefined or unspecified effects in POSIX EREs; the *** syntax of directors likewise is outside the
POSIX syntax for both BREs and EREs.
Many of the ARE extensions are borrowed from Perl, but some have been changed to clean them up, and a
few Perl extensions are not present. Incompatibilities of note include \b, \B, the lack of special treatment
for a trailing newline, the addition of complemented bracket expressions to the things affected by newline-
sensitive matching, the restrictions on parentheses and back references in lookahead constraints, and the
longest/shortest-match (rather than first-match) matching semantics.
Two significant incompatibilities exist between AREs and the ERE syntax recognized by pre-7.4 releases
of PostgreSQL:
9.8. Data Type Formatting Functions
The PostgreSQL formatting functions provide a powerful set of tools for converting various data types
(date/time, integer, floating point, numeric) to formatted strings and for converting from formatted strings
to specific data types. These functions all follow a common calling convention: the first argument is the
value to be formatted and the second argument is a template that defines the output or input format.
Warning: to_char(interval, text) is deprecated and should not be used in newly-written code. It will
be removed in the next version.
In an output template string (for to_char), there are certain patterns that are recognized and replaced
with appropriately-formatted data from the value to be formatted. Any text that is not a template pattern
is simply copied verbatim. Similarly, in an input template string (for anything but to_char), template
patterns identify the parts of the input data string to be looked at and the values to be found there.
Table 9-21 shows the template patterns available for formatting date and time values.
Pattern Description
HH hour of day (01-12)
HH12 hour of day (01-12)
HH24 hour of day (00-23)
MI minute (00-59)
SS second (00-59)
MS millisecond (000-999)
US microsecond (000000-999999)
SSSS seconds past midnight (0-86399)
AM or A.M. or PM or P.M. meridian indicator (uppercase)
am or a.m. or pm or p.m. meridian indicator (lowercase)
Y,YYY year (4 and more digits) with comma
YYYY year (4 and more digits)
YYY last 3 digits of year
YY last 2 digits of year
Y last digit of year
IYYY ISO year (4 and more digits)
IYY last 3 digits of ISO year
IY last 2 digits of ISO year
I last digits of ISO year
BC or B.C. or AD or A.D. era indicator (uppercase)
bc or b.c. or ad or a.d. era indicator (lowercase)
MONTH full uppercase month name (blank-padded to 9
chars)
Month full mixed-case month name (blank-padded to 9
chars)
month full lowercase month name (blank-padded to 9
chars)
MON abbreviated uppercase month name (3 chars)
Mon abbreviated mixed-case month name (3 chars)
mon abbreviated lowercase month name (3 chars)
MM month number (01-12)
DAY full uppercase day name (blank-padded to 9 chars)
Day full mixed-case day name (blank-padded to 9 chars)
day full lowercase day name (blank-padded to 9 chars)
DY abbreviated uppercase day name (3 chars)
Dy abbreviated mixed-case day name (3 chars)
dy abbreviated lowercase day name (3 chars)
DDD day of year (001-366)
DD day of month (01-31)
D day of week (1-7; Sunday is 1)
W week of month (1-5) (The first week starts on the
first day of the month.)
WW week number of year (1-53) (The first week starts
on the first day of the year.)
IW ISO week number of year (The first Thursday of the
new year is in week 1.)
CC century (2 digits)
J Julian Day (days since January 1, 4712 BC)
Q quarter
RM month in Roman numerals (I-XII; I=January)
(uppercase)
rm month in Roman numerals (i-xii; i=January)
(lowercase)
TZ time-zone name (uppercase)
tz time-zone name (lowercase)
Certain modifiers may be applied to any template pattern to alter its behavior. For example, FMMonth is
the Month pattern with the FM modifier. Table 9-22 shows the modifier patterns for date/time formatting.
• FM suppresses leading zeroes and trailing blanks that would otherwise be added to make the output of a
pattern be fixed-width.
• to_timestamp and to_date skip multiple blank spaces in the input string if the FX option is not used.
FX must be specified as the first item in the template. For example to_timestamp(’2000 JUN’,
’YYYY MON’) is correct, but to_timestamp(’2000 JUN’, ’FXYYYY MON’) returns an error,
because to_timestamp expects one space only.
• Ordinary text is allowed in to_char templates and will be output literally. You can put a substring
in double quotes to force it to be interpreted as literal text even if it contains pattern key words. For
example, in ’"Hello Year "YYYY’, the YYYY will be replaced by the year data, but the single Y in
Year will not be.
• If you want to have a double quote in the output you must precede it with a backslash, for example
’\\"YYYY Month\\"’. (Two backslashes are necessary because the backslash already has a special
meaning in a string constant.)
• The YYYY conversion from string to timestamp or date has a restriction if you use a year with
more than 4 digits. You must use some non-digit character or template after YYYY, otherwise the
year is always interpreted as 4 digits. For example (with the year 20000): to_date(’200001131’,
’YYYYMMDD’) will be interpreted as a 4-digit year; instead use a non-digit separator after the year, like
to_date(’20000-1131’, ’YYYY-MMDD’) or to_date(’20000Nov31’, ’YYYYMonDD’).
• Millisecond (MS) and microsecond (US) values in a conversion from string to timestamp are used as
part of the seconds after the decimal point. For example to_timestamp(’12:3’, ’SS:MS’) is not 3
milliseconds, but 300, because the conversion counts it as 12 + 0.3 seconds. This means for the format
SS:MS, the input values 12:3, 12:30, and 12:300 specify the same number of milliseconds. To get
three milliseconds, one must use 12:003, which the conversion counts as 12 + 0.003 = 12.003 seconds.
• to_char’s day of the week numbering (see the ’D’ formatting pattern) is different from that of the
extract function.
Table 9-23 shows the template patterns available for formatting numeric values.
Pattern Description
9 value with the specified number of digits
0 value with leading zeros
. (period) decimal point
, (comma) group (thousand) separator
PR negative value in angle brackets
S sign anchored to number (uses locale)
L currency symbol (uses locale)
D decimal point (uses locale)
G group separator (uses locale)
MI minus sign in specified position (if number < 0)
PL plus sign in specified position (if number > 0)
SG plus/minus sign in specified position
RN roman numeral (input between 1 and 3999)
TH or th ordinal number suffix
V shift specified number of digits (see notes)
EEEE scientific notation (not implemented yet)
• A sign formatted using SG, PL, or MI is not anchored to the number; for example, to_char(-12,
’S9999’) produces ’ -12’, but to_char(-12, ’MI9999’) produces ’- 12’. The Oracle im-
plementation does not allow the use of MI ahead of 9, but rather requires that 9 precede MI.
• 9 results in a value with the same number of digits as there are 9s. If a digit is not available it outputs a
space.
• TH does not convert values less than zero and does not convert fractional numbers.
• PL, SG, and TH are PostgreSQL extensions.
• V effectively multiplies the input values by 10^n, where n is the number of digits following V. to_char
does not support the use of V combined with a decimal point. (E.g., 99.9V99 is not allowed.)
Table 9-24 shows some examples of the use of the to_char function.
Expression Result
to_char(current_timestamp, ’Day, DD HH12:MI:SS’) ’Tuesday , 06 05:39:18’
to_char(current_timestamp, ’FMDay, FMDD HH12:MI:SS’) ’Tuesday, 6 05:39:18’
to_char(-0.1, ’99.99’) ’ -.10’
to_char(-0.1, ’FM9.99’) ’-.1’
to_char(0.1, ’0.9’) ’ 0.1’
to_char(12, ’9990999.9’) ’ 0012.0’
to_char(12, ’FM9990999.9’) ’0012.’
to_char(485, ’999’) ’ 485’
to_char(-485, ’999’) ’-485’
to_char(485, ’9 9 9’) ’ 4 8 5’
to_char(1485, ’9,999’) ’ 1,485’
to_char(1485, ’9G999’) ’ 1 485’
to_char(148.5, ’999.999’) ’ 148.500’
to_char(148.5, ’FM999.999’) ’148.5’
to_char(148.5, ’FM999.990’) ’148.500’
to_char(148.5, ’999D999’) ’ 148,500’
to_char(3148.5, ’9G999D999’) ’ 3 148,500’
to_char(-485, ’999S’) ’485-’
to_char(-485, ’999MI’) ’485-’
to_char(485, ’999MI’) ’485 ’
to_char(485, ’FM999MI’) ’485’
to_char(485, ’PL999’) ’+485’
to_char(485, ’SG999’) ’+485’
to_char(-485, ’SG999’) ’-485’
to_char(-485, ’9SG99’) ’4-85’
to_char(-485, ’999PR’) ’<485>’
to_char(485, ’L999’) ’DM 485’
to_char(485, ’RN’) ’ CDLXXXV’
to_char(485, ’FMRN’) ’CDLXXXV’
to_char(5.2, ’FMRN’) ’V’
to_char(482, ’999th’) ’ 482nd’
to_char(485, ’"Good number:"999’) ’Good number: 485’
to_char(485.8, ’"Pre:"999" Post:" .999’) ’Pre: 485 Post: .800’
to_char(12, ’99V999’) ’ 12000’
to_char(12.4, ’99V999’) ’ 12400’
to_char(12.45, ’99V9’) ’ 125’
The SQL OVERLAPS operator yields true when two time periods (defined by their endpoints) overlap, false when they
do not overlap. The endpoints can be specified as pairs of dates, times, or time stamps; or as a date, time,
or time stamp followed by an interval.
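The operator is written with parenthesized endpoint pairs, (start1, end1) OVERLAPS (start2, end2), where an endpoint may also be given as an interval length; for example (illustrative queries in the style of the other examples in this chapter):
SELECT (DATE '2001-02-16', DATE '2001-12-21') OVERLAPS
       (DATE '2001-10-30', DATE '2002-10-30');
Result: true
SELECT (DATE '2001-02-16', INTERVAL '100 days') OVERLAPS
       (DATE '2001-10-30', DATE '2002-10-30');
Result: false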
9.9.1. EXTRACT, date_part
EXTRACT(field FROM source)
The extract function retrieves subfields such as year or hour from date/time values. source must be
a value expression of type timestamp, time, or interval. (Expressions of type date will be cast to
timestamp and can therefore be used as well.) field is an identifier or string that selects what field to
extract from the source value. The extract function returns values of type double precision. The
following are valid field names:
century
The century
SELECT EXTRACT(CENTURY FROM TIMESTAMP ’2000-12-16 12:21:13’);
Result: 20
SELECT EXTRACT(CENTURY FROM TIMESTAMP ’2001-02-16 20:38:40’);
Result: 21
The first century starts at 0001-01-01 00:00:00 AD, although they did not know it at the time. This
definition applies to all Gregorian calendar countries. There is no century number 0; you go from -1
to 1. If you disagree with this, please write your complaint to: Pope, Cathedral Saint-Peter of Roma,
Vatican.
PostgreSQL releases before 8.0 did not follow the conventional numbering of centuries, but just
returned the year field divided by 100.
day
The day (of the month) field (1 - 31)
decade
The year field divided by 10
dow
The day of the week (0 - 6; Sunday is 0) (for timestamp values only)
Note that extract’s day of the week numbering is different from that of the to_char function.
doy
The day of the year (1 - 365/366) (for timestamp values only)
epoch
For date and timestamp values, the number of seconds since 1970-01-01 00:00:00-00 (can be
negative); for interval values, the total number of seconds in the interval
SELECT EXTRACT(EPOCH FROM TIMESTAMP WITH TIME ZONE ’2001-02-16 20:38:40-08’);
Result: 982384720
Here is how you can convert an epoch value back to a time stamp:
SELECT TIMESTAMP WITH TIME ZONE ’epoch’ + 982384720 * INTERVAL ’1 second’;
hour
The hour field (0 - 23)
microseconds
The seconds field, including fractional parts, multiplied by 1 000 000. Note that this includes full
seconds.
SELECT EXTRACT(MICROSECONDS FROM TIME ’17:12:28.5’);
Result: 28500000
millennium
The millennium
SELECT EXTRACT(MILLENNIUM FROM TIMESTAMP ’2001-02-16 20:38:40’);
Result: 3
Years in the 1900s are in the second millennium. The third millennium starts January 1, 2001.
PostgreSQL releases before 8.0 did not follow the conventional numbering of millennia, but just
returned the year field divided by 1000.
milliseconds
The seconds field, including fractional parts, multiplied by 1000. Note that this includes full seconds.
SELECT EXTRACT(MILLISECONDS FROM TIME ’17:12:28.5’);
Result: 28500
minute
The minutes field (0 - 59)
month
For timestamp values, the number of the month within the year (1 - 12) ; for interval values the
number of months, modulo 12 (0 - 11)
SELECT EXTRACT(MONTH FROM TIMESTAMP ’2001-02-16 20:38:40’);
Result: 2
SELECT EXTRACT(MONTH FROM INTERVAL ’2 years 3 months’);
Result: 3
quarter
The quarter of the year (1 - 4) that the day is in (for timestamp values only)
SELECT EXTRACT(QUARTER FROM TIMESTAMP ’2001-02-16 20:38:40’);
Result: 1
second
The seconds field, including fractional parts (0 - 59)
timezone
The time zone offset from UTC, measured in seconds. Positive values correspond to time zones east
of UTC, negative values to zones west of UTC.
timezone_hour
The hour component of the time zone offset
timezone_minute
The minute component of the time zone offset
week
The number of the week of the year that the day is in. By definition (ISO 8601), the first week of a
year contains January 4 of that year. (The ISO-8601 week starts on Monday.) In other words, the first
Thursday of a year is in week 1 of that year. (for timestamp values only)
SELECT EXTRACT(WEEK FROM TIMESTAMP ’2001-02-16 20:38:40’);
Result: 7
year
The year field. Keep in mind there is no 0 AD, so subtracting BC years from AD years should be done
with care.
SELECT EXTRACT(YEAR FROM TIMESTAMP ’2001-02-16 20:38:40’);
Result: 2001
The extract function is primarily intended for computational processing. For formatting date/time val-
ues for display, see Section 9.8.
The date_part function is modeled on the traditional Ingres equivalent to the SQL-standard function
extract:
date_part(’field’, source)
Note that here the field parameter needs to be a string value, not a name. The valid field names for
date_part are the same as for extract.
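For example (an illustrative call in the same style as the extract examples above):
SELECT date_part('hour', TIMESTAMP '2001-02-16 20:38:40');
Result: 20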
9.9.2. date_trunc
The function date_trunc is conceptually similar to the trunc function for numbers.
date_trunc(’field’, source)
source is a value expression of type timestamp or interval. (Values of type date and time are cast
automatically, to timestamp or interval respectively.) field selects to which precision to truncate
the input value. The return value is of type timestamp or interval with all fields that are less significant
than the selected one set to zero (or one, for day and month).
Valid values for field are:
microseconds
milliseconds
second
minute
hour
day
week
month
year
decade
century
millennium
Examples:
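(The following pair is representative; the exact output format depends on the DateStyle setting.)
SELECT date_trunc('hour', TIMESTAMP '2001-02-16 20:38:40');
Result: 2001-02-16 20:00:00
SELECT date_trunc('year', TIMESTAMP '2001-02-16 20:38:40');
Result: 2001-01-01 00:00:00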
9.9.3. AT TIME ZONE
The AT TIME ZONE construct allows conversions of time stamps to different time zones.
In these expressions, the desired time zone zone can be specified either as a text string (e.g., ’PST’) or
as an interval (e.g., INTERVAL ’-08:00’). In the text case, the available zone names are those shown in
Table B-4. (It would be useful to support the more general names shown in Table B-6, but this is not yet
implemented.)
Examples (supposing that the local time zone is PST8PDT):
SELECT TIMESTAMP ’2001-02-16 20:38:40’ AT TIME ZONE ’MST’;
Result: 2001-02-16 19:38:40-08
SELECT TIMESTAMP WITH TIME ZONE ’2001-02-16 20:38:40-05’ AT TIME ZONE ’MST’;
Result: 2001-02-16 18:38:40
The first example takes a zone-less time stamp and interprets it as MST time (UTC-7) to produce a UTC
time stamp, which is then rotated to PST (UTC-8) for display. The second example takes a time stamp
specified in EST (UTC-5) and converts it to local time in MST (UTC-7).
The function timezone(zone, timestamp) is equivalent to the SQL-conforming construct timestamp
AT TIME ZONE zone.
9.9.4. Current Date/Time
The following functions are available to obtain the current date and/or time:
CURRENT_DATE
CURRENT_TIME
CURRENT_TIMESTAMP
CURRENT_TIME ( precision )
CURRENT_TIMESTAMP ( precision )
LOCALTIME
LOCALTIMESTAMP
LOCALTIME ( precision )
LOCALTIMESTAMP ( precision )
CURRENT_TIME and CURRENT_TIMESTAMP deliver values with time zone; LOCALTIME and
LOCALTIMESTAMP deliver values without time zone.
Note: Prior to PostgreSQL 7.2, the precision parameters were unimplemented, and the result was
always given in integer seconds.
Some examples:
SELECT CURRENT_TIME;
Result: 14:39:53.662522-05
SELECT CURRENT_DATE;
Result: 2001-12-23
SELECT CURRENT_TIMESTAMP;
Result: 2001-12-23 14:39:53.662522-05
SELECT CURRENT_TIMESTAMP(2);
Result: 2001-12-23 14:39:53.66-05
SELECT LOCALTIMESTAMP;
Result: 2001-12-23 14:39:53.662522
SELECT timeofday();
Result: Sat Feb 17 19:07:32.000126 2001 EST
It is important to know that CURRENT_TIMESTAMP and related functions return the start time of the current
transaction; their values do not change during the transaction. This is considered a feature: the intent is to
allow a single transaction to have a consistent notion of the “current” time, so that multiple modifications
within the same transaction bear the same time stamp. timeofday() returns the wall-clock time and does
advance during transactions.
Note: Other database systems may advance these values more frequently.
All the date/time data types also accept the special literal value now to specify the current date and time.
Thus, the following three all return the same result:
SELECT CURRENT_TIMESTAMP;
SELECT now();
SELECT TIMESTAMP ’now’;
Tip: You do not want to use the third form when specifying a DEFAULT clause while creating a table.
The system will convert now to a timestamp as soon as the constant is parsed, so that when the
default value is needed, the time of the table creation would be used! The first two forms will not
be evaluated until the default value is used, because they are function calls. Thus they will give the
desired behavior of defaulting to the time of row insertion.
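A minimal sketch of the difference (the table and column names here are hypothetical):
CREATE TABLE log_good (note text, stamp timestamp DEFAULT now());
CREATE TABLE log_bad  (note text, stamp timestamp DEFAULT TIMESTAMP 'now');
In log_good the default is evaluated at each insertion; in log_bad the constant was already resolved when the table was created, so every row receives the table-creation time.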
9.10. Geometric Functions and Operators
It is possible to access the two component numbers of a point as though it were an array with indices
0 and 1. For example, if t.p is a point column then SELECT p[0] FROM t retrieves the X coordinate
and UPDATE t SET p[1] = ... changes the Y coordinate. In the same way, a value of type box or
lseg may be treated as an array of two point values.
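A minimal sketch (the table name t and column p match the example in the preceding paragraph):
CREATE TABLE t (p point);
INSERT INTO t VALUES (point '(1,2)');
SELECT p[0] AS x, p[1] AS y FROM t;
UPDATE t SET p[1] = 5;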
The area function works for the types box, circle, and path. The area function only
works on the path data type if the points in the path are non-intersecting. For example,
the path ’((0,0),(0,1),(2,1),(2,2),(1,2),(1,0),(0,0))’::PATH
won’t work; however, the following visually identical path
’((0,0),(0,1),(1,1),(1,2),(2,2),(2,1),(1,1),(1,0),(0,0))’::PATH
will work. If the concept of an intersecting versus non-intersecting path is confusing, draw both of the
above paths side by side on a piece of graph paper.
9.11. Network Address Functions and Operators
Table 9-31 shows the operators available for the cidr and inet types. The operators <<, <<=, >>, and
>>= test for subnet inclusion. They consider only the network parts of the two addresses, ignoring any
host part, and determine whether one network part is identical to or a subnet of the other.
Table 9-32 shows the functions available for use with the cidr and inet types. The host, text, and
abbrev functions are primarily intended to offer alternative display formats. You can cast a text value to
inet using normal casting syntax: inet(expression) or colname::inet.
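For example (illustrative queries; host extracts the host part as text, while the cast keeps the full inet value):
SELECT host(inet '192.168.1.5/24');
Result: 192.168.1.5
SELECT '192.168.1.5/24'::inet;
Result: 192.168.1.5/24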
Table 9-33 shows the functions available for use with the macaddr type. The function trunc(macaddr)
returns a MAC address with the last 3 bytes set to zero. This can be used to associate the remaining prefix
with a manufacturer. The directory contrib/mac in the source distribution contains some utilities to
create and maintain such an association table.
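For example, an illustrative call:
SELECT trunc(macaddr '12:34:56:78:90:ab');
Result: 12:34:56:00:00:00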
The macaddr type also supports the standard relational operators (>, <=, etc.) for lexicographical order-
ing.
9.12. Sequence Manipulation Functions
For largely historical reasons, the sequence to be operated on by a sequence-function call is specified
by a text-string argument. To achieve some compatibility with the handling of ordinary SQL names, the
sequence functions convert their argument to lowercase unless the string is double-quoted. Thus, for
example, nextval(’foo’) and nextval(’FOO’) both operate on the sequence foo, whereas
nextval(’"Foo"’) operates on the sequence Foo.
Of course, the text argument can be the result of an expression, not only a simple literal, which is occa-
sionally useful.
The available sequence functions are:
nextval
Advance the sequence object to its next value and return that value. This is done atomically: even if
multiple sessions execute nextval concurrently, each will safely receive a distinct sequence value.
currval
Return the value most recently obtained by nextval for this sequence in the current session. (An
error is reported if nextval has never been called for this sequence in this session.) Notice that
because this is returning a session-local value, it gives a predictable answer whether or not other
sessions have executed nextval since the current session did.
setval
Reset the sequence object’s counter value. The two-parameter form sets the sequence’s last_value
field to the specified value and sets its is_called field to true, meaning that the next nextval
will advance the sequence before returning a value. In the three-parameter form, is_called may
be set either true or false. If it’s set to false, the next nextval will return exactly the specified
value, and sequence advancement commences with the following nextval. For example,
SELECT setval(’foo’, 42); Next nextval will return 43
SELECT setval(’foo’, 42, true); Same as above
SELECT setval(’foo’, 42, false); Next nextval will return 42
The result returned by setval is just the value of its second argument.
Important: To avoid blocking of concurrent transactions that obtain numbers from the same sequence,
a nextval operation is never rolled back; that is, once a value has been fetched it is considered used,
even if the transaction that did the nextval later aborts. This means that aborted transactions may
leave unused “holes” in the sequence of assigned values. setval operations are never rolled back,
either.
If a sequence object has been created with default parameters, nextval calls on it will return successive
values beginning with 1. Other behaviors can be obtained by using special parameters in the CREATE
SEQUENCE command; see its command reference page for more information.
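An illustrative sketch (the sequence name serial is arbitrary):
CREATE SEQUENCE serial START 101;
SELECT nextval('serial');
Result: 101
SELECT nextval('serial');
Result: 102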
9.13. Conditional Expressions
This section describes the SQL-compliant conditional expressions available in PostgreSQL.
Tip: If your needs go beyond the capabilities of these conditional expressions you might want to
consider writing a stored procedure in a more expressive programming language.
9.13.1. CASE
The SQL CASE expression is a generic conditional expression, similar to if/else statements in other lan-
guages:
CASE WHEN condition THEN result
     [WHEN ...]
     [ELSE result]
END
CASE clauses can be used wherever an expression is valid. condition is an expression that returns a
boolean result. If the result is true then the value of the CASE expression is the result that follows the
condition. If the result is false any subsequent WHEN clauses are searched in the same manner. If no WHEN
condition is true then the value of the case expression is the result in the ELSE clause. If the ELSE
clause is omitted and no condition matches, the result is null.
An example:
SELECT * FROM test;
a
---
1
2
3
SELECT a,
CASE WHEN a=1 THEN ’one’
WHEN a=2 THEN ’two’
ELSE ’other’
END
FROM test;
a | case
---+-------
1 | one
2 | two
3 | other
The data types of all the result expressions must be convertible to a single output type. See Section
10.5 for more detail.
The following “simple” CASE expression is a specialized variant of the general form above:
CASE expression
WHEN value THEN result
[WHEN ...]
[ELSE result]
END
The expression is computed and compared to all the value specifications in the WHEN clauses until
one is found that is equal. If no match is found, the result in the ELSE clause (or a null value) is
returned. This is similar to the switch statement in C.
The example above can be written using the simple CASE syntax:
SELECT a,
CASE a WHEN 1 THEN ’one’
WHEN 2 THEN ’two’
ELSE ’other’
END
FROM test;
a | case
---+-------
1 | one
2 | two
3 | other
A CASE expression does not evaluate any subexpressions that are not needed to determine the result. For
example, this is a possible way of avoiding a division-by-zero failure:
SELECT ... WHERE CASE WHEN x <> 0 THEN y/x > 1.5 ELSE false END;
9.13.2. COALESCE
COALESCE(value [, ...])
The COALESCE function returns the first of its arguments that is not null. Null is returned only if all
arguments are null. This is often useful to substitute a default value for null values when data is retrieved
for display, for example:
SELECT COALESCE(description, short_description, ’(none)’) ...
Like a CASE expression, COALESCE will not evaluate arguments that are not needed to determine the
result; that is, arguments to the right of the first non-null argument are not evaluated.
9.13.3. NULLIF
NULLIF(value1, value2)
The NULLIF function returns a null value if and only if value1 and value2 are equal. Otherwise it
returns value1. This can be used to perform the inverse operation of the COALESCE example given
above:
SELECT NULLIF(value, ’(none)’) ...
9.14. Array Functions and Operators
Table 9-35 shows the operators available for array types.
See Section 8.10 for more details about array operator behavior.
Table 9-36 shows the functions available for use with array types. See Section 8.10 for more discussion
and examples of the use of these functions.
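A couple of illustrative queries (|| appends an element to an array; array_upper reports the upper bound of the given dimension):
SELECT ARRAY[1,2] || 3;
Result: {1,2,3}
SELECT array_upper(ARRAY[1,2,3], 1);
Result: 3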
9.15. Aggregate Functions
Aggregate functions compute a single result value from a set of input values. Table 9-37 shows the built-in
aggregate functions. The special syntax considerations for aggregate functions are explained in Section
4.2.7. Consult Section 2.7 for additional introductory information.
It should be noted that except for count, these functions return a null value when no rows are selected.
In particular, sum of no rows returns null, not zero as one might expect. The coalesce function may be
used to substitute zero for null when necessary.
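For example (the table and column names here are hypothetical):
SELECT coalesce(sum(amount), 0) FROM orders WHERE customer_id = 42;
This returns 0 rather than null when no matching rows exist.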
Note: Boolean aggregates bool_and and bool_or correspond to standard SQL aggregates every
and any or some. As for any and some, it seems that there is an ambiguity built into the standard
syntax:
SELECT b1 = ANY((SELECT b2 FROM t2 ...)) FROM t1;
Here ANY can be considered either as introducing a subquery, or as an aggregate if the select expression
returns 1 row. Thus the standard name cannot be given to these aggregates.
Note: Users accustomed to working with other SQL database management systems may be surprised
by the performance characteristics of certain aggregate functions in PostgreSQL when the aggregate
is applied to the entire table (in other words, no WHERE clause is specified). In particular, a query like
SELECT min(col) FROM sometable;
will be executed by PostgreSQL using a sequential scan of the entire table. Other database systems
may optimize queries of this form to use an index on the column, if one is available. Similarly, the
aggregate functions max() and count() always require a sequential scan if applied to the entire table
in PostgreSQL.
PostgreSQL cannot easily implement this optimization because it also allows for user-defined ag-
gregate queries. Since min(), max(), and count() are defined using a generic API for aggregate
functions, there is no provision for special-casing the execution of these functions under certain cir-
cumstances.
Fortunately, there is a simple workaround for min() and max(). The query shown below is equivalent
to the query above, except that it can take advantage of a B-tree index if there is one present on the
column in question.
SELECT col FROM sometable ORDER BY col ASC LIMIT 1;
A similar query (obtained by substituting DESC for ASC in the query above) can be used in the place of
max().
Unfortunately, there is no similarly trivial query that can be used to improve the performance of
count() when applied to the entire table.
9.16. Subquery Expressions
9.16.1. EXISTS
EXISTS ( subquery )
The argument of EXISTS is an arbitrary SELECT statement, or subquery. The subquery is evaluated to
determine whether it returns any rows. If it returns at least one row, the result of EXISTS is “true”; if the
subquery returns no rows, the result of EXISTS is “false”.
The subquery can refer to variables from the surrounding query, which will act as constants during any
one evaluation of the subquery.
The subquery will generally only be executed far enough to determine whether at least one row is returned,
not all the way to completion. It is unwise to write a subquery that has any side effects (such as calling
sequence functions); whether the side effects occur or not may be difficult to predict.
Since the result depends only on whether any rows are returned, and not on the contents of those rows, the
output list of the subquery is normally uninteresting. A common coding convention is to write all EXISTS
tests in the form EXISTS(SELECT 1 WHERE ...). There are exceptions to this rule however, such as
subqueries that use INTERSECT.
This simple example is like an inner join on col2, but it produces at most one output row for each tab1
row, even if there are multiple matching tab2 rows:
SELECT col1 FROM tab1 WHERE EXISTS (SELECT 1 FROM tab2 WHERE col2 = tab1.col2);
9.16.2. IN
expression IN (subquery)
The right-hand side is a parenthesized subquery, which must return exactly one column. The left-hand
expression is evaluated and compared to each row of the subquery result. The result of IN is “true” if any
equal subquery row is found. The result is “false” if no equal row is found (including the special case
where the subquery returns no rows).
Note that if the left-hand expression yields null, or if there are no equal right-hand values and at least one
right-hand row yields null, the result of the IN construct will be null, not false. This is in accordance with
SQL’s normal rules for Boolean combinations of null values.
As with EXISTS, it’s unwise to assume that the subquery will be evaluated completely.
row_constructor IN (subquery)
The left-hand side of this form of IN is a row constructor, as described in Section 4.2.11. The right-hand
side is a parenthesized subquery, which must return exactly as many columns as there are expressions
in the left-hand row. The left-hand expressions are evaluated and compared row-wise to each row of the
subquery result. The result of IN is “true” if any equal subquery row is found. The result is “false” if no
equal row is found (including the special case where the subquery returns no rows).
As usual, null values in the rows are combined per the normal rules of SQL Boolean expressions. Two
rows are considered equal if all their corresponding members are non-null and equal; the rows are unequal
if any corresponding members are non-null and unequal; otherwise the result of that row comparison is
unknown (null). If all the row results are either unequal or null, with at least one null, then the result of IN
is null.
9.16.3. NOT IN
expression NOT IN (subquery)
The right-hand side is a parenthesized subquery, which must return exactly one column. The left-hand
expression is evaluated and compared to each row of the subquery result. The result of NOT IN is “true”
if only unequal subquery rows are found (including the special case where the subquery returns no rows).
The result is “false” if any equal row is found.
Note that if the left-hand expression yields null, or if there are no equal right-hand values and at least one
right-hand row yields null, the result of the NOT IN construct will be null, not true. This is in accordance
with SQL’s normal rules for Boolean combinations of null values.
As with EXISTS, it’s unwise to assume that the subquery will be evaluated completely.
The left-hand side of this form of NOT IN is a row constructor, as described in Section 4.2.11. The right-
hand side is a parenthesized subquery, which must return exactly as many columns as there are expressions
in the left-hand row. The left-hand expressions are evaluated and compared row-wise to each row of the
subquery result. The result of NOT IN is “true” if only unequal subquery rows are found (including the
special case where the subquery returns no rows). The result is “false” if any equal row is found.
As usual, null values in the rows are combined per the normal rules of SQL Boolean expressions. Two
rows are considered equal if all their corresponding members are non-null and equal; the rows are unequal
if any corresponding members are non-null and unequal; otherwise the result of that row comparison is
unknown (null). If all the row results are either unequal or null, with at least one null, then the result of
NOT IN is null.
9.16.4. ANY/SOME
expression operator ANY (subquery)
expression operator SOME (subquery)
The right-hand side is a parenthesized subquery, which must return exactly one column. The left-hand
expression is evaluated and compared to each row of the subquery result using the given operator,
which must yield a Boolean result. The result of ANY is “true” if any true result is obtained. The result is
“false” if no true result is found (including the special case where the subquery returns no rows).
SOME is a synonym for ANY. IN is equivalent to = ANY.
Note that if there are no successes and at least one right-hand row yields null for the operator’s result,
the result of the ANY construct will be null, not false. This is in accordance with SQL’s normal rules for
Boolean combinations of null values.
As with EXISTS, it’s unwise to assume that the subquery will be evaluated completely.
The left-hand side of this form of ANY is a row constructor, as described in Section 4.2.11. The right-hand
side is a parenthesized subquery, which must return exactly as many columns as there are expressions
in the left-hand row. The left-hand expressions are evaluated and compared row-wise to each row of the
subquery result, using the given operator. Presently, only = and <> operators are allowed in row-wise
ANY constructs. The result of ANY is “true” if any equal or unequal row is found, respectively. The result
is “false” if no such row is found (including the special case where the subquery returns no rows).
As usual, null values in the rows are combined per the normal rules of SQL Boolean expressions. Two
rows are considered equal if all their corresponding members are non-null and equal; the rows are unequal
if any corresponding members are non-null and unequal; otherwise the result of that row comparison is
unknown (null). If there is at least one null row result, then the result of ANY cannot be false; it will be
true or null.
9.16.5. ALL
expression operator ALL (subquery)
The right-hand side is a parenthesized subquery, which must return exactly one column. The left-hand
expression is evaluated and compared to each row of the subquery result using the given operator,
which must yield a Boolean result. The result of ALL is “true” if all rows yield true (including the special
case where the subquery returns no rows). The result is “false” if any false result is found.
NOT IN is equivalent to <> ALL.
Note that if there are no failures but at least one right-hand row yields null for the operator’s result, the
result of the ALL construct will be null, not true. This is in accordance with SQL’s normal rules for Boolean
combinations of null values.
As with EXISTS, it’s unwise to assume that the subquery will be evaluated completely.
The left-hand side of this form of ALL is a row constructor, as described in Section 4.2.11. The right-hand
side is a parenthesized subquery, which must return exactly as many columns as there are expressions
in the left-hand row. The left-hand expressions are evaluated and compared row-wise to each row of the
subquery result, using the given operator. Presently, only = and <> operators are allowed in row-wise
ALL queries. The result of ALL is “true” if all subquery rows are equal or unequal, respectively (including
the special case where the subquery returns no rows). The result is “false” if any row is found to be unequal
or equal, respectively.
As usual, null values in the rows are combined per the normal rules of SQL Boolean expressions. Two
rows are considered equal if all their corresponding members are non-null and equal; the rows are unequal
if any corresponding members are non-null and unequal; otherwise the result of that row comparison is
unknown (null). If there is at least one null row result, then the result of ALL cannot be true; it will be false
or null.
The left-hand side is a row constructor, as described in Section 4.2.11. The right-hand side is a paren-
thesized subquery, which must return exactly as many columns as there are expressions in the left-hand
row. Furthermore, the subquery cannot return more than one row. (If it returns zero rows, the result is
taken to be null.) The left-hand side is evaluated and compared row-wise to the single subquery result
row. Presently, only = and <> operators are allowed in row-wise comparisons. The result is “true” if the
two rows are equal or unequal, respectively.
As usual, null values in the rows are combined per the normal rules of SQL Boolean expressions. Two
rows are considered equal if all their corresponding members are non-null and equal; the rows are unequal
if any corresponding members are non-null and unequal; otherwise the result of the row comparison is
unknown (null).
9.17. Row and Array Comparisons
9.17.1. IN
expression IN (value[, ...])
The right-hand side is a parenthesized list of scalar expressions. The result is “true” if the left-hand ex-
pression’s result is equal to any of the right-hand expressions. This is a shorthand notation for
expression = value1
OR
expression = value2
OR
...
Note that if the left-hand expression yields null, or if there are no equal right-hand values and at least one
right-hand expression yields null, the result of the IN construct will be null, not false. This is in accordance
with SQL’s normal rules for Boolean combinations of null values.
9.17.2. NOT IN
expression NOT IN (value[, ...])
The right-hand side is a parenthesized list of scalar expressions. The result is “true” if the left-hand ex-
pression’s result is unequal to all of the right-hand expressions. This is a shorthand notation for
expression <> value1
AND
expression <> value2
AND
...
Note that if the left-hand expression yields null, or if there are no equal right-hand values and at least one
right-hand expression yields null, the result of the NOT IN construct will be null, not true as one might
naively expect. This is in accordance with SQL’s normal rules for Boolean combinations of null values.
Tip: x NOT IN y is equivalent to NOT (x IN y) in all cases. However, null values are much more
likely to trip up the novice when working with NOT IN than when working with IN. It’s best to express
your condition positively if possible.
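A brief illustration of these null rules:
SELECT 2 IN (2, NULL);
Result: true
SELECT 2 NOT IN (1, NULL);
Result: null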
The right-hand side is a parenthesized expression, which must yield an array value. The left-hand expres-
sion is evaluated and compared to each element of the array using the given operator, which must
yield a Boolean result. The result of ANY is “true” if any true result is obtained. The result is “false” if no
true result is found (including the special case where the array has zero elements).
SOME is a synonym for ANY.
The right-hand side is a parenthesized expression, which must yield an array value. The left-hand expres-
sion is evaluated and compared to each element of the array using the given operator, which must
yield a Boolean result. The result of ALL is “true” if all comparisons yield true (including the special case
where the array has zero elements). The result is “false” if any false result is found.
Each side is a row constructor, as described in Section 4.2.11. The two row values must have the same
number of fields. Each side is evaluated and they are compared row-wise. Presently, only = and <>
operators are allowed in row-wise comparisons. The result is “true” if the two rows are equal or unequal,
respectively.
As usual, null values in the rows are combined per the normal rules of SQL Boolean expressions. Two
rows are considered equal if all their corresponding members are non-null and equal; the rows are unequal
if any corresponding members are non-null and unequal; otherwise the result of the row comparison is
unknown (null).
The IS DISTINCT FROM construct is similar to a <> row comparison, but it does not yield null for null inputs. Instead, any
null value is considered unequal to (distinct from) any non-null value, and any two nulls are considered
equal (not distinct). Thus the result will always be either true or false, never null.
row_constructor IS NULL
row_constructor IS NOT NULL
These constructs test a row value for null or not null. A row value is considered not null if it has at least
one field that is not null.
9.18. Set Returning Functions
This section describes functions that possibly return more than one row. Currently the only functions in
this class are the series generating functions generate_series(start, stop) and
generate_series(start, stop, step), which generate a series of values from start to stop with a
step size of step (one by default).
When step is positive, zero rows are returned if start is greater than stop. Conversely, when step is
negative, zero rows are returned if start is less than stop. Zero rows are also returned for NULL inputs.
It is an error for step to be zero. Some examples follow:
SELECT * FROM generate_series(2,4);
generate_series
-----------------
2
3
4
(3 rows)
9.19. System Information Functions
The session_user is normally the user who initiated the current database connection; but superusers
can change this setting with SET SESSION AUTHORIZATION. The current_user is the user identifier
that is applicable for permission checking. Normally, it is equal to the session user, but it changes during
the execution of functions with the attribute SECURITY DEFINER. In Unix parlance, the session user is
the “real user” and the current user is the “effective user”.
Note: current_user, session_user, and user have special syntactic status in SQL: they must be
called without trailing parentheses.
current_schema returns the name of the schema that is at the front of the search path (or a null value
if the search path is empty). This is the schema that will be used for any tables or other named objects
that are created without specifying a target schema. current_schemas(boolean) returns an array of
the names of all schemas presently in the search path. The Boolean option determines whether or not
implicitly included system schemas such as pg_catalog are included in the search path returned.
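For example (the results shown assume a stock installation with the default search path):
SELECT current_schema();
Result: public
SELECT current_schemas(true);
Result: {pg_catalog,public}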
Note: The search path may be altered at run time. The command is:
SET search_path TO schema [, schema, ...]
inet_client_addr returns the IP address of the current client, and inet_client_port returns the
port number. inet_server_addr returns the IP address on which the server accepted the current con-
nection, and inet_server_port returns the port number. All these functions return NULL if the current
connection is via a Unix-domain socket.
version() returns a string describing the PostgreSQL server’s version.
Table 9-40 lists functions that allow the user to query object access privileges programmatically. See
Section 5.7 for more information about privileges.
has_table_privilege checks whether a user can access a table in a particular way. The user can
be specified by name or by ID (pg_user.usesysid), or if the argument is omitted current_user
is assumed. The table can be specified by name or by OID. (Thus, there are actually six variants of
has_table_privilege, which can be distinguished by the number and types of their arguments.) When
specifying by name, the name can be schema-qualified if necessary. The desired access privilege type is
specified by a text string, which must evaluate to one of the values SELECT, INSERT, UPDATE, DELETE,
RULE, REFERENCES, or TRIGGER. (Case of the string is not significant, however.) An example is:
SELECT has_table_privilege(’myschema.mytable’, ’select’);
has_database_privilege checks whether a user can access a database in a particular way. The pos-
sibilities for its arguments are analogous to has_table_privilege. The desired access privilege type
must evaluate to CREATE, TEMPORARY, or TEMP (which is equivalent to TEMPORARY).
has_function_privilege checks whether a user can access a function in a particular way. The possi-
bilities for its arguments are analogous to has_table_privilege. When specifying a function by a text
string rather than by OID, the allowed input is the same as for the regprocedure data type (see Section
8.12). The desired access privilege type must evaluate to EXECUTE. An example is:
SELECT has_function_privilege(’joeuser’, ’myfunc(int, text)’, ’execute’);
has_language_privilege checks whether a user can access a procedural language in a particular way.
The possibilities for its arguments are analogous to has_table_privilege. The desired access privilege
type must evaluate to USAGE.
has_schema_privilege checks whether a user can access a schema in a particular way. The possibili-
ties for its arguments are analogous to has_table_privilege. The desired access privilege type must
evaluate to CREATE or USAGE.
has_tablespace_privilege checks whether a user can access a tablespace in a particular way. The
possibilities for its arguments are analogous to has_table_privilege. The desired access privilege
type must evaluate to CREATE.
To test whether a user holds a grant option on the privilege, append WITH GRANT OPTION to the privi-
lege key word; for example ’UPDATE WITH GRANT OPTION’.
Table 9-41 shows functions that determine whether a certain object is visible in the current schema search
path. A table is said to be visible if its containing schema is in the search path and no table of the same
name appears earlier in the search path. This is equivalent to the statement that the table can be referenced
by name without explicit schema qualification. For example, to list the names of all visible tables:
SELECT relname FROM pg_class WHERE pg_table_is_visible(oid);
pg_operator_is_visible(operator_oid)      boolean   is operator visible in search path
pg_opclass_is_visible(opclass_oid)        boolean   is operator class visible in search path
pg_conversion_is_visible(conversion_oid)  boolean   is conversion visible in search path
pg_table_is_visible performs the check for tables (or views, or any other kind of pg_class
entry). pg_type_is_visible, pg_function_is_visible, pg_operator_is_visible,
pg_opclass_is_visible, and pg_conversion_is_visible perform the same sort of visibility
check for types (and domains), functions, operators, operator classes and conversions, respectively. For
functions and operators, an object in the search path is visible if there is no object of the same name and
argument data type(s) earlier in the path. For operator classes, both name and associated index access
method are considered.
All these functions require object OIDs to identify the object to be checked. If you want to test an
object by name, it is convenient to use the OID alias types (regclass, regtype, regprocedure, or
regoperator), for example
SELECT pg_type_is_visible(’myschema.widget’::regtype);
Note that it would not make much sense to test an unqualified name in this way — if the name can be
recognized at all, it must be visible.
Table 9-42 lists functions that extract information from the system catalogs.
pg_get_constraintdef(constraint_oid, pretty_bool)   text        get definition of a constraint
pg_get_expr(expr_text, relation_oid)                 text        decompile internal form of an expression, assuming that any Vars in it refer to the relation indicated by the second parameter
pg_get_expr(expr_text, relation_oid, pretty_bool)    text        decompile internal form of an expression, assuming that any Vars in it refer to the relation indicated by the second parameter
pg_get_userbyid(userid)                               name        get user name with given ID
pg_get_serial_sequence(table_name, column_name)       text        get name of the sequence that a serial or bigserial column uses
pg_tablespace_databases(tablespace_oid)                setof oid   get set of database OIDs that have objects in the tablespace
pg_get_expr decompiles the internal form of an individual expression, such as the default value for a
column. It may be useful when examining the contents of system catalogs. Most of these functions come
in two variants, one of which can optionally “pretty-print” the result. The pretty-printed format is more
readable, but the default format is more likely to be interpreted the same way by future versions of
PostgreSQL; avoid using pretty-printed output for dump purposes. Passing false for the pretty-print
parameter yields the same result as the variant that does not have the parameter at all.
pg_get_userbyid extracts a user’s name given a user ID number. pg_get_serial_sequence fetches
the name of the sequence associated with a serial or bigserial column. The name is suitably formatted
for passing to the sequence functions (see Section 9.12). NULL is returned if the column does not have a
sequence attached.
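An illustrative use (the table mytable and column id are hypothetical); the returned name can be fed directly to the sequence functions, for instance to resynchronize the sequence with the column's current maximum:
SELECT pg_get_serial_sequence('mytable', 'id');
SELECT setval(pg_get_serial_sequence('mytable', 'id'), max(id)) FROM mytable;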
pg_tablespace_databases allows examination of a tablespace’s usage. It will return a set of OIDs of
databases that have objects stored in the tablespace. If this function returns any row, the tablespace is not
empty and cannot be dropped. To display the specific objects populating the tablespace, you will need to
connect to the databases identified by pg_tablespace_databases and query their pg_class catalogs.
The functions shown in Table 9-43 extract comments previously stored with the COMMENT command. A
null value is returned if no comment could be found matching the specified parameters.
9.20. System Administration Functions
Table 9-44 shows the functions available to query and alter run-time configuration parameters.
The function current_setting yields the current value of the setting setting_name. It corresponds
to the SQL command SHOW. An example:
SELECT current_setting(’datestyle’);
current_setting
-----------------
ISO, MDY
(1 row)
set_config sets the parameter setting_name to new_value. If is_local is true, the new value
will only apply to the current transaction. If you want the new value to apply for the current session, use
false instead. The function corresponds to the SQL command SET. An example:
SELECT set_config(’log_statement_stats’, ’off’, false);
set_config
------------
off
(1 row)
The function shown in Table 9-45 sends control signals to other server processes. Use of this function is
restricted to superusers.
This function returns 1 if successful, 0 if not successful. The process ID (pid) of an active backend can be
found from the procpid column in the pg_stat_activity view, or by listing the postgres processes
on the server with ps.
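The function in question is pg_cancel_backend; an illustrative call (the process ID shown is hypothetical):
SELECT pg_cancel_backend(12345);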
The functions shown in Table 9-46 assist in making on-line backups. Use of these functions is restricted
to superusers.
pg_start_backup accepts a single parameter which is an arbitrary user-defined label for the backup.
(Typically this would be the name under which the backup dump file will be stored.) The function writes
a backup label file into the database cluster’s data directory, and then returns the backup’s starting WAL
offset as text. (The user need not pay any attention to this result value, but it is provided in case it is of
use.)
pg_stop_backup removes the label file created by pg_start_backup, and instead creates a backup
history file in the WAL archive area. The history file includes the label given to pg_start_backup, the
starting and ending WAL offsets for the backup, and the starting and ending times of the backup. The
return value is the backup’s ending WAL offset (which again may be of little interest).
For details about proper usage of these functions, see Section 22.3.
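An illustrative sequence of calls (the backup label is arbitrary):
SELECT pg_start_backup('nightly base backup');
-- ... copy the cluster's data directory with an external tool ...
SELECT pg_stop_backup();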
Chapter 10. Type Conversion
SQL statements can, intentionally or not, require mixing of different data types in the same expression.
PostgreSQL has extensive facilities for evaluating mixed-type expressions.
In many cases a user will not need to understand the details of the type conversion mechanism. However,
the implicit conversions done by PostgreSQL can affect the results of a query. When necessary, these
results can be tailored by using explicit type conversion.
This chapter introduces the PostgreSQL type conversion mechanisms and conventions. Refer to the rele-
vant sections in Chapter 8 and Chapter 9 for more information on specific data types and allowed functions
and operators.
10.1. Overview
SQL is a strongly typed language. That is, every data item has an associated data type which determines
its behavior and allowed usage. PostgreSQL has an extensible type system that is much more general
and flexible than other SQL implementations. Hence, most type conversion behavior in PostgreSQL is
governed by general rules rather than by ad hoc heuristics. This allows mixed-type expressions to be
meaningful even with user-defined types.
The PostgreSQL scanner/parser divides lexical elements into only five fundamental categories: integers,
non-integer numbers, strings, identifiers, and key words. Constants of most non-numeric types are first
classified as strings. The SQL language definition allows specifying type names with strings, and this
mechanism can be used in PostgreSQL to start the parser down the correct path. For example, the query
SELECT text ’Origin’ AS "label", point ’(0,0)’ AS "value";
label | value
--------+-------
Origin | (0,0)
(1 row)
has two literal constants, of type text and point. If a type is not specified for a string literal, then the
placeholder type unknown is assigned initially, to be resolved in later stages as described below.
There are four fundamental SQL constructs requiring distinct type conversion rules in the PostgreSQL
parser:
Function calls
Much of the PostgreSQL type system is built around a rich set of functions. Functions can have one
or more arguments. Since PostgreSQL permits function overloading, the function name alone does
not uniquely identify the function to be called; the parser must select the right function based on the
data types of the supplied arguments.
Operators
PostgreSQL allows expressions with prefix and postfix unary (one-argument) operators, as well as
binary (two-argument) operators. Like functions, operators can be overloaded, and so the same prob-
lem of selecting the right operator exists.
Value Storage
SQL INSERT and UPDATE statements place the results of expressions into a table. The expressions
in the statement must be matched up with, and perhaps converted to, the types of the target columns.
UNION, CASE, and ARRAY constructs
Since all query results from a unionized SELECT statement must appear in a single set of columns,
the types of the results of each SELECT clause must be matched up and converted to a uniform set.
Similarly, the result expressions of a CASE construct must be converted to a common type so that the
CASE expression as a whole has a known output type. The same holds for ARRAY constructs.
The system catalogs store information about which conversions, called casts, between data types are valid,
and how to perform those conversions. Additional casts can be added by the user with the CREATE CAST
command. (This is usually done in conjunction with defining new data types. The set of casts between the
built-in types has been carefully crafted and is best not altered.)
An additional heuristic is provided in the parser to allow better guesses at proper behavior for SQL stan-
dard types. There are several basic type categories defined: boolean, numeric, string, bitstring,
datetime, timespan, geometric, network, and user-defined. Each category, with the exception of
user-defined, has one or more preferred types which are preferentially selected when there is ambiguity.
In the user-defined category, each type is its own preferred type. Ambiguous expressions (those with mul-
tiple candidate parsing solutions) can therefore often be resolved when there are multiple possible built-in
types, but they will raise an error when there are multiple choices for user-defined types.
All type conversion rules are designed with several principles in mind:
• Implicit conversions should never have surprising or unpredictable outcomes.
• There should be no extra overhead from the parser or executor if a query does not need implicit type
conversion. That is, if a query is well formulated and the types already match up, then the query should
proceed without spending extra time in the parser and without introducing unnecessary implicit conversion
calls into the query.
• Additionally, if a query usually requires an implicit conversion for a function, and if then the user defines
a new function with the correct argument types, the parser should use this new function and will no longer
do the implicit conversion using the old function.
10.2. Operators
The specific operator to be used in an operator invocation is determined by following the procedure below.
Note that this procedure is indirectly affected by the precedence of the involved operators. See Section
4.1.6 for more information.
1. Select the operators to be considered from the pg_operator system catalog. If an unqualified opera-
tor name was used (the usual case), the operators considered are those of the right name and argument
count that are visible in the current search path (see Section 5.8.3). If a qualified operator name was
given, only operators in the specified schema are considered.
a. If the search path finds multiple operators of identical argument types, only the one ap-
pearing earliest in the path is considered. But operators of different argument types are
considered on an equal footing regardless of search path position.
2. Check for an operator accepting exactly the input argument types. If one exists (there can be only one
exact match in the set of operators considered), use it.
a. If one argument of a binary operator invocation is of the unknown type, then assume it is the
same type as the other argument for this check. Other cases involving unknown will never
find a match at this step.
3. Look for the best match.
a. Discard candidate operators for which the input types do not match and cannot be converted
(using an implicit conversion) to match. unknown literals are assumed to be convertible to
anything for this purpose. If only one candidate remains, use it; else continue to the next
step.
b. Run through all candidates and keep those with the most exact matches on input types.
(Domains are considered the same as their base type for this purpose.) Keep all candidates
if none have any exact matches. If only one candidate remains, use it; else continue to the
next step.
c. Run through all candidates and keep those that accept preferred types (of the input data
type’s type category) at the most positions where type conversion will be required. Keep
all candidates if none accept preferred types. If only one candidate remains, use it; else
continue to the next step.
d. If any input arguments are unknown, check the type categories accepted at those argument
positions by the remaining candidates. At each position, select the string category if any
candidate accepts that category. (This bias towards string is appropriate since an unknown-
type literal does look like a string.) Otherwise, if all the remaining candidates accept the
same type category, select that category; otherwise fail because the correct choice cannot
be deduced without more clues. Now discard candidates that do not accept the selected
type category. Furthermore, if any candidate accepts a preferred type at a given argument
position, discard candidates that accept non-preferred types for that argument.
e. If only one candidate remains, use it. If no candidate or more than one candidate remains,
then fail.
There is only one exponentiation operator defined in the catalog, and it takes arguments of type double
precision. The scanner assigns an initial type of integer to both arguments of this query expression:
SELECT 2 ^ 3 AS "exp";
exp
-----
8
(1 row)
So the parser does a type conversion on both operands and the query is equivalent to
SELECT CAST(2 AS double precision) ^ CAST(3 AS double precision) AS "exp";
A string-like syntax is used for working with string types as well as for working with complex extension
types. Strings with unspecified type are matched with likely operator candidates.
An example with one unspecified argument:
SELECT text ’abc’ || ’def’ AS "text and unknown";
In this case the parser looks to see if there is an operator taking text for both arguments. Since there is,
it assumes that the second argument should be interpreted as of type text.
Here is a concatenation on unspecified types:
SELECT ’abc’ || ’def’ AS "unspecified";
unspecified
-------------
abcdef
(1 row)
In this case there is no initial hint for which type to use, since no types are specified in the query. So, the
parser looks for all candidate operators and finds that there are candidates accepting both string-category
and bit-string-category inputs. Since string category is preferred when available, that category is selected,
and then the preferred type for strings, text, is used as the specific type to resolve the unknown literals
to.
The PostgreSQL operator catalog has several entries for the prefix operator @, all of which implement
absolute-value operations for various numeric data types. One of these entries is for type float8, which
is the preferred type in the numeric category. Therefore, PostgreSQL will use that entry when faced with
a non-numeric input:
SELECT @ ’-4.5’ AS "abs";
abs
-----
4.5
(1 row)
Here the system has performed an implicit conversion from text to float8 before applying the chosen
operator. We can verify that float8 and not some other type was used:
SELECT @ ’-4.5e500’ AS "abs";
ERROR: "-4.5e500" is out of range for type double precision
On the other hand, the prefix operator ~ (bitwise negation) is defined only for integer data types, not for
float8. So, if we try a similar case with ~, we get:
SELECT ~ ’20’ AS "negation";
ERROR: operator is not unique: ~ "unknown"
HINT: Could not choose a best candidate operator. You may need to add explicit type casts.
This happened because the system cannot decide which of the several possible ~ operators should be
preferred. We can help it along with an explicit cast:
SELECT ~ CAST(’20’ AS int8) AS "negation";
negation
----------
-21
(1 row)
10.3. Functions
The specific function to be used in a function invocation is determined according to the following steps.
1. Select the functions to be considered from the pg_proc system catalog. If an unqualified function
name was used, the functions considered are those of the right name and argument count that are
visible in the current search path (see Section 5.8.3). If a qualified function name was given, only
functions in the specified schema are considered.
a. If the search path finds multiple functions of identical argument types, only the one ap-
pearing earliest in the path is considered. But functions of different argument types are
considered on an equal footing regardless of search path position.
2. Check for a function accepting exactly the input argument types. If one exists (there can be only one
exact match in the set of functions considered), use it. (Cases involving unknown will never find a
match at this step.)
3. If no exact match is found, see whether the function call appears to be a trivial type conversion request.
This happens if the function call has just one argument and the function name is the same as the
(internal) name of some data type. Furthermore, the function argument must be either an unknown-
type literal or a type that is binary-compatible with the named data type. When these conditions are
met, the function argument is converted to the named data type without any actual function call.
4. Look for the best match.
a. Discard candidate functions for which the input types do not match and cannot be converted
(using an implicit conversion) to match. unknown literals are assumed to be convertible to
anything for this purpose. If only one candidate remains, use it; else continue to the next
step.
b. Run through all candidates and keep those with the most exact matches on input types.
(Domains are considered the same as their base type for this purpose.) Keep all candidates
if none have any exact matches. If only one candidate remains, use it; else continue to the
next step.
c. Run through all candidates and keep those that accept preferred types (of the input data
type’s type category) at the most positions where type conversion will be required. Keep
all candidates if none accept preferred types. If only one candidate remains, use it; else
continue to the next step.
d. If any input arguments are unknown, check the type categories accepted at those argument
positions by the remaining candidates. At each position, select the string category if any
candidate accepts that category. (This bias towards string is appropriate since an unknown-
type literal does look like a string.) Otherwise, if all the remaining candidates accept the
same type category, select that category; otherwise fail because the correct choice cannot
be deduced without more clues. Now discard candidates that do not accept the selected
type category. Furthermore, if any candidate accepts a preferred type at a given argument
position, discard candidates that accept non-preferred types for that argument.
e. If only one candidate remains, use it. If no candidate or more than one candidate remains,
then fail.
Note that the “best match” rules are identical for operator and function type resolution. Some examples
follow.
There is only one round function with two arguments. (The first is numeric, the second is integer.)
So the following query automatically converts the first argument of type integer to numeric:
SELECT round(4, 4);
round
--------
4.0000
(1 row)
Since numeric constants with decimal points are initially assigned the type numeric, the following query
will require no type conversion and may therefore be slightly more efficient:
SELECT round(4.0, 4);
There are several substr functions, one of which takes types text and integer. If called with a string
constant of unspecified type, the system chooses the candidate function that accepts an argument of the
preferred category string (namely of type text).
SELECT substr(’1234’, 3);
substr
--------
34
(1 row)
If the string is declared to be of type varchar, as might be the case if it comes from a table, then the
parser will try to convert it to become text:
SELECT substr(varchar ’1234’, 3);
substr
--------
34
(1 row)
This is transformed by the parser to effectively become
SELECT substr(CAST (varchar ’1234’ AS text), 3);
Note: The parser learns from the pg_cast catalog that text and varchar are binary-compatible,
meaning that one can be passed to a function that accepts the other without doing any physical
conversion. Therefore, no explicit type conversion call is really inserted in this case.
And, if the function is called with an argument of type integer, the parser will try to convert that to
text:
SELECT substr(1234, 3);
substr
--------
34
(1 row)
This actually executes as
SELECT substr(CAST (1234 AS text), 3);
This automatic transformation can succeed because there is an implicitly invocable cast from integer to
text.
10.4. Value Storage
For a target column declared as character(20) the following statement ensures that the stored value is
sized correctly:
CREATE TABLE vv (v character(20));
INSERT INTO vv SELECT ’abc’ || ’def’;
SELECT v, length(v) FROM vv;
          v           | length
----------------------+--------
 abcdef               |     20
(1 row)
What has really happened here is that the two unknown literals are resolved to text by default, allowing
the || operator to be resolved as text concatenation. Then the text result of the operator is converted to
bpchar (“blank-padded char”, the internal name of the character data type) to match the target column
type. (Since the types text and bpchar are binary-compatible, this conversion does not insert any real
function call.) Finally, the sizing function bpchar(bpchar, integer) is found in the system catalog
and applied to the operator’s result and the stored column length. This type-specific function performs the
required length check and addition of padding spaces.
10.5. UNION, CASE, and ARRAY Constructs
SQL UNION constructs must match up possibly dissimilar types to become a single result set. The resolution
algorithm is applied separately to each output column of a union query; the CASE, ARRAY, and COALESCE
constructs use the same algorithm to match up their component expressions and select a result data type:
1. If all inputs are of type unknown, resolve as type text (the preferred type of the string category).
Otherwise, ignore the unknown inputs while choosing the result type.
2. If the non-unknown inputs are not all of the same type category, fail.
3. Choose the first non-unknown input type which is a preferred type in that category or allows all the
non-unknown inputs to be implicitly converted to it.
4. Convert all inputs to the selected type.
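For instance, a query of roughly this shape would produce the output shown below (the query is a reconstruction consistent with the explanation that follows):
SELECT text 'a' AS "text" UNION SELECT 'b';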
text
------
a
b
(2 rows)
Here, the unknown-type literal 'b' will be resolved as type text.
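Similarly, the next output presumably comes from a query such as:
SELECT 1.2 AS "numeric" UNION SELECT 1;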
numeric
---------
1
1.2
(2 rows)
The literal 1.2 is of type numeric, and the integer value 1 can be cast implicitly to numeric, so that
type is used.
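And the final output is consistent with a query such as:
SELECT 1 AS "real" UNION SELECT CAST('2.2' AS REAL);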
real
------
1
2.2
(2 rows)
Here, since type real cannot be implicitly cast to integer, but integer can be implicitly cast to real,
the union result type is resolved as real.
Chapter 11. Indexes
Indexes are a common way to enhance database performance. An index allows the database server to find
and retrieve specific rows much faster than it could do without an index. But indexes also add overhead to
the database system as a whole, so they should be used sensibly.
11.1. Introduction
Suppose we have a table similar to this:
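For instance (the content column is illustrative; only the table test1 and its id column are referred to below):
CREATE TABLE test1 (
    id integer,
    content varchar
);
and suppose the application issues many queries of the form
SELECT content FROM test1 WHERE id = constant;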
With no advance preparation, the system would have to scan the entire test1 table, row by row, to find all
matching entries. If there are a lot of rows in test1 and only a few rows (perhaps only zero or one) that
would be returned by such a query, then this is clearly an inefficient method. But if the system has been
instructed to maintain an index on the id column, then it can use a more efficient method for locating
matching rows. For instance, it might only have to walk a few levels deep into a search tree.
A similar approach is used in most books of non-fiction: terms and concepts that are frequently looked
up by readers are collected in an alphabetic index at the end of the book. The interested reader can scan
the index relatively quickly and flip to the appropriate page(s), rather than having to read the entire book
to find the material of interest. Just as it is the task of the author to anticipate the items that the readers
are most likely to look up, it is the task of the database programmer to foresee which indexes would be of
advantage.
The following command would be used to create the index on the id column, as discussed:
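(reconstructed using the index name mentioned just below)
CREATE INDEX test1_id_index ON test1 (id);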
The name test1_id_index can be chosen freely, but you should pick something that enables you to
remember later what the index was for.
To remove an index, use the DROP INDEX command. Indexes can be added to and removed from tables at
any time.
Once an index is created, no further intervention is required: the system will update the index when the
table is modified, and it will use the index in queries when it thinks this would be more efficient than a
sequential table scan. But you may have to run the ANALYZE command regularly to update statistics to
allow the query planner to make educated decisions. See Chapter 13 for information about how to find out
whether an index is used and when and why the planner may choose not to use an index.
Indexes can also benefit UPDATE and DELETE commands with search conditions. Indexes can moreover
be used in join queries. Thus, an index defined on a column that is part of a join condition can significantly
speed up queries with joins.
When an index is created, the system has to keep it synchronized with the table. This adds overhead to
data manipulation operations. Therefore indexes that are non-essential or do not get used at all should be
removed. Note that a query or data manipulation command can use at most one index per table.
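B-tree indexes, the default index type, handle equality and range comparisons on data that can be sorted into some ordering; the planner will consider a B-tree index whenever an indexed column is involved in a comparison using one of the operators listed below. A minimal sketch of creating one explicitly (names are placeholders; the USING clause may be omitted, since btree is the default):
CREATE INDEX name ON table USING btree (column);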
<
<=
=
>=
>
Constructs equivalent to combinations of these operators, such as BETWEEN and IN, can also be imple-
mented with a B-tree index search. (But note that IS NULL is not equivalent to = and is not indexable.)
The optimizer can also use a B-tree index for queries involving the pattern matching operators LIKE,
ILIKE, ~, and ~*, if the pattern is anchored to the beginning of the string, e.g., col LIKE 'foo%' or
col ~ '^foo', but not col LIKE '%bar'. However, if your server does not use the C locale you will
need to create the index with a special operator class to support indexing of pattern-matching queries. See
Section 11.6 below.
R-tree indexes are suited for queries on spatial data. To create an R-tree index, use a command of the form
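A sketch of the command form (names are placeholders):
CREATE INDEX name ON table USING rtree (column);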
The PostgreSQL query planner will consider using an R-tree index whenever an indexed column is in-
volved in a comparison using one of these operators:
<<
&<
&>
>>
@
~=
&&
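Hash indexes can handle only simple equality comparisons, so the planner will consider one when an indexed column is compared using the = operator. A sketch of the command form (names are placeholders):
CREATE INDEX name ON table USING hash (column);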
Note: Testing has shown PostgreSQL’s hash indexes to perform no better than B-tree indexes, and
the index size and build time for hash indexes is much worse. For these reasons, hash index use is
presently discouraged.
GiST indexes are not a single kind of index, but rather an infrastructure within which many different in-
dexing strategies can be implemented. Accordingly, the particular operators with which a GiST index can
be used vary depending on the indexing strategy (the operator class). For more information see Chapter
48.
The B-tree index method is an implementation of Lehman-Yao high-concurrency B-trees. The R-tree
index method implements standard R-trees using Guttman’s quadratic split algorithm. The hash index
method is an implementation of Litwin’s linear hashing. We mention the algorithms used solely to indicate
that all of these index methods are fully dynamic and do not have to be optimized periodically (as is the
case with, for example, static hash methods).
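An index can also be defined on more than one column of a table. For example, suppose you have a table of this form (a sketch; the column list is inferred from the query below):
CREATE TABLE test2 (
    major int,
    minor int,
    name varchar
);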
(say, you keep your /dev directory in a database...) and you frequently make queries like
SELECT name FROM test2 WHERE major = constant AND minor = constant;
then it may be appropriate to define an index on the columns major and minor together, e.g.,
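(using the index name referred to later in this section)
CREATE INDEX test2_mm_idx ON test2 (major, minor);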
Currently, only the B-tree and GiST implementations support multicolumn indexes. Up to 32 columns may
be specified. (This limit can be altered when building PostgreSQL; see the file pg_config_manual.h.)
The query planner can use a multicolumn index for queries that involve the leftmost column in the index
definition plus any number of columns listed to the right of it, without a gap. For example, an index on
(a, b, c) can be used in queries involving all of a, b, and c, or in queries involving both a and b,
or in queries involving only a, but not in other combinations. (In a query involving a and c the planner
could choose to use the index for a, while treating c like an ordinary unindexed column.) Of course, each
column must be used with operators appropriate to the index type; clauses that involve other operators
will not be considered.
Multicolumn indexes can only be used if the clauses involving the indexed columns are joined with AND.
For instance,
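a query of roughly this shape (reconstructed to match the discussion that follows)
SELECT name FROM test2 WHERE major = constant OR minor = constant;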
cannot make use of the index test2_mm_idx defined above to look up both columns. (It can be used to
look up only the major column, however.)
Multicolumn indexes should be used sparingly. Most of the time, an index on a single column is sufficient
and saves space and time. Indexes with more than three columns are unlikely to be helpful unless the
usage of the table is extremely stylized.
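Indexes can also be used to enforce uniqueness of a column's value, or of the combined values of more than one column; a sketch of the command form (names are placeholders):
CREATE UNIQUE INDEX name ON table (column);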
Note: The preferred way to add a unique constraint to a table is ALTER TABLE ... ADD CONSTRAINT.
The use of indexes to enforce unique constraints could be considered an implementation detail that
should not be accessed directly. One should, however, be aware that there’s no need to manually
create indexes on unique columns; doing so would just duplicate the automatically-created index.
A query that compares lower(col1) to a constant can use an index, if one has been defined on the result of the lower(col1) operation; a sketch of such a query and index follows.
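A minimal sketch, assuming a text column col1 on test1 (the index name is hypothetical):
SELECT * FROM test1 WHERE lower(col1) = 'value';
CREATE INDEX test1_lower_col1_idx ON test1 (lower(col1));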
If we were to declare this index UNIQUE, it would prevent creation of rows whose col1 values differ only
in case, as well as rows whose col1 values are actually identical. Thus, indexes on expressions can be
used to enforce constraints that are not definable as simple unique constraints.
As another example, if one often does queries like this:
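a sketch, with hypothetical table and column names:
SELECT * FROM people WHERE (first_name || ' ' || last_name) = 'John Smith';
then it might be worth creating an index like this:
CREATE INDEX people_names ON people ((first_name || ' ' || last_name));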
The syntax of the CREATE INDEX command normally requires writing parentheses around index expres-
sions, as shown in the second example. The parentheses may be omitted when the expression is just a
function call, as in the first example.
Index expressions are relatively expensive to maintain, since the derived expression(s) must be computed
for each row upon insertion or whenever it is updated. Therefore they should be used only when queries
that can use the index are very frequent.
The operator class identifies the operators to be used by the index for that column. For example, a B-
tree index on the type int4 would use the int4_ops class; this operator class includes comparison
functions for values of type int4. In practice the default operator class for the column’s data type is
usually sufficient. The main point of having operator classes is that for some data types, there could be
more than one meaningful index behavior. For example, we might want to sort a complex-number data
type either by absolute value or by real part. We could do this by defining two operator classes for the data
type and then selecting the proper class when making an index.
There are also some built-in operator classes besides the default ones:
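• The operator classes text_pattern_ops, varchar_pattern_ops, bpchar_pattern_ops, and name_pattern_ops support B-tree indexes that can be used by pattern-matching queries (LIKE and anchored regular expressions) when the server does not use the C locale. A sketch of creating such an index (table and column names are illustrative):
CREATE INDEX test_index ON test_table (col varchar_pattern_ops);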
If you do use the C locale, you may instead create an index with the default operator class, and it
will still be useful for pattern-matching queries. Also note that you should create an index with the
default operator class if you want queries involving ordinary comparisons to use an index. Such queries
cannot use the xxx_pattern_ops operator classes. It is allowed to create multiple indexes on the
same column with different operator classes.
Suppose you are storing web server access logs in a database. Most accesses originate from the IP address
range of your organization but some are from elsewhere (say, employees on dial-up connections). If your
searches by IP are primarily for outside accesses, you probably do not need to index the IP range that
corresponds to your organization’s subnet.
Assume a table like this:
CREATE TABLE access_log (
url varchar,
client_ip inet,
...
);
To create a partial index that suits our example, use a command such as this:
CREATE INDEX access_log_client_ip_ix ON access_log (client_ip)
WHERE NOT (client_ip > inet '192.168.100.0' AND client_ip < inet '192.168.100.255');
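A typical query that could use this index involves an address outside the excluded range (the specific values are illustrative):
SELECT * FROM access_log WHERE url = '/index.html' AND client_ip = inet '212.78.10.32';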
Observe that this kind of partial index requires that the common values be predetermined. If the distribu-
tion of values is inherent (due to the nature of the application) and static (not changing over time), this is
not difficult, but if the common values are merely due to the coincidental data load this can require a lot
of maintenance work.
Another possibility is to exclude values from the index that the typical query workload is not interested
in; this is shown in Example 11-2. This results in the same advantages as listed above, but it prevents the
“uninteresting” values from being accessed via that index at all, even if an index scan might be profitable
in that case. Obviously, setting up partial indexes for this kind of scenario will require a lot of care and
experimentation.
If you have a table that contains both billed and unbilled orders, where the unbilled orders take up a small
fraction of the total table and yet those are the most-accessed rows, you can improve performance by
creating an index on just the unbilled rows. The command to create the index would look like this:
CREATE INDEX orders_unbilled_index ON orders (order_nr)
WHERE billed is not true;
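A query that could use this index might look like this (the cutoff value is illustrative):
SELECT * FROM orders WHERE billed is not true AND order_nr < 10000;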
Example 11-2 also illustrates that the indexed column and the column used in the predicate do not need
to match. PostgreSQL supports partial indexes with arbitrary predicates, so long as only columns of the
table being indexed are involved. However, keep in mind that the predicate must match the conditions
used in the queries that are supposed to benefit from the index. To be precise, a partial index can be used
in a query only if the system can recognize that the WHERE condition of the query mathematically implies
the predicate of the index. PostgreSQL does not have a sophisticated theorem prover that can recognize
mathematically equivalent expressions that are written in different forms. (Not only is such a general
theorem prover extremely difficult to create, it would probably be too slow to be of any real use.) The
system can recognize simple inequality implications, for example “x < 1” implies “x < 2”; otherwise
the predicate condition must exactly match part of the query’s WHERE condition or the index will not be
recognized to be usable.
A third possible use for partial indexes does not require the index to be used in queries at all. The idea
here is to create a unique index over a subset of a table, as in Example 11-3. This enforces uniqueness
among the rows that satisfy the index predicate, without constraining those that do not.
Suppose that we have a table describing test outcomes. We wish to ensure that there is only one “success-
ful” entry for a given subject and target combination, but there might be any number of “unsuccessful”
entries. Here is one way to do it:
CREATE TABLE tests (
subject text,
target text,
success boolean,
...
);
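and a partial unique index to enforce the rule (the index name is illustrative):
CREATE UNIQUE INDEX tests_success_constraint ON tests (subject, target)
    WHERE success;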
Finally, a partial index can also be used to override the system’s query plan choices. It may occur that data
sets with peculiar distributions will cause the system to use an index when it really should not. In that case
the index can be set up so that it is not available for the offending query. Normally, PostgreSQL makes
reasonable choices about index usage (e.g., it avoids them when retrieving common values, so the earlier
example really only saves index size, it is not required to avoid index usage), and grossly incorrect plan
choices are cause for a bug report.
Keep in mind that setting up a partial index indicates that you know at least as much as the query planner
knows, in particular you know when an index might be profitable. Forming this knowledge requires ex-
perience and understanding of how indexes in PostgreSQL work. In most cases, the advantage of a partial
index over a regular index will not be much.
More information about partial indexes can be found in The case for partial indexes, Partial indexing in
POSTGRES: research project, and Generalized Partial Indexes.
It is difficult to formulate a general procedure for determining which indexes to set up. There are a number
of typical cases that have been shown in the examples throughout the previous sections. A good deal of
experimentation will be necessary in most cases. The rest of this section gives some tips for that.
• Always run ANALYZE first. This command collects statistics about the distribution of the values in the
table. This information is required to guess the number of rows returned by a query, which is needed by
the planner to assign realistic costs to each possible query plan. In absence of any real statistics, some
default values are assumed, which are almost certain to be inaccurate. Examining an application’s index
usage without having run ANALYZE is therefore a lost cause.
• Use real data for experimentation. Using test data for setting up indexes will tell you what indexes you
need for the test data, but that is all.
It is especially fatal to use very small test data sets. While selecting 1000 out of 100000 rows could be
a candidate for an index, selecting 1 out of 100 rows will hardly be, because the 100 rows will probably
fit within a single disk page, and there is no plan that can beat sequentially fetching 1 disk page.
Also be careful when making up test data, which is often unavoidable when the application is not in
production use yet. Values that are very similar, completely random, or inserted in sorted order will
skew the statistics away from the distribution that real data would have.
• When indexes are not used, it can be useful for testing to force their use. There are run-time parameters
that can turn off various plan types (described in Section 16.4). For instance, turning off sequential scans
(enable_seqscan) and nested-loop joins (enable_nestloop), which are the most basic plans, will
force the system to use a different plan. If the system still chooses a sequential scan or nested-loop join
then there is probably a more fundamental problem for why the index is not used, for example, the
query condition does not match the index. (What kind of query can use what kind of index is explained
in the previous sections.)
• If forcing index usage does use the index, then there are two possibilities: Either the system is right
and using the index is indeed not appropriate, or the cost estimates of the query plans are not reflecting
reality. So you should time your query with and without indexes. The EXPLAIN ANALYZE command
can be useful here.
• If it turns out that the cost estimates are wrong, there are, again, two possibilities. The total cost is
computed from the per-row costs of each plan node times the selectivity estimate of the plan node.
The costs of the plan nodes can be tuned with run-time parameters (described in Section 16.4). An
inaccurate selectivity estimate is due to insufficient statistics. It may be possible to help this by tuning
the statistics-gathering parameters (see ALTER TABLE).
If you do not succeed in adjusting the costs to be more appropriate, then you may have to resort to
forcing index usage explicitly. You may also want to contact the PostgreSQL developers to examine the
issue.
Chapter 12. Concurrency Control
This chapter describes the behavior of the PostgreSQL database system when two or more sessions try
to access the same data at the same time. The goals in that situation are to allow efficient access for
all sessions while maintaining strict data integrity. Every developer of database applications should be
familiar with the topics covered in this chapter.
12.1. Introduction
Unlike traditional database systems which use locks for concurrency control, PostgreSQL maintains data
consistency by using a multiversion model (Multiversion Concurrency Control, MVCC). This means that
while querying a database each transaction sees a snapshot of data (a database version) as it was some
time ago, regardless of the current state of the underlying data. This protects the transaction from viewing
inconsistent data that could be caused by (other) concurrent transaction updates on the same data rows,
providing transaction isolation for each database session.
The main advantage to using the MVCC model of concurrency control rather than locking is that in
MVCC locks acquired for querying (reading) data do not conflict with locks acquired for writing data,
and so reading never blocks writing and writing never blocks reading.
Table- and row-level locking facilities are also available in PostgreSQL for applications that cannot adapt
easily to MVCC behavior. However, proper use of MVCC will generally provide better performance than
locks.
dirty read
A transaction reads data written by a concurrent uncommitted transaction.
nonrepeatable read
A transaction re-reads data it has previously read and finds that data has been modified by another
transaction (that committed since the initial read).
phantom read
A transaction re-executes a query returning a set of rows that satisfy a search condition and finds that
the set of rows satisfying the condition has changed due to another recently-committed transaction.
The four transaction isolation levels and the corresponding behaviors are described in Table 12-1.
In PostgreSQL, you can request any of the four standard transaction isolation levels. But internally, there
are only two distinct isolation levels, which correspond to the levels Read Committed and Serializable.
When you select the level Read Uncommitted you really get Read Committed, and when you select
Repeatable Read you really get Serializable, so the actual isolation level may be stricter than what you
select. This is permitted by the SQL standard: the four isolation levels only define which phenomena
must not happen, they do not define which phenomena must happen. The reason that PostgreSQL only
provides two isolation levels is that this is the only sensible way to map the standard isolation levels to the
multiversion concurrency control architecture. The behavior of the available isolation levels is detailed in
the following subsections.
To set the transaction isolation level of a transaction, use the command SET TRANSACTION.
BEGIN;
UPDATE accounts SET balance = balance + 100.00 WHERE acctnum = 12345;
UPDATE accounts SET balance = balance - 100.00 WHERE acctnum = 7534;
COMMIT;
If two such transactions concurrently try to change the balance of account 12345, we clearly want the
second transaction to start from the updated version of the account’s row. Because each command is
affecting only a predetermined row, letting it see the updated version of the row does not create any
troublesome inconsistency.
Since in Read Committed mode each new command starts with a new snapshot that includes all transac-
tions committed up to that instant, subsequent commands in the same transaction will see the effects of
the committed concurrent transaction in any case. The point at issue here is whether or not within a single
command we see an absolutely consistent view of the database.
The partial transaction isolation provided by Read Committed mode is adequate for many applications,
and this mode is fast and simple to use. However, for applications that do complex queries and updates, it
may be necessary to guarantee a more rigorously consistent view of the database than the Read Committed
mode provides.
because a serializable transaction cannot modify rows changed by other transactions after the serializable
transaction began.
When the application receives this error message, it should abort the current transaction and then retry
the whole transaction from the beginning. The second time through, the transaction sees the previously-
committed change as part of its initial view of the database, so there is no logical conflict in using the new
version of the row as the starting point for the new transaction’s update.
Note that only updating transactions may need to be retried; read-only transactions will never have serial-
ization conflicts.
The Serializable mode provides a rigorous guarantee that each transaction sees a wholly consistent view
of the database. However, the application has to be prepared to retry transactions when concurrent up-
dates make it impossible to sustain the illusion of serial execution. Since the cost of redoing complex
transactions may be significant, this mode is recommended only when updating transactions contain logic
sufficiently complex that they may give wrong answers in Read Committed mode. Most commonly, Se-
rializable mode is necessary when a transaction executes several successive commands that must see
identical views of the database.
class | value
-------+-------
1 | 10
1 | 20
2 | 100
2 | 200
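Suppose that serializable transaction A computes (the table name mytab is assumed):
SELECT SUM(value) FROM mytab WHERE class = 1;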
and then inserts the result (30) as the value in a new row with class = 2. Concurrently, serializable
transaction B computes
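(again with the assumed table name)
SELECT SUM(value) FROM mytab WHERE class = 2;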
and obtains the result 300, which it inserts in a new row with class = 1. Then both transactions commit.
None of the listed undesirable behaviors have occurred, yet we have a result that could not have occurred
in either order serially. If A had executed before B, B would have computed the sum 330, not 300, and
similarly the other order would have resulted in a different sum computed by A.
To guarantee true mathematical serializability, it is necessary for a database system to enforce predicate
locking, which means that a transaction cannot insert or modify a row that would have matched the WHERE
condition of a query in another concurrent transaction. For example, once transaction A has executed the
query SELECT ... WHERE class = 1, a predicate-locking system would forbid transaction B from in-
serting any new row with class 1 until A has committed. 1 Such a locking system is complex to implement
and extremely expensive in execution, since every session must be aware of the details of every query
executed by every concurrent transaction. And this large expense is mostly wasted, since in practice most
applications do not do the sorts of things that could result in problems. (Certainly the example above is
rather contrived and unlikely to represent real software.) Accordingly, PostgreSQL does not implement
predicate locking, and so far as we are aware no other production DBMS does either.
1. Essentially, a predicate-locking system prevents phantom reads by restricting what is written, whereas MVCC prevents them
by restricting what is read.
In those cases where the possibility of nonserializable execution is a real hazard, problems can be pre-
vented by appropriate use of explicit locking. Further discussion appears in the following sections.
ROW EXCLUSIVE
Conflicts with the SHARE, SHARE ROW EXCLUSIVE, EXCLUSIVE, and ACCESS EXCLUSIVE lock
modes.
The commands UPDATE, DELETE, and INSERT acquire this lock mode on the target table (in addition
to ACCESS SHARE locks on any other referenced tables). In general, this lock mode will be acquired
by any command that modifies the data in a table.
SHARE UPDATE EXCLUSIVE
Conflicts with the SHARE UPDATE EXCLUSIVE, SHARE, SHARE ROW EXCLUSIVE, EXCLUSIVE,
and ACCESS EXCLUSIVE lock modes. This mode protects a table against concurrent schema
changes and VACUUM runs.
Acquired by VACUUM (without FULL).
SHARE
Conflicts with the ROW EXCLUSIVE, SHARE UPDATE EXCLUSIVE, SHARE ROW EXCLUSIVE,
EXCLUSIVE, and ACCESS EXCLUSIVE lock modes. This mode protects a table against concurrent
data changes.
Acquired by CREATE INDEX.
SHARE ROW EXCLUSIVE
Conflicts with the ROW EXCLUSIVE, SHARE UPDATE EXCLUSIVE, SHARE, SHARE ROW
EXCLUSIVE, EXCLUSIVE, and ACCESS EXCLUSIVE lock modes. This lock mode is not automatically
acquired by any PostgreSQL command.
EXCLUSIVE
Conflicts with the ROW SHARE, ROW EXCLUSIVE, SHARE UPDATE EXCLUSIVE, SHARE, SHARE
ROW EXCLUSIVE, EXCLUSIVE, and ACCESS EXCLUSIVE lock modes. This mode allows only
concurrent ACCESS SHARE locks, i.e., only reads from the table can proceed in parallel with a
transaction holding this lock mode.
This lock mode is not automatically acquired by any PostgreSQL command.
ACCESS EXCLUSIVE
Conflicts with locks of all modes (ACCESS SHARE, ROW SHARE, ROW EXCLUSIVE, SHARE UPDATE
EXCLUSIVE, SHARE, SHARE ROW EXCLUSIVE, EXCLUSIVE, and ACCESS EXCLUSIVE). This mode
guarantees that the holder is the only transaction accessing the table in any way.
Acquired by the ALTER TABLE, DROP TABLE, REINDEX, CLUSTER, and VACUUM FULL commands.
This is also the default lock mode for LOCK TABLE statements that do not specify a mode explicitly.
Tip: Only an ACCESS EXCLUSIVE lock blocks a SELECT (without FOR UPDATE) statement.
Row-level locks are acquired automatically when a row is updated or deleted, and can also be acquired explicitly
with SELECT FOR UPDATE. Note that once a particular row-level lock is acquired, the transaction may
update the row multiple times without fear of conflicts.
PostgreSQL doesn’t remember any information about modified rows in memory, so it has no limit to the
number of rows locked at one time. However, locking a row may cause a disk write; thus, for example,
SELECT FOR UPDATE will modify selected rows to mark them and so will result in disk writes.
In addition to table and row locks, page-level share/exclusive locks are used to control read/write access
to table pages in the shared buffer pool. These locks are released immediately after a row is fetched or
updated. Application developers normally need not be concerned with page-level locks, but we mention
them for completeness.
12.3.3. Deadlocks
The use of explicit locking can increase the likelihood of deadlocks, wherein two (or more) transactions
each hold locks that the other wants. For example, if transaction 1 acquires an exclusive lock on table A
and then tries to acquire an exclusive lock on table B, while transaction 2 has already exclusive-locked
table B and now wants an exclusive lock on table A, then neither one can proceed. PostgreSQL automati-
cally detects deadlock situations and resolves them by aborting one of the transactions involved, allowing
the other(s) to complete. (Exactly which transaction will be aborted is difficult to predict and should not
be relied on.)
Note that deadlocks can also occur as the result of row-level locks (and thus, they can occur even if explicit
locking is not used). Consider the case in which there are two concurrent transactions modifying a table.
The first transaction executes:
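(The statements in this example are reconstructions; the account numbers 11111 and 22222 are illustrative.)
UPDATE accounts SET balance = balance + 100.00 WHERE acctnum = 11111;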
This acquires a row-level lock on the row with the specified account number. Then, the second transaction
executes:
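UPDATE accounts SET balance = balance + 100.00 WHERE acctnum = 22222;
UPDATE accounts SET balance = balance - 100.00 WHERE acctnum = 11111;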
The first UPDATE statement successfully acquires a row-level lock on the specified row, so it succeeds in
updating that row. However, the second UPDATE statement finds that the row it is attempting to update has
already been locked, so it waits for the transaction that acquired the lock to complete. Transaction two is
now waiting on transaction one to complete before it continues execution. Now, transaction one executes:
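UPDATE accounts SET balance = balance - 100.00 WHERE acctnum = 22222;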
Transaction one attempts to acquire a row-level lock on the specified row, but it cannot: transaction two
already holds such a lock. So it waits for transaction two to complete. Thus, transaction one is blocked
on transaction two, and transaction two is blocked on transaction one: a deadlock condition. PostgreSQL
will detect this situation and abort one of the transactions.
The best defense against deadlocks is generally to avoid them by being certain that all applications using a
database acquire locks on multiple objects in a consistent order. In the example above, if both transactions
had updated the rows in the same order, no deadlock would have occurred. One should also ensure that the
first lock acquired on an object in a transaction is the highest mode that will be needed for that object. If it
is not feasible to verify this in advance, then deadlocks may be handled on-the-fly by retrying transactions
that are aborted due to deadlock.
So long as no deadlock situation is detected, a transaction seeking either a table-level or row-level lock
will wait indefinitely for conflicting locks to be released. This means it is a bad idea for applications to
hold transactions open for long periods of time (e.g., while waiting for user input).
B-tree indexes
Short-term share/exclusive page-level locks are used for read/write access. Locks are released imme-
diately after each index row is fetched or inserted. B-tree indexes provide the highest concurrency
without deadlock conditions.
GiST and R-tree indexes
Share/exclusive index-level locks are used for read/write access. Locks are released after the com-
mand is done.
Hash indexes
Share/exclusive hash-bucket-level locks are used for read/write access. Locks are released after the
whole bucket is processed. Bucket-level locks provide better concurrency than index-level ones, but
deadlock is possible since the locks are held longer than one index operation.
In short, B-tree indexes offer the best performance for concurrent applications; since they also have more
features than hash indexes, they are the recommended index type for concurrent applications that need to
index scalar data. When dealing with non-scalar data, B-trees obviously cannot be used; in that situation,
application developers should be aware of the relatively poor concurrent performance of GiST and R-tree
indexes.
Chapter 13. Performance Tips
Query performance can be affected by many things. Some of these can be manipulated by the user, while
others are fundamental to the underlying design of the system. This chapter provides some hints about
understanding and tuning PostgreSQL performance.
• Estimated start-up cost (Time expended before output scan can start, e.g., time to do the sorting in a
sort node.)
• Estimated total cost (If all rows were to be retrieved, which they may not be: a query with a LIMIT
clause will stop short of paying the total cost, for example.)
• Estimated number of rows output by this plan node (Again, only if executed to completion)
• Estimated average width (in bytes) of rows output by this plan node
The costs are measured in units of disk page fetches. (CPU effort estimates are converted into disk-page
units using some fairly arbitrary fudge factors. If you want to experiment with these factors, see the list of
run-time configuration parameters in Section 16.4.5.2.)
It’s important to note that the cost of an upper-level node includes the cost of all its child nodes. It’s also
important to realize that the cost only reflects things that the planner/optimizer cares about. In particular,
the cost does not consider the time spent transmitting result rows to the frontend, which could be a pretty
dominant factor in the true elapsed time; but the planner ignores it because it cannot change it by altering
the plan. (Every correct plan will output the same row set, we trust.)
Rows output is a little tricky because it is not the number of rows processed/scanned by the query, it is
usually less, reflecting the estimated selectivity of any WHERE-clause conditions that are being applied
at this node. Ideally the top-level rows estimate will approximate the number of rows actually returned,
updated, or deleted by the query.
Here are some examples (using the regression test database after a VACUUM ANALYZE, and 7.3 develop-
ment sources):
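The simplest case is presumably a plain scan of the whole table:
EXPLAIN SELECT * FROM tenk1;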
QUERY PLAN
-------------------------------------------------------------
Seq Scan on tenk1 (cost=0.00..333.00 rows=10000 width=148)
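The page and row counts used in this estimate come from pg_class; a query such as this would show them:
SELECT relpages, reltuples FROM pg_class WHERE relname = 'tenk1';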
you will find out that tenk1 has 233 disk pages and 10000 rows. So the cost is estimated at 233 page
reads, defined as costing 1.0 apiece, plus 10000 * cpu_tuple_cost which is currently 0.01 (try SHOW
cpu_tuple_cost).
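Now add a WHERE condition (reconstructed from the filter shown in the plan below):
EXPLAIN SELECT * FROM tenk1 WHERE unique1 < 1000;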
QUERY PLAN
------------------------------------------------------------
Seq Scan on tenk1 (cost=0.00..358.00 rows=1033 width=148)
Filter: (unique1 < 1000)
The estimate of output rows has gone down because of the WHERE clause. However, the scan will still have
to visit all 10000 rows, so the cost hasn’t decreased; in fact it has gone up a bit to reflect the extra CPU
time spent checking the WHERE condition.
The actual number of rows this query would select is 1000, but the estimate is only approximate. If you try
to duplicate this experiment, you will probably get a slightly different estimate; moreover, it will change
after each ANALYZE command, because the statistics produced by ANALYZE are taken from a randomized
sample of the table.
Modify the query to restrict the condition even more:
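(reconstructed from the index condition shown below)
EXPLAIN SELECT * FROM tenk1 WHERE unique1 < 50;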
QUERY PLAN
-------------------------------------------------------------------------------
Index Scan using tenk1_unique1 on tenk1 (cost=0.00..179.33 rows=49 width=148)
Index Cond: (unique1 < 50)
and you will see that if we make the WHERE condition selective enough, the planner will eventually decide
that an index scan is cheaper than a sequential scan. This plan will only have to visit 50 rows because of
the index, so it wins despite the fact that each individual fetch is more expensive than reading a whole
disk page sequentially.
Add another condition to the WHERE clause:
EXPLAIN SELECT * FROM tenk1 WHERE unique1 < 50 AND stringu1 = 'xxx';
QUERY PLAN
-------------------------------------------------------------------------------
Index Scan using tenk1_unique1 on tenk1 (cost=0.00..179.45 rows=1 width=148)
Index Cond: (unique1 < 50)
Filter: (stringu1 = 'xxx'::name)
The added condition stringu1 = 'xxx' reduces the output-rows estimate, but not the cost because we
still have to visit the same set of rows. Notice that the stringu1 clause cannot be applied as an index
condition (since this index is only on the unique1 column). Instead it is applied as a filter on the rows
retrieved by the index. Thus the cost has actually gone up a little bit to reflect this extra checking.
Let’s try joining two tables, using the columns we have been discussing:
EXPLAIN SELECT * FROM tenk1 t1, tenk2 t2 WHERE t1.unique1 < 50 AND t1.unique2 = t2.unique2;
QUERY PLAN
----------------------------------------------------------------------------
Nested Loop (cost=0.00..327.02 rows=49 width=296)
-> Index Scan using tenk1_unique1 on tenk1 t1
(cost=0.00..179.33 rows=49 width=148)
Index Cond: (unique1 < 50)
-> Index Scan using tenk2_unique2 on tenk2 t2
(cost=0.00..3.01 rows=1 width=148)
Index Cond: ("outer".unique2 = t2.unique2)
In this nested-loop join, the outer scan is the same index scan we had in the example before last, and so
its cost and row count are the same because we are applying the WHERE clause unique1 < 50 at that
node. The t1.unique2 = t2.unique2 clause is not relevant yet, so it doesn’t affect row count of the
outer scan. For the inner scan, the unique2 value of the current outer-scan row is plugged into the inner
index scan to produce an index condition like t2.unique2 = constant. So we get the same inner-scan
plan and costs that we’d get from, say, EXPLAIN SELECT * FROM tenk2 WHERE unique2 = 42. The
costs of the loop node are then set on the basis of the cost of the outer scan, plus one repetition of the inner
scan for each outer row (49 * 3.01, here), plus a little CPU time for join processing.
In this example the join’s output row count is the same as the product of the two scans’ row counts, but
that’s not true in general, because in general you can have WHERE clauses that mention both tables and
so can only be applied at the join point, not to either input scan. For example, if we added WHERE ...
AND t1.hundred < t2.hundred, that would decrease the output row count of the join node, but not
change either input scan.
One way to look at variant plans is to force the planner to disregard whatever strategy it thought was the
winner, using the enable/disable flags for each plan type. (This is a crude tool, but useful. See also Section
13.3.)
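For instance, disabling nested-loop joins before repeating the previous query could produce the plan below (a sketch; enable_nestloop is the run-time parameter mentioned earlier):
SET enable_nestloop = off;
EXPLAIN SELECT * FROM tenk1 t1, tenk2 t2 WHERE t1.unique1 < 50 AND t1.unique2 = t2.unique2;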
QUERY PLAN
--------------------------------------------------------------------------
Hash Join (cost=179.45..563.06 rows=49 width=296)
Hash Cond: ("outer".unique2 = "inner".unique2)
-> Seq Scan on tenk2 t2 (cost=0.00..333.00 rows=10000 width=148)
-> Hash (cost=179.33..179.33 rows=49 width=148)
-> Index Scan using tenk1_unique1 on tenk1 t1
(cost=0.00..179.33 rows=49 width=148)
Index Cond: (unique1 < 50)
This plan proposes to extract the 50 interesting rows of tenk1 using the same index scan as before, stash them
into an in-memory hash table, and then do a sequential scan of tenk2, probing into the hash table for
possible matches of t1.unique2 = t2.unique2 at each tenk2 row. The cost to read tenk1 and set
up the hash table is entirely start-up cost for the hash join, since we won’t get any rows out until we can
start reading tenk2. The total time estimate for the join also includes a hefty charge for the CPU time to
probe the hash table 10000 times. Note, however, that we are not charging 10000 times 179.33; the hash
table setup is only done once in this plan type.
It is possible to check on the accuracy of the planner’s estimated costs by using EXPLAIN ANALYZE. This
command actually executes the query, and then displays the true run time accumulated within each plan
node along with the same estimated costs that a plain EXPLAIN shows. For example, we might get a result
like this:
EXPLAIN ANALYZE SELECT * FROM tenk1 t1, tenk2 t2 WHERE t1.unique1 < 50 AND t1.unique2 = t2.unique2;
QUERY PLAN
-------------------------------------------------------------------------------
Nested Loop (cost=0.00..327.02 rows=49 width=296)
(actual time=1.181..29.822 rows=50 loops=1)
-> Index Scan using tenk1_unique1 on tenk1 t1
(cost=0.00..179.33 rows=49 width=148)
(actual time=0.630..8.917 rows=50 loops=1)
Index Cond: (unique1 < 50)
-> Index Scan using tenk2_unique2 on tenk2 t2
(cost=0.00..3.01 rows=1 width=148)
(actual time=0.295..0.324 rows=1 loops=50)
Index Cond: ("outer".unique2 = t2.unique2)
Total runtime: 31.604 ms
Note that the “actual time” values are in milliseconds of real time, whereas the “cost” estimates are
expressed in arbitrary units of disk fetches; so they are unlikely to match up. The thing to pay attention to
is the ratios.
In some query plans, it is possible for a subplan node to be executed more than once. For example, the
inner index scan is executed once per outer row in the above nested-loop plan. In such cases, the “loops”
value reports the total number of executions of the node, and the actual time and rows values shown are
averages per-execution. This is done to make the numbers comparable with the way that the cost estimates
are shown. Multiply by the “loops” value to get the total time actually spent in the node.
The Total runtime shown by EXPLAIN ANALYZE includes executor start-up and shut-down time, as
well as time spent processing the result rows. It does not include parsing, rewriting, or planning time. For
a SELECT query, the total run time will normally be just a little larger than the total time reported for the
top-level plan node. For INSERT, UPDATE, and DELETE commands, the total run time may be considerably
larger, because it includes the time spent processing the result rows. In these commands, the time for the
top plan node essentially is the time spent computing the new rows and/or locating the old ones, but it
doesn’t include the time spent making the changes.
It is worth noting that EXPLAIN results should not be extrapolated to situations other than the one you are
actually testing; for example, results on a toy-sized table can’t be assumed to apply to large tables. The
planner’s cost estimates are not linear and so it may well choose a different plan for a larger or smaller
table. An extreme example is that on a table that only occupies one disk page, you’ll nearly always get a
sequential scan plan whether indexes are available or not. The planner realizes that it’s going to take one
disk page read to process the table in any case, so there’s no value in expending additional page reads to
look at an index.
SELECT relname, relkind, reltuples, relpages FROM pg_class WHERE relname LIKE 'tenk1%';
Here we can see that tenk1 contains 10000 rows, as do its indexes, but the indexes are (unsurprisingly)
much smaller than the table.
For efficiency reasons, reltuples and relpages are not updated on-the-fly, and so they usually contain
somewhat out-of-date values. They are updated by VACUUM, ANALYZE, and a few DDL commands such
as CREATE INDEX. A stand-alone ANALYZE, that is one not part of VACUUM, generates an approximate
reltuples value since it does not read every row of the table. The planner will scale the values it finds
in pg_class to match the current physical table size, thus obtaining a closer approximation.
Most queries retrieve only a fraction of the rows in a table, due to having WHERE clauses that restrict the
rows to be examined. The planner thus needs to make an estimate of the selectivity of WHERE clauses, that
is, the fraction of rows that match each condition in the WHERE clause. The information used for this task
is stored in the pg_statistic system catalog. Entries in pg_statistic are updated by ANALYZE and
VACUUM ANALYZE commands and are always approximate even when freshly updated.
Rather than look at pg_statistic directly, it’s better to look at its view pg_stats when examining the
statistics manually. pg_stats is designed to be more easily readable. Furthermore, pg_stats is readable
by all, whereas pg_statistic is only readable by a superuser. (This prevents unprivileged users from
learning something about the contents of other people’s tables from the statistics. The pg_stats view is
restricted to show only rows about tables that the current user can read.) For example, we might do:
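(a reconstructed query; the road table is part of the regression database, and the column list matches the output shown)
SELECT attname, n_distinct, most_common_vals FROM pg_stats WHERE tablename = 'road';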
 attname | n_distinct | most_common_vals
---------+------------+------------------------------------------------------------
 name    | -0.467008  | {"I- 580 Ramp","I- 880 ...
 thepath | 20         | {"[(-122.089,37.71),(-122.0886,37.711)]"}
(2 rows)
The amount of information stored in pg_statistic, in particular the maximum number of entries in
the most_common_vals and histogram_bounds arrays for each column, can be set on a column-
by-column basis using the ALTER TABLE SET STATISTICS command, or globally by setting the de-
fault_statistics_target configuration variable. The default limit is presently 10 entries. Raising the limit
may allow more accurate planner estimates to be made, particularly for columns with irregular data distri-
butions, at the price of consuming more space in pg_statistic and slightly more time to compute the
estimates. Conversely, a lower limit may be appropriate for columns with simple data distributions.
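With a query of roughly this shape (reconstructed from the join conditions discussed below):
SELECT * FROM a, b, c WHERE a.id = b.id AND b.ref = c.id;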
the planner is free to join the given tables in any order. For example, it could generate a query plan that
joins A to B, using the WHERE condition a.id = b.id, and then joins C to this joined table, using the
other WHERE condition. Or it could join B to C and then join A to that result. Or it could join A to C and
then join them with B, but that would be inefficient, since the full Cartesian product of A and C would
have to be formed, there being no applicable condition in the WHERE clause to allow optimization of the
join. (All joins in the PostgreSQL executor happen between two input tables, so it’s necessary to build up
the result in one or another of these fashions.) The important point is that these different join possibilities
give semantically equivalent results but may have hugely different execution costs. Therefore, the planner
will explore all of them to try to find the most efficient query plan.
When a query only involves two or three tables, there aren’t many join orders to worry about. But the
number of possible join orders grows exponentially as the number of tables expands. Beyond ten or so
input tables it’s no longer practical to do an exhaustive search of all the possibilities, and even for six
or seven tables planning may take an annoyingly long time. When there are too many input tables, the
PostgreSQL planner will switch from exhaustive search to a genetic probabilistic search through a limited
number of possibilities. (The switch-over threshold is set by the geqo_threshold run-time parameter.) The
genetic search takes less time, but it won’t necessarily find the best possible plan.
When the query involves outer joins, the planner has much less freedom than it does for plain (inner)
joins. For example, consider
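(a reconstruction consistent with the description that follows)
SELECT * FROM a LEFT JOIN (b JOIN c ON (b.ref = c.id)) ON (a.id = b.id);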
Although this query’s restrictions are superficially similar to the previous example, the semantics are
different because a row must be emitted for each row of A that has no matching row in the join of B and
C. Therefore the planner has no choice of join order here: it must join B to C and then join A to that result.
Accordingly, this query takes less time to plan than the previous query.
Explicit inner join syntax (INNER JOIN, CROSS JOIN, or unadorned JOIN) is semantically the same as
listing the input relations in FROM, so it does not need to constrain the join order. But it is possible to
instruct the PostgreSQL query planner to treat explicit inner JOINs as constraining the join order anyway.
For example, these three queries are logically equivalent:
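(the first, a plain FROM-list form, reconstructed to match the two explicit-JOIN forms below)
SELECT * FROM a, b, c WHERE a.id = b.id AND b.ref = c.id;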
SELECT * FROM a CROSS JOIN b CROSS JOIN c WHERE a.id = b.id AND b.ref = c.id;
SELECT * FROM a JOIN (b JOIN c ON (b.ref = c.id)) ON (a.id = b.id);
But if we tell the planner to honor the JOIN order, the second and third take less time to plan than the first.
This effect is not worth worrying about for only three tables, but it can be a lifesaver with many tables.
To force the planner to follow the JOIN order for inner joins, set the join_collapse_limit run-time param-
eter to 1. (Other possible values are discussed below.)
You do not need to constrain the join order completely in order to cut search time, because it’s OK to use
JOIN operators within items of a plain FROM list. For example, consider
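a sketch, with illustrative table names:
SELECT * FROM a CROSS JOIN b, c, d, e WHERE something;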
With join_collapse_limit = 1, this forces the planner to join A to B before joining them to other
tables, but doesn’t constrain its choices otherwise. In this example, the number of possible join orders is
reduced by a factor of 5.
Constraining the planner’s search in this way is a useful technique both for reducing planning time and
for directing the planner to a good query plan. If the planner chooses a bad join order by default, you can
force it to choose a better order via JOIN syntax — assuming that you know of a better order, that is.
Experimentation is recommended.
A closely related issue that affects planning time is collapsing of subqueries into their parent query. For
example, consider
SELECT *
FROM x, y,
(SELECT * FROM a, b, c WHERE something) AS ss
WHERE somethingelse;
This situation might arise from use of a view that contains a join; the view’s SELECT rule will be inserted
in place of the view reference, yielding a query much like the above. Normally, the planner will try to
collapse the subquery into the parent, yielding
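(the flattened form, keeping the placeholder conditions from above)
SELECT * FROM x, y, a, b, c WHERE something AND somethingelse;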
This usually results in a better plan than planning the subquery separately. (For example, the outer WHERE
conditions might be such that joining X to A first eliminates many rows of A, thus avoiding the need
to form the full logical output of the subquery.) But at the same time, we have increased the plan-
ning time; here, we have a five-way join problem replacing two separate three-way join problems. Be-
cause of the exponential growth of the number of possibilities, this makes a big difference. The plan-
ner tries to avoid getting stuck in huge join search problems by not collapsing a subquery if more than
from_collapse_limit FROM items would result in the parent query. You can trade off planning time
against quality of plan by adjusting this run-time parameter up or down.
from_collapse_limit and join_collapse_limit are similarly named because they do almost the same
thing: one controls when the planner will “flatten out” subselects, and the other controls when
it will flatten out explicit inner joins. Typically you would either set join_collapse_limit
equal to from_collapse_limit (so that explicit joins and subselects act similarly) or set
join_collapse_limit to 1 (if you want to control join order with explicit joins). But you might set
them differently if you are trying to fine-tune the trade off between planning time and run time.
Note that loading a large number of rows using COPY is almost always faster than using INSERT, even if
PREPARE is used and multiple insertions are batched into a single transaction.
III. Server Administration
This part covers topics that are of interest to a PostgreSQL database administrator. This includes instal-
lation of the software, set up and configuration of the server, management of users and databases, and
maintenance tasks. Anyone who runs a PostgreSQL server, even for personal use, but especially in pro-
duction, should be familiar with the topics covered in this part.
The information in this part is arranged approximately in the order in which a new user should read it.
But the chapters are self-contained and can be read individually as desired. The information in this part is
presented in a narrative fashion in topical units. Readers looking for a complete description of a particular
command should look into Part VI.
The first few chapters are written so that they can be understood without prerequisite knowledge, so that
new users who need to set up their own server can begin their exploration with this part. The rest of this
part is about tuning and management; that material assumes that the reader is familiar with the general
use of the PostgreSQL database system. Readers are encouraged to look at Part I and Part II for additional
information.
Chapter 14. Installation Instructions
This chapter describes the installation of PostgreSQL from the source code distribution. (If you are in-
stalling a pre-packaged distribution, such as an RPM or Debian package, ignore this chapter and read the
packager’s instructions instead.)
./configure
gmake
su
gmake install
adduser postgres
mkdir /usr/local/pgsql/data
chown postgres /usr/local/pgsql/data
su - postgres
/usr/local/pgsql/bin/initdb -D /usr/local/pgsql/data
/usr/local/pgsql/bin/postmaster -D /usr/local/pgsql/data >logfile 2>&1 &
/usr/local/pgsql/bin/createdb test
/usr/local/pgsql/bin/psql test
14.2. Requirements
In general, a modern Unix-compatible platform should be able to run PostgreSQL. The platforms that had
received specific testing at the time of release are listed in Section 14.7 below. In the doc subdirectory of
the distribution there are several platform-specific FAQ documents you might wish to consult if you are
having trouble.
The following software packages are required for building PostgreSQL:
• GNU make is required; other make programs will not work. GNU make is often installed under the
name gmake; this document will always refer to it by that name. (On some systems GNU make is the
default tool with the name make.) To test for GNU make enter
gmake --version
• Additional software is needed to build PostgreSQL on Windows. You can build PostgreSQL for NT-
based versions of Windows (like Windows XP and 2003) using MinGW; see doc/FAQ_MINGW for
details. You can also build PostgreSQL using Cygwin; see doc/FAQ_CYGWIN. A Cygwin-based build
will work on older versions of Windows, but if you have a choice, we recommend the MinGW approach.
While these are the only tool sets recommended for a complete build, it is possible to build just the C
client library (libpq) and the interactive terminal (psql) using other Windows tool sets. For details of
that see Chapter 15.
The following packages are optional. They are not required in the default configuration, but they are
needed when certain build options are enabled, as explained below.
• To build the server programming language PL/Perl you need a full Perl installation, including the
libperl library and the header files. Since PL/Perl will be a shared library, the libperl library
must be a shared library also on most platforms. This appears to be the default in recent Perl versions,
but it was not in earlier versions, and in any case it is the choice of whoever installed Perl at your site.
If you don’t have the shared library but you need one, a message like this will appear during the build
to point out this fact:
*** Cannot build PL/Perl because libperl is not a shared library.
*** You might have to rebuild your Perl installation. Refer to
*** the documentation for details.
(If you don’t follow the on-screen output you will merely notice that the PL/Perl library object,
plperl.so or similar, will not be installed.) If you see this, you will have to rebuild and install Perl
manually to be able to build PL/Perl. During the configuration process for Perl, request a shared
library.
• To build the PL/Python server programming language, you need a Python installation with the header
files and the distutils module. The distutils module is included by default with Python 1.6 and later;
users of earlier versions of Python will need to install it.
Since PL/Python will be a shared library, the libpython library must be a shared library also on most
platforms. This is not the case in a default Python installation. If after building and installing you have
a file called plpython.so (possibly a different extension), then everything went well. Otherwise you
should have seen a notice like this flying by:
*** Cannot build PL/Python because libpython is not a shared library.
*** You might have to rebuild your Python installation. Refer to
*** the documentation for details.
That means you have to rebuild (part of) your Python installation to supply this shared library.
If you have problems, run Python 2.3 or later’s configure using the --enable-shared flag. On some
operating systems you don’t have to build a shared library, but you will have to convince the PostgreSQL
build system of this. Consult the Makefile in the src/pl/plpython directory for details.
• If you want to build the PL/Tcl procedural language, you of course need a Tcl installation.
• To enable Native Language Support (NLS), that is, the ability to display a program’s messages in a
language other than English, you need an implementation of the Gettext API. Some operating systems
have this built-in (e.g., Linux, NetBSD, Solaris); for other systems you can download an add-on package
from here: http://developer.postgresql.org/~petere/bsd-gettext/. If you are using the Gettext implemen-
tation in the GNU C library then you will additionally need the GNU Gettext package for some utility
programs. For any of the other implementations you will not need it.
• Kerberos, OpenSSL, and/or PAM, if you want to support authentication or encryption using these ser-
vices.
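As a rough sketch of the rebuilds mentioned in the PL/Perl and PL/Python items above (this is not taken
from the PostgreSQL documentation; the version numbers are placeholders and the options belong to the
Perl and Python build systems, so consult their documentation):
cd perl-x.y.z                     # unpacked Perl source tree (placeholder version)
sh Configure -des -Duseshrplib    # ask Perl to build libperl as a shared library
make
make install
cd ../Python-2.3.x                # unpacked Python source tree (placeholder version)
./configure --enable-shared       # ask Python to build libpython as a shared library
make
make install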
If you are building from a CVS tree instead of using a released source package, or if you want to do
development, you also need the following packages:
• GNU Flex and Bison are needed to build a CVS checkout or if you changed the actual scanner and
parser definition files. If you need them, be sure to get Flex 2.5.4 or later and Bison 1.875 or later. Other
yacc programs can sometimes be used, but doing so requires extra effort and is not recommended. Other
lex programs will definitely not work.
If you need to get a GNU package, you can find it at your local GNU mirror site (see
http://www.gnu.org/order/ftp.html for a list) or at ftp://ftp.gnu.org/gnu/.
Also check that you have sufficient disk space. You will need about 65 MB for the source tree during
compilation and about 15 MB for the installation directory. An empty database cluster takes about 25
MB; databases take about five times the amount of space that a flat text file with the same data would take.
If you are going to run the regression tests you will temporarily need up to an extra 90 MB. Use the df
command to check free disk space.
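For instance, a quick check might look like this (a minimal illustration; the exact df options vary somewhat
between platforms):
df -k .        # free space, in kilobytes, for the file system holding the current directory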
After you have obtained the source file, unpack it:
gunzip postgresql-8.0.0.tar.gz
tar xf postgresql-8.0.0.tar
This will create a directory postgresql-8.0.0 under the current directory with the PostgreSQL sources.
Change into that directory for the rest of the installation procedure.
1. Make sure that your database is not updated during or after the backup. This does not affect the
integrity of the backup, but the changed data would of course not be included. If necessary, edit the
permissions in the file /usr/local/pgsql/data/pg_hba.conf (or equivalent) to disallow access
from everyone except you.
2. To back up your database installation, type:
pg_dumpall > outputfile
If you need to preserve OIDs (such as when using them as foreign keys), then use the -o option when
running pg_dumpall.
pg_dumpall does not save large objects. Check Section 22.1.4 if you need to do this.
To make the backup, you can use the pg_dumpall command from the version you are currently run-
ning. For best results, however, try to use the pg_dumpall command from PostgreSQL 8.0.0, since
this version contains bug fixes and improvements over older versions. While this advice might seem
idiosyncratic since you haven’t installed the new version yet, it is advisable to follow it if you plan to
install the new version in parallel with the old version. In that case you can complete the installation
normally and transfer the data later. This will also decrease the downtime.
3. If you are installing the new version at the same location as the old one then shut down the old server,
at the latest before you install the new files:
pg_ctl stop
On systems that have PostgreSQL started at boot time, there is probably a start-up file that will
accomplish the same thing. For example, on a Red Hat Linux system one might find that
/etc/rc.d/init.d/postgresql stop
works.
Very old versions might not have pg_ctl. If you can’t find it or it doesn’t work, find out the process
ID of the old server, for example by typing
ps ax | grep postmaster
and then send that process an interrupt signal (SIGINT) with the kill command to shut it down.
4. If you are installing in the same place as the old version then it is also a good idea to move the old
installation out of the way, in case you have trouble and need to revert to it. Use a command like this:
mv /usr/local/pgsql /usr/local/pgsql.old
After you have installed PostgreSQL 8.0.0, create a new database directory and start the new server.
Remember that you must execute these commands while logged in to the special database user account
(which you already have if you are upgrading).
/usr/local/pgsql/bin/initdb -D /usr/local/pgsql/data
/usr/local/pgsql/bin/postmaster -D /usr/local/pgsql/data
1. Configuration
The first step of the installation procedure is to configure the source tree for your system and choose
the options you would like. This is done by running the configure script. For a default installation
simply enter
./configure
This script will run a number of tests to guess values for various system dependent variables and
detect some quirks of your operating system, and finally will create several files in the build tree to
record what it found. (You can also run configure in a directory outside the source tree if you want
to keep the build directory separate.)
The default configuration will build the server and utilities, as well as all client applications and
interfaces that require only a C compiler. All files will be installed under /usr/local/pgsql by
default.
You can customize the build and installation process by supplying one or more of the following
command line options to configure:
--prefix=PREFIX
Install all files under the directory PREFIX instead of /usr/local/pgsql. The actual files will
be installed into various subdirectories; no files will ever be installed directly into the PREFIX
directory.
If you have special needs, you can also customize the individual subdirectories with the follow-
ing options. However, if you leave these with their defaults, the installation will be relocatable,
meaning you can move the directory after installation. (The man and doc locations are not af-
fected by this.)
For relocatable installs, you might want to use configure’s --disable-rpath option. Also,
you will need to tell the operating system how to find the shared libraries.
--exec-prefix=EXEC-PREFIX
You can install architecture-dependent files under a different prefix, EXEC-PREFIX, than what
PREFIX was set to. This can be useful to share architecture-independent files between hosts.
If you omit this, then EXEC-PREFIX is set equal to PREFIX and both architecture-dependent
and independent files will be installed under the same tree, which is probably what you want.
--bindir=DIRECTORY
Specifies the directory for executable programs. The default is EXEC-PREFIX/bin, which nor-
mally means /usr/local/pgsql/bin.
--datadir=DIRECTORY
Sets the directory for read-only data files used by the installed programs. The default is
PREFIX/share. Note that this has nothing to do with where your database files will be placed.
--sysconfdir=DIRECTORY
The directory for various configuration files. The default is PREFIX/etc.
--libdir=DIRECTORY
The location to install libraries and dynamically loadable modules. The default is
EXEC-PREFIX/lib.
--includedir=DIRECTORY
The directory for installing C and C++ header files. The default is PREFIX/include.
--mandir=DIRECTORY
The man pages that come with PostgreSQL will be installed under this directory, in their respec-
tive manx subdirectories. The default is PREFIX/man.
--with-docdir=DIRECTORY
--without-docdir
Documentation files, except “man” pages, will be installed into this directory. The default is
PREFIX/doc. If the option --without-docdir is specified, the documentation will not be
installed by make install. This is intended for packaging scripts that have special methods
for installing documentation.
Note: Care has been taken to make it possible to install PostgreSQL into shared installation
locations (such as /usr/local/include) without interfering with the namespace of the rest of
the system. First, the string “/postgresql” is automatically appended to datadir, sysconfdir,
and docdir, unless the fully expanded directory name already contains the string “postgres”
or “pgsql”. For example, if you choose /usr/local as prefix, the documentation will be
installed in /usr/local/doc/postgresql, but if the prefix is /opt/postgres, then it will be
in /opt/postgres/doc. The public C header files of the client interfaces are installed into
includedir and are namespace-clean. The internal header files and the server header files are
installed into private directories under includedir. See the documentation of each interface for
information about how to get at its header files. Finally, a private subdirectory will also be
created, if appropriate, under libdir for dynamically loadable modules.
--with-includes=DIRECTORIES
DIRECTORIES is a colon-separated list of directories that will be added to the list the com-
piler searches for header files. If you have optional packages (such as GNU Readline) installed
in a non-standard location, you have to use this option and probably also the corresponding
--with-libraries option.
Example: --with-includes=/opt/gnu/include:/usr/sup/include.
--with-libraries=DIRECTORIES
DIRECTORIES is a colon-separated list of directories to search for libraries. You will probably
have to use this option (and the corresponding --with-includes option) if you have packages
installed in non-standard locations.
Example: --with-libraries=/opt/gnu/lib:/usr/sup/lib.
--enable-nls[=LANGUAGES]
Enables Native Language Support (NLS), that is, the ability to display a program’s messages in
a language other than English. LANGUAGES is a space-separated list of codes of the languages
that you want supported, for example --enable-nls=’de fr’. (The intersection between
your list and the set of actually provided translations will be computed automatically.) If you do
not specify a list, then all available translations are installed.
To use this option, you will need an implementation of the Gettext API; see above.
--with-pgport=NUMBER
Set NUMBER as the default port number for server and clients. The default is 5432. The port can
always be changed later on, but if you specify it here then both server and clients will have the
same default compiled in, which can be very convenient. Usually the only good reason to select
a non-default value is if you intend to run multiple PostgreSQL servers on the same machine.
--with-perl
Build the PL/Perl server-side language.
--with-python
Build the PL/Python server-side language.
--with-tcl
Build the PL/Tcl server-side language.
--with-tclconfig=DIRECTORY
Tcl installs the file tclConfig.sh, which contains configuration information needed to build
modules interfacing to Tcl. This file is normally found automatically at a well-known location,
but if you want to use a different version of Tcl you can specify the directory in which to look
for it.
--with-krb4
--with-krb5
Build with support for Kerberos authentication. You can use either Kerberos version 4 or 5, but
not both. On many systems, the Kerberos system is not installed in a location that is searched by
default (e.g., /usr/include, /usr/lib), so you must use the options --with-includes and
--with-libraries in addition to this option. configure will check for the required header
files and libraries to make sure that your Kerberos installation is sufficient before proceeding.
--with-krb-srvnam=NAME
The name of the Kerberos service principal. postgres is the default. There’s probably no reason
to change this.
--with-openssl
Build with support for SSL (encrypted) connections. This requires the OpenSSL package to be
installed. configure will check for the required header files and libraries to make sure that
your OpenSSL installation is sufficient before proceeding.
--with-pam
Build with PAM (Pluggable Authentication Modules) support.
--without-readline
Prevents use of the Readline library. This disables command-line editing and history in psql, so
it is not recommended.
--with-rendezvous
Build with Rendezvous support. This requires Rendezvous support in your operating system.
Recommended on Mac OS X.
--disable-spinlocks
Allow the build to succeed even if PostgreSQL has no CPU spinlock support for the platform.
The lack of spinlock support will result in poor performance; therefore, this option should only
be used if the build aborts and informs you that the platform lacks spinlock support. If this option
is required to build PostgreSQL on your platform, please report the problem to the PostgreSQL
developers.
--enable-thread-safety
Make the client libraries thread-safe. This allows concurrent threads in libpq and ECPG pro-
grams to safely control their private connection handles. This option requires adequate threading
support in your operating system.
--without-zlib
Prevents use of the Zlib library. This disables support for compressed archives in pg_dump and
pg_restore. This option is only intended for those rare systems where this library is not available.
--enable-debug
Compiles all programs and libraries with debugging symbols. This means that you can run the
programs through a debugger to analyze problems. This enlarges the size of the installed exe-
cutables considerably, and on non-GCC compilers it usually also disables compiler optimization,
causing slowdowns. However, having the symbols available is extremely helpful for dealing with
any problems that may arise. Currently, this option is recommended for production installations
only if you use GCC. But you should always have it on if you are doing development work or
running a beta version.
--enable-cassert
Enables assertion checks in the server, which test for many “can’t happen” conditions. This is
invaluable for code development purposes, but the tests slow things down a little. Also, having
the tests turned on won’t necessarily enhance the stability of your server! The assertion checks
are not categorized for severity, and so what might be a relatively harmless bug will still lead
to server restarts if it triggers an assertion failure. Currently, this option is not recommended for
production use, but you should have it on for development work or when running a beta version.
--enable-depend
Enables automatic dependency tracking. With this option, the makefiles are set up so that all
affected object files will be rebuilt when any header file is changed. This is useful if you are
doing development work, but is just wasted overhead if you intend only to compile once and
install. At present, this option will work only if you use GCC.
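For illustration, a configure invocation that combines several of the options described above might look
like the following (the prefix and the OpenSSL paths are invented for the example):
./configure --prefix=/opt/postgresql \
            --with-perl --with-openssl --enable-thread-safety \
            --enable-nls=’de fr’ \
            --with-includes=/opt/openssl/include \
            --with-libraries=/opt/openssl/lib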
If you prefer a C compiler different from the one configure picks, you can set the environment
variable CC to the program of your choice. By default, configure will pick gcc if available, else the
platform’s default (usually cc). Similarly, you can override the default compiler flags if needed with
the CFLAGS variable.
You can specify environment variables on the configure command line, for example:
./configure CC=/opt/bin/gcc CFLAGS=’-O2 -pipe’
2. Build
To start the build, type
gmake
(Remember to use GNU make.) The build may take anywhere from 5 minutes to half an hour de-
pending on your hardware. The last line displayed should be
All of PostgreSQL is successfully made. Ready to install.
3. Regression Tests
If you want to test the newly built server before you install it, you can run the regression tests at this
point. The regression tests are a test suite to verify that PostgreSQL runs on your machine in the way
the developers expected it to. Type
gmake check
(This won’t work as root; do it as an unprivileged user.) Chapter 26 contains detailed information
about interpreting the test results. You can repeat this test at any later time by issuing the same
command.
4. Installing The Files
Note: If you are upgrading an existing system and are going to install the new files over the old
ones, be sure to back up your data and shut down the old server before proceeding, as explained
in Section 14.4 above.
To install PostgreSQL enter
gmake install
This will install files into the directories that were specified in step 1. Make sure that you have appro-
priate permissions to write into that area. Normally you need to do this step as root. Alternatively, you
could create the target directories in advance and arrange for appropriate permissions to be granted.
You can use gmake install-strip instead of gmake install to strip the executable files and
libraries as they are installed. This will save some space. If you built with debugging support, stripping
will effectively remove the debugging support, so it should only be done if debugging is no longer
needed. install-strip tries to do a reasonable job saving space, but it does not have perfect
knowledge of how to strip every unneeded byte from an executable file, so if you want to save all the
disk space you possibly can, you will have to do manual work.
The standard installation provides all the header files needed for client application development as
well as for server-side program development, such as custom functions or data types written in C.
(Prior to PostgreSQL 8.0, a separate gmake install-all-headers command was needed for the
latter, but this step has been folded into the standard install.)
Client-only installation: If you want to install only the client applications and interface libraries,
then you can use these commands:
gmake -C src/bin install
gmake -C src/include install
gmake -C src/interfaces install
gmake -C doc install
Registering eventlog on Windows: To register a Windows eventlog library with the operating system,
issue this command after installation:
regsvr32 pgsql_library_directory/pgevent.dll
On some systems you need to tell the operating system how to find the newly installed shared libraries.
The most widely usable method is to set the environment variable LD_LIBRARY_PATH: in Bourne shells
(sh, ksh, bash, zsh) use
LD_LIBRARY_PATH=/usr/local/pgsql/lib
export LD_LIBRARY_PATH
or in csh or tcsh
setenv LD_LIBRARY_PATH /usr/local/pgsql/lib
Replace /usr/local/pgsql/lib with whatever you set --libdir to in step 1. You should put these
commands into a shell start-up file such as /etc/profile or ~/.bash_profile. Some good informa-
tion about the caveats associated with this method can be found at http://www.visi.com/~barr/ldpath.html.
On some systems it might be preferable to set the environment variable LD_RUN_PATH before building.
On Cygwin, put the library directory in the PATH or move the .dll files into the bin directory.
If in doubt, refer to the manual pages of your system (perhaps ld.so or rld). If you later on get a message
like
psql: error in loading shared libraries
libpq.so.2.1: cannot open shared object file: No such file or directory
then this step was necessary, and you should take care of it then.
If you are on BSD/OS, Linux, or SunOS 4 and you have root access you can run
/sbin/ldconfig /usr/local/pgsql/lib
(or equivalent directory) after installation to enable the run-time linker to find the shared libraries faster.
Refer to the manual page of ldconfig for more information. On FreeBSD, NetBSD, and OpenBSD the
command is
/sbin/ldconfig -m /usr/local/pgsql/lib
If you installed into /usr/local/pgsql or some other location that is not searched for programs by default,
you should add /usr/local/pgsql/bin to your PATH. To do this, add the following to a shell start-up file:
PATH=/usr/local/pgsql/bin:$PATH
export PATH
To enable your system to find the man documentation, you need to add lines like the following to a shell
start-up file unless you installed into a location that is searched by default.
MANPATH=/usr/local/pgsql/man:$MANPATH
export MANPATH
The environment variables PGHOST and PGPORT specify to client applications the host and port of the
database server, overriding the compiled-in defaults. If you are going to run client applications remotely
then it is convenient if every user that plans to use the database sets PGHOST. This is not required, however:
the settings can be communicated via command line options to most client programs.
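For example, a user who always works against one particular remote server might add lines like these to
a shell start-up file (the host name is only a placeholder):
PGHOST=db.example.com
PGPORT=5432
export PGHOST PGPORT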
Note: If you are having problems with the installation on a supported platform, please write to
<[email protected]> or <[email protected]>, not to the people listed here.
Unsupported Platforms: The following platforms are either known not to work, or they used to work
in a fairly distant previous release. We include these here to let you know that these platforms could be
supported if given some attention.
Chapter 15. Client-Only Installation on
Windows
Although a complete PostgreSQL installation for Windows can only be built using MinGW or Cygwin,
the C client library (libpq) and the interactive terminal (psql) can be compiled using other Windows tool
sets. Makefiles are included in the source distribution for Microsoft Visual C++ and Borland C++. It
should be possible to compile the libraries manually for other configurations.
Tip: Using MinGW or Cygwin is preferred. If using one of those tool sets, see Chapter 14.
To build everything that you can on Windows using Microsoft Visual C++, change into the src directory
and type the command
nmake /f win32.mak
Among the files built is the dynamically linkable frontend library, interfaces\libpq\Release\libpq.dll.
The only file that really needs to be installed is the libpq.dll library. This file should in most cases
be placed in the WINNT\SYSTEM32 directory (or in WINDOWS\SYSTEM on a Windows 95/98/ME sys-
tem). If this file is installed using a setup program, it should be installed with version checking using the
VERSIONINFO resource included in the file, to ensure that a newer version of the library is not overwritten.
If you plan to do development using libpq on this machine, you will have to add the src\include and
src\interfaces\libpq subdirectories of the source tree to the include path in your compiler’s settings.
To use the library, you must add the libpqdll.lib file to your project. (In Visual C++, just right-click
on the project and choose to add it.)
Chapter 16. Server Run-time Environment
This chapter discusses how to set up and run the database server and its interactions with the operating
system.
To initialize a database cluster, use the command initdb, indicating the desired file system location with
the -D option, for example:
$ initdb -D /usr/local/pgsql/data
Note that you must execute this command while logged into the PostgreSQL user account, which is
described in the previous section.
Tip: As an alternative to the -D option, you can set the environment variable PGDATA.
initdb will attempt to create the directory you specify if it does not already exist. It is likely that it will
not have the permission to do so (if you followed our advice and created an unprivileged account). In that
case you should create the directory yourself (as root) and change the owner to be the PostgreSQL user.
Here is how this might be done:
root# mkdir /usr/local/pgsql/data
root# chown postgres /usr/local/pgsql/data
root# su postgres
postgres$ initdb -D /usr/local/pgsql/data
initdb will refuse to run if the data directory looks like it has already been initialized.
Because the data directory contains all the data stored in the database, it is essential that it be secured from
unauthorized access. initdb therefore revokes access permissions from everyone but the PostgreSQL
user.
However, while the directory contents are secure, the default client authentication setup allows any lo-
cal user to connect to the database and even become the database superuser. If you do not trust other
local users, we recommend you use one of initdb’s -W, --pwprompt or --pwfile options to assign
a password to the database superuser. Also, specify -A md5 or -A password so that the default trust
authentication mode is not used; or modify the generated pg_hba.conf file after running initdb, before
you start the server for the first time. (Other reasonable approaches include using ident authentication or
file system permissions to restrict connections. See Chapter 19 for more information.)
initdb also initializes the default locale for the database cluster. Normally, it will just take the locale
settings in the environment and apply them to the initialized database. It is possible to specify a different
locale for the database; more information about that can be found in Section 20.1. The sort order used
within a particular database cluster is set by initdb and cannot be changed later, short of dumping all
data, rerunning initdb, and reloading the data. There is also a performance impact for using locales other
than C or POSIX. Therefore, it is important to make this choice correctly the first time.
initdb also sets the default character set encoding for the database cluster. Normally this should be
chosen to match the locale setting. For details see Section 20.2.
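Putting the advice of the preceding paragraphs together, an initdb invocation that assigns a superuser
password, avoids trust authentication, and pins down the locale and encoding explicitly might look like
this (the locale and encoding shown are merely an example pairing):
initdb -D /usr/local/pgsql/data -A md5 -W --locale=en_US --encoding=LATIN1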
The database server program is called postmaster. It must know where to find the data directory, which
is indicated with the -D option. Thus, the simplest way to start the server is:
$ postmaster -D /usr/local/pgsql/data
which will leave the server running in the foreground. This must be done while logged into the PostgreSQL
user account. Without -D, the server will try to use the data directory named by the environment variable
PGDATA. If that variable is not provided either, it will fail.
Normally it is better to start the postmaster in the background. For this, use the usual shell syntax:
$ postmaster -D /usr/local/pgsql/data >logfile 2>&1 &
It is important to store the server’s stdout and stderr output somewhere, as shown above. It will help for
auditing purposes and to diagnose problems. (See Section 21.3 for a more thorough discussion of log file
handling.)
The postmaster also takes a number of other command line options. For more information, see the
postmaster reference page and Section 16.4 below.
This shell syntax can get tedious quickly. Therefore the wrapper program pg_ctl is provided to simplify
some tasks. For example:
pg_ctl start -l logfile
will start the server in the background and put the output into the named log file. The -D option has the
same meaning here as in the postmaster. pg_ctl is also capable of stopping the server.
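For instance, a later shutdown might be done with a command such as the following; the -m fast mode
makes the server roll back active transactions rather than wait for sessions to finish:
pg_ctl stop -D /usr/local/pgsql/data -m fast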
Normally, you will want to start the database server when the computer boots. Autostart
scripts are operating-system-specific. There are a few distributed with PostgreSQL in the
contrib/start-scripts directory. Installing one will require root privileges.
Different systems have different conventions for starting up daemons at boot time. Many systems have
a file /etc/rc.local or /etc/rc.d/rc.local. Others use rc.d directories. Whatever you do, the
server must be run by the PostgreSQL user account and not by root or any other user. Therefore you
probably should form your commands using su -c ’...’ postgres. For example:
su -c ’pg_ctl start -D /usr/local/pgsql/data -l serverlog’ postgres
Here are a few more operating-system-specific suggestions. (In each case be sure to use the proper instal-
lation directory and user name where we show generic values.)
• For FreeBSD, look at the file contrib/start-scripts/freebsd in the PostgreSQL source distri-
bution.
• On OpenBSD, add the following lines to the file /etc/rc.local:
if [ -x /usr/local/pgsql/bin/pg_ctl -a -x /usr/local/pgsql/bin/postmaster ]; then
su - -c ’/usr/local/pgsql/bin/pg_ctl start -l /var/postgresql/log -s’ postgres
echo -n ’ postgresql’
fi
While the postmaster is running, its PID is stored in the file postmaster.pid in the data directory.
This is used to prevent multiple postmaster processes running in the same data directory and can also
be used for shutting down the postmaster process.
There are several common reasons the server might fail to start. Check the server’s log output for messages
like the following:
LOG: could not bind IPv4 socket: Address already in use
HINT: Is another postmaster already running on port 5432? If not, wait a few seconds and retry.
FATAL: could not create TCP/IP listen socket
This usually means just what it suggests: you tried to start another postmaster on the same port where
one is already running. However, if the kernel error message is not Address already in use or some
variant of that, there may be a different problem. For example, trying to start a postmaster on a reserved
port number may draw something like:
$ postmaster -p 666
LOG: could not bind IPv4 socket: Permission denied
HINT: Is another postmaster already running on port 666? If not, wait a few seconds and retry.
FATAL: could not create TCP/IP listen socket
A message like
FATAL: could not create shared memory segment: Invalid argument
DETAIL: Failed system call was shmget(key=5440001, size=4011376640, 03600).
probably means your kernel’s limit on the size of shared memory is smaller than the work area PostgreSQL
is trying to create (4011376640 bytes in this example). Or it could mean that you do not have System-V-
style shared memory support configured into your kernel at all. As a temporary workaround, you can try
starting the server with a smaller-than-normal number of buffers (-B switch). You will eventually want to
reconfigure your kernel to increase the allowed shared memory size. You may also see this message when
trying to start multiple servers on the same machine, if their total space requested exceeds the kernel limit.
An error like
FATAL: could not create semaphores: No space left on device
DETAIL: Failed system call was semget(5440126, 17, 03600).
does not mean you’ve run out of disk space. It means your kernel’s limit on the number of System V
semaphores is smaller than the number PostgreSQL wants to create. As above, you may be able to work
around the problem by starting the server with a reduced number of allowed connections (-N switch), but
you’ll eventually want to increase the kernel limit.
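As a sketch of the temporary workarounds just mentioned, the server could be started with reduced
resource requests, for example:
postmaster -D /usr/local/pgsql/data -N 16 -B 32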
If you get an “illegal system call” error, it is likely that shared memory or semaphores are not supported
in your kernel at all. In that case your only option is to reconfigure the kernel to enable these features.
Details about configuring System V IPC facilities are given in Section 16.5.1.
A failed client connection attempt may report something like
psql: could not connect to server: Connection refused
Is the server running on host "server.joe.com" and accepting
TCP/IP connections on port 5432?
This is the generic “I couldn’t find a server to talk to” failure. It looks like the above when TCP/IP
communication is attempted. A common mistake is to forget to configure the server to allow TCP/IP
connections.
Alternatively, you’ll get this when attempting Unix-domain socket communication to a local server:
psql: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/tmp/.s.PGSQL.5432"?
The last line is useful in verifying that the client is trying to connect to the right place. If there is in fact
no server running there, the kernel error message will typically be either Connection refused or No
such file or directory, as illustrated. (It is important to realize that Connection refused in this
context does not mean that the server got your connection request and rejected it. That case will produce
a different message, as shown in Section 19.3.) Other error messages such as Connection timed out
may indicate more fundamental problems, like lack of network connectivity.
One way to set these configuration parameters is to edit the file postgresql.conf, which is normally
kept in the data directory. An example of what this file might look like is:
# This is a comment
log_connections = yes
log_destination = ’syslog’
search_path = ’$user, public’
One parameter is specified per line. The equal sign between name and value is optional. Whitespace is
insignificant and blank lines are ignored. Hash marks (#) introduce comments anywhere. Parameter values
that are not simple identifiers or numbers must be single-quoted.
The configuration file is reread whenever the postmaster process receives a SIGHUP signal (which
is most easily sent by means of pg_ctl reload). The postmaster also propagates this signal to all
currently running server processes so that existing sessions also get the new value. Alternatively, you can
send the signal to a single server process directly. Some parameters can only be set at server start; any
changes to their entries in the configuration file will be ignored until the server is restarted.
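For example, either of the following makes a running server re-read postgresql.conf; the second form
uses the process ID recorded in postmaster.pid, described earlier:
pg_ctl reload -D /usr/local/pgsql/data
kill -HUP `head -1 /usr/local/pgsql/data/postmaster.pid`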
A second way to set these configuration parameters is to give them as a command line option to the
postmaster, such as:
postmaster -c log_connections=yes -c log_destination=’syslog’
Command-line options override any conflicting settings in postgresql.conf. Note that this means you
won’t be able to change the value on-the-fly by editing postgresql.conf, so while the command-line
method may be convenient, it can cost you flexibility later.
Occasionally it is useful to give a command line option to one particular session only. The environment
variable PGOPTIONS can be used for this purpose on the client side:
env PGOPTIONS=’-c geqo=off’ psql
(This works for any libpq-based client application, not just psql.) Note that this won’t work for parameters
that are fixed when the server is started or that must be specified in postgresql.conf.
Furthermore, it is possible to assign a set of option settings to a user or a database. Whenever a ses-
sion is started, the default settings for the user and database involved are loaded. The commands ALTER
USER and ALTER DATABASE, respectively, are used to configure these settings. Per-database settings
override anything received from the postmaster command-line or the configuration file, and in turn are
overridden by per-user settings; both are overridden by per-session options.
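A brief illustration, using invented database and user names (the same statements could be entered
interactively in psql):
psql -c "ALTER DATABASE mydb SET enable_seqscan TO off"
psql -c "ALTER USER joe SET work_mem TO 2048"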
Some parameters can be changed in individual SQL sessions with the SET command, for example:
SET ENABLE_SEQSCAN TO OFF;
If SET is allowed, it overrides all other sources of values for the parameter. Some parameters cannot
be changed via SET: for example, if they control behavior that cannot reasonably be changed without
restarting PostgreSQL. Also, some parameters can be modified via SET or ALTER by superusers, but not
by ordinary users.
The SHOW command allows inspection of the current values of all parameters.
The virtual table pg_settings (described in Section 41.35) also allows displaying and updating session
run-time parameters. It is equivalent to SHOW and SET, but can be more convenient to use because it can
be joined with other tables, or selected from using any desired selection condition.
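For example, to inspect all GEQO-related settings in one query, something like this could be used (any
selection condition works, which is the advantage over plain SHOW):
psql -c "SELECT name, setting FROM pg_settings WHERE name LIKE ’geqo%’"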
data_directory (string)
Specifies the directory to use for data storage. This option can only be set at server start.
config_file (string)
Specifies the main server configuration file (customarily called postgresql.conf). This option can
only be set on the postmaster command line.
hba_file (string)
Specifies the configuration file for host-based authentication (customarily called pg_hba.conf).
This option can only be set at server start.
ident_file (string)
Specifies the configuration file for ident authentication (customarily called pg_ident.conf). This
option can only be set at server start.
external_pid_file (string)
Specifies the name of an additional process-id (PID) file that the postmaster should create for use by
server administration programs. This option can only be set at server start.
In a default installation, none of the above options are set explicitly. Instead, the data directory is specified
by the -D command-line option or the PGDATA environment variable, and the configuration files are all
found within the data directory.
If you wish to keep the configuration files elsewhere than the data directory, the postmaster’s -D command-
line option or PGDATA environment variable must point to the directory containing the configuration files,
and the data_directory option must be set in postgresql.conf (or on the command line) to show
where the data directory is actually located. Notice that data_directory overrides -D and PGDATA for
the location of the data directory, but not for the location of the configuration files.
If you wish, you can specify the configuration file names and locations individually using the
options config_file, hba_file and/or ident_file. config_file can only be specified on the
postmaster command line, but the others can be set within the main configuration file. If all three
options plus data_directory are explicitly set, then it is not necessary to specify -D or PGDATA.
When setting any of these options, a relative path will be interpreted with respect to the directory in which
the postmaster is started.
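As a sketch of such an arrangement (the paths are invented for illustration), the configuration could live
in /etc/postgresql while the data is kept elsewhere:
postmaster -D /etc/postgresql
# where /etc/postgresql/postgresql.conf contains a line such as:
# data_directory = ’/var/lib/pgsql/data’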
listen_addresses (string)
Specifies the TCP/IP address(es) on which the server is to listen for connections from client applica-
tions. The value takes the form of a comma-separated list of host names and/or numeric IP addresses.
The special entry * corresponds to all available IP interfaces. If the list is empty, the server does not
listen on any IP interface at all, in which case only Unix-domain sockets can be used to connect to
it. The default value is localhost, which allows only local “loopback” connections to be made. This
parameter can only be set at server start.
port (integer)
The TCP port the server listens on; 5432 by default. Note that the same port number is used for all
IP addresses the server listens on. This parameter can only be set at server start.
max_connections (integer)
Determines the maximum number of concurrent connections to the database server. The default is
typically 100, but may be less if your kernel settings will not support it (as determined during initdb).
This parameter can only be set at server start.
Increasing this parameter may cause PostgreSQL to request more System V shared memory or
semaphores than your operating system’s default configuration allows. See Section 16.5.1 for in-
formation on how to adjust those parameters, if necessary.
superuser_reserved_connections (integer)
Determines the number of connection “slots” that are reserved for connections by PostgreSQL
superusers. At most max_connections connections can ever be active simultaneously.
Whenever the number of active concurrent connections is at least max_connections minus
superuser_reserved_connections, new connections will be accepted only for superusers.
The default value is 2. The value must be less than the value of max_connections. This parameter
can only be set at server start.
unix_socket_directory (string)
Specifies the directory of the Unix-domain socket on which the server is to listen for connections
from client applications. The default is normally /tmp, but can be changed at build time. This pa-
rameter can only be set at server start.
unix_socket_group (string)
Sets the owning group of the Unix-domain socket. (The owning user of the socket is always the user
that starts the server.) In combination with the option unix_socket_permissions this can be used
as an additional access control mechanism for Unix-domain connections. By default this is the empty
string, which uses the default group for the current user. This option can only be set at server start.
unix_socket_permissions (integer)
Sets the access permissions of the Unix-domain socket. Unix-domain sockets use the usual Unix file
system permission set. The option value is expected to be a numeric mode specification in the form
accepted by the chmod and umask system calls. (To use the customary octal format the number must
start with a 0 (zero).)
The default permissions are 0777, meaning anyone can connect. Reasonable alternatives are 0770
(only user and group, see also unix_socket_group) and 0700 (only user). (Note that for a Unix-
domain socket, only write permission matters and so there is no point in setting or revoking read or
execute permissions.)
This access control mechanism is independent of the one described in Chapter 19.
This option can only be set at server start.
rendezvous_name (string)
Specifies the Rendezvous broadcast name. By default, the computer name is used, specified as an
empty string (’’). This option is ignored if the server was not compiled with Rendezvous support. This
option can only be set at server start.
authentication_timeout (integer)
Maximum time to complete client authentication, in seconds. If a would-be client has not completed
the authentication protocol in this much time, the server breaks the connection. This prevents hung
clients from occupying a connection indefinitely. This option can only be set at server start or in the
postgresql.conf file. The default is 60.
ssl (boolean)
Enables SSL connections. Please read Section 16.7 before using this. The default is off. This param-
eter can only be set at server start.
password_encryption (boolean)
When a password is specified in CREATE USER or ALTER USER without writing either ENCRYPTED
or UNENCRYPTED, this option determines whether the password is to be encrypted. The default is on
(encrypt the password).
krb_server_keyfile (string)
Sets the location of the Kerberos server key file. See Section 19.2.3 for details.
db_user_namespace (boolean)
This allows per-database user names. It is off by default.
Note: This feature is intended as a temporary measure until a complete solution is found. At that
time, this option will be removed.
16.4.3.1. Memory
shared_buffers (integer)
Sets the number of shared memory buffers used by the database server. The default is typically
1000, but may be less if your kernel settings will not support it (as determined during initdb). Each
buffer is 8192 bytes, unless a different value of BLCKSZ was chosen when building the server. This
setting must be at least 16, as well as at least twice the value of max_connections; however, settings
significantly higher than the minimum are usually needed for good performance. Values of a few
thousand are recommended for production installations. This option can only be set at server start.
Increasing this parameter may cause PostgreSQL to request more System V shared memory than
your operating system’s default configuration allows. See Section 16.5.1 for information on how to
adjust those parameters, if necessary.
work_mem (integer)
Specifies the amount of memory to be used by internal sort operations and hash tables before switch-
ing to temporary disk files. The value is specified in kilobytes, and defaults to 1024 kilobytes (1 MB).
Note that for a complex query, several sort or hash operations might be running in parallel; each one
will be allowed to use as much memory as this value specifies before it starts to put data into tem-
porary files. Also, several running sessions could be doing such operations concurrently. So the total
memory used could be many times the value of work_mem; it is necessary to keep this fact in mind
when choosing the value. Sort operations are used for ORDER BY, DISTINCT, and merge joins. Hash
tables are used in hash joins, hash-based aggregation, and hash-based processing of IN subqueries.
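As a rough illustration of the preceding paragraph: with work_mem set to 10240 (10 MB), a single query
running three sorts or hashes concurrently could use up to about 30 MB, and ten such sessions at once
could use roughly 300 MB in total.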
maintenance_work_mem (integer)
Specifies the maximum amount of memory to be used in maintenance operations, such as VACUUM,
CREATE INDEX, and ALTER TABLE ADD FOREIGN KEY. The value is specified in kilobytes, and
defaults to 16384 kilobytes (16 MB). Since only one of these operations can be executed at a time
by a database session, and an installation normally doesn’t have very many of them happening con-
currently, it’s safe to set this value significantly larger than work_mem. Larger settings may improve
performance for vacuuming and for restoring database dumps.
max_stack_depth (integer)
Specifies the maximum safe depth of the server’s execution stack. The ideal setting for this parameter
is the actual stack size limit enforced by the kernel (as set by ulimit -s or local equivalent),
less a safety margin of a megabyte or so. The safety margin is needed because the stack depth is
not checked in every routine in the server, but only in key potentially-recursive routines such as
expression evaluation. Setting the parameter higher than the actual kernel limit will mean that a
runaway recursive function can crash an individual backend process. The default setting is 2048 KB
(two megabytes), which is conservatively small and unlikely to risk crashes. However, it may be too
small to allow execution of complex functions.
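For example, if ulimit -s reports a kernel stack limit of 8192 kilobytes, a max_stack_depth of about
7168 leaves the suggested margin of roughly one megabyte; these particular numbers are only an
illustration.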
max_fsm_pages (integer)
Sets the maximum number of disk pages for which free space will be tracked in the shared free-space
map. Six bytes of shared memory are consumed for each page slot. This setting must be more than
16 * max_fsm_relations. The default is 20000. This option can only be set at server start.
max_fsm_relations (integer)
Sets the maximum number of relations (tables and indexes) for which free space will be tracked in
the shared free-space map. Roughly fifty bytes of shared memory are consumed for each slot. The
default is 1000. This option can only be set at server start.
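As a rough illustration, the default settings consume about 20000 × 6 bytes plus 1000 × 50 bytes, that is,
roughly 170 kB of shared memory for the free-space map.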
max_files_per_process (integer)
Sets the maximum number of simultaneously open files allowed to each server subprocess. The de-
fault is 1000. If the kernel is enforcing a safe per-process limit, you don’t need to worry about this
setting. But on some platforms (notably, most BSD systems), the kernel will allow individual pro-
cesses to open many more files than the system can really support when a large number of processes
all try to open that many files. If you find yourself seeing “Too many open files” failures, try reducing
this setting. This option can only be set at server start.
preload_libraries (string)
This variable specifies one or more shared libraries that are to be preloaded at server start.
A parameterless initialization function can optionally be called for each library. To specify that,
add a colon and the name of the initialization function after the library name. For example
’$libdir/mylib:mylib_init’ would cause mylib to be preloaded and mylib_init to be
executed. If more than one library is to be loaded, separate their names with commas.
If a specified library or initialization function is not found, the server will fail to start.
PostgreSQL procedural language libraries may be preloaded in this way, typically by using the syntax
’$libdir/plXXX:plXXX_init’ where XXX is pgsql, perl, tcl, or python.
By preloading a shared library (and initializing it if applicable), the library startup time is avoided
when the library is first used. However, the time to start each new server process may increase slightly,
even if that process never uses the library. So this option is recommended only for libraries that will
be used in most sessions.
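For instance, following the syntax just described, adding the line
preload_libraries = ’$libdir/plpgsql:plpgsql_init’ to postgresql.conf would preload the
PL/pgSQL handler at server start.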
vacuum_cost_delay (integer)
The length of time, in milliseconds, that the process will sleep when the cost limit has been exceeded.
The default value is 0, which disables the cost-based vacuum delay feature. Positive values enable
cost-based vacuuming. Note that on many systems, the effective resolution of sleep delays is 10
milliseconds; setting vacuum_cost_delay to a value that is not a multiple of 10 may have the same
results as setting it to the next higher multiple of 10.
vacuum_cost_page_hit (integer)
The estimated cost for vacuuming a buffer found in the shared buffer cache. It represents the cost to
lock the buffer pool, lookup the shared hash table and scan the content of the page. The default value
is 1.
vacuum_cost_page_miss (integer)
The estimated cost for vacuuming a buffer that has to be read from disk. This represents the effort to
lock the buffer pool, lookup the shared hash table, read the desired block in from the disk and scan
its content. The default value is 10.
vacuum_cost_page_dirty (integer)
The estimated cost charged when vacuum modifies a block that was previously clean. It represents
the extra I/O required to flush the dirty block out to disk again. The default value is 20.
vacuum_cost_limit (integer)
The accumulated cost that will cause the vacuuming process to sleep. The default value is 200.
Note: There are certain operations that hold critical locks and should therefore complete as quickly as
possible. Cost-based vacuum delays do not occur during such operations. Therefore it is possible that
the cost accumulates far higher than the specified limit. To avoid uselessly long delays in such cases,
the actual delay is calculated as vacuum_cost_delay * accumulated_balance / vacuum_cost_limit
with a maximum of vacuum_cost_delay * 4.
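To illustrate the formula in the note above: with vacuum_cost_delay set to 10 and vacuum_cost_limit at
its default of 200, an accumulated balance of 500 would yield a delay of 10 × 500 / 200 = 25 milliseconds,
and the delay can in no case exceed 10 × 4 = 40 milliseconds.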
bgwriter_delay (integer)
Specifies the delay between activity rounds for the background writer. In each round the writer issues
writes for some number of dirty buffers (controllable by the following parameters). The selected
buffers will always be the least recently used ones among the currently dirty buffers. It then sleeps for
bgwriter_delay milliseconds, and repeats. The default value is 200. Note that on many systems,
the effective resolution of sleep delays is 10 milliseconds; setting bgwriter_delay to a value that
is not a multiple of 10 may have the same results as setting it to the next higher multiple of 10. This
option can only be set at server start or in the postgresql.conf file.
bgwriter_percent (integer)
In each round, no more than this percentage of the currently dirty buffers will be written (rounding
up any fraction to the next whole number of buffers). The default value is 1. This option can only be
set at server start or in the postgresql.conf file.
bgwriter_maxpages (integer)
In each round, no more than this many dirty buffers will be written. The default value is 100. This
option can only be set at server start or in the postgresql.conf file.
Smaller values of bgwriter_percent and bgwriter_maxpages reduce the extra I/O load caused by
the background writer, but leave more work to be done at checkpoint time. To reduce load spikes at
checkpoints, increase the values. To disable background writing entirely, set bgwriter_percent and/or
bgwriter_maxpages to zero.
16.4.4.1. Settings
fsync (boolean)
If this option is on, the PostgreSQL server will use the fsync() system call in several places to
make sure that updates are physically written to disk. This ensures that a database cluster will recover
to a consistent state after an operating system or hardware crash.
However, using fsync() results in a performance penalty: when a transaction is committed, Post-
greSQL must wait for the operating system to flush the write-ahead log to disk. When fsync is
disabled, the operating system is allowed to do its best in buffering, ordering, and delaying writes.
This can result in significantly improved performance. However, if the system crashes, the results of
the last few committed transactions may be lost in part or whole. In the worst case, unrecoverable
data corruption may occur. (Crashes of the database server itself are not a risk factor here. Only an
operating-system-level crash creates a risk of corruption.)
Due to the risks involved, there is no universally correct setting for fsync. Some administrators
always disable fsync, while others only turn it off for bulk loads, where there is a clear restart point
if something goes wrong, whereas some administrators always leave fsync enabled. The default is
to enable fsync, for maximum reliability. If you trust your operating system, your hardware, and
your utility company (or your battery backup), you can consider disabling fsync.
This option can only be set at server start or in the postgresql.conf file.
wal_sync_method (string)
Method used for forcing WAL updates out to disk. Possible values are fsync (call fsync() at
each commit), fdatasync (call fdatasync() at each commit), open_sync (write WAL files with
open() option O_SYNC), and open_datasync (write WAL files with open() option O_DSYNC).
Not all of these choices are available on all platforms. If fsync is off then this setting is irrelevant.
This option can only be set at server start or in the postgresql.conf file.
wal_buffers (integer)
Number of disk-page buffers allocated in shared memory for WAL data. The default is 8. The setting
need only be large enough to hold the amount of WAL data generated by one typical transaction.
This option can only be set at server start.
commit_delay (integer)
Time delay between writing a commit record to the WAL buffer and flushing the buffer out to disk,
in microseconds. A nonzero delay can allow multiple transactions to be committed with only one
fsync() system call, if system load is high enough that additional transactions become ready to
commit within the given interval. But the delay is just wasted if no other transactions become ready
to commit. Therefore, the delay is only performed if at least commit_siblings other transactions
are active at the instant that a server process has written its commit record. The default is zero (no
delay).
commit_siblings (integer)
Minimum number of concurrent open transactions to require before performing the commit_delay
delay. A larger value makes it more probable that at least one other transaction will become ready to
commit during the delay interval. The default is five.
16.4.4.2. Checkpoints
checkpoint_segments (integer)
Maximum distance between automatic WAL checkpoints, in log file segments (each segment is
normally 16 megabytes). The default is three. This option can only be set at server start or in the
postgresql.conf file.
checkpoint_timeout (integer)
Maximum time between automatic WAL checkpoints, in seconds. The default is 300 seconds. This
option can only be set at server start or in the postgresql.conf file.
checkpoint_warning (integer)
Write a message to the server log if checkpoints caused by the filling of checkpoint segment files
happen closer together than this many seconds. The default is 30 seconds. Zero turns off the warning.
16.4.4.3. Archiving
archive_command (string)
The shell command to execute to archive a completed segment of the WAL file series. If this is an
empty string (the default), WAL archiving is disabled. Any %p in the string is replaced by the absolute
path of the file to archive, and any %f is replaced by the file name only. Use %% to embed an actual
% character in the command. For more information see Section 22.3.1. This option can only be set at
server start or in the postgresql.conf file.
It is important for the command to return a zero exit status if and only if it succeeds. Examples:
archive_command = ’cp "%p" /mnt/server/archivedir/"%f"’
archive_command = ’copy "%p" /mnt/server/archivedir/"%f"’ # Windows
enable_hashagg (boolean)
Enables or disables the query planner’s use of hashed aggregation plan types. The default is on.
enable_hashjoin (boolean)
Enables or disables the query planner’s use of hash-join plan types. The default is on.
enable_indexscan (boolean)
Enables or disables the query planner’s use of index-scan plan types. The default is on.
enable_mergejoin (boolean)
Enables or disables the query planner’s use of merge-join plan types. The default is on.
enable_nestloop (boolean)
Enables or disables the query planner’s use of nested-loop join plans. It’s not possible to suppress
nested-loop joins entirely, but turning this variable off discourages the planner from using one if there
are other methods available. The default is on.
enable_seqscan (boolean)
Enables or disables the query planner’s use of sequential scan plan types. It’s not possible to suppress
sequential scans entirely, but turning this variable off discourages the planner from using one if there
are other methods available. The default is on.
enable_sort (boolean)
Enables or disables the query planner’s use of explicit sort steps. It’s not possible to suppress explicit
sorts entirely, but turning this variable off discourages the planner from using one if there are other
methods available. The default is on.
enable_tidscan (boolean)
Enables or disables the query planner’s use of TID scan plan types. The default is on.
Note: Unfortunately, there is no well-defined method for determining ideal values for the family of
“cost” variables that appear below. You are encouraged to experiment and share your findings.
effective_cache_size (floating point)
Sets the planner’s assumption about the effective size of the disk cache that is available to a single
index scan. This is factored into estimates of the cost of using an index; a higher value makes it
more likely index scans will be used, a lower value makes it more likely sequential scans will be
used. When setting this parameter you should consider both PostgreSQL’s shared buffers and the
portion of the kernel’s disk cache that will be used for PostgreSQL data files. Also, take into account
the expected number of concurrent queries using different indexes, since they will have to share the
available space. This parameter has no effect on the size of shared memory allocated by PostgreSQL,
nor does it reserve kernel disk cache; it is used only for estimation purposes. The value is measured
in disk pages, which are normally 8192 bytes each. The default is 1000.
random_page_cost (floating point)
Sets the planner’s estimate of the cost of a nonsequentially fetched disk page. This is measured as a
multiple of the cost of a sequential page fetch. A higher value makes it more likely a sequential scan
will be used, a lower value makes it more likely an index scan will be used. The default is four.
cpu_tuple_cost (floating point)
Sets the planner’s estimate of the cost of processing each row during a query. This is measured as a
fraction of the cost of a sequential page fetch. The default is 0.01.
cpu_index_tuple_cost (floating point)
Sets the planner’s estimate of the cost of processing each index row during an index scan. This is
measured as a fraction of the cost of a sequential page fetch. The default is 0.001.
cpu_operator_cost (floating point)
Sets the planner’s estimate of the cost of processing each operator in a WHERE clause. This is mea-
sured as a fraction of the cost of a sequential page fetch. The default is 0.0025.
geqo (boolean)
Enables or disables genetic query optimization, which is an algorithm that attempts to do query plan-
ning without exhaustive searching. This is on by default. The geqo_threshold variable provides a
more granular way to disable GEQO for certain classes of queries.
geqo_threshold (integer)
Use genetic query optimization to plan queries with at least this many FROM items involved. (Note
that an outer JOIN construct counts as only one FROM item.) The default is 12. For simpler queries
it is usually best to use the deterministic, exhaustive planner, but for queries with many tables the
deterministic planner takes too long.
geqo_effort (integer)
Controls the trade off between planning time and query plan efficiency in GEQO. This variable must
be an integer in the range from 1 to 10. The default value is 5. Larger values increase the time spent
doing query planning, but also increase the likelihood that an efficient query plan will be chosen.
geqo_effort doesn’t actually do anything directly; it is only used to compute the default values for
the other variables that influence GEQO behavior (described below). If you prefer, you can set the
other parameters by hand instead.
geqo_pool_size (integer)
Controls the pool size used by GEQO. The pool size is the number of individuals in the genetic
population. It must be at least two, and useful values are typically 100 to 1000. If it is set to zero (the
default setting) then a suitable default is chosen based on geqo_effort and the number of tables in
the query.
geqo_generations (integer)
Controls the number of generations used by GEQO. Generations specifies the number of iterations
of the algorithm. It must be at least one, and useful values are in the same range as the pool size. If it
is set to zero (the default setting) then a suitable default is chosen based on geqo_pool_size.
geqo_selection_bias (floating point)
Controls the selection bias used by GEQO. The selection bias is the selective pressure within the
population. Values can be from 1.50 to 2.00; the latter is the default.
default_statistics_target (integer)
Sets the default statistics target for table columns that have not had a column-specific target set via
ALTER TABLE SET STATISTICS. Larger values increase the time needed to do ANALYZE, but may
improve the quality of the planner’s estimates. The default is 10. For more information on the use of
statistics by the PostgreSQL query planner, refer to Section 13.2.
from_collapse_limit (integer)
The planner will merge sub-queries into upper queries if the resulting FROM list would have no more
than this many items. Smaller values reduce planning time but may yield inferior query plans. The
default is 8. It is usually wise to keep this less than geqo_threshold.
join_collapse_limit (integer)
The planner will rewrite explicit inner JOIN constructs into lists of FROM items whenever a list of
no more than this many items in total would result. Prior to PostgreSQL 7.4, joins specified via the
JOIN construct would never be reordered by the query planner. The query planner has subsequently
been improved so that inner joins written in this form can be reordered; this configuration parameter
controls the extent to which this reordering is performed.
Note: At present, the order of outer joins specified via the JOIN construct is never adjusted by
the query planner; therefore, join_collapse_limit has no effect on this behavior. The planner
may be improved to reorder some classes of outer joins in a future release of PostgreSQL.
By default, this variable is set the same as from_collapse_limit, which is appropriate for most
uses. Setting it to 1 prevents any reordering of inner JOINs. Thus, the explicit join order specified
in the query will be the actual order in which the relations are joined. The query planner does not
always choose the optimal join order; advanced users may elect to temporarily set this variable to 1,
and then specify the join order they desire explicitly. Another consequence of setting this variable
to 1 is that the query planner will behave more like the PostgreSQL 7.3 query planner, which some
users might find useful for backward compatibility reasons.
Setting this variable to a value between 1 and from_collapse_limit might be useful to trade off
planning time against the quality of the chosen plan (higher values produce better plans).
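For illustration, the following sketch (the tables a, b, and c are placeholders) forces the planner to
honor the join order exactly as written for the current session:
SET join_collapse_limit = 1;
-- with the limit at 1, a is joined to b first, and that result is then joined to c
SELECT *
FROM a
JOIN b ON a.id = b.a_id
JOIN c ON b.id = c.b_id;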
log_destination (string)
PostgreSQL supports several methods for logging server messages, including stderr and syslog. On
Windows, eventlog is also supported. Set this option to a list of desired log destinations separated
by commas. The default is to log to stderr only. This option can only be set at server start or in the
postgresql.conf configuration file.
redirect_stderr (boolean)
This option allows messages sent to stderr to be captured and redirected into log files. This option, in
combination with logging to stderr, is often more useful than logging to syslog, since some types of
messages may not appear in syslog output (a common example is dynamic-linker failure messages).
This option can only be set at server start.
log_directory (string)
When redirect_stderr is enabled, this option determines the directory in which log files will be
created. It may be specified as an absolute path, or relative to the cluster data directory. This option
can only be set at server start or in the postgresql.conf configuration file.
log_filename (string)
When redirect_stderr is enabled, this option sets the file names of the created log files.
The value is treated as a strftime pattern, so %-escapes can be used to specify time-varying file
names. If no %-escapes are present, PostgreSQL will append the epoch of the new log file’s open
time. For example, if log_filename were server_log, then the chosen file name would be
server_log.1093827753 for a log starting at Sun Aug 29 19:02:33 2004 MST. This option can
only be set at server start or in the postgresql.conf configuration file.
log_rotation_age (integer)
When redirect_stderr is enabled, this option determines the maximum lifetime of an individ-
ual log file. After this many minutes have elapsed, a new log file will be created. Set to zero to
disable time-based creation of new log files. This option can only be set at server start or in the
postgresql.conf configuration file.
log_rotation_size (integer)
When redirect_stderr is enabled, this option determines the maximum size of an individual log
file. After this many kilobytes have been emitted into a log file, a new log file will be created. Set to
zero to disable size-based creation of new log files. This option can only be set at server start or in
the postgresql.conf configuration file.
log_truncate_on_rotation (boolean)
When redirect_stderr is enabled, this option will cause PostgreSQL to truncate (overwrite),
rather than append to, any existing log file of the same name. However, truncation will occur only
when a new file is being opened due to time-based rotation, not during server startup or size-based
rotation. When false, pre-existing files will be appended to in all cases. For example, using this
option in combination with a log_filename like postgresql-%H.log would result in generating
twenty-four hourly log files and then cyclically overwriting them. This option can only be set at
server start or in the postgresql.conf configuration file.
Example: To keep 7 days of logs, one log file per day named server_log.Mon, server_log.Tue,
etc, and automatically overwrite last week’s log with this week’s log, set log_filename to
server_log.%a, log_truncate_on_rotation to true, and log_rotation_age to 1440.
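Expressed as postgresql.conf entries, the seven-day arrangement just described might look like the
following sketch:
redirect_stderr = true
log_filename = 'server_log.%a'        # one file per weekday, e.g. server_log.Mon
log_truncate_on_rotation = true
log_rotation_age = 1440               # minutes, i.e. one day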
Example: To keep 24 hours of logs, one log file per hour, but also rotate sooner if the log file size
exceeds 1GB, set log_filename to server_log.%H%M, log_truncate_on_rotation to
true, log_rotation_age to 60, and log_rotation_size to 1000000. Including %M in
log_filename allows any size-driven rotations that may occur to select a filename different from
the hour’s initial filename.
syslog_facility (string)
When logging to syslog is enabled, this option determines the syslog “facility” to be used. You may
choose from LOCAL0, LOCAL1, LOCAL2, LOCAL3, LOCAL4, LOCAL5, LOCAL6, LOCAL7; the default
is LOCAL0. See also the documentation of your system’s syslog daemon. This option can only be set
at server start.
syslog_ident (string)
When logging to syslog is enabled, this option determines the program name used to identify Post-
greSQL messages in syslog logs. The default is postgres. This option can only be set at server
start.
client_min_messages (string)
Controls which message levels are sent to the client. Valid values are DEBUG5, DEBUG4, DEBUG3,
DEBUG2, DEBUG1, LOG, NOTICE, WARNING, and ERROR. Each level includes all the levels that follow
it. The later the level, the fewer messages are sent. The default is NOTICE. Note that LOG has a
different rank here than in log_min_messages.
log_min_messages (string)
Controls which message levels are written to the server log. Valid values are DEBUG5, DEBUG4,
DEBUG3, DEBUG2, DEBUG1, INFO, NOTICE, WARNING, ERROR, LOG, FATAL, and PANIC. Each level
includes all the levels that follow it. The later the level, the fewer messages are sent to the log. The
default is NOTICE. Note that LOG has a different rank here than in client_min_messages. Only
superusers can change this setting.
log_error_verbosity (string)
Controls the amount of detail written in the server log for each message that is logged. Valid values
are TERSE, DEFAULT, and VERBOSE, each adding more fields to displayed messages. Only superusers
can change this setting.
log_min_error_statement (string)
Controls whether or not the SQL statement that causes an error condition will also be recorded in
the server log. All SQL statements that cause an error of the specified level or higher are logged.
The default is PANIC (effectively turning this feature off for normal use). Valid values are DEBUG5,
DEBUG4, DEBUG3, DEBUG2, DEBUG1, INFO, NOTICE, WARNING, ERROR, FATAL, and PANIC. For ex-
ample, if you set this to ERROR then all SQL statements causing errors, fatal errors, or panics will be
logged. Enabling this option can be helpful in tracking down the source of any errors that appear in
the server log. Only superusers can change this setting.
log_min_duration_statement (integer)
Sets a minimum statement execution time (in milliseconds) that causes a statement to be logged. All
SQL statements that run for the time specified or longer will be logged with their duration. Setting
this to zero will print all queries and their durations. Minus-one (the default) disables the feature.
For example, if you set it to 250 then all SQL statements that run 250ms or longer will be logged.
Enabling this option can be useful in tracking down unoptimized queries in your applications. Only
superusers can change this setting.
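For instance, a superuser could enable duration logging for slow statements in the current session with
a command like the following:
-- log any statement in this session that runs for 250 ms or longer
SET log_min_duration_statement = 250;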
silent_mode (boolean)
Runs the server silently. If this option is set, the server will automatically run in background and
any controlling terminals are disassociated (same effect as postmaster’s -S option). The server’s
standard output and standard error are redirected to /dev/null, so any messages sent to them will
be lost. Unless syslog logging is selected or redirect_stderr is enabled, using this option is
discouraged because it makes it impossible to see error messages.
Here is a list of the various message severity levels used in these settings:
DEBUG[1-5]
Provides information for use by developers.
INFO
Provides information implicitly requested by the user, e.g., during VACUUM VERBOSE.
NOTICE
Provides information that may be helpful to users, e.g., truncation of long identifiers and the creation
of indexes as part of primary keys.
WARNING
Provides warnings of likely problems, e.g., COMMIT outside a transaction block.
ERROR
Reports an error that caused the current command to abort.
LOG
Reports information of interest to administrators, e.g., checkpoint activity.
FATAL
Reports an error that caused the current session to abort.
PANIC
Reports an error that caused all sessions to abort.
debug_print_parse (boolean)
debug_print_rewritten (boolean)
debug_print_plan (boolean)
debug_pretty_print (boolean)
These options enable various debugging output to be emitted. For each executed query, they print the
resulting parse tree, the query rewriter output, or the execution plan. debug_pretty_print indents
these displays to produce a more readable but much longer output format. client_min_messages
or log_min_messages must be DEBUG1 or lower to actually send this output to the client or the
server log, respectively. These options are off by default.
log_connections (boolean)
This outputs a line to the server log detailing each successful connection. This is off by
default, although it is probably very useful. This option can only be set at server start or in the
postgresql.conf configuration file.
log_disconnections (boolean)
This outputs a line in the server log similar to log_connections but at session termination, and
includes the duration of the session. This is off by default. This option can only be set at server start
or in the postgresql.conf configuration file.
log_duration (boolean)
Causes the duration of every completed statement which satisfies log_statement to be logged.
When using this option, if you are not using syslog, it is recommended that you log the PID or
session ID using log_line_prefix so that you can link the statement to the duration using the
process ID or session ID. The default is off. Only superusers can change this setting.
log_line_prefix (string)
This is a printf-style string that is output at the beginning of each log line. The default is an empty
string. Each recognized escape is replaced as outlined below; anything else that looks like an escape
is ignored. Other characters are copied straight to the log line. Some escapes are only recognized by
session processes, and do not apply to background processes such as the postmaster. Syslog produces
its own time stamp and process ID information, so you probably do not want to use those escapes
if you are using syslog. This option can only be set at server start or in the postgresql.conf
configuration file.
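As an illustrative sketch (the %t time-stamp and %p process-ID escapes are assumed here), a prefix
carrying a time stamp and process ID could be configured like this:
# prefix each log line with the time and the process ID of the reporting process
log_line_prefix = '%t [%p] '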
log_statement (string)
Controls which SQL statements are logged. Valid values are none, ddl, mod, and all. ddl logs all
data definition commands, such as CREATE, ALTER, and DROP statements. mod logs all ddl statements,
plus INSERT, UPDATE, DELETE, TRUNCATE, and COPY FROM. PREPARE and EXPLAIN ANALYZE
statements are also logged if their contained command is of an appropriate type.
The default is none. Only superusers can change this setting.
Note: The EXECUTE statement is not considered a ddl or mod statement. When it is logged, only
the name of the prepared statement is reported, not the actual prepared statement.
When a function is defined in the PL/pgSQL server-side language, any queries executed by the
function will only be logged the first time that the function is invoked in a particular session. This
is because PL/pgSQL keeps a cache of the query plans produced for the SQL statements in the
function.
log_hostname (boolean)
By default, connection log messages only show the IP address of the connecting host. Turning on this
option causes logging of the host name as well. Note that depending on your host name resolution
setup this might impose a non-negligible performance penalty. This option can only be set at server
start or in the postgresql.conf file.
log_statement_stats (boolean)
log_parser_stats (boolean)
log_planner_stats (boolean)
log_executor_stats (boolean)
For each query, write performance statistics of the respective module to the server log. This is a
crude profiling instrument. log_statement_stats reports total statement statistics, while the oth-
ers report per-module statistics. log_statement_stats cannot be enabled together with any of the
per-module options. All of these options are disabled by default. Only superusers can change these
settings.
stats_start_collector (boolean)
Controls whether the server should start the statistics-collection subprocess. This is on by default,
but may be turned off if you know you have no interest in collecting statistics. This option can only
be set at server start.
stats_command_string (boolean)
Enables the collection of statistics on the currently executing command of each session, along with
the time at which that command began execution. This option is off by default. Note that even when
enabled, this information is not visible to all users, only to superusers and the user owning the ses-
sion being reported on; so it should not represent a security risk. This data can be accessed via the
pg_stat_activity system view; refer to Chapter 23 for more information.
stats_block_level (boolean)
Enables the collection of block-level statistics on database activity. This option is disabled by default.
If this option is enabled, the data that is produced can be accessed via the pg_stat and pg_statio
family of system views; refer to Chapter 23 for more information.
stats_row_level (boolean)
Enables the collection of row-level statistics on database activity. This option is disabled by default.
If this option is enabled, the data that is produced can be accessed via the pg_stat and pg_statio
family of system views; refer to Chapter 23 for more information.
stats_reset_on_server_start (boolean)
If on, collected statistics are zeroed out whenever the server is restarted. If off, statistics are accumu-
lated across server restarts. The default is on. This option can only be set at server start.
search_path (string)
This variable specifies the order in which schemas are searched when an object (table, data type,
function, etc.) is referenced by a simple name with no schema component. When there are objects
of identical names in different schemas, the one found first in the search path is used. An object that
is not in any of the schemas in the search path can only be referenced by specifying its containing
schema with a qualified (dotted) name.
The value for search_path has to be a comma-separated list of schema names. If one of the list
items is the special value $user, then the schema having the name returned by SESSION_USER is
substituted, if there is such a schema. (If not, $user is ignored.)
The system catalog schema, pg_catalog, is always searched, whether it is mentioned in the path
or not. If it is mentioned in the path then it will be searched in the specified order. If pg_catalog is
not in the path then it will be searched before searching any of the path items. It should also be noted
that the temporary-table schema, pg_temp_nnn, is implicitly searched before any of these.
When objects are created without specifying a particular target schema, they will be placed in the
first schema listed in the search path. An error is reported if the search path is empty.
The default value for this parameter is ’$user, public’ (where the second part will be ignored
if there is no schema named public). This supports shared use of a database (where no users have
private schemas, and all share use of public), private per-user schemas, and combinations of these.
Other effects can be obtained by altering the default search path setting, either globally or per-user.
The current effective value of the search path can be examined via the SQL function
current_schemas(). This is not quite the same as examining the value of search_path, since
current_schemas() shows how the requests appearing in search_path were resolved.
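A short sketch of inspecting and changing the search path in a session (the schema name myschema is
a placeholder):
SET search_path TO myschema, public;
SHOW search_path;
-- show the schemas actually searched, including implicit ones such as pg_catalog
SELECT current_schemas(true);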
default_tablespace (string)
This variable specifies the default tablespace in which to create objects (tables and indexes) when a
CREATE command does not explicitly specify a tablespace.
The value is either the name of a tablespace, or an empty string to specify using the default tablespace
of the current database. If the value does not match the name of any existing tablespace, PostgreSQL
will automatically use the default tablespace of the current database.
For more information on tablespaces, see Section 18.6.
check_function_bodies (boolean)
This parameter is normally true. When set to false, it disables validation of the function body string
during CREATE FUNCTION. Disabling validation is occasionally useful to avoid problems such as
forward references when restoring function definitions from a dump.
default_transaction_isolation (string)
Each SQL transaction has an isolation level, which can be either “read uncommitted”, “read commit-
ted”, “repeatable read”, or “serializable”. This parameter controls the default isolation level of each
new transaction. The default is “read committed”.
Consult Chapter 12 and SET TRANSACTION for more information.
default_transaction_read_only (boolean)
A read-only SQL transaction cannot alter non-temporary tables. This parameter controls the default
read-only status of each new transaction. The default is false (read/write).
Consult SET TRANSACTION for more information.
statement_timeout (integer)
Abort any statement that takes over the specified number of milliseconds. A value of zero (the default)
turns off the limitation.
DateStyle (string)
Sets the display format for date and time values, as well as the rules for interpreting ambiguous
date input values. For historical reasons, this variable contains two independent components: the
output format specification (ISO, Postgres, SQL, or German) and the input/output specification for
year/month/day ordering (DMY, MDY, or YMD). These can be set separately or together. The keywords
Euro and European are synonyms for DMY; the keywords US, NonEuro, and NonEuropean are
synonyms for MDY. See Section 8.5 for more information. The default is ISO, MDY.
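For example, the two components can be set together in a single command:
-- ISO output format combined with day/month/year input interpretation
SET DateStyle TO 'ISO, DMY';
SHOW DateStyle;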
timezone (string)
Sets the time zone for displaying and interpreting time stamps. The default is ’unknown’, which
means to use whatever the system environment specifies as the time zone. See Section 8.5 for more
information.
australian_timezones (boolean)
If set to true, ACST, CST, EST, and SAT are interpreted as Australian time zones rather than as
North/South American time zones and Saturday. The default is false.
extra_float_digits (integer)
This parameter adjusts the number of digits displayed for floating-point values, including float4,
float8, and geometric data types. The parameter value is added to the standard number of dig-
its (FLT_DIG or DBL_DIG as appropriate). The value can be set as high as 2, to include partially-
significant digits; this is especially useful for dumping float data that needs to be restored exactly. Or
it can be set negative to suppress unwanted digits.
client_encoding (string)
Sets the client-side encoding (character set). The default is to use the database encoding.
lc_messages (string)
Sets the language in which messages are displayed. Acceptable values are system-dependent; see
Section 20.1 for more information. If this variable is set to the empty string (which is the default)
then the value is inherited from the execution environment of the server in a system-dependent way.
On some systems, this locale category does not exist. Setting this variable will still work, but there
will be no effect. Also, there is a chance that no translated messages for the desired language exist.
In that case you will continue to see the English messages.
lc_monetary (string)
Sets the locale to use for formatting monetary amounts, for example with the to_char family of
functions. Acceptable values are system-dependent; see Section 20.1 for more information. If this
variable is set to the empty string (which is the default) then the value is inherited from the execution
environment of the server in a system-dependent way.
lc_numeric (string)
Sets the locale to use for formatting numbers, for example with the to_char family of functions.
Acceptable values are system-dependent; see Section 20.1 for more information. If this variable is set
to the empty string (which is the default) then the value is inherited from the execution environment
of the server in a system-dependent way.
lc_time (string)
Sets the locale to use for formatting date and time values. (Currently, this setting does nothing, but it
may in the future.) Acceptable values are system-dependent; see Section 20.1 for more information.
If this variable is set to the empty string (which is the default) then the value is inherited from the
execution environment of the server in a system-dependent way.
explain_pretty_print (boolean)
Determines whether EXPLAIN VERBOSE uses the indented or non-indented format for displaying
detailed query-tree dumps. The default is on.
dynamic_library_path (string)
If a dynamically loadable module needs to be opened and the file name specified in the CREATE
FUNCTION or LOAD command does not have a directory component (i.e. the name does not contain a
slash), the system will search this path for the required file.
The value for dynamic_library_path has to be a list of absolute directory paths separated
by colons (or semi-colons on Windows). If a list element starts with the special string $libdir,
the compiled-in PostgreSQL package library directory is substituted for $libdir. This is where
the modules provided by the standard PostgreSQL distribution are installed. (Use pg_config
--pkglibdir to find out the name of this directory.) For example:
dynamic_library_path = ’/usr/local/lib/postgresql:/home/my_project/lib:$libdir’
The default value for this parameter is ’$libdir’. If the value is set to an empty string, the automatic
path search is turned off.
This parameter can be changed at run time by superusers, but a setting done that way will only persist
until the end of the client connection, so this method should be reserved for development purposes.
The recommended way to set this parameter is in the postgresql.conf configuration file.
deadlock_timeout (integer)
This is the amount of time, in milliseconds, to wait on a lock before checking to see if there is
a deadlock condition. The check for deadlock is relatively slow, so the server doesn’t run it every
time it waits for a lock. We (optimistically?) assume that deadlocks are not common in production
applications and just wait on the lock for a while before starting the check for a deadlock. Increasing
this value reduces the amount of time wasted in needless deadlock checks, but slows down reporting
of real deadlock errors. The default is 1000 (i.e., one second), which is probably about the smallest
value you would want in practice. On a heavily loaded server you might want to raise it. Ideally the
setting should exceed your typical transaction time, so as to improve the odds that a lock will be
released before the waiter decides to check for deadlock.
max_locks_per_transaction (integer)
The shared lock table is sized on the assumption that at most max_locks_per_transaction *
max_connections distinct objects will need to be locked at any one time. (Thus, this parameter’s
name may be confusing: it is not a hard limit on the number of locks taken by any one transaction, but
rather a maximum average value.) The default, 64, has historically proven sufficient, but you might
need to raise this value if you have clients that touch many different tables in a single transaction.
This option can only be set at server start.
add_missing_from (boolean)
When true, tables that are referenced by a query will be automatically added to the FROM clause
if not already present. The default is true for compatibility with previous releases of PostgreSQL.
However, this behavior is not SQL-standard, and many people dislike it because it can mask mistakes
(such as referencing a table where you should have referenced its alias). Set to false for the SQL-
standard behavior of rejecting references to tables that are not listed in FROM.
regex_flavor (string)
The regular expression “flavor” can be set to advanced, extended, or basic. The default is
advanced. The extended setting may be useful for exact backwards compatibility with pre-7.4
releases of PostgreSQL. See Section 9.7.3.1 for details.
sql_inheritance (boolean)
This controls the inheritance semantics, in particular whether subtables are included by various com-
mands by default. They were not included in versions prior to 7.1. If you need the old behavior you
can set this variable to off, but in the long run you are encouraged to change your applications to use
the ONLY key word to exclude subtables. See Section 5.5 for more information about inheritance.
default_with_oids (boolean)
This controls whether CREATE TABLE and CREATE TABLE AS include an OID column in
newly-created tables, if neither WITH OIDS nor WITHOUT OIDS is specified. It also determines
whether OIDs will be included in tables created by SELECT INTO. In PostgreSQL 8.0.0
default_with_oids defaults to true. This is also the behavior of previous versions of
PostgreSQL. However, assuming that tables will contain OIDs by default is not encouraged. This
option will probably default to false in a future release of PostgreSQL.
To ease compatibility with applications that make use of OIDs, this option should be left enabled. To
ease compatibility with future versions of PostgreSQL, this option should be disabled, and applica-
tions that require OIDs on certain tables should explicitly specify WITH OIDS when those tables are
created.
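For example, a table that must keep OIDs regardless of this setting can request them explicitly (the
table definitions here are only illustrative):
CREATE TABLE audit_log (message text) WITH OIDS;
CREATE TABLE session_data (payload text) WITHOUT OIDS;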
transform_null_equals (boolean)
When turned on, expressions of the form expr = NULL (or NULL = expr ) are treated as expr IS
NULL, that is, they return true if expr evaluates to the null value, and false otherwise. The correct
SQL-spec-compliant behavior of expr = NULL is to always return null (unknown). Therefore this
option defaults to off.
However, filtered forms in Microsoft Access generate queries that appear to use expr = NULL to
test for null values, so if you use that interface to access the database you might want to turn this
option on. Since expressions of the form expr = NULL always return the null value (using the
correct interpretation) they are not very useful and do not appear often in normal applications, so
this option does little harm in practice. But new users are frequently confused about the semantics of
expressions involving null values, so this option is not on by default.
Note that this option only affects the exact form = NULL, not other comparison operators or other
expressions that are computationally equivalent to some expression involving the equals operator
(such as IN). Thus, this option is not a general fix for bad programming.
Refer to Section 9.2 for related information.
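A minimal sketch of the difference in behavior:
SET transform_null_equals TO off;
SELECT 1 WHERE NULL = NULL;   -- the comparison yields null, so no row is returned
SET transform_null_equals TO on;
SELECT 1 WHERE NULL = NULL;   -- treated as NULL IS NULL, so one row is returned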
block_size (integer)
Shows the size of a disk block. It is determined by the value of BLCKSZ when building the server. The
default value is 8192 bytes. The meaning of some configuration variables (such as shared_buffers) is
influenced by block_size. See Section 16.4.3 for information.
integer_datetimes (boolean)
Shows whether PostgreSQL was built with support for 64-bit-integer dates and times. It is set by
configuring with --enable-integer-datetimes when building PostgreSQL. The default value
is off.
lc_collate (string)
Shows the locale in which sorting of textual data is done. See Section 20.1 for more information. The
value is determined when the database cluster is initialized.
lc_ctype (string)
Shows the locale that determines character classifications. See Section 20.1 for more information.
The value is determined when the database cluster is initialized. Ordinarily this will be the same as
lc_collate, but for special applications it might be set differently.
max_function_args (integer)
Shows the maximum number of function arguments. It is determined by the value of FUNC_MAX_ARGS
when building the server. The default value is 32.
max_identifier_length (integer)
Shows the maximum identifier length. It is determined as one less than the value of NAMEDATALEN
when building the server. The default value of NAMEDATALEN is 64; therefore the default
max_identifier_length is 63.
max_index_keys (integer)
Shows the maximum number of index keys. It is determined by the value of INDEX_MAX_KEYS when
building the server. The default value is 32.
server_encoding (string)
Shows the database encoding (character set). It is determined when the database is created. Ordinar-
ily, clients need only be concerned with the value of client_encoding.
server_version (string)
Shows the version number of the server. It is determined by the value of PG_VERSION when building
the server.
custom_variable_classes (string)
This variable specifies one or several class names to be used for custom variables, in the form of a
comma-separated list. A custom variable is a variable not normally known to PostgreSQL proper but
used by some add-on module. Such variables must have names consisting of a class name, a dot, and
a variable name. custom_variable_classes specifies all the class names in use in a particular
installation. This option can only be set at server start or in the postgresql.conf configuration
file.
The difficulty with setting custom variables in postgresql.conf is that the file must be read before
add-on modules have been loaded, and so custom variables would ordinarily be rejected as unknown.
When custom_variable_classes is set, the server will accept definitions of arbitrary variables within
each specified class. These variables will be treated as placeholders and will have no function until the
module that defines them is loaded. When a module for a specific class is loaded, it will add the proper
variable definitions for its class name, convert any placeholder values according to those definitions, and
issue warnings for any placeholders of its class that remain (which presumably would be misspelled
configuration variables).
Here is an example of what postgresql.conf might contain when using custom variables:
custom_variable_classes = ’plr,pljava’
plr.path = ’/usr/lib/R’
pljava.foo = 1
plruby.bar = true # generates error, unknown class name
debug_assertions (boolean)
Turns on various assertion checks. This is a debugging aid. If you are experiencing strange problems
or crashes you might want to turn this on, as it might expose programming mistakes. To use this op-
tion, the macro USE_ASSERT_CHECKING must be defined when PostgreSQL is built (accomplished
by the configure option --enable-cassert). Note that debug_assertions defaults to on if
PostgreSQL has been built with assertions enabled.
debug_shared_buffers (integer)
Number of seconds between ARC reports. If set greater than zero, emit ARC statistics to the log
every so many seconds. Zero (the default) disables reporting.
pre_auth_delay (integer)
If nonzero, a delay of this many seconds occurs just after a new server process is forked, before it
conducts the authentication process. This is intended to give an opportunity to attach to the server
process with a debugger to trace down misbehavior in authentication.
trace_notify (boolean)
Generates a great amount of debugging output for the LISTEN and NOTIFY commands.
client_min_messages or log_min_messages must be DEBUG1 or lower to send this output to the
client or server log, respectively.
trace_locks (boolean)
trace_lwlocks (boolean)
trace_userlocks (boolean)
trace_lock_oidmin (boolean)
trace_lock_table (boolean)
debug_deadlocks (boolean)
log_btree_build_stats (boolean)
Various other code tracing and debugging options.
wal_debug (boolean)
If true, emit WAL-related debugging output. This option is only available if the WAL_DEBUG macro
was defined when PostgreSQL was compiled.
zero_damaged_pages (boolean)
Detection of a damaged page header normally causes PostgreSQL to report an error, aborting the cur-
rent command. Setting zero_damaged_pages to true causes the system to instead report a warning,
zero out the damaged page, and continue processing. This behavior will destroy data, namely all the
rows on the damaged page. But it allows you to get past the error and retrieve rows from any un-
damaged pages that may be present in the table. So it is useful for recovering data if corruption has
occurred due to hardware or software error. You should generally not set this true until you have
given up hope of recovering data from the damaged page(s) of a table. The default setting is off, and
it can only be changed by a superuser.
The most important shared memory parameter is SHMMAX, the maximum size, in bytes, of a shared memory
segment. If you get an error message from shmget like Invalid argument, it is likely that this limit has
been exceeded. The size of the required shared memory segment varies both with the number of requested
buffers (-B option) and the number of allowed connections (-N option), although the former is the most
significant. (You can, as a temporary solution, lower these settings to eliminate the failure.) As a rough
approximation, you can estimate the required segment size as suggested in Table 16-2. Any error message
you might get will contain the size of the failed allocation request.
Some systems also have a limit on the total amount of shared memory in the system (SHMALL). Make sure
this is large enough for PostgreSQL plus any other applications that are using shared memory segments.
(Caution: SHMALL is measured in pages rather than bytes on many systems.)
Less likely to cause problems is the minimum size for shared memory segments (SHMMIN), which should
be at most approximately 256 kB for PostgreSQL (it is usually just 1). The maximum number of segments
system-wide (SHMMNI) or per-process (SHMSEG) are unlikely to cause a problem unless your system has
them set to zero.
PostgreSQL uses one semaphore per allowed connection (-N option), in sets of 16. Each such set will also
contain a 17th semaphore which contains a “magic number”, to detect collision with semaphore sets used
by other applications. The maximum number of semaphores in the system is set by SEMMNS, which conse-
quently must be at least as high as max_connections plus one extra for each 16 allowed connections (see
the formula in Table 16-2). The parameter SEMMNI determines the limit on the number of semaphore sets
that can exist on the system at one time. Hence this parameter must be at least ceil(max_connections
/ 16). Lowering the number of allowed connections is a temporary workaround for failures, which are
usually confusingly worded No space left on device, from the function semget.
In some cases it might also be necessary to increase SEMMAP to be at least on the order of SEMMNS. This
parameter defines the size of the semaphore resource map, in which each contiguous block of available
semaphores needs an entry. When a semaphore set is freed it is either added to an existing entry that
is adjacent to the freed block or it is registered under a new map entry. If the map is full, the freed
semaphores get lost (until reboot). Fragmentation of the semaphore space could over time lead to fewer
available semaphores than there should be.
The SEMMSL parameter, which determines how many semaphores can be in a set, must be at least 17 for
PostgreSQL.
Various other settings related to “semaphore undo”, such as SEMMNU and SEMUME, are not of concern for
PostgreSQL.
BSD/OS
Shared Memory. By default, only 4 MB of shared memory is supported. Keep in mind that shared
memory is not pageable; it is locked in RAM. To increase the amount of shared memory supported
by your system, add something like the following to your kernel configuration file:
options "SHMALL=8192"
options "SHMMAX=\(SHMALL*PAGE_SIZE\)"
SHMALL is measured in 4KB pages, so a value of 1024 represents 4 MB of shared memory. Therefore
the above increases the maximum shared memory area to 32 MB. For those running 4.3 or later, you
will probably also need to increase KERNEL_VIRTUAL_MB above the default 248. Once all changes
have been made, recompile the kernel, and reboot.
For those running 4.0 and earlier releases, use bpatch to find the sysptsize value in the current
kernel. This is computed dynamically at boot time.
$ bpatch -r sysptsize
0x9 = 9
Next, add SYSPTSIZE as a hard-coded value in the kernel configuration file. Increase the value you
found using bpatch. Add 1 for every additional 4 MB of shared memory you desire.
options "SYSPTSIZE=16"
Semaphores. You will probably want to increase the number of semaphores as well; the default
system total of 60 will only allow about 50 PostgreSQL connections. Set the values you want in your
kernel configuration file, e.g.:
options "SEMMNI=40"
options "SEMMNS=240"
FreeBSD
NetBSD
OpenBSD
The options SYSVSHM and SYSVSEM need to be enabled when the kernel is compiled. (They are by
default.) The maximum size of shared memory is determined by the option SHMMAXPGS (in pages).
The following shows an example of how to set the various parameters:
options SYSVSHM
options SHMMAXPGS=4096
options SHMSEG=256
options SYSVSEM
options SEMMNI=256
options SEMMNS=512
options SEMMNU=256
options SEMMAP=256
(On NetBSD and OpenBSD the key word is actually option singular.)
You might also want to configure your kernel to lock shared memory into RAM and prevent it from
being paged out to swap. Use the sysctl setting kern.ipc.shm_use_phys.
HP-UX
The default settings tend to suffice for normal installations. On HP-UX 10, the factory default for
SEMMNS is 128, which might be too low for larger database sites.
IPC parameters can be set in the System Administration Manager (SAM) under Kernel
Configuration−→Configurable Parameters. Hit Create A New Kernel when you’re done.
Linux
The default shared memory limit (both SHMMAX and SHMALL) is 32 MB in 2.2 kernels, but it can be
changed in the proc file system (without reboot). For example, to allow 128 MB:
$ echo 134217728 >/proc/sys/kernel/shmall
$ echo 134217728 >/proc/sys/kernel/shmmax
In addition, these parameters can be placed in /etc/sysctl.conf, for example:
kernel.shmall = 134217728
kernel.shmmax = 134217728
This file is usually processed at boot time, but sysctl can also be called explicitly later.
Other parameters are sufficiently sized for any application. If you want to see
for yourself look in /usr/src/linux/include/asm-xxx/shmparam.h and
/usr/src/linux/include/linux/sem.h.
MacOS X
In OS X 10.2 and earlier, edit the file /System/Library/StartupItems/SystemTuning/SystemTuning
and change the values in the following commands:
sysctl -w kern.sysv.shmmax
sysctl -w kern.sysv.shmmin
sysctl -w kern.sysv.shmmni
sysctl -w kern.sysv.shmseg
sysctl -w kern.sysv.shmall
In OS X 10.3, these commands have been moved to /etc/rc and must be edited there. You’ll need
to reboot to make changes take effect. Note that /etc/rc is usually overwritten by OS X updates
(such as 10.3.6 to 10.3.7) so you should expect to have to redo your editing after each update.
SHMALL is measured in 4KB pages on this platform.
SCO OpenServer
In the default configuration, only 512 kB of shared memory per segment is allowed, which is about
enough for -B 24 -N 12. To increase the setting, first change to the directory /etc/conf/cf.d.
To display the current value of SHMMAX, run
./configure -y SHMMAX
To set a new value, run
./configure SHMMAX=value
where value is the new value you want to use (in bytes). After setting SHMMAX, rebuild the kernel:
./link_unix
and reboot.
AIX
At least as of version 5.1, it should not be necessary to do any special configuration for such param-
eters as SHMMAX, as it appears this is configured to allow all memory to be used as shared memory.
That is the sort of configuration commonly used for other databases such as DB/2.
It may, however, be necessary to modify the global ulimit information in
/etc/security/limits, as the default hard limits for file sizes (fsize) and numbers of files
(nofiles) may be too low.
Solaris
At least in version 2.6, the default maximum size of a shared memory segment is too low for Post-
greSQL. The relevant settings can be changed in /etc/system, for example:
set shmsys:shminfo_shmmax=0x2000000
set shmsys:shminfo_shmmin=1
set shmsys:shminfo_shmmni=256
set shmsys:shminfo_shmseg=256
set semsys:seminfo_semmap=256
set semsys:seminfo_semmni=512
set semsys:seminfo_semmns=512
set semsys:seminfo_semmsl=32
UnixWare
On UnixWare 7, to display the current value of SHMMAX, run
/etc/conf/bin/idtune -g SHMMAX
which displays the current, default, minimum, and maximum values. To set a new value for SHMMAX,
run
/etc/conf/bin/idtune SHMMAX value
where value is the new value you want to use (in bytes). After setting SHMMAX, rebuild the kernel:
/etc/conf/bin/idbuild -B
and reboot.
On BSD-derived systems, the file /etc/login.conf controls the resource limits set during login. For
example, an entry might look like the following:
default:\
...
:datasize-cur=256M:\
:maxproc-cur=256:\
:openfiles-cur=256:\
...
(-cur is the soft limit. Append -max to set the hard limit.)
Kernels can also have system-wide limits on some resources.
• On Linux /proc/sys/fs/file-max determines the maximum number of open files that the kernel
will support. It can be changed by writing a different number into the file or by adding an assignment in
/etc/sysctl.conf. The maximum limit of files per process is fixed at the time the kernel is compiled;
see /usr/src/linux/Documentation/proc.txt for more information.
The PostgreSQL server uses one process per connection so you should provide for at least as many pro-
cesses as allowed connections, in addition to what you need for the rest of your system. This is usually
not a problem but if you run several servers on one machine things might get tight.
The factory default limit on open files is often set to “socially friendly” values that allow many users to
coexist on a machine without using an inappropriate fraction of the system resources. If you run many
servers on a machine this is perhaps what you want, but on dedicated servers you may want to raise this
limit.
On the other side of the coin, some systems allow individual processes to open large numbers of files; if
more than a few processes do so then the system-wide limit can easily be exceeded. If you find this happen-
ing, and you do not want to alter the system-wide limit, you can set PostgreSQL’s max_files_per_process
configuration parameter to limit the consumption of open files.
On Linux, when the system runs critically low on memory, the kernel’s out-of-memory killer may kill
the PostgreSQL postmaster process; the kernel log will then contain a message reporting that the
postmaster was killed for lack of memory.
This indicates that the postmaster process has been terminated due to memory pressure. Although
existing database connections will continue to function normally, no new connections will be accepted.
To recover, PostgreSQL will need to be restarted.
One way to avoid this problem is to run PostgreSQL on a machine where you can be sure that other
processes will not run the machine out of memory.
On Linux 2.6 and later, a better solution is to modify the kernel’s behavior so that it will not “overcommit”
memory. This is done by selecting strict overcommit mode via sysctl:
sysctl -w vm.overcommit_memory=2
or placing an equivalent entry in /etc/sysctl.conf. You may also wish to modify the
related setting vm.overcommit_ratio. For details see the kernel documentation file
Documentation/vm/overcommit-accounting.
Some vendors’ Linux 2.4 kernels are reported to have early versions of the 2.6 overcommit sysctl
parameter. However, setting vm.overcommit_memory to 2 on a kernel that does not have the relevant
code will make things worse not better. It is recommended that you inspect the actual kernel source code
(see the function vm_enough_memory in the file mm/mmap.c) to verify what is supported in your copy
before you try this in a 2.4 installation. The presence of the overcommit-accounting documentation
file should not be taken as evidence that the feature is there. If in any doubt, consult a kernel expert or
your kernel vendor.
SIGTERM
After receiving SIGTERM, the server disallows new connections, but lets existing sessions end their
work normally. It shuts down only after all of the sessions terminate normally. This is the Smart
Shutdown.
SIGINT
The server disallows new connections and sends all existing server processes SIGTERM, which will
cause them to abort their current transactions and exit promptly. It then waits for the server processes
to exit and finally shuts down. This is the Fast Shutdown.
SIGQUIT
This is the Immediate Shutdown, which will cause the postmaster process to send a SIGQUIT to
all child processes and exit immediately, without properly shutting itself down. The child processes
likewise exit immediately upon receiving SIGQUIT. This will lead to recovery (by replaying the
WAL log) upon next start-up. This is recommended only in emergencies.
The pg_ctl program provides a convenient interface for sending these signals to shut down the server.
Alternatively, you can send the signal directly using kill. The PID of the postmaster process can be
found using the ps program, or from the file postmaster.pid in the data directory. For example, to do
a fast shutdown:
$ kill -INT `head -1 /usr/local/pgsql/data/postmaster.pid`
Important: It is best not to use SIGKILL to shut down the server. Doing so will prevent the server from
releasing shared memory and semaphores, which may then have to be done manually before a new
server can be started. Furthermore, SIGKILL kills the postmaster process without letting it relay the
signal to its subprocesses, so it will be necessary to kill the individual subprocesses by hand as well.
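Using pg_ctl instead, the same fast shutdown could be requested as follows (the data directory path is
only an example):
$ pg_ctl stop -D /usr/local/pgsql/data -m fast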
To create a quick self-signed certificate for the server, use the following OpenSSL command:
openssl req -new -text -out server.req
Fill out the information that openssl asks for. Make sure that you enter the local host name as “Common
Name”; the challenge password can be left blank. The program will generate a key that is passphrase
protected; it will not accept a passphrase that is less than four characters long. To remove the passphrase
(as you must if you want automatic start-up of the server), run the commands
openssl rsa -in privkey.pem -out server.key
rm privkey.pem
Enter the old passphrase to unlock the existing key. Then run
openssl req -x509 -in server.req -text -key server.key -out server.crt
chmod og-rwx server.key
to turn the certificate into a self-signed certificate and to copy the key and certificate to where the server
will look for them.
If verification of client certificates is required, place the certificates of the CA(s) you wish to check for
in the file root.crt in the data directory. When present, a client certificate will be requested from the
client during SSL connection startup, and it must have been signed by one of the certificates present in
root.crt.
When the root.crt file is not present, client certificates will not be requested or checked. In this mode,
SSL provides communication security but not authentication.
The files server.key, server.crt, and root.crt are only examined during server start; so you must
restart the server to make changes in them take effect.
The first number in the -L argument, 3333, is the port number of your end of the tunnel; it can be chosen
freely. The second number, 5432, is the remote end of the tunnel: the port number your server is using.
The name or IP address between the port numbers is the host with the database server you are going to
connect to. In order to connect to the database server using this tunnel, you connect to port 3333 on the
local machine:
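As an illustrative sketch (the host name db.example.com, the user name joe, and the database
template1 are placeholders, not values from this section), the tunnel and a connection through it could
look like this:
$ ssh -L 3333:db.example.com:5432 joe@db.example.com
$ psql -h localhost -p 3333 template1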
To the database server it will then look as though you are really user [email protected] and it will use
whatever authentication procedure was configured for connections from this user and host. Note that the
server will not think the connection is SSL-encrypted, since in fact it is not encrypted between the SSH
server and the PostgreSQL server. This should not pose any extra security risk as long as they are on the
same machine.
In order for the tunnel setup to succeed you must be allowed to connect via ssh as [email protected], just as
if you had attempted to use ssh to set up a terminal session.
Tip: Several other applications exist that can provide secure tunnels using a procedure similar in
concept to the one just described.
Chapter 17. Database Users and Privileges
Every database cluster contains a set of database users. Those users are separate from the users managed
by the operating system on which the server runs. Users own database objects (for example, tables) and
can assign privileges on those objects to other users to control who has access to which object.
This chapter describes how to create and manage users and introduces the privilege system. More infor-
mation about the various types of database objects and the effects of privileges can be found in Chapter
5.
Database users are defined with the CREATE USER SQL command:
CREATE USER name;
name follows the rules for SQL identifiers: either unadorned without special characters, or double-quoted.
To remove an existing user, use the analogous DROP USER command:
DROP USER name;
For convenience, the programs createuser and dropuser are provided as wrappers around these SQL com-
mands that can be called from the shell command line:
createuser name
dropuser name
To determine the set of existing users, examine the pg_user system catalog, for example
SELECT usename FROM pg_user;
The psql program’s \du meta-command is also useful for listing the existing users.
In order to bootstrap the database system, a freshly initialized system always contains one predefined user.
This user will have the fixed ID 1, and by default (unless altered when running initdb) it will have the
same name as the operating system user that initialized the database cluster. Customarily, this user will be
named postgres. In order to create more users you first have to connect as this initial user.
Exactly one user identity is active for a connection to the database server. The user name to use for a
particular database connection is indicated by the client that is initiating the connection request in an
application-specific fashion. For example, the psql program uses the -U command line option to indicate
the user to connect as. Many applications assume the name of the current operating system user by de-
fault (including createuser and psql). Therefore it is convenient to maintain a naming correspondence
between the two user sets.
The set of database users a given client connection may connect as is determined by the client authentica-
tion setup, as explained in Chapter 19. (Thus, a client is not necessarily limited to connect as the user with
the same name as its operating system user, just as a person’s login name need not match her real name.)
Since the user identity determines the set of privileges available to a connected client, it is important to
carefully configure this when setting up a multiuser environment.
superuser
A database superuser bypasses all permission checks. Also, only a superuser can create new users.
To create a database superuser, use CREATE USER name CREATEUSER.
database creation
A user must be explicitly given permission to create databases (except for superusers, since those
bypass all permission checks). To create such a user, use CREATE USER name CREATEDB.
password
A password is only significant if the client authentication method requires the user to supply a pass-
word when connecting to the database. The password, md5, and crypt authentication methods
make use of passwords. Database passwords are separate from operating system passwords. Specify
a password upon user creation with CREATE USER name PASSWORD ’string’.
A user’s attributes can be modified after creation with ALTER USER. See the reference pages for the
CREATE USER and ALTER USER commands for details.
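For example, the attributes described above can be combined at creation time and adjusted later (the
user name joe is illustrative):
-- a user who may create databases and must supply a password when required
CREATE USER joe CREATEDB PASSWORD 'guessme';
-- later, promote the user to superuser status
ALTER USER joe CREATEUSER;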
A user can also set personal defaults for many of the run-time configuration settings described in Section
16.4. For example, if for some reason you want to disable index scans (hint: not a good idea) anytime you
connect, you can use
ALTER USER myname SET enable_indexscan TO off;
This will save the setting (but not set it immediately). In subsequent connections by this user it will appear
as though SET enable_indexscan TO off; had been executed just before the session started. You
can still alter this setting during the session; it will only be the default. To undo any such setting, use
ALTER USER username RESET varname;.
17.3. Groups
As in Unix, groups are a way of logically grouping users to ease management of privileges: privileges
can be granted to, or revoked from, a group as a whole. To create a group, use the CREATE GROUP SQL
command:
CREATE GROUP name;
To add users to or remove users from an existing group, use ALTER GROUP:
ALTER GROUP name ADD USER uname1, ... ;
ALTER GROUP name DROP USER uname1, ... ;
The psql program’s \dg meta-command is also useful for listing the existing groups.
17.4. Privileges
When a database object is created, it is assigned an owner. The owner is the user that executed the creation
statement. To change the owner of a table, index, sequence, or view, use the ALTER TABLE command. By
default, only an owner (or a superuser) can do anything with the object. In order to allow other users to
use it, privileges must be granted.
There are several different privileges: SELECT, INSERT, UPDATE, DELETE, RULE, REFERENCES,
TRIGGER, CREATE, TEMPORARY, EXECUTE, USAGE, and ALL PRIVILEGES. For more information on the
different types of privileges supported by PostgreSQL, see the GRANT reference page. The right to
modify or destroy an object is always the privilege of the owner only. To assign privileges, the GRANT
command is used. So, if joe is an existing user, and accounts is an existing table, the privilege to
update the table can be granted with
GRANT UPDATE ON accounts TO joe;
The user executing this command must be the owner of the table. To grant a privilege to a group, use
GRANT SELECT ON accounts TO GROUP name;
The special “user” name PUBLIC can be used to grant a privilege to every user on the system. Writing
ALL in place of a specific privilege specifies that all privileges will be granted.
The special privileges of the table owner (i.e., the right to do DROP, GRANT, REVOKE, etc) are always
implicit in being the owner, and cannot be granted or revoked. But the table owner can choose to revoke
his own ordinary privileges, for example to make a table read-only for himself as well as others.
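For instance, a previously granted privilege can be withdrawn again with the REVOKE command:
REVOKE UPDATE ON accounts FROM joe;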
Chapter 18. Managing Databases
Every instance of a running PostgreSQL server manages one or more databases. Databases are therefore
the topmost hierarchical level for organizing SQL objects (“database objects”). This chapter describes the
properties of databases, and how to create, manage, and destroy them.
18.1. Overview
A database is a named collection of SQL objects (“database objects”). Generally, every database object
(tables, functions, etc.) belongs to one and only one database. (But there are a few system catalogs, for
example pg_database, that belong to a whole cluster and are accessible from each database within
the cluster.) More accurately, a database is a collection of schemas and the schemas contain the tables,
functions, etc. So the full hierarchy is: server, database, schema, table (or some other kind of object, such
as a function).
When connecting to the database server, a client must specify in its connection request the name of the
database it wants to connect to. It is not possible to access more than one database per connection. (But
an application is not restricted in the number of connections it opens to the same or other databases.)
Databases are physically separated and access control is managed at the connection level. If one Post-
greSQL server instance is to house projects or users that should be separate and for the most part unaware
of each other, it is therefore recommendable to put them into separate databases. If the projects or users
are interrelated and should be able to use each other’s resources they should be put in the same database,
but possibly into separate schemas. Schemas are a purely logical structure and who can access what is
managed by the privilege system. More information about managing schemas is in Section 5.8.
Databases are created with the CREATE DATABASE command (see Section 18.2) and destroyed with the
DROP DATABASE command (see Section 18.5). To determine the set of existing databases, examine the
pg_database system catalog, for example
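SELECT datname FROM pg_database;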
The psql program’s \l meta-command and -l command-line option are also useful for listing the existing
databases.
Note: The SQL standard calls databases “catalogs”, but there is no difference in practice.
18.2. Creating a Database
Databases are created with the SQL command CREATE DATABASE:
CREATE DATABASE name;
where name follows the usual rules for SQL identifiers. The current user automatically becomes the
owner of the new database. It is the privilege of the owner of a database to remove it later on (which also
removes all the objects in it, even if they have a different owner).
The creation of databases is a restricted operation. See Section 17.2 for how to grant permission.
Since you need to be connected to the database server in order to execute the CREATE DATABASE com-
mand, the question remains how the first database at any given site can be created. The first database is
always created by the initdb command when the data storage area is initialized. (See Section 16.2.) This
database is called template1. So to create the first “real” database you can connect to template1.
The name template1 is no accident: when a new database is created, the template database is essentially
cloned. This means that any changes you make in template1 are propagated to all subsequently cre-
ated databases. This implies that you should not use the template database for real work, but when used
judiciously this feature can be convenient. More details appear in Section 18.3.
As a convenience, there is a program that you can execute from the shell to create new databases,
createdb.
createdb dbname
createdb does no magic. It connects to the template1 database and issues the CREATE DATABASE
command, exactly as described above. The createdb reference page contains the invocation details. Note
that createdb without any arguments will create a database with the current user name, which may or
may not be what you want.
Note: Chapter 19 contains information about how to restrict who can connect to a given database.
Sometimes you want to create a database for someone else. That user should become the owner of the new
database, so he can configure and manage it himself. To achieve that, use one of the following commands:
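CREATE DATABASE dbname OWNER username;
from the SQL environment, or
createdb -O username dbname
from the shell.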
18.3. Template Databases
There is a second standard system database named template0, which contains the same objects that template1 contained just after initdb and which should never be changed. By instructing CREATE DATABASE to copy template0 instead of template1, you can create a “virgin” user database that contains none of the site-local additions in template1. This is particularly handy when restoring a pg_dump dump: the
dump script should be restored in a virgin database to ensure that one recreates the correct contents of the
dumped database, without any conflicts with additions that may now be present in template1.
To create a database by copying template0, use
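CREATE DATABASE dbname TEMPLATE template0;
from the SQL environment, or
createdb -T template0 dbname
from the shell.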
After preparing a template database, or making any changes to one, it is a good idea to perform VACUUM
FREEZE in that database. If this is done when there are no other open transactions in the same database,
then it is guaranteed that all rows in the database are “frozen” and will not be subject to transaction ID
wraparound problems. This is particularly important for a database that will have datallowconn set to
false, since it will be impossible to do routine maintenance VACUUM in such a database. See Section 21.1.3
for more information.
Note: template1 and template0 do not have any special status beyond the fact that the name
template1 is the default source database name for CREATE DATABASE and the default database-
to-connect-to for various programs such as createdb. For example, one could drop template1 and
recreate it from template0 without any ill effects. This course of action might be advisable if one has
carelessly added a bunch of junk in template1.
18.4. Database Configuration
Recall from Section 16.4 that the PostgreSQL server provides a large number of run-time configuration
variables. You can set database-specific default values for many of these settings.
For example, if for some reason you want to disable the GEQO optimizer for a given database, you’d
ordinarily have to either disable it for all databases or make sure that every connecting client is careful to
issue SET geqo TO off;. To make this setting the default within a particular database, you can execute
the command
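ALTER DATABASE mydb SET geqo TO off;
(where mydb is the database in question).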
This will save the setting (but not set it immediately). In subsequent connections to this database it will
appear as though SET geqo TO off; had been executed just before the session started. Note that users
can still alter this setting during their sessions; it will only be the default. To undo any such setting, use
ALTER DATABASE dbname RESET varname;.
18.5. Destroying a Database
Databases are destroyed with the command DROP DATABASE:
DROP DATABASE name;
Only the owner of the database (i.e., the user that created it), or a superuser, can drop a database. Dropping
a database removes all objects that were contained within the database. The destruction of a database
cannot be undone.
You cannot execute the DROP DATABASE command while connected to the victim database. You can,
however, be connected to any other database, including the template1 database. template1 would be
the only option for dropping the last user database of a given cluster.
For convenience, there is also a shell program to drop databases, dropdb:
dropdb dbname
(Unlike createdb, it is not the default action to drop the database with the current user name.)
18.6. Tablespaces
Tablespaces in PostgreSQL allow database administrators to define locations in the file system where the
files representing database objects can be stored. Once created, a tablespace can be referred to by name
when creating database objects.
By using tablespaces, an administrator can control the disk layout of a PostgreSQL installation. This is
useful in at least two ways. First, if the partition or volume on which the cluster was initialized runs out
of space and cannot be extended, a tablespace can be created on a different partition and used until the
system can be reconfigured.
Second, tablespaces allow an administrator to use knowledge of the usage pattern of database objects to
optimize performance. For example, an index which is very heavily used can be placed on a very fast,
highly available disk, such as an expensive solid state device. At the same time a table storing archived
data which is rarely used or not performance critical could be stored on a less expensive, slower disk
system.
To define a tablespace, use the CREATE TABLESPACE command, for example:
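CREATE TABLESPACE fastspace LOCATION '/mnt/sda1/postgresql/data';
(The tablespace name and location shown here are only examples; use any name and any suitable absolute path.)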
The location must be an existing, empty directory that is owned by the PostgreSQL system user. All
objects subsequently created within the tablespace will be stored in files underneath this directory.
Note: There is usually not much point in making more than one tablespace per logical file system,
since you cannot control the location of individual files within a logical file system. However, Post-
greSQL does not enforce any such limitation, and indeed it is not directly aware of the file system
boundaries on your system. It just stores files in the directories you tell it to use.
Creation of the tablespace itself must be done as a database superuser, but after that you can allow ordinary
database users to make use of it. To do that, grant them the CREATE privilege on it.
Tables, indexes, and entire databases can be assigned to particular tablespaces. To do so, a user with the
CREATE privilege on a given tablespace must pass the tablespace name as a parameter to the relevant
command. For example, the following creates a table in the tablespace space1:
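CREATE TABLE foo(i int) TABLESPACE space1;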
When default_tablespace is set to anything but an empty string, it supplies an implicit TABLESPACE
clause for CREATE TABLE and CREATE INDEX commands that do not have an explicit one.
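For example:
SET default_tablespace = space1;
CREATE TABLE foo(i int);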
The tablespace associated with a database is used to store the system catalogs of that database, as well
as any temporary files created by server processes using that database. Furthermore, it is the default
tablespace selected for tables and indexes created within the database, if no TABLESPACE clause is given
(either explicitly or via default_tablespace) when the objects are created. If a database is created
without specifying a tablespace for it, it uses the same tablespace as the template database it is copied
from.
Two tablespaces are automatically created by initdb. The pg_global tablespace is used for shared
system catalogs. The pg_default tablespace is the default tablespace of the template1 and template0
databases (and, therefore, will be the default tablespace for other databases as well, unless overridden by
a TABLESPACE clause in CREATE DATABASE).
Once created, a tablespace can be used from any database, provided the requesting user has sufficient
privilege. This means that a tablespace cannot be dropped until all objects in all databases using the
tablespace have been removed.
To remove an empty tablespace, use the DROP TABLESPACE command.
To determine the set of existing tablespaces, examine the pg_tablespace system catalog, for example
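SELECT spcname FROM pg_tablespace;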
The psql program’s \db meta-command is also useful for listing the existing tablespaces.
PostgreSQL makes extensive use of symbolic links to simplify the implementation of tablespaces. This
means that tablespaces can be used only on systems that support symbolic links.
The directory $PGDATA/pg_tblspc contains symbolic links that point to each of the non-built-in ta-
blespaces defined in the cluster. Although not recommended, it is possible to adjust the tablespace layout
by hand by redefining these links. Two warnings: do not do so while the postmaster is running; and after
you restart the postmaster, update the pg_tablespace catalog to show the new locations. (If you do not,
pg_dump will continue to show the old tablespace locations.)
Chapter 19. Client Authentication
When a client application connects to the database server, it specifies which PostgreSQL user name it
wants to connect as, much the same way one logs into a Unix computer as a particular user. Within the
SQL environment the active database user name determines access privileges to database objects — see
Chapter 17 for more information. Therefore, it is essential to restrict which database users can connect.
Authentication is the process by which the database server establishes the identity of the client, and by ex-
tension determines whether the client application (or the user who runs the client application) is permitted
to connect with the user name that was requested.
PostgreSQL offers a number of different client authentication methods. The method used to authenticate
a particular client connection can be selected on the basis of (client) host address, database, and user.
PostgreSQL user names are logically separate from user names of the operating system in which the server
runs. If all the users of a particular server also have accounts on the server’s machine, it makes sense to
assign database user names that match their operating system user names. However, a server that accepts
remote connections may have many database users who have no local operating system account, and in
such cases there need be no connection between database user names and OS user names.
local
This record matches connection attempts using Unix-domain sockets. Without a record of this type,
Unix-domain socket connections are disallowed.
host
This record matches connection attempts made using TCP/IP. host records match either SSL or
non-SSL connection attempts.
Note: Remote TCP/IP connections will not be possible unless the server is started with an ap-
propriate value for the listen_addresses configuration parameter, since the default behavior is to
listen for TCP/IP connections only on the local loopback address localhost.
hostssl
This record matches connection attempts made using TCP/IP, but only when the connection is made
with SSL encryption.
To make use of this option the server must be built with SSL support. Furthermore, SSL must be
enabled at server start time by setting the ssl configuration parameter (see Section 16.7 for more
information).
hostnossl
This record type has the opposite logic to hostssl: it only matches connection attempts made over
TCP/IP that do not use SSL.
database
Specifies which databases this record matches. The value all specifies that it matches all databases.
The value sameuser specifies that the record matches if the requested database has the same name
as the requested user. The value samegroup specifies that the requested user must be a member of
the group with the same name as the requested database. Otherwise, this is the name of a specific
PostgreSQL database. Multiple database names can be supplied by separating them with commas. A
file containing database names can be specified by preceding the file name with @.
user
Specifies which PostgreSQL users this record matches. The value all specifies that it matches all
users. Otherwise, this is the name of a specific PostgreSQL user. Multiple user names can be supplied
by separating them with commas. Group names can be specified by preceding the group name with
+. A file containing user names can be specified by preceding the file name with @.
CIDR-address
Specifies the client machine IP address range that this record matches. It contains an IP address
in standard dotted decimal notation and a CIDR mask length. (IP addresses can only be specified
numerically, not as domain or host names.) The mask length indicates the number of high-order bits
of the client IP address that must match. Bits to the right of this must be zero in the given IP address.
There must not be any white space between the IP address, the /, and the CIDR mask length.
A typical CIDR-address is 172.20.143.89/32 for a single host, or 172.20.143.0/24 for a
network. To specify a single host, use a CIDR mask of 32 for IPv4 or 128 for IPv6.
An IP address given in IPv4 format will match IPv6 connections that have the corresponding address,
for example 127.0.0.1 will match the IPv6 address ::ffff:127.0.0.1. An entry given in IPv6
format will match only IPv6 connections, even if the represented address is in the IPv4-in-IPv6 range.
Note that entries in IPv6 format will be rejected if the system’s C library does not have support for
IPv6 addresses.
This field only applies to host, hostssl, and hostnossl records.
IP-address
IP-mask
These fields may be used as an alternative to the CIDR-address notation. Instead of specifying the
mask length, the actual mask is specified in a separate column. For example, 255.0.0.0 represents
an IPv4 CIDR mask length of 8, and 255.255.255.255 represents a CIDR mask length of 32.
These fields only apply to host, hostssl, and hostnossl records.
authentication-method
Specifies the authentication method to use when connecting via this record. The possible choices are
summarized here; details are in Section 19.2.
trust
Allow the connection unconditionally. This method allows anyone that can connect to the Post-
greSQL database server to login as any PostgreSQL user they like, without the need for a pass-
word. See Section 19.2.1 for details.
reject
Reject the connection unconditionally. This is useful for “filtering out” certain hosts from a
group.
md5
Require the client to supply an MD5-encrypted password for authentication. See Section 19.2.2
for details.
crypt
Require the client to supply a crypt()-encrypted password for authentication. md5 is preferred
for 7.2 and later clients, but pre-7.2 clients only support crypt. See Section 19.2.2 for details.
password
Require the client to supply an unencrypted password for authentication. Since the password is
sent in clear text over the network, this should not be used on untrusted networks. See Section
19.2.2 for details.
krb4
Use Kerberos V4 to authenticate the user. This is only available for TCP/IP connections. See
Section 19.2.3 for details.
krb5
Use Kerberos V5 to authenticate the user. This is only available for TCP/IP connections. See
Section 19.2.3 for details.
ident
Obtain the operating system user name of the client (for TCP/IP connections by contacting the
ident server on the client, for local connections by getting it from the operating system) and
check if the user is allowed to connect as the requested database user by consulting the map
specified after the ident key word. See Section 19.2.4 for details.
pam
Authenticate using the Pluggable Authentication Modules (PAM) service provided by the oper-
ating system. See Section 19.2.5 for details.
authentication-option
The meaning of this optional field depends on the chosen authentication method. Details appear
below.
Files included by @ constructs are read as lists of names, which can be separated by either whitespace or
commas. Comments are introduced by #, just as in pg_hba.conf, and nested @ constructs are allowed.
Unless the file name following @ is an absolute path, it is taken to be relative to the directory containing
the referencing file.
Since the pg_hba.conf records are examined sequentially for each connection attempt, the order of the
records is significant. Typically, earlier records will have tight connection match parameters and weaker
authentication methods, while later records will have looser match parameters and stronger authentication
methods. For example, one might wish to use trust authentication for local TCP/IP connections but
require a password for remote TCP/IP connections. In this case a record specifying trust authentication
for connections from 127.0.0.1 would appear before a record specifying password authentication for a
wider range of allowed client IP addresses.
The pg_hba.conf file is read on start-up and when the main server process (postmaster) receives a
SIGHUP signal. If you edit the file on an active system, you will need to signal the postmaster (using
pg_ctl reload or kill -HUP) to make it re-read the file.
Some examples of pg_hba.conf entries are shown in Example 19-1. See the next section for details on
the different authentication methods.
# Allow any user on the local system to connect to any database under
# any user name using Unix-domain sockets (the default for local
# connections).
#
# TYPE DATABASE USER CIDR-ADDRESS METHOD
local all all trust
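# Allow any user from the local host to connect to any database under
# any user name using TCP/IP on the local loopback address.
#
# TYPE DATABASE USER CIDR-ADDRESS METHOD
host all all 127.0.0.1/32 trust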
# The same as the last line but using a separate netmask column
#
# TYPE DATABASE USER IP-ADDRESS IP-MASK METHOD
host all all 127.0.0.1 255.255.255.255 trust
# Allow any user from any host with IP address 192.168.93.x to connect
# to database "template1" as the same user name that ident reports for
# the connection (typically the Unix user name).
#
# TYPE DATABASE USER CIDR-ADDRESS METHOD
host template1 all 192.168.93.0/24 ident sameuser
# If these are the only three lines for local connections, they will
# allow local users to connect only to their own databases (databases
# with the same name as their user name) except for administrators and
# members of group "support" who may connect to all databases. The file
# $PGDATA/admins contains a list of user names. Passwords are required in
# all cases.
#
# TYPE DATABASE USER CIDR-ADDRESS METHOD
local sameuser all md5
local all @admins md5
local all +support md5
# The last two lines above can be combined into a single line:
local all @admins,+support md5
# The database column can also use lists and file names, but not groups:
local db1,db2,@demodbs all md5
trust authentication is only suitable for TCP/IP connections if you trust every user on every machine that
is allowed to connect to the server by the pg_hba.conf lines that specify trust. It is seldom reasonable
to use trust for any TCP/IP connections other than those from localhost (127.0.0.1).
If you use mod_auth_kerb from http://modauthkerb.sf.net and mod_perl on your Apache web server, you
can use AuthType KerberosV5SaveCredentials with a mod_perl script. This gives secure database
access over the web, no extra passwords required.
1. http://www.nrl.navy.mil/CCS/people/kenh/kerberos-faq.html
2. ftp://athena-dist.mit.edu
Ident maps are defined in the file pg_ident.conf, which contains lines of the general form:
map-name ident-username database-username
Comments and whitespace are handled in the same way as in pg_hba.conf. The map-name is an
arbitrary name that will be used to refer to this mapping in pg_hba.conf. The other two fields specify
which operating system user is allowed to connect as which database user. The same map-name can be
used repeatedly to specify more user-mappings within a single map. There is no restriction regarding how
many database users a given operating system user may correspond to, nor vice versa.
The pg_ident.conf file is read on start-up and when the main server process (postmaster) receives a
SIGHUP signal. If you edit the file on an active system, you will need to signal the postmaster (using
pg_ctl reload or kill -HUP) to make it re-read the file.
A pg_ident.conf file that could be used in conjunction with the pg_hba.conf file in Example 19-1
is shown in Example 19-2. In this example setup, anyone logged in to a machine on the 192.168 network
that does not have the Unix user name bryanh, ann, or robert would not be granted access. Unix user
robert would only be allowed access when he tries to connect as PostgreSQL user bob, not as robert
or anyone else. ann would only be allowed to connect as ann. User bryanh would be allowed to connect
as either bryanh himself or as guest1.
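For instance, a map along these lines would implement that policy (the map name, here omicron, is arbitrary):
# MAPNAME IDENT-USERNAME PG-USERNAME
omicron bryanh bryanh
omicron ann ann
# bob has user name robert on these machines
omicron robert bob
# bryanh can also connect as guest1
omicron bryanh guest1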
19.3. Authentication Problems
Genuine authentication failures and related problems generally manifest themselves through error messages like the following:
FATAL: no pg_hba.conf entry for host "123.123.123.123", user "andym", database "testdb"
This is what you are most likely to get if you succeed in contacting the server, but it does not want to talk
to you. As the message suggests, the server refused the connection request because it found no authorizing
entry in its pg_hba.conf configuration file.
4. http://www.kernel.org/pub/linux/libs/pam/
5. http://www.sun.com/software/solaris/pam/
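Another common failure produces a message like
FATAL: password authentication failed for user "andym"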
Messages like this indicate that you contacted the server, and it is willing to talk to you, but not until you
pass the authorization method specified in the pg_hba.conf file. Check the password you are providing,
or check your Kerberos or ident software if the complaint mentions one of those authentication types.
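You might also see a message like
FATAL: database "testdb" does not exist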
The database you are trying to connect to does not exist. Note that if you do not specify a database name,
it defaults to the database user name, which may or may not be the right thing.
Tip: The server log may contain more information about an authentication failure than is reported to
the client. If you are confused about the reason for a failure, check the log.
Chapter 20. Localization
This chapter describes the available localization features from the point of view of the administrator.
PostgreSQL supports localization with two approaches:
• Using the locale features of the operating system to provide locale-specific collation order, number
formatting, translated messages, and other aspects.
• Providing a number of different character sets defined in the PostgreSQL server, including multiple-byte
character sets, to support storing text in all kinds of languages, and providing character set translation
between client and server.
20.1. Locale Support
Locale support refers to an application respecting cultural preferences regarding alphabets, sorting, number formatting, and so on.
20.1.1. Overview
Locale support is automatically initialized when a database cluster is created using initdb. initdb will
initialize the database cluster with the locale setting of its execution environment by default, so if your
system is already set to use the locale that you want in your database cluster then there is nothing else you
need to do. If you want to use a different locale (or you are not sure which locale your system is set to),
you can instruct initdb exactly which locale to use by specifying the --locale option. For example:
initdb --locale=sv_SE
This example sets the locale to Swedish (sv) as spoken in Sweden (SE). Other possibilities might be
en_US (U.S. English) and fr_CA (French Canadian). If more than one character set can be useful for a
locale then the specifications look like this: cs_CZ.ISO8859-2. What locales are available under what
names on your system depends on what was provided by the operating system vendor and what was
installed. (On most systems, the command locale -a will provide a list of available locales.)
Occasionally it is useful to mix rules from several locales, e.g., use English collation rules but Spanish
messages. To support that, a set of locale subcategories exist that control only a certain aspect of the
localization rules:
LC_COLLATE String sort order
LC_CTYPE Character classification (What is a letter? Its upper-case equivalent?)
LC_MESSAGES Language of messages
LC_MONETARY Formatting of currency amounts
LC_NUMERIC Formatting of numbers
LC_TIME Formatting of dates and times
The category names translate into names of initdb options to override the locale choice for a specific
category. For instance, to set the locale to French Canadian, but use U.S. rules for formatting currency,
use initdb --locale=fr_CA --lc-monetary=en_US.
If you want the system to behave as if it had no locale support, use the special locale C or POSIX.
The nature of some locale categories is that their value has to be fixed for the lifetime of a database cluster.
That is, once initdb has run, you cannot change them anymore. LC_COLLATE and LC_CTYPE are those
categories. They affect the sort order of indexes, so they must be kept fixed, or indexes on text columns
will become corrupt. PostgreSQL enforces this by recording the values of LC_COLLATE and LC_CTYPE
that are seen by initdb. The server automatically adopts those two values when it is started.
The other locale categories can be changed as desired whenever the server is running by setting the run-
time configuration variables that have the same name as the locale categories (see Section 16.4.8.2 for
details). The defaults that are chosen by initdb are actually only written into the configuration file
postgresql.conf to serve as defaults when the server is started. If you delete the assignments from
postgresql.conf then the server will inherit the settings from the execution environment.
Note that the locale behavior of the server is determined by the environment variables seen by the server,
not by the environment of any client. Therefore, be careful to configure the correct locale settings before
starting the server. A consequence of this is that if client and server are set up in different locales, messages
may appear in different languages depending on where they originated.
Note: When we speak of inheriting the locale from the execution environment, this means the following
on most operating systems: For a given locale category, say the collation, the following environment
variables are consulted in this order until one is found to be set: LC_ALL, LC_COLLATE (the variable
corresponding to the respective category), LANG. If none of these environment variables are set then
the locale defaults to C.
Some message localization libraries also look at the environment variable LANGUAGE which overrides
all other locale settings for the purpose of setting the language of messages. If in doubt, please refer
to the documentation of your operating system, in particular the documentation about gettext, for more
information.
To enable messages to be translated to the user’s preferred language, NLS must have been enabled at build
time. This choice is independent of the other locale support.
20.1.2. Behavior
Locale support influences the following features:
• Sort order in queries using ORDER BY on textual data
• The ability to use indexes with LIKE clauses
• The upper, lower, and initcap functions
• The to_char family of functions
The drawback of using locales other than C or POSIX in PostgreSQL is its performance impact. It slows
character handling and prevents ordinary indexes from being used by LIKE. For this reason use locales
only if you actually need them.
20.1.3. Problems
If locale support doesn’t work in spite of the explanation above, check that the locale support in your
operating system is correctly configured. To check what locales are installed on your system, you may use
the command locale -a if your operating system provides it.
Check that PostgreSQL is actually using the locale that you think it is. LC_COLLATE and LC_CTYPE
settings are determined at initdb time and cannot be changed without repeating initdb. Other lo-
cale settings including LC_MESSAGES and LC_MONETARY are initially determined by the environment the
server is started in, but can be changed on-the-fly. You can check the active locale settings using the SHOW
command.
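For example:
SHOW lc_collate;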
The directory src/test/locale in the source distribution contains a test suite for PostgreSQL’s locale
support.
Client applications that handle server-side errors by parsing the text of the error message will obviously
have problems when the server’s messages are in a different language. Authors of such applications are
advised to make use of the error code scheme instead.
Maintaining catalogs of message translations requires the on-going efforts of many volunteers that want
to see PostgreSQL speak their preferred language well. If messages in your language are currently not
available or not fully translated, your assistance would be appreciated. If you want to help, refer to Chapter
44 or write to the developers’ mailing list.
20.2. Character Set Support
The character set support in PostgreSQL allows you to store text in a variety of character sets, including single-byte character sets such as the ISO 8859 series and multiple-byte character sets such as EUC (Extended Unix Code) and Unicode. The character sets available for use in the server are listed below.
Table 20-1. Server Character Sets
Name Description
SQL_ASCII ASCII
EUC_JP Japanese EUC
EUC_CN Chinese EUC
EUC_KR Korean EUC
JOHAB Korean EUC (Hangul based)
EUC_TW Taiwan EUC
UNICODE Unicode (UTF-8)
MULE_INTERNAL Mule internal code
LATIN1 ISO 8859-1/ECMA 94 (Latin alphabet no.1)
LATIN2 ISO 8859-2/ECMA 94 (Latin alphabet no.2)
LATIN3 ISO 8859-3/ECMA 94 (Latin alphabet no.3)
LATIN4 ISO 8859-4/ECMA 94 (Latin alphabet no.4)
LATIN5 ISO 8859-9/ECMA 128 (Latin alphabet no.5)
LATIN6 ISO 8859-10/ECMA 144 (Latin alphabet no.6)
LATIN7 ISO 8859-13 (Latin alphabet no.7)
LATIN8 ISO 8859-14 (Latin alphabet no.8)
LATIN9 ISO 8859-15 (Latin alphabet no.9)
LATIN10 ISO 8859-16/ASRO SR 14111 (Latin alphabet no.10)
ISO_8859_5 ISO 8859-5/ECMA 113 (Latin/Cyrillic)
ISO_8859_6 ISO 8859-6/ECMA 114 (Latin/Arabic)
ISO_8859_7 ISO 8859-7/ECMA 118 (Latin/Greek)
ISO_8859_8 ISO 8859-8/ECMA 121 (Latin/Hebrew)
KOI8 KOI8-R(U)
ALT Windows CP866
WIN874 Windows CP874 (Thai)
WIN1250 Windows CP1250
WIN Windows CP1251
WIN1256 Windows CP1256 (Arabic)
TCVN TCVN-5712/Windows CP1258 (Vietnamese)
Important: Before PostgreSQL 7.2, LATIN5 mistakenly meant ISO 8859-5. From 7.2 on, LATIN5
means ISO 8859-9. If you have a LATIN5 database created on 7.1 or earlier and want to migrate
to 7.2 or later, you should be careful about this change.
Not all APIs support all the listed character sets. For example, the PostgreSQL JDBC driver does not
support MULE_INTERNAL, LATIN6, LATIN8, and LATIN10.
20.2.1. Setting the Character Set
initdb defines the default character set for a PostgreSQL cluster. For example,
initdb -E EUC_JP
sets the default character set (encoding) to EUC_JP (Extended Unix Code for Japanese). You can use
--encoding instead of -E if you prefer to type longer option strings. If no -E or --encoding option is
given, SQL_ASCII is used.
You can create a database with a different character set:
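createdb -E EUC_KR korean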
This will create a database named korean that uses the character set EUC_KR. Another way to accomplish
this is to use this SQL command:
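CREATE DATABASE korean WITH ENCODING 'EUC_KR';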
The encoding for a database is stored in the system catalog pg_database. You can see that by using the
-l option or the \l command of psql.
$ psql -l
List of databases
Database | Owner | Encoding
---------------+---------+---------------
euc_cn | t-ishii | EUC_CN
euc_jp | t-ishii | EUC_JP
euc_kr | t-ishii | EUC_KR
euc_tw | t-ishii | EUC_TW
mule_internal | t-ishii | MULE_INTERNAL
regression | t-ishii | SQL_ASCII
template1 | t-ishii | EUC_JP
test | t-ishii | EUC_JP
unicode | t-ishii | UNICODE
(9 rows)
Important: Although you can specify any encoding you want for a database, it is unwise to choose an
encoding that is not what is expected by the locale you have selected. The LC_COLLATE and LC_CTYPE
settings imply a particular encoding, and locale-dependent operations (such as sorting) are likely to
misinterpret data that is in an incompatible encoding.
Since these locale settings are frozen by initdb, the apparent flexibility to use different encodings in
different databases of a cluster is more theoretical than real. It is likely that these mechanisms will be
revisited in future versions of PostgreSQL.
One way to use multiple encodings safely is to set the locale to C or POSIX during initdb, thus
disabling any real locale awareness.
20.2.2. Automatic Character Set Conversion Between Server and Client
To enable the automatic character set conversion, you have to tell PostgreSQL the character set (encoding)
you would like to use in the client. There are several ways to accomplish this:
• Using the \encoding command in psql. \encoding allows you to change client encoding on the fly.
For example, to change the encoding to SJIS, type:
\encoding SJIS
• Using libpq functions. \encoding actually calls PQsetClientEncoding() for its purpose.
int PQsetClientEncoding(PGconn *conn, const char *encoding);
where conn is a connection to the server, and encoding is the encoding you want to use. If the func-
tion successfully sets the encoding, it returns 0, otherwise -1. The current encoding for this connection
can be determined by using:
int PQclientEncoding(const PGconn *conn);
Note that it returns the encoding ID, not a symbolic string such as EUC_JP. To convert an encoding ID
to an encoding name, you can use:
char *pg_encoding_to_char(int encoding_id);
• Using SET client_encoding TO. Setting the client encoding can be done with this SQL command:
SET CLIENT_ENCODING TO 'value';
Also you can use the more standard SQL syntax SET NAMES for this purpose:
SET NAMES 'value';
If the conversion of a particular character is not possible — suppose you chose EUC_JP for the server and
LATIN1 for the client, then some Japanese characters cannot be converted to LATIN1 — it is transformed
to its hexadecimal byte values in parentheses, e.g., (826C).
20.2.3. Further Reading
These are good sources to start learning about various kinds of encoding systems:
ftp://ftp.ora.com/pub/examples/nutshell/ujip/doc/cjk.inf
Detailed explanations of EUC_JP, EUC_CN, EUC_KR, EUC_TW appear in section 3.2.
http://www.unicode.org/
The web site of the Unicode Consortium
RFC 2044
UTF-8 is defined here.
Chapter 21. Routine Database Maintenance Tasks
There are a few routine maintenance chores that must be performed on a regular basis to keep a Post-
greSQL server running smoothly. The tasks discussed here are repetitive in nature and can easily be auto-
mated using standard Unix tools such as cron scripts. But it is the database administrator’s responsibility
to set up appropriate scripts, and to check that they execute successfully.
One obvious maintenance task is creation of backup copies of the data on a regular schedule. Without a
recent backup, you have no chance of recovery after a catastrophe (disk failure, fire, mistakenly dropping a
critical table, etc.). The backup and recovery mechanisms available in PostgreSQL are discussed at length
in Chapter 22.
The other main category of maintenance task is periodic “vacuuming” of the database. This activity is
discussed in Section 21.1.
Something else that might need periodic attention is log file management. This is discussed in Section
21.3.
PostgreSQL is low-maintenance compared to some other database management systems. Nonetheless,
appropriate attention to these tasks will go far towards ensuring a pleasant and productive experience with
the system.
In normal PostgreSQL operation, an UPDATE or DELETE of a row does not immediately remove the old
version of the row. This approach is necessary to gain the benefits of multiversion concurrency control (see
Chapter 12): the row version must not be deleted while it is still potentially visible to other transactions.
But eventually, an outdated or deleted row version is no longer of interest to any transaction. The space it
occupies must be reclaimed for reuse by new rows, to avoid infinite growth of disk space requirements.
This is done by running VACUUM.
Clearly, a table that receives frequent updates or deletes will need to be vacuumed more often than tables
that are seldom updated. It may be useful to set up periodic cron tasks that VACUUM only selected tables,
skipping tables that are known not to change often. This is only likely to be helpful if you have both large
heavily-updated tables and large seldom-updated tables — the extra cost of vacuuming a small table isn’t
enough to be worth worrying about.
There are two variants of the VACUUM command. The first form, known as “lazy vacuum” or just VACUUM,
marks expired data in tables and indexes for future reuse; it does not attempt to reclaim the space used
by this expired data immediately. Therefore, the table file is not shortened, and any unused space in the
file is not returned to the operating system. This variant of VACUUM can be run concurrently with normal
database operations.
The second form is the VACUUM FULL command. This uses a more aggressive algorithm for reclaiming
the space consumed by expired row versions. Any space that is freed by VACUUM FULL is immediately
returned to the operating system. Unfortunately, this variant of the VACUUM command acquires an exclu-
sive lock on each table while VACUUM FULL is processing it. Therefore, frequently using VACUUM FULL
can have an extremely negative effect on the performance of concurrent database queries.
The standard form of VACUUM is best used with the goal of maintaining a fairly level steady-state usage
of disk space. If you need to return disk space to the operating system you can use VACUUM FULL —
but what’s the point of releasing disk space that will only have to be allocated again soon? Moderately
frequent standard VACUUM runs are a better approach than infrequent VACUUM FULL runs for maintaining
heavily-updated tables.
Recommended practice for most sites is to schedule a database-wide VACUUM once a day at a low-usage
time of day, supplemented by more frequent vacuuming of heavily-updated tables if necessary. (Some
installations with an extremely high rate of data modification VACUUM busy tables as often as once every
few minutes.) If you have multiple databases in a cluster, don’t forget to VACUUM each one; the program
vacuumdb may be helpful.
Tip: The contrib/pg_autovacuum program can be useful for automating high-frequency vacuuming
operations.
VACUUM FULL is recommended for cases where you know you have deleted the majority of rows in a
table, so that the steady-state size of the table can be shrunk substantially with VACUUM FULL’s more
aggressive approach. Use plain VACUUM, not VACUUM FULL, for routine vacuuming for space recovery.
If you have a table whose contents are deleted on a periodic basis, consider doing it with TRUNCATE rather
than using DELETE followed by VACUUM. TRUNCATE removes the entire content of the table immediately,
without requiring a subsequent VACUUM or VACUUM FULL to reclaim the now-unused disk space.
The PostgreSQL query planner relies on statistical information about the contents of tables in order to
generate good plans for queries. These statistics are gathered by the ANALYZE command, which can be
invoked by itself or as an optional step in VACUUM. It is important to have reasonably accurate statistics,
otherwise poor choices of plans may degrade database performance.
As with vacuuming for space recovery, frequent updates of statistics are more useful for heavily-updated
tables than for seldom-updated ones. But even for a heavily-updated table, there may be no need for
statistics updates if the statistical distribution of the data is not changing much. A simple rule of thumb
is to think about how much the minimum and maximum values of the columns in the table change.
For example, a timestamp column that contains the time of row update will have a constantly-increasing
maximum value as rows are added and updated; such a column will probably need more frequent statistics
updates than, say, a column containing URLs for pages accessed on a website. The URL column may
receive changes just as often, but the statistical distribution of its values probably changes relatively slowly.
It is possible to run ANALYZE on specific tables and even just specific columns of a table, so the flexibility
exists to update some statistics more frequently than others if your application requires it. In practice,
however, the usefulness of this feature is doubtful. Beginning in PostgreSQL 7.2, ANALYZE is a fairly fast
operation even on large tables, because it uses a statistical random sampling of the rows of a table rather
than reading every single row. So it’s probably much simpler to just run it over the whole database every
so often.
Tip: Although per-column tweaking of ANALYZE frequency may not be very productive, you may well
find it worthwhile to do per-column adjustment of the level of detail of the statistics collected by
ANALYZE. Columns that are heavily used in WHERE clauses and have highly irregular data distributions
may require a finer-grain data histogram than other columns. See ALTER TABLE SET STATISTICS.
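For instance, to gather a more detailed histogram for a single column, one might run something like the following (the table and column names here are placeholders):
ALTER TABLE mytable ALTER COLUMN mycol SET STATISTICS 200;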
Recommended practice for most sites is to schedule a database-wide ANALYZE once a day at a low-usage
time of day; this can usefully be combined with a nightly VACUUM. However, sites with relatively slowly
changing table statistics may find that this is overkill, and that less-frequent ANALYZE runs are sufficient.
The new approach to XID comparison distinguishes two special XIDs, numbers 1 and 2 (BootstrapXID
and FrozenXID). These two XIDs are always considered older than every normal XID. Normal XIDs
(those greater than 2) are compared using modulo-2^31 arithmetic. This means that for every normal XID,
there are two billion XIDs that are “older” and two billion that are “newer”; another way to say it is that
the normal XID space is circular with no endpoint. Therefore, once a row version has been created with a
particular normal XID, the row version will appear to be “in the past” for the next two billion transactions,
no matter which normal XID we are talking about. If the row version still exists after more than two billion
transactions, it will suddenly appear to be in the future. To prevent data loss, old row versions must be
reassigned the XID FrozenXID sometime before they reach the two-billion-transactions-old mark. Once
they are assigned this special XID, they will appear to be “in the past” to all normal transactions regardless
of wraparound issues, and so such row versions will be good until deleted, no matter how long that is.
This reassignment of XID is handled by VACUUM.
VACUUM’s normal policy is to reassign FrozenXID to any row version with a normal XID more than one
billion transactions in the past. This policy preserves the original insertion XID until it is not likely to be
of interest anymore. (In fact, most row versions will probably live and die without ever being “frozen”.)
With this policy, the maximum safe interval between VACUUM runs on any table is exactly one billion
transactions: if you wait longer, it’s possible that a row version that was not quite old enough to be
reassigned last time is now more than two billion transactions old and has wrapped around into the future
— i.e., is lost to you. (Of course, it’ll reappear after another two billion transactions, but that’s no help.)
Since periodic VACUUM runs are needed anyway for the reasons described earlier, it’s unlikely that any
table would not be vacuumed for as long as a billion transactions. But to help administrators ensure this
constraint is met, VACUUM stores transaction ID statistics in the system table pg_database. In particu-
lar, the datfrozenxid column of a database’s pg_database row is updated at the completion of any
database-wide VACUUM operation (i.e., VACUUM that does not name a specific table). The value stored in
this field is the freeze cutoff XID that was used by that VACUUM command. All normal XIDs older than
this cutoff XID are guaranteed to have been replaced by FrozenXID within that database. A convenient
way to examine this information is to execute the query
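SELECT datname, age(datfrozenxid) FROM pg_database;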
The age column measures the number of transactions from the cutoff XID to the current transaction’s
XID.
With the standard freezing policy, the age column will start at one billion for a freshly-vacuumed database.
When the age approaches two billion, the database must be vacuumed again to avoid risk of wraparound
failures. Recommended practice is to VACUUM each database at least once every half-a-billion (500 million)
transactions, so as to provide plenty of safety margin. To help meet this rule, each database-wide VACUUM
automatically delivers a warning if there are any pg_database entries showing an age of more than 1.5
billion transactions, for example:
play=# VACUUM;
WARNING: some databases have not been vacuumed in 1613770184 transactions
HINT: Better vacuum them within 533713463 transactions, or you may have a wraparound failure.
VACUUM
VACUUM with the FREEZE option uses a more aggressive freezing policy: row versions are frozen if they
are old enough to be considered good by all open transactions. In particular, if a VACUUM FREEZE is
performed in an otherwise-idle database, it is guaranteed that all row versions in that database will be
frozen. Hence, as long as the database is not modified in any way, it will not need subsequent vacu-
uming to avoid transaction ID wraparound problems. This technique is used by initdb to prepare the
template0 database. It should also be used to prepare any user-created databases that are to be marked
datallowconn = false in pg_database, since there isn’t any convenient way to VACUUM a database
that you can’t connect to. Note that VACUUM’s automatic warning message about unvacuumed databases
will ignore pg_database entries with datallowconn = false, so as to avoid giving false warnings
about these databases; therefore it’s up to you to ensure that such databases are frozen correctly.
Warning
To be sure of safety against transaction wraparound, it is necessary to vacuum
every table, including system catalogs, in every database at least once every billion
transactions. We have seen data loss situations caused by people deciding that they
only needed to vacuum their active user tables, rather than issuing database-wide
vacuum commands. That will appear to work fine ... for a while.
Another production-grade approach to managing log output is to send it all to syslog and let syslog deal
with file rotation. To do this, set the configuration parameter log_destination to syslog (to log to
syslog only) in postgresql.conf. Then you can send a SIGHUP signal to the syslog daemon whenever
you want to force it to start writing a new log file. If you want to automate log rotation, the logrotate
program can be configured to work with log files from syslog.
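For example, the relevant line in postgresql.conf might be:
log_destination = 'syslog'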
On many systems, however, syslog is not very reliable, particularly with large log messages; it may trun-
cate or drop messages just when you need them the most. Also, on Linux, syslog will sync each message
to disk, yielding poor performance. (You can use a - at the start of the file name in the syslog configuration
file to disable this behavior.)
Note that all the solutions described above take care of starting new log files at configurable intervals, but
they do not handle deletion of old, no-longer-interesting log files. You will probably want to set up a batch
job to periodically delete old log files. Another possibility is to configure the rotation program so that old
log files are overwritten cyclically.
Chapter 22. Backup and Restore
As with everything that contains valuable data, PostgreSQL databases should be backed up regularly.
While the procedure is essentially simple, it is important to have a basic understanding of the underlying
techniques and assumptions.
There are three fundamentally different approaches to backing up PostgreSQL data:
• SQL dump
• File system level backup
• On-line backup
Each has its own strengths and weaknesses.
22.1. SQL Dump
The idea behind the SQL-dump method is to generate a text file with SQL commands that, when fed back
to the server, will recreate the database in the same state as it was at the time of the dump. PostgreSQL
provides the utility program pg_dump for this purpose. The basic usage of this command is:
pg_dump dbname > outfile
As you see, pg_dump writes its results to the standard output. We will see below how this can be useful.
pg_dump is a regular PostgreSQL client application (albeit a particularly clever one). This means that you
can do this backup procedure from any remote host that has access to the database. But remember that
pg_dump does not operate with special permissions. In particular, it must have read access to all tables
that you want to back up, so in practice you almost always have to run it as a database superuser.
To specify which database server pg_dump should contact, use the command line options -h host and
-p port. The default host is the local host or whatever your PGHOST environment variable specifies. Sim-
ilarly, the default port is indicated by the PGPORT environment variable or, failing that, by the compiled-in
default. (Conveniently, the server will normally have the same compiled-in default.)
As any other PostgreSQL client application, pg_dump will by default connect with the database user name
that is equal to the current operating system user name. To override this, either specify the -U option or set
the environment variable PGUSER. Remember that pg_dump connections are subject to the normal client
authentication mechanisms (which are described in Chapter 19).
Dumps created by pg_dump are internally consistent, that is, updates to the database while pg_dump is
running will not be in the dump. pg_dump does not block other operations on the database while it is
working. (Exceptions are those operations that need to operate with an exclusive lock, such as VACUUM
FULL.)
Important: When your database schema relies on OIDs (for instance as foreign keys) you must
instruct pg_dump to dump the OIDs as well. To do this, use the -o command line option. “Large
objects” are not dumped by default, either. See pg_dump’s reference page if you use large objects.
22.1.1. Restoring the Dump
The text files created by pg_dump are intended to be read in by the psql program. The general command
form to restore a dump is
psql dbname < infile
where infile is what you used as outfile for the pg_dump command. The database dbname will
not be created by this command, you must create it yourself from template0 before executing psql (e.g.,
with createdb -T template0 dbname). psql supports options similar to pg_dump for controlling the
database server location and the user name. See psql’s reference page for more information.
Not only must the target database already exist before starting to run the restore, but so must all the users
who own objects in the dumped database or were granted permissions on the objects. If they do not, then
the restore will fail to recreate the objects with the original ownership and/or permissions. (Sometimes
this is what you want, but usually it is not.)
Once restored, it is wise to run ANALYZE on each database so the optimizer has useful statistics. An
easy way to do this is to run vacuumdb -a -z to VACUUM ANALYZE all databases; this is equivalent to
running VACUUM ANALYZE manually.
The ability of pg_dump and psql to write to or read from pipes makes it possible to dump a database
directly from one server to another; for example:
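pg_dump -h host1 dbname | psql -h host2 dbname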
Important: The dumps produced by pg_dump are relative to template0. This means that any
languages, procedures, etc. added to template1 will also be dumped by pg_dump. As a result,
when restoring, if you are using a customized template1, you must create the empty database from
template0, as in the example above.
For advice on how to load large amounts of data into PostgreSQL efficiently, refer to Section 13.4.
22.1.2. Using pg_dumpall
pg_dump dumps only a single database at a time. To dump an entire database cluster, including cluster-wide data such as users and groups, use the pg_dumpall program:
pg_dumpall > outfile
The resulting dump can be restored with psql:
psql template1 < infile
(Actually, you can specify any existing database name to start from, but if you are reloading in an empty
cluster then template1 is the only available choice.) It is always necessary to have database superuser
access when restoring a pg_dumpall dump, as that is required to restore the user and group information.
22.1.3. Handling Large Databases
Since pg_dump writes to the standard output, you can use standard Unix tools to work around size limits on dump files.
Use compressed dumps. Use your favorite compression program, for example gzip:
pg_dump dbname | gzip > filename.gz
Reload with
createdb dbname
gunzip -c filename.gz | psql dbname
or
cat filename.gz | gunzip | psql dbname
Use split. The split command allows you to split the output into pieces that are acceptable in size to
the underlying file system. For example, to make chunks of 1 megabyte:
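pg_dump dbname | split -b 1m - filename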
Reload with
createdb dbname
cat filename* | psql dbname
Use the custom dump format. If PostgreSQL was built on a system with the zlib compression library
installed, the custom dump format will compress data as it writes it to the output file. This will produce
dump file sizes similar to using gzip, but it has the added advantage that tables can be restored selectively.
The following command dumps a database using the custom dump format:
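pg_dump -Fc dbname > filename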
A custom-format dump is not a script for psql, but instead must be restored with pg_restore. See the
pg_dump and pg_restore reference pages for details.
22.1.4. Caveats
For reasons of backward compatibility, pg_dump does not dump large objects by default. To dump large
objects you must use either the custom or the tar output format, and use the -b option in pg_dump. See
the pg_dump reference page for details. The directory contrib/pg_dumplo of the PostgreSQL source
tree also contains a program that can dump large objects.
Please familiarize yourself with the pg_dump reference page.
There are two restrictions, however, which make this method impractical, or at least inferior to the
pg_dump method:
1. The database server must be shut down in order to get a usable backup. Half-way measures such as
disallowing all connections will not work (mainly because tar and similar tools do not take an atomic
snapshot of the state of the file system at a point in time). Information about stopping the server can
be found in Section 16.6. Needless to say, you also need to shut down the server before restoring
the data.
2. If you have dug into the details of the file system layout of the database, you may be tempted to try to
back up or restore only certain individual tables or databases from their respective files or directories.
This will not work because the information contained in these files contains only half the truth. The
other half is in the commit log files pg_clog/*, which contain the commit status of all transactions.
A table file is only usable with this information. Of course it is also impossible to restore only a table
and the associated pg_clog data because that would render all other tables in the database cluster
useless. So file system backups only work for complete restoration of an entire database cluster.
An alternative file-system backup approach is to make a “consistent snapshot” of the data directory, if
the file system supports that functionality (and you are willing to trust that it is implemented correctly).
The typical procedure is to make a “frozen snapshot” of the volume containing the database, then copy
the whole data directory (not just parts, see above) from the snapshot to a backup device, then release the
frozen snapshot. This will work even while the database server is running. However, a backup created in
this way saves the database files in a state where the database server was not properly shut down; therefore,
when you start the database server on the backed-up data, it will think the server had crashed and replay
the WAL log. This is not a problem, just be aware of it (and be sure to include the WAL files in your
backup).
If your database is spread across multiple volumes (for example, data files and WAL log on different
disks) there may not be any way to obtain exactly-simultaneous frozen snapshots of all the volumes. Read
your file system documentation very carefully before trusting to the consistent-snapshot technique in such
situations. The safest approach is to shut down the database server for long enough to establish all the
frozen snapshots.
Note that a file system backup will not necessarily be smaller than an SQL dump. On the contrary, it
will most likely be larger. (pg_dump does not need to dump the contents of indexes for example, just the
commands to recreate them.)
Combining a file-system-level backup with archiving of the write-ahead log (WAL) files gives a third backup
strategy. Its advantages include:
• We do not need a perfectly consistent backup as the starting point. Any internal inconsistency in the
backup will be corrected by log replay (this is not significantly different from what happens during
crash recovery). So we don’t need file system snapshot capability, just tar or a similar archiving tool.
• Since we can string together an indefinitely long sequence of WAL files for replay, continuous backup
can be achieved simply by continuing to archive the WAL files. This is particularly valuable for large
databases, where it may not be convenient to take a full backup frequently.
• There is nothing that says we have to replay the WAL entries all the way to the end. We could stop
the replay at any point and have a consistent snapshot of the database as it was at that time. Thus, this
technique supports point-in-time recovery: it is possible to restore the database to its state at any time
since your base backup was taken.
• If we continuously feed the series of WAL files to another machine that has been loaded with the same
base backup file, we have a “hot standby” system: at any point we can bring up the second machine and
it will have a nearly-current copy of the database.
As with the plain file-system-backup technique, this method can only support restoration of an entire
database cluster, not a subset. Also, it requires a lot of archival storage: the base backup may be bulky,
and a busy system will generate many megabytes of WAL traffic that have to be archived. Still, it is the
preferred backup technique in many situations where high reliability is needed.
To recover successfully using an on-line backup, you need a continuous sequence of archived WAL files
that extends back at least as far as the start time of your backup. So to get started, you should set up and
test your procedure for archiving WAL files before you take your first base backup. Accordingly, we first
discuss the mechanics of archiving WAL files.
Depending on your application and the available hardware, there could be many different ways of “saving the data somewhere”: we could copy
the segment files to an NFS-mounted directory on another machine, write them onto a tape drive (ensuring
that you have a way of restoring the file with its original file name), or batch them together and burn them
onto CDs, or something else entirely. To provide the database administrator with as much flexibility as
possible, PostgreSQL tries not to make any assumptions about how the archiving will be done. Instead,
PostgreSQL lets the administrator specify a shell command to be executed to copy a completed segment
file to wherever it needs to go. The command could be as simple as a cp, or it could invoke a complex
shell script — it’s all up to you.
The shell command to use is specified by the archive_command configuration parameter, which in practice
will always be placed in the postgresql.conf file. In this string, any %p is replaced by the absolute path
of the file to archive, while any %f is replaced by the file name only. Write %% if you need to embed an
actual % character in the command. The simplest useful command is something like
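archive_command = 'cp -i %p /mnt/server/archivedir/%f </dev/null'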
which will copy archivable WAL segments to the directory /mnt/server/archivedir. (This is an
example, not a recommendation, and may not work on all platforms.)
The archive command will be executed under the ownership of the same user that the PostgreSQL server is
running as. Since the series of WAL files being archived contains effectively everything in your database,
you will want to be sure that the archived data is protected from prying eyes; for example, archive into a
directory that does not have group or world read access.
It is important that the archive command return zero exit status if and only if it succeeded. Upon getting a
zero result, PostgreSQL will assume that the WAL segment file has been successfully archived, and will
remove or recycle it. However, a nonzero status tells PostgreSQL that the file was not archived; it will try
again periodically until it succeeds.
The archive command should generally be designed to refuse to overwrite any pre-existing archive file.
This is an important safety feature to preserve the integrity of your archive in case of administrator error
(such as sending the output of two different servers to the same archive directory). It is advisable to test
your proposed archive command to ensure that it indeed does not overwrite an existing file, and that it
returns nonzero status in this case. We have found that cp -i does this correctly on some platforms but
not others. If the chosen command does not itself handle this case correctly, you should add a command
to test for pre-existence of the archive file. For example, something like
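archive_command = 'test ! -f /mnt/server/archivedir/%f && cp %p /mnt/server/archivedir/%f'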
If the archive command fails repeatedly, PostgreSQL keeps retrying and the pg_xlog/ directory fills with
not-yet-archived segment files, which could eventually exceed available disk space. You are advised to
monitor the archiving process to ensure that it is working as you intend.
If you are concerned about being able to recover right up to the current instant, you may want to take
additional steps to ensure that the current, partially-filled WAL segment is also copied someplace. This is
particularly important if your server generates little WAL traffic (or has slack periods when it does so),
since it could take a long time before a WAL segment file is completely filled and ready to archive.
One possible way to handle this is to set up a cron job that periodically (once a minute, perhaps) identifies
the current WAL segment file and saves it someplace safe. Then the combination of the archived WAL
segments and the saved current segment will be enough to ensure you can always restore to within a
minute of current time. This behavior is not presently built into PostgreSQL because we did not want to
complicate the definition of the archive_command by requiring it to keep track of successively archived,
but different, copies of the same WAL file. The archive_command is only invoked on completed WAL
segments. Except in the case of retrying a failure, it will be called only once for any given file name.
In writing your archive command, you should assume that the filenames to be archived may be up to 64
characters long and may contain any combination of ASCII letters, digits, and dots. It is not necessary to
remember the original full path (%p) but it is necessary to remember the file name (%f).
Note that although WAL archiving will allow you to restore any modifications made to the data in your
PostgreSQL database it will not restore changes made to configuration files (that is, postgresql.conf,
pg_hba.conf and pg_ident.conf), since those are edited manually rather than through SQL opera-
tions. You may wish to keep the configuration files in a location that will be backed up by your regular
file system backup procedures. See Section 16.4.1 for how to relocate the configuration files.
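The procedure for making a base backup is relatively simple:
1. Ensure that WAL archiving is enabled and working.
2. Connect to the database as a superuser, and issue the command
SELECT pg_start_backup('label');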
where label is any string you want to use to uniquely identify this backup operation. (One good
practice is to use the full path where you intend to put the backup dump file.) pg_start_backup
creates a backup label file, called backup_label, in the cluster directory with information about
your backup.
It does not matter which database within the cluster you connect to to issue this command. You can
ignore the result returned by the function; but if it reports an error, deal with that before proceeding.
3. Perform the backup, using any convenient file-system-backup tool such as tar or cpio. It is neither
necessary nor desirable to stop normal operation of the database while you do this.
4. Again connect to the database as a superuser, and issue the command
SELECT pg_stop_backup();
It is not necessary to be very concerned about the amount of time elapsed between pg_start_backup
and the start of the actual backup, nor between the end of the backup and pg_stop_backup; a few
minutes’ delay won’t hurt anything. You must however be quite sure that these operations are carried out
in sequence and do not overlap.
Be certain that your backup dump includes all of the files underneath the database cluster directory (e.g.,
/usr/local/pgsql/data). If you are using tablespaces that do not reside underneath this directory,
be careful to include them as well (and be sure that your backup dump archives symbolic links as links,
otherwise the restore will mess up your tablespaces).
You may, however, omit from the backup dump the files within the pg_xlog/ subdirectory of the cluster
directory. This slight complication is worthwhile because it reduces the risk of mistakes when restoring.
This is easy to arrange if pg_xlog/ is a symbolic link pointing to someplace outside the cluster directory,
which is a common setup anyway for performance reasons.
To make use of this backup, you will need to keep around all the WAL segment files generated at or
after the starting time of the backup. To aid you in doing this, the pg_stop_backup function creates
a backup history file that is immediately stored into the WAL archive area. This file is named after the
first WAL segment file that you need to have to make use of the backup. For example, if the start-
ing WAL file is 0000000100001234000055CD the backup history file will be named something like
0000000100001234000055CD.007C9330.backup. (The second part of this file name stands for an
exact position within the WAL file, and can ordinarily be ignored.) Once you have safely archived the
backup dump file, you can delete all archived WAL segments with names numerically preceding this one.
The backup history file is just a small text file. It contains the label string you gave to pg_start_backup,
as well as the starting and ending times of the backup. If you used the label to identify where the asso-
ciated dump file is kept, then the archived history file is enough to tell you which dump file to restore,
should you need to do so.
Since you have to keep around all the archived WAL files back to your last base backup, the interval
between base backups should usually be chosen based on how much storage you want to expend on
archived WAL files. You should also consider how long you are prepared to spend recovering, if recovery
should be necessary — the system will have to replay all those WAL segments, and that could take awhile
if it has been a long time since the last base backup.
It’s also worth noting that the pg_start_backup function makes a file named backup_label in the
database cluster directory, which is then removed again by pg_stop_backup. This file will of course
be archived as a part of your backup dump file. The backup label file includes the label string you gave
to pg_start_backup, as well as the time at which pg_start_backup was run, and the name of the
starting WAL file. In case of confusion it will therefore be possible to look inside a backup dump file and
determine exactly which backup session the dump file came from.
It is also possible to make a backup dump while the postmaster is stopped. In this case, you obviously
cannot use pg_start_backup or pg_stop_backup, and you will therefore be left to your own devices
to keep track of which backup dump is which and how far back the associated WAL files go. It is generally
better to follow the on-line backup procedure above.
The key part of all this is to set up a recovery command file that describes how you want to recover and how
far the recovery should run. You can use recovery.conf.sample (normally installed in the installation
share/ directory) as a prototype. The one thing that you absolutely must specify in recovery.conf is
the restore_command, which tells PostgreSQL how to get back archived WAL file segments. Like the
archive_command, this is a shell command string. It may contain %f, which is replaced by the name of
the desired log file, and %p, which is replaced by the absolute path to copy the log file to. Write %% if you
need to embed an actual % character in the command. The simplest useful command is something like
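restore_command = 'cp /mnt/server/archivedir/%f %p'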
which will copy previously archived WAL segments from the directory /mnt/server/archivedir.
You could of course use something much more complicated, perhaps even a shell script that requests the
operator to mount an appropriate tape.
It is important that the command return nonzero exit status on failure. The command will be asked for
log files that are not present in the archive; it must return nonzero when so asked. This is not an error
condition. Be aware also that the base name of the %p path will be different from %f; do not expect them
to be interchangeable.
WAL segments that cannot be found in the archive will be sought in pg_xlog/; this allows use of recent
un-archived segments. However segments that are available from the archive will be used in preference
to files in pg_xlog/. The system will not overwrite the existing contents of pg_xlog/ when retrieving
archived files.
Normally, recovery will proceed through all available WAL segments, thereby restoring the database to
the current point in time (or as close as we can get given the available WAL segments). But if you want to
recover to some previous point in time (say, right before the junior DBA dropped your main transaction
table), just specify the required stopping point in recovery.conf. You can specify the stop point, known
as the “recovery target”, either by date/time or by completion of a specific transaction ID. As of this writing
only the date/time option is very usable, since there are no tools to help you identify with any accuracy
which transaction ID to use.
Note: The stop point must be after the ending time of the base backup (the time of pg_stop_backup).
You cannot use a base backup to recover to a time when that backup was still going on. (To recover
to such a time, you must go back to your previous base backup and roll forward from there.)
restore_command (string)
The shell command to execute to retrieve an archived segment of the WAL file series. This parameter
is required. Any %f in the string is replaced by the name of the file to retrieve from the archive, and
any %p is replaced by the absolute path to copy it to on the server. Write %% to embed an actual %
character in the command.
It is important for the command to return a zero exit status if and only if it succeeds. The command
will be asked for file names that are not present in the archive; it must return nonzero when so asked.
Examples:
restore_command = ’cp /mnt/server/archivedir/%f "%p"’
restore_command = ’copy /mnt/server/archivedir/%f "%p"’ # Windows
recovery_target_time (timestamp)
This parameter specifies the time stamp up to which recovery will proceed. At most one of
recovery_target_time and recovery_target_xid can be specified. The default is to recover to the
end of the WAL log. The precise stopping point is also influenced by recovery_target_inclusive.
recovery_target_xid (string)
This parameter specifies the transaction ID up to which recovery will proceed. Keep in mind that
while transaction IDs are assigned sequentially at transaction start, transactions can complete in a
different numeric order. The transactions that will be recovered are those that committed before
(and optionally including) the specified one. At most one of recovery_target_xid and recov-
ery_target_time can be specified. The default is to recover to the end of the WAL log. The precise
stopping point is also influenced by recovery_target_inclusive.
recovery_target_inclusive (boolean)
Specifies whether we stop just after the specified recovery target (true), or just before the recov-
ery target (false). Applies to both recovery_target_time and recovery_target_xid, whichever one is
specified for this recovery. This indicates whether transactions having exactly the target commit time
or ID, respectively, will be included in the recovery. Default is true.
recovery_target_timeline (string)
Specifies recovering into a particular timeline. The default is to recover along the same timeline that
was current when the base backup was taken. You would only need to set this parameter in complex
re-recovery situations, where you need to return to a state that itself was reached after a point-in-time
recovery. See Section 22.3.4 for discussion.
22.3.4. Timelines
The ability to restore the database to a previous point in time creates some complexities that are akin
to science-fiction stories about time travel and parallel universes. In the original history of the database,
perhaps you dropped a critical table at 5:15PM on Tuesday evening. Unfazed, you get out your backup, re-
store to the point-in-time 5:14PM Tuesday evening, and are up and running. In this history of the database
universe, you never dropped the table at all. But suppose you later realize this wasn’t such a great idea
after all, and would like to return to some later point in the original history. You won’t be able to if,
while your database was up-and-running, it overwrote some of the sequence of WAL segment files that
led up to the time you now wish you could get back to. So you really want to distinguish the series of
WAL records generated after you’ve done a point-in-time recovery from those that were generated in the
original database history.
To deal with these problems, PostgreSQL has a notion of timelines. Each time you recover to a point-in-
time earlier than the end of the WAL sequence, a new timeline is created to identify the series of WAL
records generated after that recovery. (If recovery proceeds all the way to the end of WAL, however, we do
not start a new timeline: we just extend the existing one.) The timeline ID number is part of WAL segment
file names, and so a new timeline does not overwrite the WAL data generated by previous timelines. It is in
fact possible to archive many different timelines. While that might seem like a useless feature, it’s often a
lifesaver. Consider the situation where you aren’t quite sure what point-in-time to recover to, and so have
to do several point-in-time recoveries by trial and error until you find the best place to branch off from the
old history. Without timelines this process would soon generate an unmanageable mess. With timelines,
you can recover to any prior state, including states in timeline branches that you later abandoned.
Each time a new timeline is created, PostgreSQL creates a “timeline history” file that shows which time-
line it branched off from and when. These history files are necessary to allow the system to pick the right
WAL segment files when recovering from an archive that contains multiple timelines. Therefore, they are
archived into the WAL archive area just like WAL segment files. The history files are just small text files,
so it’s cheap and appropriate to keep them around indefinitely (unlike the segment files which are large).
You can, if you like, add comments to a history file to make your own notes about how and why this
particular timeline came to be. Such comments will be especially valuable when you have a thicket of
different timelines as a result of experimentation.
The default behavior of recovery is to recover along the same timeline that was current when the base
backup was taken. If you want to recover into some child timeline (that is, you want to return to some
state that was itself generated after a recovery attempt), you need to specify the target timeline ID in
recovery.conf. You cannot recover into timelines that branched off earlier than the base backup.
22.3.5. Caveats
At this writing, there are several limitations of the on-line backup technique. These will probably be fixed
in future releases:
• Operations on non-B-tree indexes (hash, R-tree, and GiST indexes) are not presently WAL-logged, so
replay will not update these index types. The recommended workaround is to manually REINDEX each
such index after completing a recovery operation.
It should also be noted that the present WAL format is extremely bulky since it includes many disk page
snapshots. This is appropriate for crash recovery purposes, since we may need to fix partially-written disk
pages. It is not necessary to store so many page copies for PITR operations, however. An area for future
development is to compress archived WAL data by removing unnecessary page copies.
The least downtime can be achieved by installing the new server in a different directory and running both
the old and the new servers in parallel, on different ports. Then you can use something like
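pg_dumpall -p 5432 | psql -d template1 -p 6543
(The port numbers are only an example; substitute the ports your old and new servers actually use.)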
to transfer your data. Or use an intermediate file if you want. Then you can shut down the old server and
start the new server at the port the old one was running at. You should make sure that the old database
is not updated after you run pg_dumpall, otherwise you will obviously lose that data. See Chapter 19 for
information on how to prohibit access.
In practice you probably want to test your client applications on the new setup before switching over
completely. This is another reason for setting up concurrent installations of old and new versions.
If you cannot or do not want to run two servers in parallel you can do the backup step before installing the
new version, bring down the server, move the old version out of the way, install the new version, start the
new server, restore the data. For example:
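pg_dumpall > backup
pg_ctl stop
mv /usr/local/pgsql /usr/local/pgsql.old
cd ~/postgresql-8.0.0
gmake install
initdb -D /usr/local/pgsql/data
postmaster -D /usr/local/pgsql/data
psql -f backup template1
(The paths and version number are illustrative; adjust them to match your installation.)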
See Chapter 16 about ways to start and stop the server and other details. The installation instructions will
advise you of strategic places to perform these steps.
Note: When you “move the old installation out of the way” it may no longer be perfectly usable. Some
of the executable programs contain absolute paths to various installed programs and data files. This
is usually not a big problem but if you plan on using two installations in parallel for a while you should
assign them different installation directories at build time. (This problem is rectified in PostgreSQL 8.0
and later, but you need to be wary of moving older installations.)
Chapter 23. Monitoring Database Activity
A database administrator frequently wonders, “What is the system doing right now?” This chapter dis-
cusses how to find that out.
Several tools are available for monitoring database activity and analyzing performance. Most of this chap-
ter is devoted to describing PostgreSQL’s statistics collector, but one should not neglect regular Unix
monitoring programs such as ps, top, iostat, and vmstat. Also, once one has identified a poorly-
performing query, further investigation may be needed using PostgreSQL’s EXPLAIN command. Section
13.1 discusses EXPLAIN and other methods for understanding the behavior of an individual query.
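On most platforms, PostgreSQL modifies the command title reported by ps, so the server processes can
readily be identified with an invocation along these lines:
$ ps auxww | grep ^postgres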
(The appropriate invocation of ps varies across different platforms, as do the details of what is shown.
This example is from a recent Linux system.) The first process listed here is the postmaster, the master
server process. The command arguments shown for it are the same ones given when it was launched.
The next two processes implement the statistics collector, which will be described in detail in the next
section. (These will not be present if you have set the system not to start the statistics collector.) Each
of the remaining processes is a server process handling one client connection. Each such process sets its
command line display in the form
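postgres: user database host activity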
The user, database, and connection source host items remain the same for the life of the client connection,
but the activity indicator changes. The activity may be idle (i.e., waiting for a client command), idle
in transaction (waiting for client inside a BEGIN block), or a command type name such as SELECT.
Also, waiting is attached if the server process is presently waiting on a lock held by another server
process. In the above example we can infer that process 1003 is waiting for process 1016 to complete its
transaction and thereby release some lock or other.
Tip: Solaris requires special handling. You must use /usr/ucb/ps, rather than /bin/ps. You also
must use two w flags, not just one. In addition, your original invocation of the postmaster command
must have a shorter ps status display than that provided by each server process. If you fail to do all
three things, the ps output for each server process will be the original postmaster command line.
Once fetched within a transaction, the displayed statistics do not change, which allows you to perform several queries on the statistics and correlate the results without worrying that the numbers are
changing underneath you. But if you want to see new results with each query, be sure to do the queries
outside any transaction block.
The per-index statistics are particularly useful to determine which indexes are being used and how effec-
tive they are.
The pg_statio_ views are primarily useful to determine the effectiveness of the buffer cache. When the
number of actual disk reads is much smaller than the number of buffer hits, then the cache is satisfying
most read requests without invoking a kernel call. However, these statistics do not give the entire story:
due to the way in which PostgreSQL handles disk I/O, data that is not in the PostgreSQL buffer cache
may still reside in the kernel’s I/O cache, and may therefore still be fetched without requiring a physical
read. Users interested in obtaining more detailed information on PostgreSQL I/O behavior are advised to
use the PostgreSQL statistics collector in combination with operating system utilities that allow insight
into the kernel’s handling of I/O.
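For example, a query like the following (using the pg_statio_user_tables view) compares actual disk reads
with buffer hits per table:
SELECT relname, heap_blks_read, heap_blks_hit
    FROM pg_statio_user_tables
    ORDER BY heap_blks_read DESC;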
Other ways of looking at the statistics can be set up by writing queries that use the same underlying
statistics access functions as these standard views do. These functions are listed in Table 23-2. The per-
database access functions take a database OID as argument to identify which database to report on. The
per-table and per-index functions take a table or index OID. (Note that only tables and indexes in the cur-
rent database can be seen with these functions.) The per-backend process access functions take a backend
process ID number, which ranges from one to the number of currently active backend processes.
Function                            Return Type      Description
pg_stat_get_tuples_deleted(oid)     bigint           Number of rows deleted from table
pg_stat_get_blocks_fetched(oid)     bigint           Number of disk block fetch requests for table or index
pg_stat_get_blocks_hit(oid)         bigint           Number of disk block requests found in cache for table or index
pg_stat_get_backend_idset()         set of integer   Set of currently active backend process IDs (from 1 to the number of active backend processes). See usage example in the text.
pg_backend_pid()                    integer          Process ID of the backend process attached to the current session
The function pg_stat_get_backend_idset provides a convenient way to generate one row for each
active backend process. For example, to show the PIDs and current queries of all backend processes:
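SELECT pg_stat_get_backend_pid(s.backendid) AS procpid,
       pg_stat_get_backend_activity(s.backendid) AS current_query
    FROM (SELECT pg_stat_get_backend_idset() AS backendid) AS s;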
The pg_locks system view gives real-time information about outstanding locks. For example, it allows the
database administrator to:
• View all the locks currently outstanding, all the locks on relations in a particular database, all the locks
on a particular relation, or all the locks held by a particular PostgreSQL session.
• Determine the relation in the current database with the most ungranted locks (which might be a source
of contention among database clients).
• Determine the effect of lock contention on overall database performance, as well as the extent to which
contention varies with overall database traffic.
Details of the pg_locks view appear in Section 41.33. For more information on locking and managing
concurrency with PostgreSQL, refer to Chapter 12.
Chapter 24. Monitoring Disk Usage
This chapter discusses how to monitor the disk usage of a PostgreSQL database system.
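For example, on a recently vacuumed or analyzed database you can check the disk usage of a table (here
named customer) with a query like:
SELECT relfilenode, relpages FROM pg_class WHERE relname = 'customer';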
relfilenode | relpages
-------------+----------
16806 | 60
(1 row)
Each page is typically 8 kilobytes. (Remember, relpages is only updated by VACUUM, ANALYZE, and
a few DDL commands such as CREATE INDEX.) The relfilenode value is of interest if you want to
examine the table’s disk file directly.
To show the space used by TOAST tables, use a query like the following:
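SELECT relname, relpages
    FROM pg_class,
         (SELECT reltoastrelid FROM pg_class
          WHERE relname = 'customer') ss
    WHERE oid = ss.reltoastrelid
       OR oid = (SELECT reltoastidxid FROM pg_class
                 WHERE oid = ss.reltoastrelid)
    ORDER BY relname;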
relname | relpages
----------------------+----------
pg_toast_16806 | 0
pg_toast_16806_index | 1
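Index sizes can be shown with a query along these lines (joining against pg_index is one reasonable way to
do it):
SELECT c2.relname, c2.relpages
    FROM pg_class c, pg_class c2, pg_index i
    WHERE c.relname = 'customer'
      AND c.oid = i.indrelid
      AND c2.oid = i.indexrelid
    ORDER BY c2.relname;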
relname | relpages
----------------------+----------
customer_id_indexdex | 26
It is easy to find your largest tables and indexes using this information:
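SELECT relname, relpages FROM pg_class ORDER BY relpages DESC;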
relname | relpages
----------------------+----------
bigtable | 3290
customer | 3144
contrib/dbsize loads functions into your database that allow you to find the size of a table or database
from inside psql without the need for VACUUM or ANALYZE.
You can also use contrib/oid2name to show disk usage. See README.oid2name in that directory for
examples. It includes a script that shows disk usage for each database.
Tip: Some file systems perform badly when they are almost full, so do not wait until the disk is
completely full to take action.
If your system supports per-user disk quotas, then the database will naturally be subject to whatever quota
is placed on the user the server runs as. Exceeding the quota will have the same bad effects as running out
of space entirely.
Chapter 25. Write-Ahead Logging (WAL)
Write-Ahead Logging (WAL) is a standard approach to transaction logging. Its detailed description may
be found in most (if not all) books about transaction processing. Briefly, WAL’s central concept is that
changes to data files (where tables and indexes reside) must be written only after those changes have been
logged, that is, when log records describing the changes have been flushed to permanent storage. If we
follow this procedure, we do not need to flush data pages to disk on every transaction commit, because we
know that in the event of a crash we will be able to recover the database using the log: any changes that
have not been applied to the data pages can be redone from the log records. (This is roll-forward recovery,
also known as REDO.)
At checkpoint time, all dirty data pages are flushed to disk and a special checkpoint record is written to the log file. As a result, in the event of a
crash, the crash recovery procedure knows from what point in the log (known as the redo record) it should
start the REDO operation, since any changes made to data files before that point are already on disk. After
a checkpoint has been made, any log segments written before the redo record are no longer needed and
can be recycled or removed. (When WAL archiving is being done, the log segments must be archived
before being recycled or removed.)
The server’s background writer process will automatically perform a checkpoint every so often. A
checkpoint is created every checkpoint_segments log segments, or every checkpoint_timeout seconds,
whichever comes first. The default settings are 3 segments and 300 seconds respectively. It is also
possible to force a checkpoint by using the SQL command CHECKPOINT.
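The defaults correspond to postgresql.conf entries like the following:
checkpoint_segments = 3         # in logfile segments
checkpoint_timeout = 300        # in seconds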
Reducing checkpoint_segments and/or checkpoint_timeout causes checkpoints to be done more
often. This allows faster after-crash recovery (since less work will need to be redone). However, one must
balance this against the increased cost of flushing dirty data pages more often. In addition, to ensure data
page consistency, the first modification of a data page after each checkpoint results in logging the entire
page content. Thus a smaller checkpoint interval increases the volume of output to the WAL log, partially
negating the goal of using a smaller interval, and in any case causing more disk I/O.
Checkpoints are fairly expensive, first because they require writing out all currently dirty buffers, and
second because they result in extra subsequent WAL traffic as discussed above. It is therefore wise to set
the checkpointing parameters high enough that checkpoints don’t happen too often. As a simple sanity
check on your checkpointing parameters, you can set the checkpoint_warning parameter. If checkpoints
happen closer together than checkpoint_warning seconds, a message will be output to the server log
recommending increasing checkpoint_segments. Occasional appearance of such a message is not
cause for alarm, but if it appears often then the checkpoint control parameters should be increased.
There will always be at least one WAL segment file, and normally there will not be more than 2 *
checkpoint_segments + 1 files. Each segment file is normally 16 MB (though this size can be altered
when building the server). You can use this to estimate space requirements for WAL. Ordinarily, when
old log segment files are no longer needed, they are recycled (renamed to become the next segments
in the numbered sequence). If, due to a short-term peak of log output rate, there are more than 2 *
checkpoint_segments + 1 segment files, the unneeded segment files will be deleted instead of
recycled until the system gets back under this limit.
There are two commonly used WAL functions: LogInsert and LogFlush. LogInsert is used to
place a new record into the WAL buffers in shared memory. If there is no space for the new record,
LogInsert will have to write (move to kernel cache) a few filled WAL buffers. This is undesirable be-
cause LogInsert is used on every database low level modification (for example, row insertion) at a
time when an exclusive lock is held on affected data pages, so the operation needs to be as fast as possi-
ble. What is worse, writing WAL buffers may also force the creation of a new log segment, which takes
even more time. Normally, WAL buffers should be written and flushed by a LogFlush request, which
is made, for the most part, at transaction commit time to ensure that transaction records are flushed to
permanent storage. On systems with high log output, LogFlush requests may not occur often enough to
prevent LogInsert from having to do writes. On such systems one should increase the number of WAL
buffers by modifying the configuration parameter wal_buffers. The default number of WAL buffers is 8.
Increasing this value will correspondingly increase shared memory usage. (It should be noted that there is
presently little evidence to suggest that increasing wal_buffers beyond the default is worthwhile.)
The commit_delay parameter defines for how many microseconds the server process will sleep after writ-
ing a commit record to the log with LogInsert but before performing a LogFlush. This delay allows
other server processes to add their commit records to the log so as to have all of them flushed with a single
log sync. No sleep will occur if fsync is not enabled, nor if fewer than commit_siblings other sessions are
currently in active transactions; this avoids sleeping when it’s unlikely that any other session will commit
soon. Note that on most platforms, the resolution of a sleep request is ten milliseconds, so that any nonzero
commit_delay setting between 1 and 10000 microseconds would have the same effect. Good values for
these parameters are not yet clear; experimentation is encouraged.
The wal_sync_method parameter determines how PostgreSQL will ask the kernel to force WAL updates
out to disk. All the options should be the same as far as reliability goes, but it’s quite platform-specific
which one will be the fastest. Note that this parameter is irrelevant if fsync has been turned off.
Enabling the wal_debug configuration parameter (provided that PostgreSQL has been compiled with sup-
port for it) will result in each LogInsert and LogFlush WAL call being logged to the server log. This
option may be replaced by a more general mechanism in the future.
25.3. Internals
WAL is automatically enabled; no action is required from the administrator except ensuring that the disk-
space requirements for the WAL logs are met, and that any necessary tuning is done (see Section 25.2).
WAL logs are stored in the directory pg_xlog under the data directory, as a set of segment files, normally
each 16 MB in size. Each segment is divided into pages, normally 8 KB each. The log record headers are
described in access/xlog.h; the record content is dependent on the type of event that is being logged.
Segment files are given ever-increasing numbers as names, starting at 000000010000000000000000.
The numbers do not wrap, at present, but it should take a very very long time to exhaust the available
stock of numbers.
The WAL buffers and control structure are in shared memory and are handled by the server child pro-
cesses; they are protected by lightweight locks. The demand on shared memory is dependent on the
number of buffers. The default size of the WAL buffers is 8 buffers of 8 kB each, or 64 kB total.
It is advantageous if the log is located on a different disk from the main database files. This may be achieved by
moving the directory pg_xlog to another location (while the server is shut down, of course) and creating
a symbolic link from the original location in the main data directory to the new location.
The aim of WAL, to ensure that the log is written before database records are altered, may be subverted
by disk drives that falsely report a successful write to the kernel, when in fact they have only cached the
data and not yet stored it on the disk. A power failure in such a situation may still lead to irrecoverable
data corruption. Administrators should try to ensure that disks holding PostgreSQL’s WAL log files do
not make such false reports.
After a checkpoint has been made and the log flushed, the checkpoint’s position is saved in the file
pg_control. Therefore, when recovery is to be done, the server first reads pg_control and then the
checkpoint record; then it performs the REDO operation by scanning forward from the log position indi-
cated in the checkpoint record. Because the entire content of data pages is saved in the log on the first page
modification after a checkpoint, all pages changed since the checkpoint will be restored to a consistent
state.
To deal with the case where pg_control is corrupted, we should support the possibility of scanning
existing log segments in reverse order — newest to oldest — in order to find the latest checkpoint. This
has not been implemented yet. pg_control is small enough (less than one disk page) that it is not subject
to partial-write problems, and as of this writing there have been no reports of database failures due solely
to inability to read pg_control itself. So while it is theoretically a weak spot, pg_control does not
seem to be a problem in practice.
Chapter 26. Regression Tests
The regression tests are a comprehensive set of tests for the SQL implementation in PostgreSQL. They
test standard SQL operations as well as the extended capabilities of PostgreSQL.
To run the regression tests after building but before installation, type
gmake check
in the top-level directory. (Or you can change to src/test/regress and run the command there.) This
will first build several auxiliary files, such as some sample user-defined trigger functions, and then run the
test driver script. At the end you should see something like
======================
All 96 tests passed.
======================
or otherwise a note about which tests failed. See Section 26.2 below before assuming that a “failure”
represents a serious problem.
Because this test method runs a temporary server, it will not work when you are the root user (since the
server will not start as root). If you already did the build as root, you do not have to start all over. Instead,
make the regression test directory writable by some other user, log in as that user, and restart the tests. For
example
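root# chmod -R a+w src/test/regress
root# su - joeuser
joeuser$ cd top-of-build-tree
joeuser$ gmake check
(Here joeuser stands for any unprivileged account and top-of-build-tree for the directory in which you built PostgreSQL.)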
(The only possible “security risk” here is that other users might be able to alter the regression test results
behind your back. Use common sense when managing user permissions.)
Alternatively, run the tests after installation.
If you have configured PostgreSQL to install into a location where an older PostgreSQL installation
already exists, and you perform gmake check before installing the new version, you may find that the
tests fail because the new programs try to use the already-installed shared libraries. (Typical symptoms are
complaints about undefined symbols.) If you wish to run the tests before overwriting the old installation,
you’ll need to build with configure --disable-rpath. It is not recommended that you use this option
for the final installation, however.
The parallel regression test starts quite a few processes under your user ID. Presently, the maximum
concurrency is twenty parallel test scripts, which means sixty processes: there’s a server process, a psql,
and usually a shell parent process for the psql for each test script. So if your system enforces a per-user
limit on the number of processes, make sure this limit is at least seventy-five or so, else you may get
random-seeming failures in the parallel test. If you are not in a position to raise the limit, you can cut
down the degree of parallelism by setting the MAX_CONNECTIONS parameter. For example,
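gmake MAX_CONNECTIONS=10 check
runs no more than ten tests concurrently.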
On some systems, the default Bourne-compatible shell has trouble managing that many parallel child processes.
If no non-broken shell is available, you may be able to work around the problem by limiting the number
of connections, as shown above.
To run the tests after installation (see Chapter 14), initialize a data area and start the server, as explained
in Chapter 16, then type
gmake installcheck
or for a parallel test
gmake installcheck-parallel
The tests will expect to contact the server at the local host and the default port number, unless directed
otherwise by PGHOST and PGPORT environment variables.
Note: Because USA daylight-saving time rules are used, this problem always occurs on the first
Sunday of April, the last Sunday of October, and their following Mondays, regardless of when daylight-
saving time is in effect where you live. Also note that the problem appears or disappears at midnight
Pacific time (UTC-7 or UTC-8), not midnight your local time. Thus the failure may appear late on
Saturday or persist through much of Tuesday, depending on where you live.
Most of the date and time results are dependent on the time zone environment. The reference files are
generated for time zone PST8PDT (Berkeley, California), and there will be apparent failures if the tests are
not run with that time zone setting. The regression test driver sets environment variable PGTZ to PST8PDT,
which normally ensures proper results.
Some of the regression tests involve inexact floating-point results, which can vary across hardware platforms
or even with different compiler optimization options. Human eyeball comparison is needed to determine the
real significance of these differences, which are usually 10 places to the right of the decimal point.
Some systems display minus zero as -0, while others just show 0.
Some systems signal errors from pow() and exp() differently from the mechanism expected by the
current PostgreSQL code.
Comparing the output of the “random” test against its expected file should produce only one or a few lines
of differences. You need not worry unless the random test fails repeatedly.
Variant comparison files are selected through the file src/test/regress/resultmap, each line of which has the form
testname/platformpattern=comparisonfilename
The test name is just the name of the particular regression test module. The platform pattern is a pattern
in the style of the Unix tool expr (that is, a regular expression with an implicit ^ anchor at the start). It
is matched against the platform name as printed by config.guess followed by :gcc or :cc, depending
on whether you use the GNU compiler or the system’s native compiler (on systems where there is a
difference). The comparison file name is the name of the substitute result comparison file.
For example: some systems interpret very small floating-point values as zero, rather than reporting an
underflow error. This causes a few differences in the float8 regression test. Therefore, we provide a
variant comparison file, float8-small-is-zero.out, which includes the results to be expected on
these systems. To silence the bogus “failure” message on OpenBSD platforms, resultmap includes
float8/i.86-.*-openbsd=float8-small-is-zero
which will trigger on any machine for which the output of config.guess matches i.86-.*-openbsd.
Other lines in resultmap select the variant comparison file for other platforms where it’s appropriate.
IV. Client Interfaces
This part describes the client programming interfaces distributed with PostgreSQL. Each of these chapters
can be read independently. Note that there are many other programming interfaces for client programs
that are distributed separately and contain their own documentation (Appendix H lists some of the more
popular ones). Readers of this part should be familiar with using SQL commands to manipulate and query
the database (see Part II) and of course with the programming language that the interface uses.
Chapter 27. libpq - C Library
libpq is the C application programmer’s interface to PostgreSQL. libpq is a set of library functions that
allow client programs to pass queries to the PostgreSQL backend server and to receive the results of these
queries.
libpq is also the underlying engine for several other PostgreSQL application interfaces, including those
written for C++, Perl, Python, Tcl and ECPG. So some aspects of libpq’s behavior will be important to you
if you use one of those packages. In particular, Section 27.11, Section 27.12 and Section 27.13 describe
behavior that is visible to the user of any application that uses libpq.
Some short programs are included at the end of this chapter (Section 27.16) to show how to write pro-
grams that use libpq. There are also several complete examples of libpq applications in the directory
src/test/examples in the source code distribution.
Client programs that use libpq must include the header file libpq-fe.h and must link with the libpq
library.
PQconnectdb
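Makes a new connection to the database server.
PGconn *PQconnectdb(const char *conninfo);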
This function opens a new database connection using the parameters taken from the string
conninfo. Unlike PQsetdbLogin below, the parameter set can be extended without changing the
function signature, so use of this function (or its nonblocking analogues PQconnectStart and
PQconnectPoll) is preferred for new application programming.
The passed string can be empty to use all default parameters, or it can contain one or more parameter
settings separated by whitespace. Each parameter setting is in the form keyword = value. Spaces
around the equal sign are optional. To write an empty value or a value containing spaces, surround
it with single quotes, e.g., keyword = ’a value’. Single quotes and backslashes within the value
must be escaped with a backslash, i.e., \’ and \\.
The currently recognized parameter key words are:
host
Name of host to connect to. If this begins with a slash, it specifies Unix-domain communication
rather than TCP/IP communication; the value is the name of the directory in which the socket
file is stored. The default behavior when host is not specified is to connect to a Unix-domain
socket in /tmp (or whatever socket directory was specified when PostgreSQL was built). On
machines without Unix-domain sockets, the default is to connect to localhost.
hostaddr
Numeric IP address of host to connect to. This should be in the standard IPv4 address format,
e.g., 172.28.40.9. If your machine supports IPv6, you can also use those addresses. TCP/IP
communication is always used when a nonempty string is specified for this parameter.
Using hostaddr instead of host allows the application to avoid a host name look-up, which
may be important in applications with time constraints. However, Kerberos authentication re-
quires the host name. The following therefore applies: If host is specified without hostaddr,
a host name lookup occurs. If hostaddr is specified without host, the value for hostaddr
gives the remote address. When Kerberos is used, a reverse name query occurs to obtain the
host name for Kerberos. If both host and hostaddr are specified, the value for hostaddr
gives the remote address; the value for host is ignored, unless Kerberos is used, in which case
that value is used for Kerberos authentication. (Note that authentication is likely to fail if libpq
is passed a host name that is not the name of the machine at hostaddr.) Also, host rather than
hostaddr is used to identify the connection in ~/.pgpass (see Section 27.12).
Without either a host name or host address, libpq will connect using a local Unix-domain socket;
or on machines without Unix-domain sockets, it will attempt to connect to localhost.
port
Port number to connect to at the server host, or socket file name extension for Unix-domain
connections.
dbname
The database name. Defaults to be the same as the user name.
user
PostgreSQL user name to connect as. Defaults to be the same as the operating system name of
the user running the application.
password
Password to be used if the server demands password authentication.
connect_timeout
Maximum wait for connection, in seconds (write as a decimal integer string). Zero or not spec-
ified means wait indefinitely. It is not recommended to use a timeout of less than 2 seconds.
options
Command-line options to be sent to the server.
tty
Ignored (formerly, this specified where to send server debug output).
sslmode
This option determines whether or with what priority an SSL connection will be negotiated with
the server. There are four modes: disable will attempt only an unencrypted non-SSL connection;
allow will negotiate, trying first a non-SSL connection, then if that fails, trying an SSL con-
nection; prefer (the default) will negotiate, trying first an SSL connection, then if that fails,
trying a regular non-SSL connection; require will try only an SSL connection.
If PostgreSQL is compiled without SSL support, using option require will cause an error,
while options allow and prefer will be accepted but libpq will not in fact attempt an SSL
connection.
requiressl
This option is deprecated in favor of the sslmode setting. If set to 1, an SSL connection to the
server is required (equivalent to sslmode require).
service
Service name to use for additional parameters. It specifies a service name in pg_service.conf
that holds additional connection parameters.
PQsetdbLogin
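Makes a new connection to the database server.
PGconn *PQsetdbLogin(const char *pghost,
                     const char *pgport,
                     const char *pgoptions,
                     const char *pgtty,
                     const char *dbName,
                     const char *login,
                     const char *pwd);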
This is the predecessor of PQconnectdb with a fixed set of parameters. It has the same functionality
except that the missing parameters will always take on default values. Write NULL or an empty string
for any one of the fixed parameters that is to be defaulted.
PQsetdb
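PGconn *PQsetdb(char *pghost, char *pgport, char *pgoptions, char *pgtty, char *dbName);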
This is a macro that calls PQsetdbLogin with null pointers for the login and pwd parameters. It is
provided for backward compatibility with very old programs.
PQconnectStart
PQconnectPoll
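PGconn *PQconnectStart(const char *conninfo);
PostgresPollingStatusType PQconnectPoll(PGconn *conn);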
These two functions are used to open a connection to a database server such that your application’s
thread of execution is not blocked on remote I/O whilst doing so. The point of this approach is
that the waits for I/O to complete can occur in the application’s main loop, rather than down inside
PQconnectdb, and so the application can manage this operation in parallel with other activities.
The database connection is made using the parameters taken from the string conninfo, passed to
PQconnectStart. This string is in the same format as described above for PQconnectdb.
Neither PQconnectStart nor PQconnectPoll will block, so long as a number of restrictions are
met:
• The hostaddr and host parameters are used appropriately to ensure that name and reverse name
queries are not made. See the documentation of these parameters under PQconnectdb above for
details.
• If you call PQtrace, ensure that the stream object into which you trace will not block.
• You ensure that the socket is in the appropriate state before calling PQconnectPoll, as described
below.
If PQconnectStart succeeds, the next stage is to poll libpq so that it may proceed with
the connection sequence. Use PQsocket(conn) to obtain the descriptor of the socket
underlying the database connection. Loop thus: If PQconnectPoll(conn) last returned
PGRES_POLLING_READING, wait until the socket is ready to read (as indicated by select(),
poll(), or similar system function). Then call PQconnectPoll(conn) again. Conversely, if
PQconnectPoll(conn) last returned PGRES_POLLING_WRITING, wait until the socket is ready to
write, then call PQconnectPoll(conn) again. If you have yet to call PQconnectPoll, i.e.,
just after the call to PQconnectStart, behave as if it last returned PGRES_POLLING_WRITING.
Continue this loop until PQconnectPoll(conn) returns PGRES_POLLING_FAILED, indicating the
connection procedure has failed, or PGRES_POLLING_OK, indicating the connection has been
successfully made.
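A minimal sketch of this polling loop, using select() and omitting error handling, might look like the
following (the wrapper function name is purely illustrative):

#include <sys/select.h>
#include "libpq-fe.h"

/* Open a connection without blocking the calling thread on remote I/O.
 * A real application would fold the waits into its own event loop. */
static PGconn *
connect_nonblocking(const char *conninfo)
{
    PGconn *conn = PQconnectStart(conninfo);

    if (conn == NULL || PQstatus(conn) == CONNECTION_BAD)
        return conn;                    /* could not even start the attempt */

    /* Just after PQconnectStart, behave as if PGRES_POLLING_WRITING was returned. */
    PostgresPollingStatusType st = PGRES_POLLING_WRITING;

    while (st != PGRES_POLLING_OK && st != PGRES_POLLING_FAILED)
    {
        int    sock = PQsocket(conn);
        fd_set fds;

        FD_ZERO(&fds);
        FD_SET(sock, &fds);

        if (st == PGRES_POLLING_READING)
            select(sock + 1, &fds, NULL, NULL, NULL);   /* wait until readable */
        else
            select(sock + 1, NULL, &fds, NULL, NULL);   /* wait until writable */

        st = PQconnectPoll(conn);
    }
    return conn;        /* check PQstatus(conn) for CONNECTION_OK or CONNECTION_BAD */
}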
At any time during connection, the status of the connection may be checked by calling PQstatus. If
this gives CONNECTION_BAD, then the connection procedure has failed; if it gives CONNECTION_OK,
then the connection is ready. Both of these states are equally detectable from the return value of
PQconnectPoll, described above. Other states may also occur during (and only during) an asyn-
chronous connection procedure. These indicate the current stage of the connection procedure and
may be useful to provide feedback to the user for example. These statuses are:
CONNECTION_STARTED, CONNECTION_MADE, and several other intermediate states. An application might use
them to provide feedback, for example:
switch (PQstatus(conn))
{
    case CONNECTION_STARTED:
        feedback = "Connecting...";
        break;

    case CONNECTION_MADE:
        feedback = "Connected to server...";
        break;
.
.
.
    default:
        feedback = "Connecting...";
}
Note that if PQconnectStart returns a non-null pointer, you must call PQfinish when you are
finished with it, in order to dispose of the structure and any associated memory blocks. This must be
done even if the connection attempt fails or is abandoned.
PQconndefaults
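Returns the default connection options.
PQconninfoOption *PQconndefaults(void);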
typedef struct
{
char *keyword; /* The keyword of the option */
char *envvar; /* Fallback environment variable name */
char *compiled; /* Fallback compiled in default value */
char *val; /* Option’s current value, or NULL */
char *label; /* Label for field in connect dialog */
char *dispchar; /* Character to display for this field
in a connect dialog. Values are:
"" Display entered value as is
"*" Password field - hide value
"D" Debug option - don’t show by default */
int dispsize; /* Field size in characters for dialog */
} PQconninfoOption;
Returns a connection options array. This may be used to determine all possible PQconnectdb op-
tions and their current default values. The return value points to an array of PQconninfoOption
structures, which ends with an entry having a null keyword pointer. Note that the current default
values (val fields) will depend on environment variables and other context. Callers must treat the
connection options data as read-only.
After processing the options array, free it by passing it to PQconninfoFree. If this is not done, a
small amount of memory is leaked for each call to PQconndefaults.
PQfinish
Closes the connection to the server. Also frees memory used by the PGconn object.
void PQfinish(PGconn *conn);
Note that even if the server connection attempt fails (as indicated by PQstatus), the application
should call PQfinish to free the memory used by the PGconn object. The PGconn pointer must not
be used again after PQfinish has been called.
PQreset
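void PQreset(PGconn *conn);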
This function will close the connection to the server and attempt to reestablish a new connection to
the same server, using all the same parameters previously used. This may be useful for error recovery
if a working connection is lost.
PQresetStart
PQresetPoll
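int PQresetStart(PGconn *conn);
PostgresPollingStatusType PQresetPoll(PGconn *conn);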
These functions will close the connection to the server and attempt to reestablish a new connection
to the same server, using all the same parameters previously used. This may be useful for error
recovery if a working connection is lost. They differ from PQreset (above) in that they act in a
nonblocking manner. These functions suffer from the same restrictions as PQconnectStart and
PQconnectPoll.
To initiate a connection reset, call PQresetStart. If it returns 0, the reset has failed. If it returns 1,
poll the reset using PQresetPoll in exactly the same way as you would create the connection using
PQconnectPoll.
Tip: libpq application programmers should be careful to maintain the PGconn abstraction. Use the ac-
cessor functions described below to get at the contents of PGconn. Avoid directly referencing the fields
of the PGconn structure because they are subject to change in the future. (Beginning in PostgreSQL
release 6.4, the definition of the struct behind PGconn is not even provided in libpq-fe.h. If you
have old code that accesses PGconn fields directly, you can keep using it by including libpq-int.h
too, but you are encouraged to fix the code soon.)
The following functions return parameter values established at connection. These values are fixed for the
life of the PGconn object.
PQdb
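Returns the database name of the connection.
char *PQdb(const PGconn *conn);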
PQuser
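Returns the user name of the connection.
char *PQuser(const PGconn *conn);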
PQpass
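Returns the password of the connection.
char *PQpass(const PGconn *conn);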
PQhost
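Returns the server host name of the connection.
char *PQhost(const PGconn *conn);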
PQport
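Returns the port of the connection.
char *PQport(const PGconn *conn);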
PQtty
Returns the debug TTY of the connection. (This is obsolete, since the server no longer pays attention
to the TTY setting, but the function remains for backwards compatibility.)
char *PQtty(const PGconn *conn);
PQoptions
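Returns the command-line options passed in the connection request.
char *PQoptions(const PGconn *conn);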
The following functions return status data that can change as operations are executed on the PGconn
object.
PQstatus
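Returns the status of the connection.
ConnStatusType PQstatus(const PGconn *conn);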
The status can be one of a number of values. However, only two of these are seen outside of an
asynchronous connection procedure: CONNECTION_OK and CONNECTION_BAD. A good connection
to the database has the status CONNECTION_OK. A failed connection attempt is signaled by status
CONNECTION_BAD. Ordinarily, an OK status will remain so until PQfinish, but a communications
failure might result in the status changing to CONNECTION_BAD prematurely. In that case the appli-
cation could try to recover by calling PQreset.
See the entry for PQconnectStart and PQconnectPoll with regards to other status codes that
might be seen.
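For example, a sketch of this pattern (the connection string dbname=test is only a placeholder):

#include <stdio.h>
#include "libpq-fe.h"

int
main(void)
{
    PGconn *conn = PQconnectdb("dbname=test");

    if (PQstatus(conn) == CONNECTION_BAD)
    {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        PQfinish(conn);                 /* required even after a failed attempt */
        return 1;
    }

    /* ... use the connection; if a communications failure later flips the
       status to CONNECTION_BAD, an attempt at recovery could be made: */
    if (PQstatus(conn) == CONNECTION_BAD)
        PQreset(conn);

    PQfinish(conn);
    return 0;
}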
PQtransactionStatus
Caution
PQtransactionStatus will give incorrect results when using a
PostgreSQL 7.3 server that has the parameter autocommit set to off.
The server-side autocommit feature has been deprecated and does not
exist in later server versions.
PQparameterStatus
Certain parameter values are reported by the server automatically at connection startup or whenever
their values change. PQparameterStatus can be used to interrogate these settings. It returns the
current value of a parameter if known, or NULL if the parameter is not known.
Parameters reported as of the current release include server_version, server_encoding,
client_encoding, is_superuser, session_authorization, DateStyle, TimeZone, and
integer_datetimes. (server_encoding, TimeZone, and integer_datetimes were
not reported by releases before 8.0.) Note that server_version, server_encoding and
integer_datetimes cannot change after startup.
Pre-3.0-protocol servers do not report parameter settings, but libpq includes logic to obtain val-
ues for server_version and client_encoding anyway. Applications are encouraged to use
PQparameterStatus rather than ad hoc code to determine these values. (Beware however that
on a pre-3.0 connection, changing client_encoding via SET after connection startup will not be
reflected by PQparameterStatus.) For server_version, see also PQserverVersion, which
returns the information in a numeric form that is much easier to compare against.
Although the returned pointer is declared const, it in fact points to mutable storage associated with
the PGconn structure. It is unwise to assume the pointer will remain valid across queries.
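For illustration, a sketch that reads a couple of reported settings (conn is assumed to be an open connection):

#include <stdio.h>
#include "libpq-fe.h"

static void
show_parameter_status(const PGconn *conn)
{
    const char *encoding = PQparameterStatus(conn, "client_encoding");
    const char *datestyle = PQparameterStatus(conn, "DateStyle");

    printf("client_encoding = %s\n", encoding ? encoding : "(unknown)");
    printf("DateStyle = %s\n", datestyle ? datestyle : "(unknown)");
    /* Copy the strings if they must outlive later operations on conn. */
}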
PQprotocolVersion
Applications may wish to use this to determine whether certain features are supported. Currently, the
possible values are 2 (2.0 protocol), 3 (3.0 protocol), or zero (connection bad). This will not change
after connection startup is complete, but it could theoretically change during a connection reset.
The 3.0 protocol will normally be used when communicating with PostgreSQL 7.4 or later servers;
pre-7.4 servers support only protocol 2.0. (Protocol 1.0 is obsolete and not supported by libpq.)
PQserverVersion
Applications may use this to determine the version of the database server they are connected to. The
number is formed by converting the major, minor, and revision numbers into two-decimal-digit num-
bers and appending them together. For example, version 7.4.2 will be returned as 70402, and version
8.1 will be returned as 80100 (leading zeroes are not shown). Zero is returned if the connection is
bad.
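A small fragment, assuming conn is an open connection and <stdio.h> has been included, might guard on these values like this:

if (PQprotocolVersion(conn) < 3)
    fprintf(stderr, "protocol 2.0 connection; PQexecParams is unavailable\n");

if (PQserverVersion(conn) >= 80000)     /* e.g. release 8.0.3 is reported as 80003 */
    printf("server is release 8.0 or later\n");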
PQerrorMessage
Returns the error message most recently generated by an operation on the connection.
char *PQerrorMessage(const PGconn *conn);
Nearly all libpq functions will set a message for PQerrorMessage if they fail. Note that by libpq
convention, a nonempty PQerrorMessage result will include a trailing newline. The caller should
not free the result directly. It will be freed when the associated PGconn handle is passed to PQfinish.
The result string should not be expected to remain the same across operations on the PGconn struc-
ture.
PQsocket
Obtains the file descriptor number of the connection socket to the server. A valid descriptor will be
greater than or equal to 0; a result of -1 indicates that no server connection is currently open. (This
will not change during normal operation, but could change during connection setup or reset.)
int PQsocket(const PGconn *conn);
PQbackendPID
Returns the process ID (PID) of the backend server process handling this connection.
int PQbackendPID(const PGconn *conn);
The backend PID is useful for debugging purposes and for comparison to NOTIFY messages (which
include the PID of the notifying backend process). Note that the PID belongs to a process executing
on the database server host, not the local host!
PQgetssl
Returns the SSL structure used in the connection, or null if SSL is not in use.
SSL *PQgetssl(const PGconn *conn);
This structure can be used to verify encryption levels, check server certificates, and more. Refer to
the OpenSSL documentation for information about this structure.
You must define USE_SSL in order to get the correct prototype for this function. Doing this will also
automatically include ssl.h from OpenSSL.
PQexec
Returns a PGresult pointer or possibly a null pointer. A non-null pointer will generally be returned
except in out-of-memory conditions or serious errors such as inability to send the command to the
server. If a null pointer is returned, it should be treated like a PGRES_FATAL_ERROR result. Use
PQerrorMessage to get more information about such errors.
It is allowed to include multiple SQL commands (separated by semicolons) in the command string. Mul-
tiple queries sent in a single PQexec call are processed in a single transaction, unless there are explicit
BEGIN/COMMIT commands included in the query string to divide it into multiple transactions. Note how-
ever that the returned PGresult structure describes only the result of the last command executed from the
string. Should one of the commands fail, processing of the string stops with it and the returned PGresult
describes the error condition.
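A typical calling pattern is therefore along these lines (a sketch; the table name mytable is hypothetical and conn is an open connection):

#include <stdio.h>
#include "libpq-fe.h"

static void
run_update(PGconn *conn)
{
    PGresult *res = PQexec(conn, "UPDATE mytable SET flag = true WHERE id = 42");

    if (res == NULL)
    {
        /* Treat a null result like PGRES_FATAL_ERROR. */
        fprintf(stderr, "UPDATE failed: %s", PQerrorMessage(conn));
        return;
    }
    if (PQresultStatus(res) != PGRES_COMMAND_OK)
        fprintf(stderr, "UPDATE failed: %s", PQresultErrorMessage(res));
    else
        printf("%s rows updated\n", PQcmdTuples(res));
    PQclear(res);
}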
PQexecParams
Submits a command to the server and waits for the result, with the ability to pass parameters sepa-
rately from the SQL command text.
PGresult *PQexecParams(PGconn *conn,
const char *command,
int nParams,
const Oid *paramTypes,
const char * const *paramValues,
const int *paramLengths,
const int *paramFormats,
int resultFormat);
PQexecParams is like PQexec, but offers additional functionality: parameter values can be specified
separately from the command string proper, and query results can be requested in either text or binary
format. PQexecParams is supported only in protocol 3.0 and later connections; it will fail when
using protocol 2.0.
If parameters are used, they are referred to in the command string as $1, $2, etc. nParams is the
number of parameters supplied; it is the length of the arrays paramTypes[], paramValues[],
paramLengths[], and paramFormats[]. (The array pointers may be NULL when nParams is
zero.) paramTypes[] specifies, by OID, the data types to be assigned to the parameter symbols. If
paramTypes is NULL, or any particular element in the array is zero, the server assigns a data type
to the parameter symbol in the same way it would do for an untyped literal string. paramValues[]
specifies the actual values of the parameters. A null pointer in this array means the corresponding
parameter is null; otherwise the pointer points to a zero-terminated text string (for text format) or
binary data in the format expected by the server (for binary format). paramLengths[] specifies
the actual data lengths of binary-format parameters. It is ignored for null parameters and text-format
parameters. The array pointer may be null when there are no binary parameters. paramFormats[]
specifies whether parameters are text (put a zero in the array) or binary (put a one in the array). If the
array pointer is null then all parameters are presumed to be text. resultFormat is zero to obtain
results in text format, or one to obtain results in binary format. (There is not currently a provision
to obtain different result columns in different formats, although that is possible in the underlying
protocol.)
The primary advantage of PQexecParams over PQexec is that parameter values may be separated from
the command string, thus avoiding the need for tedious and error-prone quoting and escaping. Unlike
PQexec, PQexecParams allows at most one SQL command in the given string. (There can be semicolons
in it, but not more than one nonempty command.) This is a limitation of the underlying protocol, but has
some usefulness as an extra defense against SQL-injection attacks.
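For example, a sketch passing one text-format parameter (the table venues is hypothetical and conn is an open connection):

#include <stdio.h>
#include "libpq-fe.h"

static void
find_venue(PGconn *conn)
{
    const char *paramValues[1];
    PGresult   *res;

    paramValues[0] = "joe's place";     /* no quoting or escaping needed */

    res = PQexecParams(conn,
                       "SELECT id FROM venues WHERE name = $1",
                       1,               /* one parameter */
                       NULL,            /* let the server infer the parameter type */
                       paramValues,
                       NULL,            /* text parameters, so lengths are ignored */
                       NULL,            /* all parameters default to text format */
                       0);              /* ask for text-format results */

    if (PQresultStatus(res) != PGRES_TUPLES_OK)
        fprintf(stderr, "SELECT failed: %s", PQerrorMessage(conn));
    PQclear(res);
}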
PQprepare
Submits a request to create a prepared statement with the given parameters, and waits for completion.
PGresult *PQprepare(PGconn *conn,
const char *stmtName,
const char *query,
int nParams,
const Oid *paramTypes);
PQprepare creates a prepared statement for later execution with PQexecPrepared. This feature
allows commands that will be used repeatedly to be parsed and planned just once, rather than each
time they are executed. PQprepare is supported only in protocol 3.0 and later connections; it will
fail when using protocol 2.0.
The function creates a prepared statement named stmtName from the query string, which must
contain a single SQL command. stmtName may be "" to create an unnamed statement, in which case
any pre-existing unnamed statement is automatically replaced; otherwise it is an error if the statement
name is already defined in the current session. If any parameters are used, they are referred to in the
query as $1, $2, etc. nParams is the number of parameters for which types are pre-specified in the
array paramTypes[]. (The array pointer may be NULL when nParams is zero.) paramTypes[]
specifies, by OID, the data types to be assigned to the parameter symbols. If paramTypes is NULL,
or any particular element in the array is zero, the server assigns a data type to the parameter symbol
in the same way it would do for an untyped literal string. Also, the query may use parameter symbols
with numbers higher than nParams; data types will be inferred for these symbols as well.
As with PQexec, the result is normally a PGresult object whose contents indicate server-side suc-
cess or failure. A null result indicates out-of-memory or inability to send the command at all. Use
PQerrorMessage to get more information about such errors.
At present, there is no way to determine the actual data type inferred for any parameters whose types
are not specified in paramTypes[]. This is a libpq omission that will probably be rectified in a
future release.
Prepared statements for use with PQexecPrepared can also be created by executing SQL PREPARE
statements. (But PQprepare is more flexible since it does not require parameter types to be pre-specified.)
Also, although there is no libpq function for deleting a prepared statement, the SQL DEALLOCATE state-
ment can be used for that purpose.
PQexecPrepared
Sends a request to execute a prepared statement with given parameters, and waits for the result.
PGresult *PQexecPrepared(PGconn *conn,
const char *stmtName,
int nParams,
const char * const *paramValues,
const int *paramLengths,
const int *paramFormats,
int resultFormat);
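Putting the two calls together, a sketch that prepares a statement once and then executes it with fresh parameter values (the statement name, query, and table are hypothetical; conn is an open connection):

#include <stdio.h>
#include "libpq-fe.h"

static void
prepared_lookup(PGconn *conn)
{
    const char *paramValues[1];
    PGresult   *res;

    /* Prepare once; the parameter type is inferred by the server. */
    res = PQprepare(conn, "fetch_venue",
                    "SELECT id FROM venues WHERE name = $1",
                    1, NULL);
    if (PQresultStatus(res) != PGRES_COMMAND_OK)
        fprintf(stderr, "PREPARE failed: %s", PQerrorMessage(conn));
    PQclear(res);

    /* Execute as many times as needed, supplying only new parameter values. */
    paramValues[0] = "joe's place";
    res = PQexecPrepared(conn, "fetch_venue",
                         1, paramValues, NULL, NULL, 0);
    if (PQresultStatus(res) != PGRES_TUPLES_OK)
        fprintf(stderr, "EXECUTE failed: %s", PQerrorMessage(conn));
    PQclear(res);
}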
The PGresult structure encapsulates the result returned by the server. libpq application programmers
should be careful to maintain the PGresult abstraction. Use the accessor functions below to get at the
contents of PGresult. Avoid directly referencing the fields of the PGresult structure because they are
subject to change in the future.
PQresultStatus
PGRES_EMPTY_QUERY
The string sent to the server was empty.
PQresStatus
Converts the enumerated type returned by PQresultStatus into a string constant describing the status code. The caller should not free the result.
char *PQresStatus(ExecStatusType status);
PQresultErrorMessage
Returns the error message associated with the command, or an empty string if there was no error.
char *PQresultErrorMessage(const PGresult *res);
If there was an error, the returned string will include a trailing newline. The caller should not free the
result directly. It will be freed when the associated PGresult handle is passed to PQclear.
Immediately following a PQexec or PQgetResult call, PQerrorMessage (on the connection) will
return the same string as PQresultErrorMessage (on the result). However, a PGresult will re-
tain its error message until destroyed, whereas the connection’s error message will change when
subsequent operations are done. Use PQresultErrorMessage when you want to know the status
associated with a particular PGresult; use PQerrorMessage when you want to know the status
from the latest operation on the connection.
PQresultErrorField
fieldcode is an error field identifier; see the symbols listed below. NULL is returned if the
PGresult is not an error or warning result, or does not include the specified field. Field values will
normally not include a trailing newline. The caller should not free the result directly. It will be freed
when the associated PGresult handle is passed to PQclear.
The following field codes are available:
PG_DIAG_SEVERITY
The severity; the field contents are ERROR, FATAL, or PANIC (in an error message), or WARNING,
NOTICE, DEBUG, INFO, or LOG (in a notice message), or a localized translation of one of these.
Always present.
PG_DIAG_SQLSTATE
The SQLSTATE code for the error. The SQLSTATE code identifies the type of error that has
occurred; it can be used by front-end applications to perform specific operations (such as error
handling) in response to a particular database error. For a list of the possible SQLSTATE codes,
see Appendix A. This field is not localizable, and is always present.
PG_DIAG_MESSAGE_PRIMARY
The primary human-readable error message (typically one line). Always present.
PG_DIAG_MESSAGE_DETAIL
Detail: an optional secondary error message carrying more detail about the problem. May run
to multiple lines.
PG_DIAG_MESSAGE_HINT
Hint: an optional suggestion what to do about the problem. This is intended to differ from detail
in that it offers advice (potentially inappropriate) rather than hard facts. May run to multiple
lines.
PG_DIAG_STATEMENT_POSITION
A string containing a decimal integer indicating an error cursor position as an index into the
original statement string. The first character has index 1, and positions are measured in charac-
ters not bytes.
PG_DIAG_INTERNAL_POSITION
This is defined the same as the PG_DIAG_STATEMENT_POSITION field, but it is used when the
cursor position refers to an internally generated command rather than the one submitted by the
client. The PG_DIAG_INTERNAL_QUERY field will always appear when this field appears.
PG_DIAG_INTERNAL_QUERY
The text of a failed internally-generated command. This could be, for example, a SQL query
issued by a PL/pgSQL function.
PG_DIAG_CONTEXT
An indication of the context in which the error occurred. Presently this includes a call stack
traceback of active procedural language functions and internally-generated queries. The trace is
one entry per line, most recent first.
PG_DIAG_SOURCE_FILE
The file name of the source-code location where the error was reported.
PG_DIAG_SOURCE_LINE
The line number of the source-code location where the error was reported.
PG_DIAG_SOURCE_FUNCTION
The name of the source-code function reporting the error.
The client is responsible for formatting displayed information to meet its needs; in particular it should
break long lines as needed. Newline characters appearing in the error message fields should be treated
as paragraph breaks, not line breaks.
Errors generated internally by libpq will have severity and primary message, but typically no other
fields. Errors returned by a pre-3.0-protocol server will include severity and primary message, and
sometimes a detail message, but no other fields.
Note that error fields are only available from PGresult objects, not PGconn objects; there is no
PQerrorField function.
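For example, a sketch that reports a few fields from a failed command's result:

#include <stdio.h>
#include "libpq-fe.h"

static void
report_error(const PGresult *res)
{
    char *sqlstate = PQresultErrorField(res, PG_DIAG_SQLSTATE);
    char *primary  = PQresultErrorField(res, PG_DIAG_MESSAGE_PRIMARY);
    char *hint     = PQresultErrorField(res, PG_DIAG_MESSAGE_HINT);

    fprintf(stderr, "error %s: %s\n",
            sqlstate ? sqlstate : "?????",
            primary ? primary : "(no message)");
    if (hint != NULL)
        fprintf(stderr, "hint: %s\n", hint);
}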
PQclear
Frees the storage associated with a PGresult. Every command result should be freed via PQclear
when it is no longer needed.
void PQclear(PGresult *res);
You can keep a PGresult object around for as long as you need it; it does not go away when you
issue a new command, nor even if you close the connection. To get rid of it, you must call PQclear.
Failure to do this will result in memory leaks in your application.
PQmakeEmptyPGresult
This is libpq’s internal function to allocate and initialize an empty PGresult object. It is exported
because some applications find it useful to generate result objects (particularly objects with error
status) themselves. If conn is not null and status indicates an error, the current error message of
the specified connection is copied into the PGresult. Note that PQclear should eventually be called
on the object, just as with a PGresult returned by libpq itself.
PQntuples
PQnfields
Returns the number of columns (fields) in each row of the query result.
int PQnfields(const PGresult *res);
PQfname
Returns the column name associated with the given column number. Column numbers start at 0. The
caller should not free the result directly. It will be freed when the associated PGresult handle is
passed to PQclear.
char *PQfname(const PGresult *res,
int column_number);
PQfnumber
Returns the column number associated with the given column name.
int PQfnumber(const PGresult *res,
const char *column_name);
PQfnumber(res, "foo") 0
PQfnumber(res, "BAR") -1
PQfnumber(res, "\"BAR\"") 1
PQftable
Returns the OID of the table from which the given column was fetched. Column numbers start at 0.
Oid PQftable(const PGresult *res,
int column_number);
InvalidOid is returned if the column number is out of range, or if the specified column is not a
simple reference to a table column, or when using pre-3.0 protocol. You can query the system table
pg_class to determine exactly which table is referenced.
The type Oid and the constant InvalidOid will be defined when you include the libpq header file.
They will both be some integer type.
PQftablecol
Returns the column number (within its table) of the column making up the specified query result
column. Query-result column numbers start at 0, but table columns have nonzero numbers.
int PQftablecol(const PGresult *res,
int column_number);
Zero is returned if the column number is out of range, or if the specified column is not a simple
reference to a table column, or when using pre-3.0 protocol.
PQfformat
Returns the format code indicating the format of the given column. Column numbers start at 0.
int PQfformat(const PGresult *res,
int column_number);
Format code zero indicates textual data representation, while format code one indicates binary rep-
resentation. (Other codes are reserved for future definition.)
PQftype
Returns the data type associated with the given column number. The integer returned is the internal
OID number of the type. Column numbers start at 0.
Oid PQftype(const PGresult *res,
int column_number);
You can query the system table pg_type to obtain the names and properties of the various data types.
The OIDs of the built-in data types are defined in the file src/include/catalog/pg_type.h in
the source tree.
PQfmod
Returns the type modifier of the column associated with the given column number. Column numbers
start at 0.
int PQfmod(const PGresult *res,
int column_number);
The interpretation of modifier values is type-specific; they typically indicate precision or size limits.
The value -1 is used to indicate “no information available”. Most data types do not use modifiers, in
which case the value is always -1.
PQfsize
Returns the size in bytes of the column associated with the given column number. Column numbers
start at 0.
int PQfsize(const PGresult *res,
int column_number);
PQfsize returns the space allocated for this column in a database row, in other words the size of the
server’s internal representation of the data type. (Accordingly, it is not really very useful to clients.)
A negative value indicates the data type is variable-length.
PQbinaryTuples
Returns 1 if the PGresult contains binary data and 0 if it contains text data.
int PQbinaryTuples(const PGresult *res);
This function is deprecated (except for its use in connection with COPY), because it is possible for
a single PGresult to contain text data in some columns and binary data in others. PQfformat is
preferred. PQbinaryTuples returns 1 only if all columns of the result are binary (format 1).
PQgetvalue
Returns a single field value of one row of a PGresult. Row and column numbers start at 0. The
caller should not free the result directly. It will be freed when the associated PGresult handle is
passed to PQclear.
char *PQgetvalue(const PGresult *res,
int row_number,
int column_number);
For data in text format, the value returned by PQgetvalue is a null-terminated character string
representation of the field value. For data in binary format, the value is in the binary representation
determined by the data type’s typsend and typreceive functions. (The value is actually followed
by a zero byte in this case too, but that is not ordinarily useful, since the value is likely to contain
embedded nulls.)
An empty string is returned if the field value is null. See PQgetisnull to distinguish null values
from empty-string values.
The pointer returned by PQgetvalue points to storage that is part of the PGresult structure. One
should not modify the data it points to, and one must explicitly copy the data into other storage if it
is to be used past the lifetime of the PGresult structure itself.
PQgetisnull
Tests a field for a null value. Row and column numbers start at 0.
int PQgetisnull(const PGresult *res,
int row_number,
int column_number);
This function returns 1 if the field is null and 0 if it contains a non-null value. (Note that PQgetvalue
will return an empty string, not a null pointer, for a null field.)
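These functions are typically combined to walk a complete result; a sketch (the query and table are hypothetical, conn is an open connection):

#include <stdio.h>
#include "libpq-fe.h"

static void
print_result(PGconn *conn)
{
    PGresult *res = PQexec(conn, "SELECT id, name FROM venues");
    int row, col;

    if (PQresultStatus(res) == PGRES_TUPLES_OK)
    {
        for (row = 0; row < PQntuples(res); row++)
        {
            for (col = 0; col < PQnfields(res); col++)
            {
                if (PQgetisnull(res, row, col))
                    printf("%s=NULL  ", PQfname(res, col));
                else
                    printf("%s=%s  ", PQfname(res, col),
                           PQgetvalue(res, row, col));
            }
            printf("\n");
        }
    }
    PQclear(res);
}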
PQgetlength
Returns the actual length of a field value in bytes. Row and column numbers start at 0.
int PQgetlength(const PGresult *res,
int row_number,
int column_number);
This is the actual data length for the particular data value, that is, the size of the object pointed to by
PQgetvalue. For text data format this is the same as strlen(). For binary format this is essential
information. Note that one should not rely on PQfsize to obtain the actual data length.
PQprint
Prints out all the rows and, optionally, the column names to the specified output stream.
void PQprint(FILE *fout, /* output stream */
const PGresult *res,
const PQprintOpt *po);
typedef struct {
pqbool header; /* print output field headings and row count */
pqbool align; /* fill align the fields */
pqbool standard; /* old brain dead format */
pqbool html3; /* output HTML tables */
pqbool expanded; /* expand tables */
pqbool pager; /* use pager for output if needed */
char *fieldSep; /* field separator */
char *tableOpt; /* attributes for HTML table element */
char *caption; /* HTML table caption */
char **fieldName; /* null-terminated array of replacement field names */
} PQprintOpt;
This function was formerly used by psql to print query results, but this is no longer the case. Note
that it assumes all the data is in text format.
PQcmdStatus
Returns the command status tag from the SQL command that generated the PGresult.
char *PQcmdStatus(PGresult *res);
Commonly this is just the name of the command, but it may include additional data such as the
number of rows processed. The caller should not free the result directly. It will be freed when the
associated PGresult handle is passed to PQclear.
PQcmdTuples
This function returns a string containing the number of rows affected by the SQL statement that
generated the PGresult. This function can only be used following the execution of an INSERT,
UPDATE, DELETE, MOVE, or FETCH statement, or an EXECUTE of a prepared query that contains an
INSERT, UPDATE, or DELETE statement. If the command that generated the PGresult was anything
else, PQcmdTuples returns the empty string. The caller should not free the return value directly. It
will be freed when the associated PGresult handle is passed to PQclear.
PQoidValue
Returns the OID of the inserted row, if the SQL command was an INSERT that inserted exactly
one row into a table that has OIDs, or an EXECUTE of a prepared query containing a suitable INSERT
statement. Otherwise, this function returns InvalidOid. This function will also return InvalidOid
if the table affected by the INSERT statement does not contain OIDs.
Oid PQoidValue(const PGresult *res);
PQoidStatus
Returns a string with the OID of the inserted row, if the SQL command was an INSERT that inserted
exactly one row, or an EXECUTE of a prepared statement consisting of a suitable INSERT. (The string
will be 0 if the INSERT did not insert exactly one row, or if the target table does not have OIDs.) If
the command was not an INSERT, returns an empty string.
char *PQoidStatus(const PGresult *res);
Tip: It is especially important to do proper escaping when handling strings that were received from an
untrustworthy source. Otherwise there is a security risk: you are vulnerable to “SQL injection” attacks
wherein unwanted SQL commands are fed to your database.
Note that it is not necessary nor correct to do escaping when a data value is passed as a separate parameter
in PQexecParams or its sibling routines.
The parameter from points to the first character of the string that is to be escaped, and the length
parameter gives the number of characters in this string. A terminating zero byte is not required, and
should not be counted in length. (If a terminating zero byte is found before length bytes are processed,
PQescapeString stops at the zero; the behavior is thus rather like strncpy.) to shall point to a buffer
that is able to hold at least one more character than twice the value of length, otherwise the behavior
is undefined. A call to PQescapeString writes an escaped version of the from string to the to buffer,
replacing special characters so that they cannot cause any harm, and adding a terminating zero byte. The
single quotes that must surround PostgreSQL string literals are not included in the result string; they
should be provided in the SQL command that the result is inserted into.
PQescapeString returns the number of characters written to to, not including the terminating zero byte.
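As an illustration, a sketch that builds a command from an untrusted string (the table authors is hypothetical, conn is an open connection, and malloc failures are not checked):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include "libpq-fe.h"

static void
lookup_author(PGconn *conn, const char *raw_name)
{
    size_t    rawlen = strlen(raw_name);
    char     *escaped = malloc(2 * rawlen + 1);   /* worst case: every character doubled */
    char     *query = malloc(2 * rawlen + 64);
    PGresult *res;

    PQescapeString(escaped, raw_name, rawlen);

    /* The caller supplies the surrounding single quotes. */
    sprintf(query, "SELECT id FROM authors WHERE name = '%s'", escaped);
    res = PQexec(conn, query);
    /* ... check and use the result ... */
    PQclear(res);
    free(query);
    free(escaped);
}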
PQescapeBytea
Escapes binary data for use within an SQL command with the type bytea. As with
PQescapeString, this is only used when inserting data directly into an SQL command string.
unsigned char *PQescapeBytea(const unsigned char *from,
size_t from_length,
size_t *to_length);
Certain byte values must be escaped (but all byte values may be escaped) when used as part of
a bytea literal in an SQL statement. In general, to escape a byte, it is converted into the three
digit octal number equal to the octet value, and preceded by two backslashes. The single quote (')
and backslash (\) characters have special alternative escape sequences. See Section 8.4 for more
information. PQescapeBytea performs this operation, escaping only the minimally required bytes.
The from parameter points to the first byte of the string that is to be escaped, and the from_length
parameter gives the number of bytes in this binary string. (A terminating zero byte is neither neces-
sary nor counted.) The to_length parameter points to a variable that will hold the resultant escaped
string length. The result string length includes the terminating zero byte of the result.
PQescapeBytea returns an escaped version of the from parameter binary string in memory allo-
cated with malloc(). This memory must be freed using PQfreemem when the result is no longer
needed. The return string has all special characters replaced so that they can be properly processed
by the PostgreSQL string literal parser, and the bytea input function. A terminating zero byte is
also added. The single quotes that must surround PostgreSQL string literals are not part of the result
string.
PQunescapeBytea
Converts an escaped string representation of binary data into binary data — the reverse of
PQescapeBytea. This is needed when retrieving bytea data in text format, but not when retrieving
it in binary format.
unsigned char *PQunescapeBytea(const unsigned char *from, size_t *to_length);
The from parameter points to an escaped string such as might be returned by PQgetvalue when
applied to a bytea column. PQunescapeBytea converts this string representation into its binary
representation. It returns a pointer to a buffer allocated with malloc(), or null on error, and puts the
size of the buffer in to_length. The result must be freed using PQfreemem when it is no longer
needed.
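A sketch of both directions follows: the data is escaped for insertion into an SQL literal, and a value later retrieved in text format is converted back to raw bytes. The table images with a bytea column data is hypothetical, conn is an open connection, and error checking is abbreviated.

#include <stdio.h>
#include <stdlib.h>
#include "libpq-fe.h"

static void
store_and_fetch(PGconn *conn, const unsigned char *raw, size_t rawlen)
{
    size_t escaped_len;
    unsigned char *escaped = PQescapeBytea(raw, rawlen, &escaped_len);
    char *query = malloc(escaped_len + 64);
    PGresult *res;

    sprintf(query, "INSERT INTO images(data) VALUES ('%s')", (char *) escaped);
    res = PQexec(conn, query);
    PQclear(res);
    PQfreemem(escaped);
    free(query);

    /* Retrieving the column in text format yields the server's escaped
       representation; PQunescapeBytea turns it back into raw bytes. */
    res = PQexec(conn, "SELECT data FROM images LIMIT 1");
    if (PQresultStatus(res) == PGRES_TUPLES_OK && PQntuples(res) > 0)
    {
        size_t binlen;
        unsigned char *bin =
            PQunescapeBytea((unsigned char *) PQgetvalue(res, 0, 0), &binlen);

        printf("got %lu bytes back\n", (unsigned long) binlen);
        PQfreemem(bin);
    }
    PQclear(res);
}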
PQfreemem
Frees memory allocated by libpq, particularly memory returned by PQescapeBytea, PQunescapeBytea, and PQnotifies.
void PQfreemem(void *ptr);
The PQexec function is adequate for submitting commands in normal, synchronous applications. It has a couple of deficiencies, however, that can be of importance to some users:
• PQexec waits for the command to be completed. The application may have other work to do (such as
maintaining a user interface), in which case it won’t want to block waiting for the response.
• Since the execution of the client application is suspended while it waits for the result, it is hard for the
application to decide that it would like to try to cancel the ongoing command. (It can be done from a
signal handler, but not otherwise.)
• PQexec can return only one PGresult structure. If the submitted command string contains multiple
SQL commands, all but the last PGresult are discarded by PQexec.
Applications that do not like these limitations can instead use the underlying functions that PQexec is
built from: PQsendQuery and PQgetResult. There are also PQsendQueryParams, PQsendPrepare,
and PQsendQueryPrepared, which can be used with PQgetResult to duplicate the functionality of
PQexecParams, PQprepare, and PQexecPrepared respectively.
PQsendQuery
Submits a command to the server without waiting for the result(s). 1 is returned if the command was
successfully dispatched and 0 if not (in which case, use PQerrorMessage to get more information
about the failure).
int PQsendQuery(PGconn *conn, const char *command);
After successfully calling PQsendQuery, call PQgetResult one or more times to obtain the results.
PQsendQuery may not be called again (on the same connection) until PQgetResult has returned a
null pointer, indicating that the command is done.
PQsendQueryParams
Submits a command and separate parameters to the server without waiting for the result(s).
int PQsendQueryParams(PGconn *conn,
const char *command,
int nParams,
const Oid *paramTypes,
const char * const *paramValues,
const int *paramLengths,
const int *paramFormats,
int resultFormat);
This is equivalent to PQsendQuery except that query parameters can be specified separately from
the query string. The function’s parameters are handled identically to PQexecParams. Like
PQexecParams, it will not work on 2.0-protocol connections, and it allows only one command in
the query string.
PQsendPrepare
Sends a request to create a prepared statement with the given parameters, without waiting for com-
pletion.
int PQsendPrepare(PGconn *conn,
const char *stmtName,
const char *query,
int nParams,
const Oid *paramTypes);
This is an asynchronous version of PQprepare: it returns 1 if it was able to dispatch the request,
and 0 if not. After a successful call, call PQgetResult to determine whether the server successfully
created the prepared statement. The function’s parameters are handled identically to PQprepare.
Like PQprepare, it will not work on 2.0-protocol connections.
PQsendQueryPrepared
Sends a request to execute a prepared statement with given parameters, without waiting for the re-
sult(s).
int PQsendQueryPrepared(PGconn *conn,
const char *stmtName,
int nParams,
const char * const *paramValues,
const int *paramLengths,
const int *paramFormats,
int resultFormat);
PQgetResult
Waits for the next result from a prior PQsendQuery, PQsendQueryParams, PQsendPrepare, or
PQsendQueryPrepared call, and returns it. A null pointer is returned when the command is com-
plete and there will be no more results.
PGresult *PQgetResult(PGconn *conn);
PQgetResult must be called repeatedly until it returns a null pointer, indicating that the command
is done. (If called when no command is active, PQgetResult will just return a null pointer at once.)
Each non-null result from PQgetResult should be processed using the same PGresult accessor
functions previously described. Don’t forget to free each result object with PQclear when done with
it. Note that PQgetResult will block only if a command is active and the necessary response data
has not yet been read by PQconsumeInput.
Using PQsendQuery and PQgetResult solves one of PQexec’s problems: If a command string con-
tains multiple SQL commands, the results of those commands can be obtained individually. (This allows
a simple form of overlapped processing, by the way: the client can be handling the results of one com-
mand while the server is still working on later queries in the same command string.) However, calling
PQgetResult will still cause the client to block until the server completes the next SQL command. This
can be avoided by proper use of two more functions:
PQconsumeInput
PQconsumeInput normally returns 1 indicating “no error”, but returns 0 if there was some kind
of trouble (in which case PQerrorMessage can be consulted). Note that the result does not say
whether any input data was actually collected. After calling PQconsumeInput, the application may
check PQisBusy and/or PQnotifies to see if their state has changed.
PQconsumeInput may be called even if the application is not prepared to deal with a result or
notification just yet. The function will read available data and save it in a buffer, thereby causing
a select() read-ready indication to go away. The application can thus use PQconsumeInput to
clear the select() condition immediately, and then examine the results at leisure.
PQisBusy
Returns 1 if a command is busy, that is, PQgetResult would block waiting for input. A 0 return
indicates that PQgetResult can be called with assurance of not blocking.
int PQisBusy(PGconn *conn);
PQisBusy will not itself attempt to read data from the server; therefore PQconsumeInput must be
invoked first, or the busy state will never end.
A typical application using these functions will have a main loop that uses select() or poll() to wait
for all the conditions that it must respond to. One of the conditions will be input available from the server,
which in terms of select() means readable data on the file descriptor identified by PQsocket. When
the main loop detects input ready, it should call PQconsumeInput to read the input. It can then call
PQisBusy, followed by PQgetResult if PQisBusy returns false (0). It can also call PQnotifies to
detect NOTIFY messages (see Section 27.7).
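A sketch of such a loop on a POSIX system (conn is an open connection on which PQsendQuery has already been called; error checking is abbreviated):

#include <stdio.h>
#include <sys/select.h>
#include "libpq-fe.h"

static void
drain_results(PGconn *conn)
{
    int sock = PQsocket(conn);

    for (;;)
    {
        fd_set    input_mask;
        PGresult *res;

        FD_ZERO(&input_mask);
        FD_SET(sock, &input_mask);

        /* Sleep until the server sends us something. */
        if (select(sock + 1, &input_mask, NULL, NULL, NULL) < 0)
            return;

        PQconsumeInput(conn);
        while (!PQisBusy(conn))
        {
            res = PQgetResult(conn);
            if (res == NULL)
                return;                 /* the command is complete */
            /* ... examine the PGresult ... */
            PQclear(res);
        }
        /* PQisBusy returned 1: more data is needed; go back to select(). */
    }
}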
A client that uses PQsendQuery/PQgetResult can also attempt to cancel a command that is still being
processed by the server; see Section 27.5. But regardless of the return value of PQcancel, the application
must continue with the normal result-reading sequence using PQgetResult. A successful cancellation
will simply cause the command to terminate sooner than it would have otherwise.
By using the functions described above, it is possible to avoid blocking while waiting for input from the
database server. However, it is still possible that the application will block waiting to send output to the
server. This is relatively uncommon but can happen if very long SQL commands or data values are sent.
(It is much more probable if the application sends data via COPY IN, however.) To prevent this possibility
and achieve completely nonblocking database operation, the following additional functions may be used.
PQsetnonblocking
Sets the state of the connection to nonblocking if arg is 1, or blocking if arg is 0. Returns 0 if OK,
-1 if error.
In the nonblocking state, calls to PQsendQuery, PQputline, PQputnbytes, and PQendcopy will
not block but instead return an error if they need to be called again.
Note that PQexec does not honor nonblocking mode; if it is called, it will act in blocking fashion
anyway.
PQisnonblocking
PQflush
Attempts to flush any queued output data to the server. Returns 0 if successful (or if the send queue is empty), -1 if it failed for some reason, or 1 if it was unable to send all the data in the send queue yet (this case can only occur if the connection is nonblocking).
int PQflush(PGconn *conn);
After sending any command or data on a nonblocking connection, call PQflush. If it returns 1, wait for
the socket to be write-ready and call it again; repeat until it returns 0. Once PQflush returns 0, wait for
the socket to be read-ready and then read the response as described above.
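In code form this pattern might look like the following sketch (POSIX select() is assumed; conn is a nonblocking connection on which data has just been sent):

#include <sys/select.h>
#include "libpq-fe.h"

static int
flush_all(PGconn *conn)
{
    int sock = PQsocket(conn);
    int rc;

    while ((rc = PQflush(conn)) == 1)
    {
        fd_set output_mask;

        FD_ZERO(&output_mask);
        FD_SET(sock, &output_mask);
        /* Wait until the socket is write-ready, then flush again. */
        if (select(sock + 1, NULL, &output_mask, NULL, NULL) < 0)
            return -1;
    }
    return rc;                          /* 0 on success, -1 on flush error */
}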
PQgetCancel
Creates a data structure containing the information needed to cancel a command issued through a
particular database connection.
PGcancel *PQgetCancel(PGconn *conn);
PQgetCancel creates a PGcancel object given a PGconn connection object. It will return NULL
if the given conn is NULL or an invalid connection. The PGcancel object is an opaque structure
that is not meant to be accessed directly by the application; it can only be passed to PQcancel or
PQfreeCancel.
PQfreeCancel
Frees a data structure created by PQgetCancel.
void PQfreeCancel(PGcancel *cancel);
PQcancel
The return value is 1 if the cancel request was successfully dispatched and 0 if not. If not, errbuf is
filled with an error message explaining why not. errbuf must be a char array of size errbufsize
(the recommended size is 256 bytes).
Successful dispatch is no guarantee that the request will have any effect, however. If the cancellation
is effective, the current command will terminate early and return an error result. If the cancellation
fails (say, because the server was already done processing the command), then there will be no visible
result at all.
PQcancel can safely be invoked from a signal handler, if the errbuf is a local variable in the
signal handler. The PGcancel object is read-only as far as PQcancel is concerned, so it can also be
invoked from a thread that is separate from the one manipulating the PGconn object.
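For example, a sketch that requests cancellation of whatever command is currently running on a connection:

#include <stdio.h>
#include "libpq-fe.h"

static void
cancel_current_command(PGconn *conn)
{
    char      errbuf[256];
    PGcancel *cancel = PQgetCancel(conn);

    if (cancel == NULL)
        return;

    if (PQcancel(cancel, errbuf, sizeof(errbuf)) == 0)
        fprintf(stderr, "could not send cancel request: %s\n", errbuf);

    PQfreeCancel(cancel);
    /* The normal PQgetResult sequence must still be followed afterwards. */
}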
PQrequestCancel
Tip: This interface is somewhat obsolete, as one may achieve similar performance and greater func-
tionality by setting up a prepared statement to define the function call. Then, executing the statement
with binary transmission of parameters and results substitutes for a fast-path function call.
The function PQfn requests execution of a server function via the fast-path interface:
PGresult *PQfn(PGconn *conn,
int fnid,
int *result_buf,
int *result_len,
int result_is_int,
const PQArgBlock *args,
int nargs);
typedef struct {
int len;
int isint;
union {
int *ptr;
int integer;
} u;
} PQArgBlock;
The fnid argument is the OID of the function to be executed. args and nargs define the parameters
to be passed to the function; they must match the declared function argument list. When the isint field
of a parameter structure is true, the u.integer value is sent to the server as an integer of the indicated
length (this must be 1, 2, or 4 bytes); proper byte-swapping occurs. When isint is false, the indicated
number of bytes at *u.ptr are sent with no processing; the data must be in the format expected by the
server for binary transmission of the function’s argument data type. result_buf is the buffer in which to
place the return value. The caller must have allocated sufficient space to store the return value. (There is
no check!) The actual result length will be returned in the integer pointed to by result_len. If a 1, 2, or
4-byte integer result is expected, set result_is_int to 1, otherwise set it to 0. Setting result_is_int
to 1 causes libpq to byte-swap the value if necessary, so that it is delivered as a proper int value for the
client machine. When result_is_int is 0, the binary-format byte string sent by the server is returned
unmodified.
PQfn always returns a valid PGresult pointer. The result status should be checked before the result is
used. The caller is responsible for freeing the PGresult with PQclear when it is no longer needed.
Note that it is not possible to handle null arguments, null results, nor set-valued results when using this
interface.
The function PQnotifies returns the next notification from a list of unhandled notification messages
received from the server. It returns a null pointer if there are no pending notifications. Once a notification
is returned from PQnotifies, it is considered handled and will be removed from the list of notifications.
After processing a PGnotify object returned by PQnotifies, be sure to free it with PQfreemem. It is
sufficient to free the PGnotify pointer; the relname and extra fields do not represent separate alloca-
tions. (At present, the extra field is unused and will always point to an empty string.)
Note: In PostgreSQL 6.4 and later, the be_pid is that of the notifying server process, whereas in
earlier versions it was always the PID of your own server process.
Example 27-2 gives a sample program that illustrates the use of asynchronous notification.
PQnotifies does not actually read data from the server; it just returns messages previously absorbed
by another libpq function. In prior releases of libpq, the only way to ensure timely receipt of NOTIFY
messages was to constantly submit commands, even empty ones, and then check PQnotifies after each
PQexec. While this still works, it is deprecated as a waste of processing power.
A better way to check for NOTIFY messages when you have no useful commands to execute is to call
PQconsumeInput, then check PQnotifies. You can use select() to wait for data to arrive from
the server, thereby using no CPU power unless there is something to do. (See PQsocket to obtain the
file descriptor number to use with select().) Note that this will work OK whether you submit com-
mands with PQsendQuery/PQgetResult or simply use PQexec. You should, however, remember to
check PQnotifies after each PQgetResult or PQexec, to see if any notifications came in during the
processing of the command.
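A sketch of this recommended pattern on a POSIX system (the channel name alerts is hypothetical, conn is an open connection, and error checking is abbreviated):

#include <stdio.h>
#include <sys/select.h>
#include "libpq-fe.h"

static void
wait_for_notifications(PGconn *conn)
{
    PGresult *res = PQexec(conn, "LISTEN alerts");

    PQclear(res);

    for (;;)                            /* loop forever handling notifications */
    {
        int       sock = PQsocket(conn);
        fd_set    input_mask;
        PGnotify *notify;

        FD_ZERO(&input_mask);
        FD_SET(sock, &input_mask);
        if (select(sock + 1, &input_mask, NULL, NULL, NULL) < 0)
            return;

        PQconsumeInput(conn);
        while ((notify = PQnotifies(conn)) != NULL)
        {
            printf("NOTIFY of \"%s\" from backend PID %d\n",
                   notify->relname, notify->be_pid);
            PQfreemem(notify);
        }
    }
}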
The functions of this section should be executed only after obtaining a result status of PGRES_COPY_OUT
or PGRES_COPY_IN from PQexec or PQgetResult.
A PGresult object bearing one of these status values carries some additional data about the COPY opera-
tion that is starting. This additional data is available using functions that are also used in connection with
query results:
PQnfields
Returns the number of columns (fields) to be copied.
PQbinaryTuples
0 indicates the overall copy format is textual (rows separated by newlines, columns separated by separator characters, etc). 1 indicates the overall copy format is binary. See COPY for more information.
PQfformat
Returns the format code (0 for text, 1 for binary) associated with each column of the copy operation.
The per-column format codes will always be zero when the overall copy format is textual, but the
binary format can support both text and binary columns. (However, as of the current implementation
of COPY, only binary columns appear in a binary copy; so the per-column formats always match the
overall format at present.)
Note: These additional data values are only available when using protocol 3.0. When using protocol
2.0, all these functions will return 0.
PQputCopyData
Transmits the COPY data in the specified buffer, of length nbytes, to the server. The result is 1 if
the data was sent, zero if it was not sent because the attempt would block (this case is only possible
if the connection is in nonblocking mode), or -1 if an error occurred. (Use PQerrorMessage to
retrieve details if the return value is -1. If the value is zero, wait for write-ready and try again.)
The application may divide the COPY data stream into buffer loads of any convenient size. Buffer-
load boundaries have no semantic significance when sending. The contents of the data stream must
match the data format expected by the COPY command; see COPY for details.
PQputCopyEnd
Ends the COPY_IN operation successfully if errormsg is NULL. If errormsg is not NULL then the
COPY is forced to fail, with the string pointed to by errormsg used as the error message. (One should
not assume that this exact error message will come back from the server, however, as the server might
have already failed the COPY for its own reasons. Also note that the option to force failure does not
work when using pre-3.0-protocol connections.)
The result is 1 if the termination data was sent, zero if it was not sent because the attempt would block
(this case is only possible if the connection is in nonblocking mode), or -1 if an error occurred. (Use
PQerrorMessage to retrieve details if the return value is -1. If the value is zero, wait for write-ready
and try again.)
After successfully calling PQputCopyEnd, call PQgetResult to obtain the final result status of the
COPY command. One may wait for this result to be available in the usual way. Then return to normal
operation.
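A sketch of a complete COPY IN sequence (the table weather is hypothetical, conn is an open connection, and error checking is abbreviated):

#include <stdio.h>
#include <string.h>
#include "libpq-fe.h"

static void
copy_in_example(PGconn *conn)
{
    static const char *rows[] = {
        "Berkeley\t45\t53\n",
        "Hayward\t37\t54\n",
    };
    PGresult *res;
    int i;

    res = PQexec(conn, "COPY weather(city, temp_lo, temp_hi) FROM STDIN");
    if (PQresultStatus(res) != PGRES_COPY_IN)
        fprintf(stderr, "not in COPY_IN mode: %s", PQerrorMessage(conn));
    PQclear(res);

    for (i = 0; i < 2; i++)
        PQputCopyData(conn, rows[i], strlen(rows[i]));

    /* NULL means "end the COPY normally"; a message here would force failure. */
    PQputCopyEnd(conn, NULL);

    /* The final status of the COPY command is delivered as a regular result. */
    res = PQgetResult(conn);
    if (PQresultStatus(res) != PGRES_COMMAND_OK)
        fprintf(stderr, "COPY failed: %s", PQerrorMessage(conn));
    PQclear(res);
}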
PQgetCopyData
Attempts to obtain another row of data from the server during a COPY. Data is always returned one
data row at a time; if only a partial row is available, it is not returned. Successful return of a data row
involves allocating a chunk of memory to hold the data. The buffer parameter must be non-NULL.
*buffer is set to point to the allocated memory, or to NULL in cases where no buffer is returned. A
non-NULL result buffer must be freed using PQfreemem when no longer needed.
When a row is successfully returned, the return value is the number of data bytes in the row (this will
always be greater than zero). The returned string is always null-terminated, though this is probably
only useful for textual COPY. A result of zero indicates that the COPY is still in progress, but no row
is yet available (this is only possible when async is true). A result of -1 indicates that the COPY is
done. A result of -2 indicates that an error occurred (consult PQerrorMessage for the reason).
When async is true (not zero), PQgetCopyData will not block waiting for input; it will return zero
if the COPY is still in progress but no complete row is available. (In this case wait for read-ready
before trying again; it does not matter whether you call PQconsumeInput.) When async is false
(zero), PQgetCopyData will block until data is available or the operation completes.
After PQgetCopyData returns -1, call PQgetResult to obtain the final result status of the COPY
command. One may wait for this result to be available in the usual way. Then return to normal
operation.
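For example, a sketch that dumps a table via COPY TO STDOUT in blocking mode (async = 0); the table weather is hypothetical, conn is an open connection, and error checking is abbreviated:

#include <stdio.h>
#include "libpq-fe.h"

static void
copy_out_example(PGconn *conn)
{
    PGresult *res = PQexec(conn, "COPY weather TO STDOUT");
    char *row;
    int   len;

    if (PQresultStatus(res) != PGRES_COPY_OUT)
        fprintf(stderr, "not in COPY_OUT mode: %s", PQerrorMessage(conn));
    PQclear(res);

    while ((len = PQgetCopyData(conn, &row, 0)) > 0)
    {
        fwrite(row, 1, len, stdout);    /* one complete data row per call */
        PQfreemem(row);
    }
    if (len == -2)
        fprintf(stderr, "COPY error: %s", PQerrorMessage(conn));

    /* len == -1: the COPY is done; fetch the command's final status. */
    res = PQgetResult(conn);
    PQclear(res);
}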
PQgetline
Reads a newline-terminated line of characters (transmitted by the server) into a buffer string of size
length.
int PQgetline(PGconn *conn,
char *buffer,
int length);
This function copies up to length-1 characters into the buffer and converts the terminating newline
into a zero byte. PQgetline returns EOF at the end of input, 0 if the entire line has been read, and 1
if the buffer is full but the terminating newline has not yet been read.
Note that the application must check to see if a new line consists of the two characters \., which in-
dicates that the server has finished sending the results of the COPY command. If the application might
receive lines that are more than length-1 characters long, care is needed to be sure it recognizes
the \. line correctly (and does not, for example, mistake the end of a long data line for a terminator
line).
PQgetlineAsync
Reads a row of COPY data (transmitted by the server) into a buffer without blocking.
int PQgetlineAsync(PGconn *conn,
char *buffer,
int bufsize);
This function is similar to PQgetline, but it can be used by applications that must read COPY
data asynchronously, that is, without blocking. Having issued the COPY command and gotten a
PGRES_COPY_OUT response, the application should call PQconsumeInput and PQgetlineAsync
until the end-of-data signal is detected.
Unlike PQgetline, this function takes responsibility for detecting end-of-data.
On each call, PQgetlineAsync will return data if a complete data row is available in libpq’s input
buffer. Otherwise, no data is returned until the rest of the row arrives. The function returns -1 if the
end-of-copy-data marker has been recognized, or 0 if no data is available, or a positive number giving
the number of bytes of data returned. If -1 is returned, the caller must next call PQendcopy, and then
return to normal processing.
The data returned will not extend beyond a data-row boundary. If possible a whole row will be
returned at one time. But if the buffer offered by the caller is too small to hold a row sent by the
server, then a partial data row will be returned. With textual data this can be detected by testing
whether the last returned byte is \n or not. (In a binary COPY, actual parsing of the COPY data format
will be needed to make the equivalent determination.) The returned string is not null-terminated. (If
you want to add a terminating null, be sure to pass a bufsize one smaller than the room actually
available.)
PQputline
Sends a null-terminated string to the server. Returns 0 if OK and EOF if unable to send the string.
int PQputline(PGconn *conn,
const char *string);
The COPY data stream sent by a series of calls to PQputline has the same format as that returned
by PQgetlineAsync, except that applications are not obliged to send exactly one data row per
PQputline call; it is okay to send a partial line or multiple lines per call.
Note: Before PostgreSQL protocol 3.0, it was necessary for the application to explicitly send
the two characters \. as a final line to indicate to the server that it had finished sending COPY
data. While this still works, it is deprecated and the special meaning of \. can be expected to be
removed in a future release. It is sufficient to call PQendcopy after having sent the actual data.
PQputnbytes
Sends a non-null-terminated string to the server. Returns 0 if OK and EOF if unable to send the string.
int PQputnbytes(PGconn *conn,
const char *buffer,
int nbytes);
This is exactly like PQputline, except that the data buffer need not be null-terminated since the
number of bytes to send is specified directly. Use this procedure when sending binary data.
PQendcopy
This function waits until the server has finished the copying. It should either be issued when the last
string has been sent to the server using PQputline or when the last string has been received from
the server using PQgetline. It must be issued or the server will get “out of sync” with the client.
Upon return from this function, the server is ready to receive the next SQL command. The return
value is 0 on successful completion, nonzero otherwise. (Use PQerrorMessage to retrieve details if
the return value is nonzero.)
When using PQgetResult, the application should respond to a PGRES_COPY_OUT result by
executing PQgetline repeatedly, followed by PQendcopy after the terminator line is seen. It
should then return to the PQgetResult loop until PQgetResult returns a null pointer. Similarly a
PGRES_COPY_IN result is processed by a series of PQputline calls followed by PQendcopy, then
return to the PQgetResult loop. This arrangement will ensure that a COPY command embedded in
a series of SQL commands will be executed correctly.
Older applications are likely to submit a COPY via PQexec and assume that the transaction is done af-
ter PQendcopy. This will work correctly only if the COPY is the only SQL command in the command
string.
PQsetErrorVerbosity
PQsetErrorVerbosity sets the verbosity mode, returning the connection’s previous setting. In
TERSE mode, returned messages include severity, primary text, and position only; this will normally
fit on a single line. The default mode produces messages that include the above plus any detail, hint,
or context fields (these may span multiple lines). The VERBOSE mode includes all available fields.
Changing the verbosity does not affect the messages available from already-existing PGresult ob-
jects, only subsequently-created ones.
PQtrace
PQuntrace
typedef void (*PQnoticeReceiver) (void *arg, const PGresult *res);
PQnoticeReceiver
PQsetNoticeReceiver(PGconn *conn,
PQnoticeReceiver proc,
void *arg);
typedef void (*PQnoticeProcessor) (void *arg, const char *message);
PQnoticeProcessor
PQsetNoticeProcessor(PGconn *conn,
PQnoticeProcessor proc,
void *arg);
Each of these functions returns the previous notice receiver or processor function pointer, and sets the new
value. If you supply a null function pointer, no action is taken, but the current pointer is returned.
When a notice or warning message is received from the server, or generated internally by libpq, the
notice receiver function is called. It is passed the message in the form of a PGRES_NONFATAL_ERROR
PGresult. (This allows the receiver to extract individual fields using PQresultErrorField, or the
complete preformatted message using PQresultErrorMessage.) The same void pointer passed to
PQsetNoticeReceiver is also passed. (This pointer can be used to access application-specific state if
needed.)
The default notice receiver simply extracts the message (using PQresultErrorMessage) and passes it
to the notice processor.
The notice processor is responsible for handling a notice or warning message given in text form. It is
passed the string text of the message (including a trailing newline), plus a void pointer that is the same
one passed to PQsetNoticeProcessor. (This pointer can be used to access application-specific state if
needed.)
The default notice processor is simply
static void
defaultNoticeProcessor(void *arg, const char *message)
{
fprintf(stderr, "%s", message);
}
Once you have set a notice receiver or processor, you should expect that that function could be called as
long as either the PGconn object or PGresult objects made from it exist. At creation of a PGresult,
the PGconn’s current notice handling pointers are copied into the PGresult for possible use by functions
like PQgetvalue.
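As an illustration, a sketch of a notice processor that writes to an application-chosen log file instead of stderr (the FILE pointer travels through the void * argument; names are hypothetical):

#include <stdio.h>
#include "libpq-fe.h"

static void
logNoticeProcessor(void *arg, const char *message)
{
    FILE *logfile = (FILE *) arg;

    fprintf(logfile, "%s", message);    /* the message already ends with a newline */
}

/* After establishing the connection:
 *     PQsetNoticeProcessor(conn, logNoticeProcessor, logfile);
 */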
The following environment variables can be used to select default connection parameter values, which will be used by PQconnectdb and related connection routines if no value is directly specified by the calling code.
• PGHOST sets the database server name. If this begins with a slash, it specifies Unix-domain commu-
nication rather than TCP/IP communication; the value is then the name of the directory in which the
socket file is stored (in a default installation setup this would be /tmp).
• PGHOSTADDR specifies the numeric IP address of the database server. This can be set instead of or in
addition to PGHOST to avoid DNS lookup overhead. See the documentation of these parameters, under
PQconnectdb above, for details on their interaction.
When neither PGHOST nor PGHOSTADDR is set, the default behavior is to connect using a local
Unix-domain socket; or on machines without Unix-domain sockets, libpq will attempt to connect to
localhost.
• PGPORT sets the TCP port number or Unix-domain socket file extension for communicating with the
PostgreSQL server.
• PGDATABASE sets the PostgreSQL database name.
• PGPASSWORD sets the password used if the server demands password authentication. This environment
variable is deprecated for security reasons; instead consider using the ~/.pgpass file (see Section
27.12).
• PGSERVICE sets the service name to be looked up in pg_service.conf. This offers a shorthand way
of setting all the parameters.
• PGREALM sets the Kerberos realm to use with PostgreSQL, if it is different from the local realm.
If PGREALM is set, libpq applications will attempt authentication with servers for this realm and use
separate ticket files to avoid conflicts with local ticket files. This environment variable is only used if
Kerberos authentication is selected by the server.
• PGOPTIONS sets additional run-time options for the PostgreSQL server.
• PGSSLMODE determines whether and with what priority an SSL connection will be negotiated with
the server. There are four modes: disable will attempt only an unencrypted (non-SSL) connection; allow
will negotiate, trying first a non-SSL connection, then if that fails, trying an SSL connection; prefer
(the default) will negotiate, trying first an SSL connection, then if that fails, trying a regular non-SSL
connection; require will try only an SSL connection. If PostgreSQL is compiled without SSL support,
using option require will cause an error, while options allow and prefer will be accepted but libpq
will not in fact attempt an SSL connection.
• PGREQUIRESSL sets whether or not the connection must be made over SSL. If set to “1”, libpq will
refuse to connect if the server does not accept an SSL connection (equivalent to sslmode require). This
option is deprecated in favor of the sslmode setting, and is only available if PostgreSQL is compiled
with SSL support.
• PGCONNECT_TIMEOUT sets the maximum number of seconds that libpq will wait when attempting to
connect to the PostgreSQL server. If unset or set to zero, libpq will wait indefinitely. It is not recom-
mended to set the timeout to less than 2 seconds.
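If the relevant variables are set in the environment, libpq picks them up when no corresponding value is
given in the connection string. A minimal sketch (the host and database names are illustrative):

/* assumes, e.g., PGHOST=db.example.com and PGDATABASE=mydb in the environment */
PGconn *conn = PQconnectdb("");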
The following environment variables can be used to specify default behavior for each PostgreSQL session.
(See also the ALTER USER and ALTER DATABASE commands for ways to set default behavior on a per-
user or per-database basis.)
• PGDATESTYLE sets the default style of date/time representation. (Equivalent to SET datestyle TO
....)
• PGTZ sets the default time zone. (Equivalent to SET timezone TO ....)
• PGCLIENTENCODING sets the default client character set encoding. (Equivalent to SET
client_encoding TO ....)
• PGGEQO sets the default mode for the genetic query optimizer. (Equivalent to SET geqo TO ....)
Refer to the SQL command SET for information on correct values for these environment variables.
The following environment variables determine internal behavior of libpq; they override compiled-in de-
faults.
• PGLOCALEDIR sets the directory containing the locale files for message internationalization.
The file .pgpass in a user's home directory can contain passwords to be used if the connection requires a
password (and no password has been specified otherwise). Each line of this file is of the format:
hostname:port:database:username:password
Each of the first four fields may be a literal value, or *, which matches anything. The password field from
the first line that matches the current connection parameters will be used. (Therefore, put more-specific
entries first when you are using wildcards.) If an entry needs to contain : or \, escape this character with
\.
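A hypothetical entry that supplies the password for user alice connecting to any database on host
db.example.com would look like this:

db.example.com:5432:*:alice:sekret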
The permissions on .pgpass must disallow any access to world or group; achieve this by the command
chmod 0600 ~/.pgpass. If the permissions are less strict than this, the file will be ignored. (The file
permissions are not currently checked on Microsoft Windows, however.)
If the file ~/.postgresql/root.crt is present in the user’s home directory, libpq will use the cer-
tificate list stored therein to verify the server’s certificate. (On Microsoft Windows the file is named
%APPDATA%\postgresql\root.crt.) The SSL connection will fail if the server does not present a
certificate; therefore, to use this feature the server must also have a root.crt file.
The deprecated functions PQrequestCancel, PQoidStatus and fe_setauthsvc are not thread-safe
and should not be used in multithread programs. PQrequestCancel can be replaced by PQcancel.
PQoidStatus can be replaced by PQoidValue. There is no good reason to call fe_setauthsvc at all.
libpq applications that use the crypt authentication method rely on the crypt() operating system func-
tion, which is often not thread-safe. It is better to use the md5 method, which is thread-safe on all platforms.
If you experience problems with threaded applications, run the program in src/tools/thread to see if
your platform has thread-unsafe functions. This program is run by configure, but for binary distributions
your library might not match the library used to build the binaries.
If you fail to include the libpq-fe.h header file, you will normally get error messages from your compiler similar to
foo.c: In function ‘main’:
foo.c:34: ‘PGconn’ undeclared (first use in this function)
foo.c:35: ‘PGresult’ undeclared (first use in this function)
foo.c:54: ‘CONNECTION_BAD’ undeclared (first use in this function)
foo.c:68: ‘PGRES_COMMAND_OK’ undeclared (first use in this function)
foo.c:95: ‘PGRES_TUPLES_OK’ undeclared (first use in this function)
• Point your compiler to the directory where the PostgreSQL header files were installed, by supplying
the -Idirectory option to your compiler. (In some cases the compiler will look into the directory in
question by default, so you can omit this option.) For instance, your compile command line could look
like:
cc -c -I/usr/local/pgsql/include testprog.c
If you are using makefiles then add the option to the CPPFLAGS variable:
CPPFLAGS += -I/usr/local/pgsql/include
If there is any chance that your program might be compiled by other users then you should not hardcode
the directory location like that. Instead, you can run the utility pg_config to find out where the header
files are on the local system:
$ pg_config --includedir
/usr/local/include
Failure to specify the correct option to the compiler will result in an error message such as
testlibpq.c:8:22: libpq-fe.h: No such file or directory
• When linking the final program, specify the option -lpq so that the libpq library gets pulled in, as
well as the option -Ldirectory to point the compiler to the directory where the libpq library resides.
(Again, the compiler will search some directories by default.) For maximum portability, put the -L
option before the -lpq option. For example:
cc -o testprog testprog1.o testprog2.o -L/usr/local/pgsql/lib -lpq
You can find out the library directory using pg_config as well:
$ pg_config --libdir
/usr/local/pgsql/lib
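If you use GNU make, both locations can be picked up from pg_config rather than hardcoded, for instance:

CPPFLAGS += -I$(shell pg_config --includedir)
LDFLAGS  += -L$(shell pg_config --libdir)
LDLIBS   += -lpq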
Error messages that point to problems in this area typically complain about undefined references to
functions whose names start with PQ; they mean that you forgot the -L option or did not specify the right
directory.
If your code references the header file libpq-int.h and you refuse to fix your code to not use it, starting
in PostgreSQL 7.2 this file is found in includedir/postgresql/internal/libpq-int.h, so you need
to add the appropriate -I option to your compiler command line.
/*
* testlibpq.c
*
* Test the C version of LIBPQ, the POSTGRES frontend library.
*/
#include <stdio.h>
#include <stdlib.h>
#include "libpq-fe.h"
static void
exit_nicely(PGconn *conn)
{
PQfinish(conn);
exit(1);
}
int
main(int argc, char **argv)
{
const char *conninfo;
PGconn *conn;
PGresult *res;
int nFields;
int i,
j;
/*
* If the user supplies a parameter on the command line, use it as
* the conninfo string; otherwise default to setting dbname=template1
* and using environment variables or defaults for all other connection
* parameters.
*/
/*
* Our test case here involves using a cursor, for which we must be
* inside a transaction block. We could do the whole thing with a
* single PQexec() of "select * from pg_database", but that’s too
* trivial to make a good example.
*/
/*
* Should PQclear PGresult whenever it is no longer needed to avoid
* memory leaks
*/
PQclear(res);
/*
* Fetch rows from pg_database, the system catalog of databases
*/
res = PQexec(conn, "DECLARE myportal CURSOR FOR select * from pg_database");
if (PQresultStatus(res) != PGRES_COMMAND_OK)
{
fprintf(stderr, "DECLARE CURSOR failed: %s", PQerrorMessage(conn));
PQclear(res);
exit_nicely(conn);
}
PQclear(res);
PQclear(res);
/* close the portal ... we don’t bother to check for errors ... */
res = PQexec(conn, "CLOSE myportal");
PQclear(res);
return 0;
}
/*
* testlibpq2.c
* Test of the asynchronous notification interface
*
* Start this program, then from psql in another window do
* NOTIFY TBL2;
* Repeat four times to get this program to exit.
*
* Or, if you want to get fancy, try this:
* populate a database with the following commands
* (provided in src/test/examples/testlibpq2.sql):
*
* CREATE TABLE TBL1 (i int4);
*
* CREATE TABLE TBL2 (i int4);
*
* CREATE RULE r1 AS ON INSERT TO TBL1 DO
* (INSERT INTO TBL2 VALUES (new.i); NOTIFY TBL2);
*
* and do this four times:
*
* INSERT INTO TBL1 VALUES (10);
*/
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <errno.h>
#include <sys/time.h>
#include "libpq-fe.h"
static void
exit_nicely(PGconn *conn)
{
PQfinish(conn);
exit(1);
}
int
main(int argc, char **argv)
{
const char *conninfo;
PGconn *conn;
PGresult *res;
PGnotify *notify;
int nnotifies;
/*
* If the user supplies a parameter on the command line, use it as
* the conninfo string; otherwise default to setting dbname=template1
* and using environment variables or defaults for all other connection
* parameters.
*/
if (argc > 1)
conninfo = argv[1];
else
conninfo = "dbname = template1";
PQerrorMessage(conn));
exit_nicely(conn);
}
/*
* Issue LISTEN command to enable notifications from the rule’s NOTIFY.
*/
res = PQexec(conn, "LISTEN TBL2");
if (PQresultStatus(res) != PGRES_COMMAND_OK)
{
fprintf(stderr, "LISTEN command failed: %s", PQerrorMessage(conn));
PQclear(res);
exit_nicely(conn);
}
/*
* should PQclear PGresult whenever it is no longer needed to avoid
* memory leaks
*/
PQclear(res);
sock = PQsocket(conn);
if (sock < 0)
break; /* shouldn’t happen */
FD_ZERO(&input_mask);
FD_SET(sock, &input_mask);
notify->relname, notify->be_pid);
PQfreemem(notify);
nnotifies++;
}
}
fprintf(stderr, "Done.\n");
return 0;
}
/*
* testlibpq3.c
* Test out-of-line parameters and binary I/O.
*
* Before running this, populate a database with the following commands
* (provided in src/test/examples/testlibpq3.sql):
*
* CREATE TABLE test1 (i int4, t text, b bytea);
*
* INSERT INTO test1 values (1, 'joe''s place', '\\000\\001\\002\\003\\004');
* INSERT INTO test1 values (2, 'ho there', '\\004\\003\\002\\001\\000');
*
* The expected output is:
*
* tuple 0: got
* i = (4 bytes) 1
* t = (11 bytes) 'joe's place'
* b = (5 bytes) \000\001\002\003\004
*
*/
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>
#include "libpq-fe.h"
/* for ntohl/htonl */
#include <netinet/in.h>
#include <arpa/inet.h>
static void
exit_nicely(PGconn *conn)
{
PQfinish(conn);
exit(1);
}
int
main(int argc, char **argv)
{
const char *conninfo;
PGconn *conn;
PGresult *res;
const char *paramValues[1];
int i,
j;
int i_fnum,
t_fnum,
b_fnum;
/*
* If the user supplies a parameter on the command line, use it as
* the conninfo string; otherwise default to setting dbname=template1
* and using environment variables or defaults for all other connection
* parameters.
*/
if (argc > 1)
conninfo = argv[1];
else
conninfo = "dbname = template1";
/*
* The point of this program is to illustrate use of PQexecParams()
* with out-of-line parameters, as well as binary transmission of
* results. By using out-of-line parameters we can avoid a lot of
* tedious mucking about with quoting and escaping. Notice how we
* don’t have to do anything special with the quote mark in the
* parameter value.
*/
res = PQexecParams(conn,
"SELECT * FROM test1 WHERE t = $1",
1, /* one param */
NULL, /* let the backend deduce param type */
paramValues,
NULL, /* don't need param lengths since text */
NULL, /* default to all text params */
1); /* ask for binary results */
if (PQresultStatus(res) != PGRES_TUPLES_OK)
{
fprintf(stderr, "SELECT failed: %s", PQerrorMessage(conn));
PQclear(res);
exit_nicely(conn);
}
/* Get the field values (we ignore possibility they are null!) */
iptr = PQgetvalue(res, i, i_fnum);
tptr = PQgetvalue(res, i, t_fnum);
bptr = PQgetvalue(res, i, b_fnum);
/*
* The binary representation of INT4 is in network byte order,
* which we’d better coerce to the local byte order.
*/
ival = ntohl(*((uint32_t *) iptr));
/*
* The binary representation of TEXT is, well, text, and since
* libpq was nice enough to append a zero byte to it, it’ll work
* just fine as a C string.
*
* The binary representation of BYTEA is a bunch of bytes, which
* could include embedded nulls so we have to pay attention to
* field length.
*/
blen = PQgetlength(res, i, b_fnum);
PQclear(res);
return 0;
}
Chapter 28. Large Objects
PostgreSQL has a large object facility, which provides stream-style access to user data that is stored in a
special large-object structure. Streaming access is useful when working with data values that are too large
to manipulate conveniently as a whole.
This chapter describes the implementation and the programming and query language interfaces to Post-
greSQL large object data. We use the libpq C library for the examples in this chapter, but most program-
ming interfaces native to PostgreSQL support equivalent functionality. Other interfaces may use the large
object interface internally to provide generic support for large values. This is not described here.
28.1. History
POSTGRES 4.2, the indirect predecessor of PostgreSQL, supported three standard implementations of
large objects: as files external to the POSTGRES server, as external files managed by the POSTGRES
server, and as data stored within the POSTGRES database. This caused considerable confusion among
users. As a result, only support for large objects as data stored within the database is retained in Post-
greSQL. Even though this is slower to access, it provides stricter data integrity. For historical reasons, this
storage scheme is referred to as Inversion large objects. (You will see the term Inversion used occasionally
to mean the same thing as large object.) Since PostgreSQL 7.1, all large objects are placed in one system
table called pg_largeobject.
PostgreSQL 7.1 introduced a mechanism (nicknamed “TOAST”) that allows data values to be much larger
than single pages. This makes the large object facility partially obsolete. One remaining advantage of the
large object facility is that it allows values up to 2 GB in size, whereas TOASTed fields can be at most 1
GB. Also, large objects can be manipulated piece-by-piece much more easily than ordinary data fields, so
the practical limits are considerably different.
lo_creat creates a new large object. mode is a bit mask describing several different attributes of the new object.
The symbolic constants used here are defined in the header file libpq/libpq-fs.h. The access type
(read, write, or both) is controlled by or’ing together the bits INV_READ and INV_WRITE. The low-order
sixteen bits of the mask have historically been used at Berkeley to designate the storage manager number
on which the large object should reside. These bits should always be zero now. (The access type does not
actually do anything anymore either, but one or both flag bits must be set to avoid an error.) The return
value is the OID that was assigned to the new large object, or InvalidOid (zero) on failure.
An example:
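(The function is declared as Oid lo_creat(PGconn *conn, int mode). A minimal call, assuming conn is an
established connection:)

inv_oid = lo_creat(conn, INV_READ | INV_WRITE);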
In lo_import, filename specifies the operating system name of the file to be imported as a large object. The return
value is the OID that was assigned to the new large object, or InvalidOid (zero) on failure. Note that the
file is read by the client interface library, not by the server; so it must exist in the client filesystem and be
readable by the client application.
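For reference, lo_import is declared as

Oid lo_import(PGconn *conn, const char *filename);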
In lo_export, the lobjId argument specifies the OID of the large object to export and the filename argument spec-
ifies the operating system name of the file. Note that the file is written by the client interface library, not
by the server. Returns 1 on success, -1 on failure.
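The corresponding declaration is

int lo_export(PGconn *conn, Oid lobjId, const char *filename);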
In lo_open, the lobjId argument specifies the OID of the large object to open. The mode bits control whether the
object is opened for reading (INV_READ), writing (INV_WRITE), or both. A large object cannot be opened
before it is created. lo_open returns a (non-negative) large object descriptor for later use in lo_read,
lo_write, lo_lseek, lo_tell, and lo_close. The descriptor is only valid for the duration of the
current transaction. On failure, -1 is returned.
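For reference, lo_open is declared as

int lo_open(PGconn *conn, Oid lobjId, int mode);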
int lo_write(PGconn *conn, int fd, const char *buf, size_t len);
writes len bytes from buf to large object descriptor fd. The fd argument must have been returned by a
previous lo_open. The number of bytes actually written is returned. In the event of an error, the return
value is negative.
lo_read reads len bytes from large object descriptor fd into buf. The fd argument must have been returned by
a previous lo_open. The number of bytes actually read is returned. In the event of an error, the return
value is negative.
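Its declaration is

int lo_read(PGconn *conn, int fd, char *buf, size_t len);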
lo_lseek moves the current location pointer for the large object descriptor identified by fd to the
new location specified by offset. The valid values for whence are SEEK_SET (seek from object start),
SEEK_CUR (seek from current position), and SEEK_END (seek from object end). The return value is the
new location pointer, or -1 on error.
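The declaration is

int lo_lseek(PGconn *conn, int fd, int offset, int whence);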
where fd is a large object descriptor returned by lo_open. On success, lo_close returns zero. On error,
the return value is negative.
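For reference, lo_close is declared as

int lo_close(PGconn *conn, int fd);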
Any large object descriptors that remain open at the end of a transaction will be closed automatically.
In lo_unlink, the lobjId argument specifies the OID of the large object to remove. Returns 1 if successful, -1 on
failure.
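The declaration of lo_unlink is

int lo_unlink(PGconn *conn, Oid lobjId);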
The server-side lo_import and lo_export functions behave considerably differently from their client-
side analogs. These two functions read and write files in the server’s file system, using the permissions
of the database’s owning user. Therefore, their use is restricted to superusers. In contrast, the client-side
import and export functions read and write files in the client’s file system, using the permissions of the
client program. The client-side functions can be used by any PostgreSQL user.
/*--------------------------------------------------------------
*
* testlo.c--
* test using large objects with libpq
*
* Copyright (c) 1994, Regents of the University of California
*
*--------------------------------------------------------------
*/
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>              /* for O_RDONLY, O_CREAT, O_WRONLY */
#include <unistd.h>             /* for read(), write(), close() */
#include "libpq-fe.h"
#include "libpq/libpq-fs.h"

#define BUFSIZE 1024            /* transfer buffer size */
/*
* importFile
* import file "in_filename" into database as large object "lobjOid"
*
*/
Oid
importFile(PGconn *conn, char *filename)
{
Oid lobjId;
int lobj_fd;
char buf[BUFSIZE];
int nbytes,
tmp;
int fd;
/*
* open the file to be read in
*/
fd = open(filename, O_RDONLY, 0666);
if (fd < 0)
{ /* error */
fprintf(stderr, "can’t open unix file %s\n", filename);
}
/*
* create the large object
*/
lobjId = lo_creat(conn, INV_READ | INV_WRITE);
if (lobjId == 0)
fprintf(stderr, "can’t create large object\n");
/*
* open the large object for writing
*/
lobj_fd = lo_open(conn, lobjId, INV_WRITE);
/*
* read in from the Unix file and write to the inversion file
*/
while ((nbytes = read(fd, buf, BUFSIZE)) > 0)
{
tmp = lo_write(conn, lobj_fd, buf, nbytes);
if (tmp < nbytes)
fprintf(stderr, "error while reading large object\n");
}
(void) close(fd);
(void) lo_close(conn, lobj_fd);
return lobjId;
}
void
pickout(PGconn *conn, Oid lobjId, int start, int len)
{
int lobj_fd;
char *buf;
int nbytes;
int nread;
/* open the large object and position at the requested start offset */
lobj_fd = lo_open(conn, lobjId, INV_READ);
if (lobj_fd < 0)
fprintf(stderr, "can't open large object %d\n", lobjId);
lo_lseek(conn, lobj_fd, start, SEEK_SET);
buf = malloc(len + 1);
nread = 0;
while (len - nread > 0)
{
nbytes = lo_read(conn, lobj_fd, buf, len - nread);
buf[nbytes] = '\0';
fprintf(stderr, ">>> %s", buf);
nread += nbytes;
}
free(buf);
fprintf(stderr, "\n");
lo_close(conn, lobj_fd);
}
void
overwrite(PGconn *conn, Oid lobjId, int start, int len)
{
int lobj_fd;
char *buf;
int nbytes;
int nwritten;
int i;
/* open the large object, seek to the start position, and fill a buffer of data */
lobj_fd = lo_open(conn, lobjId, INV_WRITE);
if (lobj_fd < 0)
fprintf(stderr, "can't open large object %d\n", lobjId);
lo_lseek(conn, lobj_fd, start, SEEK_SET);
buf = malloc(len + 1);
for (i = 0; i < len; i++)
buf[i] = 'X';
buf[i] = '\0';
nwritten = 0;
while (len - nwritten > 0)
{
nbytes = lo_write(conn, lobj_fd, buf + nwritten, len - nwritten);
nwritten += nbytes;
}
free(buf);
fprintf(stderr, "\n");
lo_close(conn, lobj_fd);
}
/*
* exportFile
* export large object "lobjOid" to file "out_filename"
*
*/
void
exportFile(PGconn *conn, Oid lobjId, char *filename)
{
int lobj_fd;
char buf[BUFSIZE];
int nbytes,
tmp;
int fd;
/*
* open the large object for reading
*/
lobj_fd = lo_open(conn, lobjId, INV_READ);
if (lobj_fd < 0)
{
fprintf(stderr, "can’t open large object %d\n",
lobjId);
}
/*
* open the file to be written to
*/
fd = open(filename, O_CREAT | O_WRONLY, 0666);
if (fd < 0)
{ /* error */
fprintf(stderr, "can’t open unix file %s\n",
filename);
}
/*
* read in from the Unix file and write to the inversion file
*/
while ((nbytes = lo_read(conn, lobj_fd, buf, BUFSIZE)) > 0)
{
tmp = write(fd, buf, nbytes);
if (tmp < nbytes)
{
fprintf(stderr, "error while writing %s\n",
filename);
}
}
return;
}
void
exit_nicely(PGconn *conn)
{
PQfinish(conn);
exit(1);
}
int
main(int argc, char **argv)
{
char *in_filename,
*out_filename;
char *database;
Oid lobjOid;
PGconn *conn;
PGresult *res;
if (argc != 4)
{
fprintf(stderr, "Usage: %s database_name in_filename out_filename\n",
argv[0]);
exit(1);
}
database = argv[1];
in_filename = argv[2];
out_filename = argv[3];
/*
* set up the connection
*/
conn = PQsetdb(NULL, NULL, NULL, NULL, database);
Chapter 29. ECPG - Embedded SQL in C
This chapter describes the embedded SQL package for PostgreSQL. It was written by Linus Tolke and
Michael Meskes. Originally it was written to work with C. It also works with C++, but it does not
recognize all C++ constructs yet.
This documentation is quite incomplete. But since this interface is standardized, additional information
can be found in many resources about SQL.
These statements syntactically take the place of a C statement. Depending on the particular statement, they
may appear at the global level or within a function. Embedded SQL statements follow the case-sensitivity
rules of normal SQL code, and not those of C.
The following sections explain all the embedded SQL statements.
• dbname[@hostname][:port]
• tcp:postgresql://hostname[:port][/dbname][?options]
• unix:postgresql://hostname[:port][/dbname][?options]
• a reference to a character variable containing one of the above forms (see examples)
• DEFAULT
If you specify the connection target literally (that is, not through a variable reference) and you don’t quote
the value, then the case-insensitivity rules of normal SQL are applied. In that case you can also double-
quote the individual parameters separately as needed. In practice, it is probably less error-prone to use a
(single-quoted) string literal or a variable reference. The connection target DEFAULT initiates a connection
to the default database under the default user name. No separate user name or connection name may be
specified in that case.
There are also different ways to specify the user name:
• username
• username/password
As above, the parameters username and password may be an SQL identifier, an SQL string literal, or
a reference to a character variable.
The connection-name is used to handle multiple connections in one program. It can be omitted if a
program uses only one connection. The most recently opened connection becomes the current connection,
which is used by default when an SQL statement is to be executed (see later in this chapter).
Here are some examples of CONNECT statements:
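For instance (the database, host, connection, and user names here are purely illustrative):

EXEC SQL CONNECT TO mydb@sql.example.com;
EXEC SQL CONNECT TO unix:postgresql://localhost/mydb AS myconn USER alice;
EXEC SQL CONNECT TO :target USER :user;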
The last form makes use of the variant referred to above as character variable reference. You will see in
later sections how C variables can be used in SQL statements when you prefix them with a colon.
Be advised that the format of the connection target is not specified in the SQL standard. So if you want
to develop portable applications, you might want to use something based on the last example above to
encapsulate the connection target string somewhere.
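A connection is closed with the statement

EXEC SQL DISCONNECT [connection];

where connection can be specified in any of the following ways: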
• connection-name
• DEFAULT
• CURRENT
• ALL
Inserting rows:
EXEC SQL INSERT INTO foo (number, ascii) VALUES (9999, ’doodad’);
EXEC SQL COMMIT;
Deleting rows:
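(A sketch along the lines of the insert above:)

EXEC SQL DELETE FROM foo WHERE number = 9999;
EXEC SQL COMMIT;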
Single-row select:
EXEC SQL SELECT foo INTO :FooBar FROM table1 WHERE ascii = ’doodad’;
Updates:
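(Again a sketch using the same table:)

EXEC SQL UPDATE foo SET ascii = 'foobar' WHERE number = 9999;
EXEC SQL COMMIT;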
The tokens of the form :something are host variables, that is, they refer to variables in the C program.
They are explained in Section 29.6.
In the default mode, statements are committed only when EXEC SQL COMMIT is issued. The embedded
SQL interface also supports autocommit of transactions (similar to libpq behavior) via the -t command-
line option to ecpg (see below) or via the EXEC SQL SET AUTOCOMMIT TO ON statement. In autocom-
mit mode, each command is automatically committed unless it is inside an explicit transaction block. This
mode can be explicitly turned off using EXEC SQL SET AUTOCOMMIT TO OFF.
This option is particularly suitable if the application needs to use several connections in mixed order.
If your application uses multiple threads of execution, they cannot share a connection concurrently. You
must either explicitly control access to the connection (using mutexes) or use a connection for each thread.
If each thread uses its own connection, you will need to use the AT clause to specify which connection
the thread will use.
The second option is to execute a statement to switch the current connection. That statement is:
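EXEC SQL SET CONNECTION connection-name;

(connection-name is the name the connection was given when it was opened.)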
This option is particularly convenient if many statements are to be executed on the same connection. It is
not thread-aware.
29.6.1. Overview
Passing data between the C program and the SQL statements is particularly simple in embedded SQL.
Instead of having the program paste the data into the statement, which entails various complications, such
as properly quoting the value, you can simply write the name of a C variable into the SQL statement,
prefixed by a colon. For example:
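(A sketch; the table name is illustrative:)

EXEC SQL INSERT INTO sometable VALUES (:v1, 'foo', :v2);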
This statement refers to two C variables named v1 and v2 and also uses a regular SQL string literal, to
illustrate that you are not restricted to using one kind of data or the other.
This style of inserting C variables in SQL statements works anywhere a value expression is expected in
an SQL statement. In the SQL environment we call the references to C variables host variables.
int x;
char foo[16], bar[16];
VARCHAR var[180];
is converted into
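(approximately)

struct varchar_var { int len; char arr[180]; } var;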
This structure is suitable for interfacing with SQL datums of type varchar.
/*
* assume this table:
* CREATE TABLE test1 (a int, b varchar(50));
*/
...
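/* a sketch (v1 and v2 are host variables declared in a declare section) */
EXEC SQL SELECT a, b INTO :v1, :v2 FROM test1 WHERE a = 1;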
So the INTO clause appears between the select list and the FROM clause. The number of elements in the
select list and the list after INTO (also called the target list) must be equal.
Here is an example using the command FETCH:
...
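/* a sketch: declare a cursor over the test1 table shown above */
EXEC SQL DECLARE foo CURSOR FOR SELECT a, b FROM test1;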
...
do {
...
EXEC SQL FETCH NEXT FROM foo INTO :v1, :v2;
...
} while (...);
Here the INTO clause appears after all the normal clauses.
Both of these methods only allow retrieving one row at a time. If you need to process result sets that
potentially contain more than one row, you need to use a cursor, as shown in the second example.
29.6.4. Indicators
The examples above do not handle null values. In fact, the retrieval examples will raise an error if they
fetch a null value from the database. To be able to pass null values to the database or retrieve null values
from the database, you need to append a second host variable specification to each host variable that
contains data. This second host variable is called the indicator and contains a flag that tells whether the
datum is null, in which case the value of the real host variable is ignored. Here is an example that handles
the retrieval of null values correctly:
...
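/* a sketch: val holds the data, val_ind is an int indicator variable */
EXEC SQL SELECT b INTO :val :val_ind FROM test1 WHERE a = 1;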
The indicator variable val_ind will be zero if the value was not null, and it will be negative if the value
was null.
The indicator has another function: if the indicator value is positive, it means that the value is not null, but
it was truncated when it was stored in the host variable.
You may not execute statements that retrieve data (e.g., SELECT) this way.
A more powerful way to execute arbitrary SQL statements is to prepare them once and execute the pre-
pared statement as often as you like. It is also possible to prepare a generalized version of a statement
and then execute specific versions of it by substituting parameters. When preparing the statement, write
question marks where you want to substitute parameters later. For example:
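(A sketch; the statement name mystmt and the statement text are illustrative:)

EXEC SQL BEGIN DECLARE SECTION;
const char *stmt = "INSERT INTO test1 VALUES (?, ?);";
EXEC SQL END DECLARE SECTION;

EXEC SQL PREPARE mystmt FROM :stmt;
 ...
EXEC SQL EXECUTE mystmt USING 42, 'foobar';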
If the statement you are executing returns values, then add an INTO clause:
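(Continuing the sketch, assuming the prepared statement is a query that returns one row:)

EXEC SQL EXECUTE mystmt INTO :v1, :v2 USING 37;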
An EXECUTE command may have an INTO clause, a USING clause, both, or neither.
When you don’t need the prepared statement anymore, you should deallocate it:
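EXEC SQL DEALLOCATE PREPARE name;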
An SQL descriptor area is created with EXEC SQL ALLOCATE DESCRIPTOR identifier. The identifier
serves as the "variable name" of the descriptor area. When you don't need the descriptor anymore, you
should deallocate it:
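EXEC SQL DEALLOCATE DESCRIPTOR identifier;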
To use a descriptor area, specify it as the storage target in an INTO clause, instead of listing host variables:
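(A sketch with illustrative cursor and descriptor names:)

EXEC SQL FETCH NEXT FROM mycursor INTO SQL DESCRIPTOR mydesc;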
Now how do you get the data out of the descriptor area? You can think of the descriptor area as a structure
with named fields. To retrieve the value of a field from the header and store it into a host variable, use the
following command:
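EXEC SQL GET DESCRIPTOR name :hostvar = field;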
Currently, there is only one header field defined: COUNT, which tells how many item descriptor areas exist
(that is, how many columns are contained in the result). The host variable needs to be of an integer type.
To get a field from the item descriptor area, use the following command:
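EXEC SQL GET DESCRIPTOR name VALUE num :hostvar = field;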
num can be a literal integer or a host variable containing an integer. Possible fields are:
CARDINALITY (integer)
number of rows in the result set
DATA
actual data item (therefore, the data type of this field depends on the query)
DATETIME_INTERVAL_CODE (integer)
?
DATETIME_INTERVAL_PRECISION (integer)
not implemented
INDICATOR (integer)
not implemented
LENGTH (integer)
not implemented
OCTET_LENGTH (integer)
RETURNED_OCTET_LENGTH (integer)
The statement EXEC SQL WHENEVER condition action specifies what is to happen under one of the
following conditions:
SQLERROR
The specified action is called whenever an error occurs during the execution of an SQL statement.
SQLWARNING
The specified action is called whenever a warning occurs during the execution of an SQL statement.
NOT FOUND
The specified action is called whenever an SQL statement retrieves or affects zero rows. (This con-
dition is not an error, but you might be interested in handling it specially.)
The action can be any of the following:
CONTINUE
This effectively means that the condition is ignored. This is the default.
GOTO label
GO TO label
Jump to the specified label (using a C goto statement).
SQLPRINT
Print a message to standard error. This is useful for simple programs or during prototyping. The
details of the message cannot be configured.
STOP
Call exit(1), which will terminate the program.
DO BREAK
Execute the C statement break. This should only be used in loops or switch statements.
CALL name (args)
DO name (args)
Call the specified C function with the specified arguments.
Here is an example that you might want to use in a simple program. It prints a simple message when a
warning occurs and aborts the program when an error happens.
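EXEC SQL WHENEVER SQLWARNING SQLPRINT;
EXEC SQL WHENEVER SQLERROR STOP;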
The statement EXEC SQL WHENEVER is a directive of the SQL preprocessor, not a C statement. The
error or warning actions that it sets apply to all embedded SQL statements that appear below the point
where the handler is set, unless a different action was set for the same condition between the first EXEC
SQL WHENEVER and the SQL statement causing the condition, regardless of the flow of control in the C
program. So neither of the two following C program excerpts will have the desired effect.
/*
* WRONG
*/
int main(int argc, char *argv[])
{
...
if (verbose) {
EXEC SQL WHENEVER SQLWARNING SQLPRINT;
}
...
EXEC SQL SELECT ...;
...
}
/*
* WRONG
*/
int main(int argc, char *argv[])
{
...
set_error_handler();
...
29.9.2. sqlca
For more powerful error handling, the embedded SQL interface provides a global variable with the name
sqlca that has the following structure:
struct
{
char sqlcaid[8];
long sqlabc;
long sqlcode;
struct
{
int sqlerrml;
char sqlerrmc[70];
} sqlerrm;
char sqlerrp[8];
long sqlerrd[6];
char sqlwarn[8];
char sqlstate[5];
} sqlca;
(In a multithreaded program, every thread automatically gets its own copy of sqlca. This works similarly
to the handling of the standard C global variable errno.)
sqlca covers both warnings and errors. If multiple warnings or errors occur during the execution of a
statement, then sqlca will only contain information about the last one.
If no error occurred in the last SQL statement, sqlca.sqlcode will be 0 and sqlca.sqlstate will be
"00000". If a warning or error occurred, then sqlca.sqlcode will be negative and sqlca.sqlstate
will be different from "00000". A positive sqlca.sqlcode indicates a harmless condition, such as that
the last query returned zero rows. sqlcode and sqlstate are two different error code schemes; details
appear below.
If the last SQL statement was successful, then sqlca.sqlerrd[1] contains the OID of the processed
row, if applicable, and sqlca.sqlerrd[2] contains the number of processed or returned rows, if appli-
cable to the command.
In case of an error or warning, sqlca.sqlerrm.sqlerrmc will contain a string that describes the er-
ror. The field sqlca.sqlerrm.sqlerrml contains the length of the error message that is stored in
sqlca.sqlerrm.sqlerrmc (the result of strlen(), not really interesting for a C programmer). Note
that some messages are too long to fit in the fixed-size sqlerrmc array; they will be truncated.
In case of a warning, sqlca.sqlwarn[2] is set to W. (In all other cases, it is set to something different
from W.) If sqlca.sqlwarn[1] is set to W, then a value was truncated when it was stored in a host
variable. sqlca.sqlwarn[0] is set to W if any of the other elements are set to indicate a warning.
The fields sqlcaid, sqlabc, sqlerrp, and the remaining elements of sqlerrd and sqlwarn cur-
rently contain no useful information.
The structure sqlca is not defined in the SQL standard, but is implemented in several other SQL database
systems. The definitions are similar at the core, but if you want to write portable applications, then you
should investigate the different implementations carefully.
-12 (ECPG_OUT_OF_MEMORY)
Indicates that your virtual memory is exhausted. (SQLSTATE YE001)
-200 (ECPG_UNSUPPORTED)
Indicates the preprocessor has generated something that the library does not know about. Perhaps
you are running incompatible versions of the preprocessor and the library. (SQLSTATE YE002)
-201 (ECPG_TOO_MANY_ARGUMENTS)
This means that the command specified more host variables than the command expected. (SQL-
STATE 07001 or 07002)
-202 (ECPG_TOO_FEW_ARGUMENTS)
This means that the command specified fewer host variables than the command expected. (SQL-
STATE 07001 or 07002)
-203 (ECPG_TOO_MANY_MATCHES)
This means a query has returned multiple rows but the statement was only prepared to store one
result row (for example, because the specified variables are not arrays). (SQLSTATE 21000)
-204 (ECPG_INT_FORMAT)
The host variable is of type int and the datum in the database is of a different type and contains a
value that cannot be interpreted as an int. The library uses strtol() for this conversion. (SQL-
STATE 42804)
-205 (ECPG_UINT_FORMAT)
The host variable is of type unsigned int and the datum in the database is of a different type and
contains a value that cannot be interpreted as an unsigned int. The library uses strtoul() for
this conversion. (SQLSTATE 42804)
-206 (ECPG_FLOAT_FORMAT)
The host variable is of type float and the datum in the database is of another type and contains a
value that cannot be interpreted as a float. The library uses strtod() for this conversion. (SQL-
STATE 42804)
-207 (ECPG_CONVERT_BOOL)
This means the host variable is of type bool and the datum in the database is neither ’t’ nor ’f’.
(SQLSTATE 42804)
-208 (ECPG_EMPTY)
The statement sent to the PostgreSQL server was empty. (This cannot normally happen in an embed-
ded SQL program, so it may point to an internal error.) (SQLSTATE YE002)
-209 (ECPG_MISSING_INDICATOR)
A null value was returned and no null indicator variable was supplied. (SQLSTATE 22002)
-210 (ECPG_NO_ARRAY)
An ordinary variable was used in a place that requires an array. (SQLSTATE 42804)
-211 (ECPG_DATA_NOT_ARRAY)
The database returned an ordinary variable in a place that requires an array value. (SQLSTATE 42804)
-220 (ECPG_NO_CONN)
The program tried to access a connection that does not exist. (SQLSTATE 08003)
-221 (ECPG_NOT_CONN)
The program tried to access a connection that does exist but is not open. (This is an internal error.)
(SQLSTATE YE002)
-230 (ECPG_INVALID_STMT)
The statement you are trying to use has not been prepared. (SQLSTATE 26000)
-240 (ECPG_UNKNOWN_DESCRIPTOR)
The descriptor specified was not found. The statement you are trying to use has not been prepared.
(SQLSTATE 33000)
-241 (ECPG_INVALID_DESCRIPTOR_INDEX)
The descriptor index specified was out of range. (SQLSTATE 07009)
-242 (ECPG_UNKNOWN_DESCRIPTOR_ITEM)
An invalid descriptor item was requested. (This is an internal error.) (SQLSTATE YE002)
-243 (ECPG_VAR_NOT_NUMERIC)
During the execution of a dynamic statement, the database returned a numeric value and the host
variable was not numeric. (SQLSTATE 07006)
-244 (ECPG_VAR_NOT_CHAR)
During the execution of a dynamic statement, the database returned a non-numeric value and the host
variable was numeric. (SQLSTATE 07006)
-400 (ECPG_PGSQL)
Some error caused by the PostgreSQL server. The message contains the error message from the
PostgreSQL server.
-401 (ECPG_TRANS)
The PostgreSQL server signaled that we cannot start, commit, or rollback the transaction. (SQL-
STATE 08007)
-402 (ECPG_CONNECT)
The connection attempt to the database did not succeed. (SQLSTATE 08001)
100 (ECPG_NOT_FOUND)
This is a harmless condition indicating that the last command retrieved or processed zero rows, or
that you are at the end of the cursor. (SQLSTATE 02000)
The command EXEC SQL INCLUDE filename causes the embedded SQL preprocessor to look for a file
named filename.h, preprocess it, and include it in the resulting C output. Thus, embedded SQL
statements in the included file are handled correctly.
Note that this is not the same as
#include <filename.h>
because this file would not be subject to SQL command preprocessing. Naturally, you can continue to use
the C #include directive to include other header files.
Note: The include file name is case-sensitive, even though the rest of the EXEC SQL INCLUDE com-
mand follows the normal SQL case-sensitivity rules.
To preprocess an embedded SQL C source file (by convention with extension .pgc), run it through the
ecpg preprocessor, for example:
ecpg prog1.pgc
This will create a file called prog1.c. If your input files do not follow the suggested naming pattern, you
can specify the output file explicitly using the -o option.
The preprocessed file can be compiled normally, for example:
cc -c prog1.c
The generated C source files include header files from the PostgreSQL installation, so if you
installed PostgreSQL in a location that is not searched by default, you have to add an option such as
-I/usr/local/pgsql/include to the compilation command line.
To link an embedded SQL program, you need to include the libecpg library, like so:
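(For example, with a compiler driver such as cc:)

cc -o myprog prog1.o prog2.o -lecpg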
Again, you might have to add an option like -L/usr/local/pgsql/lib to that command line.
If you manage the build process of a larger project using make, it may be convenient to include the
following implicit rule to your makefiles:
ECPG = ecpg
%.c: %.pgc
$(ECPG) $<
The ecpg library is thread-safe if it is built using the --enable-thread-safety command-line option
to configure. (You might need to use other threading command-line options to compile your client
code.)
• ECPGdebug(int on, FILE *stream) turns on debug logging if called with the first argument non-
zero. Debug logging is done on stream. The log contains all SQL statements with all the input vari-
ables inserted, and the results from the PostgreSQL server. This can be very useful when searching for
errors in your SQL statements.
• ECPGstatus(int lineno, const char* connection_name) returns true if you are connected
to a database and false if not. connection_name can be NULL if a single connection is being used.
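A minimal usage sketch of these two functions:

ECPGdebug(1, stderr);           /* log all statements and results to stderr */
if (!ECPGstatus(__LINE__, NULL))
    fprintf(stderr, "not connected to a database\n");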
29.13. Internals
This section explains how ECPG works internally. This information can occasionally be useful to help
users understand how to use ECPG.
The first four lines written by ecpg to the output are fixed lines. Two are comments and two are include
lines necessary to interface to the library. Then the preprocessor reads through the file and writes output.
Normally it just echoes everything to the output.
When it sees an EXEC SQL statement, it intervenes and changes it. The command starts with EXEC SQL
and ends with ;. Everything in between is treated as an SQL statement and parsed for variable substitution.
Variable substitution occurs when a symbol starts with a colon (:). The variable with that name is looked
up among the variables that were previously declared within a EXEC SQL DECLARE section.
The most important function in the library is ECPGdo, which takes care of executing most commands. It
takes a variable number of arguments. This can easily add up to 50 or so arguments, and we hope this will
not be a problem on any platform.
The arguments are:
A line number
This is the line number of the original line; used in error messages only.
A string
This is the SQL command that is to be issued. It is modified by the input variables, i.e., the variables
that were not known at compile time but are to be entered in the command. Where the variables
should go the string contains ?.
Input variables
Every input variable causes ten arguments to be created. (See below.)
ECPGt_EOIT
An enum telling that there are no more input variables.
For every variable that is part of the SQL command, the function gets ten arguments:
Note that not all SQL commands are treated in this way. For instance, an open cursor statement like
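EXEC SQL OPEN cursor;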
is not copied to the output. Instead, the cursor’s DECLARE command is used at the position of the OPEN
command because it indeed opens the cursor.
Here is a complete example describing the output of the preprocessor of a file foo.pgc (details may
change with each particular version of the preprocessor):
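(A sketch of such an input file, consistent with the output shown below:)

EXEC SQL BEGIN DECLARE SECTION;
int index;
int result;
EXEC SQL END DECLARE SECTION;
...
EXEC SQL SELECT res INTO :result FROM mytable WHERE index = :index;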
is translated into:
#include <ecpgtype.h>;
#include <ecpglib.h>;
#line 1 "foo.pgc"
/* exec sql begin declare section */
int index;
int result;
/* exec sql end declare section */
...
ECPGdo(__LINE__, NULL, "SELECT res FROM mytable WHERE index = ? ",
ECPGt_int,&(index),1L,1L,sizeof(int),
ECPGt_NO_INDICATOR, NULL , 0L, 0L, 0L, ECPGt_EOIT,
ECPGt_int,&(result),1L,1L,sizeof(int),
ECPGt_NO_INDICATOR, NULL , 0L, 0L, 0L, ECPGt_EORT);
#line 147 "foo.pgc"
(The indentation here is added for readability and not something the preprocessor does.)
Chapter 30. The Information Schema
The information schema consists of a set of views that contain information about the objects defined in the
current database. The information schema is defined in the SQL standard and can therefore be expected
to be portable and remain stable — unlike the system catalogs, which are specific to PostgreSQL and
are modelled after implementation concerns. The information schema views do not, however, contain
information about PostgreSQL-specific features; to inquire about those you need to query the system
catalogs or other PostgreSQL-specific views.
cardinal_number
A nonnegative integer.
character_data
A character string (without specific maximum length).
sql_identifier
A character string. This type is used for SQL identifiers; the type character_data is used for any
other kind of text data.
time_stamp
A domain over the type timestamp.
Boolean (true/false) data is represented in the information schema by a column of type character_data
that contains either YES or NO. (The information schema was invented before the type boolean was
added to the SQL standard, so this convention is necessary to keep the information schema backward
compatible.)
30.3. information_schema_catalog_name
information_schema_catalog_name is a table that always contains one row and one column con-
taining the name of the current database (current catalog, in SQL terminology).
30.4. applicable_roles
The view applicable_roles identifies all groups that the current user is a member of. (A role is the
same thing as a group.) Generally, it is better to use the view enabled_roles instead of this one; see
also there.
30.5. check_constraints
The view check_constraints contains all check constraints, either defined on a table or on a domain,
that are owned by the current user. (The owner of the table or domain is the owner of the constraint.)
30.6. column_domain_usage
The view column_domain_usage identifies all columns (of a table or a view) that make use of some
domain defined in the current database and owned by the current user.
30.7. column_privileges
The view column_privileges identifies all privileges granted on columns to the current user or by the
current user. There is one row for each combination of column, grantor, and grantee. Privileges granted to
groups are identified in the view role_column_grants.
In PostgreSQL, you can only grant privileges on entire tables, not individual columns. Therefore, this
view contains the same information as table_privileges, just represented through one row for each
column in each appropriate table, but it only covers privilege types where column granularity is possible:
SELECT, INSERT, UPDATE, REFERENCES. If you want to make your applications fit for possible future
developments, it is generally the right choice to use this view instead of table_privileges if one of
those privilege types is concerned.
Note that the column grantee makes no distinction between users and groups. If you have users and
groups with the same name, there is unfortunately no way to distinguish them. A future version of Post-
greSQL will possibly prohibit having users and groups with the same name.
30.8. column_udt_usage
The view column_udt_usage identifies all columns that use data types owned by the current user. Note
that in PostgreSQL, built-in data types behave like user-defined types, so they are included here as well.
See also Section 30.9 for details.
30.9. columns
The view columns contains information about all table columns (or view columns) in the database. Sys-
tem columns (oid, etc.) are not included. Only those columns are shown that the current user has access
to (by way of being the owner or having some privilege).
Since data types can be defined in a variety of ways in SQL, and PostgreSQL contains additional ways
to define data types, their representation in the information schema can be somewhat difficult. The col-
umn data_type is supposed to identify the underlying built-in type of the column. In PostgreSQL, this
means that the type is defined in the system catalog schema pg_catalog. This column may be useful
if the application can handle the well-known built-in types specially (for example, format the numeric
types differently or use the data in the precision columns). The columns udt_name, udt_schema, and
udt_catalog always identify the underlying data type of the column, even if the column is based on a
domain. (Since PostgreSQL treats built-in types like user-defined types, built-in types appear here as well.
This is an extension of the SQL standard.) These columns should be used if an application wants to pro-
cess data differently according to the type, because in that case it wouldn’t matter if the column is really
based on a domain. If the column is based on a domain, the identity of the domain is stored in the columns
domain_name, domain_schema, and domain_catalog. If you want to pair up columns with their as-
sociated data types and treat domains as separate types, you could write coalesce(domain_name,
udt_name), etc.
30.10. constraint_column_usage
The view constraint_column_usage identifies all columns in the current database that are used by
some constraint. Only those columns are shown that are contained in a table owned by the current user. For a
check constraint, this view identifies the columns that are used in the check expression. For a foreign key
constraint, this view identifies the columns that the foreign key references. For a unique or primary key
constraint, this view identifies the constrained columns.
30.11. constraint_table_usage
The view constraint_table_usage identifies all tables in the current database that are used by some
constraint and are owned by the current user. (This is different from the view table_constraints,
which identifies all table constraints along with the table they are defined on.) For a foreign key constraint,
this view identifies the table that the foreign key references. For a unique or primary key constraint, this
view simply identifies the table the constraint belongs to. Check constraints and not-null constraints are
not included in this view.
30.12. data_type_privileges
The view data_type_privileges identifies all data type descriptors that the current user has access to,
by way of being the owner of the described object or having some privilege for it. A data type descriptor
is generated whenever a data type is used in the definition of a table column, a domain, or a function (as
parameter or return type) and stores some information about how the data type is used in that instance (for
example, the declared maximum length, if applicable). Each data type descriptor is assigned an arbitrary
identifier that is unique among the data type descriptor identifiers assigned for one object (table, domain,
function). This view is probably not useful for applications, but it is used to define some other views in
the information schema.
30.13. domain_constraints
The view domain_constraints contains all constraints belonging to domains owned by the current
user.
30.14. domain_udt_usage
The view domain_udt_usage identifies all columns that use data types owned by the current user. Note
that in PostgreSQL, built-in data types behave like user-defined types, so they are included here as well.
30.15. domains
The view domains contains all domains defined in the current database.
30.16. element_types
The view element_types contains the data type descriptors of the elements of arrays. When a table
column, domain, function parameter, or function return value is defined to be of an array type, the respec-
tive information schema view only contains ARRAY in the column data_type. To obtain information on
the element type of the array, you can join the respective view with this view. For example, joining the
columns view with element_types shows the columns of a table with their data types and, if applicable,
their array element types.
This view only includes objects that the current user has access to, by way of being the owner or having
some privilege.
30.17. enabled_roles
The view enabled_roles identifies all groups that the current user is a member of. (A role is the same
thing as a group.) The difference between this view and applicable_roles is that in the future there
may be a mechanism to enable and disable groups during a session. In that case this view identifies those
groups that are currently enabled.
30.18. key_column_usage
The view key_column_usage identifies all columns in the current database that are restricted by some
unique, primary key, or foreign key constraint. Check constraints are not included in this view. Only those
columns are shown that are contained in a table owned by the current user.
30.19. parameters
The view parameters contains information about the parameters (arguments) of all functions in the
current database. Only those functions are shown that the current user has access to (by way of being the
owner or having some privilege).
30.20. referential_constraints
The view referential_constraints contains all referential (foreign key) constraints in the current
database that belong to a table owned by the current user.
30.21. role_column_grants
The view role_column_grants identifies all privileges granted on columns to a group that the current
user is a member of. Further information can be found under column_privileges.
30.22. role_routine_grants
The view role_routine_grants identifies all privileges granted on functions to a group that the current
user is a member of. Further information can be found under routine_privileges.
30.23. role_table_grants
The view role_table_grants identifies all privileges granted on tables or views to a group that the
current user is a member of. Further information can be found under table_privileges.
30.24. role_usage_grants
The view role_usage_grants is meant to identify USAGE privileges granted on various kinds of objects
to a group that the current user is a member of. In PostgreSQL, this currently only applies to domains, and
since domains do not have real privileges in PostgreSQL, this view is empty. Further information can be
found under usage_privileges. In the future, this view may contain more useful information.
30.25. routine_privileges
The view routine_privileges identifies all privileges granted on functions to the current user or by the
current user. There is one row for each combination of function, grantor, and grantee. Privileges granted
to groups are identified in the view role_routine_grants.
Note that the column grantee makes no distinction between users and groups. If you have users and
groups with the same name, there is unfortunately no way to distinguish them. A future version of Post-
greSQL will possibly prohibit having users and groups with the same name.
30.26. routines
The view routines contains all functions in the current database. Only those functions are shown that
the current user has access to (by way of being the owner or having some privilege).
30.27. schemata
The view schemata contains all schemas in the current database that are owned by the current user.
30.28. sql_features
The table sql_features contains information about which formal features defined in the SQL standard
are supported by PostgreSQL. This is the same information that is presented in Appendix D. There you
can also find some additional background information.
30.29. sql_implementation_info
The table sql_implementation_info contains information about various aspects that are left
implementation-defined by the SQL standard. This information is primarily intended for use in the
context of the ODBC interface; users of other interfaces will probably find this information to be of little
use. For this reason, the individual implementation information items are not described here; you will
find them in the description of the ODBC interface.
30.30. sql_languages
The table sql_languages contains one row for each SQL language binding that is supported by Post-
greSQL. PostgreSQL supports direct SQL and embedded SQL in C; that is all you will learn from this
table.
30.31. sql_packages
The table sql_packages contains information about which feature packages defined in the SQL standard
are supported by PostgreSQL. Refer to Appendix D for background information on feature packages.
30.32. sql_sizing
The table sql_sizing contains information about various size limits and maximum values in Post-
greSQL. This information is primarily intended for use in the context of the ODBC interface; users of
other interfaces will probably find this information to be of little use. For this reason, the individual sizing
items are not described here; you will find them in the description of the ODBC interface.
30.33. sql_sizing_profiles
The table sql_sizing_profiles contains information about the sql_sizing values that are required
by various profiles of the SQL standard. PostgreSQL does not track any SQL profiles, so this table is
empty.
30.34. table_constraints
The view table_constraints contains all constraints belonging to tables owned by the current user.
30.35. table_privileges
The view table_privileges identifies all privileges granted on tables or views to the current user or by
the current user. There is one row for each combination of table, grantor, and grantee. Privileges granted
to groups are identified in the view role_table_grants.
Note that the column grantee makes no distinction between users and groups. If you have users and
groups with the same name, there is unfortunately no way to distinguish them. A future version of Post-
greSQL will possibly prohibit having users and groups with the same name.
30.36. tables
The view tables contains all tables and views defined in the current database. Only those tables and
views are shown that the current user has access to (by way of being the owner or having some privilege).
30.37. triggers
The view triggers contains all triggers defined in the current database that are owned by the current
user.
Triggers in PostgreSQL have two incompatibilities with the SQL standard that affect the representation
in the information schema. First, trigger names are local to the table in PostgreSQL, rather than being
independent schema objects. Therefore there may be duplicate trigger names defined in one schema, as
long as they belong to different tables. (trigger_catalog and trigger_schema are really the values
pertaining to the table that the trigger is defined on.) Second, triggers can be defined to fire on multiple
events in PostgreSQL (e.g., ON INSERT OR UPDATE), whereas the SQL standard only allows one. If a
trigger is defined to fire on multiple events, it is represented as multiple rows in the information schema,
one for each type of event. As a consequence of these two issues, the primary key of the view triggers
is really (trigger_catalog, trigger_schema, trigger_name, event_object_name,
event_manipulation) instead of (trigger_catalog, trigger_schema, trigger_name),
which is what the SQL standard specifies. Nonetheless, if you define your triggers in a manner that
conforms with the SQL standard (trigger names unique in the schema and only one event type per
trigger), this will not affect you.
30.38. usage_privileges
The view usage_privileges is meant to identify USAGE privileges granted on various kinds of objects
to the current user or by the current user. In PostgreSQL, this currently only applies to domains, and since
domains do not have real privileges in PostgreSQL, this view shows implicit USAGE privileges granted to
PUBLIC for all domains. In the future, this view may contain more useful information.
30.39. view_column_usage
The view view_column_usage identifies all columns that are used in the query expression of a view
(the SELECT statement that defines the view). A column is only included if the current user is the owner
of the table that contains the column.
Note: Columns of system tables are not included. This should be fixed sometime.
30.40. view_table_usage
The view view_table_usage identifies all tables that are used in the query expression of a view (the
SELECT statement that defines the view). A table is only included if the current user is the owner of that
table.
Note: System tables are not included. This should be fixed sometime.
30.41. views
The view views contains all views defined in the current database. Only those views are shown that the
current user has access to (by way of being the owner or having some privilege).
V. Server Programming
This part is about extending the server functionality with user-defined functions, data types, triggers,
etc. These are advanced topics which should probably be approached only after all the other user doc-
umentation about PostgreSQL has been understood. Later chapters in this part describe the server-side
programming languages available in the PostgreSQL distribution as well as general issues concerning
server-side programming languages. It is essential to read at least the earlier sections of Chapter 31 (cov-
ering functions) before diving into the material about server-side programming languages.
Chapter 31. Extending SQL
In the sections that follow, we will discuss how you can extend the PostgreSQL SQL query language by
adding:
• functions
• aggregates
• data types
• operators
• operator classes for indexes
31.2.3. Domains
A domain is based on a particular base type and for many purposes is interchangeable with its base type.
However, a domain may have constraints that restrict its valid values to a subset of what the underlying
base type would allow.
Domains can be created using the SQL command CREATE DOMAIN. Their creation and use is not dis-
cussed in this chapter.
31.2.4. Pseudo-Types
There are a few “pseudo-types” for special purposes. Pseudo-types cannot appear as columns of tables or
attributes of composite types, but they can be used to declare the argument and result types of functions.
This provides a mechanism within the type system to identify special classes of functions. Table 8-20 lists
the existing pseudo-types.
Every kind of function can take base types, composite types, or combinations of these as arguments
(parameters). In addition, every kind of function can return a base type or a composite type. Functions
may also be defined to return sets of base or composite values.
Many kinds of functions can take or return certain pseudo-types (such as polymorphic types), but the
available facilities vary. Consult the description of each kind of function for more details.
It’s easiest to define SQL functions, so we’ll start by discussing those. Most of the concepts presented for
SQL functions will carry over to the other types of functions.
Throughout this chapter, it can be useful to look at the reference page of the CREATE FUNCTION com-
mand to understand the examples better. Some examples from this chapter can be found in funcs.sql
and funcs.c in the src/tutorial directory in the PostgreSQL source distribution.
SELECT clean_emp();
clean_emp
-----------
(1 row)
The syntax of the CREATE FUNCTION command requires the function body to be written as a string
constant. It is usually most convenient to use dollar quoting (see Section 4.1.2.2) for the string constant.
If you choose to use regular single-quoted string constant syntax, you must escape single quote marks (’)
and backslashes (\) used in the body of the function, typically by doubling them (see Section 4.1.2.1).
Arguments to the SQL function are referenced in the function body using the syntax $n: $1 refers to the
first argument, $2 to the second, and so on. If an argument is of a composite type, then the dot notation,
e.g., $1.name, may be used to access attributes of the argument. The arguments can only be used as data
values, not as identifiers. Thus for example this is reasonable:
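For instance, a function body might use an argument as an inserted value (a minimal sketch; the table name mytable is hypothetical):
INSERT INTO mytable VALUES ($1);
whereas writing $1 where a table or column name is required would be rejected. The simplest possible SQL function has no arguments and simply returns a base type; a sketch matching the output shown below:
CREATE FUNCTION one() RETURNS integer AS $$
    SELECT 1 AS result;
$$ LANGUAGE SQL;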
SELECT one();
one
-----
1
Notice that we defined a column alias within the function body for the result of the function (with the
name result), but this column alias is not visible outside the function. Hence, the result is labeled one
instead of result.
It is almost as easy to define SQL functions that take base types as arguments. In the example below,
notice how we refer to the arguments within the function as $1 and $2.
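A sketch of such a two-argument function and a call matching the output below (the function name add_em is illustrative):
CREATE FUNCTION add_em(integer, integer) RETURNS integer AS $$
    SELECT $1 + $2;
$$ LANGUAGE SQL;

SELECT add_em(1, 2) AS answer;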
answer
--------
3
Here is a more useful function, which might be used to debit a bank account:
In practice one would probably like a more useful result from the function than a constant 1, so a more
likely definition is
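A minimal sketch of such a definition, assuming a table bank with columns accountno and balance (the function name debit is hypothetical); it returns the new balance rather than a constant:
CREATE FUNCTION debit(integer, numeric) RETURNS numeric AS $$
    UPDATE bank SET balance = balance - $2
        WHERE accountno = $1;
    SELECT balance FROM bank WHERE accountno = $1;
$$ LANGUAGE SQL;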
When specifying functions with arguments of composite types, we must not only specify which argument we want but also the desired attributes (fields) of that argument. For example, suppose that emp is a table containing employee data, and therefore also the name of the composite type of each row of the table. Here is a function double_salary that computes what someone’s salary would
be if it were doubled:
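A sketch of that function and a query using it, assuming emp has a numeric salary column and a point cubicle column (the WHERE condition is illustrative):
CREATE FUNCTION double_salary(emp) RETURNS numeric AS $$
    SELECT $1.salary * 2 AS salary;
$$ LANGUAGE SQL;

SELECT name, double_salary(emp.*) AS dream
    FROM emp
    WHERE emp.cubicle ~= point '(2,1)';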
name | dream
------+-------
Bill | 8400
Notice the use of the syntax $1.salary to select one field of the argument row value. Also notice how
the calling SELECT command uses * to select the entire current row of a table as a composite value. The
table row can alternatively be referenced using just the table name, like this:
It is also possible to build a function that returns a composite type. This is an example of a function that
returns a single emp row:
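A sketch of such a function, with attribute values matching the output shown below (it assumes emp has columns name, salary, age, and cubicle):
CREATE FUNCTION new_emp() RETURNS emp AS $$
    SELECT text 'None' AS name,
           1000.0 AS salary,
           25 AS age,
           point '(2,2)' AS cubicle;
$$ LANGUAGE SQL;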
In this example we have specified each of the attributes with a constant value, but any computation could
have been substituted for these constants.
Two important things to note about defining the function this way are:
• The select list order in the query must be exactly the same as that in which the columns appear in the
table associated with the composite type. (Naming the columns, as we did above, is irrelevant to the
system.)
• You must typecast the expressions to match the definition of the composite type, or you will get errors
like this:
ERROR: function declared to return emp returns varchar instead of text at column 1
Here we wrote a SELECT that returns just a single column of the correct composite type. This isn’t really
better in this situation, but it is a handy alternative in some cases — for example, if we need to compute
the result by calling another function that returns the desired composite value.
We could call this function directly in either of two ways:
SELECT new_emp();
new_emp
--------------------------
(None,1000.0,25,"(2,2)")
SELECT (new_emp()).name;
name
------
None
The extra parentheses are needed to keep the parser from getting confused. If you try to do it without
them, you get something like this:
SELECT new_emp().name;
ERROR: syntax error at or near "." at character 17
LINE 1: SELECT new_emp().name;
^
Another option is to use functional notation for extracting an attribute. The simple way to explain this is
that we can use the notations attribute(table) and table.attribute interchangeably.
SELECT name(new_emp());
name
------
None
youngster
-----------
Sam
Andy
Tip: The equivalence between functional notation and attribute notation makes it possible to use
functions on composite types to emulate “computed fields”. For example, using the previous definition
for double_salary(emp), we can write
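something like the following (a minimal sketch using attribute notation on the function):
SELECT emp.name, emp.double_salary FROM emp;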
An application using this wouldn’t need to be directly aware that double_salary isn’t a real column
of the table. (You can also emulate computed fields with views.)
Another way to use a function returning a row result is to pass the result to another function that accepts
the correct row type as input:
SELECT getname(new_emp());
getname
---------
None
(1 row)
Another way to use a function that returns a composite type is to call it as a table function, as described
below.
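A sketch of what such a table-function call looks like (the table foo with columns fooid and fooname, and the function getfoo, are hypothetical):
CREATE FUNCTION getfoo(integer) RETURNS foo AS $$
    SELECT * FROM foo WHERE fooid = $1;
$$ LANGUAGE SQL;

SELECT *, upper(fooname) FROM getfoo(1) AS t1;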
As the example shows, we can work with the columns of the function’s result just the same as if they were
columns of a regular table.
Note that we only got one row out of the function. This is because we did not use SETOF. That is described
in the next section.
Currently, functions returning sets may also be called in the select list of a query. For each row that the
query generates by itself, the function returning set is invoked, and an output row is generated for each
element of the function’s result set. Note, however, that this capability is deprecated and may be removed
in future releases. The following is an example function returning a set from the select list:
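A sketch of such a function, assuming a table nodes(name, parent) describing a tree:
CREATE FUNCTION listchildren(text) RETURNS SETOF text AS $$
    SELECT name FROM nodes WHERE parent = $1;
$$ LANGUAGE SQL;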
SELECT listchildren(’Top’);
listchildren
--------------
Child1
Child2
Child3
(3 rows)
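The function can also be invoked for every row of the table, for example (a sketch using the same hypothetical nodes table):
SELECT name, listchildren(name) FROM nodes;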
In the last SELECT, notice that no output row appears for Child2, Child3, etc. This happens because
listchildren returns an empty set for those arguments, so no result rows are generated.
Notice the use of the typecast ’a’::text to specify that the argument is of type text. This is required
if the argument is just a string literal, since otherwise it would be treated as type unknown, and array of
unknown is not a valid type. Without the typecast, you will get errors like this:
ERROR: could not determine "anyarray"/"anyelement" type because input has type "unknown"
It is permitted to have polymorphic arguments with a fixed return type, but the converse is not. For exam-
ple:
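A minimal sketch: the following is allowed (polymorphic arguments, fixed boolean result), whereas a function with no polymorphic arguments could not be declared to return anyelement or anyarray.
CREATE FUNCTION is_greater(anyelement, anyelement) RETURNS boolean AS $$
    SELECT $1 > $2;
$$ LANGUAGE SQL;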
More than one SQL function may be defined with the same name, so long as their argument types differ. Given, say, both test(int, real) and test(smallint, double precision), it is not immediately clear which function would be called with some trivial input like test(1, 1.5).
The currently implemented resolution rules are described in Chapter 10, but it is unwise to design a system
that subtly relies on this behavior.
A function that takes a single argument of a composite type should generally not have the same name
as any attribute (field) of that type. Recall that attribute(table) is considered equivalent to
table.attribute. In the case that there is an ambiguity between a function on a composite type and
an attribute of the composite type, the attribute will always be used. It is possible to override that choice
by schema-qualifying the function name (that is, schema.func(table)) but it’s better to avoid the
problem by not choosing conflicting names.
When overloading C-language functions, there is an additional constraint: The C name of each function
in the family of overloaded functions must be different from the C names of all other functions, either
internal or dynamically loaded. If this rule is violated, the behavior is not portable. You might get a run-
time linker error, or one of the functions will get called (usually the internal one). The alternative form
of the AS clause for the SQL CREATE FUNCTION command decouples the SQL function name from the
function name in the C source code. For instance,
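a command along these lines (the library file name filename and the C symbol pg_sqrt are illustrative):
CREATE FUNCTION square_root(double precision) RETURNS double precision
    AS 'filename', 'pg_sqrt'
    LANGUAGE C IMMUTABLE STRICT;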
The names of the C functions here reflect one of many possible conventions.
• A VOLATILE function can do anything, including modifying the database. It can return different results
on successive calls with the same arguments. The optimizer makes no assumptions about the behavior
of such functions. A query using a volatile function will re-evaluate the function at every row where its
value is needed.
• A STABLE function cannot modify the database and is guaranteed to return the same results given the
same arguments for all calls within a single surrounding query. This category allows the optimizer to
optimize away multiple calls of the function within a single query. In particular, it is safe to use an
expression containing such a function in an index scan condition. (Since an index scan will evaluate
the comparison value only once, not once at each row, it is not valid to use a VOLATILE function in an
index scan condition.)
• An IMMUTABLE function cannot modify the database and is guaranteed to return the same results given
the same arguments forever. This category allows the optimizer to pre-evaluate the function when a
query calls it with constant arguments. For example, a query like SELECT ... WHERE x = 2 + 2
can be simplified on sight to SELECT ... WHERE x = 4, because the function underlying the integer
addition operator is marked IMMUTABLE.
For best optimization results, you should label your functions with the strictest volatility category that is
valid for them.
Any function with side-effects must be labeled VOLATILE, so that calls to it cannot be optimized away.
Even a function with no side-effects needs to be labeled VOLATILE if its value can change within a single
query; some examples are random(), currval(), timeofday().
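The category is attached directly in CREATE FUNCTION; a minimal sketch (the function name is hypothetical):
CREATE FUNCTION answer() RETURNS integer AS $$
    SELECT 42;
$$ LANGUAGE SQL IMMUTABLE;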
There is relatively little difference between STABLE and IMMUTABLE categories when considering simple
interactive queries that are planned and immediately executed: it doesn’t matter a lot whether a function
is executed once during planning or once during query execution startup. But there is a big difference
if the plan is saved and reused later. Labeling a function IMMUTABLE when it really isn’t may allow it
to be prematurely folded to a constant during planning, resulting in a stale value being re-used during
subsequent uses of the plan. This is a hazard when using prepared statements or when using function
languages that cache plans (such as PL/pgSQL).
Because of the snapshotting behavior of MVCC (see Chapter 12) a function containing only SELECT
commands can safely be marked STABLE, even if it selects from tables that might be undergoing modifi-
cations by concurrent queries. PostgreSQL will execute a STABLE function using the snapshot established
for the calling query, and so it will see a fixed view of the database throughout that query. Also note that
the current_timestamp family of functions qualify as stable, since their values do not change within a
transaction.
The same snapshotting behavior is used for SELECT commands within IMMUTABLE functions. It is gen-
erally unwise to select from database tables within an IMMUTABLE function at all, since the immutability
will be broken if the table contents ever change. However, PostgreSQL does not enforce that you do not
do that.
A common error is to label a function IMMUTABLE when its results depend on a configuration parameter.
For example, a function that manipulates timestamps might well have results that depend on the timezone
setting. For safety, such functions should be labeled STABLE instead.
Note: Before PostgreSQL release 8.0, the requirement that STABLE and IMMUTABLE functions can-
not modify the database was not enforced by the system. Release 8.0 enforces it by requiring SQL
functions and procedural language functions of these categories to contain no SQL commands other
than SELECT. (This is not a completely bulletproof test, since such functions could still call VOLATILE
functions that modify the database. If you do that, you will find that the STABLE or IMMUTABLE function
does not notice the database changes applied by the called function.)
Note: Not all “predefined” functions are “internal” in the above sense. Some predefined functions are
written in SQL.
Note: PostgreSQL will not compile a C function automatically. The object file must be compiled before
it is referenced in a CREATE FUNCTION command. See Section 31.9.6 for additional information.
After it is used for the first time, a dynamically loaded object file is retained in memory. Future calls in
the same session to the function(s) in that file will only incur the small overhead of a symbol table lookup.
If you need to force a reload of an object file, for example after recompiling it, use the LOAD command or
begin a fresh session.
It is recommended to locate shared libraries either relative to $libdir or through the dynamic library
path. This simplifies version upgrades if the new installation is at a different location. The actual directory
that $libdir stands for can be found out with the command pg_config --pkglibdir.
Before PostgreSQL release 7.2, only exact absolute paths to object files could be specified in CREATE
FUNCTION. This approach is now deprecated since it makes the function definition unnecessarily un-
portable. It’s best to specify just the shared library name with no path nor extension, and let the search
mechanism provide that information instead.
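For example, a definition relying on the search mechanism might look like this (a sketch; the library name funcs and the C symbol add_one are assumed to have been built and installed):
CREATE FUNCTION add_one(integer) RETURNS integer
    AS 'funcs', 'add_one'
    LANGUAGE C STRICT;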
By-value types can only be 1, 2, or 4 bytes in length (also 8 bytes, if sizeof(Datum) is 8 on your
machine). You should be careful to define your types such that they will be the same size (in bytes) on
all architectures. For example, the long type is dangerous because it is 4 bytes on some machines and 8
bytes on others, whereas int type is 4 bytes on most Unix machines. A reasonable implementation of the
int4 type on Unix machines might be:
On the other hand, fixed-length types of any size may be passed by-reference. For example, here is a
sample implementation of a PostgreSQL type:
Only pointers to such types can be used when passing them in and out of PostgreSQL functions. To return
a value of such a type, allocate the right amount of memory with palloc, fill in the allocated memory,
and return a pointer to it. (You can also return an input value that has the same type as the return value
directly by returning the pointer to the input value. Never modify the contents of a pass-by-reference input
value, however.)
Finally, all variable-length types must also be passed by reference. All variable-length types must begin
with a length field of exactly 4 bytes, and all data to be stored within that type must be located in the
memory immediately following that length field. The length field contains the total length of the structure,
that is, it includes the size of the length field itself.
As an example, we can define the type text as follows:
typedef struct {
int4 length;
char data[1];
} text;
Obviously, the data field declared here is not long enough to hold all possible strings. Since it’s impossible
to declare a variable-size structure in C, we rely on the knowledge that the C compiler won’t range-check
array subscripts. We just allocate the necessary amount of space and then access the array as if it were
declared the right length. (This is a common trick, which you can read about in many textbooks about C.)
When manipulating variable-length types, we must be careful to allocate the correct amount of memory
and set the length field correctly. For example, if we wanted to store 40 bytes in a text structure, we
might use a code fragment like this:
#include "postgres.h"
...
char buffer[40]; /* our source data */
...
text *destination = (text *) palloc(VARHDRSZ + 40);
destination->length = VARHDRSZ + 40;
memcpy(destination->data, buffer, 40);
...
VARHDRSZ is the same as sizeof(int4), but it’s considered good style to use the macro VARHDRSZ to
refer to the size of the overhead for a variable-length type.
Table 31-1 specifies which C type corresponds to which SQL type when writing a C-language function
that uses a built-in type of PostgreSQL. The “Defined In” column gives the header file that needs to be
included to get the type definition. (The actual definition may be in a different file that is included by the
listed file. It is recommended that users stick to the defined interface.) Note that you should always include
postgres.h first in any source file, because it declares a number of things that you will need anyway.
Now that we’ve gone over all of the possible structures for base types, we can show some examples of
real functions.
#include "postgres.h"
#include <string.h>
/* by value */
int
add_one(int arg)
{
return arg + 1;
}
float8 *
add_one_float8(float8 *arg)
{
float8 *result = (float8 *) palloc(sizeof(float8));

*result = *arg + 1.0;
return result;
}
Point *
makepoint(Point *pointx, Point *pointy)
{
Point *new_point = (Point *) palloc(sizeof(Point));
new_point->x = pointx->x;
new_point->y = pointy->y;
return new_point;
}
text *
copytext(text *t)
{
/*
* VARSIZE is the total size of the struct in bytes.
*/
text *new_t = (text *) palloc(VARSIZE(t));
VARATT_SIZEP(new_t) = VARSIZE(t);
/*
* VARDATA is a pointer to the data region of the struct.
*/
memcpy((void *) VARDATA(new_t), /* destination */
(void *) VARDATA(t), /* source */
VARSIZE(t)-VARHDRSZ); /* how many bytes */
return new_t;
}
text *
concat_text(text *arg1, text *arg2)
{
int32 new_text_size = VARSIZE(arg1) + VARSIZE(arg2) - VARHDRSZ;
text *new_text = (text *) palloc(new_text_size);
VARATT_SIZEP(new_text) = new_text_size;
memcpy(VARDATA(new_text), VARDATA(arg1), VARSIZE(arg1)-VARHDRSZ);
memcpy(VARDATA(new_text) + (VARSIZE(arg1)-VARHDRSZ),
VARDATA(arg2), VARSIZE(arg2)-VARHDRSZ);
return new_text;
}
Supposing that the above code has been prepared in file funcs.c and compiled into a shared object, we
could define the functions to PostgreSQL with commands like this:
CREATE FUNCTION copytext(text) RETURNS text
AS ’DIRECTORY/funcs’, ’copytext’
LANGUAGE C STRICT;
Here, DIRECTORY stands for the directory of the shared library file (for instance the PostgreSQL tutorial
directory, which contains the code for the examples used in this section). (Better style would be to use just
’funcs’ in the AS clause, after having added DIRECTORY to the search path. In any case, we may omit
the system-specific extension for a shared library, commonly .so or .sl.)
Notice that we have specified the functions as “strict”, meaning that the system should automatically
assume a null result if any input value is null. By doing this, we avoid having to check for null inputs
in the function code. Without this, we’d have to check for null values explicitly, by checking for a null
pointer for each pass-by-reference argument. (For pass-by-value arguments, we don’t even have a way to
check!)
Although this calling convention is simple to use, it is not very portable; on some architectures there are
problems with passing data types that are smaller than int this way. Also, there is no simple way to
return a null result, nor to cope with null arguments in any way other than making the function strict. The
version-1 convention, presented next, overcomes these objections.
The version-1 calling convention requires every such function to be declared as
Datum funcname(PG_FUNCTION_ARGS)
In addition, the macro call
PG_FUNCTION_INFO_V1(funcname);
must appear in the same source file. (Conventionally, it’s written just before the function itself.) This
macro call is not needed for internal-language functions, since PostgreSQL assumes that all internal
functions use the version-1 convention. It is, however, required for dynamically-loaded functions.
In a version-1 function, each actual argument is fetched using a PG_GETARG_xxx() macro that corre-
sponds to the argument’s data type, and the result is returned using a PG_RETURN_xxx() macro for the
return type. PG_GETARG_xxx() takes as its argument the number of the function argument to fetch, where
the count starts at 0. PG_RETURN_xxx() takes as its argument the actual value to return.
Here we show the same functions as above, coded in version-1 style:
#include "postgres.h"
#include <string.h>
#include "fmgr.h"
/* by value */
PG_FUNCTION_INFO_V1(add_one);
Datum
add_one(PG_FUNCTION_ARGS)
{
int32 arg = PG_GETARG_INT32(0);
PG_RETURN_INT32(arg + 1);
}
PG_FUNCTION_INFO_V1(add_one_float8);
Datum
add_one_float8(PG_FUNCTION_ARGS)
{
/* The macros for FLOAT8 hide its pass-by-reference nature. */
float8 arg = PG_GETARG_FLOAT8(0);
PG_RETURN_FLOAT8(arg + 1.0);
}
PG_FUNCTION_INFO_V1(makepoint);
Datum
makepoint(PG_FUNCTION_ARGS)
{
/* Here, the pass-by-reference nature of Point is not hidden. */
Point *pointx = PG_GETARG_POINT_P(0);
Point *pointy = PG_GETARG_POINT_P(1);
Point *new_point = (Point *) palloc(sizeof(Point));
new_point->x = pointx->x;
new_point->y = pointy->y;
PG_RETURN_POINT_P(new_point);
}
PG_FUNCTION_INFO_V1(copytext);
Datum
copytext(PG_FUNCTION_ARGS)
{
text *t = PG_GETARG_TEXT_P(0);
/*
* VARSIZE is the total size of the struct in bytes.
*/
text *new_t = (text *) palloc(VARSIZE(t));
VARATT_SIZEP(new_t) = VARSIZE(t);
/*
 * VARDATA is a pointer to the data region of the struct.
 */
memcpy((void *) VARDATA(new_t), /* destination */
(void *) VARDATA(t), /* source */
VARSIZE(t)-VARHDRSZ); /* how many bytes */
PG_RETURN_TEXT_P(new_t);
}
PG_FUNCTION_INFO_V1(concat_text);
Datum
concat_text(PG_FUNCTION_ARGS)
{
text *arg1 = PG_GETARG_TEXT_P(0);
text *arg2 = PG_GETARG_TEXT_P(1);
int32 new_text_size = VARSIZE(arg1) + VARSIZE(arg2) - VARHDRSZ;
text *new_text = (text *) palloc(new_text_size);
VARATT_SIZEP(new_text) = new_text_size;
memcpy(VARDATA(new_text), VARDATA(arg1), VARSIZE(arg1)-VARHDRSZ);
memcpy(VARDATA(new_text) + (VARSIZE(arg1)-VARHDRSZ),
VARDATA(arg2), VARSIZE(arg2)-VARHDRSZ);
PG_RETURN_TEXT_P(new_text);
}
The CREATE FUNCTION commands are the same as for the version-0 equivalents.
At first glance, the version-1 coding conventions may appear to be just pointless obscurantism. They do,
however, offer a number of improvements, because the macros can hide unnecessary detail. An example is
that in coding add_one_float8, we no longer need to be aware that float8 is a pass-by-reference type.
Another example is that the GETARG macros for variable-length types allow for more efficient fetching of
“toasted” (compressed or out-of-line) values.
One big improvement in version-1 functions is better handling of null inputs and results. The macro
PG_ARGISNULL(n) allows a function to test whether each input is null. (Of course, doing this is only
necessary in functions not declared “strict”.) As with the PG_GETARG_xxx() macros, the input arguments
are counted beginning at zero. Note that one should refrain from executing PG_GETARG_xxx() until one
has verified that the argument isn’t null. To return a null result, execute PG_RETURN_NULL(); this works
in both strict and nonstrict functions.
Other options provided in the new-style interface are two variants of the PG_GETARG_xxx() macros. The
first of these, PG_GETARG_xxx_COPY(), guarantees to return a copy of the specified argument that is
safe for writing into. (The normal macros will sometimes return a pointer to a value that is physically
stored in a table, which must not be written to. Using the PG_GETARG_xxx_COPY() macros guarantees a
writable result.) The second variant consists of the PG_GETARG_xxx_SLICE() macros which take three
arguments. The first is the number of the function argument (as above). The second and third are the
offset and length of the segment to be returned. Offsets are counted from zero, and a negative length
requests that the remainder of the value be returned. These macros provide more efficient access to parts
of large values in the case where they have storage type “external”. (The storage type of a column can be
specified using ALTER TABLE tablename ALTER COLUMN colname SET STORAGE storagetype.
storagetype is one of plain, external, extended, or main.)
Finally, the version-1 function call conventions make it possible to return set results (Section 31.9.10) and
implement trigger functions (Chapter 32) and procedural-language call handlers (Chapter 45). Version-1
code is also more portable than version-0, because it does not break restrictions on function call protocol
in the C standard. For more details see src/backend/utils/fmgr/README in the source distribution.
• Use pg_config --includedir-server to find out where the PostgreSQL server header files are
installed on your system (or the system that your users will be running on). This option is new with
PostgreSQL 7.2. For PostgreSQL 7.1 you should use the option --includedir. (pg_config will exit
with a non-zero status if it encounters an unknown option.) For releases prior to 7.1 you will have to
guess, but since that was before the current calling conventions were introduced, it is unlikely that you
want to support those releases.
• When allocating memory, use the PostgreSQL functions palloc and pfree instead of the correspond-
ing C library functions malloc and free. The memory allocated by palloc will be freed automati-
cally at the end of each transaction, preventing memory leaks.
• Always zero the bytes of your structures using memset. Without this, it’s difficult to support hash
indexes or hash joins, as you must pick out only the significant bits of your data structure to compute
a hash. Even if you initialize all fields of your structure, there may be alignment padding (holes in the
structure) that may contain garbage values.
• Most of the internal PostgreSQL types are declared in postgres.h, while the function manager inter-
faces (PG_FUNCTION_ARGS, etc.) are in fmgr.h, so you will need to include at least these two files.
For portability reasons it’s best to include postgres.h first, before any other system or user header
files. Including postgres.h will also include elog.h and palloc.h for you.
• Symbol names defined within object files must not conflict with each other or with symbols defined in
the PostgreSQL server executable. You will have to rename your functions or variables if you get error
messages to this effect.
• Compiling and linking your code so that it can be dynamically loaded into PostgreSQL always requires
special flags. See Section 31.9.6 for a detailed explanation of how to do it for your particular operating
system.
BSD/OS
The compiler flag to create PIC is -fpic. The linker flag to create shared libraries is -shared.
gcc -fpic -c foo.c
ld -shared -o foo.so foo.o
HP-UX
The compiler flag of the system compiler to create PIC is +z. When using GCC it’s -fpic. The linker
flag for shared libraries is -b. So
cc +z -c foo.c
or
gcc -fpic -c foo.c
and then
ld -b -o foo.sl foo.o
HP-UX uses the extension .sl for shared libraries, unlike most other systems.
IRIX
PIC is the default, no special compiler options are necessary. The linker option to produce shared
libraries is -shared.
cc -c foo.c
ld -shared -o foo.so foo.o
Linux
The compiler flag to create PIC is -fpic. On some platforms in some situations -fPIC must be used
if -fpic does not work. Refer to the GCC manual for more information. The compiler flag to create
a shared library is -shared. A complete example looks like this:
cc -fpic -c foo.c
cc -shared -o foo.so foo.o
MacOS X
Here is an example. It assumes the developer tools are installed.
cc -c foo.c
cc -bundle -flat_namespace -undefined suppress -o foo.so foo.o
NetBSD
The compiler flag to create PIC is -fpic. For ELF systems, the compiler with the flag -shared is
used to link shared libraries. On the older non-ELF systems, ld -Bshareable is used.
gcc -fpic -c foo.c
gcc -shared -o foo.so foo.o
OpenBSD
The compiler flag to create PIC is -fpic. ld -Bshareable is used to link shared libraries.
gcc -fpic -c foo.c
ld -Bshareable -o foo.so foo.o
Solaris
The compiler flag to create PIC is -KPIC with the Sun compiler and -fpic with GCC. To link shared
libraries, the compiler option is -G with either compiler or alternatively -shared with GCC.
cc -KPIC -c foo.c
cc -G -o foo.so foo.o
or
gcc -fpic -c foo.c
gcc -G -o foo.so foo.o
Tru64 UNIX
PIC is the default, so the compilation command is the usual one. ld with special options is used to
do the linking:
cc -c foo.c
ld -shared -expect_unresolved ’*’ -o foo.so foo.o
The same procedure is used with GCC instead of the system compiler; no special options are re-
quired.
UnixWare
The compiler flag to create PIC is -K PIC with the SCO compiler and -fpic with GCC. To link
shared libraries, the compiler option is -G with the SCO compiler and -shared with GCC.
cc -K PIC -c foo.c
cc -G -o foo.so foo.o
or
gcc -fpic -c foo.c
gcc -shared -o foo.so foo.o
Tip: If this is too complicated for you, you should consider using GNU Libtool (http://www.gnu.org/software/libtool/), which hides the platform
differences behind a uniform interface.
The resulting shared library file can then be loaded into PostgreSQL. When specifying the file name to the
CREATE FUNCTION command, one must give it the name of the shared library file, not the intermediate
object file. Note that the system’s standard shared-library extension (usually .so or .sl) can be omitted
from the CREATE FUNCTION command, and normally should be omitted for best portability.
Refer back to Section 31.9.1 about where the server expects to find the shared library files.
MODULES = isbn_issn
DATA_built = isbn_issn.sql
DOCS = README.isbn_issn

PGXS := $(shell pg_config --pgxs)
include $(PGXS)
The last two lines should always be the same. Earlier in the file, you assign variables or add custom make
rules.
The following variables can be set:
MODULES
list of shared objects to be built from source files with the same stem (do not include the suffix in this list)
DATA
random files to install into prefix/share/contrib
SCRIPTS_built
script files (not binaries) to install into prefix/bin, which need to be built first
REGRESS
list of regression test cases (without suffix)
PROGRAM
a binary program to build (list its object files in OBJS)
EXTRA_CLEAN
extra files to remove in make clean
Put this makefile as Makefile in the directory which holds your extension. Then you can do make to
compile, and later make install to install your module. The extension is compiled and installed for the
PostgreSQL installation that corresponds to the first pg_config command found in your path.
#include "postgres.h"
#include "executor/executor.h" /* for GetAttributeByName() */
bool
c_overpaid(HeapTupleHeader t, /* the current row of emp */
int32 limit)
{
bool isnull;
int32 salary;

    salary = DatumGetInt32(GetAttributeByName(t, "salary", &isnull));
    if (isnull)
        return false;
    return salary > limit;
}
#include "postgres.h"
#include "executor/executor.h" /* for GetAttributeByName() */
PG_FUNCTION_INFO_V1(c_overpaid);
Datum
c_overpaid(PG_FUNCTION_ARGS)
{
HeapTupleHeader t = PG_GETARG_HEAPTUPLEHEADER(0);
int32 limit = PG_GETARG_INT32(1);
bool isnull;
Datum salary;
    salary = GetAttributeByName(t, "salary", &isnull);
    if (isnull)
        PG_RETURN_BOOL(false);
    /* Alternatively, we might prefer to do PG_RETURN_NULL() for null salary. */

    PG_RETURN_BOOL(DatumGetInt32(salary) > limit);
}
GetAttributeByName is the PostgreSQL system function that returns attributes out of the specified row.
It has three arguments: the argument of type HeapTupleHeader passed into the function, the name of the
desired attribute, and a return parameter that tells whether the attribute is null. GetAttributeByName re-
turns a Datum value that you can convert to the proper data type by using the appropriate DatumGetXXX()
macro. Note that the return value is meaningless if the null flag is set; always check the null flag before
trying to do anything with the result.
There is also GetAttributeByNum, which selects the target attribute by column number instead of name.
The following command declares the function c_overpaid in SQL:
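A sketch of that declaration, following the conventions of the earlier examples (DIRECTORY again stands for the shared-library directory):
CREATE FUNCTION c_overpaid(emp, integer) RETURNS boolean
    AS 'DIRECTORY/funcs', 'c_overpaid'
    LANGUAGE C STRICT;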
Notice we have used STRICT so that we did not have to check whether the input arguments were NULL.
#include "funcapi.h"
There are two ways you can build a composite data value (henceforth a “tuple”): you can build it from an
array of Datum values, or from an array of C strings that can be passed to the input conversion functions of
the tuple’s column data types. In either case, you first need to obtain or construct a TupleDesc descriptor
for the tuple structure. When working with Datums, you pass the TupleDesc to BlessTupleDesc, and
then call heap_formtuple for each row. When working with C strings, you pass the TupleDesc to
TupleDescGetAttInMetadata, and then call BuildTupleFromCStrings for each row. In the case of
a function returning a set of tuples, the setup steps can all be done once during the first call of the function.
Several helper functions are available for setting up the initial TupleDesc. If you want to use a named
composite type, you can fetch the information from the system catalogs. Use
TypeGetTupleDesc(Oid typeoid, List *colaliases)
to get a TupleDesc based on a type OID. This can be used to get a TupleDesc for a base or composite
type. When writing a function that returns record, the expected TupleDesc must be passed in by the
caller.
Once you have a TupleDesc, call
BlessTupleDesc(TupleDesc tupdesc)
if you plan to work with Datums, or
TupleDescGetAttInMetadata(TupleDesc tupdesc)
if you plan to work with C strings. If you are writing a function returning set, you can save the results of
these functions in the FuncCallContext structure — use the tuple_desc or attinmeta field respec-
tively.
When working with Datums, use
heap_formtuple(TupleDesc tupdesc, Datum *values, char *nulls)
to build a HeapTuple given user data in Datum form. When working with C strings, use
BuildTupleFromCStrings(AttInMetadata *attinmeta, char **values)
to build a HeapTuple given user data in C string form. values is an array of C strings, one for each
attribute of the return row. Each C string should be in the form expected by the input function of the
attribute data type. In order to return a null value for one of the attributes, the corresponding pointer in the
values array should be set to NULL. This function will need to be called again for each row you return.
Once you have built a tuple to return from your function, it must be converted into a Datum. Use
HeapTupleGetDatum(HeapTuple tuple)
to convert a HeapTuple into a valid Datum. This Datum can be returned directly if you intend to return
just a single row, or it can be used as the current return value in a set-returning function.
An example appears in the next section.
typedef struct
{
/*
 * Number of times we’ve been called before.
 *
 * call_cntr is initialized to 0 for you by SRF_FIRSTCALL_INIT(), and
 * incremented for you every time SRF_RETURN_NEXT() is called.
 */
uint32 call_cntr;
/*
* OPTIONAL maximum number of calls
*
* max_calls is here for convenience only and setting it is optional.
* If not set, you must provide alternative means to know when the
* function is done.
*/
uint32 max_calls;
/*
* OPTIONAL pointer to result slot
*
* This is obsolete and only present for backwards compatibility, viz,
* user-defined SRFs that use the deprecated TupleDescGetSlot().
*/
TupleTableSlot *slot;
/*
* OPTIONAL pointer to miscellaneous user-provided context information
*
* user_fctx is for use as a pointer to your own data to retain
* arbitrary context information between calls of your function.
*/
void *user_fctx;
/*
* OPTIONAL pointer to struct containing attribute type input metadata
*
* attinmeta is for use when returning tuples (i.e., composite data types)
* and is not used when returning base data types. It is only needed
* if you intend to use BuildTupleFromCStrings() to create the return
* tuple.
*/
AttInMetadata *attinmeta;
/*
* memory context used for structures that must live for multiple calls
*
* multi_call_memory_ctx is set by SRF_FIRSTCALL_INIT() for you, and used
* by SRF_RETURN_DONE() for cleanup. It is the most appropriate memory
* context for any memory that is to be reused across multiple calls
* of the SRF.
*/
MemoryContext multi_call_memory_ctx;
/*
 * OPTIONAL pointer to struct containing tuple description
 *
 * tuple_desc is for use when returning tuples (i.e., composite data types)
 * and is only needed if you are going to build the tuples with
 * heap_formtuple() rather than with BuildTupleFromCStrings().
 */
TupleDesc tuple_desc;
} FuncCallContext;
An SRF uses several functions and macros that automatically manipulate the FuncCallContext struc-
ture (and expect to find it via fn_extra). Use
SRF_IS_FIRSTCALL()
to determine if your function is being called for the first or a subsequent time. On the first call (only) use
SRF_FIRSTCALL_INIT()
to initialize the FuncCallContext. On every function call, including the first, use
SRF_PERCALL_SETUP()
to properly set up for using the FuncCallContext and clearing any previously returned data left over
from the previous pass.
If your function has data to return, use
SRF_RETURN_NEXT(funcctx, result)
to return it to the caller. (result must be of type Datum, either a single value or a tuple prepared as
described above.) Finally, when your function is finished returning data, use
SRF_RETURN_DONE(funcctx)
to clean up and end the SRF.
A complete pseudo-code example looks like this:
Datum
my_set_returning_function(PG_FUNCTION_ARGS)
{
FuncCallContext *funcctx;
Datum result;
MemoryContext oldcontext;
further declarations as needed
if (SRF_IS_FIRSTCALL())
{
funcctx = SRF_FIRSTCALL_INIT();
oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
/* One-time setup code appears here: */
user code
if returning composite
build TupleDesc, and perhaps AttInMetadata
endif returning composite
user code
MemoryContextSwitchTo(oldcontext);
}
PG_FUNCTION_INFO_V1(testpassbyval);
Datum
testpassbyval(PG_FUNCTION_ARGS)
{
FuncCallContext *funcctx;
int call_cntr;
int max_calls;
TupleDesc tupdesc;
AttInMetadata *attinmeta;
/* stuff done only on the first call of the function */
if (SRF_IS_FIRSTCALL())
{
MemoryContext oldcontext;

/* create a function context for cross-call persistence */
funcctx = SRF_FIRSTCALL_INIT();

/* switch to memory context appropriate for multiple function calls */
oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);

/* total number of tuples to be returned */
funcctx->max_calls = PG_GETARG_UINT32(0);

/* (a TupleDesc for the result type, tupdesc, is obtained here) */
/*
* generate attribute metadata needed later to produce tuples from raw
* C strings
*/
attinmeta = TupleDescGetAttInMetadata(tupdesc);
funcctx->attinmeta = attinmeta;
MemoryContextSwitchTo(oldcontext);
}
call_cntr = funcctx->call_cntr;
max_calls = funcctx->max_calls;
attinmeta = funcctx->attinmeta;
/*
* Prepare a values array for building the returned tuple.
* This should be an array of C strings which will
* be processed later by the type input functions.
*/
values = (char **) palloc(3 * sizeof(char *));
values[0] = (char *) palloc(16 * sizeof(char));
values[1] = (char *) palloc(16 * sizeof(char));
values[2] = (char *) palloc(16 * sizeof(char));
/* build a tuple */
tuple = BuildTupleFromCStrings(attinmeta, values);

/* make the tuple into a datum */
result = HeapTupleGetDatum(tuple);
SRF_RETURN_NEXT(funcctx, result);
}
else /* do when there is no more left */
{
SRF_RETURN_DONE(funcctx);
}
}
The directory contrib/tablefunc in the source distribution contains more examples of set-returning
functions.
PG_FUNCTION_INFO_V1(make_array);
Datum
make_array(PG_FUNCTION_ARGS)
{
ArrayType *result;
if (!OidIsValid(element_type))
elog(ERROR, "could not determine data type of input");
PG_RETURN_ARRAYTYPE_P(result);
}
Note the use of STRICT; this is essential since the code is not bothering to test for a null input.
Thus, in addition to the argument and result data types seen by a user of the aggregate, there is an internal
state-value data type that may be different from both the argument and result types.
If we define an aggregate that does not use a final function, we have an aggregate that computes a running
function of the column values from each row. sum is an example of this kind of aggregate. sum starts at
zero and always adds the current row’s value to its running total. For example, if we want to make a sum
aggregate to work on a data type for complex numbers, we only need the addition function for that data
type. The aggregate definition would be:
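A sketch of such a definition and a call matching the output below (it assumes an addition function complex_add and a table test_complex with a column a of type complex):
CREATE AGGREGATE complex_sum (
    sfunc = complex_add,
    basetype = complex,
    stype = complex,
    initcond = '(0,0)'
);

SELECT complex_sum(a) FROM test_complex;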
complex_sum
-------------
(34,53.9)
(In practice, we’d just name the aggregate sum and rely on PostgreSQL to figure out which kind of sum
to apply to a column of type complex.)
The above definition of sum will return zero (the initial state condition) if there are no nonnull input
values. Perhaps we want to return null in that case instead — the SQL standard expects sum to behave
that way. We can do this simply by omitting the initcond phrase, so that the initial state condition is
null. Ordinarily this would mean that the sfunc would need to check for a null state-condition input, but
for sum and some other simple aggregates like max and min, it is sufficient to insert the first nonnull input
value into the state variable and then start applying the transition function at the second nonnull input
value. PostgreSQL will do that automatically if the initial condition is null and the transition function is
marked “strict” (i.e., not to be called for null inputs).
Another bit of default behavior for a “strict” transition function is that the previous state value is retained
unchanged whenever a null input value is encountered. Thus, null values are ignored. If you need some
other behavior for null inputs, just do not define your transition function as strict, and code it to test for
null inputs and do whatever is needed.
avg (average) is a more complex example of an aggregate. It requires two pieces of running state: the sum
of the inputs and the count of the number of inputs. The final result is obtained by dividing these quantities.
Average is typically implemented by using a two-element array as the state value. For example, the built-in
implementation of avg(float8) looks like:
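A sketch of that definition, using the transition and final functions found in the standard catalogs:
CREATE AGGREGATE avg (
    sfunc = float8_accum,
    basetype = float8,
    stype = float8[],
    finalfunc = float8_avg,
    initcond = '{0,0}'
);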
Aggregate functions may use polymorphic state transition functions or final functions, so that the same
functions can be used to implement multiple aggregates. See Section 31.2.5 for an explanation of poly-
morphic functions. Going a step further, the aggregate function itself may be specified with a polymorphic
base type and state type, allowing a single aggregate definition to serve for multiple input data types. Here
is an example of a polymorphic aggregate:
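A sketch of such a definition, using the built-in array_append function as the transition function:
CREATE AGGREGATE array_accum (
    sfunc = array_append,
    basetype = anyelement,
    stype = anyarray,
    initcond = '{}'
);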
Here, the actual state type for any aggregate call is the array type having the actual input type as elements.
Here’s the output using two different actual data types as arguments:
attrelid | array_accum
----------+-----------------------------------------------------------------------------
pg_user | {usename,usesysid,usecreatedb,usesuper,usecatupd,passwd,valuntil,useconfig}
(1 row)
attrelid | array_accum
----------+------------------------------
pg_user | {19,23,16,16,16,25,702,1009}
(1 row)
A user-defined type must always have input and output functions. These functions determine how the type
appears in strings (for input by the user and output to the user) and how the type is organized in memory.
The input function takes a null-terminated character string as its argument and returns the internal (in
memory) representation of the type. The output function takes the internal representation of the type as
argument and returns a null-terminated character string. If we want to do anything more with the type than
merely store it, we must provide additional functions to implement whatever operations we’d like to have
for the type.
Suppose we want to define a type complex that represents complex numbers. A natural way to represent
a complex number in memory would be the following C structure:
We will need to make this a pass-by-reference type, since it’s too large to fit into a single Datum value.
As the external string representation of the type, we choose a string of the form (x,y).
The input and output functions are usually not hard to write, especially the output function. But when
defining the external string representation of the type, remember that you must eventually write a complete
and robust parser for that representation as your input function. For instance:
PG_FUNCTION_INFO_V1(complex_in);
Datum
complex_in(PG_FUNCTION_ARGS)
{
char *str = PG_GETARG_CSTRING(0);
double x,
y;
Complex *result;

    if (sscanf(str, " ( %lf , %lf )", &x, &y) != 2)
        elog(ERROR, "invalid input syntax for complex: \"%s\"", str);
    result = (Complex *) palloc(sizeof(Complex));
    result->x = x;
    result->y = y;
    PG_RETURN_POINTER(result);
}
PG_FUNCTION_INFO_V1(complex_out);
Datum
complex_out(PG_FUNCTION_ARGS)
{
Complex *complex = (Complex *) PG_GETARG_POINTER(0);
char *result;

    result = (char *) palloc(100);
    snprintf(result, 100, "(%g,%g)", complex->x, complex->y);
    PG_RETURN_CSTRING(result);
}
You should be careful to make the input and output functions inverses of each other. If you do not, you
will have severe problems when you need to dump your data into a file and then read it back in. This is a
particularly common problem when floating-point numbers are involved.
Optionally, a user-defined type can provide binary input and output routines. Binary I/O is normally faster
but less portable than textual I/O. As with textual I/O, it is up to you to define exactly what the external
binary representation is. Most of the built-in data types try to provide a machine-independent binary
representation. For complex, we will piggy-back on the binary I/O converters for type float8:
PG_FUNCTION_INFO_V1(complex_recv);
Datum
complex_recv(PG_FUNCTION_ARGS)
{
StringInfo buf = (StringInfo) PG_GETARG_POINTER(0);
Complex *result;

    result = (Complex *) palloc(sizeof(Complex));
    result->x = pq_getmsgfloat8(buf);
    result->y = pq_getmsgfloat8(buf);
    PG_RETURN_POINTER(result);
}
PG_FUNCTION_INFO_V1(complex_send);
Datum
complex_send(PG_FUNCTION_ARGS)
{
Complex *complex = (Complex *) PG_GETARG_POINTER(0);
StringInfoData buf;
pq_begintypsend(&buf);
pq_sendfloat8(&buf, complex->x);
pq_sendfloat8(&buf, complex->y);
PG_RETURN_BYTEA_P(pq_endtypsend(&buf));
}
To define the complex type, we need to create the user-defined I/O functions before creating the type:
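Sketches of those declarations (the placeholder filename stands for the shared library, as in the earlier examples):
CREATE FUNCTION complex_in(cstring) RETURNS complex
    AS 'filename' LANGUAGE C IMMUTABLE STRICT;
CREATE FUNCTION complex_out(complex) RETURNS cstring
    AS 'filename' LANGUAGE C IMMUTABLE STRICT;
CREATE FUNCTION complex_recv(internal) RETURNS complex
    AS 'filename' LANGUAGE C IMMUTABLE STRICT;
CREATE FUNCTION complex_send(complex) RETURNS bytea
    AS 'filename' LANGUAGE C IMMUTABLE STRICT;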
Notice that the declarations of the input and output functions must reference the not-yet-defined type. This
is allowed, but will draw warning messages that may be ignored. The input function must appear first.
Finally, we can declare the data type:
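A sketch of that command (internallength 16 corresponds to the two double fields of the in-memory representation):
CREATE TYPE complex (
    internallength = 16,
    input = complex_in,
    output = complex_out,
    receive = complex_recv,
    send = complex_send,
    alignment = double
);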
When you define a new base type, PostgreSQL automatically provides support for arrays of that type. For
historical reasons, the array type has the same name as the base type with the underscore character (_)
prepended.
Once the data type exists, we can declare additional functions to provide useful operations on the data
type. Operators can then be defined atop the functions, and if needed, operator classes can be created to
support indexing of the data type. These additional layers are discussed in following sections.
If the values of your data type might exceed a few hundred bytes in size (in internal form), you should
make the data type TOAST-able (see Section 49.2). To do this, the internal representation must follow the
standard layout for variable-length data: the first four bytes must be an int32 containing the total length in
bytes of the datum (including itself). The C functions operating on the data type must be careful to unpack
any toasted values they are handed, by using PG_DETOAST_DATUM. (This detail is customarily hidden by
defining type-specific GETARG macros.) Then, when running the CREATE TYPE command, specify the
internal length as variable and select the appropriate storage option.
For further details see the description of the CREATE TYPE command.
CREATE OPERATOR + (
leftarg = complex,
rightarg = complex,
procedure = complex_add,
commutator = +
);
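Assuming a table test_complex with columns a and b of type complex (the table is hypothetical), the new operator can then be used like this, producing the output shown below:
SELECT (a + b) AS c FROM test_complex;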
c
-----------------
(5.2,6.05)
(133.42,144.95)
We’ve shown how to create a binary operator here. To create unary operators, just omit one of leftarg
(for left unary) or rightarg (for right unary). The procedure clause and the argument clauses are the
only required items in CREATE OPERATOR. The commutator clause shown in the example is an optional
hint to the query optimizer. Further details about commutator and other optimizer hints appear in the
next section.
If you do provide optimization clauses, you must be sure that they are right! Incorrect use of an optimization clause can result in server process
crashes, subtly wrong output, or other Bad Things. You can always leave out an optimization clause if you
are not sure about it; the only consequence is that queries might run slower than they need to.
Additional optimization clauses might be added in future versions of PostgreSQL. The ones described
here are all the ones that release 8.0.0 understands.
31.13.1. COMMUTATOR
The COMMUTATOR clause, if provided, names an operator that is the commutator of the operator being
defined. We say that operator A is the commutator of operator B if (x A y) equals (y B x) for all possible
input values x, y. Notice that B is also the commutator of A. For example, operators < and > for a
particular data type are usually each others’ commutators, and operator + is usually commutative with
itself. But operator - is usually not commutative with anything.
The left operand type of a commutable operator is the same as the right operand type of its commutator,
and vice versa. So the name of the commutator operator is all that PostgreSQL needs to be given to look
up the commutator, and that’s all that needs to be provided in the COMMUTATOR clause.
It’s critical to provide commutator information for operators that will be used in indexes and join clauses,
because this allows the query optimizer to “flip around” such a clause to the forms needed for different plan
types. For example, consider a query with a WHERE clause like tab1.x = tab2.y, where tab1.x and
tab2.y are of a user-defined type, and suppose that tab2.y is indexed. The optimizer cannot generate
an index scan unless it can determine how to flip the clause around to tab2.y = tab1.x, because the
index-scan machinery expects to see the indexed column on the left of the operator it is given. PostgreSQL
will not simply assume that this is a valid transformation — the creator of the = operator must specify that
it is valid, by marking the operator with commutator information.
When you are defining a self-commutative operator, you just do it. When you are defining a pair of
commutative operators, things are a little trickier: how can the first one to be defined refer to the other
one, which you haven’t defined yet? There are two solutions to this problem:
• One way is to omit the COMMUTATOR clause in the first operator that you define, and then provide one
in the second operator’s definition. Since PostgreSQL knows that commutative operators come in pairs,
when it sees the second definition it will automatically go back and fill in the missing COMMUTATOR
clause in the first definition.
• The other, more straightforward way is just to include COMMUTATOR clauses in both definitions. When
PostgreSQL processes the first definition and realizes that COMMUTATOR refers to a nonexistent operator,
the system will make a dummy entry for that operator in the system catalog. This dummy entry will have
valid data only for the operator name, left and right operand types, and result type, since that’s all that
PostgreSQL can deduce at this point. The first operator’s catalog entry will link to this dummy entry.
Later, when you define the second operator, the system updates the dummy entry with the additional
information from the second definition. If you try to use the dummy operator before it’s been filled in,
you’ll just get an error message.
31.13.2. NEGATOR
The NEGATOR clause, if provided, names an operator that is the negator of the operator being defined. We
say that operator A is the negator of operator B if both return Boolean results and (x A y) equals NOT
(x B y) for all possible inputs x, y. Notice that B is also the negator of A. For example, < and >= are a
negator pair for most data types. An operator can never validly be its own negator.
Unlike commutators, a pair of unary operators could validly be marked as each others’ negators; that
would mean (A x) equals NOT (B x) for all x, or the equivalent for right unary operators.
An operator’s negator must have the same left and/or right operand types as the operator to be defined, so
just as with COMMUTATOR, only the operator name need be given in the NEGATOR clause.
Providing a negator is very helpful to the query optimizer since it allows expressions like NOT (x = y)
to be simplified into x <> y. This comes up more often than you might think, because NOT operations
can be inserted as a consequence of other rearrangements.
Pairs of negator operators can be defined using the same methods explained above for commutator pairs.
31.13.3. RESTRICT
The RESTRICT clause, if provided, names a restriction selectivity estimation function for the operator.
(Note that this is a function name, not an operator name.) RESTRICT clauses only make sense for binary
operators that return boolean. The idea behind a restriction selectivity estimator is to guess what fraction
of the rows in a table will satisfy a WHERE-clause condition of the form
column OP constant
for the current operator and a particular constant value. This assists the optimizer by giving it some idea
of how many rows will be eliminated by WHERE clauses that have this form. (What happens if the constant
is on the left, you may be wondering? Well, that’s one of the things that COMMUTATOR is for...)
Writing new restriction selectivity estimation functions is far beyond the scope of this chapter, but fortu-
nately you can usually just use one of the system’s standard estimators for many of your own operators.
These are the standard restriction estimators:
eqsel for =
neqsel for <>
scalarltsel for < or <=
scalargtsel for > or >=
It might seem a little odd that these are the categories, but they make sense if you think about it. = will
typically accept only a small fraction of the rows in a table; <> will typically reject only a small fraction.
< will accept a fraction that depends on where the given constant falls in the range of values for that
table column (which, it just so happens, is information collected by ANALYZE and made available to the
selectivity estimator). <= will accept a slightly larger fraction than < for the same comparison constant,
but they’re close enough to not be worth distinguishing, especially since we’re not likely to do better than
a rough guess anyhow. Similar remarks apply to > and >=.
You can frequently get away with using either eqsel or neqsel for operators that have very high or very
low selectivity, even if they aren’t really equality or inequality. For example, the approximate-equality
geometric operators use eqsel on the assumption that they’ll usually only match a small fraction of the
entries in a table.
You can use scalarltsel and scalargtsel for comparisons on data types that have
some sensible means of being converted into numeric scalars for range comparisons. If
possible, add the data type to those understood by the function convert_to_scalar() in
src/backend/utils/adt/selfuncs.c. (Eventually, this function should be replaced by
per-data-type functions identified through a column of the pg_type system catalog; but that hasn’t
happened yet.) If you do not do this, things will still work, but the optimizer’s estimates won’t be as good
as they could be.
There are additional selectivity estimation functions designed for geometric operators in
src/backend/utils/adt/geo_selfuncs.c: areasel, positionsel, and contsel. At this
writing these are just stubs, but you may want to use them (or even better, improve them) anyway.
31.13.4. JOIN
The JOIN clause, if provided, names a join selectivity estimation function for the operator. (Note that this
is a function name, not an operator name.) JOIN clauses only make sense for binary operators that return
boolean. The idea behind a join selectivity estimator is to guess what fraction of the rows in a pair of
tables will satisfy a WHERE-clause condition of the form
table1.column1 OP table2.column2
for the current operator. As with the RESTRICT clause, this helps the optimizer very substantially by
letting it figure out which of several possible join sequences is likely to take the least work.
As before, this chapter will make no attempt to explain how to write a join selectivity estimator function,
but will just suggest that you use one of the standard estimators if one is applicable:
eqjoinsel for =
neqjoinsel for <>
scalarltjoinsel for < or <=
scalargtjoinsel for > or >=
areajoinsel for 2D area-based comparisons
positionjoinsel for 2D position-based comparisons
contjoinsel for 2D containment-based comparisons
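For instance, an approximate-equality operator of the kind mentioned above could name both a restriction and a join estimator. In the following sketch, the type mytype and its support function are assumptions used only for illustration; eqsel and eqjoinsel are the standard estimators named above:

CREATE OPERATOR ~= (
    leftarg = mytype, rightarg = mytype,
    procedure = mytype_approx_eq,
    commutator = ~= ,
    restrict = eqsel,
    join = eqjoinsel
);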
31.13.5. HASHES
The HASHES clause, if present, tells the system that it is permissible to use the hash join method for a
join based on this operator. HASHES only makes sense for a binary operator that returns boolean, and in
practice the operator had better be equality for some data type.
The assumption underlying hash join is that the join operator can only return true for pairs of left and right
values that hash to the same hash code. If two values get put in different hash buckets, the join will never
compare them at all, implicitly assuming that the result of the join operator must be false. So it never
makes sense to specify HASHES for operators that do not represent equality.
To be marked HASHES, the join operator must appear in a hash index operator class. This is not enforced
when you create the operator, since of course the referencing operator class couldn’t exist yet. But attempts
to use the operator in hash joins will fail at runtime if no such operator class exists. The system needs the
operator class to find the data-type-specific hash function for the operator’s input data type. Of course,
you must also supply a suitable hash function before you can create the operator class.
Care should be exercised when preparing a hash function, because there are machine-dependent ways in
which it might fail to do the right thing. For example, if your data type is a structure in which there may
be uninteresting pad bits, you can’t simply pass the whole structure to hash_any. (Unless you write your
other operators and functions to ensure that the unused bits are always zero, which is the recommended
strategy.) Another example is that on machines that meet the IEEE floating-point standard, negative zero
and positive zero are different values (different bit patterns) but they are defined to compare equal. If a
float value might contain negative zero then extra steps are needed to ensure it generates the same hash
value as positive zero.
Note: The function underlying a hash-joinable operator must be marked immutable or stable. If it is
volatile, the system will never attempt to use the operator for a hash join.
Note: If a hash-joinable operator has an underlying function that is marked strict, the function must
also be complete: that is, it should return true or false, never null, for any two nonnull inputs. If this
rule is not followed, hash-optimization of IN operations may generate wrong results. (Specifically, IN
might return false where the correct answer according to the standard would be null; or it might yield
an error complaining that it wasn’t prepared for a null result.)
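Putting these clauses together, a hash-joinable equality operator for the hypothetical type mytype might be declared like this (a sketch; only the HASHES clause and the standard estimators are fixed by the system, the rest is illustrative):

CREATE OPERATOR = (
    leftarg = mytype, rightarg = mytype,
    procedure = mytype_eq,
    commutator = = ,
    restrict = eqsel,
    join = eqjoinsel,
    hashes
);

Remember that the operator must also appear in a hash index operator class before hash joins on it can actually be executed.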
of these four operator options appear, so it is possible to specify just some of them and let the system fill
in the rest.
The operand data types of the four comparison operators can be deduced from the operand types of the
merge-joinable operator, so just as with COMMUTATOR, only the operator names need be given in these
clauses. Unless you are using peculiar choices of operator names, it’s sufficient to write MERGES and let
the system fill in the details. (As with COMMUTATOR and NEGATOR, the system is able to make dummy
operator entries if you happen to define the equality operator before the other ones.)
There are additional restrictions on operators that you mark merge-joinable. These restrictions are not
currently checked by CREATE OPERATOR, but errors may occur when the operator is used if any are not
true:
• A merge-joinable equality operator must have a merge-joinable commutator (itself if the two operand
data types are the same, or a related equality operator if they are different).
• If there is a merge-joinable operator relating any two data types A and B, and another merge-joinable
operator relating B to any third data type C, then A and C must also have a merge-joinable operator; in
other words, having a merge-joinable operator must be transitive.
• Bizarre results will ensue at runtime if the four comparison operators you name do not sort the data
values compatibly.
Note: The function underlying a merge-joinable operator must be marked immutable or stable. If it is
volatile, the system will never attempt to use the operator for a merge join.
Note: In PostgreSQL versions before 7.3, the MERGES shorthand was not available: to make a merge-
joinable operator one had to write both SORT1 and SORT2 explicitly. Also, the LTCMP and GTCMP options
did not exist; the names of those operators were hardwired as < and > respectively.
Note: Prior to PostgreSQL release 7.3, it was necessary to make manual additions to the system
catalogs pg_amop, pg_amproc, and pg_opclass in order to create a user-defined operator class. That
approach is now deprecated in favor of using CREATE OPERATOR CLASS, which is a much simpler and
less error-prone way of creating the necessary catalog entries.
The routines for an index method do not directly know anything about the data types that the index method
will operate on. Instead, an operator class identifies the set of operations that the index method needs to
use to work with a particular data type. Operator classes are so called because one thing they specify is
the set of WHERE-clause operators that can be used with an index (i.e., can be converted into an index-scan
qualification). An operator class may also specify some support procedures that are needed by the internal
operations of the index method, but do not directly correspond to any WHERE-clause operator that can be
used with the index.
It is possible to define multiple operator classes for the same data type and index method. By doing this,
multiple sets of indexing semantics can be defined for a single data type. For example, a B-tree index
requires a sort ordering to be defined for each data type it works on. It might be useful for a complex-
number data type to have one B-tree operator class that sorts the data by complex absolute value, another
that sorts by real part, and so on. Typically, one of the operator classes will be deemed most commonly
useful and will be marked as the default operator class for that data type and index method.
The same operator class name can be used for several different index methods (for example, both B-tree
and hash index methods have operator classes named int4_ops), but each such class is an independent
entity and must be defined separately.
Hash indexes express only bitwise equality, and so they use only one strategy, shown in Table 31-3.
R-tree indexes express rectangle-containment relationships. They use eight strategies, shown in Table
31-4.
GiST indexes are even more flexible: they do not have a fixed set of strategies at all. Instead, the “consis-
tency” support routine of each particular GiST operator class interprets the strategy numbers however it
likes.
Note that all strategy operators return Boolean values. In practice, all operators defined as index method
strategies must return type boolean, since they must appear at the top level of a WHERE clause to be used
with an index.
By the way, the amorderstrategy column in pg_am tells whether the index method supports ordered
scans. Zero means it doesn’t; if it does, amorderstrategy is the strategy number that corresponds to
the ordering operator. For example, B-tree has amorderstrategy = 1, which is its “less than” strategy
number.
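For instance, the following query should list each index method together with its ordering strategy number (zero where ordered scans are not supported):

SELECT amname, amorderstrategy FROM pg_am;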
Hash indexes likewise require one support function, shown in Table 31-6.
Unlike strategy operators, support functions return whichever data type the particular index method expects; for example, the comparison function for B-trees returns a signed integer.
31.14.4. An Example
Now that we have seen the ideas, here is the promised example of creating a new operator class. (You can
find a working copy of this example in src/tutorial/complex.c and src/tutorial/complex.sql
in the source distribution.) The operator class encapsulates operators that sort complex numbers in abso-
lute value order, so we choose the name complex_abs_ops. First, we need a set of operators. The proce-
dure for defining operators was discussed in Section 31.12. For an operator class on B-trees, the operators
we require are:
The least error-prone way to define a related set of comparison operators is to write the B-tree comparison
support function first, and then write the other functions as one-line wrappers around the support function.
This reduces the odds of getting inconsistent results for corner cases. Following this approach, we first
write
static int
complex_abs_cmp_internal(Complex *a, Complex *b)
{
    double amag = Mag(a),
           bmag = Mag(b);

    if (amag < bmag)
        return -1;
    if (amag > bmag)
        return 1;
    return 0;
}

PG_FUNCTION_INFO_V1(complex_abs_lt);

Datum
complex_abs_lt(PG_FUNCTION_ARGS)
{
    Complex *a = (Complex *) PG_GETARG_POINTER(0);
    Complex *b = (Complex *) PG_GETARG_POINTER(1);

    PG_RETURN_BOOL(complex_abs_cmp_internal(a, b) < 0);
}
The other four functions differ only in how they compare the internal function’s result to zero.
Next we declare the functions, and the operators based on those functions, in SQL:
It is important to specify the correct commutator and negator operators, as well as suitable restriction and
join selectivity functions, otherwise the optimizer will be unable to make effective use of the index. Note
that the less-than, equal, and greater-than cases should use different selectivity functions.
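For reference, the declaration of the less-than operator in src/tutorial/complex.sql looks essentially like this; the remaining four comparison operators are declared analogously:

CREATE OPERATOR < (
    leftarg = complex, rightarg = complex, procedure = complex_abs_lt,
    commutator = > , negator = >= ,
    restrict = scalarltsel, join = scalarltjoinsel
);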
Other things worth noting are happening here:
• There can only be one operator named, say, = and taking type complex for both operands. In this case
we don’t have any other operator = for complex, but if we were building a practical data type we’d
probably want = to be the ordinary equality operation for complex numbers (and not the equality of the
absolute values). In that case, we’d need to use some other operator name for complex_abs_eq.
• Although PostgreSQL can cope with functions having the same name as long as they have different
argument data types, C can only cope with one global function having a given name. So we shouldn’t
name the C function something simple like abs_eq. Usually it’s a good practice to include the data
type name in the C function name, so as not to conflict with functions for other data types.
• We could have made the PostgreSQL name of the function abs_eq, relying on PostgreSQL to distin-
guish it by argument data types from any other PostgreSQL function of the same name. To keep the
example simple, we make the function have the same names at the C level and PostgreSQL level.
The next step is the registration of the support routine required by B-trees. The example C code that
implements this is in the same file that contains the operator functions. This is how we declare the function:
Now that we have the required operators and support routine, we can finally create the operator class:
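The command, essentially as it appears in src/tutorial/complex.sql, is:

CREATE OPERATOR CLASS complex_abs_ops
    DEFAULT FOR TYPE complex USING btree AS
        OPERATOR        1       < ,
        OPERATOR        2       <= ,
        OPERATOR        3       = ,
        OPERATOR        4       >= ,
        OPERATOR        5       > ,
        FUNCTION        1       complex_abs_cmp(complex, complex);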
And we’re done! It should now be possible to create and use B-tree indexes on complex columns.
We could have written the operator entries more verbosely, as in
but there is no need to do so when the operators take the same data type we are defining the operator class
for.
The above example assumes that you want to make this new operator class the default B-tree operator
class for the complex data type. If you don’t, just leave out the word DEFAULT.
Notice that this definition “overloads” the operator strategy and support function numbers. This is allowed
(for B-tree operator classes only) so long as each instance of a particular number has a different right-hand
data type. The instances that are not cross-type are the default or primary operators of the operator class.
GiST indexes do not allow overloading of strategy or support function numbers, but it is still possible to
get the effect of supporting multiple right-hand data types, by assigning a distinct strategy number to each
operator that needs to be supported. The consistent support function must determine what it needs to
do based on the strategy number, and must be prepared to accept comparison values of the appropriate
data types.
Note: In PostgreSQL versions before 7.4, sorting and grouping operations would implicitly use oper-
ators named =, <, and >. The new behavior of relying on default operator classes avoids having to
make any assumption about the behavior of operators with particular names.
can be satisfied exactly by a B-tree index on the integer column. But there are cases where an index is
useful as an inexact guide to the matching rows. For example, if an R-tree index stores only bounding
boxes for objects, then it cannot exactly satisfy a WHERE condition that tests overlap between nonrectan-
gular objects such as polygons. Yet we could use the index to find objects whose bounding box overlaps
the bounding box of the target object, and then do the exact overlap test only on the objects found by
the index. If this scenario applies, the index is said to be “lossy” for the operator, and we add RECHECK
to the OPERATOR clause in the CREATE OPERATOR CLASS command. RECHECK is valid if the index is
guaranteed to return all the required rows, plus perhaps some additional rows, which can be eliminated by
performing the original operator invocation.
Consider again the situation where we are storing in the index only the bounding box of a complex object
such as a polygon. In this case there’s not much value in storing the whole polygon in the index entry —
we may as well store just a simpler object of type box. This situation is expressed by the STORAGE option
in CREATE OPERATOR CLASS: we’d write something like
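A rough sketch of such a declaration, here for a made-up operator class poly_box_ops and with most of the required GiST operator and support-function entries omitted, would be:

CREATE OPERATOR CLASS poly_box_ops
    DEFAULT FOR TYPE polygon USING gist AS
        OPERATOR        3       &&      RECHECK,
        -- further OPERATOR entries and the GiST support functions
        -- (consistent, union, compress, decompress, penalty,
        --  picksplit, same) would be listed here
        STORAGE         box;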
At present, only the GiST index method supports a STORAGE type that’s different from the column data
type. The GiST compress and decompress support routines must deal with data-type conversion when
STORAGE is used.
Chapter 32. Triggers
This chapter describes how to write trigger functions. Trigger functions can be written in C or in some of
the available procedural languages. It is not currently possible to write a SQL-language trigger function.
• It can return NULL to skip the operation for the current row. This instructs the executor to not perform
the row-level operation that invoked the trigger (the insertion or modification of a particular table row).
• For row-level INSERT and UPDATE triggers only, the returned row becomes the row that will be inserted
or will replace the row being updated. This allows the trigger function to modify the row being inserted
or updated.
A row-level before trigger that does not intend to cause either of these behaviors must be careful to return
as its result the same row that was passed in (that is, the NEW row for INSERT and UPDATE triggers, the
OLD row for DELETE triggers).
The return value is ignored for row-level triggers fired after an operation, and so they may as well return
NULL.
If more than one trigger is defined for the same event on the same relation, the triggers will be fired in
alphabetical order by trigger name. In the case of before triggers, the possibly-modified row returned by
each trigger becomes the input to the next trigger. If any before trigger returns NULL, the operation is
abandoned and subsequent triggers are not fired.
Typically, row before triggers are used for checking or modifying the data that will be inserted or updated.
For example, a before trigger might be used to insert the current time into a timestamp column, or to
check that two elements of the row are consistent. Row after triggers are most sensibly used to propagate
the updates to other tables, or make consistency checks against other tables. The reason for this division
of labor is that an after trigger can be certain it is seeing the final value of the row, while a before trigger
cannot; there might be other before triggers firing after it. If you have no specific reason to make a trigger
before or after, the before case is more efficient, since the information about the operation doesn’t have to
be saved until end of statement.
If a trigger function executes SQL commands then these commands may fire triggers again. This is known
as cascading triggers. There is no direct limitation on the number of cascade levels. It is possible for
cascades to cause a recursive invocation of the same trigger; for example, an INSERT trigger might execute
a command that inserts an additional row into the same table, causing the INSERT trigger to be fired again.
It is the trigger programmer’s responsibility to avoid infinite recursion in such scenarios.
When a trigger is being defined, arguments can be specified for it. The purpose of including arguments in
the trigger definition is to allow different triggers with similar requirements to call the same function. As
an example, there could be a generalized trigger function that takes as its arguments two column names
and puts the current user in one and the current time stamp in the other. Properly written, this trigger
function would be independent of the specific table it is triggering on. So the same function could be
used for INSERT events on any table with suitable columns, to automatically track creation of records in
a transaction table for example. It could also be used to track last-update events if defined as an UPDATE
trigger.
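For instance, such a generalized function might be attached to a table like this (all of the names here are placeholders):

CREATE TRIGGER orders_track_creation
    BEFORE INSERT ON orders
    FOR EACH ROW
    EXECUTE PROCEDURE stamp_user_and_time('created_by', 'created_at');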
Each programming language that supports triggers has its own method for making the trigger input data
available to the trigger function. This input data includes the type of trigger event (e.g., INSERT or
UPDATE) as well as any arguments that were listed in CREATE TRIGGER. For a row-level trigger, the
input data also includes the NEW row for INSERT and UPDATE triggers, and/or the OLD row for UPDATE
and DELETE triggers. Statement-level triggers do not currently have any way to examine the individual
row(s) modified by the statement.
• Statement-level triggers follow simple visibility rules: none of the changes made by a statement are
visible to statement-level triggers that are invoked before the statement, whereas all modifications are
visible to statement-level after triggers.
• The data change (insertion, update, or deletion) causing the trigger to fire is naturally not visible to SQL
commands executed in a row-level before trigger, because it hasn’t happened yet.
• However, SQL commands executed in a row-level before trigger will see the effects of data changes
for rows previously processed in the same outer command. This requires caution, since the ordering of
these change events is not in general predictable; a SQL command that affects multiple rows may visit
the rows in any order.
• When a row-level after trigger is fired, all data changes made by the outer command are already com-
plete, and are visible to the invoked trigger function.
Further information about data visibility rules can be found in Section 39.4. The example in Section 32.4
contains a demonstration of these rules.
CALLED_AS_TRIGGER(fcinfo)
which expands to
((fcinfo)->context != NULL && IsA((fcinfo)->context, TriggerData))
If this returns true, then it is safe to cast fcinfo->context to type TriggerData * and make use of
the pointed-to TriggerData structure. The function must not alter the TriggerData structure or any of
the data it points to.
struct TriggerData is defined in commands/trigger.h:
typedef struct TriggerData
{
    NodeTag       type;
    TriggerEvent  tg_event;
    Relation      tg_relation;
    HeapTuple     tg_trigtuple;
    HeapTuple     tg_newtuple;
    Trigger      *tg_trigger;
    Buffer        tg_trigtuplebuf;
    Buffer        tg_newtuplebuf;
} TriggerData;
and the members are described below.
type
Always T_TriggerData.
tg_event
Describes the event for which the function is called. You may use the following macros to examine
tg_event:
TRIGGER_FIRED_BEFORE(tg_event)
tg_relation
A pointer to a structure describing the relation that the trigger fired for. Look at utils/rel.h for
details about this structure. The most interesting things are tg_relation->rd_att (descriptor of
the relation tuples) and tg_relation->rd_rel->relname (relation name; the type is not char*
but NameData; use SPI_getrelname(tg_relation) to get a char* if you need a copy of the
name).
tg_trigtuple
A pointer to the row for which the trigger was fired. This is the row being inserted, updated, or
deleted. If this trigger was fired for an INSERT or DELETE then this is what you should return from
the function if you don’t want to replace the row with a different one (in the case of INSERT) or skip
the operation.
tg_newtuple
A pointer to the new version of the row, if the trigger was fired for an UPDATE, and NULL if it is for
an INSERT or a DELETE. This is what you have to return from the function if the event is an UPDATE
and you don’t want to replace this row by a different one or skip the operation.
tg_trigger
A pointer to a structure of type Trigger, defined in utils/rel.h:
typedef struct Trigger
{
    Oid         tgoid;
    char       *tgname;
    Oid         tgfoid;
    int16       tgtype;
    bool        tgenabled;
    bool        tgisconstraint;
    Oid         tgconstrrelid;
    bool        tgdeferrable;
    bool        tginitdeferred;
    int16       tgnargs;
    int16       tgattr[FUNC_MAX_ARGS];
    char      **tgargs;
} Trigger;
where tgname is the trigger’s name, tgnargs is the number of arguments in tgargs, and tgargs is an
array of pointers to the arguments specified in the CREATE TRIGGER statement. The other members
are for internal use only.
tg_trigtuplebuf
The buffer containing tg_trigtuple, or InvalidBuffer if there is no such tuple or it is not stored
in a disk buffer.
tg_newtuplebuf
The buffer containing tg_newtuple, or InvalidBuffer if there is no such tuple or it is not stored
in a disk buffer.
A trigger function must return either a HeapTuple pointer or a NULL pointer (not an SQL null value, that
is, do not set isNull true). Be careful to return either tg_trigtuple or tg_newtuple, as appropriate,
if you don’t want to modify the row being operated on.
#include "postgres.h"
#include "executor/spi.h"       /* this is what you need to work with SPI */
#include "commands/trigger.h"   /* ...and triggers */

PG_FUNCTION_INFO_V1(trigf);
Datum
trigf(PG_FUNCTION_ARGS)
{
    TriggerData *trigdata = (TriggerData *) fcinfo->context;
    TupleDesc   tupdesc;
    HeapTuple   rettuple;
    char       *when;
    bool        checknull = false;
    bool        isnull;
    int         ret, i;

    /* make sure it's called as a trigger at all */
    if (!CALLED_AS_TRIGGER(fcinfo))
        elog(ERROR, "trigf: not called by trigger manager");

    /* tuple to return to executor */
    if (TRIGGER_FIRED_BY_UPDATE(trigdata->tg_event))
        rettuple = trigdata->tg_newtuple;
    else
        rettuple = trigdata->tg_trigtuple;

    /* check for null values */
    if (!TRIGGER_FIRED_BY_DELETE(trigdata->tg_event)
        && TRIGGER_FIRED_BEFORE(trigdata->tg_event))
        checknull = true;

    if (TRIGGER_FIRED_BEFORE(trigdata->tg_event))
        when = "before";
    else
        when = "after ";
    tupdesc = trigdata->tg_relation->rd_att;

    /* connect to SPI manager */
    if ((ret = SPI_connect()) < 0)
        elog(NOTICE, "trigf (fired %s): SPI_connect returned %d", when, ret);

    /* get number of rows in table ttest */
    ret = SPI_exec("SELECT count(*) FROM ttest", 0);
    if (ret < 0)
        elog(NOTICE, "trigf (fired %s): SPI_exec returned %d", when, ret);

    /* count(*) returns int8, so be careful to convert */
    i = DatumGetInt64(SPI_getbinval(SPI_tuptable->vals[0],
                                    SPI_tuptable->tupdesc,
                                    1,
                                    &isnull));
    elog(INFO, "trigf (fired %s): there are %d rows in ttest", when, i);
    SPI_finish();

    if (checknull)
    {
        SPI_getbinval(rettuple, tupdesc, 1, &isnull);
        if (isnull)
            rettuple = NULL;
    }
    return PointerGetDatum(rettuple);
}
After you have compiled the source code, declare the function and the triggers:
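The declarations look essentially like this (with ’filename’ standing for the path of the compiled shared library):

CREATE FUNCTION trigf() RETURNS trigger
    AS 'filename' LANGUAGE C;

CREATE TABLE ttest (x integer);

CREATE TRIGGER tbefore BEFORE INSERT OR UPDATE OR DELETE ON ttest
    FOR EACH ROW EXECUTE PROCEDURE trigf();

CREATE TRIGGER tafter AFTER INSERT OR UPDATE OR DELETE ON ttest
    FOR EACH ROW EXECUTE PROCEDURE trigf();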
---
1
(1 row)
Chapter 33. The Rule System
This chapter discusses the rule system in PostgreSQL. Production rule systems are conceptually simple,
but there are many subtle points involved in actually using them.
Some other database systems define active database rules, which are usually stored procedures and trig-
gers. In PostgreSQL, these can be implemented using functions and triggers as well.
The rule system (more precisely speaking, the query rewrite rule system) is totally different from stored
procedures and triggers. It modifies queries to take rules into consideration, and then passes the modified
query to the query planner for planning and execution. It is very powerful, and can be used for many
things such as query language procedures, views, and versions. The theoretical foundations and the power
of this rule system are also discussed in On Rules, Procedures, Caching and Views in Database Systems
and A Unified Framework for Version Modeling Using Production Rules in a Database System.
the others
The other parts of the query tree like the ORDER BY clause aren’t of interest here. The rule sys-
tem substitutes some entries there while applying rules, but that doesn’t have much to do with the
fundamentals of the rule system.
because this is exactly what the CREATE VIEW command does internally. This has some side effects. One
of them is that the information about a view in the PostgreSQL system catalogs is exactly the same as it
is for a table. So for the parser, there is absolutely no difference between a table and a view. They are the
same thing: relations.
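Schematically, using a hypothetical view myview over a table mytab:

CREATE VIEW myview AS SELECT * FROM mytab;

-- is very nearly the same as
CREATE TABLE myview ( /* same column list as mytab */ );
CREATE RULE "_RETURN" AS ON SELECT TO myview DO INSTEAD
    SELECT * FROM mytab;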
The real tables we need in the first two rule system descriptions are these:
rsl.sl_avail,
min(rsh.sh_avail, rsl.sl_avail) AS total_avail
FROM shoe rsh, shoelace rsl
WHERE rsl.sl_color = rsh.slcolor
AND rsl.sl_len_cm >= rsh.slminlen_cm
AND rsl.sl_len_cm <= rsh.slmaxlen_cm;
The CREATE VIEW command for the shoelace view (which is the simplest one we have) will create a
relation shoelace and an entry in pg_rewrite that tells that there is a rewrite rule that must be applied
whenever the relation shoelace is referenced in a query’s range table. The rule has no rule qualification
(discussed later, with the non-SELECT rules, since SELECT rules currently cannot have them) and it is
INSTEAD. Note that rule qualifications are not the same as query qualifications. The action of our rule has
a query qualification. The action of the rule is one query tree that is a copy of the SELECT statement in the
view creation command.
Note: The two extra range table entries for NEW and OLD (named *NEW* and *OLD* for historical rea-
sons in the printed query tree) you can see in the pg_rewrite entry aren’t of interest for SELECT
rules.
Now we populate unit, shoe_data and shoelace_data and run a simple query on a view:
(8 rows)
This is the simplest SELECT you can do on our views, so we take this opportunity to explain the basics of
view rules. The SELECT * FROM shoelace was interpreted by the parser and produced the query tree
and this is given to the rule system. The rule system walks through the range table and checks if there are
rules for any relation. When processing the range table entry for shoelace (the only one up to now) it
finds the _RETURN rule with the query tree
To expand the view, the rewriter simply creates a subquery range-table entry containing the rule’s action
query tree, and substitutes this range table entry for the original one that referenced the view. The resulting
rewritten query tree is almost the same as if you had typed
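Given the shoelace view’s definition (sl_len_cm is the length converted via the unit table), the rewritten query is essentially:

SELECT shoelace.sl_name, shoelace.sl_avail,
       shoelace.sl_color, shoelace.sl_len,
       shoelace.sl_unit, shoelace.sl_len_cm
  FROM (SELECT s.sl_name,
               s.sl_avail,
               s.sl_color,
               s.sl_len,
               s.sl_unit,
               s.sl_len * u.un_fact AS sl_len_cm
          FROM shoelace_data s, unit u
         WHERE s.sl_unit = u.un_name) shoelace;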
There is one difference however: the subquery’s range table has two extra entries shoelace *OLD* and
shoelace *NEW*. These entries don’t participate directly in the query, since they aren’t referenced by the
subquery’s join tree or target list. The rewriter uses them to store the access privilege check information
that was originally present in the range-table entry that referenced the view. In this way, the executor will
still check that the user has proper privileges to access the view, even though there’s no direct use of the
view in the rewritten query.
That was the first rule applied. The rule system will continue checking the remaining range-table entries
in the top query (in this example there are no more), and it will recursively check the range-table entries
in the added subquery to see if any of them reference views. (But it won’t expand *OLD* or *NEW* —
otherwise we’d have infinite recursion!) In this example, there are no rewrite rules for shoelace_data
or unit, so rewriting is complete and the above is the final result given to the planner.
Now we want to write a query that finds out for which shoes currently in the store we have matching
shoelaces (color and length) and where the total number of exactly matching pairs is greater than or equal
to two.
The first rule applied will be the one for the shoe_ready view and it results in the query tree
Similarly, the rules for shoe and shoelace are substituted into the range table of the subquery, leading
to a three-level final query tree:
sh.slmaxlen,
sh.slmaxlen * un.un_fact AS slmaxlen_cm,
sh.slunit
FROM shoe_data sh, unit un
WHERE sh.slunit = un.un_name) rsh,
(SELECT s.sl_name,
s.sl_avail,
s.sl_color,
s.sl_len,
s.sl_unit,
s.sl_len * u.un_fact AS sl_len_cm
FROM shoelace_data s, unit u
WHERE s.sl_unit = u.un_name) rsl
WHERE rsl.sl_color = rsh.slcolor
AND rsl.sl_len_cm >= rsh.slminlen_cm
AND rsl.sl_len_cm <= rsh.slmaxlen_cm) shoe_ready
WHERE shoe_ready.total_avail >= 2;
It turns out that the planner will collapse this tree into a two-level query tree: the bottommost SELECT
commands will be “pulled up” into the middle SELECT since there’s no need to process them separately.
But the middle SELECT will remain separate from the top, because it contains aggregate functions. If we
pulled those up it would change the behavior of the topmost SELECT, which we don’t want. However,
collapsing the query tree is an optimization that the rewrite system doesn’t have to concern itself with.
Note: There is currently no recursion stopping mechanism for view rules in the rule system (only for
the other kinds of rules). This doesn’t hurt much, because the only way to push this into an endless
loop (bloating up the server process until it reaches the memory limit) is to create tables and then
setup the view rules by hand with CREATE RULE in such a way, that one selects from the other that
selects from the one. This could never happen if CREATE VIEW is used because for the first CREATE
VIEW, the second relation does not exist and thus the first view cannot select from the second.
• The range tables contain entries for the tables t1 and t2.
• The target lists contain one variable that points to column b of the range table entry for table t2.
• The qualification expressions compare the columns a of both range-table entries for equality.
• The join trees show a simple join between t1 and t2.
The consequence is that both query trees result in similar execution plans: they are both joins over the
two tables. For the UPDATE, the missing columns from t1 are added to the target list by the planner, and
the final query tree will read as
and thus the executor run over the join will produce exactly the same result set as a
will do. But there is a little problem in UPDATE: The executor does not care what the results from the join it
is doing are meant for. It just produces a result set of rows. The difference that one is a SELECT command
and the other is an UPDATE is handled in the caller of the executor. The caller still knows (looking at the
query tree) that this is an UPDATE, and it knows that this result should go into table t1. But which of the
rows that are there has to be replaced by the new row?
To resolve this problem, another entry is added to the target list in UPDATE (and also in DELETE) state-
ments: the current tuple ID (CTID). This is a system column containing the file block number and position
in the block for the row. Knowing the table, the CTID can be used to retrieve the original row of t1 to be
updated. After adding the CTID to the target list, the query actually looks like
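With the hypothetical tables t1 and t2 from above, the query would look something like:

SELECT t1.a, t2.b, t1.ctid FROM t1, t2 WHERE t1.a = t2.a;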
Now another detail of PostgreSQL enters the stage. Old table rows aren’t overwritten, and this is why
ROLLBACK is fast. In an UPDATE, the new result row is inserted into the table (after stripping the CTID)
and in the row header of the old row, which the CTID pointed to, the cmax and xmax entries are set to the
current command counter and current transaction ID. Thus the old row is hidden, and after the transaction
committed the vacuum cleaner can really move it out.
Knowing all that, we can simply apply view rules in absolutely the same way to any command. There is
no difference.
can be. And the rule system as implemented in PostgreSQL ensures that this is all the information available
about the query up to that point.
in mind. In the following, update rules means rules that are defined on INSERT, UPDATE, or DELETE.
Update rules get applied by the rule system when the result relation and the command type of a query
tree are equal to the object and event given in the CREATE RULE command. For update rules, the rule
system creates a list of query trees. Initially the query-tree list is empty. There can be zero (NOTHING key
word), one, or multiple actions. To simplify, we will look at a rule with one action. This rule can have a
qualification or not and it can be INSTEAD or ALSO (default).
What is a rule qualification? It is a restriction that tells when the actions of the rule should be done
and when not. This qualification can only reference the pseudorelations NEW and/or OLD, which basically
represent the relation that was given as object (but with a special meaning).
So we have four cases that produce the following query trees for a one-action rule.
For ON INSERT rules, the original query (if not suppressed by INSTEAD) is done before any actions
added by rules. This allows the actions to see the inserted row(s). But for ON UPDATE and ON DELETE
rules, the original query is done after the actions added by rules. This ensures that the actions can see the
to-be-updated or to-be-deleted rows; otherwise, the actions might do nothing because they find no rows
matching their qualifications.
The query trees generated from rule actions are thrown into the rewrite system again, and maybe more
rules get applied resulting in more or less query trees. So the query trees in the rule actions must have
either a different command type or a different result relation, otherwise, this recursive process will end up
in a loop. There is a fixed recursion limit of currently 100 iterations. If after 100 iterations there are still
update rules to apply, the rule system assumes a loop over multiple rule definitions and reports an error.
The query trees found in the actions of the pg_rewrite system catalog are only templates. Since they can
reference the range-table entries for NEW and OLD, some substitutions have to be made before they can be
used. For any reference to NEW, the target list of the original query is searched for a corresponding entry.
If found, that entry’s expression replaces the reference. Otherwise, NEW means the same as OLD (for an
UPDATE) or is replaced by a null value (for an INSERT). Any reference to OLD is replaced by a reference
to the range-table entry that is the result relation.
After the system is done applying update rules, it applies view rules to the produced query tree(s). Views
cannot insert new update actions so there is no need to apply update rules to the output of view rewriting.
That’s what we expected. What happened in the background is the following. The parser created the query
tree
There is a rule log_shoelace that is ON UPDATE with the rule qualification expression
(This looks a little strange since you can’t normally write INSERT ... VALUES ... FROM. The FROM
clause here is just to indicate that there are range-table entries in the query tree for *NEW* and *OLD*.
These are needed so that they can be referenced by variables in the INSERT command’s query tree.)
The rule is a qualified ALSO rule, so the rule system has to return two query trees: the modified rule action
and the original query tree. In step 1, the range table of the original query is incorporated into the rule’s
action query tree. This results in:
current_user, current_timestamp )
FROM shoelace_data *NEW*, shoelace_data *OLD*,
shoelace_data shoelace_data;
In step 2, the rule qualification is added to it, so the result set is restricted to rows where sl_avail
changes:
(This looks even stranger, since INSERT ... VALUES doesn’t have a WHERE clause either, but the planner
and executor will have no difficulty with it. They need to support this same functionality anyway for
INSERT ... SELECT.)
In step 3, the original query tree’s qualification is added, restricting the result set further to only the rows
that would have been touched by the original query:
Step 4 replaces references to NEW by the target list entries from the original query tree or by the matching
variable references from the result relation:
That’s it. Since the rule is ALSO, we also output the original query tree. In short, the output from the rule
system is a list of two query trees that correspond to these statements:
These are executed in this order, and that is exactly what the rule was meant to do.
The substitutions and the added qualifications ensure that, if the original query would be, say,
no log entry would get written. In that case, the original query tree does not contain a target list entry
for sl_avail, so NEW.sl_avail will get replaced by shoelace_data.sl_avail. Thus, the extra
command generated by the rule is
four rows in fact get updated (sl1, sl2, sl3, and sl4). But sl3 already has sl_avail = 0. In this case,
the original query tree’s qualification is different, and that results in the extra query tree
being generated by the rule. This query tree will surely insert three new log entries. And that’s absolutely
correct.
Here we can see why it is important that the original query tree is executed last. If the UPDATE had been
executed first, all the rows would have already been set to zero, so the logging INSERT would not find any
row where 0 <> shoelace_data.sl_avail.
If someone now tries to do any of these operations on the view relation shoe, the rule system will apply
these rules. Since the rules have no actions and are INSTEAD, the resulting list of query trees will be empty
and the whole query will become nothing because there is nothing left to be optimized or executed after
the rule system is done with it.
A more sophisticated way to use the rule system is to create rules that rewrite the query tree into one that
does the right operation on the real tables. To do that on the shoelace view, we create the following
rules:
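In outline, the rules map each operation on the view onto the corresponding operation on shoelace_data; they look essentially like this:

CREATE RULE shoelace_ins AS ON INSERT TO shoelace
    DO INSTEAD
    INSERT INTO shoelace_data VALUES (
        NEW.sl_name, NEW.sl_avail, NEW.sl_color, NEW.sl_len, NEW.sl_unit);

CREATE RULE shoelace_upd AS ON UPDATE TO shoelace
    DO INSTEAD
    UPDATE shoelace_data
       SET sl_name = NEW.sl_name,
           sl_avail = NEW.sl_avail,
           sl_color = NEW.sl_color,
           sl_len = NEW.sl_len,
           sl_unit = NEW.sl_unit
     WHERE sl_name = OLD.sl_name;

CREATE RULE shoelace_del AS ON DELETE TO shoelace
    DO INSTEAD
    DELETE FROM shoelace_data WHERE sl_name = OLD.sl_name;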
Now assume that once in a while, a pack of shoelaces arrives at the shop and a big parts list along with it.
But you don’t want to update the shoelace view manually every time. Instead we set up two little tables:
one where you can insert the items from the part list, and one with a special trick. The creation commands
for these are:
CREATE TABLE shoelace_arrive (
    arr_name text,
    arr_quant integer
);
Now you can fill the table shoelace_arrive with the data from the parts list:
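Consistent with the contents shown below, that could be done with:

INSERT INTO shoelace_arrive VALUES ('sl3', 10);
INSERT INTO shoelace_arrive VALUES ('sl6', 20);
INSERT INTO shoelace_arrive VALUES ('sl8', 20);

SELECT * FROM shoelace_arrive;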
arr_name | arr_quant
----------+-----------
sl3 | 10
sl6 | 20
sl8 | 20
(3 rows)
sl7 | 6 | brown | 60 | cm | 60
sl4 | 8 | black | 40 | inch | 101.6
sl3 | 10 | black | 35 | inch | 88.9
sl8 | 21 | brown | 40 | inch | 101.6
sl5 | 4 | brown | 1 | m | 100
sl6 | 20 | brown | 0.9 | m | 90
(8 rows)
It’s a long way from the one INSERT ... SELECT to these results. And the description of the query-tree
transformation will be the last in this chapter. First, there is the parser’s output
Now the first rule shoelace_ok_ins is applied and turns this into
UPDATE shoelace
SET sl_avail = shoelace.sl_avail + shoelace_arrive.arr_quant
FROM shoelace_arrive shoelace_arrive, shoelace_ok shoelace_ok,
shoelace_ok *OLD*, shoelace_ok *NEW*,
shoelace shoelace
WHERE shoelace.sl_name = shoelace_arrive.arr_name;
and throws away the original INSERT on shoelace_ok. This rewritten query is passed to the rule system
again, and the second applied rule shoelace_upd produces
UPDATE shoelace_data
SET sl_name = shoelace.sl_name,
sl_avail = shoelace.sl_avail + shoelace_arrive.arr_quant,
sl_color = shoelace.sl_color,
sl_len = shoelace.sl_len,
sl_unit = shoelace.sl_unit
FROM shoelace_arrive shoelace_arrive, shoelace_ok shoelace_ok,
shoelace_ok *OLD*, shoelace_ok *NEW*,
shoelace shoelace, shoelace *OLD*,
shoelace *NEW*, shoelace_data shoelace_data
WHERE shoelace.sl_name = shoelace_arrive.arr_name
AND shoelace_data.sl_name = shoelace.sl_name;
Again it’s an INSTEAD rule and the previous query tree is trashed. Note that this query still uses the view
shoelace. But the rule system isn’t finished with this step, so it continues and applies the _RETURN rule
on it, and we get
UPDATE shoelace_data
SET sl_name = s.sl_name,
sl_avail = s.sl_avail + shoelace_arrive.arr_quant,
sl_color = s.sl_color,
sl_len = s.sl_len,
sl_unit = s.sl_unit
FROM shoelace_arrive shoelace_arrive, shoelace_ok shoelace_ok,
shoelace_ok *OLD*, shoelace_ok *NEW*,
shoelace shoelace, shoelace *OLD*,
shoelace *NEW*, shoelace_data shoelace_data,
shoelace *OLD*, shoelace *NEW*,
shoelace_data s, unit u
WHERE s.sl_name = shoelace_arrive.arr_name
AND shoelace_data.sl_name = s.sl_name;
Finally, the rule log_shoelace gets applied, producing the extra query tree
After that the rule system runs out of rules and returns the generated query trees.
So we end up with two final query trees that are equivalent to the SQL statements
UPDATE shoelace_data
The result is that data coming from one relation, inserted into another, changed into updates on a third,
changed into updating a fourth, plus logging that final update in a fifth, all gets reduced to two queries.
There is a little detail that’s a bit ugly. Looking at the two queries, it turns out that the shoelace_data
relation appears twice in the range table, where it could definitely be reduced to one appearance. The
planner does not handle this, and so the execution plan for the rule system’s output of the INSERT will be
Nested Loop
-> Merge Join
-> Seq Scan
-> Sort
-> Seq Scan on s
-> Seq Scan
-> Sort
-> Seq Scan on shoelace_arrive
-> Seq Scan on shoelace_data
Merge Join
-> Seq Scan
-> Sort
-> Seq Scan on s
-> Seq Scan
-> Sort
-> Seq Scan on shoelace_arrive
which produces exactly the same entries in the log table. Thus, the rule system causes one extra scan of
the table shoelace_data that is not strictly necessary, and the same redundant scan is done once more
in the UPDATE. But it was a really hard job to make all of this possible at all.
Now we make a final demonstration of the PostgreSQL rule system and its power. Say you add some
shoelaces with extraordinary colors to your database:
We would like to make a view to check which shoelace entries do not fit any shoe in color. The view for
this is
Its output is
Now we want to set it up so that mismatching shoelaces that are not in stock are deleted from the database.
To make it a little harder for PostgreSQL, we don’t delete them directly. Instead we create one more view
Voilà:
A DELETE on a view, with a subquery qualification that in total uses 4 nesting/joined views, where one of
them itself has a subquery qualification containing a view and where calculated view columns are used,
gets rewritten into one single query tree that deletes the requested data from a real table.
There are probably only a few situations out in the real world where such a construct is necessary. But it
makes you feel comfortable that it works.
Rewrite rules don’t have a separate owner. The owner of a relation (table or view) is automatically the
owner of the rewrite rules that are defined for it. The PostgreSQL rule system changes the behavior of the
default access control system. Relations that are used due to rules get checked against the privileges of the
rule owner, not the user invoking the rule. This means that a user only needs the required privileges for
the tables/views that he names explicitly in his queries.
For example: A user has a list of phone numbers where some of them are private, the others are of interest
for the secretary of the office. He can construct the following:
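A minimal sketch of such a setup (the column names are illustrative) might be:

CREATE TABLE phone_data (person text, phone text, private boolean);

CREATE VIEW phone_number AS
    SELECT person, phone FROM phone_data WHERE NOT private;

GRANT SELECT ON phone_number TO secretary;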
Nobody except him (and the database superusers) can access the phone_data table. But because of
the GRANT, the secretary can run a SELECT on the phone_number view. The rule system will rewrite
the SELECT from phone_number into a SELECT from phone_data and add the qualification that only
entries where private is false are wanted. Since the user is the owner of phone_number and therefore
the owner of the rule, the read access to phone_data is now checked against his privileges and the
query is permitted. The check for accessing phone_number is also performed, but this is done against
the invoking user, so nobody but the user and the secretary can use it.
The privileges are checked rule by rule. So the secretary is for now the only one who can see the public
phone numbers. But the secretary can set up another view and grant access to that to the public. Then,
anyone can see the phone_number data through the secretary’s view. What the secretary cannot do is
to create a view that directly accesses phone_data. (Actually he can, but it will not work since every
access will be denied during the permission checks.) And as soon as the user notices that the secretary has
opened his phone_number view, he can revoke the secretary’s access. Immediately, any access to the
secretary’s view would fail.
One might think that this rule-by-rule checking is a security hole, but in fact it isn’t. If it did not work
this way, the secretary could set up a table with the same columns as phone_number and copy the data
there once per day. Then the data would be his own and he could grant access to anyone he wants. A GRANT
command means “I trust you”. If someone you trust does the thing above, it’s time to think it over and
then use REVOKE.
This mechanism also works for update rules. In the examples of the previous section, the owner of the
tables in the example database could grant the privileges SELECT, INSERT, UPDATE, and DELETE on the
shoelace view to someone else, but only SELECT on shoelace_log. The rule action to write log entries
will still be executed successfully, and that other user could see the log entries. But he cannot create fake
entries, nor could he manipulate or remove existing ones.
• If there is no unconditional INSTEAD rule for the query, then the originally given query will be executed,
and its command status will be returned as usual. (But note that if there were any conditional INSTEAD
rules, the negation of their qualifications will have been added to the original query. This may reduce
the number of rows it processes, and if so the reported status will be affected.)
• If there is any unconditional INSTEAD rule for the query, then the original query will not be executed
at all. In this case, the server will return the command status for the last query that was inserted by an
INSTEAD rule (conditional or unconditional) and is of the same command type (INSERT, UPDATE, or
DELETE) as the original query. If no query meeting those requirements is added by any rule, then the
returned command status shows the original query type and zeroes for the row-count and OID fields.
(This system was established in PostgreSQL 7.3. In versions before that, the command status might show
different results when rules exist.)
The programmer can ensure that any desired INSTEAD rule is the one that sets the command status in the
second case, by giving it the alphabetically last rule name among the active rules, so that it gets applied
last.
Both tables have many thousands of rows and the indexes on hostname are unique. The rule or trigger
should implement a constraint that deletes rows from software that reference a deleted computer. The
trigger would use this command:
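That command is a simple parameterized delete, essentially:

DELETE FROM software WHERE hostname = $1;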
Since the trigger is called for each individual row deleted from computer, it can prepare and save the
plan for this command and pass the hostname value in the parameter. The rule would be written as
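A sketch of such a rule (the rule name is arbitrary):

CREATE RULE computer_del AS ON DELETE TO computer
    DO DELETE FROM software WHERE hostname = OLD.hostname;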
the table computer is scanned by index (fast), and the command issued by the trigger would also use an
index scan (also fast). The extra command from the rule would be
Since there are appropriate indexes set up, the planner will create a plan of
Nestloop
-> Index Scan using comp_hostidx on computer
-> Index Scan using soft_hostidx on software
So there would not be much difference in speed between the trigger and the rule implementation.
With the next delete we want to get rid of all 2000 computers whose hostname starts with old.
There are two possible commands to do that. One is
DELETE FROM software WHERE computer.hostname >= ’old’ AND computer.hostname < ’ole’
AND software.hostname = computer.hostname;
Hash Join
-> Seq Scan on software
-> Hash
-> Index Scan using comp_hostidx on computer
which results in the following execution plan for the command added by the rule:
Nestloop
-> Index Scan using comp_hostidx on computer
-> Index Scan using soft_hostidx on software
This shows that the planner does not realize that the qualification for hostname in computer could also
be used for an index scan on software when there are multiple qualification expressions combined with
AND, which is what it does in the regular-expression version of the command. The trigger will get invoked
once for each of the 2000 old computers that have to be deleted, and that will result in one index scan over
computer and 2000 index scans over software. The rule implementation will do it with two commands
that use indexes. And it depends on the overall size of the table software whether the rule will still be
faster in the sequential scan situation. 2000 command executions from the trigger over the SPI manager
take some time, even if all the index blocks will soon be in the cache.
The last command we look at is
Again this could result in many rows to be deleted from computer. So the trigger will again run many
commands through the executor. The command generated by the rule will be
The plan for that command will again be the nested loop over two index scans, only using a different index
on computer:
Nestloop
-> Index Scan using comp_manufidx on computer
-> Index Scan using soft_hostidx on software
In any of these cases, the extra commands from the rule system will be more or less independent of the
number of affected rows in a command.
The summary is that rules will only be significantly slower than triggers if their actions result in large and
badly qualified joins, a situation where the planner fails.
Chapter 34. Procedural Languages
PostgreSQL allows user-defined functions to be written in other languages besides SQL and C. These
other languages are generically called procedural languages (PLs). For a function written in a procedural
language, the database server has no built-in knowledge about how to interpret the function’s source
text. Instead, the task is passed to a special handler that knows the details of the language. The handler
could either do all the work of parsing, syntax analysis, execution, etc. itself, or it could serve as “glue”
between PostgreSQL and an existing implementation of a programming language. The handler itself is a
C language function compiled into a shared object and loaded on demand, just like any other C function.
There are currently four procedural languages available in the standard PostgreSQL distribution:
PL/pgSQL (Chapter 35), PL/Tcl (Chapter 36), PL/Perl (Chapter 37), and PL/Python (Chapter 38). Other
languages can be defined by users. The basics of developing a new procedural language are covered in
Chapter 45.
There are additional procedural languages available that are not included in the core distribution. Ap-
pendix H has information about finding them.
The manual procedure described below is only recommended for installing custom languages that
createlang does not know about.
A procedural language is installed in a database in four steps, which must be carried out by a database
superuser. The createlang program automates all but step 1.
1. The shared object for the language handler must be compiled and installed into an appropriate library
directory. This works in the same way as building and installing modules with regular user-defined
C functions does; see Section 31.9.6. Often, the language handler will depend on an external library
that provides the actual programming language engine; if so, that must be installed as well.
2. The handler must be declared with the command
CREATE FUNCTION handler_function_name()
RETURNS language_handler
AS ’path-to-shared-object’
LANGUAGE C;
The special return type of language_handler tells the database system that this function does not
return one of the defined SQL data types and is not directly usable in SQL statements.
3. Optionally, the language handler may provide a “validator” function that checks a function definition
for correctness without actually executing it. The validator function is called by CREATE FUNCTION
if it exists. If a validator function is provided by the handler, declare it with a command like
CREATE FUNCTION validator_function_name(oid)
RETURNS void
AS ’path-to-shared-object’
LANGUAGE C;
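4. Finally, the PL must be declared with a command of the form
CREATE [TRUSTED] [PROCEDURAL] LANGUAGE language_name
    HANDLER handler_function_name
    [VALIDATOR validator_function_name];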
The optional key word TRUSTED specifies that ordinary database users that have no superuser privi-
leges should be allowed to use this language to create functions and trigger procedures. Since PL func-
tions are executed inside the database server, the TRUSTED flag should only be given for languages
that do not allow access to database server internals or the file system. The languages PL/pgSQL,
PL/Tcl, and PL/Perl are considered trusted; the languages PL/TclU, PL/PerlU, and PL/PythonU are
designed to provide unlimited functionality and should not be marked trusted.
Example 34-1 shows how the manual installation procedure would work with the language PL/pgSQL.
The following command tells the database server where to find the shared object for the PL/pgSQL lan-
guage’s call handler function.
CREATE FUNCTION plpgsql_call_handler() RETURNS language_handler AS
’$libdir/plpgsql’ LANGUAGE C;
The command
CREATE TRUSTED PROCEDURAL LANGUAGE plpgsql
HANDLER plpgsql_call_handler
VALIDATOR plpgsql_validator;
then defines that the previously declared functions should be invoked for functions and trigger procedures
where the language attribute is plpgsql.
In a default PostgreSQL installation, the handler for the PL/pgSQL language is built and installed into the
“library” directory. If Tcl support is configured in, the handlers for PL/Tcl and PL/TclU are also built and
installed in the same location. Likewise, the PL/Perl and PL/PerlU handlers are built and installed if Perl
support is configured, and PL/PythonU is installed if Python support is configured.
Chapter 35. PL/pgSQL - SQL Procedural Language
PL/pgSQL is a loadable procedural language for the PostgreSQL database system. The design goals of
PL/pgSQL were to create a loadable procedural language that
Except for input/output conversion and calculation functions for user-defined types, anything that can be
defined in C language functions can also be done with PL/pgSQL. For example, it is possible to create
complex conditional computation functions and later use them to define operators or use them in index
expressions.
35.1. Overview
The PL/pgSQL call handler parses the function’s source text and produces an internal binary instruction
tree the first time the function is called (within each session). The instruction tree fully translates the
PL/pgSQL statement structure, but individual SQL expressions and SQL commands used in the function
are not translated immediately.
As each expression and SQL command is first used in the function, the PL/pgSQL interpreter creates a
prepared execution plan (using the SPI manager’s SPI_prepare and SPI_saveplan functions). Sub-
sequent visits to that expression or command reuse the prepared plan. Thus, a function with conditional
code that contains many statements for which execution plans might be required will only prepare and
save those plans that are really used during the lifetime of the database connection. This can substan-
tially reduce the total amount of time required to parse, and generate execution plans for, the statements
in a PL/pgSQL function. A disadvantage is that errors in a specific expression or command may not be
detected until that part of the function is reached in execution.
Once PL/pgSQL has made an execution plan for a particular command in a function, it will reuse that
plan for the life of the database connection. This is usually a win for performance, but it can cause some
problems if you dynamically alter your database schema. For example:
CREATE FUNCTION populate() RETURNS integer AS $$
DECLARE
    -- declarations
BEGIN
    PERFORM my_function();
END;
$$ LANGUAGE plpgsql;
If you execute the above function, it will reference the OID for my_function() in the execution plan
produced for the PERFORM statement. Later, if you drop and recreate my_function(), then populate()
will not be able to find my_function() anymore. You would then have to recreate populate(), or at
least start a new database session so that it will be compiled afresh. Another way to avoid this problem is to
use CREATE OR REPLACE FUNCTION when updating the definition of my_function (when a function
is “replaced”, its OID is not changed).
Because PL/pgSQL saves execution plans in this way, SQL commands that appear directly in a PL/pgSQL
function must refer to the same tables and columns on every execution; that is, you cannot use a parameter
as the name of a table or column in an SQL command. To get around this restriction, you can construct dy-
namic commands using the PL/pgSQL EXECUTE statement — at the price of constructing a new execution
plan on every execution.
Note: The PL/pgSQL EXECUTE statement is not related to the EXECUTE SQL statement supported by
the PostgreSQL server. The server’s EXECUTE statement cannot be used within PL/pgSQL functions
(and is not needed).
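For illustration only (the function and table names here are hypothetical, not taken from the manual's own examples), a command whose target table is chosen at run time can be issued through EXECUTE like this:
CREATE FUNCTION clear_table(tabname text) RETURNS void AS $$
BEGIN
    EXECUTE 'DELETE FROM ' || quote_ident(tabname);
    RETURN;
END;
$$ LANGUAGE plpgsql;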
PL/pgSQL functions can also be declared to return a “set”, or table, of any data type they can return
a single instance of. Such a function generates its output by executing RETURN NEXT for each desired
element of the result set.
Finally, a PL/pgSQL function may be declared to return void if it has no useful return value.
PL/pgSQL does not currently have full support for domain types: it treats a domain the same as the
underlying scalar type. This means that constraints associated with the domain will not be enforced. This
is not an issue for function arguments, but it is a hazard if you declare a PL/pgSQL function as returning
a domain type.
While running psql, you can load or reload such a function definition file with
\i filename.sql
Within this, you might use quote marks for simple literal strings in SQL commands and $$ to delimit
fragments of SQL commands that you are assembling as strings. If you need to quote text that includes
$$, you could use $Q$, and so on.
The following chart shows what you have to do when writing quote marks without dollar quoting. It may
be useful when translating pre-dollar quoting code into something more comprehensible.
1 quotation mark
To begin and end the function body, for example:
CREATE FUNCTION foo() RETURNS integer AS '
....
' LANGUAGE plpgsql;
Anywhere within a single-quoted function body, quote marks must appear in pairs.
2 quotation marks
For string literals inside the function body, for example:
a_output := ''Blah'';
SELECT * FROM users WHERE f_name=''foobar'';
In the dollar-quoting approach, you'd just write
a_output := 'Blah';
SELECT * FROM users WHERE f_name='foobar';
which is exactly what the PL/pgSQL parser would see in either case.
4 quotation marks
When you need a single quotation mark in a string constant inside the function body, for example:
a_output := a_output || '' AND name LIKE ''''foobar'''' AND xyz''
The value actually appended to a_output would be: AND name LIKE 'foobar' AND xyz.
In the dollar-quoting approach, you’d write
a_output := a_output || $$ AND name LIKE 'foobar' AND xyz$$
being careful that any dollar-quote delimiters around this are not just $$.
6 quotation marks
When a single quotation mark in a string inside the function body is adjacent to the end of that string
constant, for example:
a_output := a_output || '' AND name LIKE ''''foobar''''''
The value appended to a_output would then be: AND name LIKE 'foobar'.
In the dollar-quoting approach, this becomes
a_output := a_output || $$ AND name LIKE 'foobar'$$
10 quotation marks
When you want two single quotation marks in a string constant (which accounts for 8 quotation
marks) and this is adjacent to the end of that string constant (2 more). You will probably only need
that if you are writing a function that generates other functions, as in Example 35-5. For example:
a_output := a_output || '' if v_'' ||
referrer_keys.kind || '' like ''''''''''
|| referrer_keys.key_string || ''''''''''
then return '''''' || referrer_keys.referrer_type
|| ''''''; end if;'';
where we assume we only need to put single quote marks into a_output, because it will be re-quoted
before use.
A variant approach is to escape quotation marks in the function body with a backslash rather than by
doubling them. With this method you’ll find yourself writing things like \'\' instead of ''. Some find
this easier to keep track of, some do not.
[ <<label>> ]
[ DECLARE
declarations ]
BEGIN
statements
END;
Each declaration and each statement within a block is terminated by a semicolon. A block that appears
within another block must have a semicolon after END, as shown above; however the final END that con-
cludes a function body does not require a semicolon.
All key words and identifiers can be written in mixed upper and lower case. Identifiers are implicitly
converted to lowercase unless double-quoted.
There are two types of comments in PL/pgSQL. A double dash (--) starts a comment that extends to the
end of the line. A /* starts a block comment that extends to the next occurrence of */. Block comments
cannot be nested, but double dash comments can be enclosed into a block comment and a double dash can
hide the block comment delimiters /* and */.
Any statement in the statement section of a block can be a subblock. Subblocks can be used for logical
grouping or to localize variables to a small group of statements.
The variables declared in the declarations section preceding a block are initialized to their default values every time the block is entered, not only once per function call. For example:
CREATE FUNCTION somefunc() RETURNS integer AS $$
DECLARE
    quantity integer := 30;
BEGIN
    -- a subblock could redeclare quantity here without affecting this variable
    quantity := 50;
    RETURN quantity;
END;
$$ LANGUAGE plpgsql;
It is important not to confuse the use of BEGIN/END for grouping statements in PL/pgSQL with the
database commands for transaction control. PL/pgSQL’s BEGIN/END are only for grouping; they do not
start or end a transaction. Functions and trigger procedures are always executed within a transaction estab-
lished by an outer query — they cannot start or commit that transaction, since there would be no context
for them to execute in. However, a block containing an EXCEPTION clause effectively forms a subtrans-
action that can be rolled back without affecting the outer transaction. For more about that see Section
35.7.5.
35.4. Declarations
All variables used in a block must be declared in the declarations section of the block. (The only exception
is that the loop variable of a FOR loop iterating over a range of integer values is automatically declared as
an integer variable.)
PL/pgSQL variables can have any SQL data type, such as integer, varchar, and char.
Here are some examples of variable declarations:
user_id integer;
quantity numeric(5);
url varchar;
myrow tablename%ROWTYPE;
myfield tablename.columnname%TYPE;
arow RECORD;
The DEFAULT clause, if given, specifies the initial value assigned to the variable when the block is entered.
If the DEFAULT clause is not given then the variable is initialized to the SQL null value. The CONSTANT
option prevents the variable from being assigned to, so that its value remains constant for the duration of
the block. If NOT NULL is specified, an assignment of a null value results in a run-time error. All variables
declared as NOT NULL must have a nonnull default value specified.
The default value is evaluated every time the block is entered. So, for example, assigning now() to a
variable of type timestamp causes the variable to have the time of the current function call, not the time
when the function was precompiled.
Examples:
quantity integer DEFAULT 32;
url varchar := 'http://mysite.com';
user_id CONSTANT integer := 10;
The other way, which was the only way available before PostgreSQL 8.0, is to explicitly declare an alias, using the declaration syntax
name ALIAS FOR $n;
When the return type of a PL/pgSQL function is declared as a polymorphic type (anyelement or
anyarray), a special parameter $0 is created. Its data type is the actual return type of the function, as
deduced from the actual input types (see Section 31.2.5). This allows the function to access its actual
return type as shown in Section 35.4.2. $0 is initialized to null and can be modified by the function, so it
can be used to hold the return value if desired, though that is not required. $0 can also be given an alias.
For example, this function works on any data type that has a + operator:
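A minimal sketch of such a function (the name add_two_values is illustrative):
CREATE FUNCTION add_two_values(v1 anyelement, v2 anyelement)
RETURNS anyelement AS $$
DECLARE
    result ALIAS FOR $0;
BEGIN
    result := v1 + v2;
    RETURN result;
END;
$$ LANGUAGE plpgsql;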
%TYPE provides the data type of a variable or table column. You can use this to declare variables that will
hold database values. For example, let’s say you have a column named user_id in your users table. To
declare a variable with the same data type as users.user_id you write:
user_id users.user_id%TYPE;
By using %TYPE you don’t need to know the data type of the structure you are referencing, and most
importantly, if the data type of the referenced item changes in the future (for instance: you change the
type of user_id from integer to real), you may not need to change your function definition.
%TYPE is particularly valuable in polymorphic functions, since the data types needed for internal variables
may change from one call to the next. Appropriate variables can be created by applying %TYPE to the
function’s arguments or result placeholders.
A variable of a composite type is called a row variable (or row-type variable). Such a variable can hold
a whole row of a SELECT or FOR query result, so long as that query’s column set matches the declared
type of the variable. The individual fields of the row value are accessed using the usual dot notation, for
example rowvar.field.
A row variable can be declared to have the same type as the rows of an existing table or view, by using
the table_name%ROWTYPE notation; or it can be declared by giving a composite type’s name. (Since
every table has an associated composite type of the same name, it actually does not matter in PostgreSQL
whether you write %ROWTYPE or not. But the form with %ROWTYPE is more portable.)
Parameters to a function can be composite types (complete table rows). In that case, the corresponding
identifier $n will be a row variable, and fields can be selected from it, for example $1.user_id.
Only the user-defined columns of a table row are accessible in a row-type variable, not the OID or other
system columns (because the row could be from a view). The fields of the row type inherit the table’s field
size or precision for data types such as char(n).
Here is an example of using composite types:
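As a sketch (the users table and its user_id and f_name columns are reused from earlier examples; the function name is illustrative), a function taking a whole table row and using a row variable might look like:
CREATE FUNCTION user_fname(u users) RETURNS text AS $$
DECLARE
    urow users%ROWTYPE;
BEGIN
    SELECT INTO urow * FROM users WHERE user_id = u.user_id;
    RETURN urow.f_name;
END;
$$ LANGUAGE plpgsql;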
Record variables are similar to row-type variables, but they have no predefined structure. They take on the
actual row structure of the row they are assigned during a SELECT or FOR command. The substructure of a
record variable can change each time it is assigned to. A consequence of this is that until a record variable
is first assigned to, it has no substructure, and any attempt to access a field in it will draw a run-time error.
Note that RECORD is not a true data type, only a placeholder. One should also realize that when a
PL/pgSQL function is declared to return type record, this is not quite the same concept as a record
variable, even though such a function may well use a record variable to hold its result. In both cases the
actual row structure is unknown when the function is written, but for a function returning record the
actual structure is determined when the calling query is parsed, whereas a record variable can change its
row structure on-the-fly.
35.4.5. RENAME
RENAME oldname TO newname;
Using the RENAME declaration you can change the name of a variable, record or row. This is primarily
useful if NEW or OLD should be referenced by another name inside a trigger procedure. See also ALIAS.
Examples:
RENAME id TO user_id;
RENAME this_var TO that_var;
Note: RENAME appears to be broken as of PostgreSQL 7.3. Fixing this is of low priority, since ALIAS
covers most of the practical uses of RENAME.
35.5. Expressions
All expressions used in PL/pgSQL statements are processed using the server’s regular SQL executor. In
effect, a query like
SELECT expression
is executed using the SPI manager. Before evaluation, occurrences of PL/pgSQL variable identifiers are
replaced by parameters, and the actual values from the variables are passed to the executor in the parameter
array. This allows the query plan for the SELECT to be prepared just once and then reused for subsequent
evaluations.
The evaluation done by the PostgreSQL main parser has some side effects on the interpretation of constant values. In detail there is a difference between what these two functions do:
CREATE FUNCTION logfunc1(logtxt text) RETURNS timestamp AS $$
    BEGIN
        INSERT INTO logtable VALUES (logtxt, 'now');
        RETURN 'now';
    END;
$$ LANGUAGE plpgsql;
and
CREATE FUNCTION logfunc2(logtxt text) RETURNS timestamp AS $$
    DECLARE
        curtime timestamp;
    BEGIN
        curtime := 'now';
        INSERT INTO logtable VALUES (logtxt, curtime);
        RETURN curtime;
    END;
$$ LANGUAGE plpgsql;
In the case of logfunc1, the PostgreSQL main parser knows when preparing the plan for the INSERT,
that the string ’now’ should be interpreted as timestamp because the target column of logtable is
of that type. Thus, it will make a constant from it at this time and this constant value is then used in
all invocations of logfunc1 during the lifetime of the session. Needless to say that this isn’t what the
programmer wanted.
In the case of logfunc2, the PostgreSQL main parser does not know what type ’now’ should become and
therefore it returns a data value of type text containing the string now. During the ensuing assignment to
the local variable curtime, the PL/pgSQL interpreter casts this string to the timestamp type by calling
the text_out and timestamp_in functions for the conversion. So, the computed time stamp is updated
on each execution as the programmer expects.
The mutable nature of record variables presents a problem in this connection. When fields of a record
variable are used in expressions or statements, the data types of the fields must not change between calls
of one and the same expression, since the expression will be planned using the data type that is present
when the expression is first reached. Keep this in mind when writing trigger procedures that handle events
for more than one table. (EXECUTE can be used to get around this problem when necessary.)
35.6.1. Assignment
An assignment of a value to a variable or row/record field is written as:
identifier := expression;
As explained above, the expression in such a statement is evaluated by means of an SQL SELECT com-
mand sent to the main database engine. The expression must yield a single value.
If the expression’s result data type doesn’t match the variable’s data type, or the variable has a specific
size/precision (like char(20)), the result value will be implicitly converted by the PL/pgSQL interpreter
using the result type’s output-function and the variable type’s input-function. Note that this could poten-
tially result in run-time errors generated by the input function, if the string form of the result value is not
acceptable to the input function.
Examples:
user_id := 20;
tax := subtotal * 0.06;
SELECT INTO target select_expressions FROM ...;
where target can be a record variable, a row variable, or a comma-separated list of simple variables
and record/row fields. The select_expressions and the remainder of the command are the same as
in regular SQL.
Note that this is quite different from PostgreSQL’s normal interpretation of SELECT INTO, where the
INTO target is a newly created table. If you want to create a table from a SELECT result inside a PL/pgSQL
function, use the syntax CREATE TABLE ... AS SELECT.
If a row or a variable list is used as target, the selected values must exactly match the structure of the
target, or a run-time error occurs. When a record variable is the target, it automatically configures itself to
the row type of the query result columns.
Except for the INTO clause, the SELECT statement is the same as a normal SQL SELECT command and
can use its full power.
The INTO clause can appear almost anywhere in the SELECT statement. Customarily it is written either
just after SELECT as shown above, or just before FROM — that is, either just before or just after the list of
select_expressions.
If the query returns zero rows, null values are assigned to the target(s). If the query returns multiple
rows, the first row is assigned to the target(s) and the rest are discarded. (Note that “the first row” is not
well-defined unless you’ve used ORDER BY.)
You can check the special FOUND variable (see Section 35.6.6) after a SELECT INTO statement to deter-
mine whether the assignment was successful, that is, at least one row was returned by the query. For
example:
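A sketch (the table emp and the variable names are illustrative):
SELECT INTO myrec * FROM emp WHERE empname = myname;
IF NOT FOUND THEN
    RAISE EXCEPTION 'employee % not found', myname;
END IF;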
To test for whether a record/row result is null, you can use the IS NULL conditional. There is, however,
no way to tell whether any additional rows might have been discarded. Here is an example that handles
the case where no rows have been returned:
DECLARE
    users_rec RECORD;
BEGIN
    SELECT INTO users_rec * FROM users WHERE user_id=3;
    IF users_rec.homepage IS NULL THEN
        RETURN 'http://';
    END IF;
END;
PERFORM query;
This executes query and discards the result. Write the query the same way as you would in an SQL
SELECT command, but replace the initial keyword SELECT with PERFORM. PL/pgSQL variables will be
substituted into the query as usual. Also, the special variable FOUND is set to true if the query produced at
least one row or false if it produced no rows.
Note: One might expect that SELECT with no INTO clause would accomplish this result, but at present
the only accepted way to do it is PERFORM.
An example:
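For instance, calling a function purely for its side effects (log_event is a hypothetical function):
PERFORM log_event('checkout', now());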
Sometimes a placeholder statement that does nothing is useful, for example to show that one arm of an IF/THEN/ELSE chain is deliberately empty. For this purpose, use the NULL statement:
NULL;
For example, the following two fragments of code are equivalent:
BEGIN
y := x / 0;
EXCEPTION
WHEN division_by_zero THEN
NULL; -- ignore the error
END;
BEGIN
y := x / 0;
EXCEPTION
WHEN division_by_zero THEN -- ignore the error
END;
Note: In Oracle’s PL/SQL, empty statement lists are not allowed, and so NULL statements are required
for situations such as this. PL/pgSQL allows you to just write nothing, instead.
EXECUTE command-string;
where command-string is an expression yielding a string (of type text) containing the command to
be executed. This string is fed literally to the SQL engine.
Note in particular that no substitution of PL/pgSQL variables is done on the command string. The values
of variables must be inserted in the command string as it is constructed.
Unlike all other commands in PL/pgSQL, a command run by an EXECUTE statement is not prepared and
saved just once during the life of the session. Instead, the command is prepared each time the statement is
run. The command string can be dynamically created within the function to perform actions on different
tables and columns.
The results from SELECT commands are discarded by EXECUTE, and SELECT INTO is not currently
supported within EXECUTE. So there is no way to extract a result from a dynamically-created SELECT
using the plain EXECUTE command. There are two other ways to do it, however: one is to use the
FOR-IN-EXECUTE loop form described in Section 35.7.4, and the other is to use a cursor with
OPEN-FOR-EXECUTE, as described in Section 35.8.2.
When working with dynamic commands you will often have to handle escaping of single quotes. The
recommended method for quoting fixed text in your function body is dollar quoting. (If you have legacy
code that does not use dollar quoting, please refer to the overview in Section 35.2.1, which can save you
some effort when translating said code to a more reasonable scheme.)
Dynamic values that are to be inserted into the constructed query require special handling since they might
themselves contain quote characters. An example (this assumes that you are using dollar quoting for the
function as a whole, so the quote marks need not be doubled):
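A sketch of such a construction (tbl, colname, newvalue, and keyvalue are illustrative names):
EXECUTE 'UPDATE tbl SET '
        || quote_ident(colname)
        || ' = '
        || quote_literal(newvalue)
        || ' WHERE key = '
        || quote_literal(keyvalue);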
This example shows use of the functions quote_ident(text) and quote_literal(text). For safety,
variables containing column and table identifiers should be passed to function quote_ident. Vari-
ables containing values that should be literal strings in the constructed command should be passed to
quote_literal. Both take the appropriate steps to return the input text enclosed in double or single
quotes respectively, with any embedded special characters properly escaped.
Note that dollar quoting is only useful for quoting fixed text. It would be a very bad idea to write the above example with newvalue spliced in between dollar-quote delimiters instead of being passed through quote_literal, because it would break if the contents of newvalue happened to contain $$. The same objection would
apply to any other dollar-quoting delimiter you might pick. So, to safely quote text that is not known in
advance, you must use quote_literal.
A much larger example of a dynamic command and EXECUTE can be seen in Example 35-5, which builds
and executes a CREATE FUNCTION command to define a new function.
GET DIAGNOSTICS variable = item [ , ... ];
This command allows retrieval of system status indicators. Each item is a key word identifying a state
value to be assigned to the specified variable (which should be of the right data type to receive it). The
currently available status items are ROW_COUNT, the number of rows processed by the last SQL command
sent down to the SQL engine, and RESULT_OID, the OID of the last row inserted by the most recent SQL
command. Note that RESULT_OID is only useful after an INSERT command.
An example:
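For instance (integer_var is assumed to be an integer variable declared in the function):
GET DIAGNOSTICS integer_var = ROW_COUNT;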
The second method to determine the effects of a command is to check the special variable named FOUND,
which is of type boolean. FOUND starts out false within each PL/pgSQL function call. It is set by each of
the following types of statements:
• A SELECT INTO statement sets FOUND true if it returns a row, false if no row is returned.
• A PERFORM statement sets FOUND true if it produces (and discards) a row, false if no row is produced.
• UPDATE, INSERT, and DELETE statements set FOUND true if at least one row is affected, false if no row
is affected.
• A FETCH statement sets FOUND true if it returns a row, false if no row is returned.
• A FOR statement sets FOUND true if it iterates one or more times, else false. This applies to all three
variants of the FOR statement (integer FOR loops, record-set FOR loops, and dynamic record-set FOR
loops). FOUND is set this way when the FOR loop exits; inside the execution of the loop, FOUND is not
modified by the FOR statement, although it may be changed by the execution of other statements within
the loop body.
FOUND is a local variable within each PL/pgSQL function; so any changes to it affect only the current
function.
35.7.1.1. RETURN
RETURN expression;
RETURN with an expression terminates the function and returns the value of expression to the caller.
This form is to be used for PL/pgSQL functions that do not return a set.
When returning a scalar type, any expression can be used. The expression’s result will be automatically
cast into the function’s return type as described for assignments. To return a composite (row) value, you
must write a record or row variable as the expression.
The return value of a function cannot be left undefined. If control reaches the end of the top-level block
of the function without hitting a RETURN statement, a run-time error will occur.
If you have declared the function to return void, a RETURN statement must still be provided; but in this
case the expression following RETURN is optional and will be ignored if present.
When a PL/pgSQL function is declared to return SETOF sometype, the procedure to follow is slightly
different. In that case, the individual items to return are specified in RETURN NEXT commands, and then
a final RETURN command with no argument is used to indicate that the function has finished executing.
RETURN NEXT can be used with both scalar and composite data types; in the latter case, an entire “table”
of results will be returned.
Functions that use RETURN NEXT should be called in the following fashion:
SELECT * FROM some_func();
That is, the function must be used as a table source in a FROM clause.
RETURN NEXT does not actually return from the function; it simply saves away the value of the expression.
Execution then continues with the next statement in the PL/pgSQL function. As successive RETURN NEXT
commands are executed, the result set is built up. A final RETURN, which should have no argument, causes
control to exit the function.
Note: The current implementation of RETURN NEXT for PL/pgSQL stores the entire result set before
returning from the function, as discussed above. That means that if a PL/pgSQL function produces
a very large result set, performance may be poor: data will be written to disk to avoid memory ex-
haustion, but the function itself will not return until the entire result set has been generated. A future
version of PL/pgSQL may allow users to define set-returning functions that do not have this limitation.
Currently, the point at which data begins being written to disk is controlled by the work_mem configura-
tion variable. Administrators who have sufficient memory to store larger result sets in memory should
consider increasing this parameter.
35.7.2. Conditionals
IF statements let you execute commands based on certain conditions. PL/pgSQL has five forms of IF:
• IF ... THEN
• IF ... THEN ... ELSE
• IF ... THEN ... ELSE IF
• IF ... THEN ... ELSIF ... THEN ... ELSE
• IF ... THEN ... ELSEIF ... THEN ... ELSE
35.7.2.1. IF-THEN
IF boolean-expression THEN
statements
END IF;
IF-THEN statements are the simplest form of IF. The statements between THEN and END IF will be
executed if the condition is true. Otherwise, they are skipped.
Example:
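A sketch (the variables and the users table are illustrative):
IF v_user_id <> 0 THEN
    UPDATE users SET email = v_email WHERE user_id = v_user_id;
END IF;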
35.7.2.2. IF-THEN-ELSE
IF boolean-expression THEN
statements
ELSE
statements
END IF;
IF-THEN-ELSE statements add to IF-THEN by letting you specify an alternative set of statements that
should be executed if the condition evaluates to false.
Examples:
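One possible form (the variable names are illustrative):
IF v_count > 0 THEN
    result := 'found';
ELSE
    result := 'not found';
END IF;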
35.7.2.3. IF-THEN-ELSE IF
IF statements can be nested, as in the following example:
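A sketch with an IF nested inside the ELSE part (the variable names are illustrative):
IF demo_row.sex = 'm' THEN
    pretty_sex := 'man';
ELSE
    IF demo_row.sex = 'f' THEN
        pretty_sex := 'woman';
    END IF;
END IF;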
When you use this form, you are actually nesting an IF statement inside the ELSE part of an outer IF
statement. Thus you need one END IF statement for each nested IF and one for the parent IF-ELSE. This
is workable but grows tedious when there are many alternatives to be checked. Hence the next form.
35.7.2.4. IF-THEN-ELSIF-ELSE
IF boolean-expression THEN
statements
[ ELSIF boolean-expression THEN
statements
[ ELSIF boolean-expression THEN
statements
...]]
[ ELSE
statements ]
END IF;
IF-THEN-ELSIF-ELSE provides a more convenient method of checking many alternatives in one state-
ment. Formally it is equivalent to nested IF-THEN-ELSE-IF-THEN commands, but only one END IF is
needed.
Here is an example:
IF number = 0 THEN
result := 'zero';
ELSIF number > 0 THEN
result := 'positive';
ELSIF number < 0 THEN
result := 'negative';
ELSE
-- hmm, the only other possibility is that number is null
result := 'NULL';
END IF;
35.7.2.5. IF-THEN-ELSEIF-ELSE
ELSEIF is an alias for ELSIF.
35.7.3.1. LOOP
[<<label>>]
LOOP
statements
END LOOP;
LOOP defines an unconditional loop that is repeated indefinitely until terminated by an EXIT or RETURN
statement. The optional label can be used by EXIT statements in nested loops to specify which level of
nesting should be terminated.
35.7.3.2. EXIT
EXIT [ label ] [ WHEN expression ];
If no label is given, the innermost loop is terminated and the statement following END LOOP is executed
next. If label is given, it must be the label of the current or some outer level of nested loop or block. Then
the named loop or block is terminated and control continues with the statement after the loop’s/block’s
corresponding END.
If WHEN is present, loop exit occurs only if the specified condition is true, otherwise control passes to the
statement after EXIT.
EXIT can be used to cause early exit from all types of loops; it is not limited to use with unconditional
loops.
Examples:
LOOP
-- some computations
IF count > 0 THEN
EXIT; -- exit loop
END IF;
END LOOP;
LOOP
-- some computations
EXIT WHEN count > 0; -- same result as previous example
END LOOP;
BEGIN
-- some computations
IF stocks > 100000 THEN
EXIT; -- causes exit from the BEGIN block
END IF;
END;
35.7.3.3. WHILE
[<<label>>]
WHILE expression LOOP
statements
END LOOP;
The WHILE statement repeats a sequence of statements so long as the condition expression evaluates to
true. The condition is checked just before each entry to the loop body.
For example:
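A sketch (the variables are illustrative):
WHILE amount_owed > 0 AND gift_certificate_balance > 0 LOOP
    -- some computations here
END LOOP;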
[<<label>>]
FOR name IN [ REVERSE ] expression .. expression LOOP
statements
END LOOP;
This form of FOR creates a loop that iterates over a range of integer values. The variable name is auto-
matically defined as type integer and exists only inside the loop. The two expressions giving the lower
and upper bound of the range are evaluated once when entering the loop. The iteration step is normally 1,
but is -1 when REVERSE is specified.
Some examples of integer FOR loops:
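For instance (these loops only demonstrate the syntax):
FOR i IN 1..10 LOOP
    RAISE NOTICE 'i is %', i;
END LOOP;

FOR i IN REVERSE 10..1 LOOP
    -- some computations here
END LOOP;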
If the lower bound is greater than the upper bound (or less than, in the REVERSE case), the loop body is
not executed at all. No error is raised.
[<<label>>]
FOR record_or_row IN query LOOP
statements
END LOOP;
The record or row variable is successively assigned each row resulting from the query (which must be a
SELECT command) and the loop body is executed for each row. Here is an example:
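A sketch (assuming myrow is declared as a record variable, total as an integer, and emp is the table used in later trigger examples):
FOR myrow IN SELECT * FROM emp LOOP
    total := total + myrow.salary;
END LOOP;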
If the loop is terminated by an EXIT statement, the last assigned row value is still accessible after the loop.
The FOR-IN-EXECUTE statement is another way to iterate over rows:
[<<label>>]
FOR record_or_row IN EXECUTE text_expression LOOP
statements
END LOOP;
This is like the previous form, except that the source SELECT statement is specified as a string expression,
which is evaluated and replanned on each entry to the FOR loop. This allows the programmer to choose the
speed of a preplanned query or the flexibility of a dynamic query, just as with a plain EXECUTE statement.
Note: The PL/pgSQL parser presently distinguishes the two kinds of FOR loops (integer or query
result) by checking whether .. appears outside any parentheses between IN and LOOP. If .. is not
seen then the loop is presumed to be a loop over rows. Mistyping the .. is thus likely to lead to a
complaint along the lines of “loop variable of loop over rows must be a record or row variable”, rather
than the simple syntax error one might expect to get.
[ <<label>> ]
[ DECLARE
declarations ]
BEGIN
statements
EXCEPTION
WHEN condition [ OR condition ... ] THEN
handler_statements
[ WHEN condition [ OR condition ... ] THEN
handler_statements
... ]
END;
If no error occurs, this form of block simply executes all the statements, and then control passes
to the next statement after END. But if an error occurs within the statements, further processing
of the statements is abandoned, and control passes to the EXCEPTION list. The list is searched
for the first condition matching the error that occurred. If a match is found, the corresponding
handler_statements are executed, and then control passes to the next statement after END. If no
match is found, the error propagates out as though the EXCEPTION clause were not there at all: the error
can be caught by an enclosing block with EXCEPTION, or if there is none it aborts processing of the
function.
The condition names can be any of those shown in Appendix A. A category name matches any
error within its category. The special condition name OTHERS matches every error type except
QUERY_CANCELED. (It is possible, but often unwise, to trap QUERY_CANCELED by name.) Condition
names are not case-sensitive.
If a new error occurs within the selected handler_statements, it cannot be caught by this
EXCEPTION clause, but is propagated out. A surrounding EXCEPTION clause could catch it.
When an error is caught by an EXCEPTION clause, the local variables of the PL/pgSQL function remain as
they were when the error occurred, but all changes to persistent database state within the block are rolled
back. As an example, consider this fragment:
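A fragment consistent with the description that follows (assuming a table mytab(firstname, lastname) and integer variables x and y):
INSERT INTO mytab(firstname, lastname) VALUES ('Tom', 'Jones');
BEGIN
    UPDATE mytab SET firstname = 'Joe' WHERE lastname = 'Jones';
    x := x + 1;
    y := x / 0;
EXCEPTION
    WHEN division_by_zero THEN
        RAISE NOTICE 'caught division_by_zero';
        RETURN x;
END;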
When control reaches the assignment to y, it will fail with a division_by_zero error. This will be
caught by the EXCEPTION clause. The value returned in the RETURN statement will be the incremented
value of x, but the effects of the UPDATE command will have been rolled back. The INSERT command
preceding the block is not rolled back, however, so the end result is that the database contains Tom Jones
not Joe Jones.
Tip: A block containing an EXCEPTION clause is significantly more expensive to enter and exit than a
block without one. Therefore, don’t use EXCEPTION without need.
35.8. Cursors
Rather than executing a whole query at once, it is possible to set up a cursor that encapsulates the query,
and then read the query result a few rows at a time. One reason for doing this is to avoid memory overrun
when the result contains a large number of rows. (However, PL/pgSQL users do not normally need to
worry about that, since FOR loops automatically use a cursor internally to avoid memory problems.) A
more interesting usage is to return a reference to a cursor that a function has created, allowing the caller
to read the rows. This provides an efficient way to return large row sets from functions.
DECLARE
curs1 refcursor;
curs2 CURSOR FOR SELECT * FROM tenk1;
curs3 CURSOR (key integer) IS SELECT * FROM tenk1 WHERE unique1 = key;
All three of these variables have the data type refcursor, but the first may be used with any query, while
the second has a fully specified query already bound to it, and the last has a parameterized query bound
to it. (key will be replaced by an integer parameter value when the cursor is opened.) The variable curs1
is said to be unbound since it is not bound to any particular query.
The cursor variable is opened and given the specified query to execute. The cursor cannot be open already,
and it must have been declared as an unbound cursor (that is, as a simple refcursor variable). The
SELECT query is treated in the same way as other SELECT statements in PL/pgSQL: PL/pgSQL variable
names are substituted, and the query plan is cached for possible reuse.
An example:
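For instance (foo and mykey are illustrative):
OPEN curs1 FOR SELECT * FROM foo WHERE key = mykey;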
The cursor variable is opened and given the specified query to execute. The cursor cannot be open already,
and it must have been declared as an unbound cursor (that is, as a simple refcursor variable). The query
is specified as a string expression in the same way as in the EXECUTE command. As usual, this gives
flexibility so the query can vary from one run to the next.
An example:
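For instance (tabname is an illustrative text variable):
OPEN curs1 FOR EXECUTE 'SELECT * FROM ' || quote_ident(tabname);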
This form of OPEN is used to open a cursor variable whose query was bound to it when it was declared.
The cursor cannot be open already. A list of actual argument value expressions must appear if and only if
the cursor was declared to take arguments. These values will be substituted in the query. The query plan
for a bound cursor is always considered cacheable; there is no equivalent of EXECUTE in this case.
Examples:
OPEN curs2;
OPEN curs3(42);
35.8.3.1. FETCH
FETCH cursor INTO target;
FETCH retrieves the next row from the cursor into a target, which may be a row variable, a record variable,
or a comma-separated list of simple variables, just like SELECT INTO. As with SELECT INTO, the special
variable FOUND may be checked to see whether a row was obtained or not.
An example:
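For instance (rowvar is a row or record variable; foo, bar, and baz are simple variables):
FETCH curs1 INTO rowvar;
FETCH curs2 INTO foo, bar, baz;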
35.8.3.2. CLOSE
CLOSE cursor;
CLOSE closes the portal underlying an open cursor. This can be used to release resources earlier than end
of transaction, or to free up the cursor variable to be opened again.
An example:
CLOSE curs1;
The portal name underlying a cursor can be specified by the programmer or generated automatically: if the refcursor variable is null, OPEN automatically generates a name that does not conflict with any existing portal, and assigns it to the refcursor variable.
Note: A bound cursor variable is initialized to the string value representing its name, so that the portal
name is the same as the cursor variable name, unless the programmer overrides it by assignment
before opening the cursor. But an unbound cursor variable defaults to the null value initially, so it will
receive an automatically-generated unique name, unless overridden.
The following example shows one way a cursor name can be supplied by the caller:
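A sketch of the function being called below (the table and column names are illustrative):
CREATE FUNCTION reffunc(refcursor) RETURNS refcursor AS $$
BEGIN
    OPEN $1 FOR SELECT col FROM test;
    RETURN $1;
END;
$$ LANGUAGE plpgsql;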
BEGIN;
SELECT reffunc(’funccursor’);
FETCH ALL IN funccursor;
COMMIT;
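The next example relies on automatic cursor name generation; reffunc2 might be defined along these lines (again with illustrative table and column names):
CREATE FUNCTION reffunc2() RETURNS refcursor AS $$
DECLARE
    ref refcursor;
BEGIN
    OPEN ref FOR SELECT col FROM test;
    RETURN ref;
END;
$$ LANGUAGE plpgsql;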
BEGIN;
SELECT reffunc2();
reffunc2
--------------------
<unnamed cursor 1>
(1 row)
The following example shows one way to return multiple cursors from a single function:
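A sketch of such a function and of calling it (all names are illustrative):
CREATE FUNCTION myfunc(refcursor, refcursor) RETURNS SETOF refcursor AS $$
BEGIN
    OPEN $1 FOR SELECT * FROM table_1;
    RETURN NEXT $1;
    OPEN $2 FOR SELECT * FROM table_2;
    RETURN NEXT $2;
    RETURN;
END;
$$ LANGUAGE plpgsql;

-- caller, inside a transaction:
SELECT * FROM myfunc('a', 'b');
FETCH ALL FROM a;
FETCH ALL FROM b;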
RAISE level 'format' [, variable [, ...]];
Possible levels are DEBUG, LOG, INFO, NOTICE, WARNING, and EXCEPTION. EXCEPTION raises an er-
ror (which normally aborts the current transaction); the other levels only generate messages of different
priority levels. Whether messages of a particular priority are reported to the client, written to the server
log, or both is controlled by the log_min_messages and client_min_messages configuration variables. See
Section 16.4 for more information.
Inside the format string, % is replaced by the next optional argument’s string representation. Write %% to
emit a literal %. Note that the optional arguments must presently be simple variables, not expressions, and
the format must be a simple string literal.
In this example, the value of v_job_id will replace the % in the string:
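For instance (v_job_id being a simple variable in the function):
RAISE NOTICE 'Calling cs_create_job(%)', v_job_id;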
This example will abort the transaction with the given error message:
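For instance (user_id being a variable in scope):
RAISE EXCEPTION 'Nonexistent ID --> %', user_id;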
RAISE EXCEPTION presently always generates the same SQLSTATE code, P0001, no matter
what message it is invoked with. It is possible to trap this exception with EXCEPTION ... WHEN
RAISE_EXCEPTION THEN ... but there is no way to tell one RAISE from another.
When a PL/pgSQL function is called as a trigger, several special variables are created automatically in the
top-level block. They are:
NEW
Data type RECORD; variable holding the new database row for INSERT/UPDATE operations in row-
level triggers. This variable is NULL in statement-level triggers.
OLD
Data type RECORD; variable holding the old database row for UPDATE/DELETE operations in row-
level triggers. This variable is NULL in statement-level triggers.
TG_NAME
Data type name; variable that contains the name of the trigger actually fired.
TG_WHEN
Data type text; a string of either BEFORE or AFTER depending on the trigger’s definition.
TG_LEVEL
Data type text; a string of either ROW or STATEMENT depending on the trigger’s definition.
TG_OP
Data type text; a string of INSERT, UPDATE, or DELETE telling for which operation the trigger was
fired.
TG_RELID
Data type oid; the object ID of the table that caused the trigger invocation.
TG_RELNAME
Data type name; the name of the table that caused the trigger invocation.
TG_NARGS
Data type integer; the number of arguments given to the trigger procedure in the CREATE TRIGGER
statement.
TG_ARGV[]
Data type array of text; the arguments from the CREATE TRIGGER statement. The index counts
from 0. Invalid indices (less than 0 or greater than or equal to tg_nargs) result in a null value.
A trigger function must return either NULL or a record/row value having exactly the structure of the table
the trigger was fired for.
Row-level triggers fired BEFORE may return null to signal the trigger manager to skip the rest of the oper-
ation for this row (i.e., subsequent triggers are not fired, and the INSERT/UPDATE/DELETE does not occur
for this row). If a nonnull value is returned then the operation proceeds with that row value. Returning a
row value different from the original value of NEW alters the row that will be inserted or updated (but has
no direct effect in the DELETE case). To alter the row to be stored, it is possible to replace single values
directly in NEW and return the modified NEW, or to build a complete new record/row to return.
The return value of a BEFORE or AFTER statement-level trigger or an AFTER row-level trigger is always
ignored; it may as well be null. However, any of these types of triggers can still abort the entire operation
by raising an error.
Example 35-1 shows an example of a trigger procedure in PL/pgSQL.
This example trigger ensures that any time a row is inserted or updated in the table, the current user name
and time are stamped into the row. And it checks that an employee’s name is given and that the salary is a
positive value.
CREATE TABLE emp (
empname text,
salary integer,
last_date timestamp,
last_user text
);
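The trigger procedure itself might look like the following sketch (the function name emp_stamp is illustrative; the checks mirror the description above):
CREATE FUNCTION emp_stamp() RETURNS trigger AS $emp_stamp$
BEGIN
    -- Check that empname and salary are given
    IF NEW.empname IS NULL THEN
        RAISE EXCEPTION 'empname cannot be null';
    END IF;
    IF NEW.salary IS NULL OR NEW.salary <= 0 THEN
        RAISE EXCEPTION 'salary must be given and positive';
    END IF;
    -- Remember who changed the payroll when
    NEW.last_date := current_timestamp;
    NEW.last_user := current_user;
    RETURN NEW;
END;
$emp_stamp$ LANGUAGE plpgsql;

CREATE TRIGGER emp_stamp BEFORE INSERT OR UPDATE ON emp
    FOR EACH ROW EXECUTE PROCEDURE emp_stamp();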
Another way to log changes to a table involves creating a new table that holds a row for each insert,
update, or delete that occurs. This approach can be thought of as auditing changes to a table. Example
35-2 shows an example of an audit trigger procedure in PL/pgSQL.
This example trigger ensures that any insert, update or delete of a row in the emp table is recorded (i.e.,
audited) in the emp_audit table. The current time and user name are stamped into the row, together with
the type of operation performed on it.
CREATE TABLE emp (
empname text NOT NULL,
salary integer
);
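A sketch of the audit table and trigger procedure consistent with this description (the names and column choices are illustrative):
CREATE TABLE emp_audit (
    operation  text      NOT NULL,
    stamp      timestamp NOT NULL,
    userid     text      NOT NULL,
    empname    text      NOT NULL,
    salary     integer
);

CREATE FUNCTION process_emp_audit() RETURNS trigger AS $emp_audit$
BEGIN
    -- Record the operation, the time, and the user together with the row data.
    IF (TG_OP = 'DELETE') THEN
        INSERT INTO emp_audit SELECT 'DELETE', now(), user, OLD.*;
    ELSE
        INSERT INTO emp_audit SELECT TG_OP, now(), user, NEW.*;
    END IF;
    RETURN NULL;
END;
$emp_audit$ LANGUAGE plpgsql;

CREATE TRIGGER emp_audit
AFTER INSERT OR UPDATE OR DELETE ON emp
    FOR EACH ROW EXECUTE PROCEDURE process_emp_audit();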
One use of triggers is to maintain a summary table of another table. The resulting summary can be used
in place of the original table for certain queries — often with vastly reduced run times. This technique
is commonly used in Data Warehousing, where the tables of measured or observed data (called fact ta-
bles) can be extremely large. Example 35-3 shows an example of a trigger procedure in PL/pgSQL that
maintains a summary table for a fact table in a data warehouse.
The schema detailed here is partly based on the Grocery Store example from The Data Warehouse Toolkit
by Ralph Kimball.
--
-- Main tables - time dimension and sales fact.
--
CREATE TABLE time_dimension (
time_key integer NOT NULL,
day_of_week integer NOT NULL,
day_of_month integer NOT NULL,
month integer NOT NULL,
quarter integer NOT NULL,
year integer NOT NULL
);
CREATE UNIQUE INDEX time_dimension_key ON time_dimension(time_key);
--
-- Summary table - sales by time.
--
CREATE TABLE sales_summary_bytime (
time_key integer NOT NULL,
amount_sold numeric(15,2) NOT NULL,
units_sold numeric(12) NOT NULL,
amount_cost numeric(15,2) NOT NULL
);
CREATE UNIQUE INDEX sales_summary_bytime_key ON sales_summary_bytime(time_key);
--
-- Function and trigger to amend summarized column(s) on UPDATE, INSERT, DELETE.
--
CREATE OR REPLACE FUNCTION maint_sales_summary_bytime() RETURNS TRIGGER AS $maint_sales_summary_bytime$
DECLARE
delta_time_key integer;
delta_amount_sold numeric(15,2);
delta_units_sold numeric(12);
delta_amount_cost numeric(15,2);
BEGIN
-- Work out the increment/decrement amount(s).
IF (TG_OP = 'DELETE') THEN
    delta_time_key = OLD.time_key;
    delta_amount_sold = -1 * OLD.amount_sold;
    delta_units_sold = -1 * OLD.units_sold;
    delta_amount_cost = -1 * OLD.amount_cost;
ELSIF (TG_OP = 'UPDATE') THEN
    delta_time_key = OLD.time_key;
    delta_amount_sold = NEW.amount_sold - OLD.amount_sold;
    delta_units_sold = NEW.units_sold - OLD.units_sold;
    delta_amount_cost = NEW.amount_cost - OLD.amount_cost;
ELSIF (TG_OP = 'INSERT') THEN
    delta_time_key = NEW.time_key;
    delta_amount_sold = NEW.amount_sold;
    delta_units_sold = NEW.units_sold;
    delta_amount_cost = NEW.amount_cost;
END IF;

-- Update the existing summary row with the new values.
UPDATE sales_summary_bytime
    SET amount_sold = amount_sold + delta_amount_sold,
        units_sold = units_sold + delta_units_sold,
        amount_cost = amount_cost + delta_amount_cost
    WHERE time_key = delta_time_key;

-- There might have been no row with this time_key (e.g. new data!).
IF (NOT FOUND) THEN
BEGIN
INSERT INTO sales_summary_bytime (
time_key,
amount_sold,
units_sold,
amount_cost)
VALUES (
delta_time_key,
delta_amount_sold,
delta_units_sold,
delta_amount_cost
);
EXCEPTION
--
-- Catch race condition when two transactions are adding data
-- for a new time_key.
--
WHEN UNIQUE_VIOLATION THEN
UPDATE sales_summary_bytime
SET amount_sold = amount_sold + delta_amount_sold,
units_sold = units_sold + delta_units_sold,
amount_cost = amount_cost + delta_amount_cost
WHERE time_key = delta_time_key;
END;
END IF;
RETURN NULL;
END;
$maint_sales_summary_bytime$ LANGUAGE plpgsql;
• Oracle can have IN, OUT, and INOUT parameters passed to functions. INOUT, for example, means that
the parameter will receive a value and return another. PostgreSQL only has IN parameters, and hence
there is no specification of the parameter kind.
• The RETURN key word in the function prototype (not the function body) becomes RETURNS in Post-
greSQL. Also, IS becomes AS, and you need to add a LANGUAGE clause because PL/pgSQL is not the
only possible function language.
• In PostgreSQL, the function body is considered to be a string literal, so you need to use quote marks or
dollar quotes around it. This substitutes for the terminating / in the Oracle approach.
• The show errors command does not exist in PostgreSQL, and is not needed since errors are reported
automatically.
Example 35-5 shows how to port a function that creates another function and how to handle the ensuing
quoting problems.
Example 35-5. Porting a Function that Creates Another Function from PL/SQL to PL/pgSQL
The following procedure grabs rows from a SELECT statement and builds a large function with the results
in IF statements, for the sake of efficiency. Notice particularly the differences in the cursor and the FOR
loop.
This is the Oracle version:
CREATE OR REPLACE PROCEDURE cs_update_referrer_type_proc IS
CURSOR referrer_keys IS
SELECT * FROM cs_referrer_keys
ORDER BY try_order;
func_cmd VARCHAR(4000);
BEGIN
func_cmd := 'CREATE OR REPLACE FUNCTION cs_find_referrer_type(v_host IN VARCHAR,
v_domain IN VARCHAR, v_url IN VARCHAR) RETURN VARCHAR IS BEGIN';
func_cmd :=
'CREATE OR REPLACE FUNCTION cs_find_referrer_type(v_host varchar,
v_domain varchar,
v_url varchar)
RETURNS varchar AS '
|| quote_literal(func_body)
|| ' LANGUAGE plpgsql;' ;
EXECUTE func_cmd;
RETURN;
END;
$func$ LANGUAGE plpgsql;
Notice how the body of the function is built separately and passed through quote_literal to
double any quote marks in it. This technique is needed because we cannot safely use dollar quoting
for defining the new function: we do not know for sure what strings will be interpolated from
the referrer_key.key_string field. (We are assuming here that referrer_key.kind can
be trusted to always be host, domain, or url, but referrer_key.key_string might be
anything, in particular it might contain dollar signs.) This function is actually an improvement on the
Oracle original, because it will not generate broken code when referrer_key.key_string or
referrer_key.referrer_type contain quote marks.
Example 35-6 shows how to port a function with OUT parameters and string manipulation. PostgreSQL
does not have an instr function, but you can work around it using a combination of other functions. In
Section 35.11.3 there is a PL/pgSQL implementation of instr that you can use to make your porting
easier.
Example 35-6. Porting a Procedure With String Manipulation and OUT Parameters from PL/SQL
to PL/pgSQL
The following Oracle PL/SQL procedure is used to parse a URL and return several elements (host, path,
and query). In PostgreSQL, functions can return only one value. One way to work around this is to make
the return value a composite type (row type).
This is the Oracle version:
CREATE OR REPLACE PROCEDURE cs_parse_url(
v_url IN VARCHAR,
v_host OUT VARCHAR, -- This will be passed back
v_path OUT VARCHAR, -- This one too
v_query OUT VARCHAR) -- And this one
IS
a_pos1 INTEGER;
a_pos2 INTEGER;
BEGIN
v_host := NULL;
v_path := NULL;
v_query := NULL;
a_pos1 := instr(v_url, '//');
IF a_pos1 = 0 THEN
RETURN;
END IF;
a_pos2 := instr(v_url, '/', a_pos1 + 2);
IF a_pos2 = 0 THEN
v_host := substr(v_url, a_pos1 + 2);
v_path := '/';
RETURN;
END IF;
IF a_pos1 = 0 THEN
v_path := substr(v_url, a_pos2);
RETURN;
END IF;
IF a_pos1 = 0 THEN
RETURN res;
END IF;
a_pos2 := instr(v_url, '/', a_pos1 + 2);
IF a_pos2 = 0 THEN
res.v_host := substr(v_url, a_pos1 + 2);
res.v_path := '/';
RETURN res;
END IF;
IF a_pos1 = 0 THEN
res.v_path := substr(v_url, a_pos2);
RETURN res;
END IF;
Example 35-7 shows how to port a procedure that uses numerous features that are specific to Oracle.
BEGIN
INSERT INTO cs_jobs (job_id, start_stamp) VALUES (v_job_id, sysdate);
EXCEPTION
WHEN dup_val_on_index THEN NULL; -- don’t worry if it already exists
END;
COMMIT;
END;
/
show errors
Procedures like this can easily be converted into PostgreSQL functions returning void. This procedure in
particular is interesting because it can teach us some things:
➋ If you do a LOCK TABLE in PL/pgSQL, the lock will not be released until the calling transaction is
finished.
➌ You cannot issue COMMIT in a PL/pgSQL function. The function is running within some outer trans-
action and so COMMIT would imply terminating the function’s execution. However, in this particular
case it is not necessary anyway, because the lock obtained by the LOCK TABLE will be released when
we raise an error.
BEGIN
INSERT INTO cs_jobs (job_id, start_stamp) VALUES (v_job_id, now());
EXCEPTION
WHEN unique_violation THEN ➋
-- don’t worry if it already exists
END;
RETURN;
END;
$$ LANGUAGE plpgsql;
BEGIN
SAVEPOINT s1;
... code here ...
EXCEPTION
WHEN ... THEN
ROLLBACK TO s1;
... code here ...
WHEN ... THEN
ROLLBACK TO s1;
... code here ...
END;
If you are translating an Oracle procedure that uses SAVEPOINT and ROLLBACK TO in this style, your task
is easy: just omit the SAVEPOINT and ROLLBACK TO. If you have a procedure that uses SAVEPOINT and
ROLLBACK TO in a different way then some actual thought will be required.
35.11.2.2. EXECUTE
The PL/pgSQL version of EXECUTE works similarly to the PL/SQL version, but you have to remember to use quote_literal(text) and quote_ident(text) as described in Section 35.6.5. Constructs of the type EXECUTE 'SELECT * FROM $1'; will not work unless you use these functions.
35.11.3. Appendix
This section contains the code for a set of Oracle-compatible instr functions that you can use to simplify
your porting efforts.
--
-- instr functions that mimic Oracle’s counterpart
-- Syntax: instr(string1, string2, [n], [m]) where [] denotes optional parameters.
--
-- Searches string1 beginning at the nth character for the mth occurrence
-- of string2. If n is negative, search backwards. If m is not passed,
-- assume 1 (search starts at first character).
--
IF pos = 0 THEN
RETURN 0;
ELSE
RETURN pos + beg_index - 1;
END IF;
ELSE
ss_length := char_length(string_to_search);
length := char_length(string);
beg := length + beg_index - ss_length + 2;
END IF;
beg := beg - 1;
END LOOP;
RETURN 0;
END IF;
END;
$$ LANGUAGE plpgsql STRICT IMMUTABLE;
IF i = 1 THEN
beg := beg + pos - 1;
ELSE
beg := beg + pos;
END IF;
IF pos = 0 THEN
RETURN 0;
ELSE
RETURN beg;
END IF;
ELSE
ss_length := char_length(string_to_search);
length := char_length(string);
beg := length + beg_index - ss_length + 2;
beg := beg - 1;
END LOOP;
RETURN 0;
END IF;
END;
$$ LANGUAGE plpgsql STRICT IMMUTABLE;
Chapter 36. PL/Tcl - Tcl Procedural Language
PL/Tcl is a loadable procedural language for the PostgreSQL database system that enables the Tcl language (http://www.tcl.tk/) to be used to write functions and trigger procedures.
36.1. Overview
PL/Tcl offers most of the capabilities a function writer has in the C language, except for some restrictions.
The good restriction is that everything is executed in a safe Tcl interpreter. In addition to the limited
command set of safe Tcl, only a few commands are available to access the database via SPI and to raise
messages via elog(). There is no way to access internals of the database server or to gain OS-level access
under the permissions of the PostgreSQL server process, as a C function can do. Thus, any unprivileged
database user may be permitted to use this language.
The other restriction is an implementation one: Tcl functions cannot be used to create input/output functions for new data types.
Sometimes it is desirable to write Tcl functions that are not restricted to safe Tcl. For example, one might
want a Tcl function that sends email. To handle these cases, there is a variant of PL/Tcl called PL/TclU
(for untrusted Tcl). This is the exact same language except that a full Tcl interpreter is used. If PL/TclU
is used, it must be installed as an untrusted procedural language so that only database superusers can
create functions in it. The writer of a PL/TclU function must take care that the function cannot be used to
do anything unwanted, since it will be able to do anything that could be done by a user logged in as the
database administrator.
The shared object for the PL/Tcl and PL/TclU call handlers is automatically built and installed in the Post-
greSQL library directory if Tcl support is specified in the configuration step of the installation procedure.
To install PL/Tcl and/or PL/TclU in a particular database, use the createlang program, for example
createlang pltcl dbname or createlang pltclu dbname.
A PL/Tcl function is created with the standard CREATE FUNCTION syntax, specifying LANGUAGE pltcl; PL/TclU is the same, except that the language has to be specified as pltclu.
The body of the function is simply a piece of Tcl script. When the function is called, the argument values
are passed as variables $1 ... $n to the Tcl script. The result is returned from the Tcl code in the usual
way, with a return statement.
For example, a function returning the greater of two integer values could be defined as:
CREATE FUNCTION tcl_max(integer, integer) RETURNS integer AS $$
    if {$1 > $2} {return $1}
    return $2
$$ LANGUAGE pltcl STRICT;
Note the clause STRICT, which saves us from having to think about null input values: if a null value is
passed, the function will not be called at all, but will just return a null result automatically.
In a nonstrict function, if the actual value of an argument is null, the corresponding $n variable will be
set to an empty string. To detect whether a particular argument is null, use the function argisnull. For
example, suppose that we wanted tcl_max with one null and one nonnull argument to return the nonnull
argument, rather than null:
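A nonstrict definition along these lines would do it:
CREATE FUNCTION tcl_max(integer, integer) RETURNS integer AS $$
    if {[argisnull 1]} {
        if {[argisnull 2]} { return_null }
        return $2
    }
    if {[argisnull 2]} { return $1 }
    if {$1 > $2} {return $1}
    return $2
$$ LANGUAGE pltcl;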
As shown above, to return a null value from a PL/Tcl function, execute return_null. This can be done
whether the function is strict or not.
Composite-type arguments are passed to the function as Tcl arrays. The element names of the array are
the attribute names of the composite type. If an attribute in the passed row has the null value, it will not
appear in the array. Here is an example:
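For instance, assuming a composite type (or table) named employee with salary and age attributes:
CREATE FUNCTION overpaid(employee) RETURNS bool AS $$
    if {200000.0 < $1(salary)} {
        return "t"
    }
    if {$1(age) < 30 && 100000.0 < $1(salary)} {
        return "t"
    }
    return "f"
$$ LANGUAGE pltcl;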
There is currently no support for returning a composite-type result value, nor for returning sets.
PL/Tcl does not currently have full support for domain types: it treats a domain the same as the underlying
scalar type. This means that constraints associated with the domain will not be enforced. This is not an
issue for function arguments, but it is a hazard if you declare a PL/Tcl function as returning a domain
type.
spi_exec ?-count n? ?-array name? command ?loop-body?
Executes an SQL command given as a string. An error in the command causes an error to be raised.
Otherwise, the return value of spi_exec is the number of rows processed (selected, inserted, up-
dated, or deleted) by the command, or zero if the command is a utility statement. In addition, if the
command is a SELECT statement, the values of the selected columns are placed in Tcl variables as
described below.
The optional -count value tells spi_exec the maximum number of rows to process in the com-
mand. The effect of this is comparable to setting up a query as a cursor and then saying FETCH
n.
If the command is a SELECT statement, the values of the result columns are placed into Tcl variables
named after the columns. If the -array option is given, the column values are instead stored into the
named associative array, with the column names used as array indexes.
If the command is a SELECT statement and no loop-body script is given, then only the first row
of results is stored into Tcl variables; remaining rows, if any, are ignored. No storing occurs if the
query returns no rows. (This case can be detected by checking the result of spi_exec.) For example,
spi_exec "SELECT count(*) AS cnt FROM pg_proc"
will set the Tcl variable $cnt to the number of rows in the pg_proc system catalog.
If the optional loop-body argument is given, it is a piece of Tcl script that is executed once for
each row in the query result. (loop-body is ignored if the given command is not a SELECT.) The
values of the current row’s columns are stored into Tcl variables before each iteration. For example,
spi_exec -array C "SELECT * FROM pg_class" {
elog DEBUG "have table $C(relname)"
}
will print a log message for every row of pg_class. This feature works similarly to other Tcl looping
constructs; in particular continue and break work in the usual way inside the loop body.
If a column of a query result is null, the target variable for it is “unset” rather than being set.
spi_prepare query typelist
Prepares and saves a query plan for later execution. The saved plan will be retained for the life of the
current session.
The query may use parameters, that is, placeholders for values to be supplied whenever the plan
is actually executed. In the query string, refer to parameters by the symbols $1 ... $n. If the query
uses parameters, the names of the parameter types must be given as a Tcl list. (Write an empty list
for typelist if no parameters are used.) Presently, the parameter types must be identified by the
internal type names shown in the system table pg_type; for example int4 not integer.
The return value from spi_prepare is a query ID to be used in subsequent calls to spi_execp.
See spi_execp for an example.
spi_execp ?-count n? ?-array name? ?-nulls string? queryid ?value-list?
?loop-body?
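Executes a plan prepared by spi_prepare, supplying actual parameter values in value-list; the other options work as for spi_exec. As a sketch, a function that prepares a plan once per session and then reuses it might look like this (the table t1 and its column num are hypothetical):
CREATE FUNCTION t1_count(integer, integer) RETURNS integer AS $$
    if {![ info exists GD(plan) ]} {
        # prepare the saved plan on the first call
        set GD(plan) [ spi_prepare \
                "SELECT count(*) AS cnt FROM t1 WHERE num >= \$1 AND num <= \$2" \
                [ list int4 int4 ] ]
    }
    spi_execp -count 1 $GD(plan) [ list $1 $2 ]
    return $cnt
$$ LANGUAGE pltcl;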
We need backslashes inside the query string given to spi_prepare to ensure that the $n markers
will be passed through to spi_prepare as-is, and not replaced by Tcl variable substitution.
spi_lastoid
Returns the OID of the row inserted by the last spi_exec or spi_execp, if the command was a
single-row INSERT. (If not, you get zero.)
quote string
Doubles all occurrences of single quote and backslash characters in the given string. This may be
used to safely quote strings that are to be inserted into SQL commands given to spi_exec or
spi_prepare. For example, think about an SQL command string like
"SELECT ’$val’ AS ret"
where the Tcl variable val actually contains doesn’t. This would result in the final command string
SELECT ’doesn’t’ AS ret
which would cause a parse error during spi_exec or spi_prepare. To work properly, the submit-
ted command should contain
SELECT ’doesn’’t’ AS ret
One advantage of spi_execp is that you don’t have to quote parameter values like this, since the
parameters are never parsed as part of an SQL command string.
elog level msg
Emits a log or error message. Possible levels are DEBUG, LOG, INFO, NOTICE, WARNING, ERROR, and
FATAL. ERROR raises an error condition; if this is not trapped by the surrounding Tcl code, the error
propagates out to the calling query, causing the current transaction or subtransaction to be aborted.
This is effectively the same as the Tcl error command. FATAL aborts the transaction and causes
the current session to shut down. (There is probably no good reason to use this error level in PL/Tcl
functions, but it’s provided for completeness.) The other levels only generate messages of different
priority levels. Whether messages of a particular priority are reported to the client, written to the
server log, or both is controlled by the log_min_messages and client_min_messages configuration
variables. See Section 16.4 for more information.
$TG_name
The name of the trigger from the CREATE TRIGGER statement.
$TG_relatts
A Tcl list of the table column names, prefixed with an empty list element. So looking up a column
name in the list with Tcl’s lsearch command returns the element’s number starting with 1 for the
first column, the same way the columns are customarily numbered in PostgreSQL. (Empty list ele-
ments also appear in the positions of columns that have been dropped, so that the attribute numbering
is correct for columns to their right.)
$TG_when
The string BEFORE or AFTER depending on the type of trigger call.
$TG_op
The string INSERT, UPDATE, or DELETE depending on the type of trigger call.
$NEW
An associative array containing the values of the new table row for INSERT or UPDATE actions, or
empty for DELETE. The array is indexed by column name. Columns that are null will not appear in
the array.
$OLD
An associative array containing the values of the old table row for UPDATE or DELETE actions, or
empty for INSERT. The array is indexed by column name. Columns that are null will not appear in
the array.
$args
A Tcl list of the arguments to the procedure as given in the CREATE TRIGGER statement. These
arguments are also accessible as $1 ... $n in the procedure body.
The return value from a trigger procedure can be one of the strings OK or SKIP, or a list as returned by the
array get Tcl command. If the return value is OK, the operation (INSERT/UPDATE/DELETE) that fired
the trigger will proceed normally. SKIP tells the trigger manager to silently suppress the operation for this
row. If a list is returned, it tells PL/Tcl to return a modified row to the trigger manager that will be inserted
instead of the one given in $NEW. (This works for INSERT and UPDATE only.) Needless to say, all this
is only meaningful when the trigger is BEFORE and FOR EACH ROW; otherwise the return value is ignored.
Here’s a little example trigger procedure that forces an integer value in a table to keep track of the number
of updates that are performed on the row. For new rows inserted, the value is initialized to 0 and then
incremented on every update operation.
CREATE FUNCTION trigfunc_modcount() RETURNS trigger AS $$
    switch $TG_op {
        INSERT {
            # initialize the counter column to 0
            set NEW($1) 0
        }
        UPDATE {
            # increment the counter on every update
            incr NEW($1)
        }
        default {
            return OK
        }
    }
    return [array get NEW]
$$ LANGUAGE pltcl;
Notice that the trigger procedure itself does not know the column name; that’s supplied from the trigger
arguments. This lets the trigger procedure be reused with different tables.
Chapter 37. PL/Perl - Perl Procedural Language
PL/Perl is a loadable procedural language that enables you to write PostgreSQL functions in the Perl
programming language (http://www.perl.com).
To install PL/Perl in a particular database, use createlang plperl dbname.
Tip: If a language is installed into template1, all subsequently created databases will have the lan-
guage installed automatically.
Note: Users of source packages must specially enable the build of PL/Perl during the installation
process. (Refer to Section 14.1 for more information.) Users of binary packages might find PL/Perl in
a separate subpackage.
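The body of a PL/Perl function is ordinary Perl code; arguments are passed in @_ and a result is given with return. As a sketch, a function returning the greater of two integer values (the perl_max function discussed below) could be written as:
CREATE FUNCTION perl_max (integer, integer) RETURNS integer AS $$
    if ($_[0] > $_[1]) { return $_[0]; }
    return $_[1];
$$ LANGUAGE plperl;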
If an SQL null value is passed to a function, the argument value will appear as “undefined” in Perl.
The above function definition will not behave very nicely with null inputs (in fact, it will act as though
they are zeroes). We could add STRICT to the function definition to make PostgreSQL do something
more reasonable: if a null value is passed, the function will not be called at all, but will just return a
null result automatically. Alternatively, we could check for undefined inputs in the function body. For
example, suppose that we wanted perl_max with one null and one nonnull argument to return the nonnull
argument, rather than a null value:
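A sketch of such a definition, checking each argument with defined:
CREATE FUNCTION perl_max (integer, integer) RETURNS integer AS $$
    my ($x, $y) = @_;
    if (not defined $x) {
        return undef if not defined $y;
        return $y;
    }
    return $x if not defined $y;
    return $x if $x > $y;
    return $y;
$$ LANGUAGE plperl;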
As shown above, to return an SQL null value from a PL/Perl function, return an undefined value. This can
be done whether the function is strict or not.
Composite-type arguments are passed to the function as references to hashes. The keys of the hash are the
attribute names of the composite type. Here is an example:
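A sketch (the employee type and its columns are hypothetical):
CREATE TABLE employee (
    name text,
    basesalary integer,
    bonus integer
);

CREATE FUNCTION empcomp(employee) RETURNS integer AS $$
    my ($emp) = @_;
    return $emp->{basesalary} + $emp->{bonus};
$$ LANGUAGE plperl;

SELECT name, empcomp(employee) FROM employee;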
A PL/Perl function can return a composite-type result using the same approach: return a reference to a
hash that has the required attributes. For example,
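the following sketch (the composite type testrowperl is hypothetical) returns one row of that type:
CREATE TYPE testrowperl AS (f1 integer, f2 text, f3 text);

CREATE FUNCTION perl_row() RETURNS testrowperl AS $$
    return {f1 => 1, f2 => 'Hello', f3 => 'World'};
$$ LANGUAGE plperl;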
Any columns in the declared result data type that are not present in the hash will be returned as NULLs.
PL/Perl functions can also return sets of either scalar or composite types. To do this, return a reference to
an array that contains either scalars or references to hashes, respectively. Here are some simple examples:
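As a sketch (reusing the hypothetical testrowperl type from above):
CREATE FUNCTION perl_set_int(integer) RETURNS SETOF integer AS $$
    return [0..$_[0]];
$$ LANGUAGE plperl;

CREATE FUNCTION perl_set() RETURNS SETOF testrowperl AS $$
    return [
        { f1 => 1, f2 => 'Hello', f3 => 'World' },
        { f1 => 2, f2 => 'Hello', f3 => 'PostgreSQL' },
        { f1 => 3, f2 => 'Hello', f3 => 'PL/Perl' }
    ];
$$ LANGUAGE plperl;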
Note that when you do this, Perl will have to build the entire array in memory; therefore the technique
does not scale to very large result sets.
PL/Perl does not currently have full support for domain types: it treats a domain the same as the underlying
scalar type. This means that constraints associated with the domain will not be enforced. This is not an
issue for function arguments, but it is a hazard if you declare a PL/Perl function as returning a domain
type.
spi_exec_query(query [, max-rows])
spi_exec_query(command)
Executes an SQL command. Here is an example of a query (SELECT command) with the optional
maximum number of rows:
$rv = spi_exec_query(’SELECT * FROM my_table’, 5);
This returns up to 5 rows from the table my_table. If my_table has a column my_column, you can
get that value from row $i of the result like this:
$foo = $rv->{rows}[$i]->{my_column};
The total number of rows returned from a SELECT query can be accessed like this:
$nrows = $rv->{processed}
$rv = spi_exec_query($query);
You can then access the command status (e.g., SPI_OK_INSERT) like this:
$res = $rv->{status};
elog(level, msg)
Emit a log or error message. Possible levels are DEBUG, LOG, INFO, NOTICE, WARNING, and ERROR.
ERROR raises an error condition; if this is not trapped by the surrounding Perl code, the error propa-
gates out to the calling query, causing the current transaction or subtransaction to be aborted. This is
effectively the same as the Perl die command. The other levels only generate messages of different
priority levels. Whether messages of a particular priority are reported to the client, written to the
server log, or both is controlled by the log_min_messages and client_min_messages configuration
variables. See Section 16.4 for more information.
(You could have replaced the above with the one-liner return $_SHARED{myquote}->($_[0]); at
the expense of readability.)
The creation of the function will succeed, but executing it will not.
Sometimes it is desirable to write Perl functions that are not restricted. For example, one might want a Perl
function that sends mail. To handle these cases, PL/Perl can also be installed as an “untrusted” language
(usually called PL/PerlU). In this case the full Perl language is available. If the createlang program is
used to install the language, the language name plperlu will select the untrusted PL/Perl variant.
The writer of a PL/PerlU function must take care that the function cannot be used to do anything unwanted,
since it will be able to do anything that could be done by a user logged in as the database administrator.
Note that the database system allows only database superusers to create functions in untrusted languages.
If the above function was created by a superuser using the language plperlu, execution would succeed.
$_TD->{new}{foo}
NEW value of column foo
$_TD->{old}{foo}
OLD value of column foo
$_TD->{name}
Name of the trigger being called
$_TD->{event}
Trigger event: INSERT, UPDATE, DELETE, or UNKNOWN
A trigger procedure can return one of the following values:
return;
Execute the statement
"SKIP"
Don’t execute the statement
"MODIFY"
Indicates that the NEW row was modified by the trigger function
• PL/Perl functions cannot call each other directly (because they are anonymous subroutines inside Perl).
• SPI is not yet fully implemented.
• In the current implementation, if you are fetching or returning very large data sets, you should be aware
that these will all go into memory.
Chapter 38. PL/Python - Python Procedural Language
The PL/Python procedural language allows PostgreSQL functions to be written in the Python language
(http://www.python.org).
To install PL/Python in a particular database, use createlang plpythonu dbname.
Tip: If a language is installed into template1, all subsequently created databases will have the lan-
guage installed automatically.
As of PostgreSQL 7.4, PL/Python is only available as an “untrusted” language (meaning it does not
offer any way of restricting what users can do in it). It has therefore been renamed to plpythonu. The
trusted variant plpython may become available again in future, if a new secure execution mechanism is
developed in Python.
Note: Users of source packages must specially enable the build of PL/Python during the installation
process. (Refer to the installation instructions for more information.) Users of binary packages might
find PL/Python in a separate subpackage.
The Python code that is given as the body of the function definition gets transformed into a Python func-
tion. For example, the above results in
def __plpython_procedure_myfunc_23456():
return args[0]
The global dictionary SD is available to store data between function calls. This variable is private static
data. The global dictionary GD is public data, available to all Python functions within a session. Use with
care.
Each function gets its own execution environment in the Python interpreter, so that global data and func-
tion arguments from myfunc are not available to myfunc2. The exception is the data in the GD dictionary,
as mentioned above.
If TD["when"] is BEFORE, you may return None or "OK" from the Python function to indicate the row is
unmodified, "SKIP" to abort the event, or "MODIFY" to indicate you’ve modified the row.
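Database access from PL/Python goes through the automatically imported plpy module. Its plpy.execute function runs a query string, optionally limited to a maximum number of rows; a sketch (the table my_table is hypothetical):
rv = plpy.execute("SELECT * FROM my_table", 5)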
returns up to 5 rows from my_table. If my_table has a column my_column, it would be accessed as
foo = rv[i]["my_column"]
The second function, plpy.prepare, prepares the execution plan for a query. It is called with a query
string and a list of parameter types, if you have parameter references in the query. For example:
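A sketch (the table my_users and its columns are hypothetical):
plan = plpy.prepare("SELECT last_name FROM my_users WHERE first_name = $1", [ "text" ])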
text is the type of the variable you will be passing for $1. After preparing a statement, you use the
function plpy.execute to run it:
rv = plpy.execute(plan, [ "name" ], 5)
Chapter 39. Server Programming Interface
The Server Programming Interface (SPI) gives writers of user-defined C functions the ability to run SQL
commands inside their functions. SPI is a set of interface functions to simplify access to the parser, planner,
optimizer, and executor. SPI also does some memory management.
Note: The available procedural languages provide various means to execute SQL commands from
procedures. Most of these facilities are based on SPI, so this documentation might be of use for users
of those languages as well.
To avoid misunderstanding we’ll use the term “function” when we speak of SPI interface functions and
“procedure” for a user-defined C-function that is using SPI.
Note that if a command invoked via SPI fails, then control will not be returned to your procedure. Rather,
the transaction or subtransaction in which your procedure executes will be rolled back. (This may seem
surprising given that the SPI functions mostly have documented error-return conventions. Those conven-
tions only apply for errors detected within the SPI functions themselves, however.) It is possible to recover
control after an error by establishing your own subtransaction surrounding SPI calls that might fail. This
is not currently documented because the mechanisms required are still in flux.
SPI functions return a nonnegative result on success (either via a returned integer value or in the global
variable SPI_result, as described below). On error, a negative result or NULL will be returned.
Source code files that use SPI must include the header file executor/spi.h.
SPI_connect
Name
SPI_connect — connect a procedure to the SPI manager
Synopsis
int SPI_connect(void)
Description
SPI_connect opens a connection from a procedure invocation to the SPI manager. You must call this
function if you want to execute commands through SPI. Some utility SPI functions may be called from
unconnected procedures.
If your procedure is already connected, SPI_connect will return the error code
SPI_ERROR_CONNECT. This could happen if a procedure that has called SPI_connect directly calls
another procedure that calls SPI_connect. While recursive calls to the SPI manager are permitted when
an SQL command called through SPI invokes another function that uses SPI, directly nested calls to
SPI_connect and SPI_finish are forbidden. (But see SPI_push and SPI_pop.)
Return Value
SPI_OK_CONNECT
on success
SPI_ERROR_CONNECT
on error
SPI_finish
Name
SPI_finish — disconnect a procedure from the SPI manager
Synopsis
int SPI_finish(void)
Description
SPI_finish closes an existing connection to the SPI manager. You must call this function after complet-
ing the SPI operations needed during your procedure’s current invocation. You do not need to worry about
making this happen, however, if you abort the transaction via elog(ERROR). In that case SPI will clean
itself up automatically.
If SPI_finish is called without having a valid connection, it will return SPI_ERROR_UNCONNECTED.
There is no fundamental problem with this; it means that the SPI manager has nothing to do.
Return Value
SPI_OK_FINISH
if properly disconnected
SPI_ERROR_UNCONNECTED
SPI_push
Name
SPI_push — push SPI stack to allow recursive SPI usage
Synopsis
void SPI_push(void)
Description
SPI_push should be called before executing another procedure that might itself wish to use SPI. After
SPI_push, SPI is no longer in a “connected” state, and SPI function calls will be rejected unless a fresh
SPI_connect is done. This ensures a clean separation between your procedure’s SPI state and that of
another procedure you call. After the other procedure returns, call SPI_pop to restore access to your own
SPI state.
Note that SPI_execute and related functions automatically do the equivalent of SPI_push before pass-
ing control back to the SQL execution engine, so it is not necessary for you to worry about this when using
those functions. Only when you are directly calling arbitrary code that might contain SPI_connect calls
do you need to issue SPI_push and SPI_pop.
SPI_pop
Name
SPI_pop — pop SPI stack to return from recursive SPI usage
Synopsis
void SPI_pop(void)
Description
SPI_pop pops the previous environment from the SPI call stack. See SPI_push.
SPI_execute
Name
SPI_execute — execute a command
Synopsis
int SPI_execute(const char * command, bool read_only, int count)
Description
SPI_execute executes the specified SQL command for count rows. If read_only is true, the com-
mand must be read-only, and execution overhead is somewhat reduced.
This function may only be called from a connected procedure.
If count is zero then the command is executed for all rows that it applies to. If count is greater than 0,
then the number of rows for which the command will be executed is restricted (much like a LIMIT clause).
For example,
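a call along the lines of the following sketch (the tables foo and bar are hypothetical)
SPI_execute("INSERT INTO foo SELECT * FROM bar", false, 5);
will allow at most 5 rows to be inserted into the table. The number of rows actually processed is available afterwards in the global variable SPI_processed, and for a successful SELECT the result rows can be accessed through the global pointer SPITupleTable *SPI_tuptable, a structure defined as: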
typedef struct
{
MemoryContext tuptabcxt; /* memory context of result table */
uint32 alloced; /* number of alloced vals */
uint32 free; /* number of free vals */
TupleDesc tupdesc; /* row descriptor */
HeapTuple *vals; /* rows */
} SPITupleTable;
vals is an array of pointers to rows. (The number of valid entries is given by SPI_processed.) tupdesc
is a row descriptor which you may pass to SPI functions dealing with rows. tuptabcxt, alloced, and
free are internal fields not intended for use by SPI callers.
SPI_finish frees all SPITupleTables allocated during the current procedure. You can free a particular
result table earlier, if you are done with it, by calling SPI_freetuptable.
Arguments
int count
Return Value
If the execution of the command was successful then one of the following (nonnegative) values will be
returned:
SPI_OK_SELECT
SPI_OK_UPDATE
SPI_ERROR_ARGUMENT
Notes
The functions SPI_execute, SPI_exec, SPI_execute_plan, and SPI_execp change both
SPI_processed and SPI_tuptable (just the pointer, not the contents of the structure). Save these two
global variables into local procedure variables if you need to access the result table of SPI_execute or
a related function across later calls.
SPI_exec
Name
SPI_exec — execute a read/write command
Synopsis
int SPI_exec(const char * command, int count)
Description
SPI_exec is the same as SPI_execute, with the latter’s read_only parameter always taken as false.
Arguments
Return Value
See SPI_execute.
SPI_prepare
Name
SPI_prepare — prepare a plan for a command, without executing it yet
Synopsis
void * SPI_prepare(const char * command, int nargs, Oid * argtypes)
Description
SPI_prepare creates and returns an execution plan for the specified command but doesn’t execute the
command. This function should only be called from a connected procedure.
When the same or a similar command is to be executed repeatedly, it may be advantageous to perform the
planning only once. SPI_prepare converts a command string into an execution plan that can be executed
repeatedly using SPI_execute_plan.
A prepared command can be generalized by writing parameters ($1, $2, etc.) in place of what would
be constants in a normal command. The actual values of the parameters are then specified when
SPI_execute_plan is called. This allows the prepared command to be used over a wider range of
situations than would be possible without parameters.
The plan returned by SPI_prepare can be used only in the current invocation of the procedure, since
SPI_finish frees memory allocated for a plan. But a plan can be saved for longer using the function
SPI_saveplan.
Arguments
const char * command
command string
int nargs
number of input parameters ($1, $2, etc.)
Oid * argtypes
pointer to an array containing the OIDs of the data types of the parameters
Return Value
SPI_prepare returns a non-null pointer to an execution plan. On error, NULL will be returned, and
SPI_result will be set to one of the same error codes used by SPI_execute, except that it is set to
SPI_ERROR_ARGUMENT if command is NULL, or if nargs is less than 0, or if nargs is greater than 0 and
argtypes is NULL.
Notes
There is a disadvantage to using parameters: since the planner does not know the values that will be sup-
plied for the parameters, it may make worse planning choices than it would make for a normal command
with all constants visible.
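As an illustrative sketch (not taken from the manual) of how SPI_prepare and SPI_execute_plan fit together, assume a table mytab with an integer column id:
#include "postgres.h"
#include "executor/spi.h"
#include "catalog/pg_type.h"

/* Count the rows of the hypothetical table mytab that match a key. */
static int
count_matching(int32 key)
{
    Oid    argtypes[1] = { INT4OID };
    Datum  values[1];
    void  *plan;
    int    proc = -1;

    SPI_connect();
    plan = SPI_prepare("SELECT * FROM mytab WHERE id = $1", 1, argtypes);
    if (plan != NULL)
    {
        values[0] = Int32GetDatum(key);
        if (SPI_execute_plan(plan, values, NULL, true, 0) == SPI_OK_SELECT)
            proc = (int) SPI_processed;
    }
    SPI_finish();
    return proc;
}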
SPI_getargcount
Name
SPI_getargcount — return the number of arguments needed by a plan prepared by SPI_prepare
Synopsis
int SPI_getargcount(void * plan)
Description
SPI_getargcount returns the number of arguments needed to execute a plan prepared by
SPI_prepare.
Arguments
void * plan
Return Value
The expected argument count for the plan, or SPI_ERROR_ARGUMENT if the plan is NULL
SPI_getargtypeid
Name
SPI_getargtypeid — return the data type OID for an argument of a plan prepared by
SPI_prepare
Synopsis
Oid SPI_getargtypeid(void * plan, int argIndex)
Description
SPI_getargtypeid returns the OID representing the type id for the argIndex’th argument of a plan
prepared by SPI_prepare. First argument is at index zero.
Arguments
void * plan
Return Value
The type id of the argument at the given index, or SPI_ERROR_ARGUMENT if the plan is NULL or
argIndex is less than 0 or not less than the number of arguments declared for the plan
SPI_is_cursor_plan
Name
SPI_is_cursor_plan — return true if a plan prepared by SPI_prepare can be used with
SPI_cursor_open
Synopsis
bool SPI_is_cursor_plan(void * plan)
Description
SPI_is_cursor_plan returns true if a plan prepared by SPI_prepare can be passed as an argument
to SPI_cursor_open and false if that is not the case. The criteria are that the plan represents one
single command and that this command is a SELECT without an INTO clause.
Arguments
void * plan
Return Value
true or false to indicate if the plan can produce a cursor or not, or SPI_ERROR_ARGUMENT if the plan
is NULL
SPI_execute_plan
Name
SPI_execute_plan — execute a plan prepared by SPI_prepare
Synopsis
int SPI_execute_plan(void * plan, Datum * values, const char * nulls,
bool read_only, int count)
Description
SPI_execute_plan executes a plan prepared by SPI_prepare. read_only and count have the same
interpretation as in SPI_execute.
Arguments
void * plan
execution plan (returned by SPI_prepare)
Datum * values
An array of actual parameter values. Must have same length as the plan’s number of arguments.
const char * nulls
An array describing which parameters are null. Must have same length as the plan’s number of
arguments. n indicates a null value (entry in values will be ignored); a space indicates a nonnull
value (entry in values is valid).
If nulls is NULL then SPI_execute_plan assumes that no parameters are null.
bool read_only
int count
Return Value
The return value is the same as for SPI_execute, with the following additional possible error (negative)
results:
SPI_ERROR_ARGUMENT
SPI_ERROR_PARAM
Notes
If one of the objects (a table, function, etc.) referenced by the prepared plan is dropped during the session
then the result of SPI_execute_plan for this plan will be unpredictable.
SPI_execp
Name
SPI_execp — execute a plan in read/write mode
Synopsis
int SPI_execp(void * plan, Datum * values, const char * nulls, int count)
Description
SPI_execp is the same as SPI_execute_plan, with the latter’s read_only parameter always taken as
false.
Arguments
void * plan
execution plan (returned by SPI_prepare)
Datum * values
An array of actual parameter values. Must have same length as the plan’s number of arguments.
const char * nulls
An array describing which parameters are null. Must have same length as the plan’s number of
arguments. n indicates a null value (entry in values will be ignored); a space indicates a nonnull
value (entry in values is valid).
If nulls is NULL then SPI_execp assumes that no parameters are null.
int count
Return Value
See SPI_execute_plan.
SPI_processed and SPI_tuptable are set as in SPI_execute if successful.
SPI_cursor_open
Name
SPI_cursor_open — set up a cursor using a plan created with SPI_prepare
Synopsis
Portal SPI_cursor_open(const char * name, void * plan,
Datum * values, const char * nulls,
bool read_only)
Description
SPI_cursor_open sets up a cursor (internally, a portal) that will execute a plan prepared by
SPI_prepare. The parameters have the same meanings as the corresponding parameters to
SPI_execute_plan.
Using a cursor instead of executing the plan directly has two benefits. First, the result rows can be retrieved
a few at a time, avoiding memory overrun for queries that return many rows. Second, a portal can outlive
the current procedure (it can, in fact, live to the end of the current transaction). Returning the portal name
to the procedure’s caller provides a way of returning a row set as result.
Arguments
Datum * values
An array of actual parameter values. Must have same length as the plan’s number of arguments.
const char * nulls
An array describing which parameters are null. Must have same length as the plan’s number of
arguments. n indicates a null value (entry in values will be ignored); a space indicates a nonnull
value (entry in values is valid).
If nulls is NULL then SPI_cursor_open assumes that no parameters are null.
bool read_only
Return Value
pointer to portal containing the cursor, or NULL on error
SPI_cursor_find
Name
SPI_cursor_find — find an existing cursor by name
Synopsis
Portal SPI_cursor_find(const char * name)
Description
SPI_cursor_find finds an existing portal by name. This is primarily useful to resolve a cursor name
returned as text by some other function.
Arguments
Return Value
pointer to the portal with the specified name, or NULL if none was found
SPI_cursor_fetch
Name
SPI_cursor_fetch — fetch some rows from a cursor
Synopsis
void SPI_cursor_fetch(Portal portal, bool forward, int count)
Description
SPI_cursor_fetch fetches some rows from a cursor. This is equivalent to the SQL command FETCH.
Arguments
Portal portal
Return Value
SPI_processed and SPI_tuptable are set as in SPI_execute if successful.
SPI_cursor_move
Name
SPI_cursor_move — move a cursor
Synopsis
void SPI_cursor_move(Portal portal, bool forward, int count)
Description
SPI_cursor_move skips over some number of rows in a cursor. This is equivalent to the SQL command
MOVE.
Arguments
Portal portal
SPI_cursor_close
Name
SPI_cursor_close — close a cursor
Synopsis
void SPI_cursor_close(Portal portal)
Description
SPI_cursor_close closes a previously created cursor and releases its portal storage.
All open cursors are closed automatically at the end of a transaction. SPI_cursor_close need only be
invoked if it is desirable to release resources sooner.
Arguments
Portal portal
SPI_saveplan
Name
SPI_saveplan — save a plan
Synopsis
void * SPI_saveplan(void * plan)
Description
SPI_saveplan saves a passed plan (prepared by SPI_prepare) in memory protected from freeing by
SPI_finish and by the transaction manager and returns a pointer to the saved plan. This gives you the
ability to reuse prepared plans in the subsequent invocations of your procedure in the current session. You
may save the pointer returned in a local variable. Always check if this pointer is NULL or not either when
preparing a plan or using an already prepared plan in SPI_execute_plan.
Arguments
void * plan
Return Value
Pointer to the saved plan; NULL if unsuccessful. On error, SPI_result is set thus:
SPI_ERROR_ARGUMENT
if plan is NULL
SPI_ERROR_UNCONNECTED
Notes
If one of the objects (a table, function, etc.) referenced by the prepared plan is dropped during the session
then the results of SPI_execute_plan for this plan will be unpredictable.
39.2. Interface Support Functions
The functions described here provide an interface for extracting information from result sets returned by
SPI_execute and other SPI functions.
All functions described in this section may be used by both connected and unconnected procedures.
SPI_fname
Name
SPI_fname — determine the column name for the specified column number
Synopsis
char * SPI_fname(TupleDesc rowdesc, int colnumber)
Description
SPI_fname returns a copy of the column name of the specified column. (You can use pfree to release
the copy of the name when you don’t need it anymore.)
Arguments
TupleDesc rowdesc
Return Value
The column name; NULL if colnumber is out of range. SPI_result set to SPI_ERROR_NOATTRIBUTE
on error.
SPI_fnumber
Name
SPI_fnumber — determine the column number for the specified column name
Synopsis
int SPI_fnumber(TupleDesc rowdesc, const char * colname)
Description
SPI_fnumber returns the column number for the column with the specified name.
If colname refers to a system column (e.g., oid) then the appropriate negative column number
will be returned. The caller should be careful to test the return value for exact equality to
SPI_ERROR_NOATTRIBUTE to detect an error; testing the result for less than or equal to 0 is not correct
unless system columns should be rejected.
Arguments
TupleDesc rowdesc
input row description
const char * colname
column name
Return Value
Column number (count starts at 1), or SPI_ERROR_NOATTRIBUTE if the named column was not found.
SPI_getvalue
Name
SPI_getvalue — return the string value of the specified column
Synopsis
char * SPI_getvalue(HeapTuple row, TupleDesc rowdesc, int colnumber)
Description
SPI_getvalue returns the string representation of the value of the specified column.
The result is returned in memory allocated using palloc. (You can use pfree to release the memory
when you don’t need it anymore.)
Arguments
HeapTuple row
Return Value
Column value, or NULL if the column is null, colnumber is out of range (SPI_result is set
to SPI_ERROR_NOATTRIBUTE), or no output function is available (SPI_result is set to
SPI_ERROR_NOOUTFUNC).
SPI_getbinval
Name
SPI_getbinval — return the binary value of the specified column
Synopsis
Datum SPI_getbinval(HeapTuple row, TupleDesc rowdesc, int colnumber, bool * isnull)
Description
SPI_getbinval returns the value of the specified column in the internal form (as type Datum).
This function does not allocate new space for the datum. In the case of a pass-by-reference data type, the
return value will be a pointer into the passed row.
Arguments
HeapTuple row
Return Value
The binary value of the column is returned. The variable pointed to by isnull is set to true if the column
is null, else to false.
SPI_result is set to SPI_ERROR_NOATTRIBUTE on error.
SPI_gettype
Name
SPI_gettype — return the data type name of the specified column
Synopsis
char * SPI_gettype(TupleDesc rowdesc, int colnumber)
Description
SPI_gettype returns a copy of the data type name of the specified column. (You can use pfree to
release the copy of the name when you don’t need it anymore.)
Arguments
TupleDesc rowdesc
Return Value
The data type name of the specified column, or NULL on error. SPI_result is set to
SPI_ERROR_NOATTRIBUTE on error.
SPI_gettypeid
Name
SPI_gettypeid — return the data type OID of the specified column
Synopsis
Oid SPI_gettypeid(TupleDesc rowdesc, int colnumber)
Description
SPI_gettypeid returns the OID of the data type of the specified column.
Arguments
TupleDesc rowdesc
Return Value
The OID of the data type of the specified column or InvalidOid on error. On error, SPI_result is set
to SPI_ERROR_NOATTRIBUTE.
SPI_getrelname
Name
SPI_getrelname — return the name of the specified relation
Synopsis
char * SPI_getrelname(Relation rel)
Description
SPI_getrelname returns a copy of the name of the specified relation. (You can use pfree to release the
copy of the name when you don’t need it anymore.)
Arguments
Relation rel
input relation
Return Value
The name of the specified relation.
39.3. Memory Management
PostgreSQL allocates memory within memory contexts, which provide a convenient method of managing
allocations made in many different places that need to live for differing amounts of time. Destroying a
context releases all the memory that was allocated in it. Thus, it is not necessary to keep track of individual
objects to avoid memory leaks; instead only a relatively small number of contexts have to be managed.
palloc and related functions allocate memory from the “current” context.
SPI_connect creates a new memory context and makes it current. SPI_finish restores the previous
current memory context and destroys the context created by SPI_connect. These actions ensure that
transient memory allocations made inside your procedure are reclaimed at procedure exit, avoiding mem-
ory leakage.
However, if your procedure needs to return an object in allocated memory (such as a value of a pass-by-
reference data type), you cannot allocate that memory using palloc, at least not while you are connected
to SPI. If you try, the object will be deallocated by SPI_finish, and your procedure will not work
reliably. To solve this problem, use SPI_palloc to allocate memory for your return object. SPI_palloc
allocates memory in the “upper executor context”, that is, the memory context that was current when
SPI_connect was called, which is precisely the right context for a value returned from your procedure.
If SPI_palloc is called while the procedure is not connected to SPI, then it acts the same as a normal
palloc. Before a procedure connects to the SPI manager, the current memory context is the upper execu-
tor context, so all allocations made by the procedure via palloc or by SPI utility functions are made in
this context.
When SPI_connect is called, the private context of the procedure, which is created by SPI_connect, is
made the current context. All allocations made by palloc, repalloc, or SPI utility functions (except for
SPI_copytuple, SPI_returntuple, SPI_modifytuple, and SPI_palloc) are made in this context.
When a procedure disconnects from the SPI manager (via SPI_finish) the current context is restored
to the upper executor context, and all allocations made in the procedure memory context are freed and
cannot be used any more.
All functions described in this section may be used by both connected and unconnected procedures. In an
unconnected procedure, they act the same as the underlying ordinary server functions (palloc, etc.).
SPI_palloc
Name
SPI_palloc — allocate memory in the upper executor context
Synopsis
void * SPI_palloc(Size size)
Description
SPI_palloc allocates memory in the upper executor context.
Arguments
Size size
Return Value
pointer to new storage space of the specified size
SPI_repalloc
Name
SPI_repalloc — reallocate memory in the upper executor context
Synopsis
void * SPI_repalloc(void * pointer, Size size)
Description
SPI_repalloc changes the size of a memory segment previously allocated using SPI_palloc.
This function is no longer different from plain repalloc. It’s kept just for backward compatibility of
existing code.
Arguments
void * pointer
Return Value
pointer to new storage space of specified size with the contents copied from the existing area
SPI_pfree
Name
SPI_pfree — free memory in the upper executor context
Synopsis
void SPI_pfree(void * pointer)
Description
SPI_pfree frees memory previously allocated using SPI_palloc or SPI_repalloc.
This function is no longer different from plain pfree. It’s kept just for backward compatibility of existing
code.
Arguments
void * pointer
SPI_copytuple
Name
SPI_copytuple — make a copy of a row in the upper executor context
Synopsis
HeapTuple SPI_copytuple(HeapTuple row)
Description
SPI_copytuple makes a copy of a row in the upper executor context. This is normally used to return a
modified row from a trigger. In a function declared to return a composite type, use SPI_returntuple
instead.
Arguments
HeapTuple row
row to be copied
Return Value
the copied row; NULL only if tuple is NULL
SPI_returntuple
Name
SPI_returntuple — prepare to return a tuple as a Datum
Synopsis
HeapTupleHeader SPI_returntuple(HeapTuple row, TupleDesc rowdesc)
Description
SPI_returntuple makes a copy of a row in the upper executor context, returning it in the form of a
row type Datum. The returned pointer need only be converted to Datum via PointerGetDatum before
returning.
Note that this should be used for functions that are declared to return composite types. It is not used for
triggers; use SPI_copytuple for returning a modified row in a trigger.
Arguments
HeapTuple row
row to be copied
TupleDesc rowdesc
descriptor for row (pass the same descriptor each time for most effective caching)
Return Value
HeapTupleHeader pointing to copied row; NULL only if row or rowdesc is NULL
SPI_modifytuple
Name
SPI_modifytuple — create a row by replacing selected fields of a given row
Synopsis
HeapTuple SPI_modifytuple(Relation rel, HeapTuple row, int ncols, int * colnum,
Datum * values, const char * nulls)
Description
SPI_modifytuple creates a new row by substituting new values for selected columns, copying the orig-
inal row’s columns at other positions. The input row is not modified.
Arguments
Relation rel
Used only as the source of the row descriptor for the row. (Passing a relation rather than a row
descriptor is a misfeature.)
HeapTuple row
row to be modified
int ncols
number of column numbers in the array colnum
int * colnum
array of the numbers of the columns that are to be changed (column numbers start at 1)
Datum * values
new values for the specified columns
const char * nulls
which new values are null, if any (see SPI_execute_plan for the format)
Return Value
new row with modifications, allocated in the upper executor context; NULL only if row is NULL
On error, SPI_result is set as follows:
SPI_ERROR_ARGUMENT
if rel is NULL, or if row is NULL, or if ncols is less than or equal to 0, or if colnum is NULL, or if
values is NULL.
SPI_ERROR_NOATTRIBUTE
if colnum contains an invalid column number (less than or equal to 0 or greater than the number of
columns in row)
SPI_freetuple
Name
SPI_freetuple — free a row allocated in the upper executor context
Synopsis
void SPI_freetuple(HeapTuple row)
Description
SPI_freetuple frees a row previously allocated in the upper executor context.
This function is no longer different from plain heap_freetuple. It’s kept just for backward compatibility
of existing code.
Arguments
HeapTuple row
row to free
SPI_freetuptable
Name
SPI_freetuptable — free a row set created by SPI_execute or a similar function
Synopsis
void SPI_freetuptable(SPITupleTable * tuptable)
Description
SPI_freetuptable frees a row set created by a prior SPI command execution function, such as
SPI_execute. Therefore, this function is usually called with the global variable SPI_tuptable as
argument.
This function is useful if a SPI procedure needs to execute multiple commands and does not want to keep
the results of earlier commands around until it ends. Note that any unfreed row sets will be freed anyway
at SPI_finish.
Arguments
SPITupleTable * tuptable
SPI_freeplan
Name
SPI_freeplan — free a previously saved plan
Synopsis
int SPI_freeplan(void *plan)
Description
SPI_freeplan releases a command execution plan previously returned by SPI_prepare or saved by
SPI_saveplan.
Arguments
void * plan
Return Value
SPI_ERROR_ARGUMENT if plan is NULL.
• During the execution of an SQL command, any data changes made by the command are invisible to the
command itself. For example, in
INSERT INTO a SELECT * FROM a;
the inserted rows are not visible to the SELECT part.
The next section contains an example that illustrates the application of these rules.
39.5. Examples
This section contains a very simple example of SPI usage. The procedure execq takes an SQL command
as its first argument and a row count as its second, executes the command using SPI_exec and returns
the number of rows that were processed by the command. You can find more complex examples for SPI
in the source tree in src/test/regress/regress.c and in contrib/spi.
#include "executor/spi.h"
int
execq(text *sql, int cnt)
{
char *command;
int ret;
int proc;
/* Convert given text object to a C string */
command = DatumGetCString(DirectFunctionCall1(textout,
PointerGetDatum(sql)));
SPI_connect();
ret = SPI_exec(command, cnt);
proc = SPI_processed;
/*
* If this is a SELECT and some rows were fetched,
* then the rows are printed via elog(INFO).
*/
if (ret == SPI_OK_SELECT && SPI_processed > 0)
{
TupleDesc tupdesc = SPI_tuptable->tupdesc;
SPITupleTable *tuptable = SPI_tuptable;
char buf[8192];
int i, j;
for (i = 0; i < proc; i++)
{
    HeapTuple tuple = tuptable->vals[i];
    for (j = 1, buf[0] = 0; j <= tupdesc->natts; j++)
        snprintf(buf + strlen(buf), sizeof(buf) - strlen(buf), " %s%s",
                 SPI_getvalue(tuple, tupdesc, j),
                 (j == tupdesc->natts) ? " " : " |");
    elog(INFO, "EXECQ: %s", buf);
}
SPI_finish();
pfree(command);
return (proc);
}
(This function uses call convention version 0, to make the example easier to understand. In real applica-
tions you should use the new version 1 interface.)
This is how you declare the function after having compiled it into a shared library:
execq
-------
2
(1 row)
execq
-------
3 -- 10 is the max value only, 3 is the real number of rows
(1 row)
INSERT 0 2
=> SELECT * FROM a;
x
---
1
2
2 -- 2 rows * 1 (x in first row)
6 -- 3 rows (2 + 1 just inserted) * 2 (x in second row)
(4 rows) ^^^^^^
rows visible to execq() in different invocations
VI. Reference
The entries in this Reference are meant to provide in reasonable length an authoritative, complete, and
formal summary about their respective subjects. More information about the use of PostgreSQL, in nar-
rative, tutorial, or example form, may be found in other parts of this book. See the cross-references listed
on each reference page.
The reference entries are also available as traditional “man” pages.
I. SQL Commands
This part contains reference information for the SQL commands supported by PostgreSQL. By “SQL”
the language in general is meant; information about the standards conformance and compatibility of each
command can be found on the respective reference page.
ABORT
Name
ABORT — abort the current transaction
Synopsis
ABORT [ WORK | TRANSACTION ]
Description
ABORT rolls back the current transaction and causes all the updates made by the transaction to be discarded.
This command is identical in behavior to the standard SQL command ROLLBACK, and is present only
for historical reasons.
Parameters
WORK
TRANSACTION
Notes
Use COMMIT to successfully terminate a transaction.
Issuing ABORT when not inside a transaction does no harm, but it will provoke a warning message.
Examples
To abort all changes:
ABORT;
Compatibility
This command is a PostgreSQL extension present for historical reasons. ROLLBACK is the equivalent
standard SQL command.
See Also
BEGIN, COMMIT, ROLLBACK
ALTER AGGREGATE
Name
ALTER AGGREGATE — change the definition of an aggregate function
Synopsis
ALTER AGGREGATE name ( type ) RENAME TO newname
ALTER AGGREGATE name ( type ) OWNER TO newowner
Description
ALTER AGGREGATE changes the definition of an aggregate function.
Parameters
name
The name (optionally schema-qualified) of an existing aggregate function.
type
The argument data type of the aggregate function, or * if the function accepts any data type.
newname
The new name of the aggregate function.
newowner
The new owner of the aggregate function. You must be a superuser to change an aggregate’s owner.
Examples
To rename the aggregate function myavg for type integer to my_average:
ALTER AGGREGATE myavg(integer) RENAME TO my_average;
To change the owner of the aggregate function myavg for type integer to joe:
ALTER AGGREGATE myavg(integer) OWNER TO joe;
Compatibility
There is no ALTER AGGREGATE statement in the SQL standard.
See Also
CREATE AGGREGATE, DROP AGGREGATE
ALTER CONVERSION
Name
ALTER CONVERSION — change the definition of a conversion
Synopsis
ALTER CONVERSION name RENAME TO newname
ALTER CONVERSION name OWNER TO newowner
Description
ALTER CONVERSION changes the definition of a conversion.
Parameters
name
The name (optionally schema-qualified) of an existing conversion.
newname
The new name of the conversion.
newowner
The new owner of the conversion. To change the owner of a conversion, you must be a superuser.
Examples
To rename the conversion iso_8859_1_to_utf_8 to latin1_to_unicode:
ALTER CONVERSION iso_8859_1_to_utf_8 RENAME TO latin1_to_unicode;
Compatibility
There is no ALTER CONVERSION statement in the SQL standard.
See Also
CREATE CONVERSION, DROP CONVERSION
ALTER DATABASE
Name
ALTER DATABASE — change a database
Synopsis
ALTER DATABASE name SET parameter { TO | = } { value | DEFAULT }
ALTER DATABASE name RESET parameter
ALTER DATABASE name RENAME TO newname
ALTER DATABASE name OWNER TO new_owner
Description
ALTER DATABASE changes the attributes of a database.
The first two forms change the session default for a run-time configuration variable for a PostgreSQL
database. Whenever a new session is subsequently started in that database, the specified value becomes
the session default value. The database-specific default overrides whatever setting is present in
postgresql.conf or has been received from the postmaster command line. Only the database
owner or a superuser can change the session defaults for a database. Certain variables cannot be set this
way, or can only be set by a superuser.
The third form changes the name of the database. Only the database owner or a superuser can rename a
database; non-superuser owners must also have the CREATEDB privilege. The current database cannot be
renamed. (Connect to a different database if you need to do that.)
The fourth form changes the owner of the database. Only a superuser can change the database’s owner.
Parameters
name
The name of the database whose attributes are to be altered.
parameter
value
Set this database’s session default for the specified configuration parameter to the given value. If
value is DEFAULT or, equivalently, RESET is used, the database-specific setting is removed, so the
system-wide default setting will be inherited in new sessions. Use RESET ALL to clear all database-
specific settings.
See SET and Section 16.4 for more information about allowed parameter names and values.
newname
The new name of the database.
new_owner
The new owner of the database.
Notes
It is also possible to tie a session default to a specific user rather than to a database; see ALTER USER.
User-specific settings override database-specific ones if there is a conflict.
Examples
To disable index scans by default in the database test:
ALTER DATABASE test SET enable_indexscan TO off;
Compatibility
The ALTER DATABASE statement is a PostgreSQL extension.
See Also
CREATE DATABASE, DROP DATABASE, SET
ALTER DOMAIN
Name
ALTER DOMAIN — change the definition of a domain
Synopsis
ALTER DOMAIN name
{ SET DEFAULT expression | DROP DEFAULT }
ALTER DOMAIN name
{ SET | DROP } NOT NULL
ALTER DOMAIN name
ADD domain_constraint
ALTER DOMAIN name
DROP CONSTRAINT constraint_name [ RESTRICT | CASCADE ]
ALTER DOMAIN name
OWNER TO new_owner
Description
ALTER DOMAIN changes the definition of an existing domain. There are several sub-forms:
SET/DROP DEFAULT
These forms set or remove the default value for a domain. Note that defaults only apply to subsequent
INSERT commands; they do not affect rows already in a table using the domain.
You must own the domain to use ALTER DOMAIN; except for ALTER DOMAIN OWNER, which may only
be executed by a superuser.
Parameters
name
The name (possibly schema-qualified) of an existing domain to alter.
domain_constraint
New domain constraint for the domain.
constraint_name
Name of an existing constraint to drop.
CASCADE
Automatically drop objects that depend on the constraint.
RESTRICT
Refuse to drop the constraint if there are any dependent objects. This is the default behavior.
new_owner
The user name of the new owner of the domain.
Examples
To add a NOT NULL constraint to a domain:
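(Using a hypothetical domain named zipcode:)
ALTER DOMAIN zipcode SET NOT NULL;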
Compatibility
The ALTER DOMAIN statement is compatible with SQL:1999, except for the OWNER variant, which is a
PostgreSQL extension.
ALTER FUNCTION
Name
ALTER FUNCTION — change the definition of a function
Synopsis
ALTER FUNCTION name ( [ type [, ...] ] ) RENAME TO newname
ALTER FUNCTION name ( [ type [, ...] ] ) OWNER TO newowner
Description
ALTER FUNCTION changes the definition of a function.
Parameters
name
The name (optionally schema-qualified) of an existing function.
type
The data type of an argument of the function.
newname
The new name of the function.
newowner
The new owner of the function. To change the owner of a function, you must be a superuser. Note
that if the function is marked SECURITY DEFINER, it will subsequently execute as the new owner.
Examples
To rename the function sqrt for type integer to square_root:
ALTER FUNCTION sqrt(integer) RENAME TO square_root;
To change the owner of the function sqrt for type integer to joe:
ALTER FUNCTION sqrt(integer) OWNER TO joe;
Compatibility
There is an ALTER FUNCTION statement in the SQL standard, but it does not provide the option to rename
the function or change the owner.
See Also
CREATE FUNCTION, DROP FUNCTION
ALTER GROUP
Name
ALTER GROUP — change a user group
Synopsis
ALTER GROUP groupname ADD USER username [, ... ]
ALTER GROUP groupname DROP USER username [, ... ]
ALTER GROUP groupname RENAME TO newname
Description
ALTER GROUP changes the attributes of a user group.
The first two variants add users to a group or remove them from a group. Only database superusers can
use this command.
The third variant changes the name of the group. Only a database superuser can rename groups.
Parameters
groupname
The name of the group to modify.
username
Users that are to be added to or removed from the group. The users must already exist; ALTER GROUP
does not create or drop users.
newname
The new name of the group.
Examples
Add users to a group:
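(The group and user names here are hypothetical:)
ALTER GROUP staff ADD USER karl, john;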
Compatibility
There is no ALTER GROUP statement in the SQL standard. The concept of roles is similar.
See Also
CREATE GROUP, DROP GROUP
ALTER INDEX
Name
ALTER INDEX — change the definition of an index
Synopsis
ALTER INDEX name
    action [, ... ]
ALTER INDEX name
    RENAME TO new_name

where action is one of:

    OWNER TO new_owner
    SET TABLESPACE indexspace_name
Description
ALTER INDEX changes the definition of an existing index. There are several subforms:
OWNER
This form changes the owner of the index to the specified user. This can only be done by a superuser.
SET TABLESPACE
This form changes the index’s tablespace to the specified tablespace and moves the data file(s) asso-
ciated with the index to the new tablespace. See also CREATE TABLESPACE.
RENAME
The RENAME form changes the name of the index. There is no effect on the stored data.
All the actions except RENAME can be combined into a list of multiple alterations to apply in parallel.
Parameters
name
The name (possibly schema-qualified) of an existing index to alter.
new_name
New name for the index.
new_owner
The user name of the new owner of the index.
tablespace_name
The tablespace name to which the index will be moved.
Notes
These operations are also possible using ALTER TABLE. ALTER INDEX is in fact just an alias for the
forms of ALTER TABLE that apply to indexes.
Changing any part of a system catalog index is not permitted.
Examples
To rename an existing index:
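(The index names here are hypothetical:)
ALTER INDEX distributors RENAME TO suppliers;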
Compatibility
ALTER INDEX is a PostgreSQL extension.
ALTER LANGUAGE
Name
ALTER LANGUAGE — change the definition of a procedural language
Synopsis
ALTER LANGUAGE name RENAME TO newname
Description
ALTER LANGUAGE changes the definition of a language. The only functionality is to rename the language.
Only a superuser can rename languages.
Parameters
name
Name of a language
newname
The new name of the language
Compatibility
There is no ALTER LANGUAGE statement in the SQL standard.
See Also
CREATE LANGUAGE, DROP LANGUAGE
ALTER OPERATOR
Name
ALTER OPERATOR — change the definition of an operator
Synopsis
ALTER OPERATOR name ( { lefttype | NONE } , { righttype | NONE } ) OWNER TO newowner
Description
ALTER OPERATOR changes the definition of an operator. The only currently available functionality is to
change the owner of the operator.
Parameters
name
The name (optionally schema-qualified) of an existing operator.
lefttype
The data type of the operator’s left operand; write NONE if the operator has no left operand.
righttype
The data type of the operator’s right operand; write NONE if the operator has no right operand.
newowner
The new owner of the operator. You must be a superuser to change the owner of an operator.
Examples
Change the owner of a custom operator a @@ b for type text:
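(The new owner joe is hypothetical:)
ALTER OPERATOR @@ (text, text) OWNER TO joe;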
Compatibility
There is no ALTER OPERATOR statement in the SQL standard.
See Also
CREATE OPERATOR, DROP OPERATOR
ALTER OPERATOR CLASS
Name
ALTER OPERATOR CLASS — change the definition of an operator class
Synopsis
ALTER OPERATOR CLASS name USING index_method RENAME TO newname
ALTER OPERATOR CLASS name USING index_method OWNER TO newowner
Description
ALTER OPERATOR CLASS changes the definition of an operator class.
Parameters
name
The name (optionally schema-qualified) of an existing operator class.
index_method
The name of the index method this operator class is for.
newname
The new name of the operator class.
newowner
The new owner of the operator class. You must be a superuser to change the owner of an operator
class.
Compatibility
There is no ALTER OPERATOR CLASS statement in the SQL standard.
See Also
CREATE OPERATOR CLASS, DROP OPERATOR CLASS
ALTER SCHEMA
Name
ALTER SCHEMA — change the definition of a schema
Synopsis
ALTER SCHEMA name RENAME TO newname
ALTER SCHEMA name OWNER TO newowner
Description
ALTER SCHEMA changes the definition of a schema. To rename a schema you must own the schema and
have the privilege CREATE for the database. To change the owner of a schema, you must be a superuser.
Parameters
name
The name of an existing schema.
newname
The new name of the schema. The new name cannot begin with pg_, as such names are reserved for
system schemas.
newowner
The new owner of the schema.
Compatibility
There is no ALTER SCHEMA statement in the SQL standard.
See Also
CREATE SCHEMA, DROP SCHEMA
ALTER SEQUENCE
Name
ALTER SEQUENCE — change the definition of a sequence generator
Synopsis
ALTER SEQUENCE name [ INCREMENT [ BY ] increment ]
[ MINVALUE minvalue | NO MINVALUE ] [ MAXVALUE maxvalue | NO MAXVALUE ]
[ RESTART [ WITH ] start ] [ CACHE cache ] [ [ NO ] CYCLE ]
Description
ALTER SEQUENCE changes the parameters of an existing sequence generator. Any parameter not specifi-
cally set in the ALTER SEQUENCE command retains its prior setting.
Parameters
name
The name (optionally schema-qualified) of a sequence to be altered.
increment
The clause INCREMENT BY increment is optional. A positive value will make an ascending se-
quence, a negative one a descending sequence. If unspecified, the old increment value will be main-
tained.
minvalue
NO MINVALUE
The optional clause MINVALUE minvalue determines the minimum value a sequence can generate.
If NO MINVALUE is specified, the defaults of 1 and -2^63-1 for ascending and descending sequences,
respectively, will be used. If neither option is specified, the current minimum value will be main-
tained.
maxvalue
NO MAXVALUE
The optional clause MAXVALUE maxvalue determines the maximum value for the sequence. If NO
MAXVALUE is specified, the defaults of 2^63-1 and -1 for ascending and descending sequences, respec-
tively, will be used. If neither option is specified, the current maximum value will be maintained.
start
The optional clause RESTART WITH start changes the current value of the sequence.
cache
The clause CACHE cache enables sequence numbers to be preallocated and stored in memory for
faster access. The minimum value is 1 (only one value can be generated at a time, i.e., no cache). If
unspecified, the old cache value will be maintained.
CYCLE
The optional CYCLE key word may be used to enable the sequence to wrap around when the
maxvalue or minvalue has been reached by an ascending or descending sequence respectively.
If the limit is reached, the next number generated will be the minvalue or maxvalue,
respectively.
NO CYCLE
If the optional NO CYCLE key word is specified, any calls to nextval after the sequence has reached
its maximum value will return an error. If neither CYCLE nor NO CYCLE is specified, the old cycle
behavior will be maintained.
Examples
Restart a sequence called serial, at 105:
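ALTER SEQUENCE serial RESTART WITH 105;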
Notes
To avoid blocking of concurrent transactions that obtain numbers from the same sequence, ALTER
SEQUENCE is never rolled back; the changes take effect immediately and are not reversible.
ALTER SEQUENCE will not immediately affect nextval results in backends, other than the current one,
that have preallocated (cached) sequence values. They will use up all cached values prior to noticing the
changed sequence parameters. The current backend will be affected immediately.
Compatibility
ALTER SEQUENCE conforms with SQL:2003.
ALTER TABLE
Name
ALTER TABLE — change the definition of a table
Synopsis
ALTER TABLE [ ONLY ] name [ * ]
action [, ... ]
ALTER TABLE [ ONLY ] name [ * ]
RENAME [ COLUMN ] column TO new_column
ALTER TABLE name
RENAME TO new_name
Description
ALTER TABLE changes the definition of an existing table. There are several subforms:
ADD COLUMN
This form adds a new column to the table using the same syntax as CREATE TABLE.
DROP COLUMN
This form drops a column from a table. Indexes and table constraints involving the column will be
automatically dropped as well. You will need to say CASCADE if anything outside the table depends
on the column, for example, foreign key references or views.
ALTER TYPE
This form changes the type of a column of a table. Indexes and simple table constraints involving
the column will be automatically converted to use the new column type by reparsing the originally
supplied expression. The optional USING clause specifies how to compute the new column value
from the old; if omitted, the default conversion is the same as an assignment cast from old data type
to new. A USING clause must be provided if there is no implicit or assignment cast from old to new
type.
SET/DROP DEFAULT
These forms set or remove the default value for a column. The default values only apply to subsequent
INSERT commands; they do not cause rows already in the table to change. Defaults may also be
created for views, in which case they are inserted into INSERT statements on the view before the
view’s ON INSERT rule is applied.
SET/DROP NOT NULL
These forms change whether a column is marked to allow null values or to reject null values. You
can only use SET NOT NULL when the column contains no null values.
SET STATISTICS
This form sets the per-column statistics-gathering target for subsequent ANALYZE operations. The
target can be set in the range 0 to 1000; alternatively, set it to -1 to revert to using the system de-
fault statistics target (default_statistics_target). For more information on the use of statistics by the
PostgreSQL query planner, refer to Section 13.2.
SET STORAGE
This form sets the storage mode for a column. This controls whether this column is held inline or in
a supplementary table, and whether the data should be compressed or not. PLAIN must be used for
fixed-length values such as integer and is inline, uncompressed. MAIN is for inline, compressible
data. EXTERNAL is for external, uncompressed data, and EXTENDED is for external, compressed data.
EXTENDED is the default for most data types that support non-PLAIN storage. Use of EXTERNAL will
make substring operations on text and bytea columns faster, at the penalty of increased storage
space. Note that SET STORAGE doesn’t itself change anything in the table, it just sets the strategy to
be pursued during future table updates. See Section 49.2 for more information.
ADD table_constraint
This form adds a new constraint to a table using the same syntax as CREATE TABLE.
DROP CONSTRAINT
This form drops constraints on a table. Currently, constraints on tables are not required to have
unique names, so there may be more than one constraint matching the specified name. All matching
constraints will be dropped.
CLUSTER
This form selects the default index for future CLUSTER operations. It does not actually re-cluster the
table.
SET WITHOUT CLUSTER
This form removes the most recently used CLUSTER index specification from the table. This affects
future cluster operations that don’t specify an index.
SET WITHOUT OIDS
This form removes the oid system column from the table. This is exactly equivalent to DROP
COLUMN oid RESTRICT, except that it will not complain if there is already no oid column.
Note that there is no variant of ALTER TABLE that allows OIDs to be restored to a table once they
have been removed.
OWNER
This form changes the owner of the table, index, sequence, or view to the specified user.
SET TABLESPACE
This form changes the table’s tablespace to the specified tablespace and moves the data file(s) associ-
ated with the table to the new tablespace. Indexes on the table, if any, are not moved; but they can be
moved separately with additional SET TABLESPACE commands. See also CREATE TABLESPACE.
RENAME
The RENAME forms change the name of a table (or an index, sequence, or view) or the name of an
individual column in a table. There is no effect on the stored data.
All the actions except RENAME can be combined into a list of multiple alterations to apply in parallel.
For example, it is possible to add several columns and/or alter the type of several columns in a single
command. This is particularly useful with large tables, since only one pass over the table need be made.
You must own the table to use ALTER TABLE, except for ALTER TABLE OWNER, which may only be
executed by a superuser.
Parameters
name
The name (possibly schema-qualified) of an existing table to alter. If ONLY is specified, only that
table is altered. If ONLY is not specified, the table and all its descendant tables (if any) are updated.
* can be appended to the table name to indicate that descendant tables are to be altered, but in the
current version, this is the default behavior. (In releases before 7.1, ONLY was the default behavior.
The default can be altered by changing the configuration parameter sql_inheritance.)
column
Name of a new or existing column.
new_column
New name for an existing column.
new_name
New name for the table.
type
Data type of the new column, or new data type for an existing column.
table_constraint
New table constraint for the table.
constraint_name
Name of an existing constraint to drop.
CASCADE
Automatically drop objects that depend on the dropped column or constraint (for example, views
referencing the column).
RESTRICT
Refuse to drop the column or constraint if there are any dependent objects. This is the default behav-
ior.
index_name
The index name on which the table should be marked for clustering.
new_owner
The user name of the new owner of the table.
tablespace_name
The tablespace name to which the table will be moved.
Notes
The key word COLUMN is noise and can be omitted.
When a column is added with ADD COLUMN, all existing rows in the table are initialized with the column’s
default value (NULL if no DEFAULT clause is specified).
Adding a column with a non-null default or changing the type of an existing column will require the entire
table to be rewritten. This may take a significant amount of time for a large table; and it will temporarily
require double the disk space.
Adding a CHECK or NOT NULL constraint requires scanning the table to verify that existing rows meet the
constraint.
The main reason for providing the option to specify multiple changes in a single ALTER TABLE is that
multiple table scans or rewrites can thereby be combined into a single pass over the table.
The DROP COLUMN form does not physically remove the column, but simply makes it invisible to SQL
operations. Subsequent insert and update operations in the table will store a null value for the column.
Thus, dropping a column is quick but it will not immediately reduce the on-disk size of your table, as the
space occupied by the dropped column is not reclaimed. The space will be reclaimed over time as existing
rows are updated.
The fact that ALTER TYPE requires rewriting the whole table is sometimes an advantage, because the
rewriting process eliminates any dead space in the table. For example, to reclaim the space occupied by a
dropped column immediately, the fastest way is
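ALTER TABLE table ALTER COLUMN anycol TYPE anytype;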
where anycol is any remaining table column and anytype is the same type that column already has.
This results in no semantically-visible change in the table, but the command forces rewriting, which gets
rid of no-longer-useful data.
The USING option of ALTER TYPE can actually specify any expression involving the old values of the
row; that is, it can refer to other columns as well as the one being converted. This allows very general
conversions to be done with the ALTER TYPE syntax. Because of this flexibility, the USING expression is
not applied to the column’s default value (if any); the result might not be a constant expression as required
for a default. This means that when there is no implicit or assignment cast from old to new type, ALTER
TYPE may fail to convert the default even though a USING clause is supplied. In such cases, drop the
default with DROP DEFAULT, perform the ALTER TYPE, and then use SET DEFAULT to add a suitable
new default. Similar considerations apply to indexes and constraints involving the column.
If a table has any descendant tables, it is not permitted to add, rename, or change the type of a column in
the parent table without doing the same to the descendants. That is, ALTER TABLE ONLY will be rejected.
This ensures that the descendants always have columns matching the parent.
A recursive DROP COLUMN operation will remove a descendant table’s column only if the descendant
does not inherit that column from any other parents and never had an independent definition of the col-
umn. A nonrecursive DROP COLUMN (i.e., ALTER TABLE ONLY ... DROP COLUMN) never removes any
descendant columns, but instead marks them as independently defined rather than inherited.
Changing any part of a system catalog table is not permitted.
Refer to CREATE TABLE for a further description of valid parameters. Chapter 5 has further information
on inheritance.
Examples
To add a column of type varchar to a table:
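For example (table and column names are illustrative):
ALTER TABLE distributors ADD COLUMN address varchar(30);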
To change an integer column containing UNIX timestamps to timestamp with time zone via a
USING clause:
ALTER TABLE foo
    ALTER COLUMN foo_timestamp TYPE timestamp with time zone
    USING
        timestamp with time zone 'epoch' + foo_timestamp * interval '1 second';
To add a foreign key constraint to a table:
ALTER TABLE distributors ADD CONSTRAINT distfk FOREIGN KEY (address) REFERENCES addresses (address);
To add an automatically named primary key constraint to a table, noting that a table can only ever have
one primary key:
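For example (dist_id is an illustrative column name):
ALTER TABLE distributors ADD PRIMARY KEY (dist_id);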
Compatibility
The ADD, DROP, and SET DEFAULT forms conform with the SQL standard. The other forms are Post-
greSQL extensions of the SQL standard. Also, the ability to specify more than one manipulation in a
single ALTER TABLE command is an extension.
ALTER TABLE DROP COLUMN can be used to drop the only column of a table, leaving a zero-column
table. This is an extension of SQL, which disallows zero-column tables.
ALTER TABLESPACE
Name
ALTER TABLESPACE — change the definition of a tablespace
Synopsis
ALTER TABLESPACE name RENAME TO newname
ALTER TABLESPACE name OWNER TO newowner
Description
ALTER TABLESPACE changes the definition of a tablespace.
Parameters
name
The name of an existing tablespace.
newname
The new name of the tablespace. The new name cannot begin with pg_, as such names are reserved
for system tablespaces.
newowner
The new owner of the tablespace. You must be a superuser to change the owner of a tablespace.
Examples
Rename tablespace index_space to fast_raid:
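ALTER TABLESPACE index_space RENAME TO fast_raid;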
Compatibility
There is no ALTER TABLESPACE statement in the SQL standard.
See Also
CREATE TABLESPACE, DROP TABLESPACE
ALTER TRIGGER
Name
ALTER TRIGGER — change the definition of a trigger
Synopsis
ALTER TRIGGER name ON table RENAME TO newname
Description
ALTER TRIGGER changes properties of an existing trigger. The RENAME clause changes the name of the
given trigger without otherwise changing the trigger definition.
You must own the table on which the trigger acts to be allowed to change its properties.
Parameters
name
The name of an existing trigger to alter.
table
The name of the table on which this trigger acts.
newname
The new name for the trigger.
Examples
To rename an existing trigger:
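For example, using illustrative trigger and table names:
ALTER TRIGGER emp_stamp ON emp RENAME TO emp_track_chgs;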
Compatibility
ALTER TRIGGER is a PostgreSQL extension of the SQL standard.
ALTER TYPE
Name
ALTER TYPE — change the definition of a type
Synopsis
ALTER TYPE name OWNER TO new_owner
Description
ALTER TYPE changes the definition of an existing type. The only currently available capability is chang-
ing the owner of a type.
Parameters
name
The name (possibly schema-qualified) of an existing type to alter.
new_owner
The user name of the new owner of the type. You must be a superuser to change a type’s owner.
Examples
To change the owner of the user-defined type email to joe:
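ALTER TYPE email OWNER TO joe;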
Compatibility
There is no ALTER TYPE statement in the SQL standard.
ALTER USER
Name
ALTER USER — change a database user account
Synopsis
ALTER USER name [ [ WITH ] option [ ... ] ]

where option can be:

      CREATEDB | NOCREATEDB
    | CREATEUSER | NOCREATEUSER
    | [ ENCRYPTED | UNENCRYPTED ] PASSWORD 'password'
    | VALID UNTIL 'abstime'

ALTER USER name RENAME TO newname

ALTER USER name SET parameter { TO | = } { value | DEFAULT }
ALTER USER name RESET parameter
Description
ALTER USER changes the attributes of a PostgreSQL user account. Attributes not mentioned in the com-
mand retain their previous settings.
The first variant of this command listed in the synopsis changes certain per-user privileges and authenti-
cation settings. (See below for details.) Database superusers can change any of these settings for any user.
Ordinary users can only change their own password.
The second variant changes the name of the user. Only a database superuser can rename user accounts. The
current session user cannot be renamed. (Connect as a different user if you need to do that.) Because MD5-
encrypted passwords use the user name as cryptographic salt, renaming a user clears their MD5 password.
The third and the fourth variant change a user’s session default for a specified configuration variable.
Whenever the user subsequently starts a new session, the specified value becomes the session default,
overriding whatever setting is present in postgresql.conf or has been received from the postmaster
command line. Ordinary users can change their own session defaults. Superusers can change anyone’s
session defaults. Certain variables cannot be set this way, or can only be set by a superuser.
Parameters
name
The name of the user whose attributes are to be altered.
CREATEDB
NOCREATEDB
These clauses define a user’s ability to create databases. If CREATEDB is specified, the user will
be allowed to create his own databases. Using NOCREATEDB will deny a user the ability to create
databases. (If the user is also a superuser, then this setting has no real effect.)
CREATEUSER
NOCREATEUSER
These clauses determine whether a user will be permitted to create new users himself. CREATEUSER
will also make the user a superuser, who can override all access restrictions.
password
The new password to be used for this account.
ENCRYPTED
UNENCRYPTED
These key words control whether the password is stored encrypted in pg_shadow. (See CREATE
USER for more information about this choice.)
abstime
The date (and, optionally, the time) at which this user’s password is to expire. To set the password
never to expire, use ’infinity’.
newname
The new name of the user.
parameter
value
Set this user’s session default for the specified configuration parameter to the given value. If value
is DEFAULT or, equivalently, RESET is used, the user-specific variable setting is removed, so the user
will inherit the system-wide default setting in new sessions. Use RESET ALL to clear all user-specific
settings.
See SET and Section 16.4 for more information about allowed parameter names and values.
Notes
Use CREATE USER to add new users, and DROP USER to remove a user.
ALTER USER cannot change a user’s group memberships. Use ALTER GROUP to do that.
The VALID UNTIL clause defines an expiration time for a password only, not for the user account per se.
In particular, the expiration time is not enforced when logging in using a non-password-based authentica-
tion method.
It is also possible to tie a session default to a specific database rather than to a user; see ALTER DATABASE.
User-specific settings override database-specific ones if there is a conflict.
Examples
Change a user’s password:
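For example (user name and password are illustrative):
ALTER USER davide WITH PASSWORD 'hu8jmn3';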
Change a password expiration date, specifying that the password should expire at midday on 4th May
2005 using the time zone which is one hour ahead of UTC:
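For example (the user name is illustrative):
ALTER USER manuel VALID UNTIL 'May 4 12:00:00 2005 +1';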
Give a user the ability to create other users and new databases:
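For example (the user name is illustrative):
ALTER USER miriam CREATEUSER CREATEDB;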
Compatibility
The ALTER USER statement is a PostgreSQL extension. The SQL standard leaves the definition of users
to the implementation.
See Also
CREATE USER, DROP USER, SET
ANALYZE
Name
ANALYZE — collect statistics about a database
Synopsis
ANALYZE [ VERBOSE ] [ table [ (column [, ...] ) ] ]
Description
ANALYZE collects statistics about the contents of tables in the database, and stores the results in the system
table pg_statistic. Subsequently, the query planner uses these statistics to help determine the most
efficient execution plans for queries.
With no parameter, ANALYZE examines every table in the current database. With a parameter, ANALYZE
examines only that table. It is further possible to give a list of column names, in which case only the
statistics for those columns are collected.
Parameters
VERBOSE
Enables display of progress messages.
table
The name (possibly schema-qualified) of a specific table to analyze. Defaults to all tables in the
current database.
column
The name of a specific column to analyze. Defaults to all columns.
Outputs
When VERBOSE is specified, ANALYZE emits progress messages to indicate which table is currently being
processed. Various statistics about the tables are printed as well.
Notes
It is a good idea to run ANALYZE periodically, or just after making major changes in the contents of a table.
Accurate statistics will help the planner to choose the most appropriate query plan, and thereby improve
the speed of query processing. A common strategy is to run VACUUM and ANALYZE once a day during a
low-usage time of day.
Unlike VACUUM FULL, ANALYZE requires only a read lock on the target table, so it can run in parallel with
other activity on the table.
The statistics collected by ANALYZE usually include a list of some of the most common values in each
column and a histogram showing the approximate data distribution in each column. One or both of these
may be omitted if ANALYZE deems them uninteresting (for example, in a unique-key column, there are
no common values) or if the column data type does not support the appropriate operators. There is more
information about the statistics in Chapter 21.
For large tables, ANALYZE takes a random sample of the table contents, rather than examining every row.
This allows even very large tables to be analyzed in a small amount of time. Note, however, that the
statistics are only approximate, and will change slightly each time ANALYZE is run, even if the actual
table contents did not change. This may result in small changes in the planner’s estimated costs shown
by EXPLAIN. In rare situations, this non-determinism will cause the query optimizer to choose a different
query plan between runs of ANALYZE. To avoid this, raise the amount of statistics collected by ANALYZE,
as described below.
The extent of analysis can be controlled by adjusting the default_statistics_target configuration variable, or
on a column-by-column basis by setting the per-column statistics target with ALTER TABLE ... ALTER
COLUMN ... SET STATISTICS (see ALTER TABLE). The target value sets the maximum number of
entries in the most-common-value list and the maximum number of bins in the histogram. The default
target value is 10, but this can be adjusted up or down to trade off accuracy of planner estimates against
the time taken for ANALYZE and the amount of space occupied in pg_statistic. In particular, setting
the statistics target to zero disables collection of statistics for that column. It may be useful to do that for
columns that are never used as part of the WHERE, GROUP BY, or ORDER BY clauses of queries, since the
planner will have no use for statistics on such columns.
The largest statistics target among the columns being analyzed determines the number of table rows sam-
pled to prepare the statistics. Increasing the target causes a proportional increase in the time and space
needed to do ANALYZE.
Compatibility
There is no ANALYZE statement in the SQL standard.
BEGIN
Name
BEGIN — start a transaction block
Synopsis
BEGIN [ WORK | TRANSACTION ] [ transaction_mode [, ...] ]
Description
BEGIN initiates a transaction block, that is, all statements after a BEGIN command will be executed in
a single transaction until an explicit COMMIT or ROLLBACK is given. By default (without BEGIN),
PostgreSQL executes transactions in “autocommit” mode, that is, each statement is executed in its own
transaction and a commit is implicitly performed at the end of the statement (if execution was successful,
otherwise a rollback is done).
Statements are executed more quickly in a transaction block, because transaction start/commit requires
significant CPU and disk activity. Execution of multiple statements inside a transaction is also useful
to ensure consistency when making several related changes: other sessions will be unable to see the
intermediate states wherein not all the related updates have been done.
If the isolation level or read/write mode is specified, the new transaction has those characteristics, as if
SET TRANSACTION was executed.
Parameters
WORK
TRANSACTION
Optional key words. They have no effect.
Refer to SET TRANSACTION for information on the meaning of the other parameters to this statement.
Notes
START TRANSACTION has the same functionality as BEGIN.
Use COMMIT or ROLLBACK to terminate a transaction block.
Issuing BEGIN when already inside a transaction block will provoke a warning message. The state of
the transaction is not affected. To nest transactions within a transaction block, use savepoints (see SAVE-
POINT).
For reasons of backwards compatibility, the commas between successive transaction_modes may
be omitted.
Examples
To begin a transaction block:
BEGIN;
Compatibility
BEGIN is a PostgreSQL language extension. It is equivalent to the SQL-standard command START
TRANSACTION, whose reference page contains additional compatibility information.
Incidentally, the BEGIN key word is used for a different purpose in embedded SQL. You are advised to be
careful about the transaction semantics when porting database applications.
See Also
COMMIT, ROLLBACK, START TRANSACTION, SAVEPOINT
CHECKPOINT
Name
CHECKPOINT — force a transaction log checkpoint
Synopsis
CHECKPOINT
Description
Write-Ahead Logging (WAL) puts a checkpoint in the transaction log every so often. (To adjust the
automatic checkpoint interval, see the run-time configuration options checkpoint_segments and check-
point_timeout.) The CHECKPOINT command forces an immediate checkpoint when the command is is-
sued, without waiting for a scheduled checkpoint.
A checkpoint is a point in the transaction log sequence at which all data files have been updated to reflect
the information in the log. All data files will be flushed to disk. Refer to Chapter 25 for more information
about the WAL system.
Only superusers may call CHECKPOINT. The command is not intended for use during normal operation.
Compatibility
The CHECKPOINT command is a PostgreSQL language extension.
CLOSE
Name
CLOSE — close a cursor
Synopsis
CLOSE name
Description
CLOSE frees the resources associated with an open cursor. After the cursor is closed, no subsequent oper-
ations are allowed on it. A cursor should be closed when it is no longer needed.
Every non-holdable open cursor is implicitly closed when a transaction is terminated by COMMIT or
ROLLBACK. A holdable cursor is implicitly closed if the transaction that created it aborts via ROLLBACK.
If the creating transaction successfully commits, the holdable cursor remains open until an explicit CLOSE
is executed, or the client disconnects.
Parameters
name
The name of an open cursor to close.
Notes
PostgreSQL does not have an explicit OPEN cursor statement; a cursor is considered open when it is
declared. Use the DECLARE statement to declare a cursor.
Examples
Close the cursor liahona:
CLOSE liahona;
Compatibility
CLOSE is fully conforming with the SQL standard.
See Also
DECLARE, FETCH, MOVE
CLUSTER
Name
CLUSTER — cluster a table according to an index
Synopsis
CLUSTER indexname ON tablename
CLUSTER tablename
CLUSTER
Description
CLUSTER instructs PostgreSQL to cluster the table specified by tablename based on the index specified
by indexname. The index must already have been defined on tablename.
When a table is clustered, it is physically reordered based on the index information. Clustering is a one-
time operation: when the table is subsequently updated, the changes are not clustered. That is, no attempt
is made to store new or updated rows according to their index order. If one wishes, one can periodically
recluster by issuing the command again.
When a table is clustered, PostgreSQL remembers on which index it was clustered. The form CLUSTER
tablename reclusters the table using the same index on which it was previously clustered.
CLUSTER without any parameter reclusters all the tables in the current database that the calling user owns,
or all tables if called by a superuser. (Never-clustered tables are not included.) This form of CLUSTER
cannot be called from inside a transaction or function.
When a table is being clustered, an ACCESS EXCLUSIVE lock is acquired on it. This prevents any other
database operations (both reads and writes) from operating on the table until the CLUSTER is finished.
Parameters
indexname
The name of an index.
tablename
The name (possibly schema-qualified) of a table.
Notes
In cases where you are accessing single rows randomly within a table, the actual order of the data in the
table is unimportant. However, if you tend to access some data more than others, and there is an index
that groups them together, you will benefit from using CLUSTER. If you are requesting a range of indexed
values from a table, or a single indexed value that has multiple rows that match, CLUSTER will help
because once the index identifies the heap page for the first row that matches, all other rows that match
are probably already on the same heap page, and so you save disk accesses and speed up the query.
During the cluster operation, a temporary copy of the table is created that contains the table data in the
index order. Temporary copies of each index on the table are created as well. Therefore, you need free
space on disk at least equal to the sum of the table size and the index sizes.
Because CLUSTER remembers the clustering information, one can cluster the desired tables manually the
first time, and then set up a timed event, similar to VACUUM, so that the tables are periodically
reclustered.
Because the planner records statistics about the ordering of tables, it is advisable to run ANALYZE on the
newly clustered table. Otherwise, the planner may make poor choices of query plans.
There is another way to cluster data. The CLUSTER command reorders the original table using the ordering
of the index you specify. This can be slow on large tables because the rows are fetched from the heap in
index order, and if the heap table is unordered, the entries are on random pages, so there is one disk page
retrieved for every row moved. (PostgreSQL has a cache, but the majority of a big table will not fit in the
cache.) The other way to cluster a table is to use
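CREATE TABLE newtable AS SELECT * FROM table ORDER BY columnlist;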
which uses the PostgreSQL sorting code in the ORDER BY clause to create the desired order; this is usually
much faster than an index scan for unordered data. You then drop the old table, use ALTER TABLE ...
RENAME to rename newtable to the old name, and recreate the table’s indexes. However, this approach
does not preserve OIDs, constraints, foreign key relationships, granted privileges, and other ancillary
properties of the table — all such items must be manually recreated.
Examples
Cluster the table employees on the basis of its index emp_ind:
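CLUSTER emp_ind ON emp;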
Cluster the employees table using the same index that was used before:
CLUSTER emp;
Cluster all tables in the database that have previously been clustered:
CLUSTER;
Compatibility
There is no CLUSTER statement in the SQL standard.
See Also
clusterdb
COMMENT
Name
COMMENT — define or change the comment of an object
Synopsis
COMMENT ON
{
TABLE object_name |
COLUMN table_name.column_name |
AGGREGATE agg_name (agg_type) |
CAST (sourcetype AS targettype) |
CONSTRAINT constraint_name ON table_name |
CONVERSION object_name |
DATABASE object_name |
DOMAIN object_name |
FUNCTION func_name (arg1_type, arg2_type, ...) |
INDEX object_name |
LARGE OBJECT large_object_oid |
OPERATOR op (leftoperand_type, rightoperand_type) |
OPERATOR CLASS object_name USING index_method |
[ PROCEDURAL ] LANGUAGE object_name |
RULE rule_name ON table_name |
SCHEMA object_name |
SEQUENCE object_name |
TRIGGER trigger_name ON table_name |
TYPE object_name |
VIEW object_name
} IS ’text’
Description
COMMENT stores a comment about a database object.
To modify a comment, issue a new COMMENT command for the same object. Only one comment string
is stored for each object. To remove a comment, write NULL in place of the text string. Comments are
automatically dropped when the object is dropped.
Comments can be easily retrieved with the psql commands \dd, \d+, and \l+. Other user interfaces to
retrieve comments can be built atop the same built-in functions that psql uses, namely obj_description
and col_description (see Table 9-43).
Parameters
object_name
table_name.column_name
agg_name
constraint_name
func_name
op
rule_name
trigger_name
The name of the object to be commented. Names of tables, aggregates, domains, functions, indexes,
operators, operator classes, sequences, types, and views may be schema-qualified.
agg_type
The argument data type of the aggregate function, or * if the function accepts any data type.
large_object_oid
The OID of the large object.
PROCEDURAL
This word is noise.
Notes
A comment for a database can only be created in that database, and will only be visible in that database,
not in other databases.
There is presently no security mechanism for comments: any user connected to a database can see all the
comments for objects in that database (although only superusers can change comments for objects that
they don’t own). Therefore, don’t put security-critical information in comments.
Examples
Attach a comment to the table mytable:
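For example (the comment text is illustrative):
COMMENT ON TABLE mytable IS 'This is my table.';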
Remove it again:
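COMMENT ON TABLE mytable IS NULL;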
Compatibility
There is no COMMENT command in the SQL standard.
COMMIT
Name
COMMIT — commit the current transaction
Synopsis
COMMIT [ WORK | TRANSACTION ]
Description
COMMIT commits the current transaction. All changes made by the transaction become visible to others
and are guaranteed to be durable if a crash occurs.
Parameters
WORK
TRANSACTION
Optional key words. They have no effect.
Notes
Use ROLLBACK to abort a transaction.
Issuing COMMIT when not inside a transaction does no harm, but it will provoke a warning message.
Examples
To commit the current transaction and make all changes permanent:
COMMIT;
Compatibility
The SQL standard only specifies the two forms COMMIT and COMMIT WORK. Otherwise, this command is
fully conforming.
See Also
BEGIN, ROLLBACK
COPY
Name
COPY — copy data between a file and a table
Synopsis
COPY tablename [ ( column [, ...] ) ]
    FROM { 'filename' | STDIN }
    [ [ WITH ]
          [ BINARY ]
          [ OIDS ]
          [ DELIMITER [ AS ] 'delimiter' ]
          [ NULL [ AS ] 'null string' ]
          [ CSV [ QUOTE [ AS ] 'quote' ]
                [ ESCAPE [ AS ] 'escape' ]
                [ FORCE NOT NULL column [, ...] ] ] ]

COPY tablename [ ( column [, ...] ) ]
    TO { 'filename' | STDOUT }
    [ [ WITH ]
          [ BINARY ]
          [ OIDS ]
          [ DELIMITER [ AS ] 'delimiter' ]
          [ NULL [ AS ] 'null string' ]
          [ CSV [ QUOTE [ AS ] 'quote' ]
                [ ESCAPE [ AS ] 'escape' ]
                [ FORCE QUOTE column [, ...] ] ] ]
Description
COPY moves data between PostgreSQL tables and standard file-system files. COPY TO copies the contents
of a table to a file, while COPY FROM copies data from a file to a table (appending the data to whatever is
in the table already).
If a list of columns is specified, COPY will only copy the data in the specified columns to or from the file.
If there are any columns in the table that are not in the column list, COPY FROM will insert the default
values for those columns.
COPY with a file name instructs the PostgreSQL server to directly read from or write to a file. The file
must be accessible to the server and the name must be specified from the viewpoint of the server. When
STDIN or STDOUT is specified, data is transmitted via the connection between the client and the server.
Parameters
tablename
The name (optionally schema-qualified) of an existing table.
column
An optional list of columns to be copied. If no column list is specified, all columns will be used.
filename
The absolute path name of the input or output file.
STDIN
Specifies that input comes from the client application.
STDOUT
Specifies that output goes to the client application.
BINARY
Causes all data to be stored or read in binary format rather than as text. You cannot specify the
DELIMITER, NULL, or CSV options in binary mode.
OIDS
Specifies copying the OID for each row. (An error is raised if OIDS is specified for a table that does
not have OIDs.)
delimiter
The single character that separates columns within each row (line) of the file. The default is a tab
character in text mode, a comma in CSV mode.
null string
The string that represents a null value. The default is \N (backslash-N) in text mode, and an unquoted
empty string in CSV mode. You might prefer an empty string even in text mode for cases
where you don’t want to distinguish nulls from empty strings.
Note: When using COPY FROM, any data item that matches this string will be stored as a null
value, so you should make sure that you use the same string as you used with COPY TO.
CSV
Selects Comma Separated Value (CSV) mode.
quote
Specifies the quotation character in CSV mode. The default is double-quote.
escape
Specifies the character that should appear before a QUOTE data character value in CSV mode. The
default is the QUOTE value (so that the quoting character is doubled if it appears in the data).
FORCE QUOTE
In CSV COPY TO mode, forces quoting to be used for all non-NULL values in each specified column.
NULL output is never quoted.
FORCE NOT NULL
In CSV COPY FROM mode, process each specified column as though it were quoted and hence not a
NULL value. For the default null string in CSV mode (''), this causes missing values to be input as
zero-length strings.
Notes
COPY can only be used with plain tables, not with views.
The BINARY key word causes all data to be stored/read as binary format rather than as text. It is somewhat
faster than the normal text mode, but a binary-format file is less portable across machine architectures and
PostgreSQL versions.
You must have select privilege on the table whose values are read by COPY TO, and insert privilege on the
table into which values are inserted by COPY FROM.
Files named in a COPY command are read or written directly by the server, not by the client application.
Therefore, they must reside on or be accessible to the database server machine, not the client. They must
be accessible to and readable or writable by the PostgreSQL user (the user ID the server runs as), not the
client. COPY naming a file is only allowed to database superusers, since it allows reading or writing any
file that the server has privileges to access.
Do not confuse COPY with the psql instruction \copy. \copy invokes COPY FROM STDIN or COPY TO
STDOUT, and then fetches/stores the data in a file accessible to the psql client. Thus, file accessibility and
access rights depend on the client rather than the server when \copy is used.
It is recommended that the file name used in COPY always be specified as an absolute path. This is enforced
by the server in the case of COPY TO, but for COPY FROM you do have the option of reading from a file
specified by a relative path. The path will be interpreted relative to the working directory of the server
process (somewhere below the data directory), not the client’s working directory.
COPY FROM will invoke any triggers and check constraints on the destination table. However, it will not
invoke rules.
COPY input and output is affected by DateStyle. To ensure portability to other PostgreSQL installations
that might use non-default DateStyle settings, DateStyle should be set to ISO before using COPY TO.
COPY stops operation at the first error. This should not lead to problems in the event of a COPY TO, but
the target table will already have received earlier rows in a COPY FROM. These rows will not be visible
or accessible, but they still occupy disk space. This may amount to a considerable amount of wasted disk
space if the failure happened well into a large copy operation. You may wish to invoke VACUUM to recover
the wasted space.
File Formats
Text Format
When COPY is used without the BINARY or CSV options, the data read or written is a text file with one line
per table row. Columns in a row are separated by the delimiter character. The column values themselves
are strings generated by the output function, or acceptable to the input function, of each attribute’s data
type. The specified null string is used in place of columns that are null. COPY FROM will raise an error if
any line of the input file contains more or fewer columns than are expected. If OIDS is specified, the OID
is read or written as the first column, preceding the user data columns.
End of data can be represented by a single line containing just backslash-period (\.). An end-of-data
marker is not necessary when reading from a file, since the end of file serves perfectly well; it is needed
only when copying data to or from client applications using pre-3.0 client protocol.
Backslash characters (\) may be used in the COPY data to quote data characters that might otherwise be
taken as row or column delimiters. In particular, the following characters must be preceded by a backslash
if they appear as part of a column value: backslash itself, newline, carriage return, and the current delimiter
character.
The specified null string is sent by COPY TO without adding any backslashes; conversely, COPY FROM
matches the input against the null string before removing backslashes. Therefore, a null string such as \N
cannot be confused with the actual data value \N (which would be represented as \\N).
The following special backslash sequences are recognized by COPY FROM:
Sequence Represents
\b Backspace (ASCII 8)
\f Form feed (ASCII 12)
\n Newline (ASCII 10)
\r Carriage return (ASCII 13)
\t Tab (ASCII 9)
\v Vertical tab (ASCII 11)
\digits Backslash followed by one to three octal digits
specifies the character with that numeric code
Presently, COPY TO will never emit an octal-digits backslash sequence, but it does use the other sequences
listed above for those control characters.
Any other backslashed character that is not mentioned in the above table will be taken to represent itself.
However, beware of adding backslashes unnecessarily, since that might accidentally produce a string
matching the end-of-data marker (\.) or the null string (\N by default). These strings will be recognized
before any other backslash processing is done.
It is strongly recommended that applications generating COPY data convert data newlines and carriage
returns to the \n and \r sequences respectively. At present it is possible to represent a data carriage
return by a backslash and carriage return, and to represent a data newline by a backslash and newline.
However, these representations might not be accepted in future releases. They are also highly vulnerable
to corruption if the COPY file is transferred across different machines (for example, from Unix to Windows
or vice versa).
COPY TO will terminate each row with a Unix-style newline (“\n”). Servers running on Microsoft Win-
dows instead output carriage return/newline (“\r\n”), but only for COPY to a server file; for consistency
across platforms, COPY TO STDOUT always sends “\n” regardless of server platform. COPY FROM can
handle lines ending with newlines, carriage returns, or carriage return/newlines. To reduce the risk of er-
ror due to un-backslashed newlines or carriage returns that were meant as data, COPY FROM will complain
if the line endings in the input are not all alike.
CSV Format
This format is used for importing and exporting the Comma Separated Value (CSV) file format used by
many other programs, such as spreadsheets. Instead of the escaping used by PostgreSQL’s standard text
mode, it produces and recognizes the common CSV escaping mechanism.
The values in each record are separated by the DELIMITER character. If the value contains the delimiter
character, the QUOTE character, the NULL string, a carriage return, or line feed character, then the whole
value is prefixed and suffixed by the QUOTE character, and any occurrence within the value of a QUOTE
character or the ESCAPE character is preceded by the escape character. You can also use FORCE QUOTE to
force quotes when outputting non-NULL values in specific columns.
The CSV format has no standard way to distinguish a NULL value from an empty string. PostgreSQL’s
COPY handles this by quoting. A NULL is output as the NULL string and is not quoted, while a data value
matching the NULL string is quoted. Therefore, using the default settings, a NULL is written as an unquoted
empty string, while an empty string is written with double quotes (""). Reading values follows similar
rules. You can use FORCE NOT NULL to prevent NULL input comparisons for specific columns.
Note: CSV mode will both recognize and produce CSV files with quoted values containing embedded
carriage returns and line feeds. Thus the files are not strictly one line per table row like text-mode
files. However, PostgreSQL will reject COPY input if any fields contain embedded line end character
sequences that do not match the line ending convention used in the CSV file itself. It is generally safer
to import data containing embedded line end characters using the text or binary formats rather than
CSV.
Note: Many programs produce strange and occasionally perverse CSV files, so the file format is more
a convention than a standard. Thus you might encounter some files that cannot be imported using
this mechanism, and COPY might produce files that other programs cannot process.
Binary Format
The file format used for COPY BINARY changed in PostgreSQL 7.4. The new format consists of a file
header, zero or more tuples containing the row data, and a file trailer. Headers and data are now in network
byte order.
File Header
The file header consists of 15 bytes of fixed fields, followed by a variable-length header extension area.
The fixed fields are:
Signature
11-byte sequence PGCOPY\n\377\r\n\0 — note that the zero byte is a required part of the sig-
nature. (The signature is designed to allow easy identification of files that have been munged by a
non-8-bit-clean transfer. This signature will be changed by end-of-line-translation filters, dropped
zero bytes, dropped high bits, or parity changes.)
Flags field
32-bit integer bit mask to denote important aspects of the file format. Bits are numbered from 0 (LSB)
to 31 (MSB). Note that this field is stored in network byte order (most significant byte first), as are all
the integer fields used in the file format. Bits 16-31 are reserved to denote critical file format issues;
a reader should abort if it finds an unexpected bit set in this range. Bits 0-15 are reserved to signal
backwards-compatible format issues; a reader should simply ignore any unexpected bits set in this
range. Currently only one flag bit is defined, and the rest must be zero:
Bit 16
if 1, OIDs are included in the data; if 0, not
The header extension area is envisioned to contain a sequence of self-identifying chunks. The flags field
is not intended to tell readers what is in the extension area. Specific design of header extension contents
is left for a later release.
This design allows for both backwards-compatible header additions (add header extension chunks, or
set low-order flag bits) and non-backwards-compatible changes (set high-order flag bits to signal such
changes, and add supporting data to the extension area if needed).
Tuples
Each tuple begins with a 16-bit integer count of the number of fields in the tuple. (Presently, all tuples in
a table will have the same count, but that might not always be true.) Then, repeated for each field in the
tuple, there is a 32-bit length word followed by that many bytes of field data. (The length word does not
include itself, and can be zero.) As a special case, -1 indicates a NULL field value. No value bytes follow
in the NULL case.
There is no alignment padding or any other extra data between fields.
Presently, all data values in a COPY BINARY file are assumed to be in binary format (format code one).
It is anticipated that a future extension may add a header field that allows per-column format codes to be
specified.
To determine the appropriate binary format for the actual tuple data you should consult the PostgreSQL
source, in particular the *send and *recv functions for each column’s data type (typically these functions
are found in the src/backend/utils/adt/ directory of the source distribution).
If OIDs are included in the file, the OID field immediately follows the field-count word. It is a normal
field except that it’s not included in the field-count. In particular it has a length word — this will allow
handling of 4-byte vs. 8-byte OIDs without too much pain, and will allow OIDs to be shown as null if that
ever proves desirable.
File Trailer
The file trailer consists of a 16-bit integer word containing -1. This is easily distinguished from a tuple’s
field-count word.
A reader should report an error if a field-count word is neither -1 nor the expected number of columns.
This provides an extra check against somehow getting out of sync with the data.
Examples
The following example copies a table to the client using the vertical bar (|) as the field delimiter:
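For example (country is an illustrative table name, matching the sample data below):
COPY country TO STDOUT WITH DELIMITER '|';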
Here is a sample of data suitable for copying into a table from STDIN:
AF AFGHANISTAN
AL ALBANIA
DZ ALGERIA
ZM ZAMBIA
ZW ZIMBABWE
Note that the white space on each line is actually a tab character.
The following is the same data, output in binary format. The data is shown after filtering through the Unix
utility od -c. The table has three columns; the first has type char(2), the second has type text, and the
third has type integer. All the rows have a null value in the third column.
0000000 P G C O P Y \n 377 \r \n \0 \0 \0 \0 \0 \0
0000020 \0 \0 \0 \0 003 \0 \0 \0 002 A F \0 \0 \0 013 A
0000040 F G H A N I S T A N 377 377 377 377 \0 003
0000060 \0 \0 \0 002 A L \0 \0 \0 007 A L B A N I
0000100 A 377 377 377 377 \0 003 \0 \0 \0 002 D Z \0 \0 \0
0000120 007 A L G E R I A 377 377 377 377 \0 003 \0 \0
0000140 \0 002 Z M \0 \0 \0 006 Z A M B I A 377 377
0000160 377 377 \0 003 \0 \0 \0 002 Z W \0 \0 \0 \b Z I
0000200 M B A B W E 377 377 377 377 377 377
Compatibility
There is no COPY statement in the SQL standard.
The following syntax was used before PostgreSQL version 7.3 and is still supported:
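COPY [ BINARY ] tablename [ WITH OIDS ]
    FROM { 'filename' | stdin }
    [ [USING] DELIMITERS 'delimiter' ]
    [ WITH NULL AS 'null string' ]

COPY [ BINARY ] tablename [ WITH OIDS ]
    TO { 'filename' | stdout }
    [ [USING] DELIMITERS 'delimiter' ]
    [ WITH NULL AS 'null string' ]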
CREATE AGGREGATE
Name
CREATE AGGREGATE — define a new aggregate function
Synopsis
CREATE AGGREGATE name (
BASETYPE = input_data_type,
SFUNC = sfunc,
STYPE = state_data_type
[ , FINALFUNC = ffunc ]
[ , INITCOND = initial_condition ]
)
Description
CREATE AGGREGATE defines a new aggregate function. Some basic and commonly-used aggregate func-
tions are included with the distribution; they are documented in Section 9.15. If one defines new types or
needs an aggregate function not already provided, then CREATE AGGREGATE can be used to provide the
desired features.
If a schema name is given (for example, CREATE AGGREGATE myschema.myagg ...) then the aggre-
gate function is created in the specified schema. Otherwise it is created in the current schema.
An aggregate function is identified by its name and input data type. Two aggregates in the same schema
can have the same name if they operate on different input types. The name and input data type of an
aggregate must also be distinct from the name and input data type(s) of every ordinary function in the
same schema.
An aggregate function is made from one or two ordinary functions: a state transition function sfunc, and
an optional final calculation function ffunc. These are used as follows:
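sfunc( internal-state, next-data-item ) ---> next-internal-state
ffunc( internal-state ) ---> aggregate-value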
PostgreSQL creates a temporary variable of data type stype to hold the current internal state of the
aggregate. At each input data item, the state transition function is invoked to calculate a new internal state
value. After all the data has been processed, the final function is invoked once to calculate the aggregate’s
return value. If there is no final function then the ending state value is returned as-is.
An aggregate function may provide an initial condition, that is, an initial value for the internal state value.
This is specified and stored in the database as a column of type text, but it must be a valid external
representation of a constant of the state value data type. If it is not supplied then the state value starts out
null.
If the state transition function is declared “strict”, then it cannot be called with null inputs. With such a
transition function, aggregate execution behaves as follows. Null input values are ignored (the function is
not called and the previous state value is retained). If the initial state value is null, then the first nonnull in-
put value replaces the state value, and the transition function is invoked beginning with the second nonnull
input value. This is handy for implementing aggregates like max. Note that this behavior is only available
when state_data_type is the same as input_data_type. When these types are different, you
must supply a nonnull initial condition or use a nonstrict transition function.
If the state transition function is not strict, then it will be called unconditionally at each input value, and
must deal with null inputs and null transition values for itself. This allows the aggregate author to have
full control over the aggregate’s handling of null values.
If the final function is declared “strict”, then it will not be called when the ending state value is null;
instead a null result will be returned automatically. (Of course this is just the normal behavior of strict
functions.) In any case the final function has the option of returning a null value. For example, the final
function for avg returns null when it sees there were zero input rows.
Parameters
name
The name (optionally schema-qualified) of the aggregate function to create.
input_data_type
The input data type on which this aggregate function operates. This can be specified as "ANY" for an
aggregate that does not examine its input values (an example is count(*)).
sfunc
The name of the state transition function to be called for each input data value. This is normally
a function of two arguments, the first being of type state_data_type and the second of type
input_data_type. Alternatively, for an aggregate that does not examine its input values, the
function takes just one argument of type state_data_type. In either case the function must
return a value of type state_data_type. This function takes the current state value and the
current input data item, and returns the next state value.
state_data_type
The data type for the aggregate’s state value.
ffunc
The name of the final function called to compute the aggregate’s result after all input data has been
traversed. The function must take a single argument of type state_data_type. The return data
type of the aggregate is defined as the return type of this function. If ffunc is not specified, then the
ending state value is used as the aggregate’s result, and the return type is state_data_type.
initial_condition
The initial setting for the state value. This must be a string constant in the form accepted for the data
type state_data_type. If not specified, the state value starts out null.
The parameters of CREATE AGGREGATE can be written in any order, not just the order illustrated above.
Examples
See Section 31.10.
Compatibility
CREATE AGGREGATE is a PostgreSQL language extension. The SQL standard does not provide for user-
defined aggregate functions.
See Also
ALTER AGGREGATE, DROP AGGREGATE
CREATE CAST
Name
CREATE CAST — define a new cast
Synopsis
CREATE CAST (sourcetype AS targettype)
    WITH FUNCTION funcname (argtypes)
    [ AS ASSIGNMENT | AS IMPLICIT ]

CREATE CAST (sourcetype AS targettype)
    WITHOUT FUNCTION
    [ AS ASSIGNMENT | AS IMPLICIT ]
Description
CREATE CAST defines a new cast. A cast specifies how to perform a conversion between two data types.
For example,
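SELECT CAST(42 AS text);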
converts the integer constant 42 to type text by invoking a previously specified function, in this case
text(int4). (If no suitable cast has been defined, the conversion fails.)
Two types may be binary compatible, which means that they can be converted into one another “for free”
without invoking any function. This requires that corresponding values use the same internal representa-
tion. For instance, the types text and varchar are binary compatible.
By default, a cast can be invoked only by an explicit cast request, that is, an explicit CAST(x AS
typename) or x::typename construct.
If the cast is marked AS ASSIGNMENT then it can be invoked implicitly when assigning a value to a
column of the target data type. For example, supposing that foo.f1 is a column of type text, then
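INSERT INTO foo (f1) VALUES (42);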
will be allowed if the cast from type integer to type text is marked AS ASSIGNMENT, otherwise not.
(We generally use the term assignment cast to describe this kind of cast.)
If the cast is marked AS IMPLICIT then it can be invoked implicitly in any context, whether assignment
or internally in an expression. For example, since || takes text operands,
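SELECT 'The time is ' || now();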
will be allowed only if the cast from type timestamp to text is marked AS IMPLICIT. Otherwise it
will be necessary to write the cast explicitly, for example
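SELECT 'The time is ' || CAST(now() AS text);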
(We generally use the term implicit cast to describe this kind of cast.)
It is wise to be conservative about marking casts as implicit. An overabundance of implicit casting paths
can cause PostgreSQL to choose surprising interpretations of commands, or to be unable to resolve com-
mands at all because there are multiple possible interpretations. A good rule of thumb is to make a cast
implicitly invokable only for information-preserving transformations between types in the same general
type category. For example, the cast from int2 to int4 can reasonably be implicit, but the cast from
float8 to int4 should probably be assignment-only. Cross-type-category casts, such as text to int4,
are best made explicit-only.
To be able to create a cast, you must own the source or the target data type. To create a binary-compatible
cast, you must be superuser. (This restriction is made because an erroneous binary-compatible cast con-
version can easily crash the server.)
Parameters
sourcetype
The name of the source data type of the cast.
targettype
The name of the target data type of the cast.
funcname(argtypes)
The function used to perform the cast. The function name may be schema-qualified. If it is not, the
function will be looked up in the schema search path. The function’s result data type must match the
target type of the cast. Its arguments are discussed below.
WITHOUT FUNCTION
Indicates that the source type and the target type are binary compatible, so no function is required to
perform the cast.
AS ASSIGNMENT
Indicates that the cast may be invoked implicitly in assignment contexts.
AS IMPLICIT
Indicates that the cast may be invoked implicitly in any context.
Cast implementation functions may have one to three arguments. The first argument type must be identical
to the cast’s source type. The second argument, if present, must be type integer; it receives the type
modifier associated with the destination type, or -1 if there is none. The third argument, if present, must
be type boolean; it receives true if the cast is an explicit cast, false otherwise. (Bizarrely, the SQL
spec demands different behaviors for explicit and implicit casts in some cases. This argument is supplied
for functions that must implement such casts. It is not recommended that you design your own data types
so that this matters.)
Ordinarily a cast must have different source and target data types. However, it is allowed to declare a
cast with identical source and target types if it has a cast implementation function with more than one
argument. This is used to represent type-specific length coercion functions in the system catalogs. The
named function is used to coerce a value of the type to the type modifier value given by its second
argument. (Since the grammar presently permits only certain built-in data types to have type modifiers,
this feature is of no use for user-defined target types, but we mention it for completeness.)
When a cast has different source and target types and a function that takes more than one argument, it
represents converting from one type to another and applying a length coercion in a single step. When no
such entry is available, coercion to a type that uses a type modifier involves two steps, one to convert
between data types and a second to apply the modifier.
Notes
Use DROP CAST to remove user-defined casts.
Remember that if you want to be able to convert types both ways you need to declare casts both ways
explicitly.
Prior to PostgreSQL 7.3, every function that had the same name as a data type, returned that data type,
and took one argument of a different type was automatically a cast function. This convention has been
abandoned in face of the introduction of schemas and to be able to represent binary compatible casts in
the system catalogs. The built-in cast functions still follow this naming scheme, but they have to be shown
as casts in the system catalog pg_cast as well.
While not required, it is recommended that you continue to follow this old convention of naming cast
implementation functions after the target data type. Many users are used to being able to cast data types
using a function-style notation, that is typename(x). This notation is in fact nothing more nor less than
a call of the cast implementation function; it is not specially treated as a cast. If your conversion functions
are not named to support this convention then you will have surprised users. Since PostgreSQL allows
overloading of the same function name with different argument types, there is no difficulty in having
multiple conversion functions from different types that all use the target type’s name.
Note: There is one small lie in the preceding paragraph: there is still one case in which pg_cast will
be used to resolve the meaning of an apparent function call. If a function call name(x) matches no
actual function, but name is the name of a data type and pg_cast shows a binary-compatible cast to
this type from the type of x, then the call will be construed as an explicit cast. This exception is made
so that binary-compatible casts can be invoked using functional syntax, even though they lack any
function.
Examples
To create a cast from type text to type int4 using the function int4(text):
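Assuming a suitable function int4(text) already exists, one way to write this is:
CREATE CAST (text AS int4) WITH FUNCTION int4(text);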
Compatibility
The CREATE CAST command conforms to SQL:1999, except that SQL:1999 does not make provisions for
binary-compatible types or extra arguments to implementation functions. AS IMPLICIT is a PostgreSQL
extension, too.
See Also
CREATE FUNCTION, CREATE TYPE, DROP CAST
CREATE CONSTRAINT TRIGGER
Name
CREATE CONSTRAINT TRIGGER — define a new constraint trigger
Synopsis
CREATE CONSTRAINT TRIGGER name
AFTER events ON
tablename constraint attributes
FOR EACH ROW EXECUTE PROCEDURE funcname ( args )
Description
CREATE CONSTRAINT TRIGGER is used within CREATE TABLE/ALTER TABLE and by pg_dump to cre-
ate the special triggers for referential integrity. It is not intended for general use.
Parameters
name
The name of the constraint trigger.
events
The event categories for which this trigger should be fired.
tablename
The name (possibly schema-qualified) of the table in which the triggering events occur.
constraint
Actual constraint specification.
attributes
The constraint attributes.
funcname(args)
The function to call as part of the trigger processing.
CREATE CONVERSION
Name
CREATE CONVERSION — define a new conversion
Synopsis
CREATE [DEFAULT] CONVERSION name
FOR source_encoding TO dest_encoding FROM funcname
Description
CREATE CONVERSION defines a new encoding conversion. Conversion names may be used in the
convert function to specify a particular encoding conversion. Also, conversions that are marked
DEFAULT can be used for automatic encoding conversion between client and server. For this purpose, two
conversions, from encoding A to B and from encoding B to A, must be defined.
To be able to create a conversion, you must have EXECUTE privilege on the function and CREATE privilege
on the destination schema.
Parameters
DEFAULT
The DEFAULT clause indicates that this conversion is the default for this particular source to destina-
tion encoding. There should be only one default encoding in a schema for the encoding pair.
name
The name of the conversion. The conversion name may be schema-qualified. If it is not, the conver-
sion is defined in the current schema. The conversion name must be unique within a schema.
source_encoding
The source encoding name.
dest_encoding
The destination encoding name.
funcname
The function used to perform the conversion. The function name may be schema-qualified. If it is
not, the function will be looked up in the path.
The function must have the following signature:
conv_proc(
integer, -- source encoding ID
integer, -- destination encoding ID
cstring, -- source string (null terminated C string)
cstring, -- destination string (null terminated C string)
integer  -- source string length
) RETURNS void;
Notes
Use DROP CONVERSION to remove user-defined conversions.
The privileges required to create a conversion may be changed in a future release.
Examples
To create a conversion from encoding UNICODE to LATIN1 using myfunc:
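Assuming myfunc has the required signature, one way to write this is (the conversion name is arbitrary):
CREATE CONVERSION myconv FOR 'UNICODE' TO 'LATIN1' FROM myfunc;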
Compatibility
CREATE CONVERSION is a PostgreSQL extension. There is no CREATE CONVERSION statement in the
SQL standard.
See Also
ALTER CONVERSION, CREATE FUNCTION, DROP CONVERSION
CREATE DATABASE
Name
CREATE DATABASE — create a new database
Synopsis
CREATE DATABASE name
[ [ WITH ] [ OWNER [=] dbowner ]
[ TEMPLATE [=] template ]
[ ENCODING [=] encoding ]
[ TABLESPACE [=] tablespace ] ]
Description
CREATE DATABASE creates a new PostgreSQL database.
To create a database, you must be a superuser or have the special CREATEDB privilege. See CREATE
USER.
Normally, the creator becomes the owner of the new database. Superusers can create databases owned
by other users using the OWNER clause. They can even create databases owned by users with no special
privileges. Non-superusers with CREATEDB privilege can only create databases owned by themselves.
By default, the new database will be created by cloning the standard system database template1. A
different template can be specified by writing TEMPLATE name. In particular, by writing TEMPLATE
template0, you can create a virgin database containing only the standard objects predefined by your
version of PostgreSQL. This is useful if you wish to avoid copying any installation-local objects that may
have been added to template1.
Parameters
name
The name of a database to create.
dbowner
The name of the database user who will own the new database, or DEFAULT to use the default
(namely, the user executing the command).
template
The name of the template from which to create the new database, or DEFAULT to use the default
template (template1).
encoding
Character set encoding to use in the new database. Specify a string constant (e.g., ’SQL_ASCII’), or
an integer encoding number, or DEFAULT to use the default encoding. The character sets supported
by the PostgreSQL server are described in Section 20.2.1.
tablespace
The name of the tablespace that will be associated with the new database, or DEFAULT to use the
template database’s tablespace. This tablespace will be the default tablespace used for objects created
in this database. See CREATE TABLESPACE for more information.
Optional parameters can be written in any order, not only the order illustrated above.
Notes
CREATE DATABASE cannot be executed inside a transaction block.
Errors along the line of “could not initialize database directory” are most likely related to insufficient
permissions on the data directory, a full disk, or other file system problems.
Use DROP DATABASE to remove a database.
The program createdb is a wrapper program around this command, provided for convenience.
Although it is possible to copy a database other than template1 by specifying its name as the template,
this is not (yet) intended as a general-purpose “COPY DATABASE” facility. We recommend that databases
used as templates be treated as read-only. See Section 18.3 for more information.
Examples
To create a new database:
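For instance, using an arbitrary database name:
CREATE DATABASE lusiadas;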
To create a database sales owned by user salesapp with a default tablespace of salesspace:
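One way to write this is:
CREATE DATABASE sales OWNER salesapp TABLESPACE salesspace;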
Compatibility
There is no CREATE DATABASE statement in the SQL standard. Databases are equivalent to catalogs,
whose creation is implementation-defined.
CREATE DOMAIN
Name
CREATE DOMAIN — define a new domain
Synopsis
CREATE DOMAIN name [AS] data_type
[ DEFAULT expression ]
[ constraint [ ... ] ]
where constraint is:
[ CONSTRAINT constraint_name ]
{ NOT NULL | NULL | CHECK (expression) }
Description
CREATE DOMAIN creates a new data domain. The user who defines a domain becomes its owner.
If a schema name is given (for example, CREATE DOMAIN myschema.mydomain ...) then the domain
is created in the specified schema. Otherwise it is created in the current schema. The domain name must
be unique among the types and domains existing in its schema.
Domains are useful for abstracting common fields between tables into a single location for maintenance.
For example, an email address column may be used in several tables, all with the same properties. Define
a domain and use that rather than setting up each table’s constraints individually.
Parameters
name
The name (optionally schema-qualified) of a domain to be created.
data_type
The underlying data type of the domain. This may include array specifiers.
DEFAULT expression
The DEFAULT clause specifies a default value for columns of the domain data type. The value is any
variable-free expression (but subqueries are not allowed). The data type of the default expression
must match the data type of the domain. If no default value is specified, then the default value is the
null value.
The default expression will be used in any insert operation that does not specify a value for the
column. If a default value is defined for a particular column, it overrides any default associated with
the domain. In turn, the domain default overrides any default value associated with the underlying
data type.
CONSTRAINT constraint_name
An optional name for a constraint. If not specified, the system generates a name.
NOT NULL
Values of this domain are not allowed to be null.
NULL
Values of this domain are allowed to be null. This is the default.
CHECK (expression)
CHECK clauses specify integrity constraints or tests which values of the domain must satisfy. Each
constraint must be an expression producing a Boolean result. It should use the name VALUE to refer
to the value being tested.
Currently, CHECK expressions cannot contain subqueries nor refer to variables other than VALUE.
Examples
This example creates the us_postal_code data type and then uses the type in a table definition. A
regular expression test is used to verify that the value looks like a valid US postal code.
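A sketch of such a definition (the regular expressions and the table layout are illustrative):
CREATE DOMAIN us_postal_code AS text
CHECK(
   VALUE ~ '^\\d{5}$'
   OR VALUE ~ '^\\d{5}-\\d{4}$'
);
CREATE TABLE us_snail_addy (
   address_id serial PRIMARY KEY,
   street1    text NOT NULL,
   city       text NOT NULL,
   postal     us_postal_code NOT NULL
);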
Compatibility
The command CREATE DOMAIN conforms to the SQL standard.
See Also
ALTER DOMAIN, DROP DOMAIN
CREATE FUNCTION
Name
CREATE FUNCTION — define a new function
Synopsis
CREATE [ OR REPLACE ] FUNCTION name ( [ [ argname ] argtype [, ...] ] )
RETURNS rettype
{ LANGUAGE langname
| IMMUTABLE | STABLE | VOLATILE
| CALLED ON NULL INPUT | RETURNS NULL ON NULL INPUT | STRICT
| [ EXTERNAL ] SECURITY INVOKER | [ EXTERNAL ] SECURITY DEFINER
| AS ’definition’
| AS ’obj_file’, ’link_symbol’
} ...
[ WITH ( attribute [, ...] ) ]
Description
CREATE FUNCTION defines a new function. CREATE OR REPLACE FUNCTION will either create a new
function, or replace an existing definition.
If a schema name is included, then the function is created in the specified schema. Otherwise it is created
in the current schema. The name of the new function must not match any existing function with the same
argument types in the same schema. However, functions of different argument types may share a name
(this is called overloading).
To update the definition of an existing function, use CREATE OR REPLACE FUNCTION. It is not possible
to change the name or argument types of a function this way (if you tried, you would actually be creating
a new, distinct function). Also, CREATE OR REPLACE FUNCTION will not let you change the return type
of an existing function. To do that, you must drop and recreate the function.
If you drop and then recreate a function, the new function is not the same entity as the old; you will
have to drop existing rules, views, triggers, etc. that refer to the old function. Use CREATE OR REPLACE
FUNCTION to change a function definition without breaking objects that refer to the function.
The user that creates the function becomes the owner of the function.
Parameters
name
The name (optionally schema-qualified) of the function to create.
argname
The name of an argument. Some languages (currently only PL/pgSQL) let you use the name in the
function body. For other languages the argument name is just extra documentation.
argtype
The data type(s) of the function’s arguments (optionally schema-qualified), if any. The argument
types may be base, composite, or domain types, or may reference the type of a table column.
Depending on the implementation language it may also be allowed to specify “pseudotypes” such
as cstring. Pseudotypes indicate that the actual argument type is either incompletely specified, or
outside the set of ordinary SQL data types.
The type of a column is referenced by writing tablename.columnname%TYPE. Using this feature
can sometimes help make a function independent of changes to the definition of a table.
rettype
The return data type (optionally schema-qualified). The return type may be a base, composite, or do-
main type, or may reference the type of a table column. Depending on the implementation language
it may also be allowed to specify “pseudotypes” such as cstring.
The SETOF modifier indicates that the function will return a set of items, rather than a single item.
The type of a column is referenced by writing tablename.columnname%TYPE.
langname
The name of the language that the function is implemented in. May be SQL, C, internal, or the
name of a user-defined procedural language. For backward compatibility, the name may be enclosed
by single quotes.
IMMUTABLE
STABLE
VOLATILE
These attributes inform the system whether it is safe to replace multiple evaluations of the function
with a single evaluation, for run-time optimization. At most one choice may be specified. If none of
these appear, VOLATILE is the default assumption.
IMMUTABLE indicates that the function always returns the same result when given the same argument
values; that is, it does not do database lookups or otherwise use information not directly present in
its argument list. If this option is given, any call of the function with all-constant arguments can be
immediately replaced with the function value.
STABLE indicates that within a single table scan the function will consistently return the same result
for the same argument values, but that its result could change across SQL statements. This is the
appropriate selection for functions whose results depend on database lookups, parameter variables
(such as the current time zone), etc. Also note that the current_timestamp family of functions
qualify as stable, since their values do not change within a transaction.
VOLATILE indicates that the function value can change even within a single table scan, so no opti-
mizations can be made. Relatively few database functions are volatile in this sense; some examples
are random(), currval(), timeofday(). Note that any function that has side-effects must be
classified volatile, even if its result is quite predictable, to prevent calls from being optimized away;
an example is setval().
For additional details see Section 31.6.
CALLED ON NULL INPUT
RETURNS NULL ON NULL INPUT
STRICT
CALLED ON NULL INPUT (the default) indicates that the function will be called normally when
some of its arguments are null. It is then the function author’s responsibility to check for null values
if necessary and respond appropriately.
RETURNS NULL ON NULL INPUT or STRICT indicates that the function always returns null when-
ever any of its arguments are null. If this parameter is specified, the function is not executed when
there are null arguments; instead a null result is assumed automatically.
[EXTERNAL] SECURITY INVOKER
[EXTERNAL] SECURITY DEFINER
SECURITY INVOKER indicates that the function is to be executed with the privileges of the user that
calls it. That is the default. SECURITY DEFINER specifies that the function is to be executed with the
privileges of the user that created it.
The key word EXTERNAL is present for SQL conformance but is optional since, unlike in SQL, this
feature does not only apply to external functions.
definition
A string constant defining the function; the meaning depends on the language. It may be an internal
function name, the path to an object file, an SQL command, or text in a procedural language.
obj_file, link_symbol
This form of the AS clause is used for dynamically loadable C language functions when the function
name in the C language source code is not the same as the name of the SQL function. The string
obj_file is the name of the file containing the dynamically loadable object, and link_symbol
is the function’s link symbol, that is, the name of the function in the C language source code. If the
link symbol is omitted, it is assumed to be the same as the name of the SQL function being defined.
attribute
The historical way to specify optional pieces of information about the function. The following at-
tributes may appear here:
isStrict
Same as STRICT or RETURNS NULL ON NULL INPUT.
isCachable
isCachable is an obsolete equivalent of IMMUTABLE; it is still accepted for backwards-compatibility
reasons.
Attribute names are not case-sensitive.
Notes
Refer to Section 31.3 for further information on writing functions.
The full SQL type syntax is allowed for input arguments and return value. However, some details of the
type specification (e.g., the precision field for type numeric) are the responsibility of the underlying
function implementation and are silently swallowed (i.e., not recognized or enforced) by the CREATE
FUNCTION command.
PostgreSQL allows function overloading; that is, the same name can be used for several different functions
so long as they have distinct argument types. However, the C names of all functions must be different, so
you must give overloaded C functions different C names (for example, use the argument types as part of
the C names).
When repeated CREATE FUNCTION calls refer to the same object file, the file is only loaded once. To
unload and reload the file (perhaps during development), use the LOAD command.
Use DROP FUNCTION to remove user-defined functions.
It is often helpful to use dollar quoting (see Section 4.1.2.2) to write the function definition string, rather
than the normal single quote syntax. Without dollar quoting, any single quotes or backslashes in the
function definition must be escaped by doubling them.
To be able to define a function, the user must have the USAGE privilege on the language.
Examples
Here is a trivial example to help you get started. For more information and examples, see Section 31.3.
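A minimal SQL-language function of this kind (the function name and body are illustrative) could be:
CREATE FUNCTION add(integer, integer) RETURNS integer
    AS 'select $1 + $2;'
    LANGUAGE SQL
    IMMUTABLE
    RETURNS NULL ON NULL INPUT;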
Compatibility
A CREATE FUNCTION command is defined in SQL:1999 and later. The PostgreSQL version is similar but
not fully compatible. The attributes are not portable, neither are the different available languages.
See Also
ALTER FUNCTION, DROP FUNCTION, GRANT, LOAD, REVOKE, createlang
CREATE GROUP
Name
CREATE GROUP — define a new user group
Synopsis
CREATE GROUP name [ [ WITH ] option [ ... ] ]
where option can be:
SYSID gid
| USER username [, ...]
Description
CREATE GROUP will create a new group of users. You must be a database superuser to use this command.
Note that both users and groups are defined at the database cluster level, and so are valid in all databases
in the cluster.
Use ALTER GROUP to change a group’s membership, and DROP GROUP to remove a group.
Parameters
name
The name of the group.
gid
The SYSID clause can be used to choose the PostgreSQL group ID of the new group. This is normally
not necessary, but may be useful if you need to recreate a group referenced in the permissions of some
object.
If this is not specified, the highest assigned group ID plus one (with a minimum of 100) will be used
as default.
username
A list of users to include in the group. The users must already exist.
Examples
Create an empty group:
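For instance, using an arbitrary group name:
CREATE GROUP staff;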
Compatibility
There is no CREATE GROUP statement in the SQL standard. Roles are similar in concept to groups.
See Also
ALTER GROUP, DROP GROUP
CREATE INDEX
Name
CREATE INDEX — define a new index
Synopsis
CREATE [ UNIQUE ] INDEX name ON table [ USING method ]
( { column | ( expression ) } [ opclass ] [, ...] )
[ TABLESPACE tablespace ]
[ WHERE predicate ]
Description
CREATE INDEX constructs an index index_name on the specified table. Indexes are primarily used to
enhance database performance (though inappropriate use will result in slower performance).
The key field(s) for the index are specified as column names, or alternatively as expressions written in
parentheses. Multiple fields can be specified if the index method supports multicolumn indexes.
An index field can be an expression computed from the values of one or more columns of the table row.
This feature can be used to obtain fast access to data based on some transformation of the basic data. For
example, an index computed on upper(col) would allow the clause WHERE upper(col) = ’JIM’ to
use an index.
PostgreSQL provides the index methods B-tree, R-tree, hash, and GiST. The B-tree index method is an
implementation of Lehman-Yao high-concurrency B-trees. The R-tree index method implements stan-
dard R-trees using Guttman’s quadratic split algorithm. The hash index method is an implementation of
Litwin’s linear hashing. Users can also define their own index methods, but that is fairly complicated.
When the WHERE clause is present, a partial index is created. A partial index is an index that contains
entries for only a portion of a table, usually a portion that is more useful for indexing than the rest of the
table. For example, if you have a table that contains both billed and unbilled orders where the unbilled
orders take up a small fraction of the total table and yet that is an often used section, you can improve
performance by creating an index on just that portion. Another possible application is to use WHERE with
UNIQUE to enforce uniqueness over a subset of a table. See Section 11.7 for more discussion.
The expression used in the WHERE clause may refer only to columns of the underlying table, but it can
use all columns, not just the ones being indexed. Presently, subqueries and aggregate expressions are also
forbidden in WHERE. The same restrictions apply to index fields that are expressions.
All functions and operators used in an index definition must be “immutable”, that is, their results must
depend only on their arguments and never on any outside influence (such as the contents of another table
or the current time). This restriction ensures that the behavior of the index is well-defined. To use a user-
defined function in an index expression or WHERE clause, remember to mark the function immutable when
you create it.
Parameters
UNIQUE
Causes the system to check for duplicate values in the table when the index is created (if data already
exist) and each time data is added. Attempts to insert or update data which would result in duplicate
entries will generate an error.
name
The name of the index to be created. No schema name can be included here; the index is always
created in the same schema as its parent table.
table
The name (possibly schema-qualified) of the table to be indexed.
method
The name of the method to be used for the index. Choices are btree, hash, rtree, and gist. The
default method is btree.
column
The name of a column of the table.
expression
An expression based on one or more columns of the table. The expression usually must be written
with surrounding parentheses, as shown in the syntax. However, the parentheses may be omitted if
the expression has the form of a function call.
opclass
The name of an operator class. See below for details.
tablespace
The tablespace in which to create the index. If not specified, default_tablespace is used, or the
database’s default tablespace if default_tablespace is an empty string.
predicate
The constraint expression for a partial index.
Notes
See Chapter 11 for information about when indexes can be used, when they are not used, and in which
particular situations they can be useful.
Currently, only the B-tree and GiST index methods support multicolumn indexes. Up to 32 fields may be
specified by default. (This limit can be altered when building PostgreSQL.) Only B-tree currently supports
unique indexes.
An operator class can be specified for each column of an index. The operator class identifies the operators
to be used by the index for that column. For example, a B-tree index on four-byte integers would use the
int4_ops class; this operator class includes comparison functions for four-byte integers. In practice the
default operator class for the column’s data type is usually sufficient. The main point of having operator
classes is that for some data types, there could be more than one meaningful ordering. For example, we
might want to sort a complex-number data type either by absolute value or by real part. We could do this
by defining two operator classes for the data type and then selecting the proper class when making an
index. More information about operator classes is in Section 11.6 and in Section 31.14.
Use DROP INDEX to remove an index.
Indexes are not used for IS NULL clauses by default. The best way to use indexes in such cases is to create
a partial index using an IS NULL predicate.
Examples
To create a B-tree index on the column title in the table films:
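One way to write this, with an arbitrary index name, is:
CREATE INDEX title_idx ON films (title);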
To create an index on the column code in the table films and have the index reside in the tablespace
indexspace:
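One way to write this, with an arbitrary index name, is:
CREATE INDEX code_idx ON films (code) TABLESPACE indexspace;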
Compatibility
CREATE INDEX is a PostgreSQL language extension. There are no provisions for indexes in the SQL
standard.
See Also
ALTER INDEX, DROP INDEX
CREATE LANGUAGE
Name
CREATE LANGUAGE — define a new procedural language
Synopsis
CREATE [ TRUSTED ] [ PROCEDURAL ] LANGUAGE name
HANDLER call_handler [ VALIDATOR valfunction ]
Description
Using CREATE LANGUAGE, a PostgreSQL user can register a new procedural language with a PostgreSQL
database. Subsequently, functions and trigger procedures can be defined in this new language. The user
must have the PostgreSQL superuser privilege to register a new language.
CREATE LANGUAGE effectively associates the language name with a call handler that is responsible for
executing functions written in the language. Refer to Chapter 34 for more information about language call
handlers.
Note that procedural languages are local to individual databases. To make a language available in all
databases by default, it should be installed into the template1 database.
Parameters
TRUSTED
TRUSTED specifies that the call handler for the language is safe, that is, it does not offer an unprivi-
leged user any functionality to bypass access restrictions. If this key word is omitted when registering
the language, only users with the PostgreSQL superuser privilege can use this language to create new
functions.
PROCEDURAL
This is a noise word.
name
The name of the new procedural language. The name must be unique among the languages in the
database.
HANDLER call_handler
call_handler is the name of a previously registered function that will be called to execute the
procedural language functions. The call handler for a procedural language must be written in a com-
piled language such as C with version 1 call convention and registered with PostgreSQL as a function
taking no arguments and returning the language_handler type, a placeholder type that is simply
used to identify the function as a call handler.
VALIDATOR valfunction
valfunction is the name of a previously registered function that will be called when a new
function in the language is created, to validate the new function. If no validator function is specified,
then a new function will not be checked when it is created. The validator function must take one
argument of type oid, which will be the OID of the to-be-created function, and will typically return
void.
A validator function would typically inspect the function body for syntactical correctness, but it
can also look at other properties of the function, for example if the language cannot handle certain
argument types. To signal an error, the validator function should use the ereport() function. The
return value of the function is ignored.
Notes
This command normally should not be executed directly by users. For the procedural languages supplied
in the PostgreSQL distribution, the createlang program should be used, which will also install the correct
call handler. (createlang will call CREATE LANGUAGE internally.)
In PostgreSQL versions before 7.3, it was necessary to declare handler functions as returning the place-
holder type opaque, rather than language_handler. To support loading of old dump files, CREATE
LANGUAGE will accept a function declared as returning opaque, but it will issue a notice and change the
function’s declared return type to language_handler.
Use the CREATE FUNCTION command to create a new function.
Use DROP LANGUAGE, or better yet the droplang program, to drop procedural languages.
The system catalog pg_language (see Section 41.18) records information about the currently installed
languages. Also createlang has an option to list the installed languages.
To be able to use a procedural language, a user must be granted the USAGE privilege. The createlang
program automatically grants permissions to everyone if the language is known to be trusted.
Examples
The following two commands executed in sequence will register a new procedural language and the asso-
ciated call handler.
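A sketch, using a hypothetical language plsample whose handler is compiled into $libdir/plsample:
CREATE FUNCTION plsample_call_handler() RETURNS language_handler
    AS '$libdir/plsample'
    LANGUAGE C;
CREATE LANGUAGE plsample
    HANDLER plsample_call_handler;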
Compatibility
CREATE LANGUAGE is a PostgreSQL extension.
See Also
ALTER LANGUAGE, CREATE FUNCTION, DROP LANGUAGE, GRANT, REVOKE, createlang,
droplang
CREATE OPERATOR
Name
CREATE OPERATOR — define a new operator
Synopsis
CREATE OPERATOR name (
PROCEDURE = funcname
[, LEFTARG = lefttype ] [, RIGHTARG = righttype ]
[, COMMUTATOR = com_op ] [, NEGATOR = neg_op ]
[, RESTRICT = res_proc ] [, JOIN = join_proc ]
[, HASHES ] [, MERGES ]
[, SORT1 = left_sort_op ] [, SORT2 = right_sort_op ]
[, LTCMP = less_than_op ] [, GTCMP = greater_than_op ]
)
Description
CREATE OPERATOR defines a new operator, name. The user who defines an operator becomes its owner.
If a schema name is given then the operator is created in the specified schema. Otherwise it is created in
the current schema.
The operator name is a sequence of up to NAMEDATALEN-1 (63 by default) characters from the following
list:
+-*/<>=~!@#%^&|`?
• -- and /* cannot appear anywhere in an operator name, since they will be taken as the start of a
comment.
• A multicharacter operator name cannot end in + or -, unless the name also contains at least one of these
characters:
~!@#%^&|`?
For example, @- is an allowed operator name, but *- is not. This restriction allows PostgreSQL to parse
SQL-compliant commands without requiring spaces between tokens.
The operator != is mapped to <> on input, so these two names are always equivalent.
At least one of LEFTARG and RIGHTARG must be defined. For binary operators, both must be defined. For
right unary operators, only LEFTARG should be defined, while for left unary operators only RIGHTARG
should be defined.
The funcname procedure must have been previously defined using CREATE FUNCTION and must be
defined to accept the correct number of arguments (either one or two) of the indicated types.
The other clauses specify optional operator optimization clauses. Their meaning is detailed in Section
31.13.
Parameters
name
The name of the operator to be defined. See above for allowable characters. The name may be
schema-qualified, for example CREATE OPERATOR myschema.+ (...). If not, then the operator
is created in the current schema. Two operators in the same schema can have the same name if they
operate on different data types. This is called overloading.
funcname
The function used to implement this operator.
lefttype
The data type of the operator’s left operand, if any. This option would be omitted for a left-unary
operator.
righttype
The data type of the operator’s right operand, if any. This option would be omitted for a right-unary
operator.
com_op
The commutator of this operator.
neg_op
The negator of this operator.
res_proc
The restriction selectivity estimator function for this operator.
join_proc
The join selectivity estimator function for this operator.
HASHES
Indicates this operator can support a hash join.
MERGES
Indicates this operator can support a merge join.
left_sort_op
If this operator can support a merge join, the less-than operator that sorts the left-hand data type of
this operator.
right_sort_op
If this operator can support a merge join, the less-than operator that sorts the right-hand data type of
this operator.
less_than_op
If this operator can support a merge join, the less-than operator that compares the input data types of
this operator.
greater_than_op
If this operator can support a merge join, the greater-than operator that compares the input data types
of this operator.
To give a schema-qualified operator name in com_op or the other optional arguments, use the
OPERATOR() syntax, for example
COMMUTATOR = OPERATOR(myschema.===) ,
Notes
Refer to Section 31.12 for further information.
Use DROP OPERATOR to delete user-defined operators from a database. Use ALTER OPERATOR to
modify operators in a database.
Examples
The following command defines a new operator, area-equality, for the data type box:
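A sketch of such a definition; the operator name and the supporting function, estimator, and sort operator
names are illustrative and must already exist:
CREATE OPERATOR === (
    LEFTARG = box,
    RIGHTARG = box,
    PROCEDURE = area_equal_procedure,
    COMMUTATOR = ===,
    NEGATOR = !==,
    RESTRICT = area_restriction_procedure,
    JOIN = area_join_procedure,
    HASHES,
    SORT1 = <<<,
    SORT2 = <<<
);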
Compatibility
CREATE OPERATOR is a PostgreSQL extension. There are no provisions for user-defined operators in the
SQL standard.
See Also
ALTER OPERATOR, CREATE OPERATOR CLASS, DROP OPERATOR
CREATE OPERATOR CLASS
Name
CREATE OPERATOR CLASS — define a new operator class
Synopsis
CREATE OPERATOR CLASS name [ DEFAULT ] FOR TYPE data_type USING index_method AS
{ OPERATOR strategy_number operator_name [ ( op_type, op_type ) ] [ RECHECK ]
| FUNCTION support_number funcname ( argument_type [, ...] )
| STORAGE storage_type
} [, ... ]
Description
CREATE OPERATOR CLASS creates a new operator class. An operator class defines how a particular data
type can be used with an index. The operator class specifies that certain operators will fill particular roles
or “strategies” for this data type and this index method. The operator class also specifies the support
procedures to be used by the index method when the operator class is selected for an index column. All
the operators and functions used by an operator class must be defined before the operator class is created.
If a schema name is given then the operator class is created in the specified schema. Otherwise it is created
in the current schema. Two operator classes in the same schema can have the same name only if they are
for different index methods.
The user who defines an operator class becomes its owner. Presently, the creating user must be a superuser.
(This restriction is made because an erroneous operator class definition could confuse or even crash the
server.)
CREATE OPERATOR CLASS does not presently check whether the operator class definition includes all
the operators and functions required by the index method. It is the user’s responsibility to define a valid
operator class.
Refer to Section 31.14 for further information.
Parameters
name
The name of the operator class to be created. The name may be schema-qualified.
DEFAULT
If present, the operator class will become the default operator class for its data type. At most one
operator class can be the default for a specific data type and index method.
data_type
The column data type that this operator class is for.
index_method
The name of the index method this operator class is for.
strategy_number
The index method’s strategy number for an operator associated with the operator class.
operator_name
The name (optionally schema-qualified) of an operator associated with the operator class.
op_type
The operand data type(s) of an operator, or NONE to signify a left-unary or right-unary operator. The
operand data types may be omitted in the normal case where they are the same as the operator class’s
data type.
RECHECK
If present, the index is “lossy” for this operator, and so the rows retrieved using the index must be
rechecked to verify that they actually satisfy the qualification clause involving this operator.
support_number
The index method’s support procedure number for a function associated with the operator class.
funcname
The name (optionally schema-qualified) of a function that is an index method support procedure for
the operator class.
argument_types
The parameter data type(s) of the function.
storage_type
The data type actually stored in the index. Normally this is the same as the column data type, but
some index methods (only GiST at this writing) allow it to be different. The STORAGE clause must
be omitted unless the index method allows a different type to be used.
The OPERATOR, FUNCTION, and STORAGE clauses may appear in any order.
Notes
The operators should not be defined by SQL functions. A SQL function is likely to be inlined into the
calling query, which will prevent the optimizer from recognizing that the query matches an index.
Examples
The following example command defines a GiST index operator class for the data type _int4 (array of
int4). See contrib/intarray/ for the complete example.
CREATE OPERATOR CLASS gist__int_ops
DEFAULT FOR TYPE _int4 USING gist AS
OPERATOR 3 &&,
OPERATOR 6 = RECHECK,
OPERATOR 7 @,
OPERATOR 8 ~,
OPERATOR 20 @@ (_int4, query_int),
FUNCTION 1 g_int_consistent (internal, _int4, int4),
FUNCTION 2 g_int_union (bytea, internal),
FUNCTION 3 g_int_compress (internal),
FUNCTION 4 g_int_decompress (internal),
FUNCTION 5 g_int_penalty (internal, internal, internal),
FUNCTION 6 g_int_picksplit (internal, internal),
FUNCTION 7 g_int_same (_int4, _int4, internal);
Compatibility
CREATE OPERATOR CLASS is a PostgreSQL extension. There is no CREATE OPERATOR CLASS state-
ment in the SQL standard.
See Also
ALTER OPERATOR CLASS, DROP OPERATOR CLASS
CREATE RULE
Name
CREATE RULE — define a new rewrite rule
Synopsis
CREATE [ OR REPLACE ] RULE name AS ON event
TO table [ WHERE condition ]
DO [ ALSO | INSTEAD ] { NOTHING | command | ( command ; command ... ) }
Description
CREATE RULE defines a new rule applying to a specified table or view. CREATE OR REPLACE RULE will
either create a new rule, or replace an existing rule of the same name for the same table.
The PostgreSQL rule system allows one to define an alternate action to be performed on insertions, up-
dates, or deletions in database tables. Roughly speaking, a rule causes additional commands to be executed
when a given command on a given table is executed. Alternatively, an INSTEAD rule can replace a given
command by another, or cause a command not to be executed at all. Rules are used to implement table
views as well. It is important to realize that a rule is really a command transformation mechanism, or
command macro. The transformation happens before the execution of the commands starts. If you actu-
ally want an operation that fires independently for each physical row, you probably want to use a trigger,
not a rule. More information about the rules system is in Chapter 33.
Presently, ON SELECT rules must be unconditional INSTEAD rules and must have actions that consist of a
single SELECT command. Thus, an ON SELECT rule effectively turns the table into a view, whose visible
contents are the rows returned by the rule’s SELECT command rather than whatever had been stored in the
table (if anything). It is considered better style to write a CREATE VIEW command than to create a real
table and define an ON SELECT rule for it.
You can create the illusion of an updatable view by defining ON INSERT, ON UPDATE, and ON DELETE
rules (or any subset of those that’s sufficient for your purposes) to replace update actions on the view with
appropriate updates on other tables.
There is a catch if you try to use conditional rules for view updates: there must be an unconditional
INSTEAD rule for each action you wish to allow on the view. If the rule is conditional, or is not INSTEAD,
then the system will still reject attempts to perform the update action, because it thinks it might end up
trying to perform the action on the dummy table of the view in some cases. If you want to handle all
the useful cases in conditional rules, add an unconditional DO INSTEAD NOTHING rule to ensure that the
system understands it will never be called on to update the dummy table. Then make the conditional rules
non-INSTEAD; in the cases where they are applied, they add to the default INSTEAD NOTHING action.
Parameters
name
The name of a rule to create. This must be distinct from the name of any other rule for the same table.
Multiple rules on the same table and same event type are applied in alphabetical name order.
event
The event is one of SELECT, INSERT, UPDATE, or DELETE.
table
The name (optionally schema-qualified) of the table or view the rule applies to.
condition
Any SQL conditional expression (returning boolean). The condition expression may not refer to
any tables except NEW and OLD, and may not contain aggregate functions.
INSTEAD
INSTEAD indicates that the commands should be executed instead of the original command.
ALSO
ALSO indicates that the commands should be executed in addition to the original command.
Within condition and command, the special table names NEW and OLD may be used to refer to values
in the referenced table. NEW is valid in ON INSERT and ON UPDATE rules to refer to the new row being
inserted or updated. OLD is valid in ON UPDATE and ON DELETE rules to refer to the existing row being
updated or deleted.
Notes
You must have the privilege RULE on a table to be allowed to define a rule on it.
It is very important to take care to avoid circular rules. For example, though each of the following two
rule definitions are accepted by PostgreSQL, the SELECT command would cause PostgreSQL to report an
error because the query cycled too many times.
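A sketch of such a pair of rules, using illustrative table names t1 and t2, followed by the failing query:
CREATE RULE "_RETURN" AS
    ON SELECT TO t1
    DO INSTEAD
        SELECT * FROM t2;
CREATE RULE "_RETURN" AS
    ON SELECT TO t2
    DO INSTEAD
        SELECT * FROM t1;
SELECT * FROM t1;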
Presently, if a rule action contains a NOTIFY command, the NOTIFY command will be executed uncondi-
tionally, that is, the NOTIFY will be issued even if there are not any rows that the rule should apply to. For
example, with the rule and UPDATE command sketched below, one NOTIFY event will be sent during the
UPDATE, whether or not there are any rows that match the
condition id = 42. This is an implementation restriction that may be fixed in future releases.
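A sketch of such a rule and a triggering UPDATE (the table name mytable and the SET clause are illustrative):
CREATE RULE notify_me AS ON UPDATE TO mytable DO ALSO NOTIFY mytable;
UPDATE mytable SET name = 'foo' WHERE id = 42;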
Compatibility
CREATE RULE is a PostgreSQL language extension, as is the entire query rewrite system.
CREATE SCHEMA
Name
CREATE SCHEMA — define a new schema
Synopsis
CREATE SCHEMA schemaname [ AUTHORIZATION username ] [ schema_element [ ... ] ]
CREATE SCHEMA AUTHORIZATION username [ schema_element [ ... ] ]
Description
CREATE SCHEMA enters a new schema into the current database. The schema name must be distinct from
the name of any existing schema in the current database.
A schema is essentially a namespace: it contains named objects (tables, data types, functions, and oper-
ators) whose names may duplicate those of other objects existing in other schemas. Named objects are
accessed either by “qualifying” their names with the schema name as a prefix, or by setting a search path
that includes the desired schema(s). A CREATE command specifying an unqualified object name creates
the object in the current schema (the one at the front of the search path, which can be determined with the
function current_schema).
Optionally, CREATE SCHEMA can include subcommands to create objects within the new schema. The
subcommands are treated essentially the same as separate commands issued after creating the schema,
except that if the AUTHORIZATION clause is used, all the created objects will be owned by that user.
Parameters
schemaname
The name of a schema to be created. If this is omitted, the user name is used as the schema name.
The name cannot begin with pg_, as such names are reserved for system schemas.
username
The name of the user who will own the schema. If omitted, defaults to the user executing the com-
mand. Only superusers may create schemas owned by users other than themselves.
schema_element
An SQL statement defining an object to be created within the schema. Currently, only CREATE
TABLE, CREATE VIEW, CREATE INDEX, CREATE SEQUENCE, CREATE TRIGGER and GRANT are ac-
cepted as clauses within CREATE SCHEMA. Other kinds of objects may be created in separate com-
mands after the schema is created.
Notes
To create a schema, the invoking user must have the CREATE privilege for the current database. (Of course,
superusers bypass this check.)
Examples
Create a schema:
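For instance, using an arbitrary schema name:
CREATE SCHEMA myschema;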
Create a schema for user joe; the schema will also be named joe:
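One way to write this is:
CREATE SCHEMA AUTHORIZATION joe;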
Compatibility
The SQL standard allows a DEFAULT CHARACTER SET clause in CREATE SCHEMA, as well as more sub-
command types than are presently accepted by PostgreSQL.
The SQL standard specifies that the subcommands in CREATE SCHEMA may appear in any order. The
present PostgreSQL implementation does not handle all cases of forward references in subcommands; it
may sometimes be necessary to reorder the subcommands in order to avoid forward references.
According to the SQL standard, the owner of a schema always owns all objects within it. PostgreSQL
allows schemas to contain objects owned by users other than the schema owner. This can happen only if
the schema owner grants the CREATE privilege on his schema to someone else.
See Also
ALTER SCHEMA, DROP SCHEMA
CREATE SEQUENCE
Name
CREATE SEQUENCE — define a new sequence generator
Synopsis
CREATE [ TEMPORARY | TEMP ] SEQUENCE name [ INCREMENT [ BY ] increment ]
[ MINVALUE minvalue | NO MINVALUE ] [ MAXVALUE maxvalue | NO MAXVALUE ]
[ START [ WITH ] start ] [ CACHE cache ] [ [ NO ] CYCLE ]
Description
CREATE SEQUENCE creates a new sequence number generator. This involves creating and initializing a
new special single-row table with the name name. The generator will be owned by the user issuing the
command.
If a schema name is given then the sequence is created in the specified schema. Otherwise it is created in
the current schema. Temporary sequences exist in a special schema, so a schema name may not be given
when creating a temporary sequence. The sequence name must be distinct from the name of any other
sequence, table, index, or view in the same schema.
After a sequence is created, you use the functions nextval, currval, and setval to operate on the
sequence. These functions are documented in Section 9.12.
Although you cannot update a sequence directly, you can use a query like
SELECT * FROM name;
to examine the parameters and current state of a sequence. In particular, the last_value field of the
sequence shows the last value allocated by any session. (Of course, this value may be obsolete by the time
it’s printed, if other sessions are actively doing nextval calls.)
Parameters
TEMPORARY or TEMP
If specified, the sequence object is created only for this session, and is automatically dropped on
session exit. Existing permanent sequences with the same name are not visible (in this session) while
the temporary sequence exists, unless they are referenced with schema-qualified names.
name
The name (optionally schema-qualified) of the sequence to be created.
increment
The optional clause INCREMENT BY increment specifies which value is added to the current se-
quence value to create a new value. A positive value will make an ascending sequence, a negative
one a descending sequence. The default value is 1.
minvalue
NO MINVALUE
The optional clause MINVALUE minvalue determines the minimum value a sequence can generate.
If this clause is not supplied or NO MINVALUE is specified, then defaults will be used. The defaults
are 1 and -(2^63-1) for ascending and descending sequences, respectively.
maxvalue
NO MAXVALUE
The optional clause MAXVALUE maxvalue determines the maximum value for the sequence. If this
clause is not supplied or NO MAXVALUE is specified, then default values will be used. The defaults
are 2^63-1 and -1 for ascending and descending sequences, respectively.
start
The optional clause START WITH start allows the sequence to begin anywhere. The default start-
ing value is minvalue for ascending sequences and maxvalue for descending ones.
cache
The optional clause CACHE cache specifies how many sequence numbers are to be preallocated and
stored in memory for faster access. The minimum value is 1 (only one value can be generated at a
time, i.e., no cache), and this is also the default.
CYCLE
NO CYCLE
The CYCLE option allows the sequence to wrap around when the maxvalue or minvalue has
been reached by an ascending or descending sequence respectively. If the limit is reached, the next
number generated will be the minvalue or maxvalue, respectively.
If NO CYCLE is specified, any calls to nextval after the sequence has reached its maximum value
will return an error. If neither CYCLE nor NO CYCLE is specified, NO CYCLE is the default.
Notes
Use DROP SEQUENCE to remove a sequence.
Sequences are based on bigint arithmetic, so the range cannot exceed the range of an eight-byte in-
teger (-9223372036854775808 to 9223372036854775807). On some older platforms, there may be no
compiler support for eight-byte integers, in which case sequences use regular integer arithmetic (range
-2147483648 to +2147483647).
Unexpected results may be obtained if a cache setting greater than one is used for a sequence object that
will be used concurrently by multiple sessions. Each session will allocate and cache successive sequence
values during one access to the sequence object and increase the sequence object’s last_value accord-
ingly. Then, the next cache-1 uses of nextval within that session simply return the preallocated values
without touching the sequence object. So, any numbers allocated but not used within a session will be lost
when that session ends, resulting in “holes” in the sequence.
Furthermore, although multiple sessions are guaranteed to allocate distinct sequence values, the values
may be generated out of sequence when all the sessions are considered. For example, with a cache setting
of 10, session A might reserve values 1..10 and return nextval=1, then session B might reserve values
11..20 and return nextval=11 before session A has generated nextval=2. Thus, with a cache setting
of one it is safe to assume that nextval values are generated sequentially; with a cache setting greater
than one you should only assume that the nextval values are all distinct, not that they are generated
purely sequentially. Also, last_value will reflect the latest value reserved by any session, whether or
not it has yet been returned by nextval.
Another consideration is that a setval executed on such a sequence will not be noticed by other sessions
until they have used up any preallocated values they have cached.
Examples
Create an ascending sequence called serial, starting at 101:
CREATE SEQUENCE serial START 101;
Select the next number from this sequence:
SELECT nextval('serial');
 nextval
---------
     114
Update the sequence value after a COPY FROM:
BEGIN;
COPY distributors FROM 'input_file';
SELECT setval('serial', max(id)) FROM distributors;
END;
Compatibility
CREATE SEQUENCE is specified in SQL:2003. PostgreSQL conforms with the standard, with the fol-
lowing exceptions:
• The standard’s AS <data type> expression is not supported.
• Obtaining the next value is done using the nextval() function instead of the standard’s NEXT VALUE
FOR expression.
CREATE TABLE
Name
CREATE TABLE — define a new table
Synopsis
CREATE [ [ GLOBAL | LOCAL ] { TEMPORARY | TEMP } ] TABLE table_name (
{ column_name data_type [ DEFAULT default_expr ] [ column_constraint [ ... ] ]
| table_constraint
| LIKE parent_table [ { INCLUDING | EXCLUDING } DEFAULTS ] } [, ... ]
)
[ INHERITS ( parent_table [, ... ] ) ]
[ WITH OIDS | WITHOUT OIDS ]
[ ON COMMIT { PRESERVE ROWS | DELETE ROWS | DROP } ]
[ TABLESPACE tablespace ]
where column_constraint is:
[ CONSTRAINT constraint_name ]
{ NOT NULL |
NULL |
UNIQUE [ USING INDEX TABLESPACE tablespace ] |
PRIMARY KEY [ USING INDEX TABLESPACE tablespace ] |
CHECK (expression) |
REFERENCES reftable [ ( refcolumn ) ] [ MATCH FULL | MATCH PARTIAL | MATCH SIMPLE ]
[ ON DELETE action ] [ ON UPDATE action ] }
[ DEFERRABLE | NOT DEFERRABLE ] [ INITIALLY DEFERRED | INITIALLY IMMEDIATE ]
and table_constraint is:
[ CONSTRAINT constraint_name ]
{ UNIQUE ( column_name [, ... ] ) [ USING INDEX TABLESPACE tablespace ] |
PRIMARY KEY ( column_name [, ... ] ) [ USING INDEX TABLESPACE tablespace ] |
CHECK ( expression ) |
FOREIGN KEY ( column_name [, ... ] ) REFERENCES reftable [ ( refcolumn [, ... ] ) ]
[ MATCH FULL | MATCH PARTIAL | MATCH SIMPLE ] [ ON DELETE action ] [ ON UPDATE action ] }
[ DEFERRABLE | NOT DEFERRABLE ] [ INITIALLY DEFERRED | INITIALLY IMMEDIATE ]
Description
CREATE TABLE will create a new, initially empty table in the current database. The table will be owned
by the user issuing the command.
If a schema name is given (for example, CREATE TABLE myschema.mytable ...) then the table is
created in the specified schema. Otherwise it is created in the current schema. Temporary tables exist in
a special schema, so a schema name may not be given when creating a temporary table. The table name
must be distinct from the name of any other table, sequence, index, or view in the same schema.
CREATE TABLE also automatically creates a data type that represents the composite type corresponding
to one row of the table. Therefore, tables cannot have the same name as any existing data type in the same
schema.
The optional constraint clauses specify constraints (tests) that new or updated rows must satisfy for an
insert or update operation to succeed. A constraint is an SQL object that helps define the set of valid
values in the table in various ways.
There are two ways to define constraints: table constraints and column constraints. A column constraint is
defined as part of a column definition. A table constraint definition is not tied to a particular column, and it
can encompass more than one column. Every column constraint can also be written as a table constraint;
a column constraint is only a notational convenience for use when the constraint only affects one column.
Parameters
TEMPORARY or TEMP
If specified, the table is created as a temporary table. Temporary tables are automatically dropped
at the end of a session, or optionally at the end of the current transaction (see ON COMMIT below).
Existing permanent tables with the same name are not visible to the current session while the tempo-
rary table exists, unless they are referenced with schema-qualified names. Any indexes created on a
temporary table are automatically temporary as well.
Optionally, GLOBAL or LOCAL can be written before TEMPORARY or TEMP. This makes no difference
in PostgreSQL, but see Compatibility.
table_name
The name (optionally schema-qualified) of the table to be created.
column_name
The name of a column to be created in the new table.
data_type
The data type of the column. This may include array specifiers. For more information on the data
types supported by PostgreSQL, refer to Chapter 8.
DEFAULT default_expr
The DEFAULT clause assigns a default data value for the column whose column definition it appears
within. The value is any variable-free expression (subqueries and cross-references to other columns
in the current table are not allowed). The data type of the default expression must match the data type
of the column.
The default expression will be used in any insert operation that does not specify a value for the
column. If there is no default for a column, then the default is null.
INHERITS ( parent_table [, ... ] )
The optional INHERITS clause specifies a list of tables from which the new table automatically
inherits all columns.
Use of INHERITS creates a persistent relationship between the new child table and its parent table(s).
Schema modifications to the parent(s) normally propagate to children as well, and by default the data
of the child table is included in scans of the parent(s).
If the same column name exists in more than one parent table, an error is reported unless the data
types of the columns match in each of the parent tables. If there is no conflict, then the duplicate
columns are merged to form a single column in the new table. If the column name list of the new
table contains a column name that is also inherited, the data type must likewise match the inherited
column(s), and the column definitions are merged into one. However, inherited and new column
declarations of the same name need not specify identical constraints: all constraints provided from
any declaration are merged together and all are applied to the new table. If the new table explicitly
specifies a default value for the column, this default overrides any defaults from inherited declarations
of the column. Otherwise, any parents that specify default values for the column must all specify the
same default, or an error will be reported.
LIKE parent_table [ { INCLUDING | EXCLUDING } DEFAULTS ]
The LIKE clause specifies a table from which the new table automatically copies all column names,
their data types, and their not-null constraints.
Unlike INHERITS, the new table and original table are completely decoupled after creation is com-
plete. Changes to the original table will not be applied to the new table, and it is not possible to
include data of the new table in scans of the original table.
Default expressions for the copied column definitions will only be copied if INCLUDING DEFAULTS
is specified. The default behavior is to exclude default expressions, resulting in all columns of the
new table having null defaults.
WITH OIDS
WITHOUT OIDS
This optional clause specifies whether rows of the new table should have OIDs (object identifiers)
assigned to them. If neither WITH OIDS nor WITHOUT OIDS is specified, the default value depends
upon the default_with_oids configuration parameter. (If the new table inherits from any tables that
have OIDs, then WITH OIDS is forced even if the command says WITHOUT OIDS.)
If WITHOUT OIDS is specified or implied, the new table does not store OIDs and no OID will be
assigned for a row inserted into it. This is generally considered worthwhile, since it will reduce
OID consumption and thereby postpone the wraparound of the 32-bit OID counter. Once the counter
wraps around, OIDs can no longer be assumed to be unique, which makes them considerably less
useful. In addition, excluding OIDs from a table reduces the space required to store the table on disk
by 4 bytes per row (on most machines), slightly improving performance.
To remove OIDs from a table after it has been created, use ALTER TABLE.
CONSTRAINT constraint_name
An optional name for a column or table constraint. If not specified, the system generates a name.
NOT NULL
The column is not allowed to contain null values.
NULL
The column is allowed to contain null values. This is the default. This clause is only provided for compatibility with some other database systems (see Compatibility below).
UNIQUE (column constraint)
UNIQUE ( column_name [, ... ] ) (table constraint)
The UNIQUE constraint specifies that a group of one or more columns of a table may contain only
unique values. The behavior of the unique table constraint is the same as that for column constraints,
with the additional capability to span multiple columns.
For the purpose of a unique constraint, null values are not considered equal.
Each unique table constraint must name a set of columns that is different from the set of columns
named by any other unique or primary key constraint defined for the table. (Otherwise it would just
be the same constraint listed twice.)
PRIMARY KEY (column constraint)
PRIMARY KEY ( column_name [, ... ] ) (table constraint)
The primary key constraint specifies that a column or columns of a table may contain only unique
(non-duplicate), nonnull values. Technically, PRIMARY KEY is merely a combination of UNIQUE and
NOT NULL, but identifying a set of columns as primary key also provides metadata about the design
of the schema, as a primary key implies that other tables may rely on this set of columns as a unique
identifier for rows.
Only one primary key can be specified for a table, whether as a column constraint or a table constraint.
The primary key constraint should name a set of columns that is different from other sets of columns
named by any unique constraint defined for the same table.
CHECK (expression)
The CHECK clause specifies an expression producing a Boolean result which new or updated rows
must satisfy for an insert or update operation to succeed. Expressions evaluating to TRUE or UN-
KNOWN succeed. Should any row of an insert or update operation produce a FALSE result an error
exception is raised and the insert or update does not alter the database. A check constraint specified
as a column constraint should reference that column’s value only, while an expression appearing in a
table constraint may reference multiple columns.
Currently, CHECK expressions cannot contain subqueries nor refer to variables other than columns of
the current row.
REFERENCES reftable [ ( refcolumn ) ] [ MATCH matchtype ] [ ON DELETE action
] [ ON UPDATE action ] (column constraint)
FOREIGN KEY ( column [, ... ] ) REFERENCES reftable [ ( refcolumn [, ... ] )
] [ MATCH matchtype ] [ ON DELETE action ] [ ON UPDATE action ] (table constraint)
These clauses specify a foreign key constraint, which requires that a group of one or more columns
of the new table must only contain values that match values in the referenced column(s) of some
row of the referenced table. If refcolumn is omitted, the primary key of the reftable is used.
The referenced columns must be the columns of a unique or primary key constraint in the referenced
table.
A value inserted into the referencing column(s) is matched against the values of the referenced table
and referenced columns using the given match type. There are three match types: MATCH FULL,
MATCH PARTIAL, and MATCH SIMPLE, which is also the default. MATCH FULL will not allow one
column of a multicolumn foreign key to be null unless all foreign key columns are null. MATCH
SIMPLE allows some foreign key columns to be null while other parts of the foreign key are not null.
MATCH PARTIAL is not yet implemented.
In addition, when the data in the referenced columns is changed, certain actions are performed on the
data in this table’s columns. The ON DELETE clause specifies the action to perform when a referenced
row in the referenced table is being deleted. Likewise, the ON UPDATE clause specifies the action to
perform when a referenced column in the referenced table is being updated to a new value. If the row
is updated, but the referenced column is not actually changed, no action is done. Referential actions
other than the NO ACTION check cannot be deferred, even if the constraint is declared deferrable.
There are the following possible actions for each clause:
NO ACTION
Produce an error indicating that the deletion or update would create a foreign key constraint
violation. If the constraint is deferred, this error will be produced at constraint check time if
there still exist any referencing rows. This is the default action.
RESTRICT
Produce an error indicating that the deletion or update would create a foreign key constraint
violation. This is the same as NO ACTION except that the check is not deferrable.
CASCADE
Delete any rows referencing the deleted row, or update the value of the referencing column to
the new value of the referenced column, respectively.
SET NULL
Set the referencing column(s) to null.
SET DEFAULT
Set the referencing column(s) to their default values.
If the referenced column(s) are changed frequently, it may be wise to add an index to the foreign key column so that referential actions associated with the foreign key column can be performed more efficiently.
DEFERRABLE
NOT DEFERRABLE
This controls whether the constraint can be deferred. A constraint that is not deferrable will be
checked immediately after every command. Checking of constraints that are deferrable may be post-
poned until the end of the transaction (using the SET CONSTRAINTS command). NOT DEFERRABLE
is the default. Only foreign key constraints currently accept this clause. All other constraint types are
not deferrable.
INITIALLY IMMEDIATE
INITIALLY DEFERRED
If a constraint is deferrable, this clause specifies the default time to check the constraint. If the
constraint is INITIALLY IMMEDIATE, it is checked after each statement. This is the default. If the
constraint is INITIALLY DEFERRED, it is checked only at the end of the transaction. The constraint
check time can be altered with the SET CONSTRAINTS command.
ON COMMIT
The behavior of temporary tables at the end of a transaction block can be controlled using ON
COMMIT. The three options are:
PRESERVE ROWS
No special action is taken at the ends of transactions. This is the default behavior.
DELETE ROWS
All rows in the temporary table will be deleted at the end of each transaction block. Essentially,
an automatic TRUNCATE is done at each commit.
DROP
The temporary table will be dropped at the end of the current transaction block.
TABLESPACE tablespace
The tablespace is the name of the tablespace in which the new table is to be created. If not
specified, default_tablespace is used, or the database’s default tablespace if default_tablespace
is an empty string.
USING INDEX TABLESPACE tablespace
This clause allows selection of the tablespace in which the index associated with a UNIQUE or
PRIMARY KEY constraint will be created. If not specified, default_tablespace is used, or the
database’s default tablespace if default_tablespace is an empty string.
Notes
Using OIDs in new applications is not recommended: where possible, using a SERIAL or other sequence
generator as the table’s primary key is preferred. However, if your application does make use of OIDs to
identify specific rows of a table, it is recommended to create a unique constraint on the oid column of that
table, to ensure that OIDs in the table will indeed uniquely identify rows even after counter wraparound.
Avoid assuming that OIDs are unique across tables; if you need a database-wide unique identifier, use the
combination of tableoid and row OID for the purpose.
Tip: The use of WITHOUT OIDS is not recommended for tables with no primary key, since without either
an OID or a unique data key, it is difficult to identify specific rows.
PostgreSQL automatically creates an index for each unique constraint and primary key constraint to en-
force uniqueness. Thus, it is not necessary to create an index explicitly for primary key columns. (See
CREATE INDEX for more information.)
Unique constraints and primary keys are not inherited in the current implementation. This makes the
combination of inheritance and unique constraints rather dysfunctional.
A table cannot have more than 1600 columns. (In practice, the effective limit is lower because of tuple-
length constraints.)
Examples
Create table films and table distributors:
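A sketch of plausible definitions (the column names and types are illustrative):

CREATE TABLE films (
    code        char(5) CONSTRAINT firstkey PRIMARY KEY,
    title       varchar(40) NOT NULL,
    did         integer NOT NULL,
    date_prod   date,
    kind        varchar(10),
    len         interval hour to minute
);

CREATE TABLE distributors (
    did     integer PRIMARY KEY DEFAULT nextval('serial'),  -- assumes a sequence named serial exists
    name    varchar(40) NOT NULL CHECK (name <> '')
);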
Define a unique table constraint for the table films. Unique table constraints can be defined on one or
more columns of the table.
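For example (the constraint name production is illustrative):

CREATE TABLE films (
    code        char(5),
    title       varchar(40),
    did         integer,
    date_prod   date,
    kind        varchar(10),
    len         interval hour to minute,
    CONSTRAINT production UNIQUE(date_prod)
);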
Define a primary key table constraint for the table films. Primary key table constraints can be defined on
one or more columns of the table.
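For example (the constraint name code_title is illustrative):

CREATE TABLE films (
    code        char(5),
    title       varchar(40),
    did         integer,
    date_prod   date,
    kind        varchar(10),
    len         interval hour to minute,
    CONSTRAINT code_title PRIMARY KEY(code, title)
);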
Define a primary key constraint for table distributors. The following two examples are equivalent,
the first using the table constraint syntax, the second the column constraint syntax.
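Two equivalent definitions, sketched with illustrative columns:

CREATE TABLE distributors (
    did     integer,
    name    varchar(40),
    PRIMARY KEY(did)
);

CREATE TABLE distributors (
    did     integer PRIMARY KEY,
    name    varchar(40)
);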
This assigns a literal constant default value for the column name, arranges for the default value of column
did to be generated by selecting the next value of a sequence object, and makes the default value of
modtime be the time at which the row is inserted.
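For example (the literal 'Luso Films' and the sequence name distributors_serial are illustrative):

CREATE TABLE distributors (
    name      varchar(40) DEFAULT 'Luso Films',
    did       integer DEFAULT nextval('distributors_serial'),  -- assumes this sequence exists
    modtime   timestamp DEFAULT current_timestamp
);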
Define two NOT NULL column constraints on the table distributors, one of which is explicitly given
a name:
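For example (the constraint name no_null is illustrative):

CREATE TABLE distributors (
    did     integer CONSTRAINT no_null NOT NULL,
    name    varchar(40) NOT NULL
);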
Compatibility
The CREATE TABLE command conforms to SQL-92 and to a subset of SQL:1999, with exceptions listed
below.
Temporary Tables
Although the syntax of CREATE TEMPORARY TABLE resembles that of the SQL standard, the effect is not
the same. In the standard, temporary tables are defined just once and automatically exist (starting with
empty contents) in every session that needs them. PostgreSQL instead requires each session to issue its
own CREATE TEMPORARY TABLE command for each temporary table to be used. This allows different
sessions to use the same temporary table name for different purposes, whereas the standard’s approach
constrains all instances of a given temporary table name to have the same table structure.
The standard’s definition of the behavior of temporary tables is widely ignored. PostgreSQL’s behavior
on this point is similar to that of several other SQL databases.
The standard’s distinction between global and local temporary tables is not in PostgreSQL, since that
distinction depends on the concept of modules, which PostgreSQL does not have. For compatibility’s
sake, PostgreSQL will accept the GLOBAL and LOCAL keywords in a temporary table declaration, but they
have no effect.
The ON COMMIT clause for temporary tables also resembles the SQL standard, but has some differences. If
the ON COMMIT clause is omitted, SQL specifies that the default behavior is ON COMMIT DELETE ROWS.
However, the default behavior in PostgreSQL is ON COMMIT PRESERVE ROWS. The ON COMMIT DROP
option does not exist in SQL.
NULL “Constraint”
The NULL “constraint” (actually a non-constraint) is a PostgreSQL extension to the SQL standard that
is included for compatibility with some other database systems (and for symmetry with the NOT NULL
constraint). Since it is the default for any column, its presence is simply noise.
Inheritance
Multiple inheritance via the INHERITS clause is a PostgreSQL language extension. SQL:1999 (but not
SQL-92) defines single inheritance using a different syntax and different semantics. SQL:1999-style in-
heritance is not yet supported by PostgreSQL.
Object IDs
The PostgreSQL concept of OIDs is not standard.
Zero-column tables
PostgreSQL allows a table of no columns to be created (for example, CREATE TABLE foo();). This is
an extension from the SQL standard, which does not allow zero-column tables. Zero-column tables are
not in themselves very useful, but disallowing them creates odd special cases for ALTER TABLE DROP
COLUMN, so it seems cleaner to ignore this spec restriction.
Tablespaces
The PostgreSQL concept of tablespaces is not part of the standard. Hence, the clauses TABLESPACE and
USING INDEX TABLESPACE are extensions.
See Also
ALTER TABLE, DROP TABLE, CREATE TABLESPACE
CREATE TABLE AS
Name
CREATE TABLE AS — define a new table from the results of a query
Synopsis
CREATE [ [ GLOBAL | LOCAL ] { TEMPORARY | TEMP } ] TABLE table_name [ (column_name [, ...] ) ] [ WITH OIDS | WITHOUT OIDS ]
AS query
Description
CREATE TABLE AS creates a table and fills it with data computed by a SELECT command or an EXECUTE
that runs a prepared SELECT command. The table columns have the names and data types associated with
the output columns of the SELECT (except that you can override the column names by giving an explicit
list of new column names).
CREATE TABLE AS bears some resemblance to creating a view, but it is really quite different: it creates
a new table and evaluates the query just once to fill the new table initially. The new table will not track
subsequent changes to the source tables of the query. In contrast, a view re-evaluates its defining SELECT
statement whenever it is queried.
Parameters
GLOBAL or LOCAL
TEMPORARY or TEMP
If specified, the table is created as a temporary table. Refer to CREATE TABLE for details.
table_name
The name (optionally schema-qualified) of the table to be created.
column_name
The name of a column in the new table. If column names are not provided, they are taken from the
output column names of the query. If the table is created from an EXECUTE command, a column
name list cannot be specified.
WITH OIDS
WITHOUT OIDS
This optional clause specifies whether the table created by CREATE TABLE AS should include OIDs.
If neither form of this clause is specified, the value of the default_with_oids configuration parameter
is used.
query
A query statement (that is, a SELECT command or an EXECUTE command that runs a prepared
SELECT command). Refer to SELECT or EXECUTE, respectively, for a description of the allowed
syntax.
Notes
This command is functionally similar to SELECT INTO, but it is preferred since it is less likely to be
confused with other uses of the SELECT INTO syntax. Furthermore, CREATE TABLE AS offers a superset
of the functionality offered by SELECT INTO.
Prior to PostgreSQL 8.0, CREATE TABLE AS always included OIDs in the table it produced. As of Post-
greSQL 8.0, the CREATE TABLE AS command allows the user to explicitly specify whether OIDs should
be included. If the presence of OIDs is not explicitly specified, the default_with_oids configuration vari-
able is used. While this variable currently defaults to true, the default value may be changed in the future.
Therefore, applications that require OIDs in the table created by CREATE TABLE AS should explicitly
specify WITH OIDS to ensure compatibility with future versions of PostgreSQL.
Examples
Create a new table films_recent consisting of only recent entries from the table films:
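For example (the cutoff date is illustrative):

CREATE TABLE films_recent AS
    SELECT * FROM films WHERE date_prod >= '2002-01-01';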
Compatibility
CREATE TABLE AS is specified by the SQL:2003 standard. There are some small differences between the
definition of the command in SQL:2003 and its implementation in PostgreSQL:
• The standard requires parentheses around the subquery clause; in PostgreSQL, these parentheses are
optional.
• The standard defines an ON COMMIT clause; this is not currently implemented by PostgreSQL.
• The standard defines a WITH DATA clause; this is not currently implemented by PostgreSQL.
See Also
CREATE TABLE, EXECUTE, SELECT, SELECT INTO
CREATE TABLESPACE
Name
CREATE TABLESPACE — define a new tablespace
Synopsis
CREATE TABLESPACE tablespacename [ OWNER username ] LOCATION ’directory’
Description
CREATE TABLESPACE registers a new cluster-wide tablespace. The tablespace name must be distinct from
the name of any existing tablespace in the database cluster.
A tablespace allows superusers to define an alternative location on the file system where the data files
containing database objects (such as tables and indexes) may reside.
A user with appropriate privileges can pass tablespacename to CREATE DATABASE, CREATE TABLE,
CREATE INDEX or ADD CONSTRAINT to have the data files for these objects stored within the specified
tablespace.
Parameters
tablespacename
The name of a tablespace to be created. The name cannot begin with pg_, as such names are reserved
for system tablespaces.
username
The name of the user who will own the tablespace. If omitted, defaults to the user executing the
command. Only superusers may create tablespaces, but they can assign ownership of tablespaces to
non-superusers.
directory
The directory that will be used for the tablespace. The directory must be empty and must be owned
by the PostgreSQL system user. The directory must be specified by an absolute path name.
Notes
Tablespaces are only supported on systems that support symbolic links.
Examples
Create a tablespace dbspace at /data/dbs:
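Using the form shown in the synopsis:

CREATE TABLESPACE dbspace LOCATION '/data/dbs';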
Compatibility
CREATE TABLESPACE is a PostgreSQL extension.
See Also
CREATE DATABASE, CREATE TABLE, CREATE INDEX, DROP TABLESPACE, ALTER TABLESPACE
CREATE TRIGGER
Name
CREATE TRIGGER — define a new trigger
Synopsis
CREATE TRIGGER name { BEFORE | AFTER } { event [ OR ... ] }
ON table [ FOR [ EACH ] { ROW | STATEMENT } ]
EXECUTE PROCEDURE funcname ( arguments )
Description
CREATE TRIGGER creates a new trigger. The trigger will be associated with the specified table and will
execute the specified function funcname when certain events occur.
The trigger can be specified to fire either before the operation is attempted on a row (before constraints
are checked and the INSERT, UPDATE, or DELETE is attempted) or after the operation has completed (after
constraints are checked and the INSERT, UPDATE, or DELETE has completed). If the trigger fires before
the event, the trigger may skip the operation for the current row, or change the row being inserted (for
INSERT and UPDATE operations only). If the trigger fires after the event, all changes, including the last
insertion, update, or deletion, are “visible” to the trigger.
A trigger that is marked FOR EACH ROW is called once for every row that the operation modifies. For
example, a DELETE that affects 10 rows will cause any ON DELETE triggers on the target relation to
be called 10 separate times, once for each deleted row. In contrast, a trigger that is marked FOR EACH
STATEMENT only executes once for any given operation, regardless of how many rows it modifies (in
particular, an operation that modifies zero rows will still result in the execution of any applicable FOR
EACH STATEMENT triggers).
If multiple triggers of the same kind are defined for the same event, they will be fired in alphabetical order
by name.
SELECT does not modify any rows so you can not create SELECT triggers. Rules and views are more
appropriate in such cases.
Refer to Chapter 32 for more information about triggers.
Parameters
name
The name to give the new trigger. This must be distinct from the name of any other trigger for the
same table.
BEFORE
AFTER
Determines whether the function is called before or after the event.
event
One of INSERT, UPDATE, or DELETE; this specifies the event that will fire the trigger. Multiple events
can be specified using OR.
table
The name (optionally schema-qualified) of the table the trigger is for.
FOR EACH ROW
FOR EACH STATEMENT
This specifies whether the trigger procedure should be fired once for every row affected by the trigger
event, or just once per SQL statement. If neither is specified, FOR EACH STATEMENT is the default.
funcname
A user-supplied function that is declared as taking no arguments and returning type trigger, which
is executed when the trigger fires.
arguments
An optional comma-separated list of arguments to be provided to the function when the trigger is
executed. The arguments are literal string constants. Simple names and numeric constants may be
written here, too, but they will all be converted to strings. Please check the description of the imple-
mentation language of the trigger function about how the trigger arguments are accessible within the
function; it may be different from normal function arguments.
Notes
To create a trigger on a table, the user must have the TRIGGER privilege on the table.
In PostgreSQL versions before 7.3, it was necessary to declare trigger functions as returning the place-
holder type opaque, rather than trigger. To support loading of old dump files, CREATE TRIGGER will
accept a function declared as returning opaque, but it will issue a notice and change the function’s de-
clared return type to trigger.
Use DROP TRIGGER to remove a trigger.
Examples
Section 32.4 contains a complete example.
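As a minimal sketch only (the table, trigger, and function names are illustrative; check_account_update is assumed to be an existing function returning trigger):

CREATE TRIGGER check_update
    BEFORE UPDATE ON accounts
    FOR EACH ROW
    EXECUTE PROCEDURE check_account_update();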
Compatibility
The CREATE TRIGGER statement in PostgreSQL implements a subset of the SQL:1999 standard. (There
are no provisions for triggers in SQL-92.) The following functionality is missing:
• SQL:1999 allows triggers to fire on updates to specific columns (e.g., AFTER UPDATE OF col1,
col2).
• SQL:1999 allows you to define aliases for the “old” and “new” rows or tables for use in the defini-
tion of the triggered action (e.g., CREATE TRIGGER ... ON tablename REFERENCING OLD ROW
AS somename NEW ROW AS othername ...). Since PostgreSQL allows trigger procedures to be
written in any number of user-defined languages, access to the data is handled in a language-specific
way.
• PostgreSQL only allows the execution of a user-defined function for the triggered action. SQL:1999
allows the execution of a number of other SQL commands, such as CREATE TABLE as triggered action.
This limitation is not hard to work around by creating a user-defined function that executes the desired
commands.
SQL:1999 specifies that multiple triggers should be fired in time-of-creation order. PostgreSQL uses name
order, which was judged more convenient to work with.
The ability to specify multiple actions for a single trigger using OR is a PostgreSQL extension of the SQL
standard.
See Also
CREATE FUNCTION, ALTER TRIGGER, DROP TRIGGER
CREATE TYPE
Name
CREATE TYPE — define a new data type
Synopsis
CREATE TYPE name AS
( attribute_name data_type [, ... ] )
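The second form, which creates a base type, takes a parenthesized list of the parameters described under Parameters below; in rough outline:

CREATE TYPE name (
    INPUT = input_function,
    OUTPUT = output_function
    [ , RECEIVE = receive_function ]
    [ , SEND = send_function ]
    [ , ANALYZE = analyze_function ]
    [ , INTERNALLENGTH = { internallength | VARIABLE } ]
    [ , PASSEDBYVALUE ]
    [ , ALIGNMENT = alignment ]
    [ , STORAGE = storage ]
    [ , DEFAULT = default ]
    [ , ELEMENT = element ]
    [ , DELIMITER = delimiter ]
)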
Description
CREATE TYPE registers a new data type for use in the current database. The user who defines a type
becomes its owner.
If a schema name is given then the type is created in the specified schema. Otherwise it is created in the
current schema. The type name must be distinct from the name of any existing type or domain in the same
schema. (Because tables have associated data types, the type name must also be distinct from the name of
any existing table in the same schema.)
Composite Types
The first form of CREATE TYPE creates a composite type. The composite type is specified by a list of
attribute names and data types. This is essentially the same as the row type of a table, but using CREATE
TYPE avoids the need to create an actual table when all that is wanted is to define a type. A stand-alone
composite type is useful as the argument or return type of a function.
Base Types
The second form of CREATE TYPE creates a new base type (scalar type). The parameters may appear in
any order, not only that illustrated above, and most are optional. You must register two or more functions
(using CREATE FUNCTION) before defining the type. The support functions input_function and output_function are required, while receive_function, send_function, and analyze_function are optional. A fixed internal length may be declared with internallength; otherwise the type is assumed to be variable-length, and the internal representation of all variable-length types must start with a 4-byte integer giving the total length of this value of the type.
The optional flag PASSEDBYVALUE indicates that values of this data type are passed by value, rather than
by reference. You may not pass by value types whose internal representation is larger than the size of the
Datum type (4 bytes on most machines, 8 bytes on a few).
The alignment parameter specifies the storage alignment required for the data type. The allowed val-
ues equate to alignment on 1, 2, 4, or 8 byte boundaries. Note that variable-length types must have an
alignment of at least 4, since they necessarily contain an int4 as their first component.
The storage parameter allows selection of storage strategies for variable-length data types. (Only
plain is allowed for fixed-length types.) plain specifies that data of the type will always be stored
in-line and not compressed. extended specifies that the system will first try to compress a long data
value, and will move the value out of the main table row if it’s still too long. external allows the value
to be moved out of the main table, but the system will not try to compress it. main allows compression,
but discourages moving the value out of the main table. (Data items with this storage strategy may still be
moved out of the main table if there is no other way to make a row fit, but they will be kept in the main
table preferentially over extended and external items.)
A default value may be specified, in case a user wants columns of the data type to default to something
other than the null value. Specify the default with the DEFAULT key word. (Such a default may be over-
ridden by an explicit DEFAULT clause attached to a particular column.)
To indicate that a type is an array, specify the type of the array elements using the ELEMENT key word.
For example, to define an array of 4-byte integers (int4), specify ELEMENT = int4. More details about
array types appear below.
To indicate the delimiter to be used between values in the external representation of arrays of this type,
delimiter can be set to a specific character. The default delimiter is the comma (,). Note that the
delimiter is associated with the array element type, not the array type itself.
Array Types
Whenever a user-defined base data type is created, PostgreSQL automatically creates an associated array
type, whose name consists of the base type’s name prepended with an underscore. The parser under-
stands this naming convention, and translates requests for columns of type foo[] into requests for type
_foo. The implicitly-created array type is variable length and uses the built-in input and output functions
array_in and array_out.
You might reasonably ask why there is an ELEMENT option, if the system makes the correct array type
automatically. The only case where it’s useful to use ELEMENT is when you are making a fixed-length type
that happens to be internally an array of a number of identical things, and you want to allow these things
to be accessed directly by subscripting, in addition to whatever operations you plan to provide for the type
as a whole. For example, type name allows its constituent char elements to be accessed this way. A 2-D
point type could allow its two component numbers to be accessed like point[0] and point[1]. Note
that this facility only works for fixed-length types whose internal form is exactly a sequence of identical
fixed-length fields. A subscriptable variable-length type must have the generalized internal representation
used by array_in and array_out. For historical reasons (i.e., this is clearly wrong but it’s far too
late to change it), subscripting of fixed-length array types starts from zero, rather than from one as for
variable-length arrays.
Parameters
name
The name (optionally schema-qualified) of a type to be created.
attribute_name
The name of an attribute (column) for the composite type.
data_type
The name of an existing data type to become a column of the composite type.
input_function
The name of a function that converts data from the type’s external textual form to its internal form.
output_function
The name of a function that converts data from the type’s internal form to its external textual form.
receive_function
The name of a function that converts data from the type’s external binary form to its internal form.
send_function
The name of a function that converts data from the type’s internal form to its external binary form.
analyze_function
The name of a function that performs statistical analysis for the data type.
internallength
A numeric constant that specifies the length in bytes of the new type’s internal representation. The
default assumption is that it is variable-length.
alignment
The storage alignment requirement of the data type. If specified, it must be char, int2, int4, or
double; the default is int4.
storage
The storage strategy for the data type. If specified, must be plain, external, extended, or main;
the default is plain.
default
The default value for the data type. If this is omitted, the default is null.
element
The type being created is an array; this specifies the type of the array elements.
delimiter
The delimiter character to be used between values in arrays made of this type.
Notes
User-defined type names cannot begin with the underscore character (_) and can only be 62 characters
long (or in general NAMEDATALEN - 2, rather than the NAMEDATALEN - 1 characters allowed for other
names). Type names beginning with underscore are reserved for internally-created array type names.
In PostgreSQL versions before 7.3, it was customary to avoid creating a shell type by replacing the func-
tions’ forward references to the type name with the placeholder pseudotype opaque. The cstring ar-
guments and results also had to be declared as opaque before 7.3. To support loading of old dump files,
CREATE TYPE will accept functions declared using opaque, but it will issue a notice and change the
function’s declaration to use the correct types.
Examples
This example creates a composite type and uses it in a function definition:
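A sketch (the names compfoo and getfoo, and the table foo with columns fooid and fooname, are illustrative):

CREATE TYPE compfoo AS (f1 int, f2 text);

CREATE FUNCTION getfoo() RETURNS SETOF compfoo AS
    'SELECT fooid, fooname FROM foo'
    LANGUAGE SQL;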
This example creates the base data type box and then uses the type in a table definition:
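A sketch, assuming the support functions my_box_in_function and my_box_out_function have already been created with CREATE FUNCTION:

CREATE TYPE box (
    INTERNALLENGTH = 16,
    INPUT = my_box_in_function,
    OUTPUT = my_box_out_function
);

CREATE TABLE myboxes (
    id integer,
    description box
);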
If the internal structure of box were an array of four float4 elements, we might instead include an ELEMENT clause in the type definition, as sketched below, which would allow a box value's component numbers to be accessed by subscripting. Otherwise the type behaves the same as before.
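A sketch of such a definition:

CREATE TYPE box (
    INTERNALLENGTH = 16,
    INPUT = my_box_in_function,
    OUTPUT = my_box_out_function,
    ELEMENT = float4
);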
This example creates a large object type and uses it in a table definition:
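A sketch, assuming hypothetical input and output functions lo_filein and lo_fileout:

CREATE TYPE bigobj (
    INPUT = lo_filein,
    OUTPUT = lo_fileout,
    INTERNALLENGTH = VARIABLE
);

CREATE TABLE big_objs (
    id integer,
    obj bigobj
);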
More examples, including suitable input and output functions, are in Section 31.11.
Compatibility
This CREATE TYPE command is a PostgreSQL extension. There is a CREATE TYPE statement in
SQL:1999 and later that is rather different in detail.
See Also
CREATE FUNCTION, DROP TYPE, ALTER TYPE
CREATE USER
Name
CREATE USER — define a new database user account
Synopsis
CREATE USER name [ [ WITH ] option [ ... ] ]

where option can be:

SYSID uid
| CREATEDB | NOCREATEDB
| CREATEUSER | NOCREATEUSER
| IN GROUP groupname [, ...]
| [ ENCRYPTED | UNENCRYPTED ] PASSWORD ’password’
| VALID UNTIL ’abstime’
Description
CREATE USER adds a new user to a PostgreSQL database cluster. Refer to Chapter 17 and Chapter 19
for information about managing users and authentication. You must be a database superuser to use this
command.
Parameters
name
The name of the new user.
uid
The SYSID clause can be used to choose the PostgreSQL user ID of the new user. This is normally
not necessary, but may be useful if you need to recreate the owner of an orphaned object.
If this is not specified, the highest assigned user ID plus one (with a minimum of 100) will be used
as default.
CREATEDB
NOCREATEDB
These clauses define a user’s ability to create databases. If CREATEDB is specified, the user being
defined will be allowed to create his own databases. Using NOCREATEDB will deny a user the ability
to create databases. If not specified, NOCREATEDB is the default.
CREATEUSER
NOCREATEUSER
These clauses determine whether a user will be permitted to create new users himself. CREATEUSER
will also make the user a superuser, who can override all access restrictions. If not specified,
NOCREATEUSER is the default.
groupname
A name of an existing group into which to insert the user as a new member. Multiple group names
may be listed.
password
Sets the user’s password. If you do not plan to use password authentication you can omit this option,
but then the user won’t be able to connect if you decide to switch to password authentication. The
password can be set or changed later, using ALTER USER.
ENCRYPTED
UNENCRYPTED
These key words control whether the password is stored encrypted in the system catalogs. (If neither
is specified, the default behavior is determined by the configuration parameter password_encryption.)
If the presented password string is already in MD5-encrypted format, then it is stored encrypted
as-is, regardless of whether ENCRYPTED or UNENCRYPTED is specified (since the system cannot de-
crypt the specified encrypted password string). This allows reloading of encrypted passwords during
dump/restore.
Note that older clients may lack support for the MD5 authentication mechanism that is needed to
work with passwords that are stored encrypted.
abstime
The VALID UNTIL clause sets an absolute time after which the user’s password is no longer valid. If
this clause is omitted the password will be valid for all time.
Notes
Use ALTER USER to change the attributes of a user, and DROP USER to remove a user. Use ALTER
GROUP to add the user to groups or remove the user from groups.
PostgreSQL includes a program createuser that has the same functionality as CREATE USER (in fact, it
calls this command) but can be run from the command shell.
The VALID UNTIL clause defines an expiration time for a password only, not for the user account per se.
In particular, the expiration time is not enforced when logging in using a non-password-based authentica-
tion method.
Examples
Create a user with no password:
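For example (the user name is illustrative):

CREATE USER jonathan;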
Create a user with a password that is valid until the end of 2004. After one second has ticked in 2005, the
password is no longer valid.
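For example (the user name and password are illustrative):

CREATE USER davide WITH PASSWORD 'jw8s0F4' VALID UNTIL '2005-01-01';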
Compatibility
The CREATE USER statement is a PostgreSQL extension. The SQL standard leaves the definition of users
to the implementation.
See Also
ALTER USER, DROP USER, createuser
CREATE VIEW
Name
CREATE VIEW — define a new view
Synopsis
CREATE [ OR REPLACE ] VIEW name [ ( column_name [, ...] ) ] AS query
Description
CREATE VIEW defines a view of a query. The view is not physically materialized. Instead, the query is run
every time the view is referenced in a query.
CREATE OR REPLACE VIEW is similar, but if a view of the same name already exists, it is replaced. You
can only replace a view with a new query that generates the identical set of columns (i.e., same column
names and data types).
If a schema name is given (for example, CREATE VIEW myschema.myview ...) then the view is cre-
ated in the specified schema. Otherwise it is created in the current schema. The view name must be distinct
from the name of any other view, table, sequence, or index in the same schema.
Parameters
name
The name (optionally schema-qualified) of a view to be created.
column_name
An optional list of names to be used for columns of the view. If not given, the column names are
deduced from the query.
query
A query (that is, a SELECT statement) which will provide the columns and rows of the view.
Refer to SELECT for more information about valid queries.
Notes
Currently, views are read only: the system will not allow an insert, update, or delete on a view. You can
get the effect of an updatable view by creating rules that rewrite inserts, etc. on the view into appropriate
actions on other tables. For more information see CREATE RULE.
Use the DROP VIEW statement to drop views.
Be careful that the names and types of the view's columns will be assigned the way you want. For example, a view defined as a bare SELECT of an untyped string literal is bad form in two ways: the column name defaults to ?column?, and the column data type defaults to unknown. If you want a string literal in a view's result, give it an explicit type and column name. Both cases are sketched below.
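(The view name vista is illustrative.)

CREATE VIEW vista AS SELECT 'Hello World';                -- bad form

CREATE VIEW vista AS SELECT text 'Hello World' AS hello;  -- better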
Access to tables referenced in the view is determined by permissions of the view owner. However, func-
tions called in the view are treated the same as if they had been called directly from the query using the
view. Therefore the user of a view must have permissions to call all functions used by the view.
Examples
Create a view consisting of all comedy films:
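For example (the view name and the value of kind are illustrative):

CREATE VIEW comedies AS
    SELECT * FROM films WHERE kind = 'Comedy';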
Compatibility
The SQL standard specifies some additional capabilities for the CREATE VIEW statement:
CHECK OPTION
This option has to do with updatable views. All INSERT and UPDATE commands on the view will
be checked to ensure data satisfy the view-defining condition (that is, the new data would be visible
through the view). If they do not, the update will be rejected.
LOCAL
Check for integrity on this view.
CASCADE
Check for integrity on this view and on any dependent view. CASCADE is assumed if neither CASCADE
nor LOCAL is specified.
See Also
DROP VIEW
DEALLOCATE
Name
DEALLOCATE — deallocate a prepared statement
Synopsis
DEALLOCATE [ PREPARE ] plan_name
Description
DEALLOCATE is used to deallocate a previously prepared SQL statement. If you do not explicitly deallocate
a prepared statement, it is deallocated when the session ends.
For more information on prepared statements, see PREPARE.
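A minimal sketch (the statement name fooplan is illustrative):

PREPARE fooplan (integer) AS SELECT * FROM films WHERE did = $1;
DEALLOCATE fooplan;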
Parameters
PREPARE
This key word is ignored.
plan_name
The name of the prepared statement to deallocate.
Compatibility
The SQL standard includes a DEALLOCATE statement, but it is only for use in embedded SQL.
See Also
EXECUTE, PREPARE
DECLARE
Name
DECLARE — define a cursor
Synopsis
DECLARE name [ BINARY ] [ INSENSITIVE ] [ [ NO ] SCROLL ]
CURSOR [ { WITH | WITHOUT } HOLD ] FOR query
[ FOR { READ ONLY | UPDATE [ OF column [, ...] ] } ]
Description
DECLARE allows a user to create cursors, which can be used to retrieve a small number of rows at a time
out of a larger query. Cursors can return data either in text or in binary format using FETCH.
Normal cursors return data in text format, the same as a SELECT would produce. Since data is stored na-
tively in binary format, the system must do a conversion to produce the text format. Once the information
comes back in text form, the client application may need to convert it to a binary format to manipulate it.
In addition, data in the text format is often larger in size than in the binary format. Binary cursors return
the data in a binary representation that may be more easily manipulated. Nevertheless, if you intend to
display the data as text anyway, retrieving it in text form will save you some effort on the client side.
As an example, if a query returns a value of one from an integer column, you would get a string of 1
with a default cursor whereas with a binary cursor you would get a 4-byte field containing the internal
representation of the value (in big-endian byte order).
Binary cursors should be used carefully. Many applications, including psql, are not prepared to handle
binary cursors and expect data to come back in the text format.
Note: When the client application uses the “extended query” protocol to issue a FETCH command, the
Bind protocol message specifies whether data is to be retrieved in text or binary format. This choice
overrides the way that the cursor is defined. The concept of a binary cursor as such is thus obsolete
when using extended query protocol — any cursor can be treated as either text or binary.
Parameters
name
The name of the cursor to be created.
BINARY
Causes the cursor to return data in binary rather than in text format.
INSENSITIVE
Indicates that data retrieved from the cursor should be unaffected by updates to the tables underlying
the cursor while the cursor exists. In PostgreSQL, all cursors are insensitive; this key word currently
has no effect and is present for compatibility with the SQL standard.
SCROLL
NO SCROLL
SCROLL specifies that the cursor may be used to retrieve rows in a nonsequential fashion (e.g., back-
ward). Depending upon the complexity of the query’s execution plan, specifying SCROLL may impose
a performance penalty on the query’s execution time. NO SCROLL specifies that the cursor cannot be
used to retrieve rows in a nonsequential fashion.
WITH HOLD
WITHOUT HOLD
WITH HOLD specifies that the cursor may continue to be used after the transaction that created it suc-
cessfully commits. WITHOUT HOLD specifies that the cursor cannot be used outside of the transaction
that created it. If neither WITHOUT HOLD nor WITH HOLD is specified, WITHOUT HOLD is the default.
query
A SELECT command that will provide the rows to be returned by the cursor. Refer to SELECT for
further information about valid queries.
FOR READ ONLY
FOR UPDATE
FOR READ ONLY indicates that the cursor will be used in a read-only mode. FOR UPDATE indicates
that the cursor will be used to update tables. Since cursor updates are not currently supported in
PostgreSQL, specifying FOR UPDATE will cause an error message and specifying FOR READ ONLY
has no effect.
column
Column(s) to be updated by the cursor. Since cursor updates are not currently supported in Post-
greSQL, the FOR UPDATE clause provokes an error message.
The key words BINARY, INSENSITIVE, and SCROLL may appear in any order.
Notes
Unless WITH HOLD is specified, the cursor created by this command can only be used within the current
transaction. Thus, DECLARE without WITH HOLD is useless outside a transaction block: the cursor would
survive only to the completion of the statement. Therefore PostgreSQL reports an error if this command
is used outside a transaction block. Use BEGIN, COMMIT and ROLLBACK to define a transaction block.
If WITH HOLD is specified and the transaction that created the cursor successfully commits, the cursor can
continue to be accessed by subsequent transactions in the same session. (But if the creating transaction
is aborted, the cursor is removed.) A cursor created with WITH HOLD is closed when an explicit CLOSE
command is issued on it, or the session ends. In the current implementation, the rows represented by a
held cursor are copied into a temporary file or memory area so that they remain available for subsequent
transactions.
The SCROLL option should be specified when defining a cursor that will be used to fetch backwards. This
is required by the SQL standard. However, for compatibility with earlier versions, PostgreSQL will allow
backward fetches without SCROLL, if the cursor’s query plan is simple enough that no extra overhead is
needed to support it. However, application developers are advised not to rely on using backward fetches
from a cursor that has not been created with SCROLL. If NO SCROLL is specified, then backward fetches
are disallowed in any case.
The SQL standard only makes provisions for cursors in embedded SQL. The PostgreSQL server does not
implement an OPEN statement for cursors; a cursor is considered to be open when it is declared. However,
ECPG, the embedded SQL preprocessor for PostgreSQL, supports the standard SQL cursor conventions,
including those involving DECLARE and OPEN statements.
Examples
To declare a cursor:
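For example (the cursor name liahona is illustrative):

DECLARE liahona CURSOR FOR SELECT * FROM films;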
Compatibility
The SQL standard allows cursors only in embedded SQL and in modules. PostgreSQL permits cursors to
be used interactively.
The SQL standard allows cursors to update table data. All PostgreSQL cursors are read only.
Binary cursors are a PostgreSQL extension.
See Also
CLOSE, FETCH, MOVE
DELETE
Name
DELETE — delete rows of a table
Synopsis
DELETE FROM [ ONLY ] table [ WHERE condition ]
Description
DELETE deletes rows that satisfy the WHERE clause from the specified table. If the WHERE clause is absent,
the effect is to delete all rows in the table. The result is a valid, but empty table.
Tip: TRUNCATE is a PostgreSQL extension that provides a faster mechanism to remove all rows from
a table.
By default, DELETE will delete rows in the specified table and all its subtables. If you wish to delete only
from the specific table mentioned, you must use the ONLY clause.
You must have the DELETE privilege on the table to delete from it, as well as the SELECT privilege for any
table whose values are read in the condition.
Parameters
table
The name (optionally schema-qualified) of an existing table.
condition
A value expression that returns a value of type boolean that determines the rows which are to be
deleted.
Outputs
On successful completion, a DELETE command returns a command tag of the form
DELETE count
The count is the number of rows deleted. If count is 0, no rows matched the condition (this is not
considered an error).
Notes
PostgreSQL lets you reference columns of other tables in the WHERE condition. For example, to delete all films produced by a given producer, one can compare a column of films directly against a column of producers in the WHERE clause. What is essentially happening here is a join between films and producers, with all successfully joined films rows being marked for deletion. This syntax is not standard; a more standard way is to use a sub-select. Both styles are sketched below.
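(The column names producer_id, id, and name and the value 'foo' are illustrative.)

DELETE FROM films
    WHERE producer_id = producers.id AND producers.name = 'foo';

DELETE FROM films
    WHERE producer_id IN (SELECT id FROM producers WHERE name = 'foo');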
In some cases the join style is easier to write or faster to execute than the sub-select style. One objection to
the join style is that there is no explicit list of what tables are being used, which makes the style somewhat
error-prone; also it cannot handle self-joins.
Examples
Delete all films but musicals:
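For example (the value 'Musical' is illustrative):

DELETE FROM films WHERE kind <> 'Musical';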
Compatibility
This command conforms to the SQL standard, except that the ability to reference other tables in the WHERE
clause is a PostgreSQL extension.
DROP AGGREGATE
Name
DROP AGGREGATE — remove an aggregate function
Synopsis
DROP AGGREGATE name ( type ) [ CASCADE | RESTRICT ]
Description
DROP AGGREGATE will delete an existing aggregate function. To execute this command the current user
must be the owner of the aggregate function.
Parameters
name
The name (optionally schema-qualified) of an existing aggregate function.
type
The argument data type of the aggregate function, or * if the function accepts any data type.
CASCADE
Automatically drop objects that depend on the aggregate function.
RESTRICT
Refuse to drop the aggregate function if any objects depend on it. This is the default.
Examples
To remove the aggregate function myavg for type integer:
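Following the synopsis above:

DROP AGGREGATE myavg(integer);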
Compatibility
There is no DROP AGGREGATE statement in the SQL standard.
See Also
ALTER AGGREGATE, CREATE AGGREGATE
DROP CAST
Name
DROP CAST — remove a cast
Synopsis
DROP CAST (sourcetype AS targettype) [ CASCADE | RESTRICT ]
Description
DROP CAST removes a previously defined cast.
To be able to drop a cast, you must own the source or the target data type. These are the same privileges
that are required to create a cast.
Parameters
sourcetype
The name of the source data type of the cast.
targettype
The name of the target data type of the cast.
CASCADE
RESTRICT
These key words do not have any effect, since there are no dependencies on casts.
Examples
To drop the cast from type text to type int:
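Following the synopsis above:

DROP CAST (text AS int);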
Compatibility
The DROP CAST command conforms to the SQL standard.
See Also
CREATE CAST
DROP CONVERSION
Name
DROP CONVERSION — remove a conversion
Synopsis
DROP CONVERSION name [ CASCADE | RESTRICT ]
Description
DROP CONVERSION removes a previously defined conversion. To be able to drop a conversion, you must
own the conversion.
Parameters
name
The name of the conversion. The conversion name may be schema-qualified.
CASCADE
RESTRICT
These key words do not have any effect, since there are no dependencies on conversions.
Examples
To drop the conversion named myname:
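Following the synopsis above:

DROP CONVERSION myname;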
Compatibility
There is no DROP CONVERSION statement in the SQL standard.
See Also
ALTER CONVERSION, CREATE CONVERSION
DROP DATABASE
Name
DROP DATABASE — remove a database
Synopsis
DROP DATABASE name
Description
DROP DATABASE drops a database. It removes the catalog entries for the database and deletes the directory
containing the data. It can only be executed by the database owner. Also, it cannot be executed while you
or anyone else are connected to the target database. (Connect to template1 or any other database to issue
this command.)
DROP DATABASE cannot be undone. Use it with care!
Parameters
name
The name of the database to remove.
Notes
DROP DATABASE cannot be executed inside a transaction block.
This command cannot be executed while connected to the target database. Thus, it might be more conve-
nient to use the program dropdb instead, which is a wrapper around this command.
Compatibility
There is no DROP DATABASE statement in the SQL standard.
See Also
CREATE DATABASE
DROP DOMAIN
Name
DROP DOMAIN — remove a domain
Synopsis
DROP DOMAIN name [, ...] [ CASCADE | RESTRICT ]
Description
DROP DOMAIN will remove a domain. Only the owner of a domain can remove it.
Parameters
name
The name (optionally schema-qualified) of an existing domain.
CASCADE
Automatically drop objects that depend on the domain (such as table columns).
RESTRICT
Refuse to drop the domain if any objects depend on it. This is the default.
Examples
To remove the domain box:
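Following the synopsis above:

DROP DOMAIN box;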
Compatibility
This command conforms to the SQL standard.
See Also
CREATE DOMAIN
DROP FUNCTION
Name
DROP FUNCTION — remove a function
Synopsis
DROP FUNCTION name ( [ type [, ...] ] ) [ CASCADE | RESTRICT ]
Description
DROP FUNCTION removes the definition of an existing function. To execute this command the user must
be the owner of the function. The argument types to the function must be specified, since several different
functions may exist with the same name and different argument lists.
Parameters
name
The name (optionally schema-qualified) of an existing function.
type
The data type of an argument of the function.
CASCADE
Automatically drop objects that depend on the function (such as operators or triggers).
RESTRICT
Refuse to drop the function if any objects depend on it. This is the default.
Examples
This command removes the square root function:
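For example (the integer argument type is illustrative):

DROP FUNCTION sqrt(integer);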
Compatibility
A DROP FUNCTION statement is defined in the SQL standard, but it is not compatible with this command.
See Also
CREATE FUNCTION, ALTER FUNCTION
DROP GROUP
Name
DROP GROUP — remove a user group
Synopsis
DROP GROUP name
Description
DROP GROUP removes the specified group. The users in the group are not removed.
Parameters
name
The name of an existing group.
Notes
It is unwise to drop a group that has any granted permissions on objects. Currently, this is not enforced,
but it is likely that future versions of PostgreSQL will check for the error.
Examples
To drop a group:
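For example (the group name staff is illustrative):

DROP GROUP staff;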
Compatibility
There is no DROP GROUP statement in the SQL standard.
See Also
ALTER GROUP, CREATE GROUP
DROP INDEX
Name
DROP INDEX — remove an index
Synopsis
DROP INDEX name [, ...] [ CASCADE | RESTRICT ]
Description
DROP INDEX drops an existing index from the database system. To execute this command you must be
the owner of the index.
Parameters
name
The name (optionally schema-qualified) of an index to remove.
CASCADE
Automatically drop objects that depend on the index.
RESTRICT
Refuse to drop the index if any objects depend on it. This is the default.
Examples
This command will remove the index title_idx:
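Following the synopsis above:

DROP INDEX title_idx;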
Compatibility
DROP INDEX is a PostgreSQL language extension. There are no provisions for indexes in the SQL stan-
dard.
See Also
CREATE INDEX
DROP LANGUAGE
Name
DROP LANGUAGE — remove a procedural language
Synopsis
DROP [ PROCEDURAL ] LANGUAGE name [ CASCADE | RESTRICT ]
Description
DROP LANGUAGE will remove the definition of the previously registered procedural language called name.
Parameters
name
The name of an existing procedural language. For backward compatibility, the name may be enclosed
by single quotes.
CASCADE
Automatically drop objects that depend on the language (such as functions in the language).
RESTRICT
Refuse to drop the language if any objects depend on it. This is the default.
Examples
This command removes the procedural language plsample:
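Following the synopsis above:

DROP LANGUAGE plsample;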
Compatibility
There is no DROP LANGUAGE statement in the SQL standard.
See Also
ALTER LANGUAGE, CREATE LANGUAGE, droplang
DROP OPERATOR
Name
DROP OPERATOR — remove an operator
Synopsis
DROP OPERATOR name ( { lefttype | NONE } , { righttype | NONE } ) [ CASCADE | RESTRICT ]
Description
DROP OPERATOR drops an existing operator from the database system. To execute this command you
must be the owner of the operator.
Parameters
name
The name (optionally schema-qualified) of an existing operator.
lefttype
The data type of the operator’s left operand; write NONE if the operator has no left operand.
righttype
The data type of the operator’s right operand; write NONE if the operator has no right operand.
CASCADE
Automatically drop objects that depend on the operator.
RESTRICT
Refuse to drop the operator if any objects depend on it. This is the default.
Examples
Remove the power operator a^b for type integer:
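Following the synopsis above:

DROP OPERATOR ^ (integer, integer);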
Remove the left unary bitwise complement operator ~b for type bit:
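Since a left unary operator has no left operand, NONE is written in its place:

DROP OPERATOR ~ (none, bit);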
Compatibility
There is no DROP OPERATOR statement in the SQL standard.
See Also
CREATE OPERATOR, ALTER OPERATOR
DROP OPERATOR CLASS
Name
DROP OPERATOR CLASS — remove an operator class
Synopsis
DROP OPERATOR CLASS name USING index_method [ CASCADE | RESTRICT ]
Description
DROP OPERATOR CLASS drops an existing operator class. To execute this command you must be the
owner of the operator class.
Parameters
name
The name (optionally schema-qualified) of an existing operator class.
index_method
The name of the index access method the operator class is for.
CASCADE
Automatically drop objects that depend on the operator class.
RESTRICT
Refuse to drop the operator class if any objects depend on it. This is the default.
Examples
Remove the B-tree operator class widget_ops:
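Following the synopsis above:

DROP OPERATOR CLASS widget_ops USING btree;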
This command will not succeed if there are any existing indexes that use the operator class. Add CASCADE
to drop such indexes along with the operator class.
Compatibility
There is no DROP OPERATOR CLASS statement in the SQL standard.
See Also
ALTER OPERATOR CLASS, CREATE OPERATOR CLASS
DROP RULE
Name
DROP RULE — remove a rewrite rule
Synopsis
DROP RULE name ON relation [ CASCADE | RESTRICT ]
Description
DROP RULE drops a rewrite rule.
Parameters
name
The name of the rule to drop.
relation
The name (optionally schema-qualified) of the table or view that the rule applies to.
CASCADE
Automatically drop objects that depend on the rule.
RESTRICT
Refuse to drop the rule if any objects depend on it. This is the default.
Examples
To drop the rewrite rule newrule:
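For example (the table name mytable is illustrative):

DROP RULE newrule ON mytable;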
Compatibility
There is no DROP RULE statement in the SQL standard.
See Also
CREATE RULE
DROP SCHEMA
Name
DROP SCHEMA — remove a schema
Synopsis
DROP SCHEMA name [, ...] [ CASCADE | RESTRICT ]
Description
DROP SCHEMA removes schemas from the database.
A schema can only be dropped by its owner or a superuser. Note that the owner can drop the schema (and
thereby all contained objects) even if he does not own some of the objects within the schema.
Parameters
name
The name of a schema.
CASCADE
Automatically drop objects (tables, functions, etc.) that are contained in the schema.
RESTRICT
Refuse to drop the schema if it contains any objects. This is the default.
Examples
To remove schema mystuff from the database, along with everything it contains:
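Following the synopsis above:

DROP SCHEMA mystuff CASCADE;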
Compatibility
DROP SCHEMA is fully conforming with the SQL standard, except that the standard only allows one
schema to be dropped per command.
See Also
ALTER SCHEMA, CREATE SCHEMA
DROP SEQUENCE
Name
DROP SEQUENCE — remove a sequence
Synopsis
DROP SEQUENCE name [, ...] [ CASCADE | RESTRICT ]
Description
DROP SEQUENCE removes sequence number generators.
Parameters
name
The name (optionally schema-qualified) of a sequence.
CASCADE
Automatically drop objects that depend on the sequence.
RESTRICT
Refuse to drop the sequence if any objects depend on it. This is the default.
Examples
To remove the sequence serial:
DROP SEQUENCE serial;
Compatibility
DROP SEQUENCE conforms with SQL:2003, except that the standard only allows one sequence to be
dropped per command.
See Also
CREATE SEQUENCE
DROP TABLE
Name
DROP TABLE — remove a table
Synopsis
DROP TABLE name [, ...] [ CASCADE | RESTRICT ]
Description
DROP TABLE removes tables from the database. Only its owner may destroy a table. To empty a table of
rows, without destroying the table, use DELETE.
DROP TABLE always removes any indexes, rules, triggers, and constraints that exist for the target table.
However, to drop a table that is referenced by a view or a foreign-key constraint of another table, CASCADE
must be specified. (CASCADE will remove a dependent view entirely, but in the foreign-key case it will only
remove the foreign-key constraint, not the other table entirely.)
Parameters
name
The name (optionally schema-qualified) of the table to drop.
CASCADE
Automatically drop objects that depend on the table (such as views).
RESTRICT
Refuse to drop the table if any objects depend on it. This is the default.
Examples
To destroy two tables, films and distributors:
DROP TABLE films, distributors;
Compatibility
This command conforms to the SQL standard, except that the standard only allows one table to be dropped
per command.
See Also
ALTER TABLE, CREATE TABLE
DROP TABLESPACE
Name
DROP TABLESPACE — remove a tablespace
Synopsis
DROP TABLESPACE tablespacename
Description
DROP TABLESPACE removes a tablespace from the system.
A tablespace can only be dropped by its owner or a superuser. The tablespace must be empty of all
database objects before it can be dropped. It is possible that objects in other databases may still reside in
the tablespace even if no objects in the current database are using the tablespace.
Parameters
tablespacename
The name of a tablespace.
Examples
To remove tablespace mystuff from the system:
DROP TABLESPACE mystuff;
Compatibility
DROP TABLESPACE is a PostgreSQL extension.
See Also
CREATE TABLESPACE, ALTER TABLESPACE
DROP TRIGGER
Name
DROP TRIGGER — remove a trigger
Synopsis
DROP TRIGGER name ON table [ CASCADE | RESTRICT ]
Description
DROP TRIGGER will remove an existing trigger definition. To execute this command, the current user
must be the owner of the table for which the trigger is defined.
Parameters
name
The name of the trigger to remove.
table
The name (optionally schema-qualified) of the table for which the trigger is defined.
CASCADE
Automatically drop objects that depend on the trigger.
RESTRICT
Refuse to drop the trigger if any objects depend on it. This is the default.
Examples
Destroy the trigger if_dist_exists on the table films:
DROP TRIGGER if_dist_exists ON films;
Compatibility
The DROP TRIGGER statement in PostgreSQL is incompatible with the SQL standard. In the SQL stan-
dard, trigger names are not local to tables, so the command is simply DROP TRIGGER name.
See Also
CREATE TRIGGER
DROP TYPE
Name
DROP TYPE — remove a data type
Synopsis
DROP TYPE name [, ...] [ CASCADE | RESTRICT ]
Description
DROP TYPE will remove a user-defined data type. Only the owner of a type can remove it.
Parameters
name
The name (optionally schema-qualified) of the data type to remove.
CASCADE
Automatically drop objects that depend on the type (such as table columns, functions, operators).
RESTRICT
Refuse to drop the type if any objects depend on it. This is the default.
Examples
To remove the data type box:
DROP TYPE box;
Compatibility
This command is similar to the corresponding command in the SQL standard, but note that the CREATE
TYPE command and the data type extension mechanisms in PostgreSQL differ from the SQL standard.
See Also
CREATE TYPE, ALTER TYPE
DROP USER
Name
DROP USER — remove a database user account
Synopsis
DROP USER name
Description
DROP USER removes the specified user. It does not remove tables, views, or other objects owned by the
user. If the user owns any database, an error is raised.
Parameters
name
The name of the user to remove.
Notes
PostgreSQL includes a program dropuser that has the same functionality as this command (in fact, it calls
this command) but can be run from the command shell.
To drop a user who owns a database, first drop the database or change its ownership.
It is unwise to drop a user who either owns any database objects or has any granted permissions on objects.
Currently, this is only enforced for the case of owners of databases, but it is likely that future versions of
PostgreSQL will check other cases.
Examples
To drop a user account:
DROP USER jonathan;
Compatibility
The DROP USER statement is a PostgreSQL extension. The SQL standard leaves the definition of users to
the implementation.
See Also
ALTER USER, CREATE USER
DROP VIEW
Name
DROP VIEW — remove a view
Synopsis
DROP VIEW name [, ...] [ CASCADE | RESTRICT ]
Description
DROP VIEW drops an existing view. To execute this command you must be the owner of the view.
Parameters
name
The name (optionally schema-qualified) of the view to remove.
CASCADE
Automatically drop objects that depend on the view (such as other views).
RESTRICT
Refuse to drop the view if any objects depend on it. This is the default.
Examples
This command will remove the view called kinds:
DROP VIEW kinds;
Compatibility
This command conforms to the SQL standard, except that the standard only allows one view to be dropped
per command.
See Also
CREATE VIEW
END
Name
END — commit the current transaction
Synopsis
END [ WORK | TRANSACTION ]
Description
END commits the current transaction. All changes made by the transaction become visible to others and
are guaranteed to be durable if a crash occurs. This command is a PostgreSQL extension that is equivalent
to COMMIT.
Parameters
WORK
TRANSACTION
Optional key words. They have no effect.
Notes
Use ROLLBACK to abort a transaction.
Issuing END when not inside a transaction does no harm, but it will provoke a warning message.
Examples
To commit the current transaction and make all changes permanent:
END;
Compatibility
END is a PostgreSQL extension that provides functionality equivalent to COMMIT, which is specified in
the SQL standard.
See Also
BEGIN, COMMIT, ROLLBACK
EXECUTE
Name
EXECUTE — execute a prepared statement
Synopsis
EXECUTE plan_name [ (parameter [, ...] ) ]
Description
EXECUTE is used to execute a previously prepared statement. Since prepared statements only exist for the
duration of a session, the prepared statement must have been created by a PREPARE statement executed
earlier in the current session.
If the PREPARE statement that created the statement specified some parameters, a compatible set of pa-
rameters must be passed to the EXECUTE statement, or else an error is raised. Note that (unlike functions)
prepared statements are not overloaded based on the type or number of their parameters; the name of a
prepared statement must be unique within a database session.
For more information on the creation and usage of prepared statements, see PREPARE.
Parameters
plan_name
The name of the prepared statement to execute.
parameter
The actual value of a parameter to the prepared statement. This must be an expression yielding a
value of a type compatible with the data type specified for this parameter position in the PREPARE
command that created the prepared statement.
Outputs
The command tag returned by EXECUTE is that of the prepared statement, and not EXECUTE.
Examples
Examples are given in the Examples section of the PREPARE documentation.
Compatibility
The SQL standard includes an EXECUTE statement, but it is only for use in embedded SQL. This version
of the EXECUTE statement also uses a somewhat different syntax.
See Also
DEALLOCATE, PREPARE
EXPLAIN
Name
EXPLAIN — show the execution plan of a statement
Synopsis
EXPLAIN [ ANALYZE ] [ VERBOSE ] statement
Description
This command displays the execution plan that the PostgreSQL planner generates for the supplied state-
ment. The execution plan shows how the table(s) referenced by the statement will be scanned — by plain
sequential scan, index scan, etc. — and if multiple tables are referenced, what join algorithms will be used
to bring together the required rows from each input table.
The most critical part of the display is the estimated statement execution cost, which is the planner’s
guess at how long it will take to run the statement (measured in units of disk page fetches). Actually two
numbers are shown: the start-up time before the first row can be returned, and the total time to return all
the rows. For most queries the total time is what matters, but in contexts such as a subquery in EXISTS,
the planner will choose the smallest start-up time instead of the smallest total time (since the executor will
stop after getting one row, anyway). Also, if you limit the number of rows to return with a LIMIT clause,
the planner makes an appropriate interpolation between the endpoint costs to estimate which plan is really
the cheapest.
The ANALYZE option causes the statement to be actually executed, not only planned. The total elapsed
time expended within each plan node (in milliseconds) and total number of rows it actually returned are
added to the display. This is useful for seeing whether the planner’s estimates are close to reality.
Important: Keep in mind that the statement is actually executed when ANALYZE is used. Although
EXPLAIN will discard any output that a SELECT would return, other side effects of the statement will
happen as usual. If you wish to use EXPLAIN ANALYZE on an INSERT, UPDATE, DELETE, or EXECUTE
statement without letting the command affect your data, use this approach:
BEGIN;
EXPLAIN ANALYZE ...;
ROLLBACK;
Parameters
ANALYZE
Carry out the command and show the actual run times.
VERBOSE
Show the full internal representation of the plan tree, rather than just a summary. Usually this option
is only useful for specialized debugging purposes. The VERBOSE output is either pretty-printed or
not, depending on the setting of the explain_pretty_print configuration parameter.
statement
Any SELECT, INSERT, UPDATE, DELETE, EXECUTE, or DECLARE statement, whose execution plan
you wish to see.
Notes
There is only sparse documentation on the optimizer’s use of cost information in PostgreSQL. Refer to
Section 13.1 for more information.
In order to allow the PostgreSQL query planner to make reasonably informed decisions when optimizing
queries, the ANALYZE statement should be run to record statistics about the distribution of data within
the table. If you have not done this (or if the statistical distribution of the data in the table has changed
significantly since the last time ANALYZE was run), the estimated costs are unlikely to conform to the real
properties of the query, and consequently an inferior query plan may be chosen.
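For example, statistics for the table foo used in the examples below (the table name here is purely illustrative) can be gathered with:
ANALYZE foo;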
Prior to PostgreSQL 7.3, the plan was emitted in the form of a NOTICE message. Now it appears as a
query result (formatted like a table with a single text column).
Examples
To show the plan for a simple query on a table with a single integer column and 10000 rows:
EXPLAIN SELECT * FROM foo;
QUERY PLAN
---------------------------------------------------------
Seq Scan on foo (cost=0.00..155.00 rows=10000 width=4)
(1 row)
If there is an index and we use a query with an indexable WHERE condition, EXPLAIN might show a
different plan:
EXPLAIN SELECT * FROM foo WHERE i = 4;
QUERY PLAN
--------------------------------------------------------------
Index Scan using fi on foo (cost=0.00..5.98 rows=1 width=4)
Index Cond: (i = 4)
(2 rows)
And here is an example of a query plan for a query using an aggregate function:
EXPLAIN SELECT sum(i) FROM foo WHERE i < 10;
QUERY PLAN
---------------------------------------------------------------------
Aggregate (cost=23.93..23.93 rows=1 width=4)
-> Index Scan using fi on foo (cost=0.00..23.92 rows=6 width=4)
Index Cond: (i < 10)
(3 rows)
Here is an example of using EXPLAIN EXECUTE to display the execution plan for a prepared query:
PREPARE query(int, int) AS SELECT sum(bar) FROM test
    WHERE id > $1 AND id < $2
    GROUP BY foo;
EXPLAIN ANALYZE EXECUTE query(100, 200);
QUERY PLAN
----------------------------------------------------------------------------------------
HashAggregate (cost=39.53..39.53 rows=1 width=8) (actual time=0.661..0.672 rows=7 loop
-> Index Scan using test_pkey on test (cost=0.00..32.97 rows=1311 width=8) (actual
Index Cond: ((id > $1) AND (id < $2))
Total runtime: 0.851 ms
(4 rows)
Of course, the specific numbers shown here depend on the actual contents of the tables involved. Also
note that the numbers, and even the selected query strategy, may vary between PostgreSQL releases due
to planner improvements. In addition, the ANALYZE command uses random sampling to estimate data
statistics; therefore, it is possible for cost estimates to change after a fresh run of ANALYZE, even if the
actual distribution of data in the table has not changed.
Compatibility
There is no EXPLAIN statement defined in the SQL standard.
See Also
ANALYZE
FETCH
Name
FETCH — retrieve rows from a query using a cursor
Synopsis
FETCH [ direction { FROM | IN } ] cursorname
where direction can be empty or one of:
NEXT
PRIOR
FIRST
LAST
ABSOLUTE count
RELATIVE count
count
ALL
FORWARD
FORWARD count
FORWARD ALL
BACKWARD
BACKWARD count
BACKWARD ALL
Description
FETCH retrieves rows using a previously-created cursor.
A cursor has an associated position, which is used by FETCH. The cursor position can be before the first
row of the query result, on any particular row of the result, or after the last row of the result. When created,
a cursor is positioned before the first row. After fetching some rows, the cursor is positioned on the row
most recently retrieved. If FETCH runs off the end of the available rows then the cursor is left positioned
after the last row, or before the first row if fetching backward. FETCH ALL or FETCH BACKWARD ALL
will always leave the cursor positioned after the last row or before the first row.
The forms NEXT, PRIOR, FIRST, LAST, ABSOLUTE, RELATIVE fetch a single row after moving the cursor
appropriately. If there is no such row, an empty result is returned, and the cursor is left positioned before
the first row or after the last row as appropriate.
The forms using FORWARD and BACKWARD retrieve the indicated number of rows moving in the forward or
backward direction, leaving the cursor positioned on the last-returned row (or after/before all rows, if the
count exceeds the number of rows available).
RELATIVE 0, FORWARD 0, and BACKWARD 0 all request fetching the current row without moving the
cursor, that is, re-fetching the most recently fetched row. This will succeed unless the cursor is positioned
before the first row or after the last row; in which case, no row is returned.
Parameters
direction
direction defines the fetch direction and number of rows to fetch. It can be one of the following:
NEXT
Fetch the next row. This is the default if direction is omitted.
PRIOR
Fetch the prior row.
FIRST
Fetch the first row of the query (same as ABSOLUTE 1).
LAST
Fetch the last row of the query (same as ABSOLUTE -1).
ABSOLUTE count
Fetch the count’th row of the query, or the abs(count)’th row from the end if count is
negative. Position before first row or after last row if count is out of range; in particular,
ABSOLUTE 0 positions before the first row.
RELATIVE count
Fetch the count’th succeeding row, or the abs(count)’th prior row if count is negative.
RELATIVE 0 re-fetches the current row, if any.
count
Fetch the next count rows (same as FORWARD count).
ALL
Fetch all remaining rows (same as FORWARD ALL).
FORWARD
Fetch the next row (same as NEXT).
FORWARD count
Fetch the next count rows. FORWARD 0 re-fetches the current row.
FORWARD ALL
Fetch all remaining rows.
BACKWARD
Fetch the prior row (same as PRIOR).
BACKWARD count
Fetch the prior count rows (scanning backwards). BACKWARD 0 re-fetches the current row.
BACKWARD ALL
Fetch all prior rows (scanning backwards).
count
count is a possibly-signed integer constant, determining the location or number of rows to fetch.
For FORWARD and BACKWARD cases, specifying a negative count is equivalent to changing the sense
of FORWARD and BACKWARD.
cursorname
An open cursor’s name.
Outputs
On successful completion, a FETCH command returns a command tag of the form
FETCH count
The count is the number of rows fetched (possibly zero). Note that in psql, the command tag will not
actually be displayed, since psql displays the fetched rows instead.
Notes
The cursor should be declared with the SCROLL option if one intends to use any variants of FETCH other
than FETCH NEXT or FETCH FORWARD with a positive count. For simple queries PostgreSQL will allow
backwards fetch from cursors not declared with SCROLL, but this behavior is best not relied on. If the
cursor is declared with NO SCROLL, no backward fetches are allowed.
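As an illustration (the cursor and table names here are arbitrary), a cursor declared with SCROLL can be fetched from in either direction:
BEGIN;
DECLARE c SCROLL CURSOR FOR SELECT * FROM films;
FETCH LAST FROM c;
FETCH BACKWARD 2 FROM c;
CLOSE c;
COMMIT;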
ABSOLUTE fetches are not any faster than navigating to the desired row with a relative move: the under-
lying implementation must traverse all the intermediate rows anyway. Negative absolute fetches are even
worse: the query must be read to the end to find the last row, and then traversed backward from there.
However, rewinding to the start of the query (as with FETCH ABSOLUTE 0) is fast.
Updating data via a cursor is currently not supported by PostgreSQL.
DECLARE is used to define a cursor. Use MOVE to change cursor position without retrieving data.
Examples
The following example traverses a table using a cursor.
BEGIN WORK;
-- Set up a cursor:
DECLARE liahona SCROLL CURSOR FOR SELECT * FROM films;
FETCH FORWARD 5 FROM liahona;
CLOSE liahona;
COMMIT WORK;
Compatibility
The SQL standard defines FETCH for use in embedded SQL only. The variant of FETCH described here
returns the data as if it were a SELECT result rather than placing it in host variables. Other than this point,
FETCH is fully upward-compatible with the SQL standard.
The FETCH forms involving FORWARD and BACKWARD, as well as the forms FETCH count and FETCH
ALL, in which FORWARD is implicit, are PostgreSQL extensions.
The SQL standard allows only FROM preceding the cursor name; the option to use IN is an extension.
See Also
CLOSE, DECLARE, MOVE
GRANT
Name
GRANT — define access privileges
Synopsis
GRANT { { SELECT | INSERT | UPDATE | DELETE | RULE | REFERENCES | TRIGGER }
[,...] | ALL [ PRIVILEGES ] }
ON [ TABLE ] tablename [, ...]
TO { username | GROUP groupname | PUBLIC } [, ...] [ WITH GRANT OPTION ]
Description
The GRANT command gives specific privileges on an object (table, view, sequence, database, function,
procedural language, schema, or tablespace) to one or more users or groups of users. These privileges are
added to those already granted, if any.
The key word PUBLIC indicates that the privileges are to be granted to all users, including those that may
be created later. PUBLIC may be thought of as an implicitly defined group that always includes all users.
Any particular user will have the sum of privileges granted directly to him, privileges granted to any group
he is presently a member of, and privileges granted to PUBLIC.
If WITH GRANT OPTION is specified, the recipient of the privilege may in turn grant it to others. Without
a grant option, the recipient cannot do that. At present, grant options can only be granted to individual
users, not to groups or PUBLIC.
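For example, the following (with an illustrative user name) allows user manuel to pass the SELECT privilege on films along to other users:
GRANT SELECT ON films TO manuel WITH GRANT OPTION;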
There is no need to grant privileges to the owner of an object (usually the user that created it), as the owner
has all privileges by default. (The owner could, however, choose to revoke some of his own privileges for
safety.) The right to drop an object, or to alter its definition in any way is not described by a grantable
privilege; it is inherent in the owner, and cannot be granted or revoked. The owner implicitly has all grant
options for the object, too.
Depending on the type of object, the initial default privileges may include granting some privileges to
PUBLIC. The default is no public access for tables, schemas, and tablespaces; TEMP table creation privilege
for databases; EXECUTE privilege for functions; and USAGE privilege for languages. The object owner may
of course revoke these privileges. (For maximum security, issue the REVOKE in the same transaction that
creates the object; then there is no window in which another user may use the object.)
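A sketch of that pattern, using hypothetical function and user names, creates and locks down a function within one transaction:
BEGIN;
CREATE FUNCTION check_password(text) RETURNS boolean
    AS 'SELECT true' LANGUAGE SQL;
REVOKE ALL ON FUNCTION check_password(text) FROM PUBLIC;
GRANT EXECUTE ON FUNCTION check_password(text) TO webuser;
COMMIT;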
The possible privileges are:
SELECT
Allows SELECT from any column of the specified table, view, or sequence. Also allows the use of
COPY TO. For sequences, this privilege also allows the use of the currval function.
INSERT
Allows INSERT of a new row into the specified table. Also allows COPY FROM.
UPDATE
Allows UPDATE of any column of the specified table. SELECT ... FOR UPDATE also requires this
privilege (besides the SELECT privilege). For sequences, this privilege allows the use of the nextval
and setval functions.
DELETE
Allows DELETE of a row from the specified table.
RULE
Allows the creation of a rule on the table/view. (See the CREATE RULE statement.)
REFERENCES
To create a foreign key constraint, it is necessary to have this privilege on both the referencing and
referenced tables.
TRIGGER
Allows the creation of a trigger on the specified table. (See the CREATE TRIGGER statement.)
CREATE
For databases, allows new schemas to be created within the database.
For schemas, allows new objects to be created within the schema. To rename an existing object, you
must own the object and have this privilege for the containing schema.
For tablespaces, allows tables and indexes to be created within the tablespace, and allows databases
to be created that have the tablespace as their default tablespace. (Note that revoking this privilege
will not alter the placement of existing objects.)
TEMPORARY
TEMP
Allows temporary tables to be created while using the database.
EXECUTE
Allows the use of the specified function and the use of any operators that are implemented on top of
the function. This is the only type of privilege that is applicable to functions. (This syntax works for
aggregate functions, as well.)
USAGE
For procedural languages, allows the use of the specified language for the creation of functions in
that language. This is the only type of privilege that is applicable to procedural languages.
For schemas, allows access to objects contained in the specified schema (assuming that the objects’
own privilege requirements are also met). Essentially this allows the grantee to “look up” objects
within the schema.
ALL PRIVILEGES
Grant all of the available privileges at once. The PRIVILEGES key word is optional in PostgreSQL,
though it is required by strict SQL.
The privileges required by other commands are listed on the reference page of the respective command.
Notes
The REVOKE command is used to revoke access privileges.
When a non-owner of an object attempts to GRANT privileges on the object, the command will fail outright
if the user has no privileges whatsoever on the object. As long as some privilege is available, the command
will proceed, but it will grant only those privileges for which the user has grant options. The GRANT ALL
PRIVILEGES forms will issue a warning message if no grant options are held, while the other forms will
issue a warning if grant options for any of the privileges specifically named in the command are not held.
(In principle these statements apply to the object owner as well, but since the owner is always treated as
holding all grant options, the cases can never occur.)
It should be noted that database superusers can access all objects regardless of object privilege settings.
This is comparable to the rights of root in a Unix system. As with root, it’s unwise to operate as a
superuser except when absolutely necessary.
If a superuser chooses to issue a GRANT or REVOKE command, the command is performed as though it
were issued by the owner of the affected object. In particular, privileges granted via such a command will
appear to have been granted by the object owner.
Currently, PostgreSQL does not support granting or revoking privileges for individual columns of a table.
One possible workaround is to create a view having just the desired columns and then grant privileges to
that view.
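A minimal sketch of that workaround, with illustrative names, exposes only two columns of films:
CREATE VIEW films_titles AS SELECT code, title FROM films;
GRANT SELECT ON films_titles TO PUBLIC;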
Use psql’s \z command to obtain information about existing privileges, for example:
=> \z mytable
        Access privileges for database "lusitania"
 Schema |  Name   | Type  |                     Access privileges
--------+---------+-------+------------------------------------------------------------
 public | mytable | table | {miriam=arwdRxt/miriam,=r/miriam,"group todos=arw/miriam"}
(1 row)
The entries shown by \z are interpreted thus:
r -- SELECT ("read")
w -- UPDATE ("write")
a -- INSERT ("append")
d -- DELETE
R -- RULE
x -- REFERENCES
t -- TRIGGER
X -- EXECUTE
U -- USAGE
C -- CREATE
T -- TEMPORARY
arwdRxt -- ALL PRIVILEGES (for tables)
* -- grant option for preceding privilege
The above example display would be seen by user miriam after creating table mytable and doing
GRANT SELECT ON mytable TO PUBLIC;
GRANT SELECT, UPDATE, INSERT ON mytable TO GROUP todos;
If the “Access privileges” column is empty for a given object, it means the object has default priv-
ileges (that is, its privileges column is null). Default privileges always include all privileges for the
owner, and may include some privileges for PUBLIC depending on the object type, as explained above.
The first GRANT or REVOKE on an object will instantiate the default privileges (producing, for example,
{miriam=arwdRxt/miriam}) and then modify them per the specified request.
Notice that the owner’s implicit grant options are not marked in the access privileges display. A * will
appear only when grant options have been explicitly granted to someone.
Examples
Grant insert privilege to all users on table films:
GRANT INSERT ON films TO PUBLIC;
Grant all available privileges to user manuel on view kinds:
GRANT ALL PRIVILEGES ON kinds TO manuel;
Note that while the above will indeed grant all privileges if executed by a superuser or the owner of kinds,
when executed by someone else it will only grant those permissions for which the someone else has grant
options.
Compatibility
According to the SQL standard, the PRIVILEGES key word in ALL PRIVILEGES is required. The SQL
standard does not support setting the privileges on more than one object per command.
PostgreSQL allows an object owner to revoke his own ordinary privileges: for example, a table owner can
make the table read-only to himself by revoking his own INSERT, UPDATE, and DELETE privileges.
This is not possible according to the SQL standard. The reason is that PostgreSQL treats the owner’s
privileges as having been granted by the owner to himself; therefore he can revoke them too. In the SQL
standard, the owner’s privileges are granted by an assumed entity “_SYSTEM”. Not being “_SYSTEM”,
the owner cannot revoke these rights.
The SQL standard allows setting privileges for individual columns within a table:
GRANT privileges
ON table [ ( column [, ...] ) ] [, ...]
TO { PUBLIC | username [, ...] } [ WITH GRANT OPTION ]
The SQL standard provides for a USAGE privilege on other kinds of objects: character sets, collations,
translations, domains.
The RULE privilege, and privileges on databases, tablespaces, schemas, languages, and sequences are
PostgreSQL extensions.
See Also
REVOKE
INSERT
Name
INSERT — create new rows in a table
Synopsis
INSERT INTO table [ ( column [, ...] ) ]
{ DEFAULT VALUES | VALUES ( { expression | DEFAULT } [, ...] ) | query }
Description
INSERT inserts new rows into a table. One can insert a single row specified by value expressions, or
several rows as a result of a query.
The target column names may be listed in any order. If no list of column names is given at all, the default
is all the columns of the table in their declared order; or the first N column names, if there are only N
columns supplied by the VALUES clause or query. The values supplied by the VALUES clause or query
are associated with the explicit or implicit column list left-to-right.
Each column not present in the explicit or implicit column list will be filled with a default value, either its
declared default value or null if there is none.
If the expression for any column is not of the correct data type, automatic type conversion will be at-
tempted.
You must have INSERT privilege on a table in order to insert into it. If you use the query clause to insert
rows from a query, you also need to have SELECT privilege on any table used in the query.
Parameters
table
The name (optionally schema-qualified) of an existing table.
column
The name of a column in table. The column name can be qualified with a subfield name or array
subscript, if needed. (Inserting into only some fields of a composite column leaves the other fields
null.)
DEFAULT VALUES
All columns will be filled with their default values.
expression
An expression or value to assign to the corresponding column.
DEFAULT
The corresponding column will be filled with its default value.
query
A query (SELECT statement) that supplies the rows to be inserted. Refer to the SELECT statement for
a description of the syntax.
Outputs
On successful completion, an INSERT command returns a command tag of the form
INSERT oid count
The count is the number of rows inserted. If count is exactly one, and the target table has OIDs, then
oid is the OID assigned to the inserted row. Otherwise oid is zero.
Examples
Insert a single row into table films:
INSERT INTO films VALUES
    ('UA502', 'Bananas', 105, '1971-07-13', 'Comedy', '82 minutes');
In this example, the len column is omitted and therefore it will have the default value:
INSERT INTO films (code, title, did, date_prod, kind)
    VALUES ('T_601', 'Yojimbo', 106, '1961-06-16', 'Drama');
This example uses the DEFAULT clause for the date columns rather than specifying a value:
INSERT INTO films VALUES
    ('UA502', 'Bananas', 105, DEFAULT, 'Comedy', '82 minutes');
To insert a row consisting entirely of default values:
INSERT INTO films DEFAULT VALUES;
This example inserts some rows into table films from a table tmp_films with the same column layout
as films:
INSERT INTO films SELECT * FROM tmp_films WHERE date_prod < '2004-05-07';
Compatibility
INSERT conforms to the SQL standard. The case in which a column name list is omitted, but not all the
columns are filled from the VALUES clause or query, is disallowed by the standard.
Possible limitations of the query clause are documented under SELECT.
LISTEN
Name
LISTEN — listen for a notification
Synopsis
LISTEN name
Description
LISTEN registers the current session as a listener on the notification condition name. If the current session
is already registered as a listener for this notification condition, nothing is done.
Whenever the command NOTIFY name is invoked, either by this session or another one connected to the
same database, all the sessions currently listening on that notification condition are notified, and each will
in turn notify its connected client application. See the discussion of NOTIFY for more information.
A session can be unregistered for a given notify condition with the UNLISTEN command. A session’s
listen registrations are automatically cleared when the session ends.
The method a client application must use to detect notification events depends on which PostgreSQL
application programming interface it uses. With the libpq library, the application issues LISTEN as an
ordinary SQL command, and then must periodically call the function PQnotifies to find out whether
any notification events have been received. Other interfaces such as libpgtcl provide higher-level methods
for handling notify events; indeed, with libpgtcl the application programmer should not even issue LISTEN
or UNLISTEN directly. See the documentation for the interface you are using for more details.
NOTIFY contains a more extensive discussion of the use of LISTEN and NOTIFY.
Parameters
name
Name of a notify condition (any identifier).
Examples
Configure and execute a listen/notify sequence from psql:
LISTEN virtual;
NOTIFY virtual;
Asynchronous notification "virtual" received from server process with PID 8448.
Compatibility
There is no LISTEN statement in the SQL standard.
See Also
NOTIFY, UNLISTEN
LOAD
Name
LOAD — load or reload a shared library file
Synopsis
LOAD 'filename'
Description
This command loads a shared library file into the PostgreSQL server’s address space. If the file had
been loaded previously, it is first unloaded. This command is primarily useful to unload and reload a
shared library file that has been changed since the server first loaded it. To make use of the shared library,
function(s) in it need to be declared using the CREATE FUNCTION command.
The file name is specified in the same way as for shared library names in CREATE FUNCTION; in
particular, one may rely on a search path and automatic addition of the system’s standard shared library
file name extension. See Section 31.9 for more information on this topic.
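For example, assuming a shared library file named mylib (a hypothetical name) has been installed in the standard library directory, it can be loaded or reloaded with:
LOAD 'mylib';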
Compatibility
LOAD is a PostgreSQL extension.
See Also
CREATE FUNCTION
LOCK
Name
LOCK — lock a table
Synopsis
LOCK [ TABLE ] name [, ...] [ IN lockmode MODE ] [ NOWAIT ]
where lockmode is one of:
    ACCESS SHARE | ROW SHARE | ROW EXCLUSIVE | SHARE UPDATE EXCLUSIVE
    | SHARE | SHARE ROW EXCLUSIVE | EXCLUSIVE | ACCESS EXCLUSIVE
Description
LOCK TABLE obtains a table-level lock, waiting if necessary for any conflicting locks to be released. If
NOWAIT is specified, LOCK TABLE does not wait to acquire the desired lock: if it cannot be acquired
immediately, the command is aborted and an error is emitted. Once obtained, the lock is held for the
remainder of the current transaction. (There is no UNLOCK TABLE command; locks are always released at
transaction end.)
When acquiring locks automatically for commands that reference tables, PostgreSQL always uses the
least restrictive lock mode possible. LOCK TABLE provides for cases when you might need more restrictive
locking. For example, suppose an application runs a transaction at the Read Committed isolation level and
needs to ensure that data in a table remains stable for the duration of the transaction. To achieve this you
could obtain SHARE lock mode over the table before querying. This will prevent concurrent data changes
and ensure subsequent reads of the table see a stable view of committed data, because SHARE lock mode
conflicts with the ROW EXCLUSIVE lock acquired by writers, and your LOCK TABLE name IN SHARE
MODE statement will wait until any concurrent holders of ROW EXCLUSIVE mode locks commit or roll
back. Thus, once you obtain the lock, there are no uncommitted writes outstanding; furthermore none can
begin until you release the lock.
To achieve a similar effect when running a transaction at the Serializable isolation level, you have to
execute the LOCK TABLE statement before executing any SELECT or data modification statement. A se-
rializable transaction’s view of data will be frozen when its first SELECT or data modification statement
begins. A LOCK TABLE later in the transaction will still prevent concurrent writes — but it won’t ensure
that what the transaction reads corresponds to the latest committed values.
If a transaction of this sort is going to change the data in the table, then it should use SHARE ROW
EXCLUSIVE lock mode instead of SHARE mode. This ensures that only one transaction of this type runs at
a time. Without this, a deadlock is possible: two transactions might both acquire SHARE mode, and then be
unable to also acquire ROW EXCLUSIVE mode to actually perform their updates. (Note that a transaction’s
own locks never conflict, so a transaction can acquire ROW EXCLUSIVE mode when it holds SHARE mode
— but not if anyone else holds SHARE mode.) To avoid deadlocks, make sure all transactions acquire locks
on the same objects in the same order, and if multiple lock modes are involved for a single object, then
transactions should always acquire the most restrictive mode first.
More information about the lock modes and locking strategies can be found in Section 12.3.
Parameters
name
The name (optionally schema-qualified) of an existing table to lock.
The command LOCK TABLE a, b; is equivalent to LOCK TABLE a; LOCK TABLE b;. The tables
are locked one-by-one in the order specified in the LOCK TABLE command.
lockmode
The lock mode specifies which locks this lock conflicts with. Lock modes are described in Section
12.3.
If no lock mode is specified, then ACCESS EXCLUSIVE, the most restrictive mode, is used.
NOWAIT
Specifies that LOCK TABLE should not wait for any conflicting locks to be released: if the specified
lock(s) cannot be acquired immediately without waiting, the transaction is aborted.
Notes
LOCK TABLE ... IN ACCESS SHARE MODE requires SELECT privileges on the target table. All other
forms of LOCK require UPDATE and/or DELETE privileges.
LOCK TABLE is useful only inside a transaction block (BEGIN/COMMIT pair), since the lock is dropped as
soon as the transaction ends. A LOCK TABLE command appearing outside any transaction block forms a
self-contained transaction, so the lock will be dropped as soon as it is obtained.
LOCK TABLE only deals with table-level locks, and so the mode names involving ROW are all misnomers.
These mode names should generally be read as indicating the intention of the user to acquire row-level
locks within the locked table. Also, ROW EXCLUSIVE mode is a sharable table lock. Keep in mind that all
the lock modes have identical semantics so far as LOCK TABLE is concerned, differing only in the rules
about which modes conflict with which. For information on how to acquire an actual row-level lock, see
Section 12.3.2 and the FOR UPDATE Clause in the SELECT reference documentation.
Examples
Obtain a SHARE lock on a primary key table when going to perform inserts into a foreign key table:
BEGIN WORK;
LOCK TABLE films IN SHARE MODE;
SELECT id FROM films
WHERE name = 'Star Wars: Episode I - The Phantom Menace';
-- Do ROLLBACK if record was not returned
INSERT INTO films_user_comments VALUES
(_id_, 'GREAT! I was waiting for it for so long!');
COMMIT WORK;
Take a SHARE ROW EXCLUSIVE lock on a primary key table when going to perform a delete operation:
BEGIN WORK;
LOCK TABLE films IN SHARE ROW EXCLUSIVE MODE;
DELETE FROM films_user_comments WHERE id IN
(SELECT id FROM films WHERE rating < 5);
DELETE FROM films WHERE rating < 5;
COMMIT WORK;
Compatibility
There is no LOCK TABLE in the SQL standard, which instead uses SET TRANSACTION to specify concur-
rency levels on transactions. PostgreSQL supports that too; see SET TRANSACTION for details.
Except for ACCESS SHARE, ACCESS EXCLUSIVE, and SHARE UPDATE EXCLUSIVE lock modes, the
PostgreSQL lock modes and the LOCK TABLE syntax are compatible with those present in Oracle.
MOVE
Name
MOVE — position a cursor
Synopsis
MOVE [ direction { FROM | IN } ] cursorname
Description
MOVE repositions a cursor without retrieving any data. MOVE works exactly like the FETCH command,
except it only positions the cursor and does not return rows.
Refer to FETCH for details on syntax and usage.
Outputs
On successful completion, a MOVE command returns a command tag of the form
MOVE count
The count is the number of rows that a FETCH command with the same parameters would have returned
(possibly zero).
Examples
BEGIN WORK;
DECLARE liahona CURSOR FOR SELECT * FROM films;
MOVE FORWARD 5 IN liahona;
COMMIT WORK;
Compatibility
There is no MOVE statement in the SQL standard.
See Also
CLOSE, DECLARE, FETCH
NOTIFY
Name
NOTIFY — generate a notification
Synopsis
NOTIFY name
Description
The NOTIFY command sends a notification event to each client application that has previously executed
LISTEN name for the specified notification name in the current database.
NOTIFY provides a simple form of signal or interprocess communication mechanism for a collection of
processes accessing the same PostgreSQL database. Higher-level mechanisms can be built by using tables
in the database to pass additional data (beyond a mere notification name) from notifier to listener(s).
The information passed to the client for a notification event includes the notification name and the notify-
ing session’s server process PID. It is up to the database designer to define the notification names that will
be used in a given database and what each one means.
Commonly, the notification name is the same as the name of some table in the database, and the notify
event essentially means, “I changed this table, take a look at it to see what’s new”. But no such association
is enforced by the NOTIFY and LISTEN commands. For example, a database designer could use several
different notification names to signal different sorts of changes to a single table.
When NOTIFY is used to signal the occurrence of changes to a particular table, a useful programming
technique is to put the NOTIFY in a rule that is triggered by table updates. In this way, notification happens
automatically when the table is changed, and the application programmer can’t accidentally forget to do
it.
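For example, a rule along these lines (the rule name is only illustrative) sends the notification films whenever the films table is updated:
CREATE RULE notify_films AS ON UPDATE TO films DO ALSO NOTIFY films;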
NOTIFY interacts with SQL transactions in some important ways. Firstly, if a NOTIFY is executed inside
a transaction, the notify events are not delivered until and unless the transaction is committed. This is
appropriate, since if the transaction is aborted, all the commands within it have had no effect, including
NOTIFY. But it can be disconcerting if one is expecting the notification events to be delivered immediately.
Secondly, if a listening session receives a notification signal while it is within a transaction, the notification
event will not be delivered to its connected client until just after the transaction is completed (either
committed or aborted). Again, the reasoning is that if a notification were delivered within a transaction
that was later aborted, one would want the notification to be undone somehow — but the server cannot
“take back” a notification once it has sent it to the client. So notification events are only delivered between
transactions. The upshot of this is that applications using NOTIFY for real-time signaling should try to
keep their transactions short.
NOTIFY behaves like Unix signals in one important respect: if the same notification name is signaled
multiple times in quick succession, recipients may get only one notification event for several executions
of NOTIFY. So it is a bad idea to depend on the number of notifications received. Instead, use NOTIFY
to wake up applications that need to pay attention to something, and use a database object (such as a
sequence) to keep track of what happened or how many times it happened.
It is common for a client that executes NOTIFY to be listening on the same notification name itself. In
that case it will get back a notification event, just like all the other listening sessions. Depending on the
application logic, this could result in useless work, for example, reading a database table to find the same
updates that that session just wrote out. It is possible to avoid such extra work by noticing whether the
notifying session’s server process PID (supplied in the notification event message) is the same as one’s
own session’s PID (available from libpq). When they are the same, the notification event is one’s own
work bouncing back, and can be ignored. (Despite what was said in the preceding paragraph, this is a safe
technique. PostgreSQL keeps self-notifications separate from notifications arriving from other sessions,
so you cannot miss an outside notification by ignoring your own notifications.)
Parameters
name
Name of the notification to be signaled (any identifier).
Examples
Configure and execute a listen/notify sequence from psql:
LISTEN virtual;
NOTIFY virtual;
Asynchronous notification "virtual" received from server process with PID 8448.
Compatibility
There is no NOTIFY statement in the SQL standard.
See Also
LISTEN, UNLISTEN
PREPARE
Name
PREPARE — prepare a statement for execution
Synopsis
PREPARE plan_name [ (datatype [, ...] ) ] AS statement
Description
PREPARE creates a prepared statement. A prepared statement is a server-side object that can be used
to optimize performance. When the PREPARE statement is executed, the specified statement is parsed,
rewritten, and planned. When an EXECUTE command is subsequently issued, the prepared statement need
only be executed. Thus, the parsing, rewriting, and planning stages are only performed once, instead of
every time the statement is executed.
Prepared statements can take parameters: values that are substituted into the statement when it is executed.
To include parameters in a prepared statement, supply a list of data types in the PREPARE statement,
and, in the statement to be prepared itself, refer to the parameters by position using $1, $2, etc. When
executing the statement, specify the actual values for these parameters in the EXECUTE statement. Refer
to EXECUTE for more information about that.
Prepared statements only last for the duration of the current database session. When the session ends, the
prepared statement is forgotten, so it must be recreated before being used again. This also means that a
single prepared statement cannot be used by multiple simultaneous database clients; however, each client
can create their own prepared statement to use. The prepared statement can be manually cleaned up using
the DEALLOCATE command.
Prepared statements have the largest performance advantage when a single session is being used to exe-
cute a large number of similar statements. The performance difference will be particularly significant if
the statements are complex to plan or rewrite, for example, if the query involves a join of many tables
or requires the application of several rules. If the statement is relatively simple to plan and rewrite but
relatively expensive to execute, the performance advantage of prepared statements will be less noticeable.
Parameters
plan_name
An arbitrary name given to this particular prepared statement. It must be unique within a single
session and is subsequently used to execute or deallocate a previously prepared statement.
datatype
The data type of a parameter to the prepared statement. To refer to the parameters in the prepared
statement itself, use $1, $2, etc.
statement
Any SELECT, INSERT, UPDATE, or DELETE statement.
Notes
In some situations, the query plan produced for a prepared statement will be inferior to the query plan
that would have been chosen if the statement had been submitted and executed normally. This is because
when the statement is planned and the planner attempts to determine the optimal query plan, the actual
values of any parameters specified in the statement are unavailable. PostgreSQL collects statistics on the
distribution of data in the table, and can use constant values in a statement to make guesses about the
likely result of executing the statement. Since this data is unavailable when planning prepared statements
with parameters, the chosen plan may be suboptimal. To examine the query plan PostgreSQL has chosen
for a prepared statement, use EXPLAIN.
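For instance, if a statement has been prepared under the hypothetical name myplan taking a single integer parameter, its plan can be displayed with:
EXPLAIN EXECUTE myplan(42);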
For more information on query planning and the statistics collected by PostgreSQL for that purpose, see
the ANALYZE documentation.
Examples
Create a prepared query for an INSERT statement, and then execute it:
PREPARE fooplan (int, text, bool, numeric) AS
    INSERT INTO foo VALUES($1, $2, $3, $4);
EXECUTE fooplan(1, 'Hunter Valley', 't', 200.00);
Create a prepared query for a SELECT statement, and then execute it:
PREPARE usrrptplan (int, date) AS
    SELECT * FROM users u, logs l WHERE u.usrid = $1 AND u.usrid = l.usrid
    AND l.date = $2;
EXECUTE usrrptplan(1, current_date);
Compatibility
The SQL standard includes a PREPARE statement, but it is only for use in embedded SQL. This version of
the PREPARE statement also uses a somewhat different syntax.
See Also
DEALLOCATE, EXECUTE
REINDEX
Name
REINDEX — rebuild indexes
Synopsis
REINDEX { DATABASE | TABLE | INDEX } name [ FORCE ]
Description
REINDEX rebuilds an index based on the data stored in the index’s table, replacing the old copy of the
index. There are two main reasons to use REINDEX:
• An index has become corrupted, and no longer contains valid data. Although in theory this should never
happen, in practice indexes may become corrupted due to software bugs or hardware failures. REINDEX
provides a recovery method.
• The index in question contains a lot of dead index pages that are not being reclaimed. This can occur
with B-tree indexes in PostgreSQL under certain access patterns. REINDEX provides a way to reduce
the space consumption of the index by writing a new version of the index without the dead pages. See
Section 21.2 for more information.
Parameters
DATABASE
Recreate all system indexes of a specified database. Indexes on user tables are not processed. Also,
indexes on shared system catalogs are skipped except in stand-alone mode (see below).
TABLE
Recreate all indexes of a specified table. If the table has a secondary “TOAST” table, that is reindexed
as well.
INDEX
Recreate a specified index.
name
The name of the specific database, table, or index to be reindexed. Table and index names may be
schema-qualified.
FORCE
This is an obsolete option; it is ignored if specified.
Notes
If you suspect corruption of an index on a user table, you can simply rebuild that index, or all indexes on
the table, using REINDEX INDEX or REINDEX TABLE.
Things are more difficult if you need to recover from corruption of an index on a system table. In this
case it’s important for the system to not have used any of the suspect indexes itself. (Indeed, in this sort
of scenario you may find that server processes are crashing immediately at start-up, due to reliance on
the corrupted indexes.) To recover safely, the server must be started with the -P option, which prevents it
from using indexes for system catalog lookups.
One way to do this is to shut down the postmaster and start a stand-alone PostgreSQL server with the -P
option included on its command line. Then, REINDEX DATABASE, REINDEX TABLE, or REINDEX INDEX
can be issued, depending on how much you want to reconstruct. If in doubt, use REINDEX DATABASE to
select reconstruction of all system indexes in the database. Then quit the standalone server session and
restart the regular server. See the postgres reference page for more information about how to interact with
the stand-alone server interface.
Alternatively, a regular server session can be started with -P included in its command line options.
The method for doing this varies across clients, but in all libpq-based clients, it is possible to set the
PGOPTIONS environment variable to -P before starting the client. Note that while this method does not
require locking out other clients, it may still be wise to prevent other users from connecting to the damaged
database until repairs have been completed.
If corruption is suspected in the indexes of any of the shared system catalogs (pg_database, pg_group,
pg_shadow, or pg_tablespace), then a standalone server must be used to repair it. REINDEX will not
process shared catalogs in multiuser mode.
For all indexes except the shared system catalogs, REINDEX is crash-safe and transaction-safe. REINDEX
is not crash-safe for shared indexes, which is why this case is disallowed during normal operation. If a
failure occurs while reindexing one of these catalogs in standalone mode, it will not be possible to restart
the regular server until the problem is rectified. (The typical symptom of a partially rebuilt shared index
is “index is not a btree” errors.)
REINDEX is similar to a drop and recreate of the index in that the index contents are rebuilt from scratch.
However, the locking considerations are rather different. REINDEX locks out writes but not reads of the
index’s parent table. It also takes an exclusive lock on the specific index being processed, which will block
reads that attempt to use that index. In contrast, DROP INDEX momentarily takes exclusive lock on the
parent table, blocking both writes and reads. The subsequent CREATE INDEX locks out writes but not
reads; since the index is not there, no read will attempt to use it, meaning that there will be no blocking
but reads may be forced into expensive sequential scans. Another important point is that the drop/create
approach invalidates any cached query plans that use the index, while REINDEX does not.
Prior to PostgreSQL 7.4, REINDEX TABLE did not automatically process TOAST tables, and so those had
to be reindexed by separate commands. This is still possible, but redundant.
Examples
Recreate the indexes on the table my_table:
REINDEX TABLE my_table;
Rebuild all system indexes in a particular database, without trusting them to be valid already:
$ export PGOPTIONS="-P"
$ psql broken_db
...
broken_db=> REINDEX DATABASE broken_db;
broken_db=> \q
Compatibility
There is no REINDEX command in the SQL standard.
RELEASE SAVEPOINT
Name
RELEASE SAVEPOINT — destroy a previously defined savepoint
Synopsis
RELEASE [ SAVEPOINT ] savepoint_name
Description
RELEASE SAVEPOINT destroys a savepoint previously defined in the current transaction.
Destroying a savepoint makes it unavailable as a rollback point, but it has no other user visible behavior.
It does not undo the effects of commands executed after the savepoint was established. (To do that, see
ROLLBACK TO SAVEPOINT.) Destroying a savepoint when it is no longer needed may allow the system
to reclaim some resources earlier than transaction end.
RELEASE SAVEPOINT also destroys all savepoints that were established after the named savepoint was
established.
Parameters
savepoint_name
The name of the savepoint to destroy.
Notes
Specifying a savepoint name that was not previously defined is an error.
It is not possible to release a savepoint when the transaction is in an aborted state.
If multiple savepoints have the same name, only the one that was most recently defined is released.
Examples
To establish and later destroy a savepoint:
BEGIN;
INSERT INTO table1 VALUES (3);
SAVEPOINT my_savepoint;
INSERT INTO table1 VALUES (4);
RELEASE SAVEPOINT my_savepoint;
COMMIT;
Compatibility
This command conforms to the SQL:2003 standard. The standard specifies that the key word SAVEPOINT
is mandatory, but PostgreSQL allows it to be omitted.
See Also
BEGIN, COMMIT, ROLLBACK, ROLLBACK TO SAVEPOINT, SAVEPOINT
RESET
Name
RESET — restore the value of a run-time parameter to the default value
Synopsis
RESET name
RESET ALL
Description
RESET restores run-time parameters to their default values. RESET is an alternative spelling for
SET parameter TO DEFAULT
Refer to SET for details.
Parameters
name
The name of a run-time parameter. See SET for a list.
ALL
Resets all settable run-time parameters to default values.
Examples
Set the geqo configuration variable to its default value:
RESET geqo;
Compatibility
RESET is a PostgreSQL extension.
REVOKE
Name
REVOKE — remove access privileges
Synopsis
REVOKE [ GRANT OPTION FOR ]
{ { SELECT | INSERT | UPDATE | DELETE | RULE | REFERENCES | TRIGGER }
[,...] | ALL [ PRIVILEGES ] }
ON [ TABLE ] tablename [, ...]
FROM { username | GROUP groupname | PUBLIC } [, ...]
[ CASCADE | RESTRICT ]
Description
The REVOKE command revokes previously granted privileges from one or more users or groups of users.
The key word PUBLIC refers to the implicitly defined group of all users.
See the description of the GRANT command for the meaning of the privilege types.
Note that any particular user will have the sum of privileges granted directly to him, privileges granted to
any group he is presently a member of, and privileges granted to PUBLIC. Thus, for example, revoking
SELECT privilege from PUBLIC does not necessarily mean that all users have lost SELECT privilege on
the object: those who have it granted directly or via a group will still have it.
If GRANT OPTION FOR is specified, only the grant option for the privilege is revoked, not the privilege
itself. Otherwise, both the privilege and the grant option are revoked.
If a user holds a privilege with grant option and has granted it to other users then the privileges held by
those other users are called dependent privileges. If the privilege or the grant option held by the first user
is being revoked and dependent privileges exist, those dependent privileges are also revoked if CASCADE
is specified, else the revoke action will fail. This recursive revocation only affects privileges that were
granted through a chain of users that is traceable to the user that is the subject of this REVOKE command.
Thus, the affected users may effectively keep the privilege if it was also granted through other users.
Notes
Use psql’s \z command to display the privileges granted on existing objects. See GRANT for information
about the format.
A user can only revoke privileges that were granted directly by that user. If, for example, user A has
granted a privilege with grant option to user B, and user B has in turn granted it to user C, then user A
cannot revoke the privilege directly from C. Instead, user A could revoke the grant option from user B and
use the CASCADE option so that the privilege is in turn revoked from user C. For another example, if both
A and B have granted the same privilege to C, A can revoke his own grant but not B’s grant, so C will still
effectively have the privilege.
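For example (the user and table names are illustrative), the following revokes joe's grant option for SELECT on films and, because of CASCADE, also revokes any SELECT privilege that joe had granted onward:
REVOKE GRANT OPTION FOR SELECT ON films FROM joe CASCADE;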
When a non-owner of an object attempts to REVOKE privileges on the object, the command will fail
outright if the user has no privileges whatsoever on the object. As long as some privilege is available,
the command will proceed, but it will revoke only those privileges for which the user has grant options.
The REVOKE ALL PRIVILEGES forms will issue a warning message if no grant options are held, while
the other forms will issue a warning if grant options for any of the privileges specifically named in the
command are not held. (In principle these statements apply to the object owner as well, but since the
owner is always treated as holding all grant options, the cases can never occur.)
If a superuser chooses to issue a GRANT or REVOKE command, the command is performed as though it
were issued by the owner of the affected object. Since all privileges ultimately come from the object owner
(possibly indirectly via chains of grant options), it is possible for a superuser to revoke all privileges, but
this may require use of CASCADE as stated above.
Examples
Revoke insert privilege for the public on table films:
REVOKE INSERT ON films FROM PUBLIC;
Revoke all privileges from user manuel on view kinds:
REVOKE ALL PRIVILEGES ON kinds FROM manuel;
Note that this actually means “revoke all privileges that I granted”.
Compatibility
The compatibility notes of the GRANT command apply analogously to REVOKE. The syntax summary is:
REVOKE [ GRANT OPTION FOR ] privileges
    ON object [ ( column [, ...] ) ]
    FROM { PUBLIC | username [, ...] }
    { RESTRICT | CASCADE }
One of RESTRICT or CASCADE is required according to the standard, but PostgreSQL assumes RESTRICT
by default.
See Also
GRANT
ROLLBACK
Name
ROLLBACK — abort the current transaction
Synopsis
ROLLBACK [ WORK | TRANSACTION ]
Description
ROLLBACK rolls back the current transaction and causes all the updates made by the transaction to be
discarded.
Parameters
WORK
TRANSACTION
Optional key words. They have no effect.
Notes
Use COMMIT to successfully terminate a transaction.
Issuing ROLLBACK when not inside a transaction does no harm, but it will provoke a warning message.
Examples
To abort all changes:
ROLLBACK;
Compatibility
The SQL standard only specifies the two forms ROLLBACK and ROLLBACK WORK. Otherwise, this com-
mand is fully conforming.
See Also
BEGIN, COMMIT, ROLLBACK TO SAVEPOINT
ROLLBACK TO SAVEPOINT
Name
ROLLBACK TO SAVEPOINT — roll back to a savepoint
Synopsis
ROLLBACK [ WORK | TRANSACTION ] TO [ SAVEPOINT ] savepoint_name
Description
Roll back all commands that were executed after the savepoint was established. The savepoint remains
valid and can be rolled back to again later, if needed.
ROLLBACK TO SAVEPOINT implicitly destroys all savepoints that were established after the named save-
point.
Parameters
savepoint_name
The savepoint to roll back to.
Notes
Use RELEASE SAVEPOINT to destroy a savepoint without discarding the effects of commands executed
after it was established.
Specifying a savepoint name that has not been established is an error.
Cursors have somewhat non-transactional behavior with respect to savepoints. Any cursor that is opened
inside the savepoint is not closed when the savepoint is rolled back. If a cursor is affected by a FETCH
command inside a savepoint that is later rolled back, the cursor position remains at the position that FETCH
left it pointing to (that is, FETCH is not rolled back). A cursor whose execution causes a transaction to abort
is put in a can’t-execute state, so while the transaction can be restored using ROLLBACK TO SAVEPOINT,
the cursor can no longer be used.
Examples
To undo the effects of the commands executed after my_savepoint was established:
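ROLLBACK TO SAVEPOINT my_savepoint;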
Cursor positions are not affected by savepoint rollback (the cursor and queries here are illustrative):
BEGIN;
DECLARE foo CURSOR FOR SELECT 1 UNION SELECT 2;
SAVEPOINT foo;
FETCH 1 FROM foo;
ROLLBACK TO SAVEPOINT foo;
FETCH 1 FROM foo;
COMMIT;
The second FETCH retrieves the second row, even though the first FETCH was issued inside a savepoint that was later rolled back.
Compatibility
The SQL:2003 standard specifies that the key word SAVEPOINT is mandatory, but PostgreSQL and Oracle
allow it to be omitted. SQL:2003 allows only WORK, not TRANSACTION, as a noise word after ROLLBACK.
Also, SQL:2003 has an optional clause AND [ NO ] CHAIN which is not currently supported by Post-
greSQL. Otherwise, this command conforms to the SQL standard.
See Also
BEGIN, COMMIT, RELEASE SAVEPOINT, ROLLBACK, SAVEPOINT
SAVEPOINT
Name
SAVEPOINT — define a new savepoint within the current transaction
Synopsis
SAVEPOINT savepoint_name
Description
SAVEPOINT establishes a new savepoint within the current transaction.
A savepoint is a special mark inside a transaction that allows all commands that are executed after it was
established to be rolled back, restoring the transaction state to what it was at the time of the savepoint.
Parameters
savepoint_name
The name to give to the new savepoint.
Notes
Use ROLLBACK TO SAVEPOINT to roll back to a savepoint. Use RELEASE SAVEPOINT to destroy a
savepoint, keeping the effects of commands executed after it was established.
Savepoints can only be established when inside a transaction block. There can be multiple savepoints
defined within a transaction.
Examples
To establish a savepoint and later undo the effects of all commands executed after it was established:
BEGIN;
INSERT INTO table1 VALUES (1);
SAVEPOINT my_savepoint;
INSERT INTO table1 VALUES (2);
ROLLBACK TO SAVEPOINT my_savepoint;
INSERT INTO table1 VALUES (3);
COMMIT;
The above transaction will insert the values 1 and 3, but not 2.
To establish and later destroy a savepoint:
BEGIN;
INSERT INTO table1 VALUES (3);
SAVEPOINT my_savepoint;
INSERT INTO table1 VALUES (4);
RELEASE SAVEPOINT my_savepoint;
COMMIT;
Compatibility
SQL requires a savepoint to be destroyed automatically when another savepoint with the same name
is established. In PostgreSQL, the old savepoint is kept, though only the more recent one will be used
when rolling back or releasing. (Releasing the newer savepoint will cause the older one to again become
accessible to ROLLBACK TO SAVEPOINT and RELEASE SAVEPOINT.) Otherwise, SAVEPOINT is fully
SQL conforming.
See Also
BEGIN, COMMIT, RELEASE SAVEPOINT, ROLLBACK, ROLLBACK TO SAVEPOINT
SELECT
Name
SELECT — retrieve rows from a table or view
Synopsis
SELECT [ ALL | DISTINCT [ ON ( expression [, ...] ) ] ]
* | expression [ AS output_name ] [, ...]
[ FROM from_item [, ...] ]
[ WHERE condition ]
[ GROUP BY expression [, ...] ]
[ HAVING condition [, ...] ]
[ { UNION | INTERSECT | EXCEPT } [ ALL ] select ]
[ ORDER BY expression [ ASC | DESC | USING operator ] [, ...] ]
[ LIMIT { count | ALL } ]
[ OFFSET start ]
[ FOR UPDATE [ OF table_name [, ...] ] ]
Description
SELECT retrieves rows from one or more tables. The general processing of SELECT is as follows:
1. All elements in the FROM list are computed. (Each element in the FROM list is a real or virtual table.)
If more than one element is specified in the FROM list, they are cross-joined together. (See FROM
Clause below.)
2. If the WHERE clause is specified, all rows that do not satisfy the condition are eliminated from the
output. (See WHERE Clause below.)
3. If the GROUP BY clause is specified, the output is divided into groups of rows that match on one
or more values. If the HAVING clause is present, it eliminates groups that do not satisfy the given
condition. (See GROUP BY Clause and HAVING Clause below.)
4. The actual output rows are computed using the SELECT output expressions for each selected row.
(See SELECT List below.)
5. Using the operators UNION, INTERSECT, and EXCEPT, the output of more than one SELECT statement
can be combined to form a single result set. The UNION operator returns all rows that are in one or
both of the result sets. The INTERSECT operator returns all rows that are strictly in both result sets.
The EXCEPT operator returns the rows that are in the first result set but not in the second. In all
three cases, duplicate rows are eliminated unless ALL is specified. (See UNION Clause, INTERSECT
Clause, and EXCEPT Clause below.)
6. If the ORDER BY clause is specified, the returned rows are sorted in the specified order. If ORDER BY
is not given, the rows are returned in whatever order the system finds fastest to produce. (See ORDER
BY Clause below.)
7. DISTINCT eliminates duplicate rows from the result. DISTINCT ON eliminates rows that match on
all the specified expressions. ALL (the default) will return all candidate rows, including duplicates.
(See DISTINCT Clause below.)
8. If the LIMIT or OFFSET clause is specified, the SELECT statement only returns a subset of the result
rows. (See LIMIT Clause below.)
9. The FOR UPDATE clause causes the SELECT statement to lock the selected rows against concurrent
updates. (See FOR UPDATE Clause below.)
You must have SELECT privilege on a table to read its values. The use of FOR UPDATE requires UPDATE
privilege as well.
Parameters
FROM Clause
The FROM clause specifies one or more source tables for the SELECT. If multiple sources are specified,
the result is the Cartesian product (cross join) of all the sources. But usually qualification conditions are
added to restrict the returned rows to a small subset of the Cartesian product.
The FROM clause can contain the following elements:
table_name
The name (optionally schema-qualified) of an existing table or view. If ONLY is specified, only that
table is scanned. If ONLY is not specified, the table and all its descendant tables (if any) are scanned.
* can be appended to the table name to indicate that descendant tables are to be scanned, but in the
current version, this is the default behavior. (In releases before 7.1, ONLY was the default behavior.)
The default behavior can be modified by changing the sql_inheritance configuration option.
alias
A substitute name for the FROM item containing the alias. An alias is used for brevity or to eliminate
ambiguity for self-joins (where the same table is scanned multiple times). When an alias is provided,
it completely hides the actual name of the table or function; for example given FROM foo AS f, the
remainder of the SELECT must refer to this FROM item as f not foo. If an alias is written, a column
alias list can also be written to provide substitute names for one or more columns of the table.
select
A sub-SELECT can appear in the FROM clause. This acts as though its output were created as a
temporary table for the duration of this single SELECT command. Note that the sub-SELECT must be
surrounded by parentheses, and an alias must be provided for it.
function_name
Function calls can appear in the FROM clause. (This is especially useful for functions that return result
sets, but any function can be used.) This acts as though its output were created as a temporary table
for the duration of this single SELECT command. An alias may also be used. If an alias is written,
a column alias list can also be written to provide substitute names for one or more attributes of the
function’s composite return type. If the function has been defined as returning the record data type,
then an alias or the key word AS must be present, followed by a column definition list in the form (
column_name data_type [, ... ] ). The column definition list must match the actual number
and types of columns returned by the function.
join_type
One of
• [ INNER ] JOIN
• LEFT [ OUTER ] JOIN
• RIGHT [ OUTER ] JOIN
• FULL [ OUTER ] JOIN
• CROSS JOIN
For the INNER and OUTER join types, a join condition must be specified, namely exactly one of
NATURAL, ON join_condition, or USING (join_column [, ...]). See below for the mean-
ing. For CROSS JOIN, none of these clauses may appear.
A JOIN clause combines two FROM items. Use parentheses if necessary to determine the order of
nesting. In the absence of parentheses, JOINs nest left-to-right. In any case JOIN binds more tightly
than the commas separating FROM items.
CROSS JOIN and INNER JOIN produce a simple Cartesian product, the same result as you get from
listing the two items at the top level of FROM, but restricted by the join condition (if any). CROSS
JOIN is equivalent to INNER JOIN ON (TRUE), that is, no rows are removed by qualification. These
join types are just a notational convenience, since they do nothing you couldn’t do with plain FROM
and WHERE.
LEFT OUTER JOIN returns all rows in the qualified Cartesian product (i.e., all combined rows that
pass its join condition), plus one copy of each row in the left-hand table for which there was no
right-hand row that passed the join condition. This left-hand row is extended to the full width of the
joined table by inserting null values for the right-hand columns. Note that only the JOIN clause’s
own condition is considered while deciding which rows have matches. Outer conditions are applied
afterwards.
Conversely, RIGHT OUTER JOIN returns all the joined rows, plus one row for each unmatched right-
hand row (extended with nulls on the left). This is just a notational convenience, since you could
convert it to a LEFT OUTER JOIN by switching the left and right inputs.
FULL OUTER JOIN returns all the joined rows, plus one row for each unmatched left-hand row
(extended with nulls on the right), plus one row for each unmatched right-hand row (extended with
nulls on the left).
ON join_condition
join_condition is an expression resulting in a value of type boolean (similar to a WHERE clause) that specifies which rows in a join are considered to match.
USING ( join_column [, ...] )
A clause of the form USING ( a, b, ... ) is shorthand for ON left_table.a = right_table.a AND left_table.b = right_table.b .... Also, USING implies that only one of each pair of equivalent columns will be included in the join output, not both.
NATURAL
NATURAL is shorthand for a USING list that mentions all columns in the two tables that have the same
names.
WHERE Clause
WHERE condition
where condition is any expression that evaluates to a result of type boolean. Any row that does not
satisfy this condition will be eliminated from the output. A row satisfies the condition if it returns true
when the actual row values are substituted for any variable references.
GROUP BY Clause
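GROUP BY expression [, ...]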
GROUP BY will condense into a single row all selected rows that share the same values for the grouped
expressions. expression can be an input column name, or the name or ordinal number of an output
column (SELECT list item), or an arbitrary expression formed from input-column values. In case of am-
biguity, a GROUP BY name will be interpreted as an input-column name rather than an output column
name.
Aggregate functions, if any are used, are computed across all rows making up each group, producing a
separate value for each group (whereas without GROUP BY, an aggregate produces a single value computed
across all the selected rows). When GROUP BY is present, it is not valid for the SELECT list expressions
to refer to ungrouped columns except within aggregate functions, since there would be more than one
possible value to return for an ungrouped column.
HAVING Clause
HAVING condition
HAVING eliminates group rows that do not satisfy the condition. HAVING is different from WHERE: WHERE
filters individual rows before the application of GROUP BY, while HAVING filters group rows created by
GROUP BY. Each column referenced in condition must unambiguously reference a grouping column,
unless the reference appears within an aggregate function.
SELECT List
The SELECT list (between the key words SELECT and FROM) specifies expressions that form the output
rows of the SELECT statement. The expressions can (and usually do) refer to columns computed in the
FROM clause. Using the clause AS output_name, another name can be specified for an output column.
This name is primarily used to label the column for display. It can also be used to refer to the column’s
value in ORDER BY and GROUP BY clauses, but not in the WHERE or HAVING clauses; there you must write
out the expression instead.
Instead of an expression, * can be written in the output list as a shorthand for all the columns of the
selected rows. Also, one can write table_name.* as a shorthand for the columns coming from just that
table.
UNION Clause
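The UNION clause has this general form:
select_statement UNION [ ALL ] select_statement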
select_statement is any SELECT statement without an ORDER BY, LIMIT, or FOR UPDATE clause.
(ORDER BY and LIMIT can be attached to a subexpression if it is enclosed in parentheses. Without paren-
theses, these clauses will be taken to apply to the result of the UNION, not to its right-hand input expres-
sion.)
The UNION operator computes the set union of the rows returned by the involved SELECT statements. A
row is in the set union of two result sets if it appears in at least one of the result sets. The two SELECT
statements that represent the direct operands of the UNION must produce the same number of columns,
and corresponding columns must be of compatible data types.
The result of UNION does not contain any duplicate rows unless the ALL option is specified. ALL prevents
elimination of duplicates. (Therefore, UNION ALL is usually significantly quicker than UNION; use ALL
when you can.)
Multiple UNION operators in the same SELECT statement are evaluated left to right, unless otherwise
indicated by parentheses.
Currently, FOR UPDATE may not be specified either for a UNION result or for any input of a UNION.
INTERSECT Clause
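The INTERSECT clause has this general form:
select_statement INTERSECT [ ALL ] select_statement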
select_statement is any SELECT statement without an ORDER BY, LIMIT, or FOR UPDATE clause.
The INTERSECT operator computes the set intersection of the rows returned by the involved SELECT
statements. A row is in the intersection of two result sets if it appears in both result sets.
The result of INTERSECT does not contain any duplicate rows unless the ALL option is specified. With
ALL, a row that has m duplicates in the left table and n duplicates in the right table will appear min(m,n)
times in the result set.
Multiple INTERSECT operators in the same SELECT statement are evaluated left to right, unless parenthe-
ses dictate otherwise. INTERSECT binds more tightly than UNION. That is, A UNION B INTERSECT C
will be read as A UNION (B INTERSECT C).
Currently, FOR UPDATE may not be specified either for an INTERSECT result or for any input of an
INTERSECT.
EXCEPT Clause
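The EXCEPT clause has this general form:
select_statement EXCEPT [ ALL ] select_statement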
select_statement is any SELECT statement without an ORDER BY, LIMIT, or FOR UPDATE clause.
The EXCEPT operator computes the set of rows that are in the result of the left SELECT statement but not
in the result of the right one.
The result of EXCEPT does not contain any duplicate rows unless the ALL option is specified. With ALL, a
row that has m duplicates in the left table and n duplicates in the right table will appear max(m-n,0) times
in the result set.
Multiple EXCEPT operators in the same SELECT statement are evaluated left to right, unless parentheses
dictate otherwise. EXCEPT binds at the same level as UNION.
Currently, FOR UPDATE may not be specified either for an EXCEPT result or for any input of an EXCEPT.
ORDER BY Clause
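ORDER BY expression [ ASC | DESC | USING operator ] [, ...]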
expression can be the name or ordinal number of an output column (SELECT list item), or it can be an
arbitrary expression formed from input-column values.
The ORDER BY clause causes the result rows to be sorted according to the specified expressions. If two
rows are equal according to the leftmost expression, they are compared according to the next expression
and so on. If they are equal according to all specified expressions, they are returned in an implementation-
dependent order.
The ordinal number refers to the ordinal (left-to-right) position of the result column. This feature makes
it possible to define an ordering on the basis of a column that does not have a unique name. This is never
absolutely necessary because it is always possible to assign a name to a result column using the AS clause.
It is also possible to use arbitrary expressions in the ORDER BY clause, including columns that do not
appear in the SELECT result list. Thus the following statement is valid:
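SELECT name FROM distributors ORDER BY code;
(Here code is assumed to be a column of distributors that does not appear in the output list.)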
A limitation of this feature is that an ORDER BY clause applying to the result of a UNION, INTERSECT, or
EXCEPT clause may only specify an output column name or number, not an expression.
If an ORDER BY expression is a simple name that matches both a result column name and an input column
name, ORDER BY will interpret it as the result column name. This is the opposite of the choice that GROUP
BY will make in the same situation. This inconsistency is made to be compatible with the SQL standard.
Optionally one may add the key word ASC (ascending) or DESC (descending) after any expression in the
ORDER BY clause. If not specified, ASC is assumed by default. Alternatively, a specific ordering operator
name may be specified in the USING clause. ASC is usually equivalent to USING < and DESC is usually
equivalent to USING >. (But the creator of a user-defined data type can define exactly what the default
sort ordering is, and it might correspond to operators with other names.)
The null value sorts higher than any other value. In other words, with ascending sort order, null values
sort at the end, and with descending sort order, null values sort at the beginning.
Character-string data is sorted according to the locale-specific collation order that was established when
the database cluster was initialized.
DISTINCT Clause
If DISTINCT is specified, all duplicate rows are removed from the result set (one row is kept from each
group of duplicates). ALL specifies the opposite: all rows are kept; that is the default.
DISTINCT ON ( expression [, ...] ) keeps only the first row of each set of rows where the given
expressions evaluate to equal. The DISTINCT ON expressions are interpreted using the same rules as for
ORDER BY (see above). Note that the “first row” of each set is unpredictable unless ORDER BY is used to
ensure that the desired row appears first. For example,
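SELECT DISTINCT ON (location) location, time, report
FROM weather_reports
ORDER BY location, time DESC;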
retrieves the most recent weather report for each location. But if we had not used ORDER BY to force
descending order of time values for each location, we’d have gotten a report from an unpredictable time
for each location.
The DISTINCT ON expression(s) must match the leftmost ORDER BY expression(s). The ORDER BY
clause will normally contain additional expression(s) that determine the desired precedence of rows
within each DISTINCT ON group.
LIMIT Clause
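LIMIT { count | ALL }
OFFSET start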
count specifies the maximum number of rows to return, while start specifies the number of rows to
skip before starting to return rows. When both are specified, start rows are skipped before starting to count the count rows that are returned.
FOR UPDATE Clause
FOR UPDATE causes the rows retrieved by the SELECT statement to be locked as though for update. This
prevents them from being modified or deleted by other transactions until the current transaction ends.
That is, other transactions that attempt UPDATE, DELETE, or SELECT FOR UPDATE of these rows will be
blocked until the current transaction ends. Also, if an UPDATE, DELETE, or SELECT FOR UPDATE from
another transaction has already locked a selected row or rows, SELECT FOR UPDATE will wait for the
other transaction to complete, and will then lock and return the updated row (or no row, if the row was
deleted). For further discussion see Chapter 12.
If specific tables are named in FOR UPDATE, then only rows coming from those tables are locked; any
other tables used in the SELECT are simply read as usual.
FOR UPDATE cannot be used in contexts where returned rows can’t be clearly identified with individual
table rows; for example it can’t be used with aggregation.
FOR UPDATE may appear before LIMIT for compatibility with PostgreSQL versions before 7.3. It effec-
tively executes after LIMIT, however, and so that is the recommended place to write it.
Examples
To join the table films with the table distributors:
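SELECT f.title, f.did, d.name, f.date_prod, f.kind
FROM distributors d, films f
WHERE f.did = d.did;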
...
To sum the column len of all films and group the results by kind:
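SELECT kind, sum(len) AS total FROM films GROUP BY kind;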
kind | total
----------+-------
Action | 07:34
Comedy | 02:58
Drama | 14:28
Musical | 06:42
Romantic | 04:38
To sum the column len of all films, group the results by kind and show those group totals that are less
than 5 hours:
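SELECT kind, sum(len) AS total
FROM films
GROUP BY kind
HAVING sum(len) < interval '5 hours';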
kind | total
----------+-------
Comedy | 02:58
Romantic | 04:38
The following two examples are identical ways of sorting the individual results according to the contents
of the second column (name):
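SELECT * FROM distributors ORDER BY name;
SELECT * FROM distributors ORDER BY 2;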
did | name
-----+------------------
109 | 20th Century Fox
110 | Bavaria Atelier
101 | British Lion
107 | Columbia
102 | Jean Luc Godard
113 | Luso films
104 | Mosfilm
103 | Paramount
106 | Toho
105 | United Artists
111 | Walt Disney
112 | Warner Bros.
108 | Westward
The next example shows how to obtain the union of the tables distributors and actors, restricting
the results to those that begin with the letter W in each table. Only distinct rows are wanted, so the key
word ALL is omitted.
distributors: actors:
did | name id | name
-----+-------------- ----+----------------
108 | Westward 1 | Woody Allen
111 | Walt Disney 2 | Warren Beatty
112 | Warner Bros. 3 | Walter Matthau
... ...
SELECT distributors.name
FROM distributors
WHERE distributors.name LIKE 'W%'
UNION
SELECT actors.name
FROM actors
WHERE actors.name LIKE 'W%';
name
----------------
Walt Disney
Walter Matthau
Warner Bros.
Warren Beatty
Westward
Woody Allen
This example shows how to use a function in the FROM clause, both with and without a column definition
list:
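CREATE FUNCTION distributors(int) RETURNS SETOF distributors AS $$
SELECT * FROM distributors WHERE did = $1;
$$ LANGUAGE SQL;
SELECT * FROM distributors(111);
CREATE FUNCTION distributors_2(int) RETURNS SETOF record AS $$
SELECT * FROM distributors WHERE did = $1;
$$ LANGUAGE SQL;
SELECT * FROM distributors_2(111) AS (f1 int, f2 text);
(The functions distributors(int) and distributors_2(int) are illustrative; any set-returning function can be used in the same way.)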
Compatibility
Of course, the SELECT statement is compatible with the SQL standard. But there are some extensions and
some missing features.
Omitted FROM Clauses
PostgreSQL allows one to omit the FROM clause. It has a straightforward use to compute the results of
simple expressions:
SELECT 2+2;
?column?
----------
4
Some other SQL databases cannot do this except by introducing a dummy one-row table from which to
do the SELECT.
A less obvious use is to abbreviate a normal SELECT from tables:
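SELECT distributors.* WHERE distributors.name = 'Westward';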
did | name
-----+----------
108 | Westward
This works because an implicit FROM item is added for each table that is referenced in other parts of the
SELECT statement but not mentioned in FROM.
While this is a convenient shorthand, it is easy to misuse: a query that refers to a table only implicitly in this way can silently join in a table the user did not intend, producing a different result than expected. To help detect this sort of mistake, PostgreSQL will warn if the implicit-FROM
feature is used in a SELECT statement that also contains an explicit FROM clause. Also, it is possible to
disable the implicit-FROM feature by setting the add_missing_from parameter to false.
Nonstandard Clauses
The clauses DISTINCT ON, LIMIT, and OFFSET are not defined in the SQL standard.
SELECT INTO
Name
SELECT INTO — define a new table from the results of a query
Synopsis
SELECT [ ALL | DISTINCT [ ON ( expression [, ...] ) ] ]
* | expression [ AS output_name ] [, ...]
INTO [ TEMPORARY | TEMP ] [ TABLE ] new_table
[ FROM from_item [, ...] ]
[ WHERE condition ]
[ GROUP BY expression [, ...] ]
[ HAVING condition [, ...] ]
[ { UNION | INTERSECT | EXCEPT } [ ALL ] select ]
[ ORDER BY expression [ ASC | DESC | USING operator ] [, ...] ]
[ LIMIT { count | ALL } ]
[ OFFSET start ]
[ FOR UPDATE [ OF tablename [, ...] ] ]
Description
SELECT INTO creates a new table and fills it with data computed by a query. The data is not returned
to the client, as it is with a normal SELECT. The new table’s columns have the names and data types
associated with the output columns of the SELECT.
Parameters
TEMPORARY or TEMP
If specified, the table is created as a temporary table. Refer to CREATE TABLE for details.
new_table
The name (optionally schema-qualified) of the table to be created.
Notes
CREATE TABLE AS is functionally similar to SELECT INTO. CREATE TABLE AS is the recommended
syntax, since this form of SELECT INTO is not available in ECPG or PL/pgSQL, because they interpret the
INTO clause differently. Furthermore, CREATE TABLE AS offers a superset of the functionality provided
by SELECT INTO.
Prior to PostgreSQL 8.0, the table created by SELECT INTO always included OIDs. As of PostgreSQL
8.0, the inclusion of OIDs in the table created by SELECT INTO is controlled by the default_with_oids
configuration variable. This variable currently defaults to true, but will likely default to false in a future
release of PostgreSQL.
Examples
Create a new table films_recent consisting of only recent entries from the table films:
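SELECT * INTO films_recent FROM films WHERE date_prod >= '2002-01-01';
(Here date_prod is assumed to be the production-date column of films.)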
Compatibility
The SQL standard uses SELECT INTO to represent selecting values into scalar variables of a host program,
rather than creating a new table. This indeed is the usage found in ECPG (see Chapter 29) and PL/pgSQL
(see Chapter 35). The PostgreSQL usage of SELECT INTO to represent table creation is historical. It is
best to use CREATE TABLE AS for this purpose in new code.
See Also
CREATE TABLE AS
SET
Name
SET — change a run-time parameter
Synopsis
SET [ SESSION | LOCAL ] name { TO | = } { value | ’value’ | DEFAULT }
SET [ SESSION | LOCAL ] TIME ZONE { timezone | LOCAL | DEFAULT }
Description
The SET command changes run-time configuration parameters. Many of the run-time parameters listed
in Section 16.4 can be changed on-the-fly with SET. (But some require superuser privileges to change,
and others cannot be changed after server or session start.) SET only affects the value used by the current
session.
If SET or SET SESSION is issued within a transaction that is later aborted, the effects of the SET com-
mand disappear when the transaction is rolled back. (This behavior represents a change from PostgreSQL
versions prior to 7.3, where the effects of SET would not roll back after a later error.) Once the surrounding
transaction is committed, the effects will persist until the end of the session, unless overridden by another
SET.
The effects of SET LOCAL last only till the end of the current transaction, whether committed or not. A
special case is SET followed by SET LOCAL within a single transaction: the SET LOCAL value will be
seen until the end of the transaction, but afterwards (if the transaction is committed) the SET value will
take effect.
Parameters
SESSION
Specifies that the command takes effect for the current session. (This is the default if neither SESSION
nor LOCAL appears.)
LOCAL
Specifies that the command takes effect for only the current transaction. After COMMIT or ROLLBACK,
the session-level setting takes effect again. Note that SET LOCAL will appear to have no effect if it is
executed outside a BEGIN block, since the transaction will end immediately.
name
Name of a settable run-time parameter. Available parameters are documented in Section 16.4 and
below.
value
New value of parameter. Values can be specified as string constants, identifiers, numbers, or comma-
separated lists of these. DEFAULT can be used to specify resetting the parameter to its default value.
Besides the configuration parameters documented in Section 16.4, there are a few that can only be adjusted
using the SET command or that have a special syntax:
NAMES
SET NAMES value is an alias for SET client_encoding TO value.
SEED
Sets the internal seed for the random number generator (the function random). Allowed values are
floating-point numbers between 0 and 1, which are then multiplied by 2^31 - 1.
The seed can also be set by invoking the function setseed:
SELECT setseed(value);
TIME ZONE
SET TIME ZONE value is an alias for SET timezone TO value. The syntax SET TIME ZONE
allows special syntax for the time zone specification. Here are examples of valid values:
’PST8PDT’
The time zone for Berkeley, California.
-7
The time zone 7 hours west from UTC (equivalent to PDT). Positive values are east from UTC.
INTERVAL ’-08:00’ HOUR TO MINUTE
The time zone 8 hours west from UTC (equivalent to PST).
LOCAL
DEFAULT
Set the time zone to your local time zone (the one that the server’s operating system defaults to).
See Section 8.5 for more information about time zones. Also, Appendix B has a list of the recognized
names for time zones.
Notes
The function set_config provides equivalent functionality. See Section 9.20.
Examples
Set the schema search path:
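SET search_path TO my_schema, public;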
Set the style of date to traditional POSTGRES with “day before month” input convention:
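SET datestyle TO postgres, dmy;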
Compatibility
SET TIME ZONE extends syntax defined in the SQL standard. The standard allows only numeric time
zone offsets while PostgreSQL allows more flexible time-zone specifications. All other SET features are
PostgreSQL extensions.
See Also
RESET, SHOW
SET CONSTRAINTS
Name
SET CONSTRAINTS — set constraint checking modes for the current transaction
Synopsis
SET CONSTRAINTS { ALL | name [, ...] } { DEFERRED | IMMEDIATE }
Description
SET CONSTRAINTS sets the behavior of constraint checking within the current transaction. IMMEDIATE
constraints are checked at the end of each statement. DEFERRED constraints are not checked until transac-
tion commit. Each constraint has its own IMMEDIATE or DEFERRED mode.
Upon creation, a constraint is given one of three characteristics: DEFERRABLE INITIALLY DEFERRED,
DEFERRABLE INITIALLY IMMEDIATE, or NOT DEFERRABLE. The third class is always IMMEDIATE and
is not affected by the SET CONSTRAINTS command. The first two classes start every transaction in the
indicated mode, but their behavior can be changed within a transaction by SET CONSTRAINTS.
SET CONSTRAINTS with a list of constraint names changes the mode of just those constraints (which
must all be deferrable). If there are multiple constraints matching any given name, all are affected. SET
CONSTRAINTS ALL changes the mode of all deferrable constraints.
When SET CONSTRAINTS changes the mode of a constraint from DEFERRED to IMMEDIATE, the new
mode takes effect retroactively: any outstanding data modifications that would have been checked at the
end of the transaction are instead checked during the execution of the SET CONSTRAINTS command. If
any such constraint is violated, the SET CONSTRAINTS fails (and does not change the constraint mode).
Thus, SET CONSTRAINTS can be used to force checking of constraints to occur at a specific point in a
transaction.
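For example, a transaction can postpone deferrable foreign-key checks and then force them to be verified before committing (the table and values shown are hypothetical):
BEGIN;
SET CONSTRAINTS ALL DEFERRED;
INSERT INTO order_items VALUES (1, 42); -- deferrable foreign-key checks are postponed
SET CONSTRAINTS ALL IMMEDIATE;          -- outstanding checks are performed now
COMMIT;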
Currently, only foreign key constraints are affected by this setting. Check and unique constraints are
always effectively not deferrable.
Notes
This command only alters the behavior of constraints within the current transaction. Thus, if you execute
this command outside of a transaction block (BEGIN/COMMIT pair), it will not appear to have any effect.
Compatibility
This command complies with the behavior defined in the SQL standard, except for the limitation that, in
PostgreSQL, it only applies to foreign-key constraints.
The SQL standard says that constraint names appearing in SET CONSTRAINTS can be schema-qualified.
This is not yet supported by PostgreSQL: the names must be unqualified, and all constraints matching the
command will be affected no matter which schema they are in.
SET SESSION AUTHORIZATION
Name
SET SESSION AUTHORIZATION — set the session user identifier and the current user identifier of
the current session
Synopsis
SET [ SESSION | LOCAL ] SESSION AUTHORIZATION username
SET [ SESSION | LOCAL ] SESSION AUTHORIZATION DEFAULT
RESET SESSION AUTHORIZATION
Description
This command sets the session user identifier and the current user identifier of the current SQL-session
context to be username. The user name may be written as either an identifier or a string literal. Using
this command, it is possible, for example, to temporarily become an unprivileged user and later switch
back to become a superuser.
The session user identifier is initially set to be the (possibly authenticated) user name provided by the
client. The current user identifier is normally equal to the session user identifier, but may change temporar-
ily in the context of “setuid” functions and similar mechanisms. The current user identifier is relevant for
permission checking.
The session user identifier may be changed only if the initial session user (the authenticated user) had the
superuser privilege. Otherwise, the command is accepted only if it specifies the authenticated user name.
The SESSION and LOCAL modifiers act the same as for the regular SET command.
The DEFAULT and RESET forms reset the session and current user identifiers to be the originally authenti-
cated user name. These forms may be executed by any user.
Examples
SELECT SESSION_USER, CURRENT_USER;
session_user | current_user
--------------+--------------
peter | peter
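SET SESSION AUTHORIZATION 'paul';
SELECT SESSION_USER, CURRENT_USER;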
session_user | current_user
--------------+--------------
paul | paul
Compatibility
The SQL standard allows some other expressions to appear in place of the literal username which are
not important in practice. PostgreSQL allows identifier syntax ("username"), which SQL does not. SQL
does not allow this command during a transaction; PostgreSQL does not make this restriction because
there is no reason to. The privileges necessary to execute this command are left implementation-defined
by the standard.
SET TRANSACTION
Name
SET TRANSACTION — set the characteristics of the current transaction
Synopsis
SET TRANSACTION transaction_mode [, ...]
SET SESSION CHARACTERISTICS AS TRANSACTION transaction_mode [, ...]
Description
The SET TRANSACTION command sets the characteristics of the current transaction. It has no effect on
any subsequent transactions. SET SESSION CHARACTERISTICS sets the default transaction characteris-
tics for subsequent transactions of a session. These defaults can be overridden by SET TRANSACTION for
an individual transaction.
The available transaction characteristics are the transaction isolation level and the transaction access mode
(read/write or read-only).
The isolation level of a transaction determines what data the transaction can see when other transactions
are running concurrently:
READ COMMITTED
A statement can only see rows committed before it began. This is the default.
SERIALIZABLE
All statements of the current transaction can only see rows committed before the first query or data-
modification statement was executed in this transaction.
The SQL standard defines two additional levels, READ UNCOMMITTED and REPEATABLE READ. In Post-
greSQL READ UNCOMMITTED is treated as READ COMMITTED, while REPEATABLE READ is treated as
SERIALIZABLE.
The transaction isolation level cannot be changed after the first query or data-modification statement
(SELECT, INSERT, DELETE, UPDATE, FETCH, or COPY) of a transaction has been executed. See Chapter
12 for more information about transaction isolation and concurrency control.
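For example, to run the current transaction at the stricter isolation level:
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;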
The transaction access mode determines whether the transaction is read/write or read-only. Read/write is
the default. When a transaction is read-only, the following SQL commands are disallowed: INSERT,
UPDATE, DELETE, and COPY TO if the table they would write to is not a temporary table; all CREATE,
ALTER, and DROP commands; COMMENT, GRANT, REVOKE, TRUNCATE; and EXPLAIN ANALYZE and
EXECUTE if the command they would execute is among those listed. This is a high-level notion of
read-only that does not prevent all writes to disk.
Notes
If SET TRANSACTION is executed without a prior START TRANSACTION or BEGIN, it will appear to have
no effect, since the transaction will immediately end.
It is possible to dispense with SET TRANSACTION by instead specifying the desired
transaction_modes in BEGIN or START TRANSACTION.
The session default transaction modes can also be set by setting the configuration parameters
default_transaction_isolation and default_transaction_read_only. (In fact SET SESSION
CHARACTERISTICS is just a verbose equivalent for setting these variables with SET.) This means the
defaults can be set in the configuration file, via ALTER DATABASE, etc. Consult Section 16.4 for more
information.
Compatibility
Both commands are defined in the SQL standard. SERIALIZABLE is the default transaction isolation
level in the standard. In PostgreSQL the default is ordinarily READ COMMITTED, but you can change it as
mentioned above. Because of lack of predicate locking, the SERIALIZABLE level is not truly serializable.
See Chapter 12 for details.
In the SQL standard, there is one other transaction characteristic that can be set with these commands: the
size of the diagnostics area. This concept is specific to embedded SQL, and therefore is not implemented
in the PostgreSQL server.
The SQL standard requires commas between successive transaction_modes, but for historical rea-
sons PostgreSQL allows the commas to be omitted.
SHOW
Name
SHOW — show the value of a run-time parameter
Synopsis
SHOW name
SHOW ALL
Description
SHOW will display the current setting of run-time parameters. These variables can be set using the SET
statement, by editing the postgresql.conf configuration file, through the PGOPTIONS environmental
variable (when using libpq or a libpq-based application), or through command-line flags when starting the
postmaster. See Section 16.4 for details.
Parameters
name
The name of a run-time parameter. Available parameters are documented in Section 16.4 and on the
SET reference page. In addition, there are a few parameters that can be shown but not set:
SERVER_VERSION
Shows the server’s version number.
SERVER_ENCODING
Shows the server-side character set encoding. At present, this parameter can be shown but not
set, because the encoding is determined at database creation time.
LC_COLLATE
Shows the database’s locale setting for collation (text ordering). At present, this parameter can
be shown but not set, because the setting is determined at initdb time.
LC_CTYPE
Shows the database’s locale setting for character classification. At present, this parameter can
be shown but not set, because the setting is determined at initdb time.
IS_SUPERUSER
True if the current session authorization identifier has superuser privileges.
ALL
Show the values of all configuration parameters.
Notes
The function current_setting produces equivalent output. See Section 9.20.
Examples
Show the current setting of the parameter DateStyle:
SHOW DateStyle;
DateStyle
-----------
ISO, MDY
(1 row)
SHOW geqo;
geqo
------
on
(1 row)
SHOW ALL;
name | setting
--------------------------------+----------------------------------------------
add_missing_from | on
archive_command | unset
australian_timezones | off
.
.
.
work_mem | 1024
zero_damaged_pages | off
(140 rows)
Compatibility
The SHOW command is a PostgreSQL extension.
See Also
SET, RESET
START TRANSACTION
Name
START TRANSACTION — start a transaction block
Synopsis
START TRANSACTION [ transaction_mode [, ...] ]
Description
This command begins a new transaction block. If the isolation level or read/write mode is specified, the
new transaction has those characteristics, as if SET TRANSACTION was executed. This is the same as the
BEGIN command.
Parameters
Refer to SET TRANSACTION for information on the meaning of the parameters to this statement.
Compatibility
In the standard, it is not necessary to issue START TRANSACTION to start a transaction block: any SQL
command implicitly begins a block. PostgreSQL’s behavior can be seen as implicitly issuing a COMMIT af-
ter each command that does not follow START TRANSACTION (or BEGIN), and it is therefore often called
“autocommit”. Other relational database systems may offer an autocommit feature as a convenience.
The SQL standard requires commas between successive transaction_modes, but for historical rea-
sons PostgreSQL allows the commas to be omitted.
See also the compatibility section of SET TRANSACTION.
See Also
BEGIN, COMMIT, ROLLBACK, SAVEPOINT, SET TRANSACTION
TRUNCATE
Name
TRUNCATE — empty a table
Synopsis
TRUNCATE [ TABLE ] name
Description
TRUNCATE quickly removes all rows from a table. It has the same effect as an unqualified DELETE but
since it does not actually scan the table it is faster. This is most useful on large tables.
Parameters
name
The name (optionally schema-qualified) of the table to be truncated.
Notes
TRUNCATE cannot be used if there are foreign-key references to the table from other tables. Checking
validity in such cases would require table scans, and the whole point is not to do one.
TRUNCATE will not run any user-defined ON DELETE triggers that might exist for the table.
Examples
Truncate the table bigtable:
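TRUNCATE TABLE bigtable;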
Compatibility
There is no TRUNCATE command in the SQL standard.
UNLISTEN
Name
UNLISTEN — stop listening for a notification
Synopsis
UNLISTEN { name | * }
Description
UNLISTEN is used to remove an existing registration for NOTIFY events. UNLISTEN cancels any existing
registration of the current PostgreSQL session as a listener on the notification name. The special wildcard
* cancels all listener registrations for the current session.
NOTIFY contains a more extensive discussion of the use of LISTEN and NOTIFY.
Parameters
name
Name of a notification (any identifier).
*
All current listen registrations for this session are cleared.
Notes
You may unlisten something you were not listening for; no warning or error will appear.
At the end of each session, UNLISTEN * is automatically executed.
Examples
To make a registration:
LISTEN virtual;
NOTIFY virtual;
Asynchronous notification "virtual" received from server process with PID 8448.
Once UNLISTEN has been executed, further NOTIFY commands will be ignored:
UNLISTEN virtual;
NOTIFY virtual;
-- no NOTIFY event is received
Compatibility
There is no UNLISTEN command in the SQL standard.
See Also
LISTEN, NOTIFY
UPDATE
Name
UPDATE — update rows of a table
Synopsis
UPDATE [ ONLY ] table SET column = { expression | DEFAULT } [, ...]
[ FROM fromlist ]
[ WHERE condition ]
Description
UPDATE changes the values of the specified columns in all rows that satisfy the condition. Only the
columns to be modified need be mentioned in the SET clause; columns not explicitly modified retain
their previous values.
By default, UPDATE will update rows in the specified table and all its subtables. If you wish to only update
the specific table mentioned, you must use the ONLY clause.
There are two ways to modify a table using information contained in other tables in the database: us-
ing sub-selects, or specifying additional tables in the FROM clause. Which technique is more appropriate
depends on the specific circumstances.
You must have the UPDATE privilege on the table to update it, as well as the SELECT privilege to any table
whose values are read in the expressions or condition.
Parameters
table
The name (optionally schema-qualified) of the table to update.
column
The name of a column in table. The column name can be qualified with a subfield name or array
subscript, if needed.
expression
An expression to assign to the column. The expression may use the old values of this and other
columns in the table.
DEFAULT
Set the column to its default value (which will be NULL if no specific default expression has been
assigned to it).
fromlist
A list of table expressions, allowing columns from other tables to appear in the WHERE condition and
the update expressions. This is similar to the list of tables that can be specified in the FROM Clause
of a SELECT statement. Note that the target table must not appear in the fromlist, unless you
intend a self-join (in which case it must appear with an alias in the fromlist).
condition
An expression that returns a value of type boolean. Only rows for which this expression returns
true will be updated.
Outputs
On successful completion, an UPDATE command returns a command tag of the form
UPDATE count
The count is the number of rows updated. If count is 0, no rows matched the condition (this is not
considered an error).
Notes
When a FROM clause is present, what essentially happens is that the target table is joined to the tables
mentioned in the fromlist, and each output row of the join represents an update operation for the
target table. When using FROM you should ensure that the join produces at most one output row for each
row to be modified. In other words, a target row shouldn’t join to more than one row from the other
table(s). If it does, then only one of the join rows will be used to update the target row, but which one will
be used is not readily predictable.
Because of this indeterminacy, referencing other tables only within sub-selects is safer, though often
harder to read and slower than using a join.
Examples
Change the word Drama to Dramatic in the column kind of the table films:
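UPDATE films SET kind = 'Dramatic' WHERE kind = 'Drama';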
Adjust temperature entries and reset precipitation to its default value in one row of the table weather:
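UPDATE weather SET temp_lo = temp_lo+1, temp_hi = temp_lo+15, prcp = DEFAULT
WHERE city = 'San Francisco' AND date = '2003-07-03';
(The column names here are illustrative ones for a weather table.)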
Increment the sales count of the salesperson who manages the account for Acme Corporation, using the
FROM clause syntax:
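UPDATE employees SET sales_count = sales_count + 1 FROM accounts
WHERE accounts.name = 'Acme Corporation'
AND employees.id = accounts.sales_person;
(The tables employees and accounts and their columns are illustrative.)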
Attempt to insert a new stock item along with the quantity of stock. If the item already exists, instead
update the stock count of the existing item. To do this without failing the entire transaction, use savepoints.
BEGIN;
-- other operations
SAVEPOINT sp1;
INSERT INTO wines VALUES('Chateau Lafite 2003', '24');
-- Assume the above fails because of a unique key violation,
-- so now we issue these commands:
ROLLBACK TO sp1;
UPDATE wines SET stock = stock + 24 WHERE winename = 'Chateau Lafite 2003';
-- continue with other operations, and eventually
COMMIT;
Compatibility
This command conforms to the SQL standard, except that the FROM clause is a PostgreSQL extension.
Some other database systems offer a FROM option in which the target table is supposed to be listed again
within FROM. That is not how PostgreSQL interprets FROM. Be careful when porting applications that use
this extension.
VACUUM
Name
VACUUM — garbage-collect and optionally analyze a database
Synopsis
VACUUM [ FULL | FREEZE ] [ VERBOSE ] [ table ]
VACUUM [ FULL | FREEZE ] [ VERBOSE ] ANALYZE [ table [ (column [, ...] ) ] ]
Description
VACUUM reclaims storage occupied by deleted tuples. In normal PostgreSQL operation, tuples that are
deleted or obsoleted by an update are not physically removed from their table; they remain present until
a VACUUM is done. Therefore it’s necessary to do VACUUM periodically, especially on frequently-updated
tables.
With no parameter, VACUUM processes every table in the current database. With a parameter, VACUUM
processes only that table.
VACUUM ANALYZE performs a VACUUM and then an ANALYZE for each selected table. This is a handy
combination form for routine maintenance scripts. See ANALYZE for more details about its processing.
Plain VACUUM (without FULL) simply reclaims space and makes it available for re-use. This form of the
command can operate in parallel with normal reading and writing of the table, as an exclusive lock is not
obtained. VACUUM FULL does more extensive processing, including moving of tuples across blocks to try
to compact the table to the minimum number of disk blocks. This form is much slower and requires an
exclusive lock on each table while it is being processed.
FREEZE is a special-purpose option that causes tuples to be marked “frozen” as soon as possible, rather
than waiting until they are quite old. If this is done when there are no other open transactions in the
same database, then it is guaranteed that all tuples in the database are “frozen” and will not be subject
to transaction ID wraparound problems, no matter how long the database is left unvacuumed. FREEZE
is not recommended for routine use. Its only intended usage is in connection with preparation of user-
defined template databases, or other databases that are completely read-only and will not receive routine
maintenance VACUUM operations. See Chapter 21 for details.
Parameters
FULL
Selects “full” vacuum, which may reclaim more space, but takes much longer and exclusively locks
the table.
FREEZE
Selects aggressive “freezing” of tuples.
VERBOSE
Prints a detailed vacuum activity report for each table.
ANALYZE
Updates statistics used by the planner to determine the most efficient way to execute a query.
table
The name (optionally schema-qualified) of a specific table to vacuum. Defaults to all tables in the
current database.
column
The name of a specific column to analyze. Defaults to all columns.
Outputs
When VERBOSE is specified, VACUUM emits progress messages to indicate which table is currently being
processed. Various statistics about the tables are printed as well.
Notes
We recommend that active production databases be vacuumed frequently (at least nightly), in order to
remove expired rows. After adding or deleting a large number of rows, it may be a good idea to issue a
VACUUM ANALYZE command for the affected table. This will update the system catalogs with the results
of all recent changes, and allow the PostgreSQL query planner to make better choices in planning queries.
The FULL option is not recommended for routine use, but may be useful in special cases. An example is
when you have deleted most of the rows in a table and would like the table to physically shrink to occupy
less disk space. VACUUM FULL will usually shrink the table more than a plain VACUUM would.
Examples
The following is an example from running VACUUM on a table in the regression database:
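VACUUM VERBOSE ANALYZE onek;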
Compatibility
There is no VACUUM statement in the SQL standard.
See Also
vacuumdb
II. PostgreSQL Client Applications
This part contains reference information for PostgreSQL client applications and utilities. Not all of these
commands are of general utility; some may require special privileges. The common feature of these appli-
cations is that they can be run on any host, independent of where the database server resides.
clusterdb
Name
clusterdb — cluster a PostgreSQL database
Synopsis
clusterdb [connection-option...] [--table | -t table] [dbname]
clusterdb [connection-option...] [--all | -a]
Description
clusterdb is a utility for reclustering tables in a PostgreSQL database. It finds tables that have previously
been clustered, and clusters them again on the same index that was last used. Tables that have never been
clustered are not affected.
clusterdb is a wrapper around the SQL command CLUSTER. There is no effective difference between
clustering databases via this utility and via other methods for accessing the server.
Options
clusterdb accepts the following command-line arguments:
-a
--all
Cluster all databases.
dbname
Specifies the name of the database to be clustered. If this is not specified and -a (or --all) is not
used, the database name is read from the environment variable PGDATABASE. If that is not set, the
user name specified for the connection is used.
-e
--echo
Echo the commands that clusterdb generates and sends to the server.
-q
--quiet
Do not display a response.
-t table
--table table
Cluster table only.
clusterdb also accepts the following command-line arguments for connection parameters:
-h host
--host host
Specifies the host name of the machine on which the server is running. If the value begins with a
slash, it is used as the directory for the Unix domain socket.
-p port
--port port
Specifies the TCP port or local Unix domain socket file extension on which the server is listening for
connections.
-U username
--username username
User name to connect as.
Environment
PGDATABASE
Default database to cluster.
PGHOST
PGPORT
PGUSER
Default connection parameters.
Diagnostics
In case of difficulty, see CLUSTER and psql for discussions of potential problems and error messages.
The database server must be running at the targeted host. Also, any default connection settings and envi-
ronment variables used by the libpq front-end library will apply.
Examples
To cluster the database test:
$ clusterdb test
See Also
CLUSTER
createdb
Name
createdb — create a new PostgreSQL database
Synopsis
createdb [connection-option...] [option...] [dbname] [description]
Description
createdb creates a new PostgreSQL database.
Normally, the database user who executes this command becomes the owner of the new database. However
a different owner can be specified via the -O option, if the executing user has appropriate privileges.
createdb is a wrapper around the SQL command CREATE DATABASE. There is no effective difference
between creating databases via this utility and via other methods for accessing the server.
Options
createdb accepts the following command-line arguments:
dbname
Specifies the name of the database to be created. The name must be unique among all PostgreSQL
databases in this cluster. The default is to create a database with the same name as the current system
user.
description
Specifies a comment to be associated with the newly created database.
-D tablespace
--tablespace tablespace
Specifies the default tablespace for the database.
-e
--echo
Echo the commands that createdb generates and sends to the server.
-E encoding
--encoding encoding
Specifies the character encoding scheme to be used in this database. The character sets supported by
the PostgreSQL server are described in Section 20.2.1.
-O owner
--owner owner
Specifies the database user who will own the new database.
-q
--quiet
Do not display a response.
-T template
--template template
Specifies the template database from which to build this database.
The options -D, -E, -O, and -T correspond to options of the underlying SQL command CREATE
DATABASE; see there for more information about them.
createdb also accepts the following command-line arguments for connection parameters:
-h host
--host host
Specifies the host name of the machine on which the server is running. If the value begins with a
slash, it is used as the directory for the Unix domain socket.
-p port
--port port
Specifies the TCP port or the local Unix domain socket file extension on which the server is listening
for connections.
-U username
--username username
User name to connect as.
Environment
PGDATABASE
If set, the name of the database to create, unless overridden on the command line.
PGHOST
PGPORT
PGUSER
Default connection parameters. PGUSER also determines the name of the database to create, if it is
not specified on the command line or by PGDATABASE.
Diagnostics
In case of difficulty, see CREATE DATABASE and psql for discussions of potential problems and error
messages. The database server must be running at the targeted host. Also, any default connection settings
and environment variables used by the libpq front-end library will apply.
Examples
To create the database demo using the default database server:
$ createdb demo
CREATE DATABASE
The response is the same as you would have gotten from running the CREATE DATABASE SQL command.
To create the database demo using the server on host eden, port 5000, using the LATIN1 encoding scheme
with a look at the underlying command:
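$ createdb -p 5000 -h eden -E LATIN1 -e demo
CREATE DATABASE "demo" WITH ENCODING = 'LATIN1'
CREATE DATABASE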
See Also
dropdb, CREATE DATABASE
createlang
Name
createlang — define a new PostgreSQL procedural language
Synopsis
createlang [connection-option...] langname [dbname]
Description
createlang is a utility for adding a new programming language to a PostgreSQL database. createlang can
handle all the languages supplied in the default PostgreSQL distribution, but not languages provided by
other parties.
Although backend programming languages can be added directly using several SQL commands, it is
recommended to use createlang because it performs a number of checks and is much easier to use. See
CREATE LANGUAGE for additional information.
Options
createlang accepts the following command-line arguments:
langname
Specifies the name of the procedural programming language to be defined.
[-d] dbname
[--dbname] dbname
Specifies to which database the language should be added. The default is to use the database with the
same name as the current system user.
-e
--echo
Display SQL commands as they are executed.
-L directory
--pglib directory
Specifies the directory in which the language interpreter is to be found. The directory is normally
found automatically; this option is primarily for debugging purposes.
createlang also accepts the following command-line arguments for connection parameters:
-h host
--host host
Specifies the host name of the machine on which the server is running. If the value begins with a
slash, it is used as the directory for the Unix domain socket.
-p port
--port port
Specifies the TCP port or local Unix domain socket file extension on which the server is listening for
connections.
-U username
--username username
User name to connect as.
Environment
PGDATABASE
Default database to install the language in.
PGHOST
PGPORT
PGUSER
Default connection parameters.
Diagnostics
Most error messages are self-explanatory. If not, run createlang with the --echo option and see under the
respective SQL command for details.
Notes
Use droplang to remove a language.
Examples
To install the language pltcl into the database template1:
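$ createlang pltcl template1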
See Also
droplang, CREATE LANGUAGE
createuser
Name
createuser — define a new PostgreSQL user account
Synopsis
createuser [option...] [username]
Description
createuser creates a new PostgreSQL user. Only superusers (users with usesuper set in the pg_shadow
table) can create new PostgreSQL users, so createuser must be invoked by someone who can connect as a
PostgreSQL superuser.
Being a superuser also implies the ability to bypass access permission checks within the database, so
superuserdom should not be granted lightly.
createuser is a wrapper around the SQL command CREATE USER. There is no effective difference be-
tween creating users via this utility and via other methods for accessing the server.
Options
createuser accepts the following command-line arguments:
username
Specifies the name of the PostgreSQL user to be created. This name must be unique among all
PostgreSQL users.
-a
--adduser
The new user is allowed to create other users. (Note: Actually, this makes the new user a superuser.
The option is poorly named.)
-A
--no-adduser
The new user is not allowed to create other users (i.e., the new user is a regular user, not a superuser).
This is the default.
-d
--createdb
The new user is allowed to create databases.
-D
--no-createdb
The new user is not allowed to create databases. This is the default.
-e
--echo
Echo the commands that createuser generates and sends to the server.
-E
--encrypted
Encrypts the user’s password stored in the database. If not specified, the default password behavior
is used.
-i number
--sysid number
Allows you to pick a non-default user ID for the new user. This is not necessary, but some people
like it.
-N
--unencrypted
Does not encrypt the user’s password stored in the database. If not specified, the default password
behavior is used.
-P
--pwprompt
If given, createuser will issue a prompt for the password of the new user. This is not necessary if you
do not plan on using password authentication.
-q
--quiet
Do not display a response.
You will be prompted for a name and other missing information if it is not specified on the command line.
createuser also accepts the following command-line arguments for connection parameters:
-h host
--host host
Specifies the host name of the machine on which the server is running. If the value begins with a
slash, it is used as the directory for the Unix domain socket.
-p port
--port port
Specifies the TCP port or local Unix domain socket file extension on which the server is listening for
connections.
-U username
--username username
-W
--password
Force password prompt (to connect to the server, not for the password of the new user).
Environment
PGHOST
PGPORT
PGUSER
Default connection parameters.
Diagnostics
In case of difficulty, see CREATE USER and psql for discussions of potential problems and error mes-
sages. The database server must be running at the targeted host. Also, any default connection settings and
environment variables used by the libpq front-end library will apply.
Examples
To create a user joe on the default database server:
$ createuser joe
Is the new user allowed to create databases? (y/n) n
Shall the new user be allowed to create more new users? (y/n) n
CREATE USER
To create the same user joe using the server on host eden, port 5000, avoiding the prompts and taking a
look at the underlying command:
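One plausible invocation (hedged reconstruction; -D and -A suppress the permission prompts and -e echoes the generated CREATE USER statement):
$ createuser -h eden -p 5000 -D -A -e joe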
See Also
dropuser, CREATE USER
dropdb
Name
dropdb — remove a PostgreSQL database
Synopsis
Description
dropdb destroys an existing PostgreSQL database. The user who executes this command must be a
database superuser or the owner of the database.
dropdb is a wrapper around the SQL command DROP DATABASE. There is no effective difference be-
tween dropping databases via this utility and via other methods for accessing the server.
Options
dropdb accepts the following command-line arguments:
dbname
Specifies the name of the database to be removed.
-e
--echo
Echo the commands that dropdb generates and sends to the server.
-i
--interactive
Issues a verification prompt before doing anything destructive.
dropdb also accepts the following command-line arguments for connection parameters:
-h host
--host host
Specifies the host name of the machine on which the server is running. If the value begins with a
slash, it is used as the directory for the Unix domain socket.
-p port
--port port
Specifies the TCP port or local Unix domain socket file extension on which the server is listening for
connections.
-U username
--username username
Environment
PGHOST
PGPORT
PGUSER
Default connection parameters.
Diagnostics
In case of difficulty, see DROP DATABASE and psql for discussions of potential problems and error
messages. The database server must be running at the targeted host. Also, any default connection settings
and environment variables used by the libpq front-end library will apply.
Examples
To destroy the database demo on the default database server:
$ dropdb demo
DROP DATABASE
To destroy the database demo using the server on host eden, port 5000, with verification and a peek at the
underlying command:
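One plausible invocation (hedged reconstruction; -i asks for verification and -e echoes the generated DROP DATABASE statement):
$ dropdb -h eden -p 5000 -i -e demo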
See Also
createdb, DROP DATABASE
droplang
Name
droplang — remove a PostgreSQL procedural language
Synopsis
Description
droplang is a utility for removing an existing programming language from a PostgreSQL database.
droplang can drop any procedural language, even those not supplied by the PostgreSQL distribution.
Although backend programming languages can be removed directly using several SQL commands, it is
recommended to use droplang because it performs a number of checks and is much easier to use. See
DROP LANGUAGE for more.
Options
droplang accepts the following command line arguments:
langname
Specifies the name of the backend programming language to be removed.
[-d] dbname
[--dbname] dbname
Specifies from which database the language should be removed. The default is to use the database
with the same name as the current system user.
-e
--echo
Display the SQL commands as they are executed.
droplang also accepts the following command line arguments for connection parameters:
-h host
--host host
Specifies the host name of the machine on which the server is running. If host begins with a slash, it
is used as the directory for the Unix domain socket.
-p port
--port port
Specifies the Internet TCP/IP port or local Unix domain socket file extension on which the server is
listening for connections.
-U username
--username username
Environment
PGDATABASE
PGHOST
PGPORT
PGUSER
Default connection parameters.
Diagnostics
Most error messages are self-explanatory. If not, run droplang with the --echo option and see under the
respective SQL command for details.
Notes
Use createlang to add a language.
Examples
To remove the language pltcl:
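For instance, assuming the language was installed in a database named dbname (a placeholder):
$ droplang pltcl dbname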
See Also
createlang, DROP LANGUAGE
dropuser
Name
dropuser — remove a PostgreSQL user account
Synopsis
Description
dropuser removes an existing PostgreSQL user and the databases which that user owned. Only superusers
(users with usesuper set in the pg_shadow table) can destroy PostgreSQL users.
dropuser is a wrapper around the SQL command DROP USER. There is no effective difference between
dropping users via this utility and via other methods for accessing the server.
Options
dropuser accepts the following command-line arguments:
username
Specifies the name of the PostgreSQL user to be removed. You will be prompted for a name if none
is specified on the command line.
-e
--echo
Echo the commands that dropuser generates and sends to the server.
-i
--interactive
Prompt for confirmation before actually removing the user.
dropuser also accepts the following command-line arguments for connection parameters:
-h host
--host host
Specifies the host name of the machine on which the server is running. If the value begins with a
slash, it is used as the directory for the Unix domain socket.
-p port
--port port
Specifies the TCP port or local Unix domain socket file extension on which the server is listening for
connections.
-U username
--username username
User name to connect as (not the user name to be dropped).
-W
--password
Force password prompt (to connect to the server, not for the password of the user to be dropped).
Environment
PGHOST
PGPORT
PGUSER
Default connection parameters.
Diagnostics
In case of difficulty, see DROP USER and psql for discussions of potential problems and error messages.
The database server must be running at the targeted host. Also, any default connection settings and envi-
ronment variables used by the libpq front-end library will apply.
Examples
To remove user joe from the default database server:
$ dropuser joe
DROP USER
To remove user joe using the server on host eden, port 5000, with verification and a peek at the underly-
ing command:
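One plausible invocation (hedged reconstruction; -i asks for confirmation and -e echoes the generated DROP USER statement):
$ dropuser -h eden -p 5000 -i -e joe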
See Also
createuser, DROP USER
ecpg
Name
ecpg — embedded SQL C preprocessor
Synopsis
Description
ecpg is the embedded SQL preprocessor for C programs. It converts C programs with embedded SQL
statements to normal C code by replacing the SQL invocations with special function calls. The output files
can then be processed with any C compiler tool chain.
ecpg will convert each input file given on the command line to the corresponding C output file. Input files
preferably have the extension .pgc, in which case the extension will be replaced by .c to determine the
output file name. If the extension of the input file is not .pgc, then the output file name is computed by
appending .c to the full file name. The output file name can also be overridden using the -o option.
This reference page does not describe the embedded SQL language. See Chapter 29 for more information
on that topic.
Options
ecpg accepts the following command-line arguments:
-c
Automatically generate certain C code from SQL code. Currently, this works for EXEC SQL TYPE.
-I directory
Specify an additional include path, used to find files included via EXEC SQL INCLUDE. Defaults are
. (current directory), /usr/local/include, the PostgreSQL include directory which is defined at
compile time (default: /usr/local/pgsql/include), and /usr/include, in that order.
-o filename
Specifies that ecpg should write all its output to the given filename.
-t
Turn on autocommit of transactions. In this mode, each SQL command is automatically committed
unless it is inside an explicit transaction block. In the default mode, commands are committed only
when EXEC SQL COMMIT is issued.
-v
Print additional information including the version and the include path.
--help
Show a brief summary of the command line usage, then exit.
Notes
When compiling the preprocessed C code files, the compiler needs to be able to find the ECPG header
files in the PostgreSQL include directory. Therefore, one might have to use the -I option when invoking
the compiler (e.g., -I/usr/local/pgsql/include).
Programs using C code with embedded SQL have to be linked against the libecpg library, for example
using the linker options -L/usr/local/pgsql/lib -lecpg.
The value of either of these directories that is appropriate for the installation can be found out using
pg_config.
Examples
If you have an embedded SQL C source file named prog1.pgc, you can create an executable program
using the following sequence of commands:
ecpg prog1.pgc
cc -I/usr/local/pgsql/include -c prog1.c
cc -o prog1 prog1.o -L/usr/local/pgsql/lib -lecpg
pg_config
Name
pg_config — retrieve information about the installed version of PostgreSQL
Synopsis
Description
The pg_config utility prints configuration parameters of the currently installed version of PostgreSQL. It
is intended, for example, to be used by software packages that want to interface to PostgreSQL to facilitate
finding the required header files and libraries.
Options
To use pg_config, supply one or more of the following options:
--bindir
Print the location of user executables. Use this, for example, to find the psql program. This is
normally also the location where the pg_config program resides.
--includedir
Print the location of C header files of the client interfaces.
--pkglibdir
Print the location of dynamically loadable modules, or where the server would search for them.
(Other architecture-dependent data files may also be installed in this directory.)
--pgxs
Print the location of extension makefiles.
--configure
Print the options that were given to the configure script when PostgreSQL was configured for
building. This can be used to reproduce the identical configuration, or to find out with what options a
binary package was built. (Note however that binary packages often contain vendor-specific custom
patches.)
--version
Print the version of PostgreSQL.
Notes
The option --includedir-server was new in PostgreSQL 7.2. In prior releases, the server include
files were installed in the same location as the client headers, which could be queried with the option
--includedir. To make your package handle both cases, try the newer option first and test the exit
status to see whether it succeeded.
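A minimal shell sketch of that fallback (the variable name is arbitrary):
INCDIR=`pg_config --includedir-server 2>/dev/null` || INCDIR=`pg_config --includedir`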
In releases prior to PostgreSQL 7.1, before pg_config came to be, a method for finding the equivalent
configuration information did not exist.
History
The pg_config utility first appeared in PostgreSQL 7.1.
pg_dump
Name
pg_dump — extract a PostgreSQL database into a script file or other archive file
Synopsis
Description
pg_dump is a utility for backing up a PostgreSQL database. It makes consistent backups even if the
database is being used concurrently. pg_dump does not block other users accessing the database (readers
or writers).
Dumps can be output in script or archive file formats. Script dumps are plain-text files containing the
SQL commands required to reconstruct the database to the state it was in at the time it was saved. To
restore from such a script, feed it to psql. Script files can be used to reconstruct the database even on other
machines and other architectures; with some modifications even on other SQL database products.
The alternative archive file formats must be used with pg_restore to rebuild the database. They allow
pg_restore to be selective about what is restored, or even to reorder the items prior to being restored. The
archive formats also allow saving and restoring “large objects”, which is not possible in a script dump.
The archive files are also designed to be portable across architectures.
When used with one of the archive file formats and combined with pg_restore, pg_dump provides a flexi-
ble archival and transfer mechanism. pg_dump can be used to backup an entire database, then pg_restore
can be used to examine the archive and/or select which parts of the database are to be restored. The
most flexible output file format is the “custom” format (-Fc). It allows for selection and reordering of
all archived items, and is compressed by default. The tar format (-Ft) is not compressed and it is not
possible to reorder data when loading, but it is otherwise quite flexible; moreover, it can be manipulated
with standard Unix tools such as tar.
While running pg_dump, one should examine the output for any warnings (printed on standard error),
especially in light of the limitations listed below.
Options
The following command-line options control the content and format of the output.
dbname
Specifies the name of the database to be dumped. If this is not specified, the environment variable
PGDATABASE is used. If that is not set, the user name specified for the connection is used.
-a
--data-only
Dump only the data, not the schema (data definitions).
-b
--blobs
Include large objects in the dump. A non-text output format must be selected.
-c
--clean
Output commands to clean (drop) database objects prior to (the commands for) creating them.
This option is only meaningful for the plain-text format. For the archive formats, you may specify
the option when you call pg_restore.
-C
--create
Begin the output with a command to create the database itself and reconnect to the created database.
(With a script of this form, it doesn’t matter which database you connect to before running the script.)
This option is only meaningful for the plain-text format. For the archive formats, you may specify
the option when you call pg_restore.
-d
--inserts
Dump data as INSERT commands (rather than COPY). This will make restoration very slow; it is
mainly useful for making dumps that can be loaded into non-PostgreSQL databases. Note that the
restore may fail altogether if you have rearranged column order. The -D option is safer, though even
slower.
-D
--column-inserts
--attribute-inserts
Dump data as INSERT commands with explicit column names (INSERT INTO table (column,
...) VALUES ...). This will make restoration very slow; it is mainly useful for making dumps
that can be loaded into non-PostgreSQL databases.
-f file
--file=file
Send output to the specified file. If this is omitted, the standard output is used.
-F format
--format=format
Selects the format of the output. format can be one of the following:
p
Output a plain-text SQL script file (the default).
t
Output a tar archive suitable for input into pg_restore. Using this archive format allows reordering
and/or exclusion of database objects at the time the database is restored. It is also possible to limit
which data is reloaded at restore time.
c
Output a custom archive suitable for input into pg_restore. This is the most flexible format in that
it allows reordering of loading data as well as object definitions. This format is also compressed
by default.
-i
--ignore-version
Ignore version mismatch between pg_dump and the database server. pg_dump can handle databases
from previous releases of PostgreSQL, but very old versions are no longer supported. Use this option
only if you need to override the version check.
-n schema
--schema=schema
Dump the contents of schema only. If this option is not specified, all non-system schemas in the
target database will be dumped.
Note: In this mode, pg_dump makes no attempt to dump any other database objects that objects
in the selected schema may depend upon. Therefore, there is no guarantee that the results of a
single-schema dump can be successfully restored by themselves into a clean database.
-o
--oids
Dump object identifiers (OIDs) as part of the data for every table. Use this option if your application
references the OID columns in some way (e.g., in a foreign key constraint). Otherwise, this option
should not be used.
-O
--no-owner
Do not output commands to set ownership of objects to match the original database. By default,
pg_dump issues ALTER OWNER or SET SESSION AUTHORIZATION statements to set ownership of
created database objects. These statements will fail when the script is run unless it is started by a
superuser (or the same user that owns all of the objects in the script). To make a script that can be
restored by any user, but will give that user ownership of all the objects, specify -O.
This option is only meaningful for the plain-text format. For the archive formats, you may specify
the option when you call pg_restore.
-R
--no-reconnect
This option is obsolete but still accepted for backwards compatibility.
-S username
--superuser=username
Specify the superuser user name to use when disabling triggers. This is only relevant if
--disable-triggers is used. (Usually, it’s better to leave this out, and instead start the resulting
script as superuser.)
-t table
--table=table
Dump data for table only. It is possible for there to be multiple tables with the same name in
different schemas; if that is the case, all matching tables will be dumped. Specify both --schema
and --table to select just one table.
Note: In this mode, pg_dump makes no attempt to dump any other database objects that the
selected table may depend upon. Therefore, there is no guarantee that the results of a single-
table dump can be successfully restored by themselves into a clean database.
-v
--verbose
Specifies verbose mode. This will cause pg_dump to output detailed object comments and start/stop
times to the dump file, and progress messages to standard error.
-x
--no-privileges
--no-acl
Prevent dumping of access privileges (grant/revoke commands).
-X disable-dollar-quoting
--disable-dollar-quoting
This option disables the use of dollar quoting for function bodies, and forces them to be quoted using
SQL standard string syntax.
-X disable-triggers
--disable-triggers
This option is only relevant when creating a data-only dump. It instructs pg_dump to include com-
mands to temporarily disable triggers on the target tables while the data is reloaded. Use this if you
have referential integrity checks or other triggers on the tables that you do not want to invoke during
data reload.
Presently, the commands emitted for --disable-triggers must be done as superuser. So, you
should also specify a superuser name with -S, or preferably be careful to start the resulting script as
a superuser.
This option is only meaningful for the plain-text format. For the archive formats, you may specify
the option when you call pg_restore.
-X use-set-session-authorization
--use-set-session-authorization
Output SQL standard SET SESSION AUTHORIZATION commands instead of OWNER TO com-
mands. This makes the dump more standards compatible, but depending on the history of the objects
in the dump, may not restore properly.
-Z 0..9
--compress=0..9
Specify the compression level to use in archive formats that support compression. (Currently only
the custom archive format supports compression.)
-h host
--host=host
Specifies the host name of the machine on which the server is running. If the value begins with a
slash, it is used as the directory for the Unix domain socket. The default is taken from the PGHOST
environment variable, if set, else a Unix domain socket connection is attempted.
-p port
--port=port
Specifies the TCP port or local Unix domain socket file extension on which the server is listening for
connections. Defaults to the PGPORT environment variable, if set, or a compiled-in default.
-U username
Connect as the given user.
-W
Force a password prompt. This should happen automatically if the server requires password authen-
tication.
Environment
PGDATABASE
PGHOST
PGPORT
PGUSER
Default connection parameters.
Diagnostics
pg_dump internally executes SELECT statements. If you have problems running pg_dump, make sure you
are able to select information from the database using, for example, psql.
Notes
If your database cluster has any local additions to the template1 database, be careful to restore the output
of pg_dump into a truly empty database; otherwise you are likely to get errors due to duplicate definitions
of the added objects. To make an empty database without any local additions, copy from template0 not
template1, for example:
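For instance (the database name foo is only a placeholder):
CREATE DATABASE foo WITH TEMPLATE template0;
The following limitations apply: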
• When dumping a single table or as plain text, pg_dump does not handle large objects. Large objects
must be dumped with the entire database using one of the non-text archive formats.
• When a data-only dump is chosen and the option --disable-triggers is used, pg_dump emits
commands to disable triggers on user tables before inserting the data and commands to re-enable them
after the data has been inserted. If the restore is stopped in the middle, the system catalogs may be left
in the wrong state.
• Members of tar archives are limited to a size less than 8 GB. (This is an inherent limitation of the tar file
format.) Therefore this format cannot be used if the textual representation of any one table exceeds that
size. The total size of a tar archive and any of the other output formats is not limited, except possibly by
the operating system.
The dump file produced by pg_dump does not contain the statistics used by the optimizer to make query
planning decisions. Therefore, it is wise to run ANALYZE after restoring from a dump file to ensure good
performance.
Examples
To dump a database:
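A plausible minimal form, writing a plain-text script to a file whose name is illustrative:
$ pg_dump mydb > db.out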
To dump a database called mydb that contains large objects to a tar file:
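A plausible command, combining the tar format with the -b (large objects) switch; the output file name is illustrative:
$ pg_dump -Ft -b mydb > db.tar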
To reload this database (with large objects) to an existing database called newdb:
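A plausible command, matching the file name used above:
$ pg_restore -d newdb db.tar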
History
The pg_dump utility first appeared in Postgres95 release 0.02. The non-plain-text output formats were
introduced in PostgreSQL release 7.1.
See Also
pg_dumpall, pg_restore, psql
pg_dumpall
Name
pg_dumpall — extract a PostgreSQL database cluster into a script file
Synopsis
pg_dumpall [option...]
Description
pg_dumpall is a utility for writing out (“dumping”) all PostgreSQL databases of a cluster into one script
file. The script file contains SQL commands that can be used as input to psql to restore the databases. It
does this by calling pg_dump for each database in a cluster. pg_dumpall also dumps global objects that
are common to all databases. (pg_dump does not save these objects.) This currently includes information
about database users and groups, and access permissions that apply to databases as a whole.
Thus, pg_dumpall is an integrated solution for backing up your databases. But note a limitation: it cannot
dump “large objects”, since pg_dump cannot dump such objects into text files. If you have databases
containing large objects, they should be dumped using one of pg_dump’s non-text output modes.
Since pg_dumpall reads tables from all databases you will most likely have to connect as a database
superuser in order to produce a complete dump. Also you will need superuser privileges to execute the
saved script in order to be allowed to add users and groups, and to create databases.
The SQL script will be written to the standard output. Shell operators should be used to redirect it into a
file.
pg_dumpall needs to connect several times to the PostgreSQL server (once per database). If you use
password authentication it is likely to ask for a password each time. It is convenient to have a ~/.pgpass
file in such cases. See Section 27.12 for more information.
Options
The following command-line options control the content and format of the output.
-a
--data-only
Dump only the data, not the schema (data definitions).
-c
--clean
Include SQL commands to clean (drop) the databases before recreating them.
-d
--inserts
Dump data as INSERT commands (rather than COPY). This will make restoration very slow; it is
mainly useful for making dumps that can be loaded into non-PostgreSQL databases. Note that the
restore may fail altogether if you have rearranged column order. The -D option is safer, though even
slower.
-D
--column-inserts
--attribute-inserts
Dump data as INSERT commands with explicit column names (INSERT INTO table (column,
...) VALUES ...). This will make restoration very slow; it is mainly useful for making dumps
that can be loaded into non-PostgreSQL databases.
-g
--globals-only
Dump only global objects (users and groups), no databases.
-o
--oids
Dump object identifiers (OIDs) as part of the data for every table. Use this option if your application
references the OID columns in some way (e.g., in a foreign key constraint). Otherwise, this option
should not be used.
-O
--no-owner
Do not output commands to set ownership of objects to match the original database. By default,
pg_dumpall issues ALTER OWNER or SET SESSION AUTHORIZATION statements to set ownership
of created schema elements. These statements will fail when the script is run unless it is started by
a superuser (or the same user that owns all of the objects in the script). To make a script that can be
restored by any user, but will give that user ownership of all the objects, specify -O.
-s
--schema-only
Dump only the object definitions (schema), not data.
-S username
--superuser=username
Specify the superuser user name to use when disabling triggers. This is only relevant if
--disable-triggers is used. (Usually, it’s better to leave this out, and instead start the resulting
script as superuser.)
-v
--verbose
Specifies verbose mode. This will cause pg_dumpall to output start/stop times to the dump file, and
progress messages to standard error. It will also enable verbose output in pg_dump.
-x
--no-privileges
--no-acl
Prevent dumping of access privileges (grant/revoke commands).
-X disable-dollar-quoting
--disable-dollar-quoting
This option disables the use of dollar quoting for function bodies, and forces them to be quoted using
SQL standard string syntax.
-X disable-triggers
--disable-triggers
This option is only relevant when creating a data-only dump. It instructs pg_dumpall to include
commands to temporarily disable triggers on the target tables while the data is reloaded. Use this if
you have referential integrity checks or other triggers on the tables that you do not want to invoke
during data reload.
Presently, the commands emitted for --disable-triggers must be done as superuser. So, you
should also specify a superuser name with -S, or preferably be careful to start the resulting script as
a superuser.
-X use-set-session-authorization
--use-set-session-authorization
Output SQL standard SET SESSION AUTHORIZATION commands instead of OWNER TO com-
mands. This makes the dump more standards compatible, but depending on the history of the objects
in the dump, may not restore properly.
-h host
Specifies the host name of the machine on which the database server is running. If the value begins
with a slash, it is used as the directory for the Unix domain socket. The default is taken from the
PGHOST environment variable, if set, else a Unix domain socket connection is attempted.
-p port
Specifies the TCP port or local Unix domain socket file extension on which the server is listening for
connections. Defaults to the PGPORT environment variable, if set, or a compiled-in default.
-U username
Connect as the given user.
-W
Force a password prompt. This should happen automatically if the server requires password authen-
tication.
Environment
PGHOST
PGPORT
PGUSER
Default connection parameters.
Notes
Since pg_dumpall calls pg_dump internally, some diagnostic messages will refer to pg_dump.
Once restored, it is wise to run ANALYZE on each database so the optimizer has useful statistics. You can
also run vacuumdb -a -z to analyze all databases.
Examples
To dump all databases:
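A plausible minimal form (the output file name is illustrative):
$ pg_dumpall > db.out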
(It is not important to which database you connect here since the script file created by pg_dumpall will
contain the appropriate commands to create and connect to the saved databases.)
See Also
pg_dump. Check there for details on possible error conditions.
pg_restore
Name
pg_restore — restore a PostgreSQL database from an archive file created by pg_dump
Synopsis
Description
pg_restore is a utility for restoring a PostgreSQL database from an archive created by pg_dump in one of
the non-plain-text formats. It will issue the commands necessary to reconstruct the database to the state
it was in at the time it was saved. The archive files also allow pg_restore to be selective about what is
restored, or even to reorder the items prior to being restored. The archive files are designed to be portable
across architectures.
pg_restore can operate in two modes: If a database name is specified, the archive is restored directly into
the database. (Large objects can only be restored by using such a direct database connection.) Otherwise,
a script containing the SQL commands necessary to rebuild the database is created (and written to a file
or standard output), similar to the ones created by the pg_dump plain text format. Some of the options
controlling the script output are therefore analogous to pg_dump options.
Obviously, pg_restore cannot restore information that is not present in the archive file. For instance, if the
archive was made using the “dump data as INSERT commands” option, pg_restore will not be able to load
the data using COPY statements.
Options
pg_restore accepts the following command line arguments.
filename
Specifies the location of the archive file to be restored. If not specified, the standard input is used.
-a
--data-only
Restore only the data, not the schema (data definitions).
-C
--create
Create the database before restoring into it. (When this option is used, the database named with -d
is used only to issue the initial CREATE DATABASE command. All data is restored into the database
name that appears in the archive.)
-d dbname
--dbname=dbname
Connect to database dbname and restore directly into the database.
-e
--exit-on-error
Exit if an error is encountered while sending SQL commands to the database. The default is to
continue and to display a count of errors at the end of the restoration.
-f filename
--file=filename
Specify output file for generated script, or for the listing when used with -l. Default is the standard
output.
-F format
--format=format
Specify format of the archive. It is not necessary to specify the format, since pg_restore will deter-
mine the format automatically. If specified, it can be one of the following:
t
The archive is a tar archive. Using this archive format allows reordering and/or exclusion of
schema elements at the time the database is restored. It is also possible to limit which data is
reloaded at restore time.
c
The archive is in the custom format of pg_dump. This is the most flexible format in that it allows
reordering of data load as well as schema elements. This format is also compressed by default.
-i
--ignore-version
Ignore database version checks.
-l
--list
List the contents of the archive. The output of this operation can be used with the -L option to restrict
and reorder the items that are restored.
-L list-file
--use-list=list-file
Restore elements in list-file only, and in the order they appear in the file. Lines can be moved
and may also be commented out by placing a ; at the start of the line. (See below for examples.)
-O
--no-owner
Do not output commands to set ownership of objects to match the original database. By default,
pg_restore issues ALTER OWNER or SET SESSION AUTHORIZATION statements to set ownership of
created schema elements. These statements will fail unless the initial connection to the database is
made by a superuser (or the same user that owns all of the objects in the script). With -O, any user
name can be used for the initial connection, and this user will own all the created objects.
-P function-name(argtype [, ...])
--function=function-name(argtype [, ...])
Restore the named function only. Be careful to spell the function name and arguments exactly as they
appear in the dump file’s table of contents.
-R
--no-reconnect
This option is obsolete but still accepted for backwards compatibility.
-s
--schema-only
Restore only the schema (data definitions), not the data. Sequence values will be reset.
-S username
--superuser=username
Specify the superuser user name to use when disabling triggers. This is only relevant if
--disable-triggers is used.
-t table
--table=table
Restore definition and/or data of the named table only.
-X use-set-session-authorization
--use-set-session-authorization
Output SQL standard SET SESSION AUTHORIZATION commands instead of OWNER TO com-
mands. This makes the dump more standards compatible, but depending on the history of the objects
in the dump, may not restore properly.
-X disable-triggers
--disable-triggers
This option is only relevant when performing a data-only restore. It instructs pg_restore to execute
commands to temporarily disable triggers on the target tables while the data is reloaded. Use this if
you have referential integrity checks or other triggers on the tables that you do not want to invoke
during data reload.
Presently, the commands emitted for --disable-triggers must be done as superuser. So, you
should also specify a superuser name with -S, or preferably run pg_restore as a PostgreSQL supe-
ruser.
pg_restore also accepts the following command line arguments for connection parameters:
-h host
--host=host
Specifies the host name of the machine on which the server is running. If the value begins with a
slash, it is used as the directory for the Unix domain socket. The default is taken from the PGHOST
environment variable, if set, else a Unix domain socket connection is attempted.
-p port
--port=port
Specifies the TCP port or local Unix domain socket file extension on which the server is listening for
connections. Defaults to the PGPORT environment variable, if set, or a compiled-in default.
-U username
Connect as the given user.
-W
Force a password prompt. This should happen automatically if the server requires password authen-
tication.
Environment
PGHOST
PGPORT
PGUSER
Default connection parameters.
Diagnostics
When a direct database connection is specified using the -d option, pg_restore internally executes SQL
statements. If you have problems running pg_restore, make sure you are able to select information from
the database using, for example, psql.
Notes
If your installation has any local additions to the template1 database, be careful to load the output of
pg_restore into a truly empty database; otherwise you are likely to get errors due to duplicate definitions
of the added objects. To make an empty database without any local additions, copy from template0 not
template1, for example:
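For instance (the database name foo is only a placeholder):
CREATE DATABASE foo WITH TEMPLATE template0;
Also note the following limitations: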
• When restoring data to a pre-existing table and the option --disable-triggers is used, pg_restore
emits commands to disable triggers on user tables before inserting the data then emits commands to re-
enable them after the data has been inserted. If the restore is stopped in the middle, the system catalogs
may be left in the wrong state.
• pg_restore will not restore large objects for a single table. If an archive contains large objects, then all
large objects will be restored.
Examples
To dump a database called mydb that contains large objects to a tar file:
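A plausible command for this step, using pg_dump with the tar format and the large-object switch; file names are illustrative:
$ pg_dump -Ft -b mydb > db.tar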
To reload this database (with large objects) to an existing database called newdb:
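A plausible command, matching the file name used above:
$ pg_restore -d newdb db.tar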
To reorder database items, it is first necessary to dump the table of contents of the archive:
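A plausible invocation, using the -l (list) switch described above; file names are illustrative:
$ pg_restore -l db.tar > db.list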
The listing file consists of a header and one line for each item, e.g.,
;
; Archive created at Fri Jul 28 22:28:36 2000
; dbname: birds
; TOC Entries: 74
; Compression: 0
; Dump Version: 1.4-0
; Format: CUSTOM
;
;
; Selected TOC Entries:
;
2; 145344 TABLE species postgres
3; 145344 ACL species
4; 145359 TABLE nt_header postgres
5; 145359 ACL nt_header
6; 145402 TABLE species_records postgres
7; 145402 ACL species_records
8; 145416 TABLE ss_old postgres
9; 145416 ACL ss_old
10; 145433 TABLE map_resolutions postgres
11; 145433 ACL map_resolutions
12; 145443 TABLE hs_old postgres
13; 145443 ACL hs_old
Semicolons start a comment, and the numbers at the start of lines refer to the internal archive ID assigned
to each item.
Lines in the file can be commented out, deleted, and reordered. For example,
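a hand-edited list along these lines (the archive IDs are those shown above, with unwanted entries commented out by a leading semicolon):
10; 145433 TABLE map_resolutions postgres
;2; 145344 TABLE species postgres
;4; 145359 TABLE nt_header postgres
6; 145402 TABLE species_records postgres
;8; 145416 TABLE ss_old postgres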
could be used as input to pg_restore and would only restore items 10 and 6, in that order:
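(file names below match the illustrative ones used above)
$ pg_restore -L db.list db.tar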
History
The pg_restore utility first appeared in PostgreSQL 7.1.
See Also
pg_dump, pg_dumpall, psql
psql
Name
psql — PostgreSQL interactive terminal
Synopsis
Description
psql is a terminal-based front-end to PostgreSQL. It enables you to type in queries interactively, issue them
to PostgreSQL, and see the query results. Alternatively, input can be from a file. In addition, it provides
a number of meta-commands and various shell-like features to facilitate writing scripts and automating a
wide variety of tasks.
Options
-a
--echo-all
Print all input lines to standard output as they are read. This is more useful for script processing
rather than interactive mode. This is equivalent to setting the variable ECHO to all.
-A
--no-align
Switches to unaligned output mode. (The default output mode is otherwise aligned.)
-c command
--command command
Specifies that psql is to execute one command string, command, and then exit. This is useful in shell
scripts.
command must be either a command string that is completely parsable by the server (i.e., it contains
no psql specific features), or a single backslash command. Thus you cannot mix SQL and psql meta-
commands. To achieve that, you could pipe the string into psql, like this: echo "\x \\ select *
from foo;" | psql.
If the command string contains multiple SQL commands, they are processed in a single transaction,
unless there are explicit BEGIN/COMMIT commands included in the string to divide it into multiple
transactions. This is different from the behavior when the same string is fed to psql’s standard input.
-d dbname
--dbname dbname
Specifies the name of the database to connect to. This is equivalent to specifying dbname as the first
non-option argument on the command line.
-e
--echo-queries
Copy all SQL commands sent to the server to standard output as well. This is equivalent to setting
the variable ECHO to queries.
-E
--echo-hidden
Echo the actual queries generated by \d and other backslash commands. You can use this to study
psql’s internal operations. This is equivalent to setting the variable ECHO_HIDDEN from within psql.
-f filename
--file filename
Use the file filename as the source of commands instead of reading commands interactively. After
the file is processed, psql terminates. This is in many ways equivalent to the internal command \i.
If filename is - (hyphen), then standard input is read.
Using this option is subtly different from writing psql < filename. In general, both will do what
you expect, but using -f enables some nice features such as error messages with line numbers. There
is also a slight chance that using this option will reduce the start-up overhead. On the other hand, the
variant using the shell’s input redirection is (in theory) guaranteed to yield exactly the same output
that you would have gotten had you entered everything by hand.
-F separator
--field-separator separator
Use separator as the field separator. This is equivalent to \pset fieldsep or \f.
-h hostname
--host hostname
Specifies the host name of the machine on which the server is running. If the value begins with a
slash, it is used as the directory for the Unix-domain socket.
-H
--html
Turn on HTML tabular output. This is equivalent to \pset format html or the \H command.
-l
--list
List all available databases, then exit. Other non-connection options are ignored. This is similar to
the internal command \list.
-o filename
--output filename
Put all query output into file filename. This is equivalent to the command \o.
-p port
--port port
Specifies the TCP port or the local Unix-domain socket file extension on which the server is listening
for connections. Defaults to the value of the PGPORT environment variable or, if not set, to the port
specified at compile time, usually 5432.
-P assignment
--pset assignment
Allows you to specify printing options in the style of \pset on the command line. Note that here
you have to separate name and value with an equal sign instead of a space. Thus to set the output
format to LaTeX, you could write -P format=latex.
-q
--quiet
Specifies that psql should do its work quietly. By default, it prints welcome messages and various
informational output. If this option is used, none of this happens. This is useful with the -c option.
Within psql you can also set the QUIET variable to achieve the same effect.
-R separator
--record-separator separator
Use separator as the record separator. This is equivalent to the \pset recordsep command.
-s
--single-step
Run in single-step mode. That means the user is prompted before each command is sent to the server,
with the option to cancel execution as well. Use this to debug scripts.
-S
--single-line
Runs in single-line mode where a newline terminates an SQL command, as a semicolon does.
Note: This mode is provided for those who insist on it, but you are not necessarily encouraged
to use it. In particular, if you mix SQL and meta-commands on a line the order of execution might
not always be clear to the inexperienced user.
-t
--tuples-only
Turn off printing of column names and result row count footers, etc. This is equivalent to the \t
command.
-T table_options
--table-attr table_options
Allows you to specify options to be placed within the HTML table tag. See \pset for details.
-u
Makes psql prompt for the user name and password before connecting to the database.
This option is deprecated, as it is conceptually flawed. (Prompting for a non-default user name and
prompting for a password because the server requires it are really two different things.) You are
encouraged to look at the -U and -W options instead.
-U username
--username username
Connect to the database as the user username instead of the default. (You must have permission to
do so, of course.)
-v assignment
--set assignment
--variable assignment
Perform a variable assignment, like the \set internal command. Note that you must separate name
and value, if any, by an equal sign on the command line. To unset a variable, leave off the equal sign.
To just set a variable without a value, use the equal sign but leave off the value. These assignments
are done during a very early stage of start-up, so variables reserved for internal purposes might get
overwritten later.
-V
--version
Show the psql version and exit.
-W
--password
Cause psql to prompt for a password before connecting to a database. This will remain set for the
entire session, even if you change the database connection with the meta-command \connect.
In the current version, psql automatically issues a password prompt whenever the server requests
password authentication. Because this is currently based on a hack, the automatic recognition might
mysteriously fail, hence this option to force a prompt. If no password prompt is issued and the server
requires password authentication, the connection attempt will fail.
-x
--expanded
Turn on the extended table formatting mode. This is equivalent to the command \x.
-X
--no-psqlrc
Do not read the start-up file (neither the system-wide psqlrc file nor the user’s ~/.psqlrc file).
-?
--help
Show help about psql command line arguments, and exit.
Exit Status
psql returns 0 to the shell if it finished normally, 1 if a fatal error of its own (out of memory, file not found)
occurs, 2 if the connection to the server went bad and the session was not interactive, and 3 if an error
occurred in a script and the variable ON_ERROR_STOP was set.
Usage
Connecting To A Database
psql is a regular PostgreSQL client application. In order to connect to a database you need to know the
name of your target database, the host name and port number of the server and what user name you want
to connect as. psql can be told about those parameters via command line options, namely -d, -h, -p,
and -U respectively. If an argument is found that does not belong to any option it will be interpreted as
the database name (or the user name, if the database name is already given). Not all these options are
required; there are useful defaults. If you omit the host name, psql will connect via a Unix-domain socket
to a server on the local host, or via TCP/IP to localhost on machines that don’t have Unix-domain
sockets. The default port number is determined at compile time. Since the database server uses the same
default, you will not have to specify the port in most cases. The default user name is your Unix user name,
as is the default database name. Note that you can’t just connect to any database under any user name.
Your database administrator should have informed you about your access rights.
When the defaults aren’t quite right, you can save yourself some typing by setting the environment vari-
ables PGDATABASE, PGHOST, PGPORT and/or PGUSER to appropriate values.
If the connection could not be made for any reason (e.g., insufficient privileges, server is not running on
the targeted host, etc.), psql will return an error and terminate.
$ psql testdb
Welcome to psql 8.0.0, the PostgreSQL interactive terminal.
testdb=>
At the prompt, the user may type in SQL commands. Ordinarily, input lines are sent to the server when
a command-terminating semicolon is reached. An end of line does not terminate a command. Thus com-
mands can be spread over several lines for clarity. If the command was sent and executed without error,
the results of the command are displayed on the screen.
Whenever a command is executed, psql also polls for asynchronous notification events generated by LIS-
TEN and NOTIFY.
Meta-Commands
Anything you enter in psql that begins with an unquoted backslash is a psql meta-command that is pro-
cessed by psql itself. These commands help make psql more useful for administration or scripting. Meta-
commands are more commonly called slash or backslash commands.
The format of a psql command is the backslash, followed immediately by a command verb, then any argu-
ments. The arguments are separated from the command verb and each other by any number of whitespace
characters.
To include whitespace into an argument you may quote it with a single quote. To include a single quote
into such an argument, precede it by a backslash. Anything contained in single quotes is furthermore
subject to C-like substitutions for \n (new line), \t (tab), \digits, \0digits, and \0xdigits (the
character with the given decimal, octal, or hexadecimal code).
If an unquoted argument begins with a colon (:), it is taken as a psql variable and the value of the variable
is used as the argument instead.
Arguments that are enclosed in backquotes (‘) are taken as a command line that is passed to the shell. The
output of the command (with any trailing newline removed) is taken as the argument value. The above
escape sequences also apply in backquotes.
Some commands take an SQL identifier (such as a table name) as argument. These arguments follow the
syntax rules of SQL: Unquoted letters are forced to lowercase, while double quotes (") protect letters
from case conversion and allow incorporation of whitespace into the identifier. Within double quotes,
paired double quotes reduce to a single double quote in the resulting name. For example, FOO"BAR"BAZ
is interpreted as fooBARbaz, and "A weird"" name" becomes A weird" name.
Parsing for arguments stops when another unquoted backslash occurs. This is taken as the beginning
of a new meta-command. The special sequence \\ (two backslashes) marks the end of arguments and
continues parsing SQL commands, if any. That way SQL and psql commands can be freely mixed on a
line. But in any case, the arguments of a meta-command cannot continue beyond the end of the line.
The following meta-commands are defined:
\a
If the current table output format is unaligned, it is switched to aligned. If it is not unaligned, it is
set to unaligned. This command is kept for backwards compatibility. See \pset for a more general
solution.
\cd [ directory ]
Changes the current working directory to directory. Without argument, changes to the current
user’s home directory.
\C [ title ]
Sets the title of any tables being printed as the result of a query or unset any such title. This command
is equivalent to \pset title title. (The name of this command derives from “caption”, as it was
previously only used to set the caption in an HTML table.)
\c (or \connect) [ dbname [ username ] ]
Establishes a connection to a new database and/or under a user name. The previous connection is
closed. If dbname is - the current database name is assumed.
If username is omitted the current user name is assumed.
As a special rule, \connect without any arguments will connect to the default database as the default
user (as you would have gotten by starting psql without any arguments).
If the connection attempt failed (wrong user name, access denied, etc.), the previous connection
will be kept if and only if psql is in interactive mode. When executing a non-interactive script,
processing will immediately stop with an error. This distinction was chosen as a user convenience
against typos on the one hand, and a safety mechanism that scripts are not accidentally acting on the
wrong database on the other hand.
\copy table [ ( column_list ) ] { from | to } { filename | stdin | stdout
| pstdin | pstdout } [ with ] [ oids ] [ delimiter [ as ] ’character’ ] [
null [ as ] ’string’ ] [ csv [ quote [ as ] ’character’ ] [ escape [ as ]
’character’ ] [ force quote column_list ] [ force not null column_list ] ]
Performs a frontend (client) copy. This is an operation that runs an SQL COPY command, but instead
of the server reading or writing the specified file, psql reads or writes the file and routes the data
between the server and the local file system. This means that file accessibility and privileges are
those of the local user, not the server, and no SQL superuser privileges are required.
The syntax of the command is similar to that of the SQL COPY command. Note that, because of this,
special parsing rules apply to the \copy command. In particular, the variable substitution rules and
backslash escapes do not apply.
\copy table from stdin | stdout reads/writes based on the command input and output re-
spectively. All rows are read from the same source that issued the command, continuing until \. is
read or the stream reaches EOF. Output is sent to the same place as command output. To read/write
from psql’s standard input or output, use pstdin or pstdout. This option is useful for populating
tables in-line within a SQL script file.
Tip: This operation is not as efficient as the SQL COPY command because all data must pass
through the client/server connection. For large amounts of data the SQL command may be
preferable.
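As an illustration (the table name, file name, and delimiter here are hypothetical):
=> \copy mytable from '/tmp/data.txt' with delimiter '|'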
\copyright
Shows the copyright and distribution terms of PostgreSQL.
\d [ pattern ]
\d+ [ pattern ]
For each relation (table, view, index, or sequence) matching the pattern, show all columns, their
types, the tablespace (if not the default) and any special attributes such as NOT NULL or defaults, if
any. Associated indexes, constraints, rules, and triggers are also shown, as is the view definition if
the relation is a view. (“Matching the pattern” is defined below.)
The command form \d+ is identical, except that more information is displayed: any comments asso-
ciated with the columns of the table are shown, as is the presence of OIDs in the table.
Note: If \d is used without a pattern argument, it is equivalent to \dtvs which will show a list
of all tables, views, and sequences. This is purely a convenience measure.
\da [ pattern ]
Lists all available aggregate functions, together with the data type they operate on. If pattern is
specified, only aggregates whose names match the pattern are shown.
\db [ pattern ]
\db+ [ pattern ]
Lists all available tablespaces. If pattern is specified, only tablespaces whose names match the
pattern are shown. If + is appended to the command name, each object is listed with its associated
permissions.
\dc [ pattern ]
Lists all available conversions between character-set encodings. If pattern is specified, only con-
versions whose names match the pattern are listed.
\dd [ pattern ]
Shows the descriptions of objects matching the pattern, or of all visible objects if no argument is
given. But in either case, only objects that have a description are listed. (“Object” covers aggregates,
functions, operators, types, relations (tables, views, indexes, sequences, large objects), rules, and
triggers.) For example:
=> \dd version
Object descriptions
Schema | Name | Object | Description
------------+---------+----------+---------------------------
pg_catalog | version | function | PostgreSQL version string
(1 row)
Descriptions for objects can be created with the COMMENT SQL command.
\dD [ pattern ]
Lists all available domains. If pattern is specified, only matching domains are shown.
\df [ pattern ]
\df+ [ pattern ]
Lists available functions, together with their argument and return types. If pattern is specified,
only functions whose names match the pattern are shown. If the form \df+ is used, additional infor-
mation about each function, including language and description, is shown.
Note: To look up functions taking argument or returning values of a specific type, use your pager’s
search capability to scroll through the \df output.
To reduce clutter, \df does not show data type I/O functions. This is implemented by ignoring
functions that accept or return type cstring.
\dg [ pattern ]
Lists all database groups. If pattern is specified, only those groups whose names match the pattern
are listed.
\distvS [ pattern ]
This is not the actual command name: the letters i, s, t, v, S stand for index, sequence, table, view,
and system table, respectively. You can specify any or all of these letters, in any order, to obtain a
listing of all the matching objects. The letter S restricts the listing to system objects; without S, only
non-system objects are shown. If + is appended to the command name, each object is listed with its
associated description, if any.
If pattern is specified, only objects whose names match the pattern are listed.
\dn [ pattern ]
\dn+ [ pattern ]
Lists all available schemas (namespaces). If pattern (a regular expression) is specified, only
schemas whose names match the pattern are listed. Non-local temporary schemas are suppressed.
If + is appended to the command name, each object is listed with its associated permissions and
description, if any.
\do [ pattern ]
Lists available operators with their operand and return types. If pattern is specified, only operators
whose names match the pattern are listed.
\dp [ pattern ]
Produces a list of all available tables, views and sequences with their associated access privileges. If
pattern is specified, only tables, views and sequences whose names match the pattern are listed.
The commands GRANT and REVOKE are used to set access privileges. See GRANT for more
information.
\dT [ pattern ]
\dT+ [ pattern ]
Lists all data types or only those that match pattern. The command form \dT+ shows extra infor-
mation.
\du [ pattern ]
Lists all database users or only those that match pattern.
\e (or \edit) [ filename ]
If filename is specified, the file is edited; after the editor exits, its content is copied back to the
query buffer. If no argument is given, the current query buffer is copied to a temporary file which is
then edited in the same fashion.
The new query buffer is then re-parsed according to the normal rules of psql, where the whole buffer
is treated as a single line. (Thus you cannot make scripts this way. Use \i for that.) This means also
that if the query ends with (or rather contains) a semicolon, it is immediately executed. In other cases
it will merely wait in the query buffer.
Tip: psql searches the environment variables PSQL_EDITOR, EDITOR, and VISUAL (in that order)
for an editor to use. If all of them are unset, vi is used on Unix systems, notepad.exe on
Windows systems.
\echo text [ ... ]
Prints the arguments to the standard output, separated by one space and followed by a newline. This
can be useful to intersperse information in the output of scripts. For example:
=> \echo ‘date‘
Tue Oct 26 21:40:57 CEST 1999
Tip: If you use the \o command to redirect your query output you may wish to use \qecho instead
of this command.
\encoding [ encoding ]
Sets the client character set encoding. Without an argument, this command shows the current encod-
ing.
\f [ string ]
Sets the field separator for unaligned query output. The default is the vertical bar (|). See also \pset
for a generic way of setting output options.
\g [ { filename | |command } ]
Sends the current query input buffer to the server and optionally stores the query’s output in
filename or pipes the output into a separate Unix shell executing command. A bare \g
is virtually equivalent to a semicolon. A \g with argument is a “one-shot” alternative to the \o
command.
\help (or \h) [ command ]
Gives syntax help on the specified SQL command. If command is not specified, then psql will list
all the commands for which syntax help is available. If command is an asterisk (*), then syntax help
on all SQL commands is shown.
Note: To simplify typing, commands that consist of several words do not have to be quoted.
Thus it is fine to type \help alter table.
\H
Turns on HTML query output format. If the HTML format is already on, it is switched back to the
default aligned text format. This command is for compatibility and convenience, but see \pset about
setting other output options.
\i filename
Reads input from the file filename and executes it as though it had been typed on the keyboard.
Note: If you want to see the lines on the screen as they are read you must set the variable ECHO
to all.
\l (or \list)
\l+ (or \list+)
List the names, owners, and character set encodings of all the databases in the server. If + is appended
to the command name, database descriptions are also displayed.
\lo_export loid filename
Reads the large object with OID loid from the database and writes it to filename. Note that this
is subtly different from the server function lo_export, which acts with the permissions of the user
that the database server runs as and on the server’s file system.
\lo_import filename [ comment ]
Stores the file into a PostgreSQL large object. Optionally, it associates the given comment with the
object. Example:
foo=> \lo_import ’/home/peter/pictures/photo.xcf’ ’a picture of me’
lo_import 152801
The response indicates that the large object received object ID 152801 which one ought to remember
if one wants to access the object ever again. For that reason it is recommended to always associate a
human-readable comment with every object. Those can then be seen with the \lo_list command.
Note that this command is subtly different from the server-side lo_import because it acts as the
local user on the local file system, rather than the server’s user and file system.
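Continuing the example above, the object could later be written back out with \lo_export (target path assumed):
foo=> \lo_export 152801 '/tmp/photo_copy.xcf'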
\lo_list
Shows a list of all PostgreSQL large objects currently stored in the database, along with any com-
ments provided for them.
\lo_unlink loid
Deletes the large object with OID loid from the database.
\o [ {filename | |command} ]
Saves future query results to the file filename or pipes future results into a separate Unix shell
to execute command. If no arguments are specified, the query output will be reset to the standard
output.
“Query results” includes all tables, command responses, and notices obtained from the database
server, as well as output of various backslash commands that query the database (such as \d), but not
error messages.
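A short sequence sketching the idea (file name and table assumed):
testdb=> \o /tmp/results.txt
testdb=> SELECT * FROM my_table;
testdb=> \o
The SELECT output goes to /tmp/results.txt, and the final \o restores output to the standard output.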
\p
Print the current query buffer to the standard output.
\pset parameter [ value ]
This command sets options affecting the output of query result tables. parameter describes which
option is to be set. The semantics of value depend thereon.
Adjustable printing options are:
format
Sets the output format to one of unaligned, aligned, html, or latex. Unique abbreviations
are allowed. (That would mean one letter is enough.)
“Unaligned” writes all columns of a row on one line, separated by the currently active field separator.
This is useful for producing output meant to be read in by other programs (for example, tab- or
comma-separated files). “Aligned” mode is the standard, human-readable, nicely formatted text
output that is the default. The “HTML” and “LaTeX” modes put out tables intended to be included
in documents using the respective mark-up language. They are not complete documents! (This might
not be so dramatic in HTML, but in LaTeX you must have a complete document wrapper.)
border
The second argument must be a number. In general, the higher the number the more borders
and lines the tables will have, but this depends on the particular format. In HTML mode, this
will translate directly into the border=... attribute, in the others only values 0 (no border), 1
(internal dividing lines), and 2 (table frame) make sense.
expanded (or x)
Toggles between regular and expanded format. When expanded format is enabled, all output
has two columns with the column name on the left and the data on the right. This mode is useful
if the data wouldn’t fit on the screen in the normal “horizontal” mode.
Expanded mode is supported by all four output formats.
null
The second argument is a string that should be printed whenever a column is null. The default
is not to print anything, which can easily be mistaken for, say, an empty string. Thus, one might
choose to write \pset null ’(null)’.
fieldsep
Specifies the field separator to be used in unaligned output mode. That way one can create,
for example, tab- or comma-separated output, which other programs might prefer. To set a tab
as field separator, type \pset fieldsep ’\t’. The default field separator is ’|’ (a vertical
bar).
footer
Toggles the display of the default footer (x rows).
recordsep
Specifies the record (line) separator to use in unaligned output mode. The default is a newline
character.
tuples_only (or t)
Toggles between tuples only and full display. Full display may show extra information such as
column headers, titles, and various footers. In tuples only mode, only actual table data is shown.
title [ text ]
Sets the table title for any subsequently printed tables. This can be used to give your output
descriptive tags. If no argument is given, the title is unset.
tableattr (or T) [ text ]
Allows you to specify any attributes to be placed inside the HTML table tag. This could for
example be cellpadding or bgcolor. Note that you probably don’t want to specify border
here, as that is already taken care of by \pset border.
pager
Controls use of a pager for query and psql help output. If the environment variable PAGER is set,
the output is piped to the specified program. Otherwise a platform-dependent default (such as
more) is used.
When the pager is off, the pager is not used. When the pager is on, the pager is used only when
appropriate, i.e. the output is to a terminal and will not fit on the screen. (psql does not do a
perfect job of estimating when to use the pager.) \pset pager turns the pager on and off.
Pager can also be set to always, which causes the pager to be always used.
Illustrations on how these different formats look can be seen in the Examples section.
Tip: There are various shortcut commands for \pset. See \a, \C, \H, \t, \T, and \x.
Note: It is an error to call \pset without arguments. In the future this call might show the current
status of all printing options.
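For example, an illustrative sequence (settings chosen arbitrarily):
testdb=> \pset null '(null)'
Null display is "(null)".
testdb=> \pset expanded
Expanded display is on.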
\q
Quits the psql program.
\qecho text [ ... ]
This command is identical to \echo except that the output will be written to the query output channel,
as set by \o.
\r
Resets (clears) the query buffer.
\s [ filename ]
Print or save the command line history to filename. If filename is omitted, the history is written
to the standard output. This option is only available if psql is configured to use the GNU Readline
library.
Note: In the current version, it is no longer necessary to save the command history, since that
will be done automatically on program termination. The history is also loaded automatically every
time psql starts up.
\set [ name [ value [ ... ] ] ]
Sets the internal variable name to value or, if more than one value is given, to the concatenation of
all of them. If no second argument is given, the variable is just set with no value. To unset a variable,
use the \unset command.
Valid variable names can contain letters, digits, and underscores. See the section Variables below
for details. Variable names are case-sensitive.
Although you are welcome to set any variable to anything you want, psql treats several variables as
special. They are documented in the section about variables.
Note: This command is totally separate from the SQL command SET .
\t
Toggles the display of output column name headings and row count footer. This command is equiv-
alent to \pset tuples_only and is provided for convenience.
\T table_options
Allows you to specify attributes to be placed within the table tag in HTML tabular output mode.
This command is equivalent to \pset tableattr table_options.
\timing
Toggles a display of how long each SQL statement takes, in milliseconds.
\w {filename | |command}
Outputs the current query buffer to the file filename or pipes it to the Unix command command.
\x
Toggles expanded table formatting mode. As such it is equivalent to \pset expanded.
\z [ pattern ]
Produces a list of all available tables, views and sequences with their associated access privileges. If
a pattern is specified, only tables, views and sequences whose names match the pattern are listed.
The commands GRANT and REVOKE are used to set access privileges. See GRANT for more
information.
This is an alias for \dp (“display privileges”).
\! [ command ]
Escapes to a separate Unix shell or executes the Unix command command. The arguments are not
further interpreted, the shell will see them as is.
\?
Shows help information about the backslash commands.
Patterns
The various \d commands accept a pattern parameter to specify the object name(s) to be displayed. *
means “any sequence of characters” and ? means “any single character”. (This notation is comparable to
Unix shell file name patterns.) Advanced users can also use regular-expression notations such as character
classes, for example [0-9] to match “any digit”. To make any of these pattern-matching characters be
interpreted literally, surround it with double quotes.
A pattern that contains an (unquoted) dot is interpreted as a schema name pattern followed by an object
name pattern. For example, \dt foo*.bar* displays all tables in schemas whose name starts with foo
and whose table name starts with bar. If no dot appears, then the pattern matches only objects that are
visible in the current schema search path.
Whenever the pattern parameter is omitted completely, the \d commands display all objects that are
visible in the current schema search path. To see all objects in the database, use the pattern *.*.
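A couple of hypothetical illustrations (schema and table names assumed):
testdb=> \dt public.emp*
lists all tables in schema public whose names begin with emp, while
testdb=> \dt "emp?"
matches only a table literally named emp?, because the double quotes make the ? literal.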
Advanced features
Variables
psql provides variable substitution features similar to common Unix command shells. Variables are simply
name/value pairs, where the value can be any string of any length. To set variables, use the psql meta-
command \set:
testdb=> \set foo bar
sets the variable foo to the value bar. To retrieve the content of the variable, precede the name with a
colon and use it as the argument of any slash command:
testdb=> \echo :foo
bar
Note: The arguments of \set are subject to the same substitution rules as with other commands.
Thus you can construct interesting references such as \set :foo ’something’ and get “soft links”
or “variable variables” of Perl or PHP fame, respectively. Unfortunately (or fortunately?), there is no
way to do anything useful with these constructs. On the other hand, \set bar :foo is a perfectly
valid way to copy a variable.
If you call \set without a second argument, the variable is set, with an empty string as value. To unset
(or delete) a variable, use the command \unset.
psql’s internal variable names can consist of letters, numbers, and underscores in any order and any num-
ber of them. A number of these variables are treated specially by psql. They indicate certain option settings
that can be changed at run time by altering the value of the variable or represent some state of the applica-
tion. Although you can use these variables for any other purpose, this is not recommended, as the program
behavior might grow really strange really quickly. By convention, all specially treated variables consist of
all upper-case letters (and possibly numbers and underscores). To ensure maximum compatibility in the
future, avoid using such variable names for your own purposes. A list of all specially treated variables
follows.
AUTOCOMMIT
When on (the default), each SQL command is automatically committed upon successful completion.
To postpone commit in this mode, you must enter a BEGIN or START TRANSACTION SQL command.
When off or unset, SQL commands are not committed until you explicitly issue COMMIT or END.
The autocommit-off mode works by issuing an implicit BEGIN for you, just before any command that
is not already in a transaction block and is not itself a BEGIN or other transaction-control command,
nor a command that cannot be executed inside a transaction block (such as VACUUM).
Note: In autocommit-off mode, you must explicitly abandon any failed transaction by entering
ABORT or ROLLBACK. Also keep in mind that if you exit the session without committing, your work
will be lost.
Note: The autocommit-on mode is PostgreSQL’s traditional behavior, but autocommit-off is closer
to the SQL spec. If you prefer autocommit-off, you may wish to set it in the system-wide psqlrc
file or your ~/.psqlrc file.
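A minimal sketch of working with autocommit off (table name assumed):
testdb=> \set AUTOCOMMIT off
testdb=> UPDATE accounts SET balance = 0;
testdb=> ROLLBACK;
The UPDATE is implicitly wrapped in a BEGIN, so the ROLLBACK discards it and nothing is committed.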
DBNAME
The name of the database you are currently connected to. This is set every time you connect to a
database (including program start-up), but can be unset.
ECHO
If set to all, all lines entered from the keyboard or from a script are written to the standard output
before they are parsed or executed. To select this behavior on program start-up, use the switch -a. If
set to queries, psql merely prints all queries as they are sent to the server. The switch for this is -e.
ECHO_HIDDEN
When this variable is set and a backslash command queries the database, the query is first shown.
This way you can study the PostgreSQL internals and provide similar functionality in your own
programs. (To select this behavior on program start-up, use the switch -E.) If you set the variable to
the value noexec, the queries are just shown but are not actually sent to the server and executed.
ENCODING
The current client character set encoding.
HISTCONTROL
If this variable is set to ignorespace, lines which begin with a space are not entered into the history
list. If set to a value of ignoredups, lines matching the previous history line are not entered. A value
of ignoreboth combines the two options. If unset, or if set to any other value than those above, all
lines read in interactive mode are saved on the history list.
HISTSIZE
The number of commands to store in the command history. The default value is 500.
HOST
The database server host you are currently connected to. This is set every time you connect to a
database (including program start-up), but can be unset.
IGNOREEOF
If unset, sending an EOF character (usually Control+D) to an interactive session of psql will ter-
minate the application. If set to a numeric value, that many EOF characters are ignored before the
application terminates. If the variable is set but has no numeric value, the default is 10.
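For example (the count is chosen arbitrarily):
testdb=> \set IGNOREEOF 5
After this, five EOF characters will be ignored before the interactive session terminates.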
LASTOID
The value of the last affected OID, as returned from an INSERT or \lo_import command. This
variable is only guaranteed to be valid until after the result of the next SQL command has been
displayed.
ON_ERROR_STOP
By default, if non-interactive scripts encounter an error, such as a malformed SQL command or internal
meta-command, processing continues. This has been the traditional behavior of psql but it is sometimes
not desirable. If this variable is set, script processing will stop immediately. If the script was
not called from an interactive psql session but rather using the -f option, psql will return error code
3, to distinguish this case from fatal error conditions (error code 1).
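For instance, the variable can be set from the command line when running a script (file name assumed):
$ psql -f script.sql -v ON_ERROR_STOP=1
If script.sql contains an error, psql stops at that point and exits with error code 3.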
PORT
The database server port to which you are currently connected. This is set every time you connect to
a database (including program start-up), but can be unset.
PROMPT1
PROMPT2
PROMPT3
These specify what the prompts psql issues should look like. See Prompting below.
QUIET
This variable is equivalent to the command line option -q. It is probably not too useful in interactive
mode.
SINGLELINE
This variable is equivalent to the command line option -S.
SINGLESTEP
This variable is equivalent to the command line option -s.
USER
The database user you are currently connected as. This is set every time you connect to a database
(including program start-up), but can be unset.
VERBOSITY
This variable can be set to the values default, verbose, or terse to control the verbosity of error
reports.
SQL Interpolation
An additional useful feature of psql variables is that you can substitute (“interpolate”) them into regular
SQL statements. The syntax for this is again to prepend the variable name with a colon (:).
testdb=> \set foo 'my_table'
testdb=> SELECT * FROM :foo;
would then query the table my_table. The value of the variable is copied literally, so it can even contain
unbalanced quotes or backslash commands. You must make sure that it makes sense where you put it.
Variable interpolation will not be performed into quoted SQL entities.
A popular application of this facility is to refer to the last inserted OID in subsequent statements to build a
foreign key scenario. Another possible use of this mechanism is to copy the contents of a file into a table
column. First load the file into a variable and then proceed as above.
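A hedged sketch of the foreign-key idea (table definitions are assumed, and my_table must have OIDs for LASTOID to be set):
testdb=> INSERT INTO my_table VALUES (5, 'five');
testdb=> INSERT INTO detail_table VALUES (:LASTOID, 'more data about row five');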
One possible problem with this approach is that my_file.txt might contain single quotes. These need
to be escaped so that they don’t cause a syntax error when the second line is processed. This could be
done with the program sed:
testdb=> \set content `sed -e "s/’/\\\\\\’/g" < my_file.txt`
Observe the correct number of backslashes (6)! It works this way: After psql has parsed this line, it passes
sed -e "s/’/\\\’/g" < my_file.txt to the shell. The shell will do its own thing inside the double
quotes and execute sed with the arguments -e and s/’/\\’/g. When sed parses this it will replace
the two backslashes with a single one and then do the substitution. Perhaps at one point you thought
it was great that all Unix commands use the same escape character. And this is ignoring the fact that
you might have to escape all backslashes as well because SQL text constants are also subject to certain
interpretations. In that case you might be better off preparing the file externally.
Since colons may legally appear in SQL commands, the following rule applies: the character sequence
“:name” is not changed unless “name” is the name of a variable that is currently set. In any case you can
escape a colon with a backslash to protect it from substitution. (The colon syntax for variables is standard
SQL for embedded query languages, such as ECPG. The colon syntaxes for array slices and type casts
are PostgreSQL extensions, hence the conflict.)
Prompting
The prompts psql issues can be customized to your preference. The three variables PROMPT1, PROMPT2,
and PROMPT3 contain strings and special escape sequences that describe the appearance of the prompt.
Prompt 1 is the normal prompt that is issued when psql requests a new command. Prompt 2 is issued when
more input is expected during command input because the command was not terminated with a semicolon
or a quote was not closed. Prompt 3 is issued when you run an SQL COPY command and you are expected
to type in the row values on the terminal.
The value of the selected prompt variable is printed literally, except where a percent sign (%) is encoun-
tered. Depending on the next character, certain other text is substituted instead. Defined substitutions are:
%M
The full host name (with domain name) of the database server, or [local] if the connection is over
a Unix domain socket, or [local:/dir/name], if the Unix domain socket is not at the compiled in
default location.
%m
The host name of the database server, truncated at the first dot, or [local] if the connection is over
a Unix domain socket.
%>
The port number at which the database server is listening.
%n
The database session user name. (The expansion of this value might change during a database session
as the result of the command SET SESSION AUTHORIZATION.)
%/
The name of the current database.
%~
Like %/, but the output is ~ (tilde) if the database is your default database.
%#
If the session user is a database superuser, then a #, otherwise a >. (The expansion of this value might
change during a database session as the result of the command SET SESSION AUTHORIZATION.)
%R
In prompt 1 normally =, but ^ if in single-line mode, and ! if the session is disconnected from the
database (which can happen if \connect fails). In prompt 2 the sequence is replaced by -, *, a
single quote, a double quote, or a dollar sign, depending on whether psql expects more input because
the command wasn’t terminated yet, because you are inside a /* ... */ comment, or because you
are inside a quoted or dollar-escaped string. In prompt 3 the sequence doesn’t produce anything.
%x
Transaction status: an empty string when not in a transaction block, or * when in a transaction block,
or ! when in a failed transaction block, or ? when the transaction state is indeterminate (for example,
because there is no connection).
%digits
The character with the indicated numeric code is substituted. If digits starts with 0x the rest of
the characters are interpreted as hexadecimal; otherwise if the first digit is 0 the digits are interpreted
as octal; otherwise the digits are read as a decimal number.
%:name:
The value of the psql variable name. See the section Variables for details.
%`command`
The output of command, similar to ordinary “back-tick” substitution.
%[ ... %]
Prompts may contain terminal control characters which, for example, change the color, background,
or style of the prompt text, or change the title of the terminal window. In order for the line editing
features of Readline to work properly, these non-printing control characters must be designated as
invisible by surrounding them with %[ and %]. Multiple pairs of these may occur within the prompt.
For example,
testdb=> \set PROMPT1 '%[%033[1;33;40m%]%n@%/%R%[%033[0m%]%# '
which results in a boldfaced (1;33;40) yellow-on-black prompt on VT100-compatible, color-capable terminals.
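A plainer variant without control characters (layout chosen arbitrarily):
testdb=> \set PROMPT1 '%n@%m:%> %~%R%# '
This produces a prompt such as peter@localhost:5432 testdb=> , showing the user name, server host, port, database, and a > or # depending on superuser status.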
Command-Line Editing
psql supports the Readline library for convenient line editing and retrieval. The command history is au-
tomatically saved when psql exits and is reloaded when psql starts up. Tab-completion is also supported,
although the completion logic makes no claim to be an SQL parser. If for some reason you do not like the
tab completion, you can turn it off by putting this in a file named .inputrc in your home directory:
$if psql
set disable-completion on
$endif
(This is not a psql but a Readline feature. Read its documentation for further details.)
Environment
PAGER
If the query results do not fit on the screen, they are piped through this command. Typical values are
more or less. The default is platform-dependent. The use of the pager can be disabled by using the
\pset command.
PGDATABASE
Default database to connect to.
PGHOST
PGPORT
PGUSER
Default connection parameters.
PSQL_EDITOR
EDITOR
VISUAL
Editor used by the \e command. The variables are examined in the order listed; the first that is set is
used.
SHELL
Command executed by the \! command.
Files
• Before starting up, psql attempts to read and execute commands from the system-wide
psqlrc file and the user’s ~/.psqlrc file. (On Windows, the user’s startup file is named
%APPDATA%\postgresql\psqlrc.conf.) See PREFIX/share/psqlrc.sample for information
on setting up the system-wide file. It could be used to set up the client or the server to taste (using the
\set and SET commands).
• Both the system-wide psqlrc file and the user’s ~/.psqlrc file can be made version-specific by
appending a dash and the PostgreSQL release number, for example ~/.psqlrc-8.0.0. A matching
version-specific file will be read in preference to a non-version-specific file.
• The command-line history is stored in the file ~/.psql_history, or
%APPDATA%\postgresql\psql_history on Windows.
Notes
• In an earlier life psql allowed the first argument of a single-letter backslash command to start directly
after the command, without intervening whitespace. For compatibility this is still supported to some
extent, but we are not going to explain the details here as this use is discouraged. If you get strange
messages, keep this in mind. For example
testdb=> \foo
Field separator is "oo".
Notes for Windows users
psql is built as a “console application”. Since the Windows console windows use a different encoding
than the rest of the system, you must take special care when using 8-bit characters within psql. If psql
detects a problematic console code page, it will warn you at startup. To change the console code page,
two things are necessary:
• Set the code page by entering cmd.exe /c chcp 1252. (1252 is a code page that is appropri-
ate for German; replace it with your value.) If you are using Cygwin, you can put this command in
/etc/profile.
• Set the console font to “Lucida Console”, because the raster font does not work with the ANSI code
page.
Examples
The first example shows how to spread a command over several lines of input. Notice the changing prompt:
testdb=> CREATE TABLE my_table (
testdb(> first integer not null default 0,
testdb(> second text)
testdb-> ;
CREATE TABLE
Now look at the table definition again:
testdb=> \d my_table
Table "my_table"
Attribute | Type | Modifier
-----------+---------+--------------------
first | integer | not null default 0
second | text |
At this point you decide to change the prompt to something more interesting:
testdb=> \set PROMPT1 '%n@%m %~%R%# '
peter@localhost testdb=>
Let’s assume you have filled the table with data and want to take a look at it:
peter@localhost testdb=> SELECT * FROM my_table;
 first | second
-------+--------
     1 | one
     2 | two
     3 | three
     4 | four
(4 rows)
You can display tables in different ways by using the \pset command:
peter@localhost testdb=> \pset border 2
Border style is 2.
peter@localhost testdb=> \a \t \x
Output format is aligned.
Tuples only is off.
Expanded display is on.
peter@localhost testdb=> SELECT * FROM my_table;
-[ RECORD 1 ]-
first | 1
second | one
-[ RECORD 2 ]-
first | 2
second | two
-[ RECORD 3 ]-
first | 3
second | three
-[ RECORD 4 ]-
first | 4
second | four
vacuumdb
Name
vacuumdb — garbage-collect and analyze a PostgreSQL database
Synopsis
vacuumdb [connection-option...] [--full | -f] [--verbose | -v] [--analyze | -z] [--table | -t table [(
column [,...] )] ] [dbname]
vacuumdb [connection-options...] [--all | -a] [--full | -f] [--verbose | -v] [--analyze | -z]
Description
vacuumdb is a utility for cleaning a PostgreSQL database. vacuumdb will also generate internal statistics
used by the PostgreSQL query optimizer.
vacuumdb is a wrapper around the SQL command VACUUM. There is no effective difference between
vacuuming databases via this utility and via other methods for accessing the server.
Options
vacuumdb accepts the following command-line arguments:
-a
--all
Vacuum all databases.
[-d] dbname
--dbname dbname
Specifies the name of the database to be cleaned or analyzed. If this is not specified and -a (or
--all) is not used, the database name is read from the environment variable PGDATABASE. If that is
not set, the user name specified for the connection is used.
-e
--echo
Echo the commands that vacuumdb generates and sends to the server.
-f
--full
Perform “full” vacuuming.
-q
--quiet
Do not display a response.
-t table [ (column [,...]) ]
--table table [ (column [,...]) ]
Clean or analyze table only. Column names may be specified only in conjunction with the
--analyze option.
Tip: If you specify columns, you probably have to escape the parentheses from the shell. (See
examples below.)
-v
--verbose
Print detailed information during processing.
-z
--analyze
Calculate statistics for use by the optimizer.
vacuumdb also accepts the following command-line arguments for connection parameters:
-h host
--host host
Specifies the host name of the machine on which the server is running. If the value begins with a
slash, it is used as the directory for the Unix domain socket.
-p port
--port port
Specifies the TCP port or local Unix domain socket file extension on which the server is listening for
connections.
-U username
--username username
User name to connect as.
Environment
PGDATABASE
Default database to clean or analyze, if none is specified on the command line and -a (or --all) is
not used.
PGHOST
PGPORT
PGUSER
Default connection parameters.
Diagnostics
In case of difficulty, see VACUUM and psql for discussions of potential problems and error messages. The
database server must be running at the targeted host. Also, any default connection settings and environ-
ment variables used by the libpq front-end library will apply.
Notes
vacuumdb might need to connect several times to the PostgreSQL server, asking for a password each time.
It is convenient to have a ~/.pgpass file in such cases. See Section 27.12 for more information.
Examples
To clean the database test:
$ vacuumdb test
To clean a single table foo in a database named xyzzy, and analyze a single column bar of the table for
the optimizer:
$ vacuumdb --analyze --verbose --table 'foo(bar)' xyzzy
See Also
VACUUM
III. PostgreSQL Server Applications
This part contains reference information for PostgreSQL server applications and support utilities. These
commands can only be run usefully on the host where the database server resides. Other utility programs
are listed in Reference II, PostgreSQL Client Applications.
initdb
Name
initdb — create a new PostgreSQL database cluster
Synopsis
initdb [option...] --pgdata | -D directory
Description
initdb creates a new PostgreSQL database cluster. A database cluster is a collection of databases that
are managed by a single server instance.
Creating a database cluster consists of creating the directories in which the database data will live, gen-
erating the shared catalog tables (tables that belong to the whole cluster rather than to any particular
database), and creating the template1 database. When you later create a new database, everything in the
template1 database is copied. It contains catalog tables filled in for things like the built-in types.
initdb initializes the database cluster’s default locale and character set encoding. Some locale categories
are fixed for the lifetime of the cluster. There is also a performance impact in using locales other than C
or POSIX. Therefore it is important to make the right choice when running initdb. Other locale cate-
gories can be changed later when the server is started. initdb will write those locale settings into the
postgresql.conf configuration file so they are the default, but they can be changed by editing that
file. To set the locale that initdb uses, see the description of the --locale option. The character set
encoding can be set separately for each database as it is created. initdb determines the encoding for the
template1 database, which will serve as the default for all other databases. To alter the default encoding
use the --encoding option.
initdb must be run as the user that will own the server process, because the server needs to have access
to the files and directories that initdb creates. Since the server may not be run as root, you must not run
initdb as root either. (It will in fact refuse to do so.)
Although initdb will attempt to create the specified data directory, often it won’t have permission to do
so, since the parent of the desired data directory is often a root-owned directory. To set up an arrangement
like this, create an empty data directory as root, then use chown to hand over ownership of that directory
to the database user account, then su to become the database user, and finally run initdb as the database
user.
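A hedged sketch of that arrangement (paths and account name assumed):
root# mkdir /usr/local/pgsql/data
root# chown postgres /usr/local/pgsql/data
root# su postgres
postgres$ initdb -D /usr/local/pgsql/data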
Options
-A authmethod
--auth=authmethod
This option specifies the authentication method for local users used in pg_hba.conf. Do not use
trust unless you trust all local users on your system. Trust is the default for ease of installation.
-D directory
--pgdata=directory
This option specifies the directory where the database cluster should be stored. This is the only
information required by initdb, but you can avoid writing it by setting the PGDATA environment
variable, which can be convenient since the database server (postmaster) can find the database
directory later by the same variable.
-E encoding
--encoding=encoding
Selects the encoding of the template database. This will also be the default encoding of any database
you create later, unless you override it there. The default is derived from the locale, or SQL_ASCII if
that does not work. The character sets supported by the PostgreSQL server are described in Section
20.2.1.
--locale=locale
Sets the default locale for the database cluster. If this option is not specified, the locale is inherited
from the environment that initdb runs in. Locale support is described in Section 20.1.
--lc-collate=locale
--lc-ctype=locale
--lc-messages=locale
--lc-monetary=locale
--lc-numeric=locale
--lc-time=locale
Like --locale, but only sets the locale in the specified category.
-U username
--username=username
Selects the user name of the database superuser. This defaults to the name of the effective user
running initdb. It is really not important what the superuser’s name is, but one might choose to
keep the customary name postgres, even if the operating system user’s name is different.
-W
--pwprompt
Makes initdb prompt for a password to give the database superuser. If you don’t plan on using
password authentication, this is not important. Otherwise you won’t be able to use password authen-
tication until you have a password set up.
--pwfile=filename
Makes initdb read the database superuser’s password from a file. The first line of the file is taken
as the password.
-d
--debug
Print debugging output from the bootstrap backend and a few other messages of lesser interest for
the general public. The bootstrap backend is the program initdb uses to create the catalog tables.
This option generates a tremendous amount of extremely boring output.
-L directory
Specifies where initdb should find its input files to initialize the database cluster. This is normally
not necessary. You will be told if you need to specify their location explicitly.
-n
--noclean
By default, when initdb determines that an error prevented it from completely creating the database
cluster, it removes any files it may have created before discovering that it can’t finish the job. This
option inhibits tidying-up and is thus useful for debugging.
Environment
PGDATA
Specifies the directory where the database cluster is to be stored; may be overridden using the -D
option.
See Also
postgres, postmaster
ipcclean
Name
ipcclean — remove shared memory and semaphores from a failed PostgreSQL server
Synopsis
ipcclean
Description
ipcclean removes all shared memory segments and semaphore sets owned by the current user. It is
intended to be used for cleaning up after a crashed PostgreSQL server (postmaster). Note that immediately
restarting the server will also clean up shared memory and semaphores, so this command is of little real
utility.
Only the database administrator should execute this program as it can cause bizarre behavior (i.e., crashes)
if run during multiuser execution. If this command is executed while a server is running, the shared mem-
ory and semaphores allocated by that server will be deleted, which would have rather severe consequences
for that server.
Notes
This script is a hack, but in the many years since it was written, no one has come up with an equally
effective and portable solution. Since the postmaster can now clean up by itself, it is unlikely that
ipcclean will be improved upon in the future.
The script makes assumptions about the output format of the ipcs utility which may not be true across
different operating systems. Therefore, it may not work on your particular OS. It’s wise to look at the
script before trying it.
pg_controldata
Name
pg_controldata — display control information of a PostgreSQL database cluster
Synopsis
pg_controldata [datadir]
Description
pg_controldata prints information initialized during initdb, such as the catalog version and server
locale. It also shows information about write-ahead logging and checkpoint processing. This information
is cluster-wide, and not specific to any one database.
This utility may only be run by the user who initialized the cluster because it requires read access to the
data directory. You can specify the data directory on the command line, or use the environment variable
PGDATA.
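For example (data directory path assumed):
$ pg_controldata /usr/local/pgsql/data
prints the control information for the cluster stored in /usr/local/pgsql/data.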
Environment
PGDATA
pg_ctl
Name
pg_ctl — start, stop, or restart a PostgreSQL server
Synopsis
pg_ctl start [-w] [-s] [-D datadir] [-l filename] [-o options] [-p path]
pg_ctl stop [-W] [-s] [-D datadir] [-m s[mart] | f[ast] | i[mmediate] ]
pg_ctl restart [-w] [-s] [-D datadir] [-m s[mart] | f[ast] | i[mmediate] ] [-o options]
pg_ctl reload [-s] [-D datadir]
pg_ctl status [-D datadir]
pg_ctl kill [signal_name] [process_id]
Description
pg_ctl is a utility for starting, stopping, or restarting the PostgreSQL backend server (postmaster), or
displaying the status of a running server. Although the server can be started manually, pg_ctl encapsulates
tasks such as redirecting log output and properly detaching from the terminal and process group. It also
provides convenient options for controlled shutdown.
In start mode, a new server is launched. The server is started in the background, and standard input is
attached to /dev/null. The standard output and standard error are either appended to a log file (if the -l
option is used), or redirected to pg_ctl’s standard output (not standard error). If no log file is chosen, the
standard output of pg_ctl should be redirected to a file or piped to another process such as a log rotating
program like rotatelogs; otherwise the postmaster will write its output to the controlling terminal (from
the background) and will not leave the shell’s process group.
In stop mode, the server that is running in the specified data directory is shut down. Three different
shutdown methods can be selected with the -m option: “Smart” mode waits for all the clients to disconnect.
This is the default. “Fast” mode does not wait for clients to disconnect. All active transactions are rolled
back and clients are forcibly disconnected, then the server is shut down. “Immediate” mode will abort all
server processes without a clean shutdown. This will lead to a recovery run on restart.
restart mode effectively executes a stop followed by a start. This allows changing the postmaster
command-line options.
reload mode simply sends the postmaster process a SIGHUP signal, causing it to reread its configu-
ration files (postgresql.conf, pg_hba.conf, etc.). This allows changing of configuration-file options
that do not require a complete restart to take effect.
status mode checks whether a server is running in the specified data directory. If it is, the PID and the
command line options that were used to invoke it are displayed.
kill mode allows you to send a signal to a specified process. This is particularly valuable for Microsoft
Windows which does not have a kill command. Use --help to see a list of supported signal names.
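For example, to make a single server process re-read the configuration files (the process ID is assumed):
$ pg_ctl kill HUP 13718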
Options
-D datadir
Specifies the file system location of the database files. If this is omitted, the environment variable
PGDATA is used.
-l filename
Append the server log output to filename. If the file does not exist, it is created. The umask is set
to 077, so access to the log file from other users is disallowed by default.
-m mode
Specifies the shutdown mode. mode may be smart, fast, or immediate, or the first letter of one
of these three.
-o options
Specifies options to be passed directly to the postmaster.
The options are usually surrounded by single or double quotes to ensure that they are passed
through as a group.
-p path
Specifies the location of the postmaster executable. By default the postmaster executable is
taken from the same directory as pg_ctl, or failing that, the hard-wired installation directory. It
is not necessary to use this option unless you are doing something unusual and get errors that the
postmaster executable was not found.
-s
Only print errors, no informational messages.
-w
Wait for the start or shutdown to complete. Times out after 60 seconds. This is the default for shut-
downs. A successful shutdown is indicated by removal of the PID file. For starting up, a success-
ful psql -l indicates success. pg_ctl will attempt to use the proper port for psql. If the envi-
ronment variable PGPORT exists, that is used. Otherwise, it will see if a port has been set in the
postgresql.conf file. If neither of those is used, it will use the default port that PostgreSQL was
compiled with (5432 by default). When waiting, pg_ctl will return an accurate exit code based on
the success of the startup or shutdown.
-W
Do not wait for start or shutdown to complete. This is the default for starts and restarts.
Environment
PGDATA
Default location of the data directory.
PGPORT
Default port number (preferably set in the configuration file).
Files
postmaster.pid
The existence of this file in the data directory is used to help pg_ctl determine if the server is currently
running or not.
postmaster.opts.default
If this file exists in the data directory, pg_ctl (in start mode) will pass the contents of the file as
options to the postmaster command, unless overridden by the -o option.
postmaster.opts
If this file exists in the data directory, pg_ctl (in restart mode) will pass the contents of the file
as options to the postmaster, unless overridden by the -o option. The contents of this file are also
displayed in status mode.
postgresql.conf
This file, located in the data directory, is parsed to find the proper port to use with psql when the -w
option is given in start mode.
Notes
Waiting for complete start is not a well-defined operation and may fail if access control is set up so that a
local client cannot connect without manual interaction (e.g., password authentication).
Examples
Starting the postmaster
To start up a server, use:
$ pg_ctl start
An example of starting the server, blocking until the server has come up is:
$ pg_ctl -w start
For a server using port 5433, and running without fsync, use:
$ pg_ctl -o "-F -p 5433" start
Stopping the postmaster
$ pg_ctl stop
stops the server. Using the -m switch allows one to control how the backend shuts down.
Restarting the postmaster
Restarting the server is almost equivalent to stopping it and starting it again, except that pg_ctl saves
and reuses the command line options that were previously passed. To restart the server in the simplest
form, use:
$ pg_ctl restart
To restart the server, waiting for it to shut down and to come up again:
$ pg_ctl -w restart
Showing postmaster status
Here is sample status output from pg_ctl:
$ pg_ctl status
pg_ctl: postmaster is running (pid: 13718)
Command line was:
/usr/local/pgsql/bin/postmaster ’-D’ ’/usr/local/pgsql/data’ ’-p’ ’5433’ ’-B’ ’128’
See Also
postmaster
pg_resetxlog
Name
pg_resetxlog — reset the write-ahead log and other control information of a PostgreSQL database
cluster
Synopsis
pg_resetxlog [ -f ] [ -n ] [ -o oid ] [ -x xid ] [ -l timelineid,fileid,seg ] datadir
Description
pg_resetxlog clears the write-ahead log (WAL) and optionally resets some other control information
(stored in the pg_control file). This function is sometimes needed if these files have become corrupted.
It should be used only as a last resort, when the server will not start due to such corruption.
After running this command, it should be possible to start the server, but bear in mind that the database
may contain inconsistent data due to partially-committed transactions. You should immediately dump
your data, run initdb, and reload. After reload, check for inconsistencies and repair as needed.
This utility can only be run by the user who installed the server, because it requires read/write access
to the data directory. For safety reasons, you must specify the data directory on the command line.
pg_resetxlog does not use the environment variable PGDATA.
If pg_resetxlog complains that it cannot determine valid data for pg_control, you can force it to
proceed anyway by specifying the -f (force) switch. In this case plausible values will be substituted for
the missing data. Most of the fields can be expected to match, but manual assistance may be needed for
the next OID, next transaction ID, WAL starting address, and database locale fields. The first three of
these can be set using the switches discussed below. pg_resetxlog’s own environment is the source for
its guess at the locale fields; take care that LANG and so forth match the environment that initdb was
run in. If you are not able to determine correct values for all these fields, -f can still be used, but the
recovered database must be treated with even more suspicion than usual: an immediate dump and reload
is imperative. Do not execute any data-modifying operations in the database before you dump, as any such
action is likely to make the corruption worse.
The -o, -x, and -l switches allow the next OID, next transaction ID, and WAL starting address values to
be set manually. These are only needed when pg_resetxlog is unable to determine appropriate values
by reading pg_control. A safe value for the next transaction ID may be determined by looking for the
numerically largest file name in the directory pg_clog under the data directory, adding one, and then
multiplying by 1048576. Note that the file names are in hexadecimal. It is usually easiest to specify the
switch value in hexadecimal too. For example, if 0011 is the largest entry in pg_clog, -x 0x1200000
will work (five trailing zeroes provide the proper multiplier). The WAL starting address should be larger
than any file name currently existing in the directory pg_xlog under the data directory. These names are
also in hexadecimal and have three parts. The first part is the “timeline ID” and should usually be kept
the same. Do not choose a value larger than 255 (0xFF) for the third part; instead increment the second
part and reset the third part to 0. For example, if 00000001000000320000004A is the largest entry in
pg_xlog, -l 0x1,0x32,0x4B will work; but if the largest entry is 000000010000003A000000FF,
choose -l 0x1,0x3B,0x0 or more. There is no comparably easy way to determine a next OID that’s
beyond the largest one in the database, but fortunately it is not critical to get the next-OID setting right.
The -n (no operation) switch instructs pg_resetxlog to print the values reconstructed from
pg_control and then exit without modifying anything. This is mainly a debugging tool, but may be
useful as a sanity check before allowing pg_resetxlog to proceed for real.
Notes
This command must not be used when the server is running. pg_resetxlog will refuse to start up if
it finds a server lock file in the data directory. If the server crashed then a lock file may have been left
behind; in that case you can remove the lock file to allow pg_resetxlog to run. But before you do so,
make doubly certain that there is no postmaster nor any backend server process still alive.
postgres
Name
postgres — run a PostgreSQL server in single-user mode
Synopsis
postgres [-A 0 | 1 ] [-B nbuffers] [-c name=value] [-d debug-level] [--describe-config] [-D
datadir] [-e] [-E] [-f s | i | t | n | m | h ] [-F] [-N] [-o filename] [-O] [-P] [-s | -t pa | pl | ex ] [-S
work-mem] [-W seconds] [--name=value] database
postgres [-A 0 | 1 ] [-B nbuffers] [-c name=value] [-d debug-level] [-D datadir] [-e] [-f s |
i | t | n | m | h ] [-F] [-o filename] [-O] [-p database] [-P] [-s | -t pa | pl | ex ] [-S work-mem] [-v
protocol] [-W seconds] [--name=value]
Description
The postgres executable is the actual PostgreSQL server process that processes queries. It is normally
not called directly; instead a postmaster multiuser server is started.
The second form above is how postgres is invoked by the postmaster (only conceptually, since both
postmaster and postgres are in fact the same program); it should not be invoked directly this way.
The first form invokes the server directly in interactive single-user mode. The primary use for this mode
is during bootstrapping by initdb. Sometimes it is used for debugging or disaster recovery.
When invoked in interactive mode from the shell, the user can enter queries and the results will be printed
to the screen, but in a form that is more useful for developers than end users. But note that running a single-
user server is not truly suitable for debugging the server since no realistic interprocess communication and
locking will happen.
When running a stand-alone server, the session user will be set to the user with ID 1. This user does
not actually have to exist, so a stand-alone server can be used to manually recover from certain kinds of
accidental damage to the system catalogs. Implicit superuser powers are granted to the user with ID 1 in
stand-alone mode.
Options
When postgres is started by a postmaster then it inherits all options set by the latter. Additionally,
postgres-specific options can be passed from the postmaster with the -o switch.
You can avoid having to type these options by setting up a configuration file. See Section 16.4 for details.
Some (safe) options can also be set from the connecting client in an application-dependent way. For
example, if the environment variable PGOPTIONS is set, then libpq-based clients will pass that string to
the server, which will interpret it as postgres command-line options.
General Purpose
The options -A, -B, -c, -d, -D, -F, and --name have the same meanings as the postmaster except that
-d 0 prevents the server log level of the postmaster from being propagated to postgres.
-e
Sets the default date style to “European”, that is DMY ordering of input date fields. This also causes
the day to be printed before the month in certain date output formats. See Section 8.5 for more
information.
-o filename
Send all server log output to filename. If postgres is running under the postmaster, this option
is ignored, and the stderr inherited from the postmaster is used.
-P
Ignore system indexes when reading system tables (but still update the indexes when modifying the
tables). This is useful when recovering from damaged system indexes.
-s
Print time information and other statistics at the end of each command. This is useful for benchmark-
ing or for use in tuning the number of buffers.
-S work-mem
Specifies the amount of memory to be used by internal sorts and hashes before resorting to temporary
disk files. See the description of the work_mem configuration parameter in Section 16.4.3.1.
database
Specifies the name of the database to be accessed. If it is omitted it defaults to the user name.
-E
Echo all commands.
Semi-internal Options
There are several other options that may be specified, used mainly for debugging purposes. These are listed
here only for the use by PostgreSQL system developers. Use of any of these options is highly discouraged.
Furthermore, any of these options may disappear or change in a future release without notice.
-f { s | i | m | n | h }
Forbids the use of particular scan and join methods: s and i disable sequential and index scans
respectively, while n, m, and h disable nested-loop, merge and hash joins respectively.
Note: Neither sequential scans nor nested-loop joins can be disabled completely; the -fs and
-fn options simply discourage the optimizer from using those plan types if it has any other
alternative.
-O
Allows the structure of system tables to be modified. This is used by initdb.
-p database
Indicates that this process has been started by a postmaster and specifies the database to use, etc.
-t pa[rser] | pl[anner] | e[xecutor]
Print timing statistics for each query relating to each of the major system modules. This option cannot
be used together with the -s option.
-v protocol
Specifies the version number of the frontend/backend protocol to be used for this particular session.
-W seconds
As soon as this option is encountered, the process sleeps for the specified number of seconds. This
gives developers time to attach a debugger to the server process.
--describe-config
This option dumps out the server’s internal configuration variables, descriptions, and defaults in tab-
delimited COPY format. It is designed primarily for use by administration tools.
Environment
PGDATA
Default data directory location.
For others, which have little influence during single-user mode, see postmaster.
Notes
To cancel a running query, send the SIGINT signal to the postgres process running that command.
To tell postgres to reload the configuration files, send a SIGHUP signal. Normally it’s best to SIGHUP
the postmaster instead; the postmaster will in turn SIGHUP each of its children. But in some cases it
might be desirable to have only one postgres process reload the configuration files.
The postmaster uses SIGTERM to tell a postgres process to quit normally and SIGQUIT to terminate
without the normal cleanup. These signals should not be used by users. It is also unwise to send SIGKILL
to a postgres process — the postmaster will interpret this as a crash in postgres, and will force all
the sibling postgres processes to quit as part of its standard crash-recovery procedure.
Usage
Start a stand-alone server with a command like
postgres -D /usr/local/pgsql/data other-options my_database
Provide the correct path to the database directory with -D, or make sure that the environment variable
PGDATA is set. Also specify the name of the particular database you want to work in.
Normally, the stand-alone server treats newline as the command entry terminator; there is no intelligence
about semicolons, as there is in psql. To continue a command across multiple lines, you must type back-
slash just before each newline except the last one.
But if you use the -N command line switch, then newline does not terminate command entry. In this case,
the server will read the standard input until the end-of-file (EOF) marker, then process the input as a single
command string. Backslash-newline is not treated specially in this case.
To quit the session, type EOF (Control+D, usually). If you’ve used -N, two consecutive EOFs are needed
to exit.
Note that the stand-alone server does not provide sophisticated line-editing features (no command history,
for example).
See Also
initdb, ipcclean, postmaster
postmaster
Name
postmaster — PostgreSQL multiuser database server
Synopsis
postmaster [-A 0 | 1 ] [-B nbuffers] [-c name=value] [-d debug-level] [-D datadir] [-
F] [-h hostname] [-i] [-k directory] [-l] [-N max-connections] [-o extra-options] [-p
port] [-S] [--name=value] [-n | -s]
Description
postmaster is the PostgreSQL multiuser database server. In order for a client application to access
a database it connects (over a network or locally) to a running postmaster. The postmaster then
starts a separate server process (“postgres”) to handle the connection. The postmaster also manages the
communication among server processes.
By default the postmaster starts in the foreground and prints log messages to the standard error stream.
In practical applications the postmaster should be started as a background process, perhaps at boot time.
One postmaster always manages the data from exactly one database cluster. A database cluster is a
collection of databases that is stored at a common file system location (the “data area”). More than one
postmaster process can run on a system at one time, so long as they use different data areas and different
communication ports (see below). A data area is created with initdb.
When the postmaster starts it needs to know the location of the data area. The location must be specified
by the -D option or the PGDATA environment variable; there is no default. Typically, -D or PGDATA points
directly to the data area directory created by initdb. Other possible file layouts are discussed in Section
16.4.1.
Options
postmaster accepts the following command line arguments. For a detailed discussion of the options
consult Section 16.4. You can also save typing most of these options by setting up a configuration file.
-A 0|1
Enables run-time assertion checks, which is a debugging aid to detect programming mistakes. This
option is only available if assertions were enabled when PostgreSQL was compiled. If so, the default
is on.
-B nbuffers
Sets the number of shared buffers for use by the server processes. The default value of this parameter
is chosen automatically by initdb; refer to Section 16.4.3.1 for more information.
-c name=value
Sets a named run-time parameter. The configuration parameters supported by PostgreSQL are de-
scribed in Section 16.4. Most of the other command line options are in fact short forms of such a
parameter assignment. -c can appear multiple times to set multiple parameters.
-d debug-level
Sets the debug level. The higher this value is set, the more debugging output is written to the server
log. Values are from 1 to 5.
-D datadir
Specifies the file system location of the data directory or configuration file(s). See Section 16.4.1 for
details.
-F
Disables fsync calls for improved performance, at the risk of data corruption in the event of a system
crash. Specifying this option is equivalent to disabling the fsync configuration parameter. Read the
detailed documentation before using this!
--fsync=true has the opposite effect of this option.
-h hostname
Specifies the IP host name or address on which the postmaster is to listen for TCP/IP connections
from client applications. The value can also be a space-separated list of addresses, or * to spec-
ify listening on all available interfaces. An empty value specifies not listening on any IP addresses,
in which case only Unix-domain sockets can be used to connect to the postmaster. Defaults to
listening only on localhost. Specifying this option is equivalent to setting the listen_addresses con-
figuration parameter.
-i
Allows remote clients to connect via TCP/IP (Internet domain) connections. Without this option,
only local connections are accepted. This option is equivalent to setting listen_addresses to * in
postgresql.conf or via -h.
This option is deprecated since it does not allow access to the full functionality of listen_addresses.
It’s usually better to set listen_addresses directly.
-k directory
Specifies the directory of the Unix-domain socket on which the postmaster is to listen for connec-
tions from client applications. The default is normally /tmp, but can be changed at build time.
-l
Enables secure connections using SSL. PostgreSQL must have been compiled with support for SSL
for this option to be available. For more information on using SSL, refer to Section 16.7.
-N max-connections
Sets the maximum number of client connections that this postmaster will accept. By default, this
value is 32, but it can be set as high as your system will support. (Note that -B is required to be at
least twice -N. See Section 16.5 for a discussion of system resource requirements for large numbers of
client connections.) Specifying this option is equivalent to setting the max_connections configuration
parameter.
-o extra-options
The command line-style options specified in extra-options are passed to all server processes
started by this postmaster. See postgres for possibilities. If the option string contains any spaces,
the entire string must be quoted.
-p port
Specifies the TCP/IP port or local Unix domain socket file extension on which the postmaster is
to listen for connections from client applications. Defaults to the value of the PGPORT environment
variable, or if PGPORT is not set, then defaults to the value established during compilation (normally
5432). If you specify a port other than the default port, then all client applications must specify the
same port using either command-line options or PGPORT.
-S
Specifies that the postmaster process should start up in silent mode. That is, it will disassociate
from the user’s (controlling) terminal, start its own process group, and redirect its standard output
and standard error to /dev/null.
Using this switch discards all logging output, which is probably not what you want, since it makes
it very difficult to troubleshoot problems. See below for a better way to start the postmaster in the
background.
--silent-mode=false has the opposite effect of this option.
--name=value
Sets a named run-time parameter; a shorter form of -c.
Two additional command line options are available for debugging problems that cause a server process to
die abnormally. The ordinary strategy in this situation is to notify all other server processes that they must
terminate and then reinitialize the shared memory and semaphores. This is because an errant server process
could have corrupted some shared state before terminating. These options select alternative behaviors of
the postmaster in this situation. Neither option is intended for use in ordinary operation.
-n
postmaster will not reinitialize shared data structures. A knowledgeable system programmer can
then use a debugger to examine shared memory and semaphore state.
-s
postmaster will stop all other server processes by sending the signal SIGSTOP, but will not cause
them to terminate. This permits system programmers to collect core dumps from all server processes
by hand.
Environment
PGCLIENTENCODING
Default character encoding used by clients. (The clients may override this individually.) This value
can also be set in the configuration file.
PGDATA
Default data directory location
PGDATESTYLE
Default value of the DateStyle run-time parameter. (The use of this environment variable is deprecated.)
PGPORT
Default port number (preferably set in the configuration file)
Diagnostics
A failure message mentioning semget or shmget probably indicates you need to configure your kernel
to provide adequate shared memory and semaphores. For more discussion see Section 16.5.
Tip: You may be able to postpone reconfiguring your kernel by decreasing shared_buffers to reduce
the shared memory consumption of PostgreSQL, and/or by reducing max_connections to reduce the
semaphore consumption.
A failure message suggesting that another postmaster is already running should be checked carefully, for
example by using the command
$ ps ax | grep postmaster
or
$ ps -ef | grep postmaster
depending on your system. If you are certain that no conflicting postmaster is running, you may remove
the lock file mentioned in the message and try again.
A failure message indicating inability to bind to a port may indicate that that port is already in use by some
non-PostgreSQL process. You may also get this error if you terminate the postmaster and immediately
restart it using the same port; in this case, you must simply wait a few seconds until the operating system
closes the port before trying again. Finally, you may get this error if you specify a port number that your
operating system considers to be reserved. For example, many versions of Unix consider port numbers
under 1024 to be “trusted” and only permit the Unix superuser to access them.
Notes
If at all possible, do not use SIGKILL to kill the postmaster. Doing so will prevent postmaster from
freeing the system resources (e.g., shared memory and semaphores) that it holds before terminating. This
may cause problems for starting a fresh postmaster run.
To terminate the postmaster normally, the signals SIGTERM, SIGINT, or SIGQUIT can be used. The
first will wait for all clients to terminate before quitting, the second will forcefully disconnect all clients,
and the third will quit immediately without proper shutdown, resulting in a recovery run during restart.
The SIGHUP signal will reload the server configuration files.
The utility command pg_ctl can be used to start and shut down the postmaster safely and comfortably.
The -- options will not work on FreeBSD or OpenBSD. Use -c instead. This is a bug in the affected
operating systems; a future release of PostgreSQL will provide a workaround if this is not fixed.
Examples
To start postmaster in the background using default values, type:
$ nohup postmaster >logfile 2>&1 </dev/null &
To start postmaster with a specific port:
$ postmaster -p 1234
This command will start up postmaster communicating through the port 1234. In order to connect to
this postmaster using psql, you would need to run it as
$ psql -p 1234
or set the environment variable PGPORT:
$ export PGPORT=1234
$ psql
Named run-time parameters can be set in either of these styles:
$ postmaster -c work_mem=1234
$ postmaster --work-mem=1234
Either form overrides whatever setting might exist for work_mem in postgresql.conf. Notice that
underscores in parameter names can be written as either underscore or dash on the command line.
Tip: Except for short-term experiments, it’s probably better practice to edit the setting in
postgresql.conf than to rely on a command-line switch to set a parameter.
See Also
initdb, pg_ctl
VII. Internals
This part contains assorted information that can be of use to PostgreSQL developers.
Chapter 40. Overview of PostgreSQL Internals
Author: This chapter originated as part of Enhancement of the ANSI SQL Implementation of Post-
greSQL, Stefan Simkovics’ Master’s Thesis prepared at Vienna University of Technology under the
direction of O.Univ.Prof.Dr. Georg Gottlob and Univ.Ass. Mag. Katrin Seyr.
This chapter gives an overview of the internal structure of the backend of PostgreSQL. After having read
the following sections you should have an idea of how a query is processed. This chapter does not aim to
provide a detailed description of the internal operation of PostgreSQL, as such a document would be very
extensive. Rather, this chapter is intended to help the reader understand the general sequence of operations
that occur within the backend from the point at which a query is received, to the point at which the results
are returned to the client.
1. A connection from an application program to the PostgreSQL server has to be established. The
application program transmits a query to the server and waits to receive the results sent back by the
server.
2. The parser stage checks the query transmitted by the application program for correct syntax and
creates a query tree.
3. The rewrite system takes the query tree created by the parser stage and looks for any rules (stored
in the system catalogs) to apply to the query tree. It performs the transformations given in the rule
bodies.
One application of the rewrite system is in the realization of views. Whenever a query against a view
(i.e. a virtual table) is made, the rewrite system rewrites the user’s query to a query that accesses the
base tables given in the view definition instead.
4. The planner/optimizer takes the (rewritten) query tree and creates a query plan that will be the input
to the executor.
It does so by first creating all possible paths leading to the same result. For example if there is an index
on a relation to be scanned, there are two paths for the scan. One possibility is a simple sequential
scan and the other possibility is to use the index. Next the cost for the execution of each path is
estimated and the cheapest path is chosen. The cheapest path is expanded into a complete plan that
the executor can use.
5. The executor recursively steps through the plan tree and retrieves rows in the way represented by
the plan. The executor makes use of the storage system while scanning relations, performs sorts and
joins, evaluates qualifications and finally hands back the rows derived.
In the following sections we will cover each of the above listed items in more detail to give a better
understanding of PostgreSQL’s internal control and data structures.
40.3. The Parser Stage
The parser stage consists of two parts:
• The parser defined in gram.y and scan.l is built using the Unix tools yacc and lex.
• The transformation process does modifications and augmentations to the data structures returned by the parser.
40.3.1. Parser
The parser has to check the query string (which arrives as plain ASCII text) for valid syntax. If the syntax
is correct a parse tree is built up and handed back; otherwise an error is returned. The parser and lexer are
implemented using the well-known Unix tools yacc and lex.
The lexer is defined in the file scan.l and is responsible for recognizing identifiers, the SQL key words
etc. For every key word or identifier that is found, a token is generated and handed to the parser.
The parser is defined in the file gram.y and consists of a set of grammar rules and actions that are
executed whenever a rule is fired. The code of the actions (which is actually C code) is used to build up
the parse tree.
The file scan.l is transformed to the C source file scan.c using the program lex and gram.y is trans-
formed to gram.c using yacc. After these transformations have taken place a normal C compiler can be
used to create the parser. Never make any changes to the generated C files as they will be overwritten the
next time lex or yacc is called.
Note: The mentioned transformations and compilations are normally done automatically using the
makefiles shipped with the PostgreSQL source distribution.
A detailed description of yacc or the grammar rules given in gram.y would be beyond the scope of this
paper. There are many books and documents dealing with lex and yacc. You should be familiar with yacc
before you start to study the grammar given in gram.y otherwise you won’t understand what happens
there.
40.4. The PostgreSQL Rule System
PostgreSQL supports a powerful rule system for the specification of views and ambiguous view updates. Originally the PostgreSQL rule system consisted of two implementations:
• The first one worked using row level processing and was implemented deep in the executor. The rule system was called whenever an individual row had been accessed. This implementation was removed in 1995 when the last official release of the Berkeley Postgres project was transformed into Postgres95.
• The second implementation of the rule system is a technique called query rewriting. The rewrite system
is a module that exists between the parser stage and the planner/optimizer. This technique is still
implemented.
The query rewriter is discussed in some detail in Chapter 33, so there is no need to cover it here. We will
only point out that both the input and the output of the rewriter are query trees, that is, there is no change
in the representation or level of semantic detail in the trees. Rewriting can be thought of as a form of
macro expansion.
40.5. Planner/Optimizer
The task of the planner/optimizer is to create an optimal execution plan. A given SQL query (and hence,
a query tree) can be actually executed in a wide variety of different ways, each of which will produce
the same set of results. If it is computationally feasible, the query optimizer will examine each of these
possible execution plans, ultimately selecting the execution plan that is expected to run the fastest.
Note: In some situations, examining each possible way in which a query may be executed would take
an excessive amount of time and memory space. In particular, this occurs when executing queries
involving large numbers of join operations. In order to determine a reasonable (not optimal) query
plan in a reasonable amount of time, PostgreSQL uses a Genetic Query Optimizer.
The planner’s search procedure actually works with data structures called paths, which are simply cut-
down representations of plans containing only as much information as the planner needs to make its
decisions. After the cheapest path is determined, a full-fledged plan tree is built to pass to the executor.
This represents the desired execution plan in sufficient detail for the executor to run it. In the rest of this
section we’ll ignore the distinction between paths and plans.
The available join strategies are:
• nested loop join: The right relation is scanned once for every row found in the left relation. This strategy
is easy to implement but can be very time consuming. (However, if the right relation can be scanned
with an index scan, this can be a good strategy. It is possible to use values from the current row of the
left relation as keys for the index scan of the right.)
• merge sort join: Each relation is sorted on the join attributes before the join starts. Then the two relations
are scanned in parallel, and matching rows are combined to form join rows. This kind of join is more
attractive because each relation has to be scanned only once. The required sorting may be achieved
either by an explicit sort step, or by scanning the relation in the proper order using an index on the join
key.
• hash join: the right relation is first scanned and loaded into a hash table, using its join attributes as hash
keys. Next the left relation is scanned and the appropriate values of every row found are used as hash
keys to locate the matching rows in the table.
When the query involves more than two relations, the final result must be built up by a tree of join steps,
each with two inputs. The planner examines different possible join sequences to find the cheapest one.
The finished plan tree consists of sequential or index scans of the base relations, plus nested-loop, merge,
or hash join nodes as needed, plus any auxiliary steps needed, such as sort nodes or aggregate-function
calculation nodes. Most of these plan node types have the additional ability to do selection (discarding
rows that do not meet a specified boolean condition) and projection (computation of a derived column
set based on given column values, that is, evaluation of scalar expressions where needed). One of the
responsibilities of the planner is to attach selection conditions from the WHERE clause and computation of
required output expressions to the most appropriate nodes of the plan tree.
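As a hedged illustration of the planner's output, the EXPLAIN command displays the plan tree chosen for a query, showing the scan and join node types described above; EXPLAIN ANALYZE additionally executes the plan and reports actual row counts. The tables customers and orders here are hypothetical and serve only as an example.
-- Display the plan chosen by the planner/optimizer for a two-way join.
EXPLAIN
SELECT c.name, o.total
FROM customers c JOIN orders o ON o.customer_id = c.id
WHERE c.country = 'AT';
-- Execute the plan as well, reporting actual row counts and run times.
EXPLAIN ANALYZE
SELECT c.name, o.total
FROM customers c JOIN orders o ON o.customer_id = c.id
WHERE c.country = 'AT';
Depending on the available indexes and table sizes, the resulting plan will contain sequential or index scans of the base relations combined by a nested-loop, merge, or hash join node.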
40.6. Executor
The executor takes the plan handed back by the planner/optimizer and recursively processes it to extract
the required set of rows. This is essentially a demand-pull pipeline mechanism. Each time a plan node is
called, it must deliver one more row, or report that it is done delivering rows.
To provide a concrete example, assume that the top node is a MergeJoin node. Before any merge can
be done two rows have to be fetched (one from each subplan). So the executor recursively calls itself to
process the subplans (it starts with the subplan attached to lefttree). The new top node (the top node of
the left subplan) is, let’s say, a Sort node and again recursion is needed to obtain an input row. The child
node of the Sort might be a SeqScan node, representing actual reading of a table. Execution of this node
causes the executor to fetch a row from the table and return it up to the calling node. The Sort node will
repeatedly call its child to obtain all the rows to be sorted. When the input is exhausted (as indicated by
the child node returning a NULL instead of a row), the Sort code performs the sort, and finally is able to
return its first output row, namely the first one in sorted order. It keeps the remaining rows stored so that
it can deliver them in sorted order in response to later demands.
The MergeJoin node similarly demands the first row from its right subplan. Then it compares the two
rows to see if they can be joined; if so, it returns a join row to its caller. On the next call, or immediately
if it cannot join the current pair of inputs, it advances to the next row of one table or the other (depending
on how the comparison came out), and again checks for a match. Eventually, one subplan or the other is
exhausted, and the MergeJoin node returns NULL to indicate that no more join rows can be formed.
Complex queries may involve many levels of plan nodes, but the general approach is the same: each node
computes and returns its next output row each time it is called. Each node is also responsible for applying
any selection or projection expressions that were assigned to it by the planner.
The executor mechanism is used to evaluate all four basic SQL query types: SELECT, INSERT, UPDATE,
and DELETE. For SELECT, the top-level executor code only needs to send each row returned by the query
plan tree off to the client. For INSERT, each returned row is inserted into the target table specified for
the INSERT. (A simple INSERT ... VALUES command creates a trivial plan tree consisting of a single
Result node, which computes just one result row. But INSERT ... SELECT may demand the full power
of the executor mechanism.) For UPDATE, the planner arranges that each computed row includes all the
updated column values, plus the TID (tuple ID, or row ID) of the original target row; the executor top level
uses this information to create a new updated row and mark the old row deleted. For DELETE, the only
column that is actually returned by the plan is the TID, and the executor top level simply uses the TID to
visit each target row and mark it deleted.
Chapter 41. System Catalogs
The system catalogs are the place where a relational database management system stores schema meta-
data, such as information about tables and columns, and internal bookkeeping information. PostgreSQL’s
system catalogs are regular tables. You can drop and recreate the tables, add columns, insert and update
values, and severely mess up your system that way. Normally, one should not change the system catalogs
by hand, there are always SQL commands to do that. (For example, CREATE DATABASE inserts a row into
the pg_database catalog — and actually creates the database on disk.) There are some exceptions for
particularly esoteric operations, such as adding index access methods.
41.1. Overview
Table 41-1 lists the system catalogs. More detailed documentation of each catalog follows below.
Most system catalogs are copied from the template database during database creation and are thereafter
database-specific. A few catalogs are physically shared across all databases in a cluster; these are noted in
the descriptions of the individual catalogs.
41.2. pg_aggregate
The catalog pg_aggregate stores information about aggregate functions. An aggregate function is a
function that operates on a set of values (typically one column from each row that matches a query con-
dition) and returns a single value computed from all these values. Typical aggregate functions are sum,
count, and max. Each entry in pg_aggregate is an extension of an entry in pg_proc. The pg_proc
entry carries the aggregate’s name, input and output data types, and other information that is similar to
ordinary functions.
New aggregate functions are registered with the CREATE AGGREGATE command. See Section 31.10 for
more information about writing aggregate functions and the meaning of the transition functions, etc.
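As a rough sketch of this relationship, the following query joins pg_aggregate to the corresponding pg_proc entries; the particular columns selected are chosen only for illustration.
-- List aggregate functions together with their transition and final functions.
SELECT p.proname AS aggregate_name,
       format_type(p.prorettype, NULL) AS result_type,
       a.aggtransfn,
       a.aggfinalfn
FROM pg_aggregate a
     JOIN pg_proc p ON p.oid = a.aggfnoid
ORDER BY p.proname;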
41.3. pg_am
The catalog pg_am stores information about index access methods. There is one row for each index access
method supported by the system.
An index access method that supports multiple columns (has amcanmulticol true) must support in-
dexing null values in columns after the first, because the planner will assume the index can be used for
queries on just the first column(s). For example, consider an index on (a,b) and a query with WHERE a =
4. The system will assume the index can be used to scan for rows with a = 4, which is wrong if the index
omits rows where b is null. It is, however, OK to omit rows where the first indexed column is null. (GiST
currently does so.) amindexnulls should be set true only if the index access method indexes all rows,
including arbitrary combinations of null values.
41.4. pg_amop
The catalog pg_amop stores information about operators associated with index access method operator
classes. There is one row for each operator that is a member of an operator class.
41.5. pg_amproc
The catalog pg_amproc stores information about support procedures associated with index access method
operator classes. There is one row for each support procedure belonging to an operator class.
41.6. pg_attrdef
The catalog pg_attrdef stores column default values. The main information about columns is stored
in pg_attribute (see below). Only columns that explicitly specify a default value (when the table is
created or the column is added) will have an entry here.
The adsrc field is historical, and is best not used, because it does not track outside changes that might
affect the representation of the default value. Reverse-compiling the adbin field (with pg_get_expr for
example) is a better way to display the default value.
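For example, a query along the following lines uses pg_get_expr rather than adsrc to display column defaults; mytable is a hypothetical table name used only for illustration.
-- Show the default expression for each column of a table that has one.
SELECT a.attname,
       pg_get_expr(d.adbin, d.adrelid) AS default_value
FROM pg_attrdef d
     JOIN pg_attribute a ON a.attrelid = d.adrelid AND a.attnum = d.adnum
WHERE d.adrelid = 'mytable'::regclass;     -- hypothetical table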
41.7. pg_attribute
The catalog pg_attribute stores information about table columns. There will be exactly one
pg_attribute row for every column in every table in the database. (There will also be attribute entries
for indexes, and indeed all objects that have pg_class entries.)
The term attribute is equivalent to column and is used for historical reasons.
In a dropped column’s pg_attribute entry, atttypid is reset to zero, but attlen and the other fields
copied from pg_type are still valid. This arrangement is needed to cope with the situation where the
dropped column’s data type was later dropped, and so there is no pg_type row anymore. attlen and the
other fields can be used to interpret the contents of a row of the table.
41.8. pg_cast
The catalog pg_cast stores data type conversion paths, both built-in paths and those defined with CREATE
CAST.
The cast functions listed in pg_cast must always take the cast source type as their first argument type,
and return the cast destination type as their result type. A cast function can have up to three arguments.
The second argument, if present, must be type integer; it receives the type modifier associated with the
destination type, or -1 if there is none. The third argument, if present, must be type boolean; it receives
true if the cast is an explicit cast, false otherwise.
It is legitimate to create a pg_cast entry in which the source and target types are the same, if the as-
sociated function takes more than one argument. Such entries represent “length coercion functions” that
coerce values of the type to be legal for a particular type modifier value. Note however that at present there
is no support for associating non-default type modifiers with user-created data types, and so this facility is
only of use for the small number of built-in types that have type modifier syntax built into the grammar.
When a pg_cast entry has different source and target types and a function that takes more than one
argument, it represents converting from one type to another and applying a length coercion in a single
step. When no such entry is available, coercion to a type that uses a type modifier involves two steps, one
to convert between data types and a second to apply the modifier.
41.9. pg_class
The catalog pg_class catalogs tables and most everything else that has columns or is otherwise similar
to a table. This includes indexes (but see also pg_index), sequences, views, composite types, and some
kinds of special relation; see relkind. Below, when we mean all of these kinds of objects we speak of
“relations”. Not all columns are meaningful for all relation types.
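As a small illustration, a query like the following lists ordinary tables by filtering on relkind; the excluded schema names are simply the usual system schemas.
-- List ordinary tables (relkind = 'r') outside the system schemas.
SELECT n.nspname AS schema, c.relname AS table_name, c.reltuples AS estimated_rows
FROM pg_class c
     JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind = 'r'
  AND n.nspname NOT IN ('pg_catalog', 'information_schema')
ORDER BY 1, 2;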
41.10. pg_constraint
The catalog pg_constraint stores check, primary key, unique, and foreign key constraints on tables.
(Column constraints are not treated specially. Every column constraint is equivalent to some table con-
straint.) Not-null constraints are represented in the pg_attribute catalog.
Check constraints on domains are stored here, too.
Note: consrc is not updated when referenced objects change; for example, it won’t track renaming
of columns. Rather than relying on this field, it’s best to use pg_get_constraintdef() to extract the
definition of a check constraint.
Note: pg_class.relchecks needs to agree with the number of check-constraint entries found in this
table for the given relation.
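A minimal sketch of the recommended approach, using pg_get_constraintdef() on the constraints of a hypothetical table mytable:
-- Show the definitions of all constraints attached to a table.
SELECT conname,
       contype,                              -- c = check, f = foreign key, p = primary key, u = unique
       pg_get_constraintdef(oid) AS definition
FROM pg_constraint
WHERE conrelid = 'mytable'::regclass;        -- hypothetical table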
41.11. pg_conversion
The catalog pg_conversion describes the available encoding conversion procedures. See CREATE
CONVERSION for more information.
41.12. pg_database
The catalog pg_database stores information about the available databases. Databases are created with
the CREATE DATABASE command. Consult Chapter 18 for details about the meaning of some of the
parameters.
Unlike most system catalogs, pg_database is shared across all databases of a cluster: there is only one
copy of pg_database per cluster, not one per database.
41.13. pg_depend
The catalog pg_depend records the dependency relationships between database objects. This informa-
tion allows DROP commands to find which other objects must be dropped by DROP CASCADE or prevent dropping in the DROP RESTRICT case.
In all cases, a pg_depend entry indicates that the referenced object may not be dropped without also
dropping the dependent object. However, there are several subflavors identified by deptype:
DEPENDENCY_NORMAL (n)
A normal relationship between separately-created objects. The dependent object may be dropped
without affecting the referenced object. The referenced object may only be dropped by specifying
CASCADE, in which case the dependent object is dropped, too. Example: a table column has a normal
dependency on its data type.
DEPENDENCY_AUTO (a)
The dependent object can be dropped separately from the referenced object, and should be auto-
matically dropped (regardless of RESTRICT or CASCADE mode) if the referenced object is dropped.
Example: a named constraint on a table is made autodependent on the table, so that it will go away
if the table is dropped.
DEPENDENCY_INTERNAL (i)
The dependent object was created as part of creation of the referenced object, and is really just
a part of its internal implementation. A DROP of the dependent object will be disallowed outright
(we’ll tell the user to issue a DROP against the referenced object, instead). A DROP of the referenced
object will be propagated through to drop the dependent object whether CASCADE is specified or not.
Example: a trigger that’s created to enforce a foreign-key constraint is made internally dependent on
the constraint’s pg_constraint entry.
DEPENDENCY_PIN (p)
There is no dependent object; this type of entry is a signal that the system itself depends on the
referenced object, and so that object must never be deleted. Entries of this type are created only by
initdb. The columns for the dependent object contain zeroes.
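As a hedged example of inspecting these entries, the query below lists everything recorded as depending on a hypothetical table mytable, showing the catalog each dependent object lives in and the dependency type.
-- Find objects that depend on a given table.
SELECT classid::regclass AS dependent_catalog,   -- catalog containing the dependent object
       objid,                                    -- OID of the dependent object within that catalog
       deptype                                   -- n, a, i, or p as described above
FROM pg_depend
WHERE refclassid = 'pg_class'::regclass
  AND refobjid = 'mytable'::regclass;            -- hypothetical table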
41.14. pg_description
The catalog pg_description stores optional descriptions (comments) for each database object. Descrip-
tions can be manipulated with the COMMENT command and viewed with psql’s \d commands. Descriptions
of many built-in system objects are provided in the initial contents of pg_description.
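For instance (mytable being a hypothetical table), a comment set with COMMENT can be read back from pg_description directly:
COMMENT ON TABLE mytable IS 'Customer master data';
-- Retrieve the comment on the table itself; objsubid = 0 means the comment is
-- on the object as a whole, while column comments carry the column number.
SELECT d.description
FROM pg_description d
WHERE d.classoid = 'pg_class'::regclass
  AND d.objoid = 'mytable'::regclass
  AND d.objsubid = 0;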
41.15. pg_group
The catalog pg_group defines groups and stores what users belong to what groups. Groups are created
with the CREATE GROUP command. Consult Chapter 17 for information about user privilege management.
Because user and group identities are cluster-wide, pg_group is shared across all databases of a cluster:
there is only one copy of pg_group per cluster, not one per database.
41.16. pg_index
The catalog pg_index contains part of the information about indexes. The rest is mostly in pg_class.
41.17. pg_inherits
The catalog pg_inherits records information about table inheritance hierarchies. There is one entry
for each direct child table in the database. (Indirect inheritance can be determined by following chains of
entries.)
41.18. pg_language
The catalog pg_language registers languages in which you can write functions or stored procedures. See
CREATE LANGUAGE and Chapter 34 for more information about language handlers.
41.19. pg_largeobject
The catalog pg_largeobject holds the data making up “large objects”. A large object is identified by
an OID assigned when it is created. Each large object is broken into segments or “pages” small enough
to be conveniently stored as rows in pg_largeobject. The amount of data per page is defined to be
LOBLKSIZE (which is currently BLCKSZ/4, or typically 2 kB).
Each row of pg_largeobject holds data for one page of a large object, beginning at byte offset (pageno
* LOBLKSIZE) within the object. The implementation allows sparse storage: pages may be missing, and
may be shorter than LOBLKSIZE bytes even if they are not the last page of the object. Missing regions
within a large object read as zeroes.
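As a sketch of this layout (assuming the typical LOBLKSIZE of 2048 bytes and a purely hypothetical large object OID of 16385), the byte offset of each stored page can be computed directly:
-- Show which pages of a large object are present and where each one starts.
SELECT pageno,
       pageno * 2048 AS byte_offset,       -- assumes LOBLKSIZE = 2048 (BLCKSZ/4)
       length(data) AS bytes_on_page       -- may be shorter than LOBLKSIZE
FROM pg_largeobject
WHERE loid = 16385                         -- hypothetical large object OID
ORDER BY pageno;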
41.20. pg_listener
The catalog pg_listener supports the LISTEN and NOTIFY commands. A listener creates an entry in
pg_listener for each notification name it is listening for. A notifier scans pg_listener and updates
each matching entry to show that a notification has occurred. The notifier also sends a signal (using the
PID recorded in the table) to awaken the listener from sleep.
41.21. pg_namespace
The catalog pg_namespace stores namespaces. A namespace is the structure underlying SQL schemas:
each namespace can have a separate collection of relations, types, etc. without name conflicts.
41.22. pg_opclass
The catalog pg_opclass defines index access method operator classes. Each operator class defines se-
mantics for index columns of a particular data type and a particular index access method. Note that there
can be multiple operator classes for a given data type/access method combination, thus supporting multi-
ple behaviors.
Operator classes are described at length in Section 31.14.
The majority of the information defining an operator class is actually not in its pg_opclass row, but in
the associated rows in pg_amop and pg_amproc. Those rows are considered to be part of the operator
class definition — this is not unlike the way that a relation is defined by a single pg_class row plus
associated rows in pg_attribute and other tables.
41.23. pg_operator
The catalog pg_operator stores information about operators. See CREATE OPERATOR and Section
31.12 for more information.
Unused columns contain zeroes. For example, oprleft is zero for a prefix operator.
41.24. pg_proc
The catalog pg_proc stores information about functions (or procedures). See CREATE FUNCTION and
Section 31.3 for more information.
The table contains data for aggregate functions as well as plain functions. If proisagg is true, there
should be a matching row in pg_aggregate.
For compiled functions, both built-in and dynamically loaded, prosrc contains the function’s C-language
name (link symbol). For all other currently-known language types, prosrc contains the function’s source
text. probin is unused except for dynamically-loaded C functions, for which it gives the name of the
shared library file containing the function.
41.25. pg_rewrite
The catalog pg_rewrite stores rewrite rules for tables and views.
Note: pg_class.relhasrules must be true if a table has any rules in this catalog.
41.26. pg_shadow
The catalog pg_shadow contains information about database users. The name stems from the fact that
this table should not be readable by the public since it contains passwords. pg_user is a publicly readable
view on pg_shadow that blanks out the password field.
Chapter 17 contains detailed information about user and privilege management.
Because user identities are cluster-wide, pg_shadow is shared across all databases of a cluster: there is
only one copy of pg_shadow per cluster, not one per database.
41.27. pg_statistic
The catalog pg_statistic stores statistical data about the contents of the database. Entries are created
by ANALYZE and subsequently used by the query planner. There is one entry for each table column that
has been analyzed. Note that all the statistical data is inherently approximate, even assuming that it is
up-to-date.
pg_statistic also stores statistical data about the values of index expressions. These are described as
if they were actual data columns; in particular, starelid references the index. No entry is made for
an ordinary non-expression index column, however, since it would be redundant with the entry for the
underlying table column.
Since different kinds of statistics may be appropriate for different kinds of data, pg_statistic is de-
signed not to assume very much about what sort of statistics it stores. Only extremely general statistics
(such as nullness) are given dedicated columns in pg_statistic. Everything else is stored in “slots”,
which are groups of associated columns whose content is identified by a code number in one of the slot’s
columns. For more information see src/include/catalog/pg_statistic.h.
pg_statistic should not be readable by the public, since even statistical information about a table’s
contents may be considered sensitive. (Example: minimum and maximum values of a salary column might
be quite interesting.) pg_stats is a publicly readable view on pg_statistic that only exposes infor-
mation about those tables that are readable by the current user.
41.28. pg_tablespace
The catalog pg_tablespace stores information about the available tablespaces. Tables can be placed in
particular tablespaces to aid administration of disk layout.
Unlike most system catalogs, pg_tablespace is shared across all databases of a cluster: there is only
one copy of pg_tablespace per cluster, not one per database.
41.29. pg_trigger
The catalog pg_trigger stores triggers on tables. See CREATE TRIGGER for more information.
Note: pg_class.reltriggers needs to agree with the number of triggers found in this table for the
given relation.
41.30. pg_type
The catalog pg_type stores information about data types. Base types (scalar types) are created with
CREATE TYPE, and domains with CREATE DOMAIN. A composite type is automatically created for
each table in the database, to represent the row structure of the table. It is also possible to create composite
types with CREATE TYPE AS.
The typalign column specifies the alignment required when storing a value of this type:
• s = short alignment (2 bytes on most machines).
• i = int alignment (4 bytes on most machines).
• d = double alignment (8 bytes on many machines, but by no means all).
The typstorage column tells whether the type is prepared for toasting; among its possible values are:
• m: Value can be stored compressed inline.
• x: Value can be stored compressed inline or stored in “secondary” storage.
41.32. pg_indexes
The view pg_indexes provides access to useful information about each index in the database.
41.33. pg_locks
The view pg_locks provides access to information about the locks held by open transactions within the
database server. See Chapter 12 for more discussion of locking.
pg_locks contains one row per active lockable object, requested lock mode, and relevant transaction.
Thus, the same lockable object may appear many times, if multiple transactions are holding or waiting for
locks on it. However, an object that currently has no locks on it will not appear at all. A lockable object is
either a relation (e.g., a table) or a transaction ID.
Note that this view includes only table-level locks, not row-level ones. If a transaction is waiting for a
row-level lock, it will appear in the view as waiting for the transaction ID of the current holder of that row
lock.
granted is true in a row representing a lock held by the indicated session. False indicates that this session
is currently waiting to acquire this lock, which implies that some other session is holding a conflicting
lock mode on the same lockable object. The waiting session will sleep until the other lock is released (or
a deadlock situation is detected). A single session can be waiting to acquire at most one lock at a time.
Every transaction holds an exclusive lock on its transaction ID for its entire duration. If one transaction
finds it necessary to wait specifically for another transaction, it does so by attempting to acquire share lock
on the other transaction ID. That will succeed only when the other transaction terminates and releases its
locks.
When the pg_locks view is accessed, the internal lock manager data structures are momentarily locked,
and a copy is made for the view to display. This ensures that the view produces a consistent set of results,
while not blocking normal lock manager operations longer than necessary. Nonetheless there could be
some impact on database performance if this view is read often.
pg_locks provides a global view of all locks in the database cluster, not only those relevant to the
current database. Although its relation column can be joined against pg_class.oid to identify locked
relations, this will only work correctly for relations in the current database (those for which the database
column is either the current database’s OID or zero).
If you have enabled the statistics collector, the pid column can be joined to the procpid column of the
pg_stat_activity view to get more information on the session holding or waiting to hold the lock.
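A hedged example of that join (it assumes the statistics collector is enabled, and relation names resolve only for the current database):
-- Show table-level locks together with the session holding or awaiting them.
SELECT l.relation::regclass AS locked_relation,   -- resolvable only for the current database
       l.mode,
       l.granted,
       a.usename,
       a.current_query
FROM pg_locks l
     LEFT JOIN pg_stat_activity a ON a.procpid = l.pid
WHERE l.relation IS NOT NULL;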
41.34. pg_rules
The view pg_rules provides access to useful information about query rewrite rules.
The pg_rules view excludes the ON SELECT rules of views; those can be seen in pg_views.
41.35. pg_settings
The view pg_settings provides access to run-time parameters of the server. It is essentially an alterna-
tive interface to the SHOW and SET commands. It also provides access to some facts about each parameter
that are not directly available from SHOW, such as minimum and maximum values.
The pg_settings view cannot be inserted into or deleted from, but it can be updated. An UPDATE
applied to a row of pg_settings is equivalent to executing the SET command on that named parameter.
The change only affects the value used by the current session. If an UPDATE is issued within a transaction
that is later aborted, the effects of the UPDATE command disappear when the transaction is rolled back.
Once the surrounding transaction is committed, the effects will persist until the end of the session, unless
overridden by another UPDATE or SET.
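For example, the following two statements have the same effect on the current session; work_mem and the value 8192 are arbitrary choices made only for illustration:
SET work_mem TO '8192';
UPDATE pg_settings SET setting = '8192' WHERE name = 'work_mem';
-- The extra facts mentioned above can be inspected directly:
SELECT name, setting, min_val, max_val
FROM pg_settings
WHERE name = 'work_mem';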
41.36. pg_stats
The view pg_stats provides access to the information stored in the pg_statistic catalog. This view
allows access only to rows of pg_statistic that correspond to tables the user has permission to read,
and therefore it is safe to allow public read access to this view.
pg_stats is also designed to present the information in a more readable format than the underly-
ing catalog — at the cost that its schema must be extended whenever new slot types are defined for
pg_statistic.
The maximum number of entries in the most_common_vals and histogram_bounds arrays can be
set on a column-by-column basis using the ALTER TABLE SET STATISTICS command, or globally by
setting the default_statistics_target runtime parameter.
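For instance (mytable and mycolumn being hypothetical names), the per-column target can be raised and the resulting statistics inspected through pg_stats:
-- Collect up to 200 most-common values / histogram bounds for one column.
ALTER TABLE mytable ALTER COLUMN mycolumn SET STATISTICS 200;
ANALYZE mytable;
SELECT attname, n_distinct, most_common_vals, histogram_bounds
FROM pg_stats
WHERE tablename = 'mytable' AND attname = 'mycolumn';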
41.37. pg_tables
The view pg_tables provides access to useful information about each table in the database.
41.38. pg_user
The view pg_user provides access to information about database users. This is simply a publicly readable
view of pg_shadow that blanks out the password field.
41.39. pg_views
The view pg_views provides access to useful information about each view in the database.
Chapter 42. Frontend/Backend Protocol
PostgreSQL uses a message-based protocol for communication between frontends and backends (clients
and servers). The protocol is supported over TCP/IP and also over Unix-domain sockets. Port number 5432
has been registered with IANA as the customary TCP port number for servers supporting this protocol,
but in practice any non-privileged port number may be used.
This document describes version 3.0 of the protocol, implemented in PostgreSQL 7.4 and later. For de-
scriptions of the earlier protocol versions, see previous releases of the PostgreSQL documentation. A
single server can support multiple protocol versions. The initial startup-request message tells the server
which protocol version the client is attempting to use, and then the server follows that protocol if it is able.
Higher level features built on this protocol (for example, how libpq passes certain environment variables
when the connection is established) are covered elsewhere.
In order to serve multiple clients efficiently, the server launches a new “backend” process for each client.
In the current implementation, a new child process is created immediately after an incoming connection is
detected. This is transparent to the protocol, however. For purposes of the protocol, the terms “backend”
and “server” are interchangeable; likewise “frontend” and “client” are interchangeable.
42.1. Overview
The protocol has separate phases for startup and normal operation. In the startup phase, the frontend opens
a connection to the server and authenticates itself to the satisfaction of the server. (This might involve a
single message, or multiple messages depending on the authentication method being used.) If all goes
well, the server then sends status information to the frontend, and finally enters normal operation. Except
for the initial startup-request message, this part of the protocol is driven by the server.
During normal operation, the frontend sends queries and other commands to the backend, and the backend
sends back query results and other responses. There are a few cases (such as NOTIFY) wherein the backend
will send unsolicited messages, but for the most part this portion of a session is driven by frontend requests.
Termination of the session is normally by frontend choice, but can be forced by the backend in certain
cases. In any case, when the backend closes the connection, it will roll back any open (incomplete) trans-
action before exiting.
Within normal operation, SQL commands can be executed through either of two sub-protocols. In the
“simple query” protocol, the frontend just sends a textual query string, which is parsed and immediately
executed by the backend. In the “extended query” protocol, processing of queries is separated into multiple
steps: parsing, binding of parameter values, and execution. This offers flexibility and performance benefits,
at the cost of extra complexity.
Normal operation has additional sub-protocols for special operations such as COPY.
All communication is through a stream of messages. The first byte of a message identifies the message type, and the next four bytes specify the length of the rest of the message (this length count includes itself, but not the message-type byte). The remaining contents of the message are determined by the message type. For historical reasons, the very first message sent by the client (the startup message) has no initial message-type byte.
To avoid losing synchronization with the message stream, both servers and clients typically read an entire
message into a buffer (using the byte count) before attempting to process its contents. This allows easy
recovery if an error is detected while processing the contents. In extreme situations (such as not having
enough memory to buffer the message), the receiver may use the byte count to determine how much input
to skip before it resumes reading messages.
Conversely, both servers and clients must take care never to send an incomplete message. This is com-
monly done by marshaling the entire message in a buffer before beginning to send it. If a communications
failure occurs partway through sending or receiving a message, the only sensible response is to abandon
the connection, since there is little hope of recovering message-boundary synchronization.
Binary representations for integers use network byte order (most significant byte first). For other data
types consult the documentation or source code to learn about the binary representation. Keep in mind
that binary representations for complex data types may change across server versions; the text format is
usually the more portable choice.
42.2. Message Flow
42.2.1. Start-Up
To begin a session, a frontend opens a connection to the server and sends a startup message. This message
includes the names of the user and of the database the user wants to connect to; it also identifies the
particular protocol version to be used. (Optionally, the startup message can include additional settings for
run-time parameters.) The server then uses this information and the contents of its configuration files (such
as pg_hba.conf) to determine whether the connection is provisionally acceptable, and what additional
authentication is required (if any).
The server then sends an appropriate authentication request message, to which the frontend must reply
with an appropriate authentication response message (such as a password). In principle the authentication
request/response cycle could require multiple iterations, but none of the present authentication methods
use more than one request and response. In some methods, no response at all is needed from the frontend,
and so no authentication request occurs.
The authentication cycle ends with the server either rejecting the connection attempt (ErrorResponse), or
sending AuthenticationOk.
The possible messages from the server in this phase are:
ErrorResponse
The connection attempt has been rejected. The server then immediately closes the connection.
AuthenticationOk
The authentication exchange is successfully completed.
AuthenticationKerberosV4
The frontend must now take part in a Kerberos V4 authentication dialog (not described here, part of
the Kerberos specification) with the server. If this is successful, the server responds with an Authen-
ticationOk, otherwise it responds with an ErrorResponse.
AuthenticationKerberosV5
The frontend must now take part in a Kerberos V5 authentication dialog (not described here, part of
the Kerberos specification) with the server. If this is successful, the server responds with an Authen-
ticationOk, otherwise it responds with an ErrorResponse.
AuthenticationCleartextPassword
The frontend must now send a PasswordMessage containing the password in clear-text form. If this
is the correct password, the server responds with an AuthenticationOk, otherwise it responds with an
ErrorResponse.
AuthenticationCryptPassword
The frontend must now send a PasswordMessage containing the password encrypted via crypt(3), us-
ing the 2-character salt specified in the AuthenticationCryptPassword message. If this is the correct
password, the server responds with an AuthenticationOk, otherwise it responds with an ErrorRe-
sponse.
AuthenticationMD5Password
The frontend must now send a PasswordMessage containing the password encrypted via MD5, using
the 4-character salt specified in the AuthenticationMD5Password message. If this is the correct pass-
word, the server responds with an AuthenticationOk, otherwise it responds with an ErrorResponse.
AuthenticationSCMCredential
This response is only possible for local Unix-domain connections on platforms that support SCM
credential messages. The frontend must issue an SCM credential message and then send a single data
byte. (The contents of the data byte are uninteresting; it’s only used to ensure that the server waits
long enough to receive the credential message.) If the credential is acceptable, the server responds
with an AuthenticationOk, otherwise it responds with an ErrorResponse.
If the frontend does not support the authentication method requested by the server, then it should imme-
diately close the connection.
After having received AuthenticationOk, the frontend must wait for further messages from the server. In
this phase a backend process is being started, and the frontend is just an interested bystander. It is still
possible for the startup attempt to fail (ErrorResponse), but in the normal case the backend will send some
ParameterStatus messages, BackendKeyData, and finally ReadyForQuery.
During this phase the backend will attempt to apply any additional run-time parameter settings that were
given in the startup message. If successful, these values become session defaults. An error causes Error-
Response and exit.
The possible messages from the backend in this phase are:
BackendKeyData
This message provides secret-key data that the frontend must save if it wants to be able to issue
cancel requests later. The frontend should not respond to this message, but should continue listening
for a ReadyForQuery message.
ParameterStatus
This message informs the frontend about the current (initial) setting of backend parameters, such as
client_encoding or DateStyle. The frontend may ignore this message, or record the settings for its
future use; see Section 42.2.6 for more details. The frontend should not respond to this message, but
should continue listening for a ReadyForQuery message.
ReadyForQuery
Start-up is completed. The frontend may now issue commands.
ErrorResponse
Start-up failed. The connection is closed after sending this message.
NoticeResponse
A warning message has been issued. The frontend should display the message but continue listening
for ReadyForQuery or ErrorResponse.
The ReadyForQuery message is the same one that the backend will issue after each command cycle. De-
pending on the coding needs of the frontend, it is reasonable to consider ReadyForQuery as starting a
command cycle, or to consider ReadyForQuery as ending the start-up phase and each subsequent com-
mand cycle.
42.2.2. Simple Query
A simple query cycle is initiated by the frontend sending a Query message to the backend. The message includes an SQL command (or commands) expressed as a text string. The backend then sends one or more response messages depending on the contents of the query string, and finally a ReadyForQuery response message. The possible response messages from the backend are:
CommandComplete
An SQL command completed normally.
CopyInResponse
The backend is ready to copy data from the frontend to a table; see Section 42.2.5.
CopyOutResponse
The backend is ready to copy data from a table to the frontend; see Section 42.2.5.
RowDescription
Indicates that rows are about to be returned in response to a SELECT, FETCH, etc query. The contents
of this message describe the column layout of the rows. This will be followed by a DataRow message
for each row being returned to the frontend.
DataRow
One of the set of rows returned by a SELECT, FETCH, etc query.
EmptyQueryResponse
An empty query string was recognized.
ErrorResponse
An error has occurred.
ReadyForQuery
Processing of the query string is complete. A separate message is sent to indicate this because the
query string may contain multiple SQL commands. (CommandComplete marks the end of processing
one SQL command, not the whole string.) ReadyForQuery will always be sent, whether processing
terminates successfully or with an error.
NoticeResponse
A warning message has been issued in relation to the query. Notices are in addition to other responses,
i.e., the backend will continue processing the command.
The response to a SELECT query (or other queries that return row sets, such as EXPLAIN or SHOW) normally
consists of RowDescription, zero or more DataRow messages, and then CommandComplete. COPY to or
from the frontend invokes special protocol as described in Section 42.2.5. All other query types normally
produce only a CommandComplete message.
Since a query string could contain several queries (separated by semicolons), there might be several such
response sequences before the backend finishes processing the query string. ReadyForQuery is issued
when the entire string has been processed and the backend is ready to accept a new query string.
If a completely empty (no contents other than whitespace) query string is received, the response is Emp-
tyQueryResponse followed by ReadyForQuery.
In the event of an error, ErrorResponse is issued followed by ReadyForQuery. All further processing of
the query string is aborted by ErrorResponse (even if more queries remained in it). Note that this may
occur partway through the sequence of messages generated by an individual query.
In simple Query mode, the format of retrieved values is always text, except when the given command is
a FETCH from a cursor declared with the BINARY option. In that case, the retrieved values are in binary
format. The format codes given in the RowDescription message tell which format is being used.
A frontend must be prepared to accept ErrorResponse and NoticeResponse messages whenever it is ex-
pecting any other type of message. See also Section 42.2.6 concerning messages that the backend may
generate due to outside events.
Recommended practice is to code frontends in a state-machine style that will accept any message type at
any time that it could make sense, rather than wiring in assumptions about the exact sequence of messages.
42.2.3. Extended Query
In the extended-query protocol, execution of SQL commands is divided into multiple steps. The frontend first sends a Parse message, which contains a textual query string, optionally some information about data types of parameter placeholders, and the name of a destination prepared-statement object (an empty string selects the unnamed prepared statement). The response is either ParseComplete or ErrorResponse. Parameter data types may be specified by OID; if not given, the parser attempts to infer the data types in the same way as it would do for untyped literal string constants.
Note: The query string contained in a Parse message cannot include more than one SQL statement;
else a syntax error is reported. This restriction does not exist in the simple-query protocol, but it does
exist in the extended protocol, because allowing prepared statements or portals to contain multiple
commands would complicate the protocol unduly.
If successfully created, a named prepared-statement object lasts till the end of the current session, unless
explicitly destroyed. An unnamed prepared statement lasts only until the next Parse statement specifying
the unnamed statement as destination is issued. (Note that a simple Query message also destroys the
unnamed statement.) Named prepared statements must be explicitly closed before they can be redefined
by a Parse message, but this is not required for the unnamed statement. Named prepared statements can
also be created and accessed at the SQL command level, using PREPARE and EXECUTE.
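The SQL-level equivalent looks roughly like this (get_customer, customers, and customer_id are hypothetical names); the protocol-level Parse/Bind/Execute sequence performs the same steps on behalf of the client library:
-- Create a named prepared statement with one integer parameter.
PREPARE get_customer (integer) AS
    SELECT * FROM customers WHERE customer_id = $1;
-- Bind and execute it with an actual parameter value.
EXECUTE get_customer(42);
-- Destroy it explicitly (otherwise it lasts until the end of the session).
DEALLOCATE get_customer;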
Once a prepared statement exists, it can be readied for execution using a Bind message. The Bind message
gives the name of the source prepared statement (empty string denotes the unnamed prepared statement),
the name of the destination portal (empty string denotes the unnamed portal), and the values to use for
any parameter placeholders present in the prepared statement. The supplied parameter set must match
those needed by the prepared statement. Bind also specifies the format to use for any data returned by
the query; the format can be specified overall, or per-column. The response is either BindComplete or
ErrorResponse.
Note: The choice between text and binary output is determined by the format codes given in Bind,
regardless of the SQL command involved. The BINARY attribute in cursor declarations is irrelevant
when using extended query protocol.
Query planning for named prepared-statement objects occurs when the Parse message is received. If a
query will be repeatedly executed with different parameters, it may be beneficial to send a single Parse
message containing a parameterized query, followed by multiple Bind and Execute messages. This will
avoid replanning the query on each execution.
The unnamed prepared statement is likewise planned during Parse processing if the Parse message defines
no parameters. But if there are parameters, query planning is delayed until the first Bind message for the
statement is received. The planner will consider the actual values of the parameters provided in the Bind
message when planning the query.
Note: Query plans generated from a parameterized query may be less efficient than query plans gen-
erated from an equivalent query with actual parameter values substituted. The query planner cannot
make decisions based on actual parameter values (for example, index selectivity) when planning a pa-
rameterized query assigned to a named prepared-statement object. This possible penalty is avoided
when using the unnamed statement, since it is not planned until actual parameter values are available.
If a second or subsequent Bind referencing the unnamed prepared-statement object is received with-
out an intervening Parse, the query is not replanned. The parameter values used in the first Bind
message may produce a query plan that is only efficient for a subset of possible parameter values. To
force replanning of the query for a fresh set of parameters, send another Parse message to replace
the unnamed prepared-statement object.
If successfully created, a named portal object lasts till the end of the current transaction, unless explicitly
destroyed. An unnamed portal is destroyed at the end of the transaction, or as soon as the next Bind
statement specifying the unnamed portal as destination is issued. (Note that a simple Query message also
destroys the unnamed portal.) Named portals must be explicitly closed before they can be redefined by
a Bind message, but this is not required for the unnamed portal. Named portals can also be created and
accessed at the SQL command level, using DECLARE CURSOR and FETCH.
Once a portal exists, it can be executed using an Execute message. The Execute message specifies the
portal name (empty string denotes the unnamed portal) and a maximum result-row count (zero meaning
“fetch all rows”). The result-row count is only meaningful for portals containing commands that return
row sets; in other cases the command is always executed to completion, and the row count is ignored. The
possible responses to Execute are the same as those described above for queries issued via simple query
protocol, except that Execute doesn’t cause ReadyForQuery or RowDescription to be issued.
If Execute terminates before completing the execution of a portal (due to reaching a nonzero result-row
count), it will send a PortalSuspended message; the appearance of this message tells the frontend that
another Execute should be issued against the same portal to complete the operation. The CommandCom-
plete message indicating completion of the source SQL command is not sent until the portal’s execution
is completed. Therefore, an Execute phase is always terminated by the appearance of exactly one of these
messages: CommandComplete, EmptyQueryResponse (if the portal was created from an empty query
string), ErrorResponse, or PortalSuspended.
At completion of each series of extended-query messages, the frontend should issue a Sync message.
This parameterless message causes the backend to close the current transaction if it’s not inside a
BEGIN/COMMIT transaction block (“close” meaning to commit if no error, or roll back if error). Then a
ReadyForQuery response is issued. The purpose of Sync is to provide a resynchronization point for error
recovery. When an error is detected while processing any extended-query message, the backend issues
ErrorResponse, then reads and discards messages until a Sync is reached, then issues ReadyForQuery
and returns to normal message processing. (But note that no skipping occurs if an error is detected while
processing Sync — this ensures that there is one and only one ReadyForQuery sent for each Sync.)
Note: Sync does not cause a transaction block opened with BEGIN to be closed. It is possible to detect
this situation since the ReadyForQuery message includes transaction status information.
In addition to these fundamental, required operations, there are several optional operations that can be
used with extended-query protocol.
The Describe message (portal variant) specifies the name of an existing portal (or an empty string for the
unnamed portal). The response is a RowDescription message describing the rows that will be returned by
executing the portal; or a NoData message if the portal does not contain a query that will return rows; or
ErrorResponse if there is no such portal.
The Describe message (statement variant) specifies the name of an existing prepared statement (or an
empty string for the unnamed prepared statement). The response is a ParameterDescription message de-
scribing the parameters needed by the statement, followed by a RowDescription message describing the
rows that will be returned when the statement is eventually executed (or a NoData message if the statement
will not return rows). ErrorResponse is issued if there is no such prepared statement. Note that since Bind
has not yet been issued, the formats to be used for returned columns are not yet known to the backend; the
format code fields in the RowDescription message will be zeroes in this case.
Tip: In most scenarios the frontend should issue one or the other variant of Describe before issuing
Execute, to ensure that it knows how to interpret the results it will get back.
The Close message closes an existing prepared statement or portal and releases resources. It is not an error
to issue Close against a nonexistent statement or portal name. The response is normally CloseComplete,
but could be ErrorResponse if some difficulty is encountered while releasing resources. Note that closing
a prepared statement implicitly closes any open portals that were constructed from that statement.
The Flush message does not cause any specific output to be generated, but forces the backend to deliver
any data pending in its output buffers. A Flush must be sent after any extended-query command except
Sync, if the frontend wishes to examine the results of that command before issuing more commands.
Without Flush, messages returned by the backend will be combined into the minimum possible number
of packets to minimize network overhead.
Note: The simple Query message is approximately equivalent to the series Parse, Bind, portal De-
scribe, Execute, Close, Sync, using the unnamed prepared statement and portal objects and no pa-
rameters. One difference is that it will accept multiple SQL statements in the query string, automati-
cally performing the bind/describe/execute sequence for each one in succession. Another difference
is that it will not return ParseComplete, BindComplete, CloseComplete, or NoData messages.
Note: The Function Call sub-protocol is a legacy feature that is probably best avoided in new
code. Similar results can be accomplished by setting up a prepared statement that does SELECT
function($1, ...). The Function Call cycle can then be replaced with Bind/Execute.
A Function Call cycle is initiated by the frontend sending a FunctionCall message to the backend. The
backend then sends one or more response messages depending on the results of the function call, and
finally a ReadyForQuery response message. ReadyForQuery informs the frontend that it may safely send
a new query or function call.
The possible response messages from the backend are:
ErrorResponse
An error has occurred.
FunctionCallResponse
The function call was completed and returned the result given in the message. (Note that the Function
Call protocol can only handle a single scalar result, not a row type or set of results.)
ReadyForQuery
Processing of the function call is complete. ReadyForQuery will always be sent, whether processing
terminates successfully or with an error.
NoticeResponse
A warning message has been issued in relation to the function call. Notices are in addition to other
responses, i.e., the backend will continue processing the command.
The CopyInResponse and CopyOutResponse messages include fields that inform the frontend of the num-
ber of columns per row and the format codes being used for each column. (As of the present implemen-
tation, all columns in a given COPY operation will use the same format, but the message design does not
assume this.)
Note: At present, NotificationResponse can only be sent outside a transaction, and thus it will not
occur in the middle of a command-response series, though it may occur just before ReadyForQuery.
It is unwise to design frontend logic that assumes that, however. Good practice is to be able to accept
NotificationResponse at any point in the protocol.
To issue a cancel request, the frontend opens a new connection to the server and sends a CancelRequest
message, rather than the StartupMessage message that would ordinarily be sent across a new connection.
The server will process this request and then close the connection. For security reasons, no direct reply is
made to the cancel request message.
A CancelRequest message will be ignored unless it contains the same key data (PID and secret key) passed
to the frontend during connection start-up. If the request matches the PID and secret key for a currently
executing backend, the processing of the current query is aborted. (In the existing implementation, this is
done by sending a special signal to the backend process that is processing the query.)
The cancellation signal may or may not have any effect — for example, if it arrives after the backend has
finished processing the query, then it will have no effect. If the cancellation is effective, it results in the
current command being terminated early with an error message.
The upshot of all this is that for reasons of both security and efficiency, the frontend has no direct way
to tell whether a cancel request has succeeded. It must continue to wait for the backend to respond to the
query. Issuing a cancel simply improves the odds that the current query will finish soon, and improves the
odds that it will fail with an error message instead of succeeding.
Since the cancel request is sent across a new connection to the server and not across the regular fron-
tend/backend communication link, it is possible for the cancel request to be issued by any process, not
just the frontend whose query is to be canceled. This may have some benefits of flexibility in building
multiple-process applications. It also introduces a security risk, in that unauthorized persons might try
to cancel queries. The security risk is addressed by requiring a dynamically generated secret key to be
supplied in cancel requests.
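For illustration, the complete cancel request is only 16 bytes. The following sketch (the function name is
ours, not part of any PostgreSQL API) builds that packet from the PID and secret key previously received
in a BackendKeyData message; writing it to a freshly opened connection is left to the caller.

#include <stdint.h>
#include <string.h>
#include <arpa/inet.h>          /* htonl */

static void
build_cancel_request(uint8_t buf[16], uint32_t pid, uint32_t secret_key)
{
    uint32_t len  = htonl(16);          /* Int32(16): message length, including self */
    uint32_t code = htonl(80877102);    /* Int32(80877102): cancel request code */
    uint32_t be_pid = htonl(pid);       /* process ID of the target backend */
    uint32_t be_key = htonl(secret_key);/* secret key for the target backend */

    memcpy(buf + 0,  &len,    4);
    memcpy(buf + 4,  &code,   4);
    memcpy(buf + 8,  &be_pid, 4);
    memcpy(buf + 12, &be_key, 4);
}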
42.2.8. Termination
The normal, graceful termination procedure is that the frontend sends a Terminate message and immedi-
ately closes the connection. On receipt of this message, the backend closes the connection and terminates.
In rare cases (such as an administrator-commanded database shutdown) the backend may disconnect with-
out any frontend request to do so. In such cases the backend will attempt to send an error or notice message
giving the reason for the disconnection before it closes the connection.
Other termination scenarios arise from various failure cases, such as core dump at one end or the other,
loss of the communications link, loss of message-boundary synchronization, etc. If either frontend or
backend sees an unexpected closure of the connection, it should clean up and terminate. The frontend
has the option of launching a new backend by recontacting the server if it doesn’t want to terminate itself.
Closing the connection is also advisable if an unrecognizable message type is received, since this probably
indicates loss of message-boundary sync.
For either normal or abnormal termination, any open transaction is rolled back, not committed. One should
note however that if a frontend disconnects while a non-SELECT query is being processed, the backend will
probably finish the query before noticing the disconnection. If the query is outside any transaction block
(BEGIN ... COMMIT sequence) then its results may be committed before the disconnection is recognized.
42.2.9. SSL Session Encryption
If PostgreSQL was built with SSL support, frontend/backend communications can be encrypted using SSL.
This provides communication security in environments where attackers might be able to capture the
session traffic. For more information on encrypting PostgreSQL sessions with SSL, see Section 16.7.
To initiate an SSL-encrypted connection, the frontend initially sends an SSLRequest message rather than
a StartupMessage. The server then responds with a single byte containing S or N, indicating that it is will-
ing or unwilling to perform SSL, respectively. The frontend may close the connection at this point if it is
dissatisfied with the response. To continue after S, perform an SSL startup handshake (not described here,
part of the SSL specification) with the server. If this is successful, continue with sending the usual Star-
tupMessage. In this case the StartupMessage and all subsequent data will be SSL-encrypted. To continue
after N, send the usual StartupMessage and proceed without encryption.
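As a rough sketch, the negotiation amounts to writing an 8-byte SSLRequest packet and reading back one
byte before deciding how to proceed. The function below assumes an already-connected socket descriptor;
its name and structure are ours, and error handling is minimal.

#include <stdint.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>

/* Returns 1 if the server is willing to perform SSL ('S'), 0 if not ('N'),
 * and -1 on I/O failure.  sock is an already-connected socket. */
static int
negotiate_ssl(int sock)
{
    uint8_t  buf[8];
    uint32_t len  = htonl(8);           /* Int32(8): length, including self */
    uint32_t code = htonl(80877103);    /* Int32(80877103): SSL request code */
    char     answer;

    memcpy(buf, &len, 4);
    memcpy(buf + 4, &code, 4);

    if (write(sock, buf, sizeof(buf)) != sizeof(buf))
        return -1;
    if (read(sock, &answer, 1) != 1)
        return -1;

    /* 'S': perform the SSL handshake, then send StartupMessage encrypted.
     * 'N': send StartupMessage unencrypted, or close the connection. */
    return (answer == 'S') ? 1 : 0;
}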
The frontend should also be prepared to handle an ErrorMessage response to SSLRequest from the server.
This would only occur if the server predates the addition of SSL support to PostgreSQL. In this case the
connection must be closed, but the frontend may choose to open a fresh connection and proceed without
requesting SSL.
An initial SSLRequest may also be used in a connection that is being opened to send a CancelRequest
message.
While the protocol itself does not provide a way for the server to force SSL encryption, the administrator
may configure the server to reject unencrypted sessions as a byproduct of authentication checking.
Intn(i)
An n-bit integer in network byte order (most significant byte first). If i is specified it is the exact
value that will appear, otherwise the value is variable. E.g. Int16, Int32(42).
Intn[k]
An array of k n-bit integers, each in network byte order. The array length k is always determined by
an earlier field in the message. E.g. Int16[M].
String(s)
A null-terminated string (C-style string). There is no specific length limitation on strings. If s is spec-
ified it is the exact value that will appear, otherwise the value is variable. E.g. String, String("user").
Note: There is no predefined limit on the length of a string that can be returned by the backend.
Good coding strategy for a frontend is to use an expandable buffer so that anything that fits in
memory can be accepted. If that’s not feasible, read the full string and discard trailing characters
that don’t fit into your fixed-size buffer.
Byten(c)
Exactly n bytes. If the field width n is not a constant, it is always determinable from an earlier field
in the message. If c is specified it is the exact value. E.g. Byte2, Byte1('\n').
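To make the notation concrete, here is a small sketch of helper routines that append an Int16, an Int32,
and a String to a message buffer in the representation described above. The function names and buffer
handling are ours, not part of any PostgreSQL API; bounds checking is omitted for brevity.

#include <stdint.h>
#include <string.h>
#include <arpa/inet.h>

/* Append an Int16 in network byte order; return the new offset. */
static size_t
put_int16(uint8_t *buf, size_t off, int16_t value)
{
    uint16_t net = htons((uint16_t) value);

    memcpy(buf + off, &net, 2);
    return off + 2;
}

/* Append an Int32 in network byte order; return the new offset. */
static size_t
put_int32(uint8_t *buf, size_t off, int32_t value)
{
    uint32_t net = htonl((uint32_t) value);

    memcpy(buf + off, &net, 4);
    return off + 4;
}

/* Append a String: the bytes of a C string including its null terminator. */
static size_t
put_string(uint8_t *buf, size_t off, const char *s)
{
    size_t n = strlen(s) + 1;

    memcpy(buf + off, s, n);
    return off + n;
}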
AuthenticationOk (B)
Byte1(’R’)
Identifies the message as an authentication request.
Int32(8)
Length of message contents in bytes, including self.
Int32(0)
Specifies that the authentication was successful.
AuthenticationKerberosV4 (B)
Byte1(’R’)
Identifies the message as an authentication request.
Int32(8)
Length of message contents in bytes, including self.
Int32(1)
Specifies that Kerberos V4 authentication is required.
AuthenticationKerberosV5 (B)
Byte1(’R’)
Identifies the message as an authentication request.
Int32(8)
Length of message contents in bytes, including self.
Int32(2)
Specifies that Kerberos V5 authentication is required.
AuthenticationCleartextPassword (B)
Byte1(’R’)
Identifies the message as an authentication request.
Int32(8)
Length of message contents in bytes, including self.
Int32(3)
Specifies that a clear-text password is required.
AuthenticationCryptPassword (B)
Byte1(’R’)
Identifies the message as an authentication request.
Int32(10)
Length of message contents in bytes, including self.
Int32(4)
Specifies that a crypt()-encrypted password is required.
Byte2
The salt to use when encrypting the password.
AuthenticationMD5Password (B)
Byte1(’R’)
Identifies the message as an authentication request.
Int32(12)
Length of message contents in bytes, including self.
Int32(5)
Specifies that an MD5-encrypted password is required.
Byte4
The salt to use when encrypting the password.
AuthenticationSCMCredential (B)
Byte1(’R’)
Identifies the message as an authentication request.
Int32(8)
Length of message contents in bytes, including self.
Int32(6)
Specifies that an SCM credentials message is required.
BackendKeyData (B)
Byte1(’K’)
Identifies the message as cancellation key data. The frontend must save these values if it wishes
to be able to issue CancelRequest messages later.
Int32(12)
Length of message contents in bytes, including self.
Int32
The process ID of this backend.
Int32
The secret key of this backend.
Bind (F)
Byte1(’B’)
Identifies the message as a Bind command.
Int32
Length of message contents in bytes, including self.
String
The name of the destination portal (an empty string selects the unnamed portal).
String
The name of the source prepared statement (an empty string selects the unnamed prepared
statement).
Int16
The number of parameter format codes that follow (denoted C below). This can be zero to
indicate that there are no parameters or that the parameters all use the default format (text); or
one, in which case the specified format code is applied to all parameters; or it can equal the
actual number of parameters.
Int16[C]
The parameter format codes. Each must presently be zero (text) or one (binary).
Int16
The number of parameter values that follow (possibly zero). This must match the number of
parameters needed by the query.
Next, the following pair of fields appear for each parameter:
Int32
The length of the parameter value, in bytes (this count does not include itself). Can be zero. As
a special case, -1 indicates a NULL parameter value. No value bytes follow in the NULL case.
Byten
The value of the parameter, in the format indicated by the associated format code. n is the above
length.
After the last parameter, the following fields appear:
Int16
The number of result-column format codes that follow (denoted R below). This can be zero to
indicate that there are no result columns or that the result columns should all use the default
format (text); or one, in which case the specified format code is applied to all result columns (if
any); or it can equal the actual number of result columns of the query.
Int16[R]
The result-column format codes. Each must presently be zero (text) or one (binary).
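Putting these fields together, the following sketch serializes a Bind message for a single text-format
parameter, targeting the unnamed prepared statement and unnamed portal. It reuses the put_int16,
put_int32, and put_string helpers sketched earlier; as with those, the function name is ours and bounds
checking is omitted. Note that the length word counts everything after the type byte, including itself, so
it is filled in last.

/* Build a Bind message: 'B', Int32 length, destination portal, source
 * statement, parameter format codes, parameter values, result format codes.
 * Returns the total number of bytes written, including the type byte. */
static size_t
build_bind(uint8_t *buf, const char *param)
{
    size_t off = 0;
    size_t len_pos;
    size_t plen = strlen(param);

    buf[off++] = 'B';               /* message type */
    len_pos = off;                  /* remember where the length word goes */
    off += 4;                       /* skip the Int32 length for now */

    off = put_string(buf, off, ""); /* destination portal: unnamed */
    off = put_string(buf, off, ""); /* source statement: unnamed */

    off = put_int16(buf, off, 0);   /* 0 format codes: all parameters are text */
    off = put_int16(buf, off, 1);   /* 1 parameter value follows */
    off = put_int32(buf, off, (int32_t) plen);  /* parameter length */
    memcpy(buf + off, param, plen); /* parameter bytes, no null terminator */
    off += plen;

    off = put_int16(buf, off, 0);   /* 0 result format codes: all text */

    /* Length covers everything after the type byte, including itself. */
    put_int32(buf, len_pos, (int32_t) (off - 1));
    return off;
}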
BindComplete (B)
Byte1(’2’)
Identifies the message as a Bind-complete indicator.
Int32(4)
Length of message contents in bytes, including self.
CancelRequest (F)
Int32(16)
Length of message contents in bytes, including self.
Int32(80877102)
The cancel request code. The value is chosen to contain 1234 in the most significant 16 bits,
and 5678 in the least significant 16 bits. (To avoid confusion, this code must not be the same as
any protocol version number.)
Int32
The process ID of the target backend.
Int32
The secret key for the target backend.
Close (F)
Byte1(’C’)
Identifies the message as a Close command.
Int32
Length of message contents in bytes, including self.
Byte1
’S’ to close a prepared statement; or ’P’ to close a portal.
String
The name of the prepared statement or portal to close (an empty string selects the unnamed
prepared statement or portal).
CloseComplete (B)
Byte1(’3’)
Identifies the message as a Close-complete indicator.
Int32(4)
Length of message contents in bytes, including self.
CommandComplete (B)
Byte1(’C’)
Identifies the message as a command-completed response.
Int32
Length of message contents in bytes, including self.
String
The command tag. This is usually a single word that identifies which SQL command was com-
pleted.
For an INSERT command, the tag is INSERT oid rows, where rows is the number of rows
inserted. oid is the object ID of the inserted row if rows is 1 and the target table has OIDs;
otherwise oid is 0.
For a DELETE command, the tag is DELETE rows where rows is the number of rows deleted.
For an UPDATE command, the tag is UPDATE rows where rows is the number of rows updated.
For a MOVE command, the tag is MOVE rows where rows is the number of rows the cursor’s
position has been changed by.
For a FETCH command, the tag is FETCH rows where rows is the number of rows that have
been retrieved from the cursor.
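A client that wants the affected-row count can parse it out of the tag text. The sketch below handles only
the tags listed above; the function name is ours.

#include <stdio.h>

/* Extract the row count from a CommandComplete tag such as "INSERT 0 5",
 * "UPDATE 3", "DELETE 0", "FETCH 10", or "MOVE 1".  Returns -1 for tags
 * that carry no row count, such as "CREATE TABLE" or "BEGIN". */
static long
rows_from_command_tag(const char *tag)
{
    unsigned long oid, rows;

    if (sscanf(tag, "INSERT %lu %lu", &oid, &rows) == 2)
        return (long) rows;
    if (sscanf(tag, "UPDATE %lu", &rows) == 1 ||
        sscanf(tag, "DELETE %lu", &rows) == 1 ||
        sscanf(tag, "FETCH %lu", &rows) == 1 ||
        sscanf(tag, "MOVE %lu", &rows) == 1)
        return (long) rows;
    return -1;
}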
CopyData (F & B)
Byte1(’d’)
Identifies the message as COPY data.
Int32
Length of message contents in bytes, including self.
Byten
Data that forms part of a COPY data stream. Messages sent from the backend will always corre-
spond to single data rows, but messages sent by frontends may divide the data stream arbitrarily.
CopyDone (F & B)
Byte1(’c’)
Identifies the message as a COPY-complete indicator.
Int32(4)
Length of message contents in bytes, including self.
CopyFail (F)
Byte1(’f’)
Identifies the message as a COPY-failure indicator.
Int32
Length of message contents in bytes, including self.
String
An error message to report as the cause of failure.
CopyInResponse (B)
Byte1(’G’)
Identifies the message as a Start Copy In response. The frontend must now send copy-in data (if
not prepared to do so, send a CopyFail message).
Int32
Length of message contents in bytes, including self.
Int8
0 indicates the overall COPY format is textual (rows separated by newlines, columns separated
by separator characters, etc). 1 indicates the overall copy format is binary (similar to DataRow
format). See COPY for more information.
Int16
The number of columns in the data to be copied (denoted N below).
Int16[N]
The format codes to be used for each column. Each must presently be zero (text) or one (binary).
All must be zero if the overall copy format is textual.
CopyOutResponse (B)
Byte1(’H’)
Identifies the message as a Start Copy Out response. This message will be followed by copy-out
data.
Int32
Length of message contents in bytes, including self.
Int8
0 indicates the overall COPY format is textual (rows separated by newlines, columns separated
by separator characters, etc). 1 indicates the overall copy format is binary (similar to DataRow
format). See COPY for more information.
Int16
The number of columns in the data to be copied (denoted N below).
Int16[N]
The format codes to be used for each column. Each must presently be zero (text) or one (binary).
All must be zero if the overall copy format is textual.
DataRow (B)
Byte1(’D’)
Identifies the message as a data row.
Int32
Length of message contents in bytes, including self.
Int16
The number of column values that follow (possibly zero).
Next, the following pair of fields appear for each column:
Int32
The length of the column value, in bytes (this count does not include itself). Can be zero. As a
special case, -1 indicates a NULL column value. No value bytes follow in the NULL case.
Byten
The value of the column, in the format indicated by the associated format code. n is the above
length.
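The column values can be walked with straightforward pointer arithmetic once the message body has been
read into memory. The sketch below (function name ours) merely prints each column's length; a real client
would copy or convert the value bytes according to the format codes from the preceding RowDescription.

#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <arpa/inet.h>

/* Walk a DataRow message body (the bytes after the 'D' type byte and the
 * Int32 length word): Int16 column count, then a length/value pair per
 * column.  A length of -1 marks a NULL with no value bytes following. */
static void
walk_data_row(const uint8_t *body)
{
    uint16_t ncols;
    size_t   off = 0;
    int      i;

    memcpy(&ncols, body + off, 2);
    ncols = ntohs(ncols);
    off += 2;

    for (i = 0; i < ncols; i++)
    {
        int32_t collen;

        memcpy(&collen, body + off, 4);
        collen = (int32_t) ntohl((uint32_t) collen);
        off += 4;

        if (collen == -1)
            printf("column %d: NULL\n", i);
        else
        {
            printf("column %d: %d bytes\n", i, collen);
            off += (size_t) collen;     /* skip the value bytes */
        }
    }
}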
Describe (F)
Byte1(’D’)
Identifies the message as a Describe command.
Int32
Length of message contents in bytes, including self.
Byte1
’S’ to describe a prepared statement; or ’P’ to describe a portal.
String
The name of the prepared statement or portal to describe (an empty string selects the unnamed
prepared statement or portal).
EmptyQueryResponse (B)
Byte1(’I’)
Identifies the message as a response to an empty query string. (This substitutes for Command-
Complete.)
Int32(4)
Length of message contents in bytes, including self.
ErrorResponse (B)
Byte1(’E’)
Identifies the message as an error.
Int32
Length of message contents in bytes, including self.
The message body consists of one or more identified fields, followed by a zero byte as a terminator.
Fields may appear in any order. For each field there is the following:
Byte1
A code identifying the field type; if zero, this is the message terminator and no string follows.
The presently defined field types are listed in Section 42.5. Since more field types may be added
in future, frontends should silently ignore fields of unrecognized type.
String
The field value.
Execute (F)
Byte1(’E’)
Identifies the message as an Execute command.
Int32
Length of message contents in bytes, including self.
String
The name of the portal to execute (an empty string selects the unnamed portal).
Int32
Maximum number of rows to return, if portal contains a query that returns rows (ignored other-
wise). Zero denotes “no limit”.
Flush (F)
Byte1(’H’)
Identifies the message as a Flush command.
Int32(4)
Length of message contents in bytes, including self.
FunctionCall (F)
Byte1(’F’)
Identifies the message as a function call.
Int32
Length of message contents in bytes, including self.
Int32
Specifies the object ID of the function to call.
Int16
The number of argument format codes that follow (denoted C below). This can be zero to
indicate that there are no arguments or that the arguments all use the default format (text);
or one, in which case the specified format code is applied to all arguments; or it can equal the
actual number of arguments.
Int16[C]
The argument format codes. Each must presently be zero (text) or one (binary).
Int16
Specifies the number of arguments being supplied to the function.
Next, the following pair of fields appear for each argument:
Int32
The length of the argument value, in bytes (this count does not include itself). Can be zero. As
a special case, -1 indicates a NULL argument value. No value bytes follow in the NULL case.
Byten
The value of the argument, in the format indicated by the associated format code. n is the above
length.
After the last argument, the following field appears:
Int16
The format code for the function result. Must presently be zero (text) or one (binary).
FunctionCallResponse (B)
Byte1(’V’)
Identifies the message as a function call result.
Int32
Length of message contents in bytes, including self.
Int32
The length of the function result value, in bytes (this count does not include itself). Can be zero.
As a special case, -1 indicates a NULL function result. No value bytes follow in the NULL case.
Byten
The value of the function result, in the format indicated by the associated format code. n is the
above length.
NoData (B)
Byte1(’n’)
Identifies the message as a no-data indicator.
Int32(4)
Length of message contents in bytes, including self.
NoticeResponse (B)
Byte1(’N’)
Identifies the message as a notice.
Int32
Length of message contents in bytes, including self.
The message body consists of one or more identified fields, followed by a zero byte as a terminator.
Fields may appear in any order. For each field there is the following:
Byte1
A code identifying the field type; if zero, this is the message terminator and no string follows.
The presently defined field types are listed in Section 42.5. Since more field types may be added
in future, frontends should silently ignore fields of unrecognized type.
String
The field value.
NotificationResponse (B)
Byte1(’A’)
Identifies the message as a notification response.
Int32
Length of message contents in bytes, including self.
Int32
The process ID of the notifying backend process.
String
The name of the condition that the notify has been raised on.
String
Additional information passed from the notifying process. (Currently, this feature is unimple-
mented so the field is always an empty string.)
ParameterDescription (B)
Byte1(’t’)
Identifies the message as a parameter description.
Int32
Length of message contents in bytes, including self.
Int16
The number of parameters used by the statement (may be zero).
Then, for each parameter, there is the following:
Int32
Specifies the object ID of the parameter data type.
ParameterStatus (B)
Byte1(’S’)
Identifies the message as a run-time parameter status report.
Int32
Length of message contents in bytes, including self.
String
The name of the run-time parameter being reported.
String
The current value of the parameter.
Parse (F)
Byte1(’P’)
Identifies the message as a Parse command.
Int32
Length of message contents in bytes, including self.
String
The name of the destination prepared statement (an empty string selects the unnamed prepared
statement).
String
The query string to be parsed.
Int16
The number of parameter data types specified (may be zero). Note that this is not an indication
of the number of parameters that might appear in the query string, only the number that the
frontend wants to prespecify types for.
Then, for each parameter, there is the following:
Int32
Specifies the object ID of the parameter data type. Placing a zero here is equivalent to leaving
the type unspecified.
ParseComplete (B)
Byte1(’1’)
Identifies the message as a Parse-complete indicator.
Int32(4)
Length of message contents in bytes, including self.
PasswordMessage (F)
Byte1(’p’)
Identifies the message as a password response.
Int32
Length of message contents in bytes, including self.
String
The password (encrypted, if requested).
PortalSuspended (B)
Byte1(’s’)
Identifies the message as a portal-suspended indicator. Note this only appears if an Execute
message’s row-count limit was reached.
Int32(4)
Length of message contents in bytes, including self.
Query (F)
Byte1(’Q’)
Identifies the message as a simple query.
Int32
Length of message contents in bytes, including self.
String
The query string itself.
ReadyForQuery (B)
Byte1(’Z’)
Identifies the message type. ReadyForQuery is sent whenever the backend is ready for a new
query cycle.
Int32(5)
Length of message contents in bytes, including self.
Byte1
Current backend transaction status indicator. Possible values are ’I’ if idle (not in a transaction
block); ’T’ if in a transaction block; or ’E’ if in a failed transaction block (queries will be rejected
until block is ended).
RowDescription (B)
Byte1(’T’)
Identifies the message as a row description.
Int32
Length of message contents in bytes, including self.
Int16
Specifies the number of fields in a row (may be zero).
Then, for each field, there is the following:
String
The field name.
Int32
If the field can be identified as a column of a specific table, the object ID of the table; otherwise
zero.
Int16
If the field can be identified as a column of a specific table, the attribute number of the column;
otherwise zero.
Int32
The object ID of the field’s data type.
Int16
The data type size (see pg_type.typlen). Note that negative values denote variable-width
types.
Int32
The type modifier (see pg_attribute.atttypmod). The meaning of the modifier is type-
specific.
Int16
The format code being used for the field. Currently will be zero (text) or one (binary). In a
RowDescription returned from the statement variant of Describe, the format code is not yet
known and will always be zero.
SSLRequest (F)
Int32(8)
Length of message contents in bytes, including self.
Int32(80877103)
The SSL request code. The value is chosen to contain 1234 in the most significant 16 bits, and
5679 in the least significant 16 bits. (To avoid confusion, this code must not be the same as any
protocol version number.)
StartupMessage (F)
Int32
Length of message contents in bytes, including self.
Int32(196608)
The protocol version number. The most significant 16 bits are the major version number (3 for
the protocol described here). The least significant 16 bits are the minor version number (0 for
the protocol described here).
The protocol version number is followed by one or more pairs of parameter name and value strings.
A zero byte is required as a terminator after the last name/value pair. Parameters can appear in any
order. user is required, others are optional. Each parameter is specified as:
String
The parameter name. Currently recognized names are:
user
The database user name to connect as. Required; there is no default.
database
The database to connect to. Defaults to the user name.
options
Command-line arguments for the backend. (This is deprecated in favor of setting individual
run-time parameters.)
In addition to the above, any run-time parameter that can be set at backend start time may
be listed. Such settings will be applied during backend start (after parsing the command-line
options if any). The values will act as session defaults.
String
The parameter value.
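The version value 196608 is simply 3 << 16 | 0. As a sketch, assembling a startup packet for a given user
and database looks like this, reusing the put_int32/put_string helpers sketched earlier (the function name
is ours; additional parameters would be added as further name/value pairs before the terminator byte).

/* Build a StartupMessage for protocol 3.0: Int32 length, Int32 version,
 * then name/value string pairs terminated by a single zero byte.
 * There is no type byte, so the length covers the whole packet. */
static size_t
build_startup(uint8_t *buf, const char *user, const char *database)
{
    size_t off = 4;                             /* leave room for the length word */

    off = put_int32(buf, off, (3 << 16) | 0);   /* 196608: protocol 3.0 */
    off = put_string(buf, off, "user");
    off = put_string(buf, off, user);
    off = put_string(buf, off, "database");
    off = put_string(buf, off, database);
    buf[off++] = 0;                             /* terminator after the last pair */

    put_int32(buf, 0, (int32_t) off);           /* length includes itself */
    return off;
}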
Sync (F)
Byte1(’S’)
Identifies the message as a Sync command.
Int32(4)
Length of message contents in bytes, including self.
Terminate (F)
Byte1(’X’)
Identifies the message as a termination.
Int32(4)
Length of message contents in bytes, including self.
S
Severity: the field contents are ERROR, FATAL, or PANIC (in an error message), or WARNING, NOTICE,
DEBUG, INFO, or LOG (in a notice message), or a localized translation of one of these. Always present.
C
Code: the SQLSTATE code for the error (see Appendix A). Not localizable. Always present.
M
Message: the primary human-readable error message. This should be accurate but terse (typically
one line). Always present.
D
Detail: an optional secondary error message carrying more detail about the problem. May run to
multiple lines.
H
Hint: an optional suggestion what to do about the problem. This is intended to differ from Detail in
that it offers advice (potentially inappropriate) rather than hard facts. May run to multiple lines.
P
Position: the field value is a decimal ASCII integer, indicating an error cursor position as an index
into the original query string. The first character has index 1, and positions are measured in characters
not bytes.
p
Internal position: this is defined the same as the P field, but it is used when the cursor position refers
to an internally generated command rather than the one submitted by the client. The q field will
always appear when this field appears.
q
Internal query: the text of a failed internally-generated command. This could be, for example, a SQL
query issued by a PL/pgSQL function.
W
Where: an indication of the context in which the error occurred. Presently this includes a call stack
traceback of active procedural language functions and internally-generated queries. The trace is one
entry per line, most recent first.
F
File: the file name of the source-code location where the error was reported.
L
Line: the line number of the source-code location where the error was reported.
R
Routine: the name of the source-code routine reporting the error.
The client is responsible for formatting displayed information to meet its needs; in particular it should
break long lines as needed. Newline characters appearing in the error message fields should be treated as
paragraph breaks, not line breaks.
The NotificationResponse (’A’) message has an additional string field, which is presently empty but may
someday carry additional data passed from the NOTIFY event sender.
The EmptyQueryResponse (’I’) message used to include an empty string parameter; this has been re-
moved.
Chapter 43. PostgreSQL Coding Conventions
43.1. Formatting
Source code formatting uses 4 column tab spacing, with tabs preserved (i.e. tabs are not expanded to
spaces). Each logical indentation level is one additional tab stop. Layout rules (brace positioning, etc)
follow BSD conventions.
While submitted patches do not absolutely have to follow these formatting rules, it’s a good idea to do so.
Your code will get run through pgindent, so there’s no point in making it look nice under some other set
of formatting conventions.
For Emacs, add the following (or something similar) to your ~/.emacs initialization file:
(defun pgsql-c-mode ()
  ;; sets up formatting for PostgreSQL C code
  (interactive)
  (c-mode)
  (setq-default tab-width 4)
  (c-set-style "bsd")              ; set c-basic-offset to 4, plus other stuff
  (c-set-offset 'case-label '+)    ; tweak case indent to match PG custom
  (setq indent-tabs-mode t))       ; make sure we keep tabs when indenting
For vi, your ~/.vimrc or equivalent file should contain the following:
set tabstop=4
or, equivalently, from within vi:
:set ts=4
The text browsing tools more and less can be invoked with matching tab stops as:
more -x4
less -x4
ereport(ERROR,
        (errcode(ERRCODE_DIVISION_BY_ZERO),
         errmsg("division by zero")));
This specifies error severity level ERROR (a run-of-the-mill error). The errcode call specifies the SQL-
STATE error code using a macro defined in src/include/utils/errcodes.h. The errmsg call pro-
vides the primary message text. Notice the extra set of parentheses surrounding the auxiliary function
calls — these are annoying but syntactically necessary.
Here is a more complex example:
ereport(ERROR,
        (errcode(ERRCODE_AMBIGUOUS_FUNCTION),
         errmsg("function %s is not unique",
                func_signature_string(funcname, nargs,
                                      actual_arg_types)),
         errhint("Unable to choose a best candidate function. "
                 "You may need to add explicit typecasts.")));
This illustrates the use of format codes to embed run-time values into a message text. Also, an optional
“hint” message is provided.
The available auxiliary routines for ereport are:
• errcode(sqlerrcode) specifies the SQLSTATE error identifier code for the condition. If this routine
is not called, the error identifier defaults to ERRCODE_INTERNAL_ERROR when the error severity level
is ERROR or higher, ERRCODE_WARNING when the error level is WARNING, otherwise (for NOTICE and
below) ERRCODE_SUCCESSFUL_COMPLETION. While these defaults are often convenient, always think
whether they are appropriate before omitting the errcode() call.
• errmsg(const char *msg, ...) specifies the primary error message text, and possibly run-time
values to insert into it. Insertions are specified by sprintf-style format codes. In addition to the
standard format codes accepted by sprintf, the format code %m can be used to insert the error message
returned by strerror for the current value of errno; %m does not require any corresponding entry in
the parameter list for errmsg. (The errno value used is the one that was current when the ereport
call was reached; changes of errno within the auxiliary reporting routines will not affect it. That would
not be true if you were to write strerror(errno) explicitly in errmsg's parameter list; accordingly,
do not do so.) Note that the message string will be run through gettext for possible localization
before format codes are processed.
• errmsg_internal(const char *msg, ...) is the same as errmsg, except that the message
string will not be included in the internationalization message dictionary. This should be used for
“can’t happen” cases that are probably not worth expending translation effort on.
• errdetail(const char *msg, ...) supplies an optional “detail” message; this is to be used when
there is additional information that seems inappropriate to put in the primary message. The message
string is processed in just the same way as for errmsg.
• errhint(const char *msg, ...) supplies an optional “hint” message; this is to be used when
offering suggestions about how to fix the problem, as opposed to factual details about what went wrong.
The message string is processed in just the same way as for errmsg.
• errcontext(const char *msg, ...) is not normally called directly from an ereport message
site; rather it is used in error_context_stack callback functions to provide information about the
context in which an error occurred, such as the current location in a PL function. The message string is
processed in just the same way as for errmsg. Unlike the other auxiliary functions, this can be called
more than once per ereport call; the successive strings thus supplied are concatenated with separating
newlines.
• errposition(int cursorpos) specifies the textual location of an error within a query string. Cur-
rently it is only useful for errors detected in the lexical and syntactic analysis phases of query process-
ing.
• errcode_for_file_access() is a convenience function that selects an appropriate SQLSTATE er-
ror identifier for a failure in a file-access-related system call. It uses the saved errno to determine
which error code to generate. Usually this should be used in combination with %m in the primary error
message text.
• errcode_for_socket_access() is a convenience function that selects an appropriate SQLSTATE
error identifier for a failure in a socket-related system call.
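As an illustration of how these pieces combine, a typical file-access failure report pairs
errcode_for_file_access() with %m in the primary message. This is a representative pattern rather than a
quotation from any particular source file; filename is assumed to be a variable in scope at the call site.

/* Report a file-open failure: the SQLSTATE is chosen from the saved errno,
 * and %m inserts the corresponding strerror text into the message. */
ereport(ERROR,
        (errcode_for_file_access(),
         errmsg("could not open file \"%s\": %m", filename)));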
There is an older function elog that is still heavily used. An elog call
elog(level, "format string", ...);
is exactly equivalent to
ereport(level, (errmsg_internal("format string", ...)));
Notice that the SQLSTATE errcode is always defaulted, and the message string is not included in the
internationalization message dictionary. Therefore, elog should be used only for internal errors and low-
level debug logging. Any message that is likely to be of interest to ordinary users should go through
ereport. Nonetheless, there are enough internal “can’t happen” error checks in the system that elog is
still widely used; it is preferred for those messages for its notational simplicity.
Advice about writing good error messages can be found in Section 43.3.
Rationale: keeping the primary message short helps keep it to the point, and lets clients lay out screen
space on the assumption that one line is enough for error messages. Detail and hint messages may be
relegated to a verbose mode, or perhaps a pop-up error-details window. Also, details and hints would
normally be suppressed from the server log to save space. Reference to implementation details is best
avoided since users don’t know the details anyway.
43.3.2. Formatting
Don’t put any specific assumptions about formatting into the message texts. Expect clients and the server
log to wrap lines to fit their own needs. In long messages, newline characters (\n) may be used to indicate
suggested paragraph breaks. Don’t end a message with a newline. Don’t use tabs or other formatting
characters. (In error context displays, newlines are automatically added to separate levels of context such
as function calls.)
Rationale: Messages are not necessarily displayed on terminal-type displays. In GUI displays or browsers
these formatting instructions are at best ignored.
Rationale: The choice of double quotes over single quotes is somewhat arbitrary, but tends to be the
preferred use. Some have suggested choosing the kind of quotes depending on the type of object according
to SQL conventions (namely, strings single quoted, identifiers double quoted). But this is a language-
internal technical issue that many users aren’t even familiar with, it won’t scale to other kinds of quoted
terms, it doesn’t translate to other languages, and it’s pretty pointless, too.
Rationale: Objects can have names that create ambiguity when embedded in a message. Be consistent
about denoting where a plugged-in name starts and ends. But don’t clutter messages with unnecessary or
duplicate quote marks.
There is a nontrivial semantic difference between messages of the form “could not open file %s” and
“cannot open file %s”.
The first one means that the attempt to open the file failed. The message should give a reason, such as
“disk full” or “file doesn’t exist”. The past tense is appropriate because next time the disk might not be
full anymore or the file in question may exist.
The second form indicates that the functionality of opening the named file does not exist at all in the
program, or that it’s conceptually impossible. The present tense is appropriate because the condition will
persist indefinitely.
Rationale: Granted, the average user will not be able to draw great conclusions merely from the tense of
the message, but since the language provides us with a grammar we should use it correctly.
43.3.10. Brackets
Square brackets are only to be used (1) in command synopses to denote optional arguments, or (2) to
denote an array subscript.
Rationale: Anything else does not correspond to widely-known customary usage and will confuse people.
Rationale: It would be difficult to account for all possible error codes to paste this into a single smooth
sentence, so some sort of punctuation is needed. Putting the embedded text in parentheses has also been
suggested, but it’s unnatural if the embedded text is likely to be the most important part of the message,
as is often the case.
Avoid mentioning called function names, either; instead say what the code was trying to do:
If it really seems necessary, mention the system call in the detail message. (In some cases, providing the
actual values passed to the system call might be appropriate information for the detail message.)
Rationale: Users don’t know what all those functions do.
Find vs. Exists. If the program uses a nontrivial algorithm to locate a resource (e.g., a path search) and
that algorithm fails, it is fair to say that the program couldn’t “find” the resource. If, on the other hand,
the expected location of the resource is known but the program cannot access it there then say that the
resource doesn’t “exist”. Using “find” in this case sounds weak and confuses the issue.
Spell out words in full. For instance, avoid:
• spec
• stats
• parens
• auth
• xact
43.3.16. Localization
Keep in mind that error message texts need to be translated into other languages. Follow the guidelines in
Section 44.2.2 to avoid making life difficult for translators.
Chapter 44. Native Language Support
Peter Eisentraut
44.1.1. Requirements
We won’t judge your language skills — this section is about software tools. Theoretically, you only need
a text editor. But this is only in the unlikely event that you do not want to try out your translated messages.
When you configure your source tree, be sure to use the --enable-nls option. This will also check for
the libintl library and the msgfmt program, which all end users will need anyway. To try out your work,
follow the applicable portions of the installation instructions.
If you want to start a new translation effort or want to do a message catalog merge (described later), you
will need the programs xgettext and msgmerge, respectively, in a GNU-compatible implementation.
Later, we will try to arrange it so that if you use a packaged source distribution, you won’t need xgettext.
(From CVS, you will still need it.) GNU Gettext 0.10.36 or later is currently recommended.
Your local gettext implementation should come with its own documentation. Some of that is probably
duplicated in what follows, but for additional details you should look there.
44.1.2. Concepts
The pairs of original (English) messages and their (possibly) translated equivalents are kept in message
catalogs, one for each program (although related programs can share a message catalog) and for each
target language. There are two file formats for message catalogs: The first is the “PO” file (for Portable
Object), which is a plain text file with special syntax that translators edit. The second is the “MO” file
(for Machine Object), which is a binary file generated from the respective PO file and is used while the
internationalized program is run. Translators do not deal with MO files; in fact hardly anyone does.
The extension of the message catalog file is, unsurprisingly, either .po or .mo. The base name is either the
name of the program it accompanies, or the language the file is for, depending on the situation. This is a
bit confusing. Examples are psql.po (PO file for psql) or fr.mo (MO file in French).
The file format of the PO files is illustrated here:
# comment
msgid "original string"
msgstr "translated string"

msgid "more original"
msgstr "another translated"
"string can be broken up like this"

...
The msgid’s are extracted from the program source. (They need not be, but this is the most common way.)
The msgstr lines are initially empty and are filled in with useful strings by the translator. The strings can
contain C-style escape characters and can be continued across lines as illustrated. (The next line must start
at the beginning of the line.)
The # character introduces a comment. If whitespace immediately follows the # character, then this is
a comment maintained by the translator. There may also be automatic comments, which have a non-
whitespace character immediately following the #. These are maintained by the various tools that operate
on the PO files and are intended to aid the translator.
#. automatic comment
#: filename.c:1023
#, flags, flags
The #. style comments are extracted from the source file where the message is used. Possibly the pro-
grammer has inserted information for the translator, such as about expected alignment. The #: comment
indicates the exact location(s) where the message is used in the source. The translator need not look at
the program source, but he can if there is doubt about the correct translation. The #, comments contain
flags that describe the message in some way. There are currently two flags: fuzzy is set if the message
has possibly been outdated because of changes in the program source. The translator can then verify this
and possibly remove the fuzzy flag. Note that fuzzy messages are not made available to the end user. The
other flag is c-format, which indicates that the message is a printf-style format template. This means
that the translation should also be a format string with the same number and type of placeholders. There
are tools that can verify this, which key off the c-format flag.
gmake init-po
This will create a file progname.pot. (.pot to distinguish it from PO files that are “in production”.
The T stands for “template”.) Copy this file to language.po and edit it. To make it known that the new
language is available, also edit the file nls.mk and add the language (or language and country) code to
the line that looks like:
AVAIL_LANGUAGES := de fr
gmake update-po
which will create a new blank message catalog file (the pot file you started with) and will merge it with
the existing PO files. If the merge algorithm is not sure about a particular message it marks it “fuzzy”
as explained above. For the case where something went really wrong, the old PO file is saved with a
.po.old extension.
• Make sure that if the original ends with a newline, the translation does, too. Similarly for tabs, etc.
• If the original is a printf format string, the translation also needs to be. The translation also needs to
have the same format specifiers in the same order. Sometimes the natural rules of the language make
this impossible or at least awkward. In that case you can modify the format specifiers like this:
msgstr "Die Datei %2$s hat %1$u Zeichen."
Then the first placeholder will actually use the second argument from the list. The digits$ part (2$ in
this example) must follow the % immediately, before any other format manipulators. (This feature really exists in the
printf family of functions. You may not have heard of it before because there is little use for it outside
of message internationalization.)
• If the original string contains a linguistic mistake, report that (or fix it yourself in the program source)
and translate normally. The corrected string can be merged in when the program sources have been
updated. If the original string contains a factual mistake, report that (or fix it yourself) and do not
translate it. Instead, you may mark the string with a comment in the PO file.
• Maintain the style and tone of the original string. Specifically, messages that are not sentences (cannot
open file %s) should probably not start with a capital letter (if your language distinguishes letter
case) or end with a period (if your language uses punctuation marks). It may help to read Section 43.3.
• If you don’t know what a message means, or if it is ambiguous, ask on the developers’ mailing list.
Chances are that English speaking end users might also not understand it or find it ambiguous, so it’s
best to improve the message.
44.2.1. Mechanics
This section describes how to implement native language support in a program or library that is part of
the PostgreSQL distribution. Currently, it only applies to C programs.
...
#ifdef ENABLE_NLS
setlocale(LC_ALL, "");
bindtextdomain("progname", LOCALEDIR);
textdomain("progname");
#endif
For example, a call like
fprintf(stderr, "panic level %d\n", lvl);
would be changed to
fprintf(stderr, gettext("panic level %d\n"), lvl);
Another solution is feasible if the program does much of its communication through one or a few
functions, such as ereport() in the backend. Then you make this function call gettext internally
on all input strings.
3. Add a file nls.mk in the directory with the program sources. This file will be read as a makefile. The
following variable assignments need to be made here:
CATALOG_NAME
The name of the message catalog (this is usually the same as the name of the program it is for).
GETTEXT_FILES
List of files that contain translatable strings, i.e., those marked with gettext or an alternative
solution. Eventually, this will include nearly all source files of the program. If this list gets too
long you can make the first “file” be a + and the second word be a file that contains one file name
per line.
GETTEXT_TRIGGERS
The tools that generate message catalogs for the translators to work on need to know what
function calls contain translatable strings. By default, only gettext() calls are known. If you
used _ or other identifiers you need to list them here. If the translatable string is not the first
argument, the item needs to be of the form func:2 (for the second argument).
The build system will automatically take care of building and installing the message catalogs.
The word order within the sentence may be different in other languages. Also, even if you remember to
call gettext() on each fragment, the fragments may not translate well separately. It’s better to duplicate
a little code so that each message to be translated is a coherent whole. Only numbers, file names, and
such-like run-time variables should be inserted at runtime into a message text.
• For similar reasons, this won’t work:
printf("copied %d file%s", n, n!=1 ? "s" : "");
because it assumes how the plural is formed. If you figured you could solve it like this
if (n == 1)
    printf("copied 1 file");
else
    printf("copied %d files", n);
then be disappointed. Some languages have more than two forms, with some peculiar rules. We may
have a solution for this in the future, but for now the matter is best avoided altogether. You could write:
• If you want to communicate something to the translator, such as about how a message is intended to line
up with other output, precede the occurrence of the string with a comment that starts with translator,
e.g.,
/* translator: This message is not what it seems to be. */
These comments are copied to the message catalog files so that the translators can see them.
Chapter 45. Writing A Procedural Language Handler
All calls to functions that are written in a language other than the current “version 1” interface for compiled
languages (this includes functions in user-defined procedural languages, functions written in SQL, and
functions using the version 0 compiled language interface), go through a call handler function for the
specific language. It is the responsibility of the call handler to execute the function in a meaningful way,
such as by interpreting the supplied source text. This chapter outlines how a new procedural language’s
call handler can be written.
The call handler for a procedural language is a “normal” function that must be written in a compiled
language such as C, using the version-1 interface, and registered with PostgreSQL as taking no arguments
and returning the type language_handler. This special pseudotype identifies the function as a call
handler and prevents it from being called directly in SQL commands.
The call handler is called in the same way as any other function: It receives a pointer to a
FunctionCallInfoData struct containing argument values and information about the called
function, and it is expected to return a Datum result (and possibly set the isnull field of the
FunctionCallInfoData structure, if it wishes to return an SQL null result). The difference
between a call handler and an ordinary callee function is that the flinfo->fn_oid field of the
FunctionCallInfoData structure will contain the OID of the actual function to be called, not of the
call handler itself. The call handler must use this field to determine which function to execute. Also, the
passed argument list has been set up according to the declaration of the target function, not of the call
handler.
It’s up to the call handler to fetch the entry of the function from the system table pg_proc and to analyze
the argument and return types of the called function. The AS clause from the CREATE FUNCTION com-
mand for the function will be found in the prosrc column of the pg_proc row. This is commonly source
text in the procedural language, but in theory it could be something else, such as a path name to a file, or
anything else that tells the call handler what to do in detail.
Often, the same function is called many times per SQL statement. A call handler can avoid repeated
lookups of information about the called function by using the flinfo->fn_extra field. This will ini-
tially be NULL, but can be set by the call handler to point at information about the called function. On
subsequent calls, if flinfo->fn_extra is already non-NULL then it can be used and the information
lookup step skipped. The call handler must make sure that flinfo->fn_extra is made to point at
memory that will live at least until the end of the current query, since an FmgrInfo data structure could
be kept that long. One way to do this is to allocate the extra data in the memory context specified by
flinfo->fn_mcxt; such data will normally have the same lifespan as the FmgrInfo itself. But the
handler could also choose to use a longer-lived memory context so that it can cache function definition
information across queries.
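A minimal sketch of this caching idiom follows. The structure and its fields (plsample_cache, ncalls) are
invented for illustration; a real handler would cache parsed function-body and argument-type information
instead of a simple call counter.

#include "postgres.h"
#include "fmgr.h"

typedef struct plsample_cache
{
    Oid     fn_oid;             /* OID of the target function */
    int     ncalls;             /* calls seen through this FmgrInfo */
} plsample_cache;

static plsample_cache *
plsample_get_cache(FunctionCallInfo fcinfo)
{
    plsample_cache *cache = (plsample_cache *) fcinfo->flinfo->fn_extra;

    if (cache == NULL)
    {
        /*
         * First call through this FmgrInfo: allocate the cache in fn_mcxt
         * so that it lives at least as long as the FmgrInfo itself.
         */
        cache = (plsample_cache *)
            MemoryContextAlloc(fcinfo->flinfo->fn_mcxt,
                               sizeof(plsample_cache));
        cache->fn_oid = fcinfo->flinfo->fn_oid;
        cache->ncalls = 0;
        fcinfo->flinfo->fn_extra = (void *) cache;
    }

    cache->ncalls++;
    return cache;
}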
When a procedural-language function is invoked as a trigger, no arguments are passed in the usual way,
but the FunctionCallInfoData’s context field points at a TriggerData structure, rather than being
NULL as it is in a plain function call. A language handler should provide mechanisms for procedural-
language functions to get at the trigger information.
#include "postgres.h"
#include "executor/spi.h"
#include "commands/trigger.h"
#include "fmgr.h"
#include "access/heapam.h"
#include "utils/syscache.h"
#include "catalog/pg_proc.h"
#include "catalog/pg_type.h"
PG_FUNCTION_INFO_V1(plsample_call_handler);
Datum
plsample_call_handler(PG_FUNCTION_ARGS)
{
Datum retval;
if (CALLED_AS_TRIGGER(fcinfo))
{
/*
* Called as a trigger procedure
*/
TriggerData *trigdata = (TriggerData *) fcinfo->context;
retval = ...
}
else
{
/*
* Called as a function
*/
retval = ...
}
return retval;
}
Only a few thousand lines of code have to be added instead of the dots to complete the call handler.
After having compiled the handler function into a loadable module (see Section 31.9.6), the following
commands then register the sample procedural language:
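A sketch of what those commands look like ('filename' is a placeholder for the path of the compiled shared library):

    CREATE FUNCTION plsample_call_handler() RETURNS language_handler
        AS 'filename'
        LANGUAGE C;
    CREATE LANGUAGE plsample
        HANDLER plsample_call_handler;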
The procedural languages included in the standard distribution are good references when trying to write
your own call handler. Look into the src/pl subdirectory of the source tree.
Chapter 46. Genetic Query Optimizer
Author: Martin Utesch, 1997-10-02
The coordinates of an individual in the search space are represented by chromosomes, in essence a set of
character strings. A gene is a subsection of a chromosome which encodes the value of a single parameter
being optimized. Typical encodings for a gene could be binary or integer.
Through simulation of the evolutionary operations recombination, mutation, and selection new genera-
tions of search points are found that show a higher average fitness than their ancestors.
According to the comp.ai.genetic FAQ it cannot be stressed too strongly that a GA is not a pure random
search for a solution to a problem. A GA uses stochastic processes, but the result is distinctly non-random
(better than random).
Algorithm GA

    INITIALIZE t := 0
    INITIALIZE P(t)
    evaluate FITNESS of P(t)
    while not STOPPING CRITERION do
        P'(t)  := RECOMBINATION{P(t)}
        P''(t) := MUTATION{P'(t)}
        P(t+1) := SELECTION{P''(t) + P(t)}
        evaluate FITNESS of P''(t)
        t := t + 1
In the GEQO module, possible query plans are encoded as integer strings giving the order in which the relations of the query are joined. For example, the join tree

     /\
    /\ 2
   /\ 3
  4  1
is encoded by the integer string ’4-1-3-2’, which means, first join relation ’4’ and ’1’, then ’3’, and then
’2’, where 1, 2, 3, 4 are relation IDs within the PostgreSQL optimizer.
Parts of the GEQO module are adapted from D. Whitley’s Genitor algorithm.
Specific characteristics of the GEQO implementation in PostgreSQL are:
• Usage of a steady state GA (replacement of the least fit individuals in a population, not whole-generational replacement) allows fast convergence towards improved query plans. This is essential for handling queries in reasonable time;
• Usage of edge recombination crossover which is especially suited to keep edge losses low for the
solution of the TSP by means of a GA;
• Mutation as genetic operator is deprecated so that no repair mechanisms are needed to generate legal
TSP tours.
The GEQO module allows the PostgreSQL query optimizer to support large join queries effectively
through non-exhaustive search.
At a more basic level, it is not clear that solving query optimization with a GA algorithm designed for
TSP is appropriate. In the TSP case, the cost associated with any substring (partial tour) is independent
of the rest of the tour, but this is certainly not true for query optimization. Thus it is questionable whether
edge recombination crossover is the most effective mutation procedure.
1. http://surf.de.uu.net/encore/www/
2. news://comp.ai.genetic
3. http://www.red3d.com/cwr/evolve.html
Chapter 47. Index Cost Estimation Functions
Author: Written by Tom Lane (<[email protected]>) on 2000-01-24
Note: This must eventually become part of a much larger chapter about writing new index access
methods.
Every index access method must provide a cost estimation function for use by the planner/optimizer. The
procedure OID of this function is given in the amcostestimate field of the access method’s pg_am
entry.
Note: Prior to PostgreSQL 7.0, a different scheme was used for registering index-specific cost esti-
mation functions.
The amcostestimate function is given a list of WHERE clauses that have been determined to be usable with
the index. It must return estimates of the cost of accessing the index and the selectivity of the WHERE
clauses (that is, the fraction of main-table rows that will be retrieved during the index scan). For simple
cases, nearly all the work of the cost estimator can be done by calling standard routines in the optimizer;
the point of having an amcostestimate function is to allow index access methods to provide index-type-
specific knowledge, in case it is possible to improve on the standard estimates.
Each amcostestimate function must have the signature:
void
amcostestimate (Query *root,
                RelOptInfo *rel,
                IndexOptInfo *index,
                List *indexQuals,
                Cost *indexStartupCost,
                Cost *indexTotalCost,
                Selectivity *indexSelectivity,
                double *indexCorrelation);
root
The query being processed.
rel
The relation the index is on.
index
The index itself.
indexQuals
List of index qual clauses (implicitly ANDed); a NIL list indicates no qualifiers are available.
*indexStartupCost
Set to cost of index start-up processing
*indexTotalCost
Set to total cost of index processing
*indexSelectivity
Set to index selectivity
*indexCorrelation
Set to correlation coefficient between index scan order and underlying table’s order
Note that cost estimate functions must be written in C, not in SQL or any available procedural language,
because they must access internal data structures of the planner/optimizer.
The index access costs should be computed in the units used by
src/backend/optimizer/path/costsize.c: a sequential disk block fetch has
cost 1.0, a nonsequential fetch has cost random_page_cost, and the cost of processing one index row
should usually be taken as cpu_index_tuple_cost (which is a user-adjustable optimizer parameter). In
addition, an appropriate multiple of cpu_operator_cost should be charged for any comparison operators
invoked during index processing (especially evaluation of the indexQuals themselves).
The access costs should include all disk and CPU costs associated with scanning the index itself, but NOT
the costs of retrieving or processing the main-table rows that are identified by the index.
The “start-up cost” is the part of the total scan cost that must be expended before we can begin to fetch
the first row. For most indexes this can be taken as zero, but an index type with a high start-up cost might
want to set it nonzero.
The indexSelectivity should be set to the estimated fraction of the main table rows that will be retrieved
during the index scan. In the case of a lossy index, this will typically be higher than the fraction of rows
that actually pass the given qual conditions.
The indexCorrelation should be set to the correlation (ranging between -1.0 and 1.0) between the index
order and the table order. This is used to adjust the estimate for the cost of fetching rows from the main
table.
Cost Estimation
A typical cost estimator proceeds as follows:
1. Estimate the fraction of main-table rows that will be visited based on the given qual conditions, and set *indexSelectivity to that value. In the absence of any index-type-specific knowledge, the standard optimizer function clauselist_selectivity() can be used for this.
2. Estimate the number of index rows that will be visited during the scan. For many index types this is
the same as indexSelectivity times the number of rows in the index, but it might be more. (Note that
the index’s size in pages and rows is available from the IndexOptInfo struct.)
3. Estimate the number of index pages that will be retrieved during the scan. This might be just indexSelectivity times the index's size in pages.
4. Compute the index access cost. A generic estimator might do this:

   /*
    * Our generic assumption is that the index pages will be read
    * sequentially, so they have cost 1.0 each, not random_page_cost.
    * Also, we charge for evaluation of the indexquals at each index row.
    * All the costs are assumed to be paid incrementally during the scan.
    */
   cost_qual_eval(&index_qual_cost, indexQuals);
   *indexStartupCost = index_qual_cost.startup;
   *indexTotalCost = numIndexPages +
       (cpu_index_tuple_cost + index_qual_cost.per_tuple) * numIndexTuples;
5. Estimate the index correlation. For a simple ordered index on a single field, this can be retrieved from
pg_statistic. If the correlation is not known, the conservative estimate is zero (no correlation).
Chapter 48. GiST Indexes
48.1. Introduction
GiST stands for Generalized Search Tree. It is a balanced, tree-structured access method that acts as a base template in which to implement arbitrary indexing schemes. B+-trees, R-trees and many other indexing schemes can be implemented in GiST.
One advantage of GiST is that it allows the development of custom data types with the appropriate access
methods, by an expert in the domain of the data type, rather than a database expert.
Some of the information here is derived from the University of California at Berkeley’s GiST Indexing
Project web site1 and Marcel Kornacker’s thesis, Access Methods for Next-Generation Database Systems2.
The GiST implementation in PostgreSQL is primarily maintained by Teodor Sigaev and Oleg Bartunov,
and there is more information on their website: http://www.sai.msu.su/~megera/postgres/gist/.
48.2. Extensibility
Traditionally, implementing a new index access method meant a lot of difficult work. It was necessary to
understand the inner workings of the database, such as the lock manager and Write-Ahead Log. The GiST
interface has a high level of abstraction, requiring the access method implementor to only implement the
semantics of the data type being accessed. The GiST layer itself takes care of concurrency, logging and
searching the tree structure.
This extensibility should not be confused with the extensibility of the other standard search trees in terms
of the data they can handle. For example, PostgreSQL supports extensible B+-trees and R-trees. That
means that you can use PostgreSQL to build a B+-tree or R-tree over any data type you want. But B+-trees
only support range predicates (<, =, >), and R-trees only support n-D range queries (contains, contained,
equals).
So if you index, say, an image collection with a PostgreSQL B+-tree, you can only issue queries such as "is imagex equal to imagey", "is imagex less than imagey" and "is imagex greater than imagey". Depending on how you define "equals", "less than" and "greater than" in this context, this could be useful. However,
by using a GiST based index, you could create ways to ask domain-specific questions, perhaps “find all
images of horses” or “find all over-exposed images”.
All it takes to get a GiST access method up and running is to implement seven user-defined methods,
which define the behavior of keys in the tree. Of course these methods have to be pretty fancy to support
fancy queries, but for all the standard queries (B+-trees, R-trees, etc.) they’re relatively straightforward.
In short, GiST combines extensibility along with generality, code reuse, and a clean interface.
1. http://gist.cs.berkeley.edu/
2. http://citeseer.nj.nec.com/448594.html
48.3. Implementation
There are seven methods that an index operator class for GiST must provide:
consistent
Given a predicate p on a tree page, and a user query, q, this method will return false if it is certain
that both p and q cannot be true for a given data item.
union
This method consolidates information in the tree. Given a set of entries, this function generates a new
predicate that is true for all the entries.
compress
Converts the data item into a format suitable for physical storage in an index page.
decompress
The reverse of the compress method. Converts the index representation of the data item into a format
that can be manipulated by the database.
penalty
Returns a value indicating the “cost” of inserting the new entry into a particular branch of the tree. Items will be inserted down the path of least penalty in the tree.
picksplit
When a page split is necessary, this function decides which entries on the page are to stay on the old
page, and which are to move to the new page.
same
Returns true if two entries are identical, false otherwise.
48.4. Limitations
The current implementation of GiST within PostgreSQL has some major limitations: GiST access is not
concurrent; the GiST interface doesn’t allow the development of certain data types, such as digital trees
(see papers by Aoki et al); and there is not yet any support for write-ahead logging of updates in GiST
indexes.
Solutions to the concurrency problems appear in Marcel Kornacker’s thesis; however these ideas have not
yet been put into practice in the PostgreSQL implementation.
The lack of write-ahead logging is just a small matter of programming, but since it isn’t done yet, a crash
could render a GiST index inconsistent, forcing a REINDEX.
48.5. Examples
To see example implementations of index methods implemented using GiST, examine the following contrib modules:
btree_gist
B-Tree
cube
Indexing for multi-dimensional cubes
intarray
RD-Tree for one-dimensional array of int4 values
ltree
Indexing for tree-like structures
rtree_gist
R-Tree
seg
Storage and indexed access for “float ranges”
tsearch and tsearch2
Full text indexing
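As an illustrative sketch (assuming the ltree module's SQL script has been loaded into the database; the table and column names are made up), an index using one of these GiST operator classes is created with the ordinary CREATE INDEX syntax:

    CREATE TABLE doc_tree (path ltree);

    -- The GiST operator class installed by the module is used by default.
    CREATE INDEX doc_tree_path_idx ON doc_tree USING gist (path);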
Chapter 49. Database Physical Storage
This chapter provides an overview of the physical storage format used by PostgreSQL databases.
The cluster's data directory (PGDATA) contains the following items:

Item              Description
PG_VERSION        A file containing the major version number of PostgreSQL
base              Subdirectory containing per-database subdirectories
global            Subdirectory containing cluster-wide tables, such as pg_database
pg_clog           Subdirectory containing transaction commit status data
pg_subtrans       Subdirectory containing subtransaction status data
pg_tblspc         Subdirectory containing symbolic links to tablespaces
pg_xlog           Subdirectory containing WAL (Write Ahead Log) files
postmaster.opts   A file recording the command-line options the postmaster was last started with
postmaster.pid    A lock file recording the current postmaster PID and shared memory segment ID (not present after postmaster shutdown)
For each database in the cluster there is a subdirectory within PGDATA/base, named after the database’s
OID in pg_database. This subdirectory is the default location for the database’s files; in particular, its
system catalogs are stored there.
Each table and index is stored in a separate file, named after the table or index's filenode number, which can be found in pg_class.relfilenode.
Caution
Note that while a table’s filenode often matches its OID, this is not necessarily
the case; some operations, like TRUNCATE, REINDEX, CLUSTER and some forms of
ALTER TABLE, can change the filenode while preserving the OID. Avoid assuming
that filenode and table OID are the same.
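For example, the mapping can be checked with a simple catalog query (the table name here is hypothetical):

    -- oid and relfilenode often match, but can diverge after TRUNCATE,
    -- REINDEX, CLUSTER, or certain forms of ALTER TABLE.
    SELECT oid, relfilenode FROM pg_class WHERE relname = 'mytable';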
When a table or index exceeds 1Gb, it is divided into gigabyte-sized segments. The first segment’s file
name is the same as the filenode; subsequent segments are named filenode.1, filenode.2, etc. This arrange-
ment avoids problems on platforms that have file size limitations. The contents of tables and indexes are
discussed further in Section 49.3.
A table that has columns with potentially large entries will have an associated TOAST table, which
is used for out-of-line storage of field values that are too large to keep in the table rows proper.
pg_class.reltoastrelid links from a table to its TOAST table, if any. See Section 49.2 for more
information.
Tablespaces make the scenario more complicated. Each user-defined tablespace has a symbolic link
inside the PGDATA/pg_tblspc directory, which points to the physical tablespace directory (as specified
in its CREATE TABLESPACE command). The symbolic link is named after the tablespace’s OID.
Inside the physical tablespace directory there is a subdirectory for each database that has elements in
the tablespace, named after the database’s OID. Tables within that directory follow the filenode
naming scheme. The pg_default tablespace is not accessed through pg_tblspc, but corresponds
to PGDATA/base. Similarly, the pg_global tablespace is not accessed through pg_tblspc, but
corresponds to PGDATA/global.
49.2. TOAST
This section provides an overview of TOAST (The Oversized-Attribute Storage Technique).
Since PostgreSQL uses a fixed page size (commonly 8Kb), and does not allow tuples to span multiple
pages, it’s not possible to store very large field values directly. Before PostgreSQL 7.1 there was a hard
limit of just under one page on the total amount of data that could be put into a table row. In release 7.1
and later, this limit is overcome by allowing large field values to be compressed and/or broken up into
multiple physical rows. This happens transparently to the user, with only small impact on most of the
backend code. The technique is affectionately known as TOAST (or “the best thing since sliced bread”).
Only certain data types support TOAST — there is no need to impose the overhead on data types that
cannot produce large field values. To support TOAST, a data type must have a variable-length (varlena)
representation, in which the first 32-bit word of any stored value contains the total length of the value in
bytes (including itself). TOAST does not constrain the rest of the representation. All the C-level functions
supporting a TOAST-able data type must be careful to handle TOASTed input values. (This is normally
done by invoking PG_DETOAST_DATUM before doing anything with an input value; but in some cases more
efficient approaches are possible.)
TOAST usurps the high-order two bits of the varlena length word, thereby limiting the logical size of any value of a TOAST-able data type to 1Gb (2^30 - 1 bytes). When both bits are zero, the value is an ordinary un-TOASTed value of the data type. One of these bits, if set, indicates that the value has been compressed
and must be decompressed before use. The other bit, if set, indicates that the value has been stored out-of-
line. In this case the remainder of the value is actually just a pointer, and the correct data has to be found
elsewhere. When both bits are set, the out-of-line data has been compressed too. In each case the length
in the low-order bits of the varlena word indicates the actual size of the datum, not the size of the logical
value that would be extracted by decompression or fetching of the out-of-line data.
If any of the columns of a table are TOAST-able, the table will have an associated TOAST table, whose
OID is stored in the table’s pg_class.reltoastrelid entry. Out-of-line TOASTed values are kept in
the TOAST table, as described in more detail below.
The compression technique used is a fairly simple and very fast member of the LZ family of compression
techniques. See src/backend/utils/adt/pg_lzcompress.c for the details.
Out-of-line values are divided (after compression if used) into chunks of at most
TOAST_MAX_CHUNK_SIZE bytes (this value is a little less than BLCKSZ/4, or about 2000 bytes by
default). Each chunk is stored as a separate row in the TOAST table for the owning table. Every TOAST
table has the columns chunk_id (an OID identifying the particular TOASTed value), chunk_seq
(a sequence number for the chunk within its value), and chunk_data (the actual data of the chunk).
A unique index on chunk_id and chunk_seq provides fast retrieval of the values. A pointer datum
representing an out-of-line TOASTed value therefore needs to store the OID of the TOAST table in
which to look and the OID of the specific value (its chunk_id). For convenience, pointer datums also
store the logical datum size (original uncompressed data length) and actual stored size (different if
compression was applied). Allowing for the varlena header word, the total size of a TOAST pointer
datum is therefore 20 bytes regardless of the actual size of the represented value.
The TOAST code is triggered only when a row value to be stored in a table is wider than BLCKSZ/4 bytes
(normally 2Kb). The TOAST code will compress and/or move field values out-of-line until the row value
is shorter than BLCKSZ/4 bytes or no more gains can be had. During an UPDATE operation, values of
unchanged fields are normally preserved as-is; so an UPDATE of a row with out-of-line values incurs no
TOAST costs if none of the out-of-line values change.
The TOAST code recognizes four different strategies for storing TOAST-able columns:
• PLAIN prevents either compression or out-of-line storage. This is the only possible strategy for columns
of non-TOAST-able data types.
• EXTENDED allows both compression and out-of-line storage. This is the default for most TOAST-able
data types. Compression will be attempted first, then out-of-line storage if the row is still too big.
• EXTERNAL allows out-of-line storage but not compression. Use of EXTERNAL will make substring op-
erations on wide text and bytea columns faster (at the penalty of increased storage space) because
these operations are optimized to fetch only the required parts of the out-of-line value when it is not
compressed.
• MAIN allows compression but not out-of-line storage. (Actually, out-of-line storage will still be per-
formed for such columns, but only as a last resort when there is no other way to make the row small
enough.)
Each TOAST-able data type specifies a default strategy for columns of that data type, but the strategy for
a given table column can be altered with ALTER TABLE SET STORAGE.
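For example (table and column names are hypothetical):

    -- Keep a wide bytea column out of line but uncompressed, which favors
    -- substring-style access at the cost of extra storage.
    ALTER TABLE documents ALTER COLUMN body SET STORAGE EXTERNAL;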
This scheme has a number of advantages compared to a more straightforward approach such as allowing
row values to span pages. Assuming that queries are usually qualified by comparisons against relatively
small key values, most of the work of the executor will be done using the main row entry. The big values
of TOASTed attributes will only be pulled out (if selected at all) at the time the result set is sent to the
client. Thus, the main table is much smaller and more of its rows fit in the shared buffer cache than would
be the case without any out-of-line storage. Sort sets shrink also, and sorts will more often be done entirely
in memory. A little test showed that a table containing typical HTML pages and their URLs was stored
in about half of the raw data size including the TOAST table, and that the main table contained only
about 10% of the entire data (the URLs and some small HTML pages). There was no runtime difference
compared to an un-TOASTed comparison table, in which all the HTML pages were cut down to 7Kb to
fit.
49.3. Database Page Layout
This section describes the page format used within PostgreSQL tables and indexes. A page is laid out as follows:

Item             Description
PageHeaderData   20 bytes long. Contains general information about the page, including free space pointers.
ItemPointerData  Array of (offset,length) pairs pointing to the actual items. 4 bytes per item.
Free space       The unallocated space. New item pointers are allocated from the start of this area, new items from the end.
Items            The actual items themselves.
Special space    Index access method specific data. Different methods store different data. Empty in ordinary tables.
The first 20 bytes of each page consist of a page header (PageHeaderData). Its format is detailed in Table
49-3. The first two fields track the most recent WAL entry related to this page. They are followed by
three 2-byte integer fields (pd_lower, pd_upper, and pd_special). These contain byte offsets from
the page start to the start of unallocated space, to the end of unallocated space, and to the start of the
special space. The last 2 bytes of the page header, pd_pagesize_version, store both the page size and
1. Actually, index access methods need not use this page format. All the existing index methods do use this basic format, but the
data kept on index metapages usually doesn’t follow the item layout rules.
a version indicator. Beginning with PostgreSQL 8.0 the version number is 2; PostgreSQL 7.3 and 7.4 used
version number 1; prior releases used version number 0. (The basic page layout and header format has not
changed in these versions, but the layout of heap row headers has.) The page size is basically only present
as a cross-check; there is no support for having more than one page size in an installation.
All table rows are structured in the same way. There is a fixed-size header (occupying 27 bytes on most
machines), followed by an optional null bitmap, an optional object ID field, and the user data. The header
is detailed in Table 49-4. The actual user data (columns of the row) begins at the offset indicated by
t_hoff, which must always be a multiple of the MAXALIGN distance for the platform. The null bitmap
is only present if the HEAP_HASNULL bit is set in t_infomask. If it is present it begins just after the
fixed header and occupies enough bytes to have one bit per data column (that is, t_natts bits altogether).
In this list of bits, a 1 bit indicates not-null, a 0 bit is a null. When the bitmap is not present, all columns
are assumed not-null. The object ID is only present if the HEAP_HASOID bit is set in t_infomask. If
present, it appears just before the t_hoff boundary. Any padding needed to make t_hoff a MAXALIGN
multiple will appear between the null bitmap and the object ID. (This in turn ensures that the object ID is
suitably aligned.)
Chapter 50. BKI Backend Interface
Backend Interface (BKI) files are scripts in a special language that is understood by the PostgreSQL
backend when running in the “bootstrap” mode. The bootstrap mode allows system catalogs to be created
and filled from scratch, whereas ordinary SQL commands require the catalogs to exist already. BKI files
can therefore be used to create the database system in the first place. (And they are probably not useful
for anything else.)
initdb uses a BKI file to do part of its job when creating a new database cluster. The input file used by
initdb is created as part of building and installing PostgreSQL by a program named genbki.sh, which
reads some specially formatted C header files in the src/include/catalog/ directory of the source
tree. The created BKI file is called postgres.bki and is normally installed in the share subdirectory
of the installation tree.
Related information may be found in the documentation for initdb.
50.2. BKI Commands
create tablename [bootstrap] [shared_relation] [without_oids] (name1 = type1 [, ...])
Create a table named tablename with the columns given in parentheses. When bootstrap is specified, the table is only created on disk; nothing is entered into the system catalogs for it, so it is not accessible through ordinary SQL operations until such entries are made the hard way (with insert commands). This option is used for creating pg_class etc themselves.
The table is created as shared if shared_relation is specified. It will have OIDs unless
without_oids is specified.
open tablename
Open the table called tablename for further manipulation.
close [tablename]
Close the open table called tablename. It is an error if tablename is not already opened. If no
tablename is given, then the currently open table is closed.
insert [OID = oid_value] (value1 value2 ...)
Insert a new row into the open table using value1, value2, etc., for its column values and
oid_value for its OID. If oid_value is zero (0) or the clause is omitted, then the next available
OID is used.
NULL values can be specified using the special key word _null_. Values containing spaces must be
double quoted.
declare [unique] index indexname on tablename using amname (opclass1 name1 [, ...])
Create an index named indexname on the table named tablename using the amname access
method. The fields to index are called name1, name2 etc., and the operator classes to use are
opclass1, opclass2 etc., respectively. The index file is created and appropriate catalog entries
are made for it, but the index contents are not initialized by this command.
build indices
Fill in the indices that have previously been declared.
50.3. Example
The following sequence of commands will create the table test_table with two columns cola and
colb of type int4 and text, respectively, and insert two rows into the table.
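A sketch of that sequence, using the commands described above (the OID values are illustrative placeholders):

    create test_table (cola = int4, colb = text)
    open test_table
    insert OID=421 ( 1 "value one" )
    insert OID=722 ( 2 _null_ )
    close test_table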
VIII. Appendixes
Appendix A. PostgreSQL Error Codes
All messages emitted by the PostgreSQL server are assigned five-character error codes that follow the SQL
standard’s conventions for “SQLSTATE” codes. Applications that need to know which error condition
has occurred should usually test the error code, rather than looking at the textual error message. The error
codes are less likely to change across PostgreSQL releases, and also are not subject to change due to
localization of error messages. Note that some, but not all, of the error codes produced by PostgreSQL are
defined by the SQL standard; some additional error codes for conditions not defined by the standard have
been invented or borrowed from other databases.
According to the standard, the first two characters of an error code denote a class of errors, while the last
three characters indicate a specific condition within that class. Thus, an application that does not recognize
the specific error code may still be able to infer what to do from the error class.
Table A-1 lists all the error codes defined in PostgreSQL 8.0.0. (Some are not actually used at present,
but are defined by the SQL standard.) The error classes are also shown. For each error class there is a
“standard” error code having the last three characters 000. This code is used only for error conditions that
fall within the class but do not have any more-specific code assigned.
The PL/pgSQL condition name for each error code is the same as the phrase shown in the table, with
underscores substituted for spaces. For example, code 22012, DIVISION BY ZERO, has condition name
DIVISION_BY_ZERO. Condition names can be written in either upper or lower case. (Note that PL/pgSQL
does not recognize warning, as opposed to error, condition names; those are classes 00, 01, and 02.)
Appendix B. Date/Time Support
PostgreSQL uses an internal heuristic parser for all date/time input support. Dates and times are input as
strings, and are broken up into distinct fields with a preliminary determination of what kind of information
may be in the field. Each field is interpreted and either assigned a numeric value, ignored, or rejected. The
parser contains internal lookup tables for all textual fields, including months, days of the week, and time
zones.
This appendix includes information on the content of these lookup tables and describes the steps used by
the parser to decode dates and times.
1. Break the input string into tokens and categorize each token as a string, time, time zone, or number.
a. If the numeric token contains a colon (:), this is a time string. Include all subsequent digits
and colons.
b. If the numeric token contains a dash (-), slash (/), or two or more dots (.), this is a date
string which may have a text month.
c. If the token is numeric only, then it is either a single field or an ISO 8601 concatenated date
(e.g., 19990113 for January 13, 1999) or time (e.g., 141516 for 14:15:16).
d. If the token starts with a plus (+) or minus (-), then it is either a time zone or a special field.
2. If the token is a text string, match up with possible strings.
a. Do a binary-search table lookup for the token as either a special string (e.g., today), day
(e.g., Thursday), month (e.g., January), or noise word (e.g., at, on).
Set field values and bit mask for fields. For example, set year, month, day for today, and
additionally hour, minute, second for now.
b. If not found, do a similar binary-search table lookup to match the token with a time zone.
c. If still not found, throw an error.
3. When the token is a number or number field:
a. If there are eight or six digits, and if no other date fields have been previously read,
then interpret as a “concatenated date” (e.g., 19990118 or 990118). The interpretation
is YYYYMMDD or YYMMDD.
b. If the token is three digits and a year has already been read, then interpret as day of year.
c. If four or six digits and a year has already been read, then interpret as a time (HHMM or
HHMMSS).
1147
Appendix B. Date/Time Support
d. If three or more digits and no date fields have yet been found, interpret as a year (this forces
yy-mm-dd ordering of the remaining date fields).
e. Otherwise the date field ordering is assumed to follow the DateStyle setting: mm-dd-yy,
dd-mm-yy, or yy-mm-dd. Throw an error if a month or day field is found to be out of range.
4. If BC has been specified, negate the year and add one for internal storage. (There is no year zero in
the Gregorian calendar, so numerically 1 BC becomes year zero.)
5. If BC was not specified, and if the year field was two digits in length, then adjust the year to four
digits. If the field is less than 70, then add 2000, otherwise add 1900.
Tip: Gregorian years AD 1-99 may be entered by using 4 digits with leading zeros (e.g., 0099 is AD
99). Previous versions of PostgreSQL accepted years with three digits and with single digits, but
as of version 7.0 the rules have been tightened up to reduce the possibility of ambiguity.
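As a quick illustration of these rules (output shown assumes the default ISO DateStyle):

    SELECT date '19990118';    -- concatenated YYYYMMDD (rule 3a): 1999-01-18
    SELECT date '990118';      -- concatenated YYMMDD (rules 3a and 5): 1999-01-18
    SELECT date '0099-01-18';  -- four-digit year with leading zeros: AD 99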
Table B-1 shows the tokens that are recognized as names of months.
Month Abbreviations
January Jan
February Feb
March Mar
April Apr
May
June Jun
July Jul
August Aug
September Sep, Sept
October Oct
November Nov
December Dec
Table B-2 shows the tokens that are recognized as names of days of the week.
Day Abbreviations
Sunday Sun
Monday Mon
Tuesday Tue, Tues
Wednesday Wed, Weds
Thursday Thu, Thur, Thurs
Friday Fri
Saturday Sat
Table B-3 shows the tokens that serve various modifier purposes.
Identifier Description
ABSTIME Ignored
AM Time is before 12:00
AT Ignored
JULIAN, JD, J Next field is Julian Day
ON Ignored
PM Time is on or after 12:00
T Next field is time
The key word ABSTIME is ignored for historical reasons: In very old releases of PostgreSQL, invalid values of type abstime were emitted as Invalid Abstime. This is no longer the case, however, and this key word will likely be dropped in a future release.
Table B-4 shows the time zone abbreviations recognized by PostgreSQL in date/time input values. Note
that these names are not necessarily used for date/time output — output is driven by the official timezone
abbreviation(s) associated with the currently selected timezone parameter setting. (It is likely that future
releases will make some use of timezone for input as well.)
The table is organized by time zone offset from UTC, rather than alphabetically. This is intended to
facilitate matching local usage with recognized abbreviations for cases where these might differ.
Australian Time Zones. There are three naming conflicts between Australian time zone names and time
zone names commonly used in North and South America: ACST, CST, and EST. If the run-time option
australian_timezones is set to true then ACST, CST, EST, and SAT are interpreted as Australian time
zone names, as shown in Table B-5. If it is false (which is the default), then ACST, CST, and EST are taken
as American time zone names, and SAT is interpreted as a noise word indicating Saturday.
Table B-6 shows the time zone names recognized by PostgreSQL as valid settings for the timezone pa-
rameter. Note that these names are conceptually as well as practically different from the names shown in
Table B-4: most of these names imply a local daylight-savings time rule, whereas the former names each
represent just a fixed offset from UTC.
In many cases there are several equivalent names for the same zone. These are listed on the same line. The
table is primarily sorted by the name of the principal city of the zone.
Time Zone
Africa/Abidjan
Africa/Accra
Africa/Addis_Ababa
Africa/Algiers
Africa/Asmera
Africa/Bamako
Africa/Bangui
Africa/Banjul
Africa/Bissau
Africa/Blantyre
Africa/Brazzaville
Africa/Bujumbura
Africa/Cairo Egypt
Africa/Casablanca
Africa/Ceuta
Africa/Conakry
Africa/Dakar
Africa/Dar_es_Salaam
Africa/Djibouti
Africa/Douala
Africa/El_Aaiun
Africa/Freetown
Africa/Gaborone
Africa/Harare
Africa/Johannesburg
Africa/Kampala
Africa/Khartoum
Africa/Kigali
Africa/Kinshasa
Africa/Lagos
Africa/Libreville
Africa/Lome
Africa/Luanda
Africa/Lubumbashi
Africa/Lusaka
Africa/Malabo
Africa/Maputo
Africa/Maseru
Africa/Mbabane
Africa/Mogadishu
Africa/Monrovia
Africa/Nairobi
Africa/Ndjamena
Africa/Niamey
Africa/Nouakchott
Africa/Ouagadougou
Africa/Porto-Novo
Africa/Sao_Tome
Africa/Timbuktu
Africa/Tripoli Libya
Africa/Tunis
Africa/Windhoek
America/Adak America/Atka US/Aleutian
America/Anchorage SystemV/YST9YDT US/Alaska
America/Anguilla
America/Antigua
America/Araguaina
America/Aruba
America/Asuncion
America/Bahia
America/Barbados
America/Belem
America/Belize
America/Boa_Vista
America/Bogota
America/Boise
America/Buenos_Aires
America/Cambridge_Bay
America/Campo_Grande
America/Cancun
America/Caracas
America/Catamarca
America/Cayenne
America/Cayman
America/Chicago CST6CDT SystemV/CST6CDT US/Central
America/Chihuahua
America/Cordoba America/Rosario
America/Costa_Rica
America/Cuiaba
America/Curacao
America/Danmarkshavn
America/Dawson
America/Dawson_Creek
America/Denver MST7MDT SystemV/MST7MDT US/Mountain America/Shiprock Navajo
America/Detroit US/Michigan
America/Dominica
America/Edmonton Canada/Mountain
America/Eirunepe
America/El_Salvador
America/Ensenada America/Tijuana Mexico/BajaNorte
America/Fortaleza
America/Glace_Bay
America/Godthab
America/Goose_Bay
America/Grand_Turk
America/Grenada
America/Guadeloupe
America/Guatemala
America/Guayaquil
America/Guyana
America/Halifax Canada/Atlantic SystemV/AST4ADT
America/Havana Cuba
America/Hermosillo
America/Indiana/Indianapolis America/Indianapolis America/Fort_Wayne EST SystemV/EST5 US/East-Indiana
America/Indiana/Knox America/Knox_IN US/Indiana-Starke
America/Indiana/Marengo
America/Indiana/Vevay
America/Inuvik
America/Iqaluit
America/Jamaica Jamaica
America/Jujuy
America/Juneau
America/Kentucky/Louisville America/Louisville
America/Kentucky/Monticello
America/La_Paz
America/Lima
America/Los_Angeles PST8PDT SystemV/PST8PDT US/Pacific US/Pacific-New
America/Maceio
America/Managua
America/Manaus Brazil/West
America/Martinique
America/Mazatlan Mexico/BajaSur
America/Mendoza
America/Menominee
America/Merida
America/Mexico_City Mexico/General
America/Miquelon
America/Monterrey
America/Montevideo
America/Montreal
America/Montserrat
America/Nassau
America/New_York EST5EDT SystemV/EST5EDT US/Eastern
America/Nipigon
America/Nome
America/Noronha Brazil/DeNoronha
America/North_Dakota/Center
America/Panama
America/Pangnirtung
America/Paramaribo
America/Phoenix MST SystemV/MST7 US/Arizona
America/Port-au-Prince
America/Port_of_Spain
America/Porto_Acre America/Rio_Branco Brazil/Acre
America/Porto_Velho
America/Puerto_Rico SystemV/AST4
America/Rainy_River
America/Rankin_Inlet
America/Recife
America/Regina Canada/East-Saskatchewan Canada/Saskatchewan SystemV/CST6
America/Santiago Chile/Continental
America/Santo_Domingo
America/Sao_Paulo Brazil/East
America/Scoresbysund
America/St_Johns Canada/Newfoundland
America/St_Kitts
America/St_Lucia
America/St_Thomas America/Virgin
America/St_Vincent
America/Swift_Current
America/Tegucigalpa
America/Thule
America/Thunder_Bay
America/Toronto Canada/Eastern
America/Tortola
America/Vancouver Canada/Pacific
America/Whitehorse Canada/Yukon
America/Winnipeg Canada/Central
America/Yakutat
America/Yellowknife
Antarctica/Casey
Antarctica/Davis
Antarctica/DumontDUrville
Antarctica/Mawson
Antarctica/McMurdo Antarctica/South_Pole
Antarctica/Palmer
Antarctica/Rothera
Antarctica/Syowa
Antarctica/Vostok
Asia/Aden
Asia/Almaty
Asia/Amman
Asia/Anadyr
Asia/Aqtau
Asia/Aqtobe
Asia/Ashgabat Asia/Ashkhabad
Asia/Baghdad
Asia/Bahrain
Asia/Baku
Asia/Bangkok
Asia/Beirut
Asia/Bishkek
Asia/Brunei
Asia/Calcutta
Asia/Choibalsan
Asia/Chongqing Asia/Chungking
Asia/Colombo
Asia/Dacca Asia/Dhaka
Asia/Damascus
Asia/Dili
Asia/Dubai
Asia/Dushanbe
Asia/Gaza
Asia/Harbin
Asia/Hong_Kong Hongkong
Asia/Hovd
Asia/Irkutsk
Asia/Jakarta
Asia/Jayapura
Asia/Jerusalem Asia/Tel_Aviv Israel
Asia/Kabul
Asia/Kamchatka
Asia/Karachi
Asia/Kashgar
Asia/Katmandu
Asia/Krasnoyarsk
Asia/Kuala_Lumpur
Asia/Kuching
Asia/Kuwait
Asia/Macao Asia/Macau
Asia/Magadan
Asia/Makassar Asia/Ujung_Pandang
Asia/Manila
Asia/Muscat
Asia/Nicosia Europe/Nicosia
Asia/Novosibirsk
Asia/Omsk
Asia/Oral
Asia/Phnom_Penh
Asia/Pontianak
Asia/Pyongyang
Asia/Qatar
Asia/Qyzylorda
Asia/Rangoon
Asia/Riyadh
Asia/Riyadh87 Mideast/Riyadh87
Asia/Riyadh88 Mideast/Riyadh88
Asia/Riyadh89 Mideast/Riyadh89
Asia/Saigon
Asia/Sakhalin
Asia/Samarkand
Asia/Seoul ROK
Asia/Shanghai PRC
Asia/Singapore Singapore
Asia/Taipei ROC
Asia/Tashkent
Asia/Tbilisi
Asia/Tehran Iran
Asia/Thimbu Asia/Thimphu
Asia/Tokyo Japan
Asia/Ulaanbaatar Asia/Ulan_Bator
Asia/Urumqi
Asia/Vientiane
Asia/Vladivostok
Asia/Yakutsk
Asia/Yekaterinburg
Asia/Yerevan
Atlantic/Azores
Atlantic/Bermuda
Atlantic/Canary
Atlantic/Cape_Verde
Atlantic/Faeroe
Atlantic/Madeira
Atlantic/Reykjavik Iceland
Atlantic/South_Georgia
Atlantic/St_Helena
Atlantic/Stanley
Australia/ACT Australia/Canberra Australia/NSW Australia/Sydney
Australia/Adelaide Australia/South
Australia/Brisbane Australia/Queensland
Australia/Broken_Hill Australia/Yancowinna
Australia/Darwin Australia/North
Australia/Hobart Australia/Tasmania
Australia/LHI Australia/Lord_Howe
Australia/Lindeman
Australia/Melbourne Australia/Victoria
Australia/Perth Australia/West
CET
EET
Etc/GMT+1
Etc/GMT+2
Etc/GMT+3
Etc/GMT+4
Etc/GMT+5
Etc/GMT+6
Etc/GMT+7
Etc/GMT+8
Etc/GMT+9
Etc/GMT+10
Etc/GMT+11
Etc/GMT+12
Etc/GMT-1
Etc/GMT-2
Etc/GMT-3
Etc/GMT-4
Etc/GMT-5
Etc/GMT-6
Etc/GMT-7
Etc/GMT-8
Etc/GMT-9
Etc/GMT-10
Etc/GMT-11
Etc/GMT-12
Etc/GMT-13
Etc/GMT-14
Europe/Amsterdam
Europe/Andorra
Europe/Athens
Europe/Belfast
Europe/Belgrade Europe/Ljubljana Europe/Sarajevo Europe/Skopje Europe/Zagreb
Europe/Berlin
Europe/Brussels
Europe/Bucharest
Europe/Budapest
Europe/Chisinau Europe/Tiraspol
Europe/Copenhagen
Europe/Dublin Eire
Europe/Gibraltar
Europe/Helsinki
Europe/Istanbul Asia/Istanbul Turkey
Europe/Kaliningrad
Europe/Kiev
Europe/Lisbon Portugal
Europe/London GB GB-Eire
Europe/Luxembourg
Europe/Madrid
Europe/Malta
Europe/Minsk
Europe/Monaco
Europe/Moscow W-SU
Europe/Oslo Arctic/Longyearbyen Atlantic/Jan_Mayen
Europe/Paris
Europe/Prague Europe/Bratislava
Europe/Riga
Europe/Rome Europe/San_Marino Europe/Vatican
Europe/Samara
Europe/Simferopol
Europe/Sofia
Europe/Stockholm
Europe/Tallinn
Europe/Tirane
Europe/Uzhgorod
Europe/Vaduz
Europe/Vienna
Europe/Vilnius
Europe/Warsaw Poland
Europe/Zaporozhye
Europe/Zurich
Factory
GMT GMT+0 GMT-0 GMT0 Greenwich Etc/GMT Etc/GMT+0 Etc/GMT-0 Etc/GMT0 Etc/Greenwich
Indian/Antananarivo
Indian/Chagos
Indian/Christmas
Indian/Cocos
Indian/Comoro
Indian/Kerguelen
Indian/Mahe
Indian/Maldives
Indian/Mauritius
Indian/Mayotte
Indian/Reunion
MET
Pacific/Apia
Pacific/Auckland NZ
Pacific/Chatham NZ-CHAT
Pacific/Easter Chile/EasterIsland
Pacific/Efate
Pacific/Enderbury
Pacific/Fakaofo
Pacific/Fiji
Pacific/Funafuti
Pacific/Galapagos
Pacific/Gambier SystemV/YST9
Pacific/Guadalcanal
Pacific/Guam
Pacific/Honolulu HST SystemV/HST10 US/Hawaii
Pacific/Johnston
Pacific/Kiritimati
Pacific/Kosrae
Pacific/Kwajalein Kwajalein
Pacific/Majuro
Pacific/Marquesas
Pacific/Midway
Pacific/Nauru
Pacific/Niue
Pacific/Norfolk
Pacific/Noumea
Pacific/Pago_Pago Pacific/Samoa US/Samoa
Pacific/Palau
Pacific/Pitcairn SystemV/PST8
Pacific/Ponape
Pacific/Port_Moresby
Pacific/Rarotonga
Pacific/Saipan
Pacific/Tahiti
Pacific/Tarawa
Pacific/Tongatapu
Pacific/Truk
Pacific/Wake
Pacific/Wallis
Pacific/Yap
UCT Etc/UCT
UTC Universal Zulu Etc/UTC Etc/Universal Etc/Zulu
WET
In addition to the names listed in the table, PostgreSQL will accept time zone names of the form
STDoffset or STDoffsetDST, where STD is a zone abbreviation, offset is a numeric offset in
hours west from UTC, and DST is an optional daylight-savings zone abbreviation, assumed to stand
for one hour ahead of the given offset. For example, if EST5EDT were not already a recognized zone
name, it would be accepted and would be functionally equivalent to USA East Coast time. When a
daylight-savings zone name is present, it is assumed to be used according to USA time zone rules, so this
feature is of limited use outside North America. One should also be wary that this provision can lead to
silently accepting bogus input, since there is no check on the reasonableness of the zone abbreviations.
For example, SET TIMEZONE TO FOOBAR0 will work, leaving the system effectively using a rather
peculiar abbreviation for GMT.
The Gregorian calendar reform is visible in the output of the Unix cal command for September 1752, the month in which Britain and her colonies switched from the Julian to the Gregorian calendar:

$ cal 9 1752
   September 1752
 S  M Tu  W Th  F  S
       1  2 14 15 16
17 18 19 20 21 22 23
24 25 26 27 28 29 30
Note: The SQL standard states that “Within the definition of a ‘datetime literal’, the ‘datetime value’s
are constrained by the natural rules for dates and times according to the Gregorian calendar”. Dates
between 1752-09-03 and 1752-09-13, although eliminated in some countries by Papal fiat, conform
to “natural rules” and are hence valid dates.
Different calendars have been developed in various parts of the world, many predating the Gregorian
system. For example, the beginnings of the Chinese calendar can be traced back to the 14th century BC.
Legend has it that the Emperor Huangdi invented the calendar in 2637 BC. The People’s Republic of China
uses the Gregorian calendar for civil purposes. The Chinese calendar is used for determining festivals.
Appendix C. SQL Key Words
Table C-1 lists all tokens that are key words in the SQL standard and in PostgreSQL 8.0.0. Background
information can be found in Section 4.1.1.
SQL distinguishes between reserved and non-reserved key words. According to the standard, reserved
key words are the only real key words; they are never allowed as identifiers. Non-reserved key words only
have a special meaning in particular contexts and can be used as identifiers in other contexts. Most non-
reserved key words are actually the names of built-in tables and functions specified by SQL. The concept
of non-reserved key words essentially only exists to declare that some predefined meaning is attached to
a word in some contexts.
In the PostgreSQL parser life is a bit more complicated. There are several different classes of tokens
ranging from those that can never be used as an identifier to those that have absolutely no special status in
the parser as compared to an ordinary identifier. (The latter is usually the case for functions specified by
SQL.) Even reserved key words are not completely reserved in PostgreSQL, but can be used as column
labels (for example, SELECT 55 AS CHECK, even though CHECK is a reserved key word).
In Table C-1 in the column for PostgreSQL we classify as “non-reserved” those key words that are explic-
itly known to the parser but are allowed in most or all contexts where an identifier is expected. Some key
words that are otherwise non-reserved cannot be used as function or data type names and are marked ac-
cordingly. (Most of these words represent built-in functions or data types with special syntax. The function
or type is still available but it cannot be redefined by the user.) Labeled “reserved” are those tokens that
are only allowed as “AS” column label names (and perhaps in very few other contexts). Some reserved
key words are allowable as names for functions; this is also shown in the table.
As a general rule, if you get spurious parser errors for commands that contain any of the listed key words
as an identifier you should try to quote the identifier to see if the problem goes away.
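For example (the table is hypothetical), a column that happens to share its name with a reserved key word must be written as a quoted identifier:

    -- CHECK is reserved, so an unquoted reference would be a syntax error.
    SELECT "check" FROM test_table;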
It is important to understand before studying Table C-1 that the fact that a key word is not reserved
in PostgreSQL does not mean that the feature related to the word is not implemented. Conversely, the
presence of a key word does not indicate the existence of a feature.
Table C-1. SQL Key Words (showing, for each key word, whether it is reserved or non-reserved in PostgreSQL, SQL:2003, SQL:1999, and SQL-92)
Appendix D. SQL Conformance
This section attempts to outline to what extent PostgreSQL conforms to the current SQL standard. The
following information is not a full statement of conformance, but it presents the main topics in as much
detail as is both reasonable and useful for users.
The formal name of the SQL standard is ISO/IEC 9075 “Database Language SQL”. A revised version
of the standard is released from time to time; the most recent one appearing in late 2003. That version is
referred to as ISO/IEC 9075:2003, or simply as SQL:2003. The versions prior to that were SQL:1999 and
SQL-92. Each version replaces the previous one, so claims of conformance to earlier versions have no
official merit. PostgreSQL development aims for conformance with the latest official version of the stan-
dard where such conformance does not contradict traditional features or common sense. The PostgreSQL
project was not represented in the ISO/IEC 9075 Working Group during the preparation of SQL:2003.
Even so, many of the features required by SQL:2003 are already supported, though sometimes with
slightly differing syntax or function. Further moves towards conformance may be expected in later re-
leases.
SQL-92 defined three feature sets for conformance: Entry, Intermediate, and Full. Most database man-
agement systems claiming SQL standard conformance were conforming at only the Entry level, since the
entire set of features in the Intermediate and Full levels was either too voluminous or in conflict with
legacy behaviors.
Starting with SQL:1999, the SQL standard defines a large set of individual features rather than the ineffec-
tively broad three levels found in SQL-92. A large subset of these features represents the “Core” features,
which every conforming SQL implementation must supply. The rest of the features are purely optional.
Some optional features are grouped together to form “packages”, which SQL implementations can claim
conformance to, thus claiming conformance to particular groups of features.
The SQL:2003 standard is also split into a number of parts. Each is known by a shorthand name. Note
that these parts are not consecutively numbered.
PostgreSQL covers parts 1, 2, and 11. Part 3 is similar to the ODBC interface, and part 4 is similar to the
PL/pgSQL programming language, but exact conformance is not specifically intended or verified in either
case.
PostgreSQL supports most of the major features of SQL:2003. Out of 164 mandatory features required for
full Core conformance, PostgreSQL conforms to at least 150. In addition, there is a long list of supported
optional features. It may be worth noting that at the time of writing, no current version of any database
management system claims full conformance to Core SQL:2003.
In the following two sections, we provide a list of those features that PostgreSQL supports, followed
by a list of the features defined in SQL:2003 which are not yet supported in PostgreSQL. Both of these
lists are approximate: There may be minor details that are nonconforming for a feature that is listed as
supported, and large parts of an unsupported feature may in fact be implemented. The main body of the
documentation always contains the most accurate information about what does and does not work.
Note: Feature codes containing a hyphen are subfeatures. Therefore, if a particular subfeature is not
supported, the main feature is listed as unsupported even if some other subfeatures are supported.
Appendix E. Release Notes
E.1.1. Overview
Major changes in this release are described in the sections below.

E.1.2. Migration to version 8.0
Observe the following incompatibilities when migrating from previous releases:
• In READ COMMITTED serialization mode, volatile functions now see the results of concurrent trans-
actions committed up to the beginning of each statement within the function, rather than up to the
beginning of the interactive command that called the function.
• Functions declared STABLE or IMMUTABLE always use the snapshot of the calling query, and therefore
do not see the effects of actions taken after the calling query starts, whether in their own transaction
or other transactions. Such a function must be read-only, too, meaning that it cannot use any SQL
commands other than SELECT.
• Non-deferred AFTER triggers are now fired immediately after completion of the triggering query, rather
than upon finishing the current interactive command. This makes a difference when the triggering query
occurred within a function: the trigger is invoked before the function proceeds to its next operation.
• Server configuration parameters virtual_host and tcpip_socket have been replaced with a more
general parameter listen_addresses. Also, the server now listens on localhost by default, which
eliminates the need for the -i postmaster switch in many scenarios.
• Server configuration parameters SortMem and VacuumMem have been renamed to work_mem and
maintenance_work_mem to better reflect their use. The original names are still supported in SET and
SHOW.
• Server configuration parameters log_pid, log_timestamp, and log_source_port have been re-
placed with a more general parameter log_line_prefix.
• Server configuration parameter syslog has been replaced with a more logical log_destination
variable to control the log output destination.
• Server configuration parameter log_statement has been changed so it can selectively log just
database modification or data definition statements. Server configuration parameter log_duration
now prints only when log_statement prints the query.
• Server configuration parameter max_expr_depth has been replaced with max_stack_depth, which measures the physical stack size rather than the expression nesting depth.
This helps prevent session termination due to stack overflow caused by recursive functions.
• The length() function no longer counts trailing spaces in CHAR(n) values.
• Casting an integer to BIT(N) selects the rightmost N bits of the integer, not the leftmost N bits as
before.
• Updating an element or slice of a NULL array value now produces a non-NULL array result, namely
an array containing just the assigned-to positions.
• Syntax checking of array input values has been tightened up considerably. Junk that was previously
allowed in odd places with odd results now causes an error. Empty-string element values must now be
written as "", rather than writing nothing. Also changed behavior with respect to whitespace surround-
ing array elements: trailing whitespace is now ignored, for symmetry with leading whitespace (which
has always been ignored).
• Overflow in integer arithmetic operations is now detected and reported as an error.
• The arithmetic operators associated with the single-byte "char" data type have been removed.
• The extract() function (also called date_part) now returns the proper year for BC dates. It pre-
viously returned one less than the correct year. The function now also returns the proper values for
millennium and century.
• CIDR values now must have their non-masked bits be zero. For example, we no longer allow
204.248.199.1/31 as a CIDR value. Such values should never have been accepted by PostgreSQL
and will now be rejected.
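For example (the addresses are purely illustrative):
SELECT '204.248.199.0/31'::cidr;   -- accepted: no bits are set to the right of the mask
SELECT '204.248.199.1/31'::cidr;   -- now rejected with an error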
• EXECUTE now returns a completion tag that matches the executed statement.
• psql’s \copy command now reads or writes to the query’s stdin/stdout, rather than psql’s
stdin/stdout. The previous behavior can be accessed via new pstdin/pstdout parameters.
• The JDBC client interface has been removed from the core distribution, and is now hosted at
http://jdbc.postgresql.org.
• The Tcl client interface has also been removed. There are several Tcl interfaces now hosted at
http://gborg.postgresql.org.
• The server now uses its own time zone database, rather than the one supplied by the operating system.
This will provide consistent behavior across all platforms. In most cases, there should be little noticeable
difference in time zone behavior, except that the time zone names used by SET/SHOW TimeZone may
be different from what your platform provides.
• Configure’s threading option no longer requires users to run tests or edit configuration files; threading
options are now detected automatically.
• Now that tablespaces have been implemented, initlocation has been removed.
E.1.4. Changes
Below you will find a detailed account of the changes between release 8.0 and the previous major release.
• Add subprocess to write dirty buffers periodically to reduce checkpoint writes (Jan)
In previous releases, the checkpoint process, which runs every few minutes, would write all dirty buffers
to the operating system’s buffer cache then flush all dirty operating system buffers to disk. This resulted
in a periodic spike in disk usage that often hurt performance. The new code uses a background writer
to trickle disk writes at a steady pace so checkpoints have far fewer dirty pages to write to disk. Also,
the new code does not issue a global sync() call, but instead fsync()s just the files written since the
last checkpoint. This should improve performance and minimize degradation during checkpoints.
On busy systems, VACUUM performs many I/O requests which can hurt performance for other users.
This release allows you to slow down VACUUM to reduce its impact on other users, though this increases
the total duration of VACUUM.
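A minimal sketch of how this can be used, assuming the new cost-based settings vacuum_cost_delay
and vacuum_cost_limit (the values shown are arbitrary examples, not recommendations):
SET vacuum_cost_delay = 10;    -- sleep 10 milliseconds each time the cost limit is reached
SET vacuum_cost_limit = 200;   -- accumulated page cost that triggers the sleep
VACUUM ANALYZE;                -- this run now yields periodically to other activity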
• Improve B-tree index performance for duplicate keys (Dmitry Tkach, Tom)
This improves the way indexes are scanned when many duplicate values exist in the index.
Expression indexes (also called functional indexes) allow users to index not just columns but the results
of expressions and function calls. With this release, the optimizer can gather and use statistics about the
contents of expression indexes. This will greatly improve the quality of planning for queries in which
an expression index is relevant.
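A minimal sketch (the table, index, and search value are hypothetical):
CREATE TABLE people (name text);
CREATE INDEX people_lower_name_idx ON people (lower(name));
ANALYZE people;   -- now also gathers statistics for the lower(name) expression
SELECT * FROM people WHERE lower(name) = 'smith';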
• Add new read-only server configuration parameters to show server compile-time settings:
block_size, integer_datetimes, max_function_args, max_identifier_length,
max_index_keys (Joe)
• Make quoting of sameuser, samegroup, and all remove special meaning of these terms in
pg_hba.conf (Andrew)
• Use clearer IPv6 name ::1/128 for localhost in default pg_hba.conf (Andrew)
• Use CIDR format in pg_hba.conf examples (Andrew)
• Rename server configuration parameters SortMem and VacuumMem to work_mem and
maintenance_work_mem (Old names still supported) (Tom)
This change was made to clarify that bulk operations such as index and foreign key creation use
maintenance_work_mem, while work_mem is for workspaces used during query execution.
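For illustration, the renamed parameters can be inspected and, where permitted, set per session (the
values are arbitrary and expressed in kilobytes):
SET work_mem = 16384;               -- formerly SortMem
SET maintenance_work_mem = 65536;   -- formerly VacuumMem
SHOW work_mem;                      -- the old names also remain usable in SET and SHOW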
• Listen on localhost by default, which eliminates the need for the -i postmaster switch in many scenarios
(Andrew)
Listening on localhost (127.0.0.1) opens no new security holes but allows configurations like Win-
dows and JDBC, which do not support local sockets, to work without special adjustments.
• Remove syslog server configuration parameter, and add more logical log_destination variable to
control log output location (Magnus)
• Change server configuration parameter log_statement to take values all, mod, ddl, or none to
select which queries are logged (Bruce)
This allows administrators to log only data definition changes or only data modification statements.
• Some logging-related configuration parameters could formerly be adjusted by ordinary users, but only
in the “more verbose” direction. They are now treated more strictly: only superusers can set them. How-
ever, a superuser may use ALTER USER to provide per-user settings of these values for non-superusers.
Also, it is now possible for superusers to set values of superuser-only configuration parameters via
PGOPTIONS.
• Plan prepared queries only when first executed so constants can be used for statistics (Oliver Jowett)
Prepared statements plan queries once and execute them many times. While prepared queries avoid
the overhead of re-planning on each use, the quality of the plan suffers from not knowing the exact
parameters to be used in the query. In this release, planning of unnamed prepared statements is delayed
until the first execution, and the actual parameter values of that execution are used as optimization hints.
This allows use of out-of-line parameter passing without incurring a performance penalty.
• Fix hash joins and aggregates of inet and cidr data types (Tom)
Release 7.4 handled hashing of mixed inet and cidr values incorrectly. (This bug did not exist in
prior releases because they wouldn’t try to hash either data type.)
• Make log_duration print only when log_statement prints the query (Ed L.)
• Allow BEGIN WORK to specify transaction isolation levels like START TRANSACTION does (Bruce)
• Fix table permission checking for cases in which rules generate a query type different from the origi-
nally submitted query (Tom)
• Implement dollar quoting to simplify single-quote usage (Andrew, Tom, David Fetter)
In previous releases, because single quotes had to be used to quote a function’s body, the use of single
quotes inside the function text required use of two single quotes or other error-prone notations. With
this release we add the ability to use "dollar quoting" to quote a block of text. The ability to use different
quoting delimiters at different nesting levels greatly simplifies the task of quoting correctly, especially
in complex functions. Dollar quoting can be used anywhere quoted text is needed.
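As a brief sketch, the same function body written both ways (the function names are invented for
this example):
CREATE FUNCTION greet(text) RETURNS text AS
'SELECT ''Hello, '' || $1 || ''!''' LANGUAGE SQL;    -- old style: doubled single quotes
CREATE FUNCTION greet2(text) RETURNS text AS
$$SELECT 'Hello, ' || $1 || '!'$$ LANGUAGE SQL;      -- dollar quoting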
• Make CASE val WHEN compval1 THEN ... evaluate val only once (Tom)
CASE no longer evaluates the tested expression multiple times. This has benefits when the expression is
complex or is volatile.
• Allow LIKE/ILIKE to be used as the operator in row and subselect comparisons (Fabien Coelho)
• Avoid locale-specific case conversion of basic ASCII letters in identifiers and keywords (Tom)
This solves the “Turkish problem” with mangling of words containing I and i. Folding of characters
outside the 7-bit-ASCII set is still locale-aware.
• Change EXECUTE to return a completion tag matching the executed statement (Kris Jurka)
Previous releases returned an EXECUTE tag for any EXECUTE call. In this release, the tag returned will
reflect the command executed.
• Add COMMENT ON for casts, conversions, languages, operator classes, and large objects (Christopher)
• Add new server configuration parameter default_with_oids to control whether tables are created
with OIDs by default (Neil)
This allows administrators to control whether CREATE TABLE commands create tables with or without
OID columns by default. (Note: the current factory default setting for default_with_oids is TRUE,
but the default will become FALSE in future releases.)
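For example (the table names are hypothetical):
SET default_with_oids = false;
CREATE TABLE log_entries (msg text);                -- created without an OID column
CREATE TABLE legacy_entries (msg text) WITH OIDS;   -- explicit per-table override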
• Allow changing the owners of aggregates, conversions, databases, functions, operators, operator
classes, schemas, types, and tablespaces (Christopher, Euler Taveira de Oliveira)
Previously this required modifying the system tables directly.
• Allow temporary object creation to be limited to SECURITY DEFINER functions (Sean Chittenden)
• Add ALTER TABLE ... SET WITHOUT CLUSTER (Christopher)
Prior to this release, there was no way to clear an auto-cluster specification except to modify the system
tables.
• Warn when primary/foreign key data type mismatch requires costly lookup
• New ALTER INDEX command to allow moving of indexes between tablespaces (Gavin)
• Make ALTER TABLE OWNER change dependent sequence ownership too (Alvaro)
• Add a NOWAIT option to LOCK, which allows the command to fail immediately if it would have to
wait for the requested lock.
• Allow COPY to read and write comma-separated-value (CSV) files (Andrew, Bruce)
• Generate error if the COPY delimiter and NULL string conflict (Bruce)
• GRANT/REVOKE behavior follows the SQL spec more closely
• Avoid locking conflict between CREATE INDEX and CHECKPOINT (Tom)
In 7.3 and 7.4, a long-running B-tree index build could block concurrent CHECKPOINTs from complet-
ing, thereby causing WAL bloat because the WAL log could not be recycled.
• REINDEX does not exclusively lock the index’s parent table anymore
The index itself is still exclusively locked, but readers of the table can continue if they are not using the
particular index being rebuilt.
• Empty-string array element values must now be written as "", rather than writing nothing (Joe)
Formerly, both ways of writing an empty-string element value were allowed, but now a quoted empty
string is required. The case where nothing at all appears will probably be considered to be a NULL
element value in some future release.
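For example:
SELECT '{"",a,b}'::text[];   -- accepted: the first element is an empty string
SELECT '{,a,b}'::text[];     -- now raises a syntax error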
• Emit array values with explicit array bounds when lower bound is not one (Joe)
• Accept YYYY-monthname-DD as a date string (Tom)
• Make netmask and hostmask functions return maximum-length mask length (Tom)
• Change factorial function to return numeric (Gavin)
Returning numeric allows the factorial function to work for a wider range of input values.
• Warn about empty string being passed to OID/float4/float8 data types (Neil)
8.1 will throw an error instead.
• Add ceiling() as an alias for ceil(), and power() as an alias for pow() for standards compliance
(Neil)
• Change ln(), log(), power(), and sqrt() to emit the correct SQLSTATE error codes for certain
error conditions, as specified by SQL:2003 (Neil)
• Add width_bucket() function as defined by SQL:2003 (Neil)
• Add generate_series() functions to simplify working with numeric sets (Joe)
• Fix upper/lower/initcap() functions to work with multibyte encodings (Tom)
• Add boolean and bitwise integer AND/OR aggregates (Fabien Coelho)
• New session information functions to return network addresses for client and server (Sean Chittenden)
• Add function to determine the area of a closed path (Sean Chittenden)
• Add function to send cancel request to other backends (Magnus)
• Add interval plus datetime operators (Tom)
The reverse ordering, datetime plus interval, was already supported, but both are required by the
SQL standard.
• Casting an integer to BIT(N) selects the rightmost N bits of the integer (Tom)
In prior releases, the leftmost N bits were selected, but this was deemed unhelpful, not to mention
inconsistent with casting from bit to int.
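For example:
SELECT 44::bit(3);       -- yields 100, the rightmost three bits of 101100
SELECT B'101100'::int;   -- yields 44, the reverse cast shown for comparison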
• Require CIDR values to have all non-masked bits be zero (Kevin Brintnall)
• In READ COMMITTED serialization mode, volatile functions now see the results of concurrent trans-
actions committed up to the beginning of each statement within the function, rather than up to the
beginning of the interactive command that called the function.
• Functions declared STABLE or IMMUTABLE always use the snapshot of the calling query, and therefore
do not see the effects of actions taken after the calling query starts, whether in their own transaction
or other transactions. Such a function must be read-only, too, meaning that it cannot use any SQL
commands other than SELECT. There is a considerable performance gain from declaring a function
STABLE or IMMUTABLE rather than VOLATILE.
• Non-deferred AFTER triggers are now fired immediately after completion of the triggering query, rather
than upon finishing the current interactive command. This makes a difference when the triggering query
occurred within a function: the trigger is invoked before the function proceeds to its next operation. For
example, if a function inserts a new row into a table, any non-deferred foreign key checks occur before
proceeding with the function.
• Allow function parameters to be declared with names (Dennis Bjorklund)
This allows better documentation of functions. Whether the names actually do anything depends on the
specific function language being used.
• More support for composite types (row and record variables) in PL/pgSQL
For example, it now works to pass a rowtype variable to another function as a single variable.
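A minimal sketch of this, assuming a hypothetical table emp with a text column name:
CREATE FUNCTION emp_name(e emp) RETURNS text AS $$
BEGIN
    RETURN e.name;
END;
$$ LANGUAGE plpgsql;
CREATE FUNCTION first_emp_name() RETURNS text AS $$
DECLARE
    r emp%ROWTYPE;
BEGIN
    SELECT * INTO r FROM emp LIMIT 1;
    RETURN emp_name(r);   -- the whole row variable is passed as a single value
END;
$$ LANGUAGE plpgsql;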
• Default values for PL/pgSQL variables can now reference previously declared variables
• Improve parsing of PL/pgSQL FOR loops (Tom)
Parsing is now driven by presence of ".." rather than data type of FOR variable. This makes no differ-
ence for correct functions, but should result in more understandable error messages when a mistake is
made.
• Have psql \d+ indicate if the table has an OID column (Neil)
• On Windows, use binary mode in psql when reading files so control-Z is not seen as end-of-file
• Have \dn+ show permissions and description for schemas (Dennis Bjorklund)
• Improve tab completion support (Stefan Kaltenbrunn, Greg Sabino Mullane)
• Allow boolean settings to be set using upper or lower case (Michael Paesold)
• Allow the database server to run natively on Windows (Claudio, Magnus, Andrew)
• Shell script commands converted to C versions for Windows support (Andrew)
• Create an extension makefile framework (Fabien Coelho, Peter)
This simplifies the task of building extensions outside the original source tree.
• Use --with-docdir to choose installation location of documentation; also allow --infodir (Peter)
• Add --without-docdir to prevent installation of documentation (Peter)
• Upgrade to DocBook V4.2 SGML (Peter)
• New PostgreSQL CVS tag (Marc)
This was done to make it easier for organizations to manage their own copies of the PostgreSQL CVS
repository. File version stamps from the master repository will not get munged by checking into or out
of a copied repository.
• Allow dynamically loaded modules to create their own server configuration parameters (Thomas Hall-
gren)
• New Brazilian version of FAQ (Euler Taveira de Oliveira)
• Add French FAQ (Guillaume Lelarge)
• New pgevent for Windows logging
• Make libpq and ECPG build as proper shared libraries on OS X (Tom)
E.2.2. Changes
E.3.2. Changes
This patch fixes a rare case in which concurrent insertions into a B-tree index could result in a server
panic. No permanent damage would result, but it’s still worth a re-release. The bug does not exist in
pre-7.4 releases.
E.4.2. Changes
E.5.2. Changes
This can be done in a live database, but beware that all backends running in the altered database must be
restarted before it is safe to repopulate pg_statistic.
To repair the pg_settings error, simply do:
The above procedures must be carried out in each database of an installation, including template1, and
ideally including template0 as well. If you do not fix the template databases then any subsequently
created databases will contain the same errors. template1 can be fixed in the same way as any other
database, but fixing template0 requires additional steps. First, from any database issue
UPDATE pg_database SET datallowconn = true WHERE datname = 'template0';
Next connect to template0 and perform the above repair procedures. Finally, do
-- re-freeze template0:
VACUUM FREEZE;
-- and protect it against future alterations:
UPDATE pg_database SET datallowconn = false WHERE datname = 'template0';
E.6.2. Changes
Release 7.4.2 incorporates all the fixes included in release 7.3.6, plus the following fixes:
E.7.2. Changes
Fix crash on Solaris caused by use of any type of password authentication when no passwords were
defined.
E.8.1. Overview
Major changes in this release:
In previous releases, IN/NOT IN subqueries were joined to the upper query by sequentially scanning
the subquery looking for a match. The 7.4 code uses the same sophisticated techniques used by
ordinary joins and so is much faster. An IN will now usually be as fast as or faster than an equivalent
EXISTS subquery; this reverses the conventional wisdom that applied to previous releases.
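For instance, a query of the following form (the tables and columns are hypothetical) now benefits
from the improved join machinery:
SELECT * FROM orders
WHERE customer_id IN (SELECT id FROM customers WHERE state = 'CA');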
Array handling has been improved and moved into the server core
Many array limitations have been removed, and arrays behave more like fully-supported data types.
• The server-side autocommit setting was removed and reimplemented in client applications and lan-
guages. Server-side autocommit was causing too many problems with languages and applications that
wanted to control their own autocommit behavior, so autocommit was removed from the server and
added to individual client APIs as appropriate.
• Error message wording has changed substantially in this release. Significant effort was invested to
make the messages more consistent and user-oriented. If your applications try to detect different error
conditions by parsing the error message, you are strongly encouraged to use the new error code facility
instead.
• Inner joins using the explicit JOIN syntax may behave differently because they are now better opti-
mized.
• A number of server configuration parameters have been renamed for clarity, primarily those related to
logging.
• FETCH 0 or MOVE 0 now does nothing. In prior releases, FETCH 0 would fetch all remaining rows,
and MOVE 0 would move to the end of the cursor.
• FETCH and MOVE now return the actual number of rows fetched/moved, or zero if at the beginning/end
of the cursor. Prior releases would return the row count passed to the command, not the number of rows
actually fetched or moved.
• COPY now can process files that use carriage-return or carriage-return/line-feed end-of-line sequences.
Literal carriage-returns and line-feeds are no longer accepted in data values; use \r and \n instead.
• Trailing spaces are now trimmed when converting from type char(n) to varchar(n) or text. This
is what most people always expected to happen anyway.
• The data type float(p) now measures p in binary digits, not decimal digits. The new behavior follows
the SQL standard.
• Ambiguous date values now must match the ordering specified by the datestyle setting. In prior
releases, a date specification of 10/20/03 was interpreted as a date in October even if datestyle
specified that the day should be first. 7.4 will throw an error if a date specification is invalid for the
current setting of datestyle.
• The functions oidrand, oidsrand, and userfntest have been removed. These functions were de-
termined to be no longer useful.
• String literals specifying time-varying date/time values, such as ’now’ or ’today’ will no longer work
as expected in column default expressions; they now cause the time of the table creation to be the de-
fault, not the time of the insertion. Functions such as now(), current_timestamp, or current_date
should be used instead.
In previous releases, there was special code so that strings such as ’now’ were interpreted at INSERT
time and not at table creation time, but this workaround didn’t cover all cases. Release 7.4 now requires
that defaults be defined properly using functions such as now() or current_timestamp. These will
work in all situations.
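A short sketch of the recommended form (the table is hypothetical):
CREATE TABLE events (
    id serial PRIMARY KEY,
    -- DEFAULT 'now' would now be frozen at table creation time;
    -- use a function call instead:
    created_at timestamp DEFAULT now()
);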
• The dollar sign ($) is no longer allowed in operator names. It can instead be a non-first character in
identifiers. This was done to improve compatibility with other database systems, and to avoid syntax
problems when parameter placeholders ($n) are written adjacent to operators.
E.8.3. Changes
Below you will find a detailed account of the changes between release 7.4 and the previous major release.
• Allow IPv6 server connections (Nigel Kukard, Johan Jordaan, Bruce, Tom, Kurt Roeckx, Andrew Dun-
stan)
• Fix SSL to handle errors cleanly (Nathan Mueller)
In prior releases, certain SSL API error reports were not handled correctly. This release fixes those
problems.
• Update /tmp socket modification times regularly to avoid their removal (Tom)
This should help prevent /tmp directory cleaner administration scripts from removing server socket
files.
• New client/server protocol: faster, no username length limit, allow clean exit from COPY (Tom)
• Add transaction status, table ID, column ID to client/server protocol (Tom)
• Add binary I/O to client/server protocol (Tom)
• Remove autocommit server setting; move to client applications (Tom)
• New error message wording, error codes, and three levels of error detail (Tom, Joe, Peter)
• Align shared buffers on 32-byte boundary for copy speed improvement (Manfred Spraul)
Certain CPUs perform faster data copies when addresses are 32-byte aligned.
• postgres --describe-config now dumps server config variables (Aizaz Ahmed, Peter)
This option is useful for administration tools that need to know the configuration variable names and
their minimums, maximums, defaults, and descriptions.
• Add new columns in pg_settings: context, type, source, min_val, max_val (Joe)
• Make default shared_buffers 1000 and max_connections 100, if possible (Tom)
Prior versions defaulted to 64 shared buffers so PostgreSQL would start on even very old systems.
This release tests the amount of shared memory allowed by the platform and selects more reasonable
default values if possible. Of course, users are still encouraged to evaluate their resource load and size
shared_buffers accordingly.
• New pg_hba.conf record type hostnossl to prevent SSL connections (Jon Jensen)
In prior releases, there was no way to prevent SSL connections if both the client and server supported
SSL. This option allows that capability.
• Have ALTER TABLE ... ADD PRIMARY KEY add not-null constraint (Rod)
In prior releases, ALTER TABLE ... ADD PRIMARY KEY would add a unique index, but not a not-null
constraint. That is fixed in this release.
• Add ALTER SEQUENCE to modify minimum, maximum, increment, cache, cycle values (Rod)
• Add ALTER TABLE ... CLUSTER ON (Alvaro Herrera)
This command is used by pg_dump to record the cluster column for each table previously clustered.
This information is used by database-wide cluster to cluster all previously clustered tables.
• Cause FETCH and MOVE to return the number of rows fetched/moved, or zero if at the beginning/end of
cursor, per SQL standard (Bruce)
In prior releases, the row count returned by FETCH and MOVE did not accurately reflect the number of
rows processed.
• Implement SQL-compatible options FIRST, LAST, ABSOLUTE n, RELATIVE n for FETCH and MOVE
(Tom)
• Allow EXPLAIN on DECLARE CURSOR (Tom)
• Allow CLUSTER to use index marked as pre-clustered by default (Alvaro Herrera)
• Allow CLUSTER to cluster all tables (Alvaro Herrera)
This allows all previously clustered tables in a database to be reclustered with a single command.
• Have SHOW TRANSACTION ISOLATION match input to SET TRANSACTION ISOLATION (Tom)
• Have COMMENT ON DATABASE on nonlocal database generate a warning, rather than an error (Rod)
Database comments are stored in database-local tables so comments on a database have to be stored in
each database.
This allows the creation of functions that can work with any data type.
• Allow proper comparisons for arrays, including ORDER BY and DISTINCT support (Joe)
• Allow indexes on array columns (Joe)
• Allow array concatenation with || (Joe)
• Allow WHERE qualification expr op ANY/SOME/ALL (array_expr) (Joe)
This allows arrays to behave like a list of values, for purposes like SELECT * FROM tab WHERE col
IN (array_val).
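For example (the table and column are hypothetical):
SELECT * FROM tab WHERE col = ANY (ARRAY[1, 2, 3]);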
• Make EXTRACT(TIMEZONE) and SET/SHOW TIME ZONE follow the SQL convention for the sign of
time zone offsets, i.e., positive is east from UTC (Tom)
• Fix date_trunc(’quarter’, ...) (Böjthe Zoltán)
Prior releases returned an incorrect value for this function call.
• Allow only datestyle field order for date values not in ISO-8601 format (Greg)
• Add new datestyle values MDY, DMY, and YMD to set input field order; honor US and European for
backward compatibility (Tom)
• String literals like ’now’ or ’today’ will no longer work as a column default. Use functions such as
now(), current_timestamp instead. (change required for prepared statements) (Tom)
• Prevent PL/pgSQL crash when RETURN NEXT is used on a zero-row record variable (Tom)
• Make PL/Python’s spi_execute interface handle null values properly (Andrew Bosma)
• Allow PL/pgSQL to declare variables of composite types without %ROWTYPE (Tom)
• Fix PL/Python’s _quote() function to handle big integers
• Make PL/Python an untrusted language, now called plpythonu (Kevin Jacobs, Tom)
The Python language no longer supports a restricted execution environment, so the trusted version of
PL/Python was removed. If this situation changes, a version of PL/Python that can be used by non-
superusers will be readded.
• Add function PQfreemem for freeing memory on Windows, suggested for NOTIFY (Bruce)
Windows requires that memory allocated in a library be freed by a function in the same library, hence
free() doesn’t work for freeing memory allocated by libpq. PQfreemem is the proper way to free
libpq memory, especially on Windows, and is recommended for other platforms as well.
• Control SSL negotiation with sslmode values disable, allow, prefer, and require (Jon Jensen)
• Allow new error codes and levels of text (Tom)
• Allow access to the underlying table and column of a query result (Tom)
This is helpful for query-builder applications that want to know the underlying table and column names
associated with a specific result set.
• Prevent possible memory leak or core dump during libpgtcl shutdown (Tom)
• Add Informix compatibility to ECPG (Michael)
This allows ECPG to process embedded C programs that were written using certain Informix exten-
sions.
• Add type decimal to ECPG that is fixed length, for Informix (Michael)
• Allow thread-safe embedded SQL programs with configure option --enable-thread-safety
(Lee Kindness, Bruce)
This allows multiple threads to access the database at the same time.
• Prevent need for separate platform geometry regression result files (Tom)
• Improved PPC locking primitive (Reinhard Max)
E.9.2. Changes
This release contains one critical fix over 7.3.6, and some minor items.
E.10.2. Changes
E.11.2. Changes
• Ensure configure selects -fno-strict-aliasing even when an external value for CFLAGS is supplied
On some platforms, building with -fstrict-aliasing causes bugs.
• Make contrib/dblink not assume that local and remote type OIDs match (Joe)
• Quote connectby()’s start_with argument properly (Joe)
• Don’t crash when a rowtype argument to a plpgsql function is NULL
• Avoid generating invalid character encoding sequences in corner cases when planning LIKE operations
• Ensure text_position() cannot scan past end of source string in multibyte cases (Korea PostgreSQL
Users’ Group)
• Fix index optimization and selectivity estimates for LIKE operations on bytea columns (Joe)
E.12.2. Changes
E.13.2. Changes
E.14.2. Changes
E.15.2. Changes
E.16.2. Changes
• Fix a core dump of COPY TO when client/server encodings don’t match (Tom)
• Allow pg_dump to work with pre-7.2 servers (Philip)
• contrib/adddepend fixes (Tom)
• Fix problem with deletion of per-user/per-database config settings (Tom)
• contrib/vacuumlo fix (Tom)
• Allow ’password’ encryption even when pg_shadow contains MD5 passwords (Bruce)
• contrib/dbmirror fix (Steven Singer)
• Optimizer fixes (Tom)
• contrib/tsearch fixes (Teodor Sigaev, Magnus)
• Allow locale names to be mixed case (Nicolai Tufar)
• Increment libpq library’s major version number (Bruce)
• pg_hba.conf error reporting fixes (Bruce, Neil)
• Add SCO Openserver 5.0.4 as a supported platform (Bruce)
• Prevent EXPLAIN from crashing server (Tom)
• SSL fixes (Nathan Mueller)
• Prevent composite column creation via ALTER TABLE (Tom)
E.17.1. Overview
Major changes in this release:
Schemas
Schemas allow users to create objects in separate namespaces, so two people or applications can have
tables with the same name. There is also a public schema for shared tables. Table/index creation can
be restricted by removing privileges on the public schema.
Drop Column
PostgreSQL now supports the ALTER TABLE ... DROP COLUMN functionality.
Table Functions
Functions returning multiple rows and/or multiple columns are now much easier to use than before.
You can call such a “table function” in the SELECT FROM clause, treating its output like a table. Also,
PL/pgSQL functions can now return sets. (A brief example appears after this list.)
Prepared Queries
PostgreSQL now supports prepared queries, for improved performance.
Dependency Tracking
PostgreSQL now records object dependencies, which allows improvements in many areas. DROP
statements now take either CASCADE or RESTRICT to control whether dependent objects are also
dropped.
Privileges
Functions and procedural languages now have privileges, and functions can be defined to run with
the privileges of their creator.
Internationalization
Both multibyte and locale support are now always enabled.
Logging
A variety of logging options have been enhanced.
Interfaces
A large number of interfaces have been moved to http://gborg.postgresql.org where they can be de-
veloped and released independently.
Functions/Identifiers
By default, functions can now take up to 32 parameters, and identifiers can be up to 63 bytes long.
Also, OPAQUE is now deprecated: there are specific “pseudo-datatypes” to represent each of the for-
mer meanings of OPAQUE in function argument and result types.
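Referring to the table-function item above, a minimal sketch (the foo table and getfoo function are
invented for this example):
CREATE FUNCTION getfoo(int) RETURNS SETOF foo AS
'SELECT * FROM foo WHERE fooid = $1' LANGUAGE SQL;
SELECT * FROM getfoo(1) AS t;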
• Pre-7.3 databases loaded into 7.3 will not have the new object dependencies for serial columns,
unique constraints, and foreign keys. See the directory contrib/adddepend/ for a detailed descrip-
tion and a script that will add such dependencies.
• An empty string ('') is no longer allowed as the input into an integer field. Formerly, it was silently
interpreted as 0.
E.17.3. Changes
E.17.3.2. Performance
E.17.3.3. Privileges
• Server log messages now tagged with LOG, not DEBUG (Bruce)
• Add user column to pg_hba.conf (Bruce)
• Have log_connections output two lines in log file (Tom)
• Remove debug_level from postgresql.conf, now server_min_messages (Bruce)
• New ALTER DATABASE/USER ... SET command for per-user/database initialization (Peter)
• New parameters server_min_messages and client_min_messages to control which messages are sent to
the server logs or client applications (Bruce)
• Allow pg_hba.conf to specify lists of users/databases separated by commas, group names prepended
with +, and file names prepended with @ (Bruce)
• Remove secondary password file capability and pg_password utility (Bruce)
• Add variable db_user_namespace for database-local user names (Bruce)
• SSL improvements (Bear Giles)
• Make encryption of stored passwords the default (Bruce)
• Allow pg_statistics to be reset by calling pg_stat_reset() (Christopher)
• Add log_duration parameter (Bruce)
• Rename debug_print_query to log_statement (Bruce)
• Rename show_query_stats to show_statement_stats (Bruce)
• Add param log_min_error_statement to print commands to logs on error (Gavin)
E.17.3.5. Queries
• Have COPY TO output embedded carriage returns and newlines as \r and \n (Tom)
• Allow DELIMITER in COPY FROM to be 8-bit clean (Tatsuo)
• Make pg_dump use ALTER TABLE ADD PRIMARY KEY, for performance (Neil)
• Disable brackets in multistatement rules (Bruce)
• Disable VACUUM from being called inside a function (Bruce)
• Allow dropdb and other scripts to use identifiers with spaces (Bruce)
• Restrict database comment changes to the current database
• Allow comments on operators, independent of the underlying function (Rod)
• Rollback SET commands in aborted transactions (Tom)
• EXPLAIN now outputs as a query (Tom)
• Display condition expressions and sort keys in EXPLAIN (Tom)
• Add ’SET LOCAL var = value’ to set configuration variables for a single transaction (Tom)
• Allow ANALYZE to run in a transaction (Bruce)
• Improve COPY syntax using new WITH clauses, keep backward compatibility (Bruce)
• Fix pg_dump to consistently output tags in non-ASCII dumps (Bruce)
• Make foreign key constraints clearer in dump file (Rod)
• Add COMMENT ON CONSTRAINT (Rod)
• Allow COPY TO/FROM to specify column names (Brent Verner)
• Dump UNIQUE and PRIMARY KEY constraints as ALTER TABLE (Rod)
• Have SHOW output a query result (Joe)
• Generate failure on short COPY lines rather than pad NULLs (Neil)
• Fix CLUSTER to preserve all table attributes (Alvaro Herrera)
• New pg_settings table to view/modify GUC settings (Joe)
• Add smart quoting, portability improvements to pg_dump output (Peter)
• Dump serial columns out as SERIAL (Tom)
• Enable large file support, >2G for pg_dump (Peter, Philip Warner, Bruce)
• Disallow TRUNCATE on tables that are involved in referential constraints (Rod)
• Have TRUNCATE also auto-truncate the toast table of the relation (Tom)
• Add clusterdb utility that will auto-cluster an entire database based on previous CLUSTER operations
(Alvaro Herrera)
• Overhaul pg_dumpall (Peter)
• Allow REINDEX of TOAST tables (Tom)
• Implemented START TRANSACTION, per SQL99 (Neil)
• Fix rare index corruption when a page split affects bulk delete (Tom)
• Fix ALTER TABLE ... ADD COLUMN for inheritance (Alvaro Herrera)
E.17.3.9. Internationalization
• Add additional encodings: Korean (JOHAB), Thai (WIN874), Vietnamese (TCVN), Arabic
(WIN1256), Simplified Chinese (GBK), Korean (UHC) (Eiji Tokuya)
• Enable locale support by default (Peter)
• Add locale variables (Peter)
• Escape bytes >= 0x7f for multibyte in PQescapeBytea/PQunescapeBytea (Tatsuo)
• Add locale awareness to regular expression character classes
• Enable multibyte support by default (Tatsuo)
• Add GB18030 multibyte support (Bill Huang)
• Add CREATE/DROP CONVERSION, allowing loadable encodings (Tatsuo, Kaori)
• Add pg_conversion table (Tatsuo)
• Add SQL99 CONVERT() function (Tatsuo)
• pg_dumpall, pg_controldata, and pg_resetxlog now national-language aware (Peter)
• New and updated translations
E.17.3.11. psql
• Don’t lowercase psql \connect database name for 7.2.0 compatibility (Tom)
• Add psql \timing to time user queries (Greg Sabino Mullane)
• Have psql \d show index information (Greg Sabino Mullane)
• New psql \dD shows domains (Jonathan Eisler)
• Allow psql to show rules on views (Paul ?)
• Fix for psql variable substitution (Tom)
• Allow psql \d to show temporary table structure (Tom)
• Allow psql \d to show foreign keys (Rod)
• Fix \? to honor \pset pager (Bruce)
• Have psql report its version number on startup (Tom)
• Allow \copy to specify column names (Tom)
E.17.3.12. libpq
E.17.3.13. JDBC
E.17.3.16. Contrib
E.18.2. Changes
E.19.2. Changes
• Fix corner case for btree search in parallel with first root page split
• Fix buffer overrun in to_ascii (Guido Notari)
• Fix core dump in deadlock detection on machines where char is unsigned
• Fix failure to respond to pg_ctl stop -m fast after Async_NotifyHandler runs
• Repair memory leaks in pg_dump
• Avoid conflict with system definition of isblank() function or macro
This release contains a variety of fixes for version 7.2.3, including fixes to prevent possible data loss.
E.20.2. Changes
• Fix some additional cases of VACUUM "No one parent tuple was found" error
• Prevent VACUUM from being called inside a function (Bruce)
• Ensure pg_clog updates are sync’d to disk before marking checkpoint complete
• Avoid integer overflow during large hash joins
• Make GROUP commands work when pg_group.grolist is large enough to be toasted
• Fix errors in datetime tables; some timezone names weren’t being recognized
• Fix integer overflows in circle_poly(), path_encode(), path_add() (Neil)
• Repair long-standing logic errors in lseg_eq(), lseg_ne(), lseg_center()
This release contains a variety of fixes for version 7.2.2, including fixes to prevent possible data loss.
E.21.2. Changes
E.22.2. Changes
E.23.2. Changes
E.24.1. Overview
This release improves PostgreSQL for use in high-volume applications.
Major changes in this release:
VACUUM
Vacuuming no longer locks tables, thus allowing normal user access during the vacuum. A new
VACUUM FULL command does old-style vacuum by locking the table and shrinking the on-disk copy
of the table.
Transactions
There is no longer a problem with installations that exceed four billion transactions.
OIDs
OIDs are now optional. Users can now create tables without OIDs for cases where OID usage is
excessive.
Optimizer
The system now computes histogram column statistics during ANALYZE, allowing much better opti-
mizer choices.
Security
A new MD5 encryption option allows more secure storage and transfer of passwords. A new Unix-
domain socket authentication option is available on Linux and BSD systems.
Statistics
Administrators can use the new table access statistics module to get fine-grained information about
table and index usage.
Internationalization
Program and library messages can now be displayed in several languages.
• The semantics of the VACUUM command have changed in this release. You may wish to update your
maintenance procedures accordingly.
• In this release, comparisons using = NULL will always return false (or NULL, more precisely). Previous
releases automatically transformed this syntax to IS NULL. The old behavior can be re-enabled using
a postgresql.conf parameter.
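For example (the table and column are hypothetical):
SELECT * FROM tab WHERE col IS NULL;   -- finds rows whose col is null
SELECT * FROM tab WHERE col = NULL;    -- now returns no rows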
• The pg_hba.conf and pg_ident.conf configuration is now only reloaded after receiving a SIGHUP
signal, not with each connection.
• The function octet_length() now returns the uncompressed data length.
• The date/time value ’current’ is no longer available. You will need to rewrite your applications.
• The timestamp(), time(), and interval() functions are no longer available. Instead of
timestamp(), use timestamp ’string’ or CAST.
The SELECT ... LIMIT #,# syntax will be removed in the next release. You should change your
queries to use separate LIMIT and OFFSET clauses, e.g. LIMIT 10 OFFSET 20.
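That is, such a query should now be written in the following form (the table and column are
hypothetical):
SELECT * FROM tab ORDER BY id LIMIT 10 OFFSET 20;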
E.24.3. Changes
E.24.3.2. Performance
E.24.3.3. Privileges
• Interpretation of some time zone abbreviations as Australian rather than North American now settable
at run time (Bruce)
• New parameter to set default transaction isolation level (Peter E)
• New parameter to enable conversion of "expr = NULL" into "expr IS NULL", off by default (Peter E)
• New parameter to control memory usage by VACUUM (Tom)
• New parameter to set client authentication timeout (Tom)
• New parameter to set maximum number of open files (Tom)
E.24.3.6. Queries
• Statements added by INSERT rules now execute after the INSERT (Jan)
• Prevent unadorned relation names in target list (Bruce)
• NULLs now sort after all normal values in ORDER BY (Tom)
• New IS UNKNOWN, IS NOT UNKNOWN Boolean tests (Tom)
• New SHARE UPDATE EXCLUSIVE lock mode (Tom)
• New EXPLAIN ANALYZE command that shows run times and row counts (Martijn van Oosterhout)
• Fix problem with LIMIT and subqueries (Tom)
• Fix for LIMIT, DISTINCT ON pushed into subqueries (Tom)
• Fix nested EXCEPT/INTERSECT (Tom)
• Fix for ALTER TABLE / ADD CONSTRAINT ... CHECK with inherited tables (Stephan Szabo)
• ALTER TABLE RENAME update foreign-key trigger arguments correctly (Brent Verner)
• DROP AGGREGATE and COMMENT ON AGGREGATE now accept an aggtype (Tom)
• Add automatic return type data casting for SQL functions (Tom)
• Allow GiST indexes to handle NULLs and multikey indexes (Oleg Bartunov, Teodor Sigaev, Tom)
• Enable partial indexes (Martijn van Oosterhout)
• SUM(), AVG(), COUNT() now use int8 internally for speed (Tom)
• Add convert(), convert2() (Tatsuo)
• New function bit_length() (Peter E)
• Make the "n" in CHAR(n)/VARCHAR(n) represent letters, not bytes (Tatsuo)
• CHAR(), VARCHAR() now reject strings that are too long (Peter E)
• BIT VARYING now rejects bit strings that are too long (Peter E)
• BIT now rejects bit strings that do not match declared size (Peter E)
• INET, CIDR text conversion functions (Alex Pilosov)
• INET, CIDR operators << and <<= indexable (Alex Pilosov)
• Bytea \### now requires valid three digit octal number
• Bytea comparison improvements, now supports =, <>, >, >=, <, and <=
• Bytea now supports B-tree indexes
• Bytea now supports LIKE, LIKE...ESCAPE, NOT LIKE, NOT LIKE...ESCAPE
E.24.3.10. Internationalization
E.24.3.11. PL/pgSQL
• Now uses portals for SELECT loops, allowing huge result sets (Jan)
• CURSOR and REFCURSOR support (Jan)
• Can now return open cursors (Jan)
• Add ELSEIF (Klaus Reger)
• Improve PL/pgSQL error reporting, including location of error (Tom)
• Allow IS or FOR key words in cursor declaration, for compatibility (Bruce)
• Fix for SELECT ... FOR UPDATE (Tom)
• Fix for PERFORM returning multiple rows (Tom)
• Make PL/pgSQL use the server’s type coercion code (Tom)
• Memory leak fix (Jan, Tom)
• Make trailing semicolon optional (Tom)
E.24.3.12. PL/Perl
E.24.3.13. PL/Tcl
E.24.3.14. PL/Python
E.24.3.15. psql
E.24.3.16. libpq
E.24.3.17. JDBC
E.24.3.18. ODBC
E.24.3.19. ECPG
E.24.3.23. Contrib
E.25.2. Changes
E.26.2. Changes
E.27.2. Changes
This release focuses on removing limitations that have existed in the PostgreSQL code for many years.
Major changes in this release:
E.28.2. Changes
Bug Fixes
---------
Many multibyte/Unicode/locale fixes (Tatsuo and others)
More reliable ALTER TABLE RENAME (Tom)
Kerberos V fixes (David Wragg)
Fix for INSERT INTO...SELECT where targetlist has subqueries (Tom)
Prompt username/password on standard error (Bruce)
Large objects inv_read/inv_write fixes (Tom)
Fixes for to_char(), to_date(), to_ascii(), and to_timestamp() (Karel,
Daniel Baldoni)
Prevent query expressions from leaking memory (Tom)
Allow UPDATE of array elements (Tom)
Wake up lock waiters during cancel (Hiroshi)
Fix rare cursor crash when using hash join (Tom)
Fix for DROP TABLE/INDEX in rolled-back transaction (Hiroshi)
Fix psql crash from \l+ if MULTIBYTE enabled (Peter E)
Fix truncation of rule names during CREATE VIEW (Ross Reedstrom)
Fix PL/perl (Alex Kapranoff)
Disallow LOCK on views (Mark Hollomon)
Disallow INSERT/UPDATE/DELETE on views (Mark Hollomon)
Disallow DROP RULE, CREATE INDEX, TRUNCATE on views (Mark Hollomon)
Allow PL/pgSQL to accept non-ASCII identifiers (Tatsuo)
Allow views to properly handle GROUP BY, aggregates, DISTINCT (Tom)
Fix rare failure with TRUNCATE command (Tom)
Allow UNION/INTERSECT/EXCEPT to be used with ALL, subqueries, views,
DISTINCT, ORDER BY, SELECT...INTO (Tom)
Fix parser failures during aborted transactions (Tom)
Allow temporary relations to properly clean up indexes (Bruce)
Fix VACUUM problem with moving rows in same page (Tom)
Modify pg_dump to better handle user-defined items in template1 (Philip)
Allow LIMIT in VIEW (Tom)
Require cursor FETCH to honor LIMIT (Tom)
Allow PRIMARY/FOREIGN Key definitions on inherited columns (Stephan)
Allow ORDER BY, LIMIT in subqueries (Tom)
Allow UNION in CREATE RULE (Tom)
Make ALTER/DROP TABLE rollback-able (Vadim, Tom)
Store initdb collation in pg_control so collation cannot be changed (Tom)
Fix INSERT...SELECT with rules (Tom)
Fix FOR UPDATE inside views and subselects (Tom)
Fix OVERLAPS operators to conform to SQL92 spec regarding NULLs (Tom)
Fix lpad() and rpad() to handle length less than input string (Tom)
Fix use of NOTIFY in some rules (Tom)
Overhaul btree code (Tom)
Fix NOT NULL use in PL/pgSQL variables (Tom)
Overhaul GIST code (Oleg)
Fix CLUSTER to preserve constraints and column default (Tom)
Enhancements
------------
Add OUTER JOINs (Tom)
Function manager overhaul (Tom)
Allow ALTER TABLE RENAME on indexes (Tom)
Improve CLUSTER (Tom)
Improve ps status display for more platforms (Peter E, Marc)
Improve CREATE FUNCTION failure message (Ross)
JDBC improvements (Peter, Travis Bauer, Christopher Cain, William Webber,
Gunnar)
Grand Unified Configuration scheme/GUC. Many options can now be set in
data/postgresql.conf, postmaster/postgres flags, or SET commands (Peter E)
Improved handling of file descriptor cache (Tom)
New warning code about auto-created table alias entries (Bruce)
Overhaul initdb process (Tom, Peter E)
Overhaul of inherited tables; inherited tables now accessed by default;
new ONLY key word prevents it (Chris Bitmead, Tom)
ODBC cleanups/improvements (Nick Gorham, Stephan Szabo, Zoltan Kovacs,
Michael Fork)
Allow renaming of temp tables (Tom)
Overhaul memory manager contexts (Tom)
pg_dumpall uses CREATE USER or CREATE GROUP rather than using COPY (Peter E)
Overhaul pg_dump (Philip Warner)
Allow pg_hba.conf secondary password file to specify only username (Peter E)
Allow TEMPORARY or TEMP key word when creating temporary tables (Bruce)
New memory leak checker (Karel)
New SET SESSION CHARACTERISTICS (Thomas)
Allow nested block comments (Thomas)
Add WITHOUT TIME ZONE type qualifier (Thomas)
New ALTER TABLE ADD CONSTRAINT (Stephan)
Use NUMERIC accumulators for INTEGER aggregates (Tom)
Overhaul aggregate code (Tom)
New VARIANCE and STDDEV() aggregates
Improve dependency ordering of pg_dump (Philip)
New pg_restore command (Philip)
New pg_dump tar output option (Philip)
New pg_dump of large objects (Philip)
New ESCAPE option to LIKE (Thomas)
New case-insensitive LIKE - ILIKE (Thomas)
Allow functional indexes to use binary-compatible type (Tom)
Allow SQL functions to be used in more contexts (Tom)
New pg_config utility (Peter E)
New PL/pgSQL EXECUTE command which allows dynamic SQL and utility statements
(Jan)
New PL/pgSQL GET DIAGNOSTICS statement for SPI value access (Jan)
New quote_identifiers() and quote_literal() functions (Jan)
New ALTER TABLE table OWNER TO user command (Mark Hollomon)
Allow subselects in FROM, i.e. FROM (SELECT ...) [AS] alias (Tom)
Update PyGreSQL to version 3.1 (D’Arcy)
Types
-----
Fix INET/CIDR type ordering and add new functions (Tom)
Make OID behave as an unsigned type (Tom)
Allow BIGINT as synonym for INT8 (Peter E)
New int2 and int8 comparison operators (Tom)
New BIT and BIT VARYING types (Adriaan Joubert, Tom, Peter E)
CHAR() no longer faster than VARCHAR() because of TOAST (Tom)
New GIST seg/cube examples (Gene Selkov)
Improved round(numeric) handling (Tom)
Fix CIDR output formatting (Tom)
New CIDR abbrev() function (Tom)
Performance
-----------
Write-Ahead Log (WAL) to provide crash recovery with less performance
overhead (Vadim)
ANALYZE stage of VACUUM no longer exclusively locks table (Bruce)
Reduced file seeks (Denis Perchine)
Improve BTREE code for duplicate keys (Tom)
Store all large objects in a single table (Denis Perchine, Tom)
Improve memory allocation performance (Karel, Tom)
Source Code
-----------
New function manager call conventions (Tom)
SGI portability fixes (David Kaelbling)
New configure --enable-syslog option (Peter E)
New BSDI README (Bruce)
E.29.2. Changes
E.30.2. Changes
E.31.2. Changes
Add SJIS UDC (NEC selection IBM kanji) support (Eiji Tokuya)
Fix too long syslog message (Tatsuo)
Fix problem with quoted indexes that are too long (Tom)
JDBC ResultSet.getTimestamp() fix (Gregory Krasnow & Floyd Marinescu)
ecpg changes (Michael)
This release contains improvements in many areas, demonstrating the continued growth of PostgreSQL.
There are more improvements and fixes in 7.0 than in any previous release. The developers have confi-
dence that this is the best release yet; we do our best to put out only solid releases, and this one is no
exception.
Major changes in this release:
Foreign Keys
Foreign keys are now implemented, with the exception of PARTIAL MATCH foreign keys. Many
users have been asking for this feature, and we are pleased to offer it.
Optimizer Overhaul
Continuing on work started a year ago, the optimizer has been improved, allowing better query plan
selection and faster performance with less memory usage.
Updated psql
psql, our interactive terminal monitor, has been updated with a variety of new features. See the psql
manual page for details.
Join Syntax
SQL92 join syntax is now supported, though only as INNER JOIN for this release. JOIN, NATURAL
JOIN, JOIN/USING, and JOIN/ON are available, as are column correlation names.
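A brief sketch of the newly accepted forms (the emp and dept tables are hypothetical):
SELECT e.name AS employee, d.name AS department
FROM emp e JOIN dept d ON e.deptno = d.deptno;
SELECT * FROM emp NATURAL JOIN dept;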
• The date/time types datetime and timespan have been superseded by the SQL92-defined types
timestamp and interval. Although there has been some effort to ease the transition by allowing
PostgreSQL to recognize the deprecated type names and translate them to the new type names, this
mechanism may not be completely transparent to your existing application.
• The optimizer has been substantially improved in the area of query cost estimation. In some cases,
this will result in decreased query times as the optimizer makes a better choice for the preferred plan.
However, in a small number of cases, usually involving pathological distributions of data, your query
times may go up. If you are dealing with large amounts of data, you may want to check your queries to
verify performance.
• The JDBC and ODBC interfaces have been upgraded and extended.
• The string function CHAR_LENGTH is now a native function. Previous versions translated this into a
call to LENGTH, which could result in ambiguity with other types implementing LENGTH such as the
geometric types.
E.32.2. Changes
Bug Fixes
---------
Prevent function calls exceeding maximum number of arguments (Tom)
Improve CASE construct (Tom)
Fix SELECT coalesce(f1,0) FROM int4_tbl GROUP BY f1 (Tom)
Fix SELECT sentence.words[0] FROM sentence GROUP BY sentence.words[0] (Tom)
Fix GROUP BY scan bug (Tom)
Improvements in SQL grammar processing (Tom)
Fix for views involved in INSERT ... SELECT ... (Tom)
Fix for SELECT a/2, a/2 FROM test_missing_target GROUP BY a/2 (Tom)
Fix for subselects in INSERT ... SELECT (Tom)
Prevent INSERT ... SELECT ... ORDER BY (Tom)
Fixes for relations greater than 2GB, including vacuum
Improve propagating system table changes to other backends (Tom)
Improve propagating user table changes to other backends (Tom)
Fix handling of temp tables in complex situations (Bruce, Tom)
Allow table locking at table open, improving concurrent reliability (Tom)
Properly quote sequence names in pg_dump (Ross J. Reedstrom)
Prevent DROP DATABASE while others accessing
Prevent any rows from being returned by GROUP BY if no rows processed (Tom)
Fix ’SELECT COUNT(1) FROM table WHERE ...’ if no rows matching WHERE (Tom)
Fix pg_upgrade so it works for MVCC (Tom)
Fix for SELECT ... WHERE x IN (SELECT ... HAVING SUM(x) > 1) (Tom)
Fix for "f1 datetime DEFAULT ’now’" (Tom)
Fix problems with CURRENT_DATE used in DEFAULT (Tom)
Allow comment-only lines, and ;;; lines too. (Tom)
Improve recovery after failed disk writes, disk full (Hiroshi)
Fix cases where table is mentioned in FROM but not joined (Tom)
Allow HAVING clause without aggregate functions (Tom)
Fix for "--" comment and no trailing newline, as seen in perl interface
Improve pg_dump failure error reports (Bruce)
Enhancements
------------
New CLI interface include file sqlcli.h, based on SQL3/SQL98
Remove all limits on query length, row length limit still exists (Tom)
Update jdbc protocol to 2.0 (Jens Glaser)
Add TRUNCATE command to quickly truncate relation (Mike Mascari)
Fix to give super user and createdb user proper update catalog rights (Peter E)
Allow --with-mb=SQL_ASCII
Increase maximum number of index keys to 16 (Bruce)
Increase maximum number of function arguments to 16 (Bruce)
Allow configuration of maximum number of index keys and arguments (Bruce)
Allow unprivileged users to change their passwords (Peter E)
Password authentication enabled; required for new users (Peter E)
Disallow dropping a user who owns a database (Peter E)
Change initdb option --with-mb to --enable-multibyte
Add option for initdb to prompt for superuser password (Peter E)
Allow complex type casts like col::numeric(9,2) and col::int2::float8 (Tom)
Updated user interfaces on initdb, initlocation, pg_dump, ipcclean (Peter E)
New pg_char_to_encoding() and pg_encoding_to_char() functions (Tatsuo)
libpq non-blocking mode (Alfred Perlstein)
Improve conversion of types in casts that don’t specify a length
New plperl internal programming language (Mark Hollomon)
Allow COPY IN to read files that do not end with a newline (Tom)
Indicate when long identifiers are truncated (Tom)
Allow aggregates to use type equivalency (Peter E)
Add Oracle’s to_char(), to_date(), to_datetime(), to_timestamp(), to_number()
conversion functions (Karel Zak)
Add SELECT DISTINCT ON (expr [, expr ...]) targetlist ... (Tom)
Check to be sure ORDER BY is compatible with the DISTINCT operation (Tom)
Add NUMERIC and int8 types to ODBC
Improve EXPLAIN results for Append, Group, Agg, Unique (Tom)
Add ALTER TABLE ... ADD FOREIGN KEY (Stephan Szabo)
Allow SELECT .. FOR UPDATE in PL/pgSQL (Hiroshi)
Enable backward sequential scan even after reaching EOF (Hiroshi)
Add btree indexing of boolean values, >= and <= (Don Baccus)
Print current line number when COPY FROM fails (Massimo)
Recognize POSIX time zone e.g. "PST+8" and "GMT-8" (Thomas)
Add DEC as synonym for DECIMAL (Thomas)
Add SESSION_USER as SQL92 key word, same as CURRENT_USER (Thomas)
Implement SQL92 column aliases (aka correlation names) (Thomas)
Implement SQL92 join syntax (Thomas)
Make INTERVAL reserved word allowed as a column identifier (Thomas)
Implement REINDEX command (Hiroshi)
Accept ALL in aggregate function SUM(ALL col) (Tom)
Prevent GROUP BY from using column aliases (Tom)
New psql \encoding option (Tatsuo)
Allow PQrequestCancel() to terminate when in waiting-for-lock state (Hiroshi)
Allow negation of a negative number in all cases
Add ecpg descriptors (Christof, Michael)
Allow CREATE VIEW v AS SELECT f1::char(8) FROM tbl
Allow casts with length, like foo::char(8)
New libpq functions PQsetClientEncoding(), PQclientEncoding() (Tatsuo)
Add support for SJIS user defined characters (Tatsuo)
Larger views/rules supported
Make libpq’s PQconndefaults() thread-safe (Tom)
Disable // as comment to be ANSI conforming, should use -- (Tom)
Allow column aliases on views CREATE VIEW name (collist)
Fixes for views with subqueries (Tom)
Allow UPDATE table SET fld = (SELECT ...) (Tom)
SET command options no longer require quotes
Types
-----
Many array fixes (Tom)
Allow bare column names to be subscripted as arrays (Tom)
Improve type casting of int and float constants (Tom)
Cleanups for int8 inputs, range checking, and type conversion (Tom)
Fix for SELECT timespan(’21:11:26’::time) (Tom)
netmask(’x.x.x.x/0’) is 255.255.255.255 instead of 0.0.0.0 (Oleg Sharoiko)
Add btree index on NUMERIC (Jan)
Perl fix for large objects containing NUL characters (Douglas Thomson)
ODBC fix for large objects (free)
Fix indexing of cidr data type
Fix for Ethernet MAC addresses (macaddr type) comparisons
Fix for date/time types when overflows happened in computations (Tom)
Allow array on int8 (Peter E)
Fix for rounding/overflow of NUMERIC type, like NUMERIC(4,4) (Tom)
Allow NUMERIC arrays
Fix bugs in NUMERIC ceil() and floor() functions (Tom)
Make char_length()/octet_length() include trailing blanks (Tom)
Made abstime/reltime use int4 instead of time_t (Peter E)
New lztext data type for compressed text fields
Revise code to handle coercion of int and float constants (Tom)
Start of new code to implement a BIT and BIT VARYING type (Adriaan Joubert)
NUMERIC now accepts scientific notation (Tom)
NUMERIC to int4 rounds (Tom)
Convert float4/8 to NUMERIC properly (Tom)
Allow type conversion with NUMERIC (Thomas)
Make ISO date style (2000-02-16 09:33) the default (Thomas)
Add NATIONAL CHAR [ VARYING ] (Thomas)
Allow NUMERIC round and trunc to accept negative scales (Tom)
New TIME WITH TIME ZONE type (Thomas)
Add MAX()/MIN() on time type (Thomas)
Add abs(), mod(), fac() for int8 (Thomas)
Rename functions to round(), sqrt(), cbrt(), pow() for float8 (Thomas)
Add transcendental math functions (e.g. sin(), acos()) for float8 (Thomas)
Add exp() and ln() for NUMERIC type
Rename NUMERIC power() to pow() (Thomas)
Improved TRANSLATE() function (Edwin Ramirez, Tom)
Allow X=-Y operators (Tom)
Allow SELECT float8(COUNT(*))/(SELECT COUNT(*) FROM t) FROM t GROUP BY f1; (Tom)
Allow LOCALE to use indexes in regular expression searches (Tom)
Allow creation of functional indexes to use default types
Performance
-----------
Prevent exponential space consumption with many AND’s and OR’s (Tom)
Collect attribute selectivity values for system columns (Tom)
Reduce memory usage of aggregates (Tom)
Fix for LIKE optimization to use indexes with multibyte encodings (Tom)
Fix r-tree index optimizer selectivity (Thomas)
Improve optimizer selectivity computations and functions (Tom)
Optimize btree searching for cases where many equal keys exist (Tom)
Enable fast LIKE index processing only if index present (Tom)
Re-use free space on index pages with duplicates (Tom)
Improve hash join processing (Tom)
Prevent descending sort if result is already sorted(Hiroshi)
Allow commuting of index scan query qualifications (Tom)
Prefer index scans in cases where ORDER BY/GROUP BY is required (Tom)
Allocate large memory requests in fix-sized chunks for performance (Tom)
Fix vacuum’s performance by reducing memory allocation requests (Tom)
Implement constant-expression simplification (Bernard Frankpitt, Tom)
Use secondary columns to determine start of index scan (Hiroshi)
Prevent quadruple use of disk space when doing internal sorting (Tom)
Faster sorting by calling fewer functions (Tom)
Create system indexes to match all system caches (Bruce, Hiroshi)
Make system caches use system indexes (Bruce)
Make all system indexes unique (Bruce)
Improve pg_statistics management for VACUUM speed improvement (Tom)
Flush backend cache less frequently (Tom, Hiroshi)
COPY now reuses previous memory allocation, improving performance (Tom)
Improve optimization cost estimation (Tom)
Improve optimizer estimate of range queries x > lowbound AND x < highbound (Tom)
Use DNF instead of CNF where appropriate (Tom, Taral)
Further cleanup for OR-of-AND WHERE-clauses (Tom)
Make use of index in OR clauses (x = 1 AND y = 2) OR (x = 2 AND y = 4) (Tom)
Smarter optimizer computations for random index page access (Tom)
New SET variable to control optimizer costs (Tom)
Optimizer queries based on LIMIT, OFFSET, and EXISTS qualifications (Tom)
Reduce optimizer internal housekeeping of join paths for speedup (Tom)
Major subquery speedup (Tom)
Fewer fsync writes when fsync is not disabled (Tom)
Improved LIKE optimizer estimates (Tom)
Prevent fsync in SELECT-only queries (Vadim)
Make index creation use psort code, because it is now faster (Tom)
Allow creation of sort temp tables > 1 Gig
This is basically a cleanup release for 6.5.2. We have added a new PgAccess that was missing in 6.5.2,
and installed an NT-specific fix.
E.33.2. Changes
This is basically a cleanup release for 6.5.1. We have fixed a variety of problems reported by 6.5.1 users.
E.34.2. Changes
subselect+CASE fixes(Tom)
Add SHLIB_LINK setting for solaris_i386 and solaris_sparc ports(Daren Sefcik)
Fixes for CASE in WHERE join clauses(Tom)
Fix BTScan abort(Tom)
Repair the check for redundant UNIQUE and PRIMARY KEY indexes(Thomas)
Improve it so that it checks for multicolumn constraints(Thomas)
Fix for Windows making problem with MB enabled(Hiroki Kataoka)
Allow BSD yacc and bison to compile pl code(Bruce)
Fix SET NAMES so it works
int8 fixes(Thomas)
Fix vacuum’s memory consumption(Hiroshi,Tatsuo)
Reduce the total memory consumption of vacuum(Tom)
Fix for timestamp(datetime)
Rule deparsing bugfixes(Tom)
Fix quoting problems in mkMakefile.tcldefs.sh.in and mkMakefile.tkdefs.sh.in(Tom)
Re-use space on index pages freed by vacuum(Vadim)
document -x for pg_dump(Bruce)
Fix for unary operators in rule deparser(Tom)
Comment out FileUnlink of excess segments during mdtruncate()(Tom)
IRIX linking fix from Yu Cao <[email protected]>
Repair logic error in LIKE: should not return LIKE_ABORT
when reaching end of pattern before end of text(Tom)
Repair incorrect cleanup of heap memory allocation during transaction abort(Tom)
Updated version of pgaccess 0.98
This is basically a cleanup release for 6.5. We have fixed a variety of problems reported by 6.5 users.
E.35.2. Changes
This release marks a major step in the development team’s mastery of the source code we inherited from
Berkeley. You will see we are now easily adding major features, thanks to the increasing size and experi-
ence of our world-wide development team.
Here is a brief summary of the more notable changes:
Note: Note that if you run a transaction in SERIALIZABLE mode then you must
execute the LOCK commands above before execution of any DML statement
(SELECT/INSERT/DELETE/UPDATE/FETCH/COPY_TO) in the transaction.
These inconveniences will disappear in the future when the ability to read dirty (uncommitted) data
(regardless of isolation level) and true referential integrity are implemented.
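For illustration only, here is a minimal sketch of the ordering the note describes; the table name and lock mode are placeholders, and the actual LOCK statements to use are the ones discussed above:
BEGIN;
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
LOCK TABLE mytab IN SHARE ROW EXCLUSIVE MODE;   -- acquire table locks first
SELECT * FROM mytab WHERE id = 1;               -- DML only after the LOCKs
UPDATE mytab SET val = val + 1 WHERE id = 1;
COMMIT;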
E.36.2. Changes
Bug Fixes
---------
Fix text<->float8 and text<->float4 conversion functions(Thomas)
Fix for creating tables with mixed-case constraints(Billy)
Change exp()/pow() behavior to generate error on underflow/overflow(Jan)
Fix bug in pg_dump -z
Memory overrun cleanups(Tatsuo)
Fix for lo_import crash(Tatsuo)
Adjust handling of data type names to suppress double quotes(Thomas)
Use type coercion for matching columns and DEFAULT(Thomas)
Fix deadlock so it only checks once after one second of sleep(Bruce)
Enhancements
------------
The 6.4.1 release was improperly packaged. This also has one additional bug fix.
E.37.2. Changes
This is basically a cleanup release for 6.4. We have fixed a variety of problems reported by 6.4 users.
E.38.2. Changes
SunOS fixes(Tom)
Change exp() behavior to generate error on underflow(Thomas)
pg_dump fixes for memory leak, inheritance constraints, layout change
update pgaccess to 0.93
Fix prototype for 64-bit platforms
Multibyte fixes(Tatsuo)
New ecpg man page
Fix memory overruns(Tatsuo)
Fix for lo_import() crash(Bruce)
Better search for install program(Tom)
Timezone fixes(Tom)
HP-UX fixes(Tom)
Use implicit type coercion for matching DEFAULT values(Thomas)
Add routines to help with single-byte (internal) character type(Thomas)
Compilation of libpq for Windows fixes(Magnus)
Upgrade to PyGreSQL 2.2(D’Arcy)
There are many new features and improvements in this release. Thanks to our developers and maintainers,
nearly every aspect of the system has received some attention since the previous release. Here is a brief,
incomplete summary:
• Views and rules are now functional thanks to extensive new code in the rewrite rules system from Jan
Wieck. He also wrote a chapter on it for the Programmer’s Guide.
• Jan also contributed a second procedural language, PL/pgSQL, to go with the original PL/pgTCL pro-
cedural language he contributed last release.
• We have optional multiple-byte character set support from Tatsuo Ishii to complement our existing
locale support.
• Client/server communications has been cleaned up, with better support for asynchronous messages and
interrupts thanks to Tom Lane.
• The parser will now perform automatic type coercion to match arguments to available operators and
functions, and to match columns and expressions with target columns. This uses a generic mechanism
which supports the type extensibility features of PostgreSQL. There is a new chapter in the User’s
Guide which covers this topic.
• Three new data types have been added. Two types, inet and cidr, support various forms of IP network,
subnet, and machine addressing. There is now an 8-byte integer type available on some platforms. See
the chapter on data types in the User’s Guide for details. A fourth type, serial, is now supported by
the parser as an amalgam of the int4 type, a sequence, and a unique index.
• Several more SQL92-compatible syntax features have been added, including INSERT DEFAULT
VALUES
• The automatic configuration and installation system has received some attention, and should be more
robust for more platforms than it has ever been.
E.39.2. Changes
Bug Fixes
---------
Fix for a tiny memory leak in PQsetdb/PQfinish(Bryan)
Remove char2-16 data types, use char/varchar(Darren)
PQfn() now handles NOTICE messages(Anders)
Reduced busywaiting overhead for spinlocks with many backends (dg)
Stuck spinlock detection (dg)
Fix up "ISO-style" timespan decoding and encoding(Thomas)
Fix problem with table drop after rollback of transaction(Vadim)
Change error message and remove non-functional update message(Vadim)
Fix for COPY array checking
Fix for SELECT 1 UNION SELECT NULL
Fix for buffer leaks in large object calls(Pascal)
Change owner from oid to int4 type(Bruce)
Fix a bug in the oracle compatibility functions btrim() ltrim() and rtrim()
Fix for shared invalidation cache overflow(Massimo)
Prevent file descriptor leaks in failed COPY’s(Bruce)
Fix memory leak in libpgtcl’s pg_select(Constantin)
Fix problems with username/passwords over 8 characters(Tom)
Fix problems with handling of asynchronous NOTIFY in backend(Tom)
Fix of many bad system table entries(Tom)
Enhancements
------------
Upgrade ecpg and ecpglib, see src/interfaces/ecpg/ChangeLog(Michael)
Show the index used in an EXPLAIN(Zeugswetter)
EXPLAIN invokes rule system and shows plan(s) for rewritten queries(Jan)
Multibyte awareness of many data types and functions, via configure(Tatsuo)
New configure --with-mb option(Tatsuo)
New initdb --pgencoding option(Tatsuo)
New createdb -E multibyte option(Tatsuo)
Select version(); now returns PostgreSQL version(Jeroen)
libpq now allows asynchronous clients(Tom)
Allow cancel from client of backend query(Tom)
psql now cancels query with Control-C(Tom)
libpq users need not issue dummy queries to get NOTIFY messages(Tom)
NOTIFY now sends sender’s PID, so you can tell whether it was your own(Tom)
PGresult struct now includes associated error message, if any(Tom)
Define "tz_hour" and "tz_minute" arguments to date_part()(Thomas)
Add routines to convert between varchar and bpchar(Thomas)
Add routines to allow sizing of varchar and bpchar into target columns(Thomas)
Add bit flags to support timezonehour and minute in data retrieval(Thomas)
Allow more variations on valid floating point numbers (e.g. ".1", "1e6")(Thomas)
Fixes for unary minus parsing with leading spaces(Thomas)
Implement TIMEZONE_HOUR, TIMEZONE_MINUTE per SQL92 specs(Thomas)
Check for and properly ignore FOREIGN KEY column constraints(Thomas)
Define USER as synonym for CURRENT_USER per SQL92 specs(Thomas)
Enable HAVING clause but no fixes elsewhere yet.
Make "char" type a synonym for "char(1)" (actually implemented as bpchar)(Thomas)
Save string type if specified for DEFAULT clause handling(Thomas)
Coerce operations involving different data types(Thomas)
Allow some index use for columns of different types(Thomas)
Add capabilities for automatic type conversion(Thomas)
Cleanups for large objects, so file is truncated on open(Peter)
Readline cleanups(Tom)
Allow psql \f \ to make spaces as delimiter(Bruce)
Pass pg_attribute.atttypmod to the frontend for column field lengths(Tom,Bruce)
Msql compatibility library in /contrib(Aldrin)
Remove the requirement that ORDER/GROUP BY clause identifiers be
included in the target list(David)
Convert columns to match columns in UNION clauses(Thomas)
Remove fork()/exec() and only do fork()(Bruce)
Jdbc cleanups(Peter)
Show backend status on ps command line(only works on some platforms)(Bruce)
Pg_hba.conf now has a sameuser option in the database field
Make lo_unlink take oid param, not int4
New DISABLE_COMPLEX_MACRO for compilers that can’t handle our macros(Bruce)
Libpgtcl now handles NOTIFY as a Tcl event, need not send dummy queries(Tom)
libpgtcl cleanups(Tom)
Add -error option to libpgtcl’s pg_result command(Tom)
New locale patch, see docs/README/locale(Oleg)
Fix for pg_dump so CONSTRAINT and CHECK syntax is correct(ccb)
New contrib/lo code for large object orphan removal(Peter)
New psql command "SET CLIENT_ENCODING TO ’encoding’" for multibytes
feature, see /doc/README.mb(Tatsuo)
contrib/noupdate code to revoke update permission on a column
libpq can now be compiled on Windows(Magnus)
Add PQsetdbLogin() in libpq
New 8-byte integer type, checked by configure for OS support(Thomas)
Better support for quoted table/column names(Thomas)
Surround table and column names with double-quotes in pg_dump(Thomas)
PQreset() now works with passwords(Tom)
Handle case of GROUP BY target list column number out of range(David)
Allow UNION in subselects
Add auto-size to screen to \d? commands(Bruce)
Use UNION to show all \d? results in one query(Bruce)
Add \d? field search feature(Bruce)
Pg_dump issues fewer \connect requests(Tom)
Make pg_dump -z flag work better, document it in manual page(Tom)
Add HAVING clause with full support for subselects and unions(Stephan)
Full text indexing routines in contrib/fulltextindex(Maarten)
Transaction ids now stored in shared memory(Vadim)
New PGCLIENTENCODING when issuing COPY command(Tatsuo)
Support for SQL92 syntax "SET NAMES"(Tatsuo)
Support for LATIN2-5(Tatsuo)
Add UNICODE regression test case(Tatsuo)
Lock manager cleanup, new locking modes for LLL(Vadim)
Allow index use with OR clauses(Bruce)
Allows "SELECT NULL ORDER BY 1;"
Explain VERBOSE prints the plan, and now pretty-prints the plan to
the postmaster log file(Bruce)
Add indexes display to \d command(Bruce)
Allow GROUP BY on functions(David)
New pg_class.relkind for large objects(Bruce)
New way to send libpq NOTICE messages to a different location(Tom)
New \w write command to psql(Bruce)
New /contrib/findoidjoins scans oid columns to find join relationships(Bruce)
Allow binary-compatible indexes to be considered when checking for valid
indexes for restriction clauses containing a constant(Thomas)
New ISBN/ISSN code in /contrib/isbn_issn
Allow NOT LIKE, IN, NOT IN, BETWEEN, and NOT BETWEEN constraint(Thomas)
New rewrite system fixes many problems with rules and views(Jan)
* Rules on relations work
* Event qualifications on insert/update/delete work
* New OLD variable to reference CURRENT; CURRENT will be removed in future
* Update rules can reference NEW and OLD in rule qualifications/actions
* Insert/update/delete rules on views work
* Multiple rule actions are now supported, surrounded by parentheses
* Regular users can create views/rules on tables for which they have RULE permission
* Rules and views inherit the privileges of the creator
* No rules at the column level
* No UPDATE NEW/OLD rules
* New pg_tables, pg_indexes, pg_rules and pg_views system views
* Only a single action on SELECT rules
* Total rewrite overhaul, perhaps for 6.5
* handle subselects
* handle aggregates on views
* handle INSERT INTO ... SELECT FROM view
System indexes are now multikey(Bruce)
Oidint2, oidint4, and oidname types are removed(Bruce)
Use system cache for more system table lookups(Bruce)
New backend programming language PL/pgSQL in backend/pl(Jan)
New SERIAL data type, auto-creates sequence/index(Thomas)
Enable assert checking without a recompile(Massimo)
User lock enhancements(Massimo)
New setval() command to set sequence value(Massimo)
Auto-remove unix socket file on start-up if no postmaster running(Massimo)
Conditional trace package(Massimo)
New UNLISTEN command(Massimo)
psql and libpq now compile under Windows using win32.mak(Magnus)
Lo_read no longer stores trailing NULL(Bruce)
Identifiers are now truncated to 31 characters internally(Bruce)
This is a bug-fix release for 6.3.x. Refer to the release notes for version 6.3 for a more complete summary
of new features.
Summary:
• Repairs automatic configuration support for some platforms, including Linux, from breakage inadver-
tently introduced in version 6.3.1.
• Correctly handles function calls on the left side of BETWEEN and LIKE clauses.
A dump/restore is NOT required for those running 6.3 or 6.3.1. A make distclean, make, and make
install is all that is required. This last step should be performed while the postmaster is not running.
You should re-link any custom applications that use PostgreSQL libraries.
For upgrades from pre-6.3 installations, refer to the installation and migration instructions for version 6.3.
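As an illustrative sketch of that minor-release rebuild (the source directory name and the way you stop the postmaster depend on your installation):
$ cd postgresql-6.3.2      # your unpacked source tree (illustrative)
$ make distclean
$ make
$ kill 12345               # stop the running postmaster first (illustrative PID)
$ make install
Afterwards restart the postmaster and re-link any custom applications.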
E.40.1. Changes
Summary:
A dump/restore is NOT required for those running 6.3. A make distclean, make, and make install
is all that is required. This last step should be performed while the postmaster is not running. You should
re-link any custom applications that use PostgreSQL libraries.
For upgrades from pre-6.3 installations, refer to the installation and migration instructions for version 6.3.
E.41.1. Changes
There are many new features and improvements in this release. Here is a brief, incomplete summary:
• Many new SQL features, including full SQL92 subselect capability (everything is here but target-list
subselects).
• Support for client-side environment variables to specify time zone and date style.
• Socket interface for client/server connection. This is the default now so you may need to start postmaster
with the -i flag.
• Better password authorization mechanisms. Default table privileges have changed.
• Old-style time travel has been removed. Performance has been improved.
Note: Bruce Momjian wrote the following notes to introduce the new release.
There are some general 6.3 issues that I want to mention. These are only the big items that can not be
described in one sentence. A review of the detailed changes list is still needed.
First, we now have subselects. Now that we have them, I would like to mention that without subselects,
SQL is a very limited language. Subselects are a major feature, and you should review your code for
places where subselects provide a better solution for your queries. I think you will find that there are more
uses for subselects than you may think. Vadim has put us on the big SQL map with subselects, and fully
functional ones too. The only thing you can’t do with subselects is to use them in the target list.
Second, 6.3 uses Unix domain sockets rather than TCP/IP by default. To enable connections from other
machines, you have to use the new postmaster -i option, and of course edit pg_hba.conf. Also, for this
reason, the format of pg_hba.conf has changed.
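For example, one way to start the postmaster with TCP/IP connections enabled might look like the following; the data directory path is illustrative, and the remote hosts must also be listed in pg_hba.conf:
$ postmaster -i -D /usr/local/pgsql/data >server.log 2>&1 &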
Third, char() fields will now allow faster access than varchar() or text. Specifically, the text and
varchar() types have a penalty for access to any columns after the first column of this type. char() used
to have this access penalty as well, but it no longer does. This may suggest that you redesign some of your
tables, especially if you have short character columns that you have defined as varchar() or text. This
and other changes make 6.3 even faster than earlier releases.
We now have passwords definable independent of any Unix file. There are new SQL USER commands.
See the Administrator’s Guide for more information. There is a new table, pg_shadow, which is used to
store user information and user passwords, and it is by default only SELECT-able by the postgres super-user.
pg_user is now a view of pg_shadow, and is SELECT-able by PUBLIC. You should keep using pg_user
in your application without changes.
User-created tables now no longer have SELECT privilege to PUBLIC by default. This was done because
the ANSI standard requires it. You can of course GRANT any privileges you want after the table is created.
System tables continue to be SELECT-able by PUBLIC.
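For example, to restore the old behavior for a particular table (the table name is a placeholder):
GRANT SELECT ON mytable TO PUBLIC;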
We also have real deadlock detection code. No more sixty-second timeouts. And the new locking code
implements a FIFO better, so there should be less resource starvation during heavy use.
Many complaints have been made about inadequate documentation in previous releases. Thomas has put
much effort into many new manuals for this release. Check out the doc/ directory.
For performance reasons, time travel is gone, but can be implemented using triggers (see
pgsql/contrib/spi/README). Please check out the new \d command for types, operators, etc. Also,
views have their own privileges now, not based on the underlying tables, so privileges on them have to be
set separately. Check /pgsql/interfaces for some new ways to talk to PostgreSQL.
This is the first release that really required an explanation for existing users. In many ways, this was
necessary because the new release removes many limitations, and the work-arounds people were using
are no longer needed.
E.42.2. Changes
Bug Fixes
---------
Fix binary cursors broken by MOVE implementation(Vadim)
Fix for tcl library crash(Jan)
Fix for array handling, from Gerhard Hintermayer
Fix acl error, and remove duplicate pqtrace(Bruce)
Fix psql \e for empty file(Bruce)
Fix for textcat on varchar() fields(Bruce)
Fix for DBT Sendproc (Zeugswetter Andres)
Fix vacuum analyze syntax problem(Bruce)
Fix for international identifiers(Tatsuo)
Fix aggregates on inherited tables(Bruce)
Fix substr() for out-of-bounds data
Fix for select 1=1 or 2=2, select 1=1 and 2=2, and select sum(2+2)(Bruce)
Fix notty output to show status result. -q option still turns it off(Bruce)
Fix for count(*), aggs with views and multiple tables and sum(3)(Bruce)
Fix cluster(Bruce)
Fix for PQtrace start/stop several times(Bruce)
Fix a variety of locking problems like newer lock waiters getting
lock before older waiters, and having readlock people not share
locks if a writer is waiting for a lock, and waiting writers not
getting priority over waiting readers(Bruce)
Fix crashes in psql when executing queries from external files(James)
Fix problem with multiple order by columns, with the first one having
NULL values(Jeroen)
Use correct hash table support functions for float8 and int4(Thomas)
Re-enable JOIN= option in CREATE OPERATOR statement (Thomas)
Change precedence for boolean operators to match expected behavior(Thomas)
Generate elog(ERROR) on over-large integer(Bruce)
Allow multiple-argument functions in constraint clauses(Thomas)
Check boolean input literals for ’true’,’false’,’yes’,’no’,’1’,’0’
and throw elog(ERROR) if unrecognized(Thomas)
Major large objects fix
Fix for GROUP BY showing duplicates(Vadim)
Fix for index scans in MergeJoin(Vadim)
Enhancements
------------
Subselects with EXISTS, IN, ALL, ANY key words (Vadim, Bruce, Thomas)
New User Manual(Thomas, others)
Speedup by inlining some frequently-called functions
Real deadlock detection, no more timeouts(Bruce)
Add SQL92 "constants" CURRENT_DATE, CURRENT_TIME, CURRENT_TIMESTAMP,
CURRENT_USER(Thomas)
Modify constraint syntax to be SQL92-compliant(Thomas)
Implement SQL92 PRIMARY KEY and UNIQUE clauses using indexes(Thomas)
Recognize SQL92 syntax for FOREIGN KEY. Throw elog notice(Thomas)
Allow NOT NULL UNIQUE constraint clause (each allowed separately before)(Thomas)
Allow PostgreSQL-style casting ("::") of non-constants(Thomas)
Add support for SQL3 TRUE and FALSE boolean constants(Thomas)
Support SQL92 syntax for IS TRUE/IS FALSE/IS NOT TRUE/IS NOT FALSE(Thomas)
Allow shorter strings for boolean literals (e.g. "t", "tr", "tru")(Thomas)
Allow SQL92 delimited identifiers(Thomas)
Implement SQL92 binary and hexadecimal string decoding (b’10’ and x’1F’)(Thomas)
Support SQL92 syntax for type coercion of literal strings
(e.g. "DATETIME ’now’")(Thomas)
Add conversions for int2, int4, and OID types to and from text(Thomas)
Use shared lock when building indexes(Vadim)
Free memory allocated for a user query inside a transaction block after
the query is done; this was turned off in <= 6.2.1(Vadim)
New SQL statement CREATE PROCEDURAL LANGUAGE(Jan)
New PostgreSQL Procedural Language (PL) backend interface(Jan)
Rename pg_dump -H option to -h(Bruce)
Add Java support for passwords, European dates(Peter)
Use indexes for LIKE and ~, !~ operations(Bruce)
Add hash functions for datetime and timespan(Thomas)
Time Travel removed(Vadim, Bruce)
Add paging for \d and \z, and fix \i(Bruce)
Add Unix domain socket support to backend and to frontend library(Goran)
Implement CREATE DATABASE/WITH LOCATION and initlocation utility(Thomas)
Allow more SQL92 and/or PostgreSQL reserved words as column identifiers(Thomas)
Augment support for SQL92 SET TIME ZONE...(Thomas)
SET/SHOW/RESET TIME ZONE uses TZ backend environment variable(Thomas)
Implement SET keyword = DEFAULT and SET TIME ZONE DEFAULT(Thomas)
Enable SET TIME ZONE using TZ environment variable(Thomas)
Add PGDATESTYLE environment variable to frontend and backend initialization(Thomas)
Source Tree Changes
-------------------
Add new html development tools, and flow chart in /tools/backend
Fix for SCO compiles
Stratus computer port Robert Gillies
Added support for shlib for BSD44_derived & i386_solaris
Make configure more automated(Brook)
Add script to check regression test results
Break parser functions into smaller files, group together(Bruce)
Rename heap_create to heap_create_and_catalog, rename heap_creatr
to heap_create()(Bruce)
Sparc/Linux patch for locking(TomS)
Remove PORTNAME and reorganize port-specific stuff(Marc)
Add optimizer README file(Bruce)
Remove some recursion in optimizer and clean up some code there(Bruce)
Fix for NetBSD locking(Henry)
Fix for libptcl make(Tatsuo)
AIX patch(Darren)
Change IS TRUE, IS FALSE, ... to expressions using "=" rather than
function calls to istrue() or isfalse() to allow optimization(Thomas)
Various fixes NetBSD/Sparc related(TomH)
Alpha linux locking(Travis,Ryan)
Change elog(WARN) to elog(ERROR)(Bruce)
FAQ for FreeBSD(Marc)
Bring in the PostODBC source tree as part of our standard distribution(Marc)
A minor patch for HP/UX 10 vs 9(Stan)
New pg_attribute.atttypmod for type-specific info like varchar length(Bruce)
UnixWare patches(Billy)
New i386 ’lock’ for spinlock asm(Billy)
Support for multiplexed backends is removed
Start an OpenBSD port
Start an AUX port
Start a Cygnus port
Add string functions to regression suite(Thomas)
Expand a few function names formerly truncated to 16 characters(Thomas)
Remove un-needed malloc() calls and replace with palloc()(Bruce)
This is a minor bug-fix release on 6.2. For upgrades from pre-6.2 systems, a full dump/reload is required.
Refer to the 6.2 release notes for instructions.
E.43.2. Changes
A dump/restore is required for those wishing to migrate data from previous releases of PostgreSQL.
E.44.3. Changes
Bug Fixes
---------
Fix problems with pg_dump for inheritance, sequences, archive tables(Bruce)
Fix compile errors on overflow due to shifts, unsigned, and bad prototypes
from Solaris(Diab Jerius)
Fix bugs in geometric line arithmetic (bad intersection calculations)(Thomas)
Check for geometric intersections at endpoints to avoid rounding ugliness(Thomas)
Catch non-functional delete attempts(Vadim)
Change time function names to be more consistent(Michael Reifenberg)
Check for zero divides(Michael Reifenberg)
Fix very old bug which made rows changed/inserted by a command
visible to the command itself (so we had multiple update of
updated rows, etc.)(Vadim)
Fix for SELECT null, ’fail’ FROM pg_am (Patrick)
SELECT NULL as EMPTY_FIELD now allowed(Patrick)
Remove un-needed signal stuff from contrib/pginterface
Fix OR (where x != 1 or x isnull didn’t return rows with x NULL) (Vadim)
Fix time_cmp function (Vadim)
Fix handling of functions with non-attribute first argument in
WHERE clauses (Vadim)
Fix GROUP BY when order of entries is different from order
in target list (Vadim)
Fix pg_dump for aggregates without sfunc1 (Vadim)
Enhancements
------------
Default genetic optimizer GEQO parameter is now 8(Bruce)
Allow use of parameters in target list having aggregates in functions(Vadim)
Added JDBC driver as an interface(Adrian & Peter)
pg_password utility
Return number of rows inserted/affected by INSERT/UPDATE/DELETE etc.(Vadim)
Triggers implemented with CREATE TRIGGER (SQL3)(Vadim)
SPI (Server Programming Interface) allows execution of queries inside
C-functions (Vadim)
NOT NULL implemented (SQL92)(Robson Paniago de Miranda)
Include reserved words for string handling, outer joins, and unions(Thomas)
Implement extended comments ("/* ... */") using exclusive states(Thomas)
Add "//" single-line comments(Bruce)
E.45.2. Changes
The regression tests have been adapted and extensively modified for the 6.1 release of PostgreSQL.
Three new data types (datetime, timespan, and circle) have been added to the native set of Post-
greSQL types. Points, boxes, paths, and polygons have had their output formats made consistent across
the data types. The polygon output in misc.out has only been spot-checked for correctness relative to the
original regression output.
PostgreSQL 6.1 introduces a new, alternate optimizer which uses genetic algorithms. These algorithms
introduce a random behavior in the ordering of query results when the query contains multiple qualifiers
or multiple tables (giving the optimizer a choice on order of evaluation). Several regression tests have been
modified to explicitly order the results, and hence are insensitive to optimizer choices. A few regression
tests are for data types which are inherently unordered (e.g. points and time intervals) and tests involving
those types are explicitly bracketed with set geqo to ’off’ and reset geqo.
The interpretation of array specifiers (the curly braces around atomic values) appears to have changed
sometime after the original regression tests were generated. The current ./expected/*.out files reflect
this new interpretation, which may not be correct!
The float8 regression test fails on at least some platforms. This is due to differences in implementations
of pow() and exp() and the signaling mechanisms used for overflow and underflow conditions.
The “random” results in the random test should cause the “random” test to be “failed”, since the regression
tests are evaluated using a simple diff. However, “random” does not seem to produce random results on
my test machine (Linux/gcc/i686).
E.46.2. Changes
Bug Fixes
---------
packet length checking in library routines
Enhancements
------------
attribute optimization statistics(Bruce)
much faster new btree bulk load code(Paul)
BTREE UNIQUE added to bulk load code(Vadim)
new lock debug code(Massimo)
massive changes to libpg++(Leo)
new GEQO optimizer speeds multi-table optimization(Martin)
new WARN message for non-unique insert into unique key(Marc)
update x=-3, no spaces, now valid(Bruce)
remove case-sensitive identifier handling(Bruce,Thomas,Dan)
debug backend now pretty-prints tree(Darren)
new Oracle character functions(Edmund)
new plaintext password functions(Dan)
no such class or insufficient privilege changed to distinct messages(Dan)
new ANSI timestamp function(Dan)
new ANSI Time and Date types (Thomas)
move large chunks of data in backend(Martin)
multicolumn btree indexes(Vadim)
new SET var TO value command(Martin)
update transaction status on reads(Dan)
new locale settings for character types(Oleg)
new SEQUENCE serial number generator(Vadim)
GROUP BY function now possible(Vadim)
re-organize regression test(Thomas,Marc)
new optimizer operation weights(Vadim)
A dump/restore is required for those wishing to migrate data from previous releases of PostgreSQL.
E.47.3. Changes
Bug Fixes
---------
ALTER TABLE bug - running postgres process needs to re-read table definition
Allow vacuum to be run on one table or entire database(Bruce)
Array fixes
Fix array over-runs of memory writes(Kurt)
Fix elusive btree range/non-range bug(Dan)
Fix for hash indexes on some types like time and date
Fix for pg_log size explosion
Fix permissions on lo_export()(Bruce)
Fix uninitialized reads of memory(Kurt)
Fixed ALTER TABLE ... char(3) bug(Bruce)
Fixed a few small memory leaks
Fixed EXPLAIN handling of options and changed full_path option name
Fixed output of group acl privileges
Memory leaks (hunt and destroy with tools like Purify)(Kurt)
Minor improvements to rules system
NOTIFY fixes
New asserts for run-checking
Overhauled parser/analyze code to properly report errors and increase speed
Pg_dump -d now handles NULL’s properly(Bruce)
Prevent SELECT NULL from crashing server (Bruce)
Properly report errors when INSERT ... SELECT columns did not match
Properly report errors when insert column names were not correct
psql \g filename now works(Bruce)
psql fixed problem with multiple statements on one line with multiple outputs
Removed duplicate system OIDs
SELECT * INTO TABLE . GROUP/ORDER BY gives unlink error if table exists(Bruce)
Several fixes for queries that crashed the backend
Starting quote in insert string errors(Bruce)
Submitting an empty query now returns empty status, not just " " query(Bruce)
Enhancements
------------
Add EXPLAIN manual page(Bruce)
Add UNIQUE index capability(Dan)
Add hostname/user level access control rather than just hostname and user
Add synonym of != for <>(Bruce)
Allow "select oid,* from table"
Sorry, we didn’t keep track of changes from 1.02 to 1.09. Some of the changes listed in 6.0 were actually
included in the 1.02.1 to 1.09 releases.
Note: The following notes are for the benefit of users who want to migrate databases from Postgres95
1.01 and 1.02 to Postgres95 1.02.1.
If you are starting afresh with Postgres95 1.02.1 and do not need to migrate old databases, you do
not need to read any further.
In order to upgrade older Postgres95 version 1.01 or 1.02 databases to version 1.02.1, the following steps
are required:
2. Add the new built-in functions and operators of 1.02.1 to 1.01 or 1.02 databases. This is done by run-
ning the new 1.02.1 server against your own 1.01 or 1.02 database and applying the queries attached
at the end of the file. This can be done easily through psql. If your 1.01 or 1.02 database is named
testdb and you have cut the commands from the end of this file and saved them in addfunc.sql:
% psql testdb -f addfunc.sql
Those upgrading 1.02 databases will get a warning when executing the last two statements in the file
because they are already present in 1.02. This is not a cause for concern.
If you are loading an older binary copy or non-stdout copy, there is no end-of-data character, and hence
no conversion necessary.
E.49.3. Changes
Enhancements
* psql (and underlying libpq library) now has many more options for
formatting output, including HTML
* pg_dump now outputs the schema and/or the data, with many fixes to
enhance completeness.
* psql used in place of monitor in administration shell scripts.
monitor to be deprecated in next release.
* date/time functions enhanced
* NULL insert/update/comparison fixed/enhanced
* TCL/TK lib and shell fixed to work with both tcl7.4/tk4.0 and tcl7.5/tk4.1
* indexes
* storage management
* check for NULL pointer before dereferencing
* Makefile fixes
New Ports
* added SolarisX86 port
* added BSD/OS 2.1 port
* added DG/UX port
a. If you do, you must create a file named pg_hba in your top-level data directory (typically the
value of your $PGDATA). src/libpq/pg_hba shows an example syntax.
b. If you do not want host-based authentication, you can comment out the line
HBA = 1
in src/Makefile.global
Note that host-based authentication is turned on by default, and if you do not take steps A
or B above, the out-of-the-box 1.01 will not allow you to connect to 1.0 databases.
and then execute the following commands (cut and paste from here):
-- add builtin functions that are new to 1.01
E.50.2. Changes
Incompatibilities:
* 1.01 is backwards compatible with 1.0 databases provided the user
follows the steps outlined in the MIGRATION_from_1.0_to_1.01 file.
If those steps are not taken, 1.01 is not compatible with 1.0 databases.
Enhancements:
* added PQdisplayTuples() to libpq and changed monitor and psql to use it
* added NeXT port (requires SysVIPC implementation)
* added CAST .. AS ... syntax
* added ASC and DESC key words
* added ’internal’ as a possible language for CREATE FUNCTION
internal functions are C functions which have been statically linked
into the postgres backend.
* a new type "name" has been added for system identifiers (table names,
attribute names, etc.). This replaces the old char16 type. The length
of name is set by the NAMEDATALEN #define in src/Makefile.global
* a readable reference manual that describes the query language.
* added host-based access control. A configuration file ($PGDATA/pg_hba)
is used to hold the configuration data. If host-based access control
is not desired, comment out HBA=1 in src/Makefile.global.
* changed regex handling to be uniform use of Henry Spencer’s regex code
regardless of platform. The regex code is included in the distribution
* added functions and operators for case-insensitive regular expressions.
The operators are ~* and !~*.
* pg_dump uses COPY instead of SELECT loop for better performance
Bug fixes:
* fixed an optimizer bug that was causing core dumps when
functions calls were used in comparisons in the WHERE clause
* changed all uses of getuid to geteuid so that effective uids are used
* psql now returns non-zero status on errors when using -c
* applied public patches 1-14
E.51.1. Changes
Copyright change:
* The copyright of Postgres 1.0 has been loosened to be freely distributable
and modifiable for any purpose. Please read the COPYRIGHT file.
Thanks to Professor Michael Stonebraker for making this possible.
Incompatibilities:
* date formats have to be MM-DD-YYYY (or DD-MM-YYYY if you’re using
EUROPEAN STYLE). This follows SQL-92 specs.
* "delimiters" is now a key word
Enhancements:
* sql LIKE syntax has been added
* copy command now takes an optional USING DELIMITER specification.
delimiters can be any single-character string.
* IRIX 5.3 port has been added.
Thanks to Paul Walmsley and others.
* updated pg_dump to work with new libpq
* \d has been added to psql
Thanks to Keith Parks
* regexp performance for architectures that use POSIX regex has been
improved due to caching of precompiled patterns.
Thanks to Alistair Crooks
* a new version of libpq++
Thanks to William Wanders
Bug fixes:
* arbitrary userids can be specified in the createuser script
* \c to connect to other databases in psql now works.
* bad pg_proc entry for float4inc() is fixed
* users with usecreatedb field set can now create databases without
having to be usesuper
* remove access control entries when the entry no longer has any
privileges
* fixed non-portable datetimes implementation
* added kerberos flags to the src/backend/Makefile
* libpq now works with kerberos
* typographic errors in the user manual have been corrected.
* btree indexes with multiple keys never worked; now we tell you they don’t
work when you try to use them
E.52.1. Changes
Incompatible changes:
* BETA-0.3 IS INCOMPATIBLE WITH DATABASES CREATED WITH PREVIOUS VERSIONS
(due to system catalog changes and indexing structure changes).
"WITH GRANT OPTION" is not supported. Only class owners can change
access control
- The default access control is to grant users readonly access.
You must explicitly grant insert/update access to users. To change
this, modify the line in
src/backend/utils/acl.h
that defines ACL_WORLD_DEFAULT
Bug fixes:
* the bug where aggregates of empty tables were not run has been fixed. Now,
aggregates run on empty tables will return the initial conditions of the
aggregates.
libpgtcl changes:
* The -oid option has been added to the "pg_result" tcl command.
pg_result -oid returns oid of the last row inserted. If the
last command was not an INSERT, then pg_result -oid returns "".
* the large object interface is available as pg_lo* tcl commands:
pg_lo_open, pg_lo_close, pg_lo_creat, etc.
New utilities:
* ipcclean added to the distribution
ipcclean usually does not need to be run, but if your backend crashes
and leaves shared memory segments hanging around, ipcclean will
clean them up for you.
New documentation:
* the user manual has been revised and libpq documentation added.
E.53.1. Changes
Incompatible changes:
* The SQL statement for creating a database is ’CREATE DATABASE’ instead
of ’CREATEDB’. Similarly, dropping a database is ’DROP DATABASE’ instead
of ’DESTROYDB’. However, the names of the executables ’createdb’ and
’destroydb’ remain the same.
New tools:
* pgperl - a Perl (4.036) interface to Postgres95
* pg_dump - a utility for dumping out a postgres database into a
script file containing query commands. The script files are in an ASCII
format and can be used to reconstruct the database, even on other
machines and other architectures. (Also good for converting
a Postgres 4.2 database to Postgres95 database.)
Initial release.
Appendix F. The CVS Repository
Marc Fournier, Tom Lane, and Thomas Lockhart1999-05-20
The PostgreSQL source code is stored and managed using the CVS code management system.
At least two methods, anonymous CVS and CVSup, are available to pull the CVS code tree from the
PostgreSQL server to your local machine.
Anonymous CVS
1. You will need a local copy of CVS (Concurrent Version Control System), which you can get from
http://www.cvshome.org/ (the official site with the latest version) or any GNU software archive site
(often somewhat outdated). We recommend version 1.10 or newer. Many systems have a recent ver-
sion of cvs installed by default.
2. Do an initial login to the CVS server:
cvs -d :pserver:[email protected]:/projects/cvsroot login
You will be prompted for a password; you can enter anything except an empty string.
You should only need to do this once, since the password will be saved in .cvspass in your home
directory.
3. Fetch the PostgreSQL sources:
cvs -z3 -d :pserver:[email protected]:/projects/cvsroot co -P pgsql
This installs the PostgreSQL sources into a subdirectory pgsql of the directory you are currently in.
Note: If you have a fast link to the Internet, you may not need -z3, which instructs CVS to use gzip
compression for transferred data. But on a modem-speed link, it’s a very substantial win.
This initial checkout is a little slower than simply downloading a tar.gz file; expect it to take 40
minutes or so if you have a 28.8K modem. The advantage of CVS doesn’t show up until you want to
update the file set later on.
4. Whenever you want to update to the latest CVS sources, cd into the pgsql subdirectory, and issue
$ cvs -z3 update -d -P
This will fetch only the changes since the last time you updated. You can update in just a couple of
minutes, typically, even over a modem-speed line.
5. You can save yourself some typing by making a file .cvsrc in your home directory that contains
cvs -z3
update -d -P
This supplies the -z3 option to all cvs commands, and the -d and -P options to cvs update. Then you
just have to say
$ cvs update
Caution
Some older versions of CVS have a bug that causes all checked-out files to be
stored world-writable in your directory. If you see that this has happened, you can
do something like
$ chmod -R go-w pgsql
to set the permissions properly. This bug is fixed as of CVS version 1.9.28.
CVS can do a lot of other things, such as fetching prior revisions of the PostgreSQL sources rather than
the latest development version. For more info consult the manual that comes with CVS, or see the online
documentation at http://www.cvshome.org/.
The command cvs checkout has a flag, -r, that lets you check out a certain revision of a module. This
flag makes it easy to, for example, retrieve the sources that make up release 6_4 of the module ‘tc’ at any
time in the future:
This is useful, for instance, if someone claims that there is a bug in that release, but you cannot find the
bug in the current working copy.
Tip: You can also check out a module as it was at any given date using the -D option.
When you tag more than one file with the same tag you can think about the tag as “a curve drawn through
a matrix of filename vs. revision number”. Say we have 5 files with the following revisions:
Note: For creating a release branch, other than a -b option added to the command, it’s the same
thing.
$ cd pgsql
$ cvs tag -b REL6_4
which will create the tag and the branch for the RELEASE tree.
For those with CVS access, it’s simple to create directories for different versions. First, create two subdi-
rectories, RELEASE and CURRENT, so that you don’t mix up the two. Then do:
cd RELEASE
cvs checkout -P -r REL6_4 pgsql
cd ../CURRENT
cvs checkout -P pgsql
which results in two directory trees, RELEASE/pgsql and CURRENT/pgsql. From that point on, CVS
will keep track of which repository branch is in which directory tree, and will allow independent updates
of either tree.
If you are only working on the CURRENT source tree, you just do everything as before we started tagging
release branches.
After you’ve done the initial checkout on a branch
anything you do within that directory structure is restricted to that branch. If you apply a patch to that
directory structure and do a
cvs commit
while inside of it, the patch is applied to the branch and only the branch.
in your .cshrc file, or a similar line in your .bashrc or .profile file, depending on your shell.
The cvs repository area must be initialized. Once CVSROOT is set, then this can be done with a single
command:
$ cvs init
after which you should see at least a directory named CVSROOT when listing the CVSROOT directory:
$ ls $CVSROOT
CVSROOT/
3. http://www.freebsd.org
which cvsup
$ cvsup -L 2 postgres.cvsup
where -L 2 enables some status messages so you can monitor the progress of the update, and
postgres.cvsup is the path and name you have given to your CVSup configuration file.
Here is a CVSup configuration file modified for a specific installation, and which maintains a full local
CVS repository:
If you specify repository instead of pgsql in the above setup, you will get a complete copy of the entire
repository at cvsup.postgresql.org, including its CVSROOT directory. If you do that, you will probably want
to exclude those files in that directory that you want to modify locally, using a refuse file. For example,
for the above setup you might put this in /home/cvs/sup/repository/refuse:
CVSROOT/config*
CVSROOT/commitinfo*
CVSROOT/loginfo*
See the CVSup manual pages for how to use refuse files.
The following is a suggested CVSup config file from the PostgreSQL ftp site4 which will fetch the current
snapshot only:
4. ftp://ftp.postgresql.org/pub/CVSup/README.cvsup
You can use pre-built binaries if you have a platform for which binaries are posted on the PostgreSQL ftp
site5, or if you are running FreeBSD, for which CVSup is available as a port.
Note: CVSup was originally developed as a tool for distributing the FreeBSD source tree. It is available
as a “port”, and for those running FreeBSD, if this is not sufficient to tell how to obtain and install it
then please contribute a procedure here.
At the time of writing, binaries are available for Alpha/Tru64, ix86/xBSD, HPPA/HP-UX 10.20,
MIPS/IRIX, ix86/linux-libc5, ix86/linux-glibc, Sparc/Solaris, and Sparc/SunOS.
1. Retrieve the binary tar file for cvsup (cvsupd is not required to be a client) appropriate for your
platform.
a. If the binary is in the top level of the tar file, then simply unpack the tar file into your target
directory:
$ cd /usr/local/bin
$ tar zxvf /usr/local/src/cvsup-16.0-linux-i386.tar.gz
$ mv cvsup.1 ../doc/man/man1/
b. If there is a directory structure in the tar file, then unpack the tar file within /usr/local/src
and move the binaries into the appropriate location as above.
3. Ensure that the new binaries are in your path.
$ rehash
$ which cvsup
$ set path=(path to cvsup $path)
$ which cvsup
/usr/local/bin/cvsup
5. ftp://ftp.postgresql.org/pub
6. ftp://ftp.postgresql.org/pub
Note: A clean-source installation of Modula-3 takes roughly 200MB of disk space, which shrinks to
roughly 50MB of space when the sources are removed.
Linux installation
1. Install Modula-3.
a. Pick up the Modula-3 distribution from Polytechnique Montréal7, who are actively main-
taining the code base originally developed by the DEC Systems Research Center8. The PM3
RPM distribution is roughly 30MB compressed. At the time of writing, the 1.1.10-1 release
installed cleanly on RH-5.2, whereas the 1.1.11-1 release is apparently built for another
release (RH-6.0?) and does not run on RH-5.2.
Tip: This particular rpm packaging has many RPM files, so you will likely want to place them
into a separate directory.
3. Build the cvsup distribution, suppressing the GUI interface feature to avoid requiring X11 libraries:
# make M3FLAGS="-DNOGUI"
and if you want to build a static binary to move to systems that may not have Modula-3 installed, try:
# make M3FLAGS="-DNOGUI -DSTATIC"
7. http://m3.polymtl.ca/m3
8. http://www.research.digital.com/SRC/modula-3/html/home.html
Appendix G. Documentation
PostgreSQL has four primary documentation formats:
G.1. DocBook
The documentation sources are written in DocBook, which is a markup language superficially similar to
HTML. Both of these languages are applications of the Standard Generalized Markup Language, SGML,
which is essentially a language for describing other languages. In what follows, the terms DocBook and
SGML are both used, but technically they are not interchangeable.
DocBook allows an author to specify the structure and content of a technical document without worrying
about presentation details. A document style defines how that content is rendered into one of several final
forms. DocBook is maintained by the OASIS1 group. The official DocBook site2 has good introductory and
reference documentation and a complete O’Reilly book for your online reading pleasure. The FreeBSD
Documentation Project3 also uses DocBook and has some good information, including a number of style
guidelines that might be worth considering.
DocBook DTD4
This is the definition of DocBook itself. We currently use version 4.2; you cannot use later or earlier
versions. Note that there is also an XML version of DocBook — do not use that.
1. http://www.oasis-open.org
2. http://www.oasis-open.org/docbook
3. http://www.freebsd.org/docproj/docproj.html
4. http://www.oasis-open.org/docbook/sgml/
We have documented experience with several installation methods for the various tools that are needed to
process the documentation. These will be described below. There may be some other packaged distribu-
tions for these tools. Please report package status to the documentation mailing list, and we will include
that information here.
5. http://www.oasis-open.org/cover/ISOEnts.zip
6. http://openjade.sourceforge.net
7. http://docbook.sourceforge.net/projects/dsssl/index.html
8. http://docbook2x.sourceforge.net
9. http://jadetex.sourceforge.net
• textproc/sp
• textproc/openjade
• textproc/iso8879
• textproc/dsssl-docbook-modular
Apparently, there is no port for the DocBook V4.2 SGML DTD available right now. You will need to
install it manually.
A number of things from /usr/ports/print (tex, jadetex) might also be of interest.
It’s possible that the ports do not update the main catalog file in /usr/local/share/sgml/catalog.
Be sure to have the following line in there:
CATALOG "/usr/local/share/sgml/docbook/4.2/docbook.cat"
If you do not want to edit the file you can also set the environment variable SGML_CATALOG_FILES to a
colon-separated list of catalog files (such as the one above).
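For example, Bourne-shell users might set the variable like this (the paths are illustrative; csh users would use setenv instead):
$ SGML_CATALOG_FILES=/usr/local/share/sgml/catalog:/usr/local/share/sgml/docbook/4.2/docbook.cat
$ export SGML_CATALOG_FILES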
More information about the FreeBSD documentation tools can be found in the FreeBSD Documentation
Project’s instructions10.
1. The installation of OpenJade offers a GNU-style ./configure; make; make install build
process. Details can be found in the OpenJade source distribution. In a nutshell:
./configure --enable-default-catalog=/usr/local/share/sgml/catalog
make
10. http://www.freebsd.org/doc/en_US.ISO8859-1/books/fdp-primer/tools.html
make install
Be sure to remember where you put the “default catalog”; you will need it below. You can also leave
it off, but then you will have to set the environment variable SGML_CATALOG_FILES to point to the
file whenever you use jade later on. (This method is also an option if OpenJade is already installed
and you want to install the rest of the toolchain locally.)
2. Additionally, you should install the files dsssl.dtd, fot.dtd, style-sheet.dtd, and catalog
from the dsssl directory somewhere, perhaps into /usr/local/share/sgml/dsssl. It’s proba-
bly easiest to copy the entire directory:
cp -R dsssl /usr/local/share/sgml
3. Finally, create the file /usr/local/share/sgml/catalog and add this line to it:
CATALOG "dsssl/catalog"
(This is a relative path reference to the file installed in step 2. Be sure to adjust it if you chose your
installation layout differently.)
(The archive will unpack its files into the current directory.)
4. Edit the file /usr/local/share/sgml/catalog (or whatever you told jade during installation)
and put a line like this into it:
CATALOG "docbook-4.2/docbook.cat"
5. Download the ISO 8879 character entities12 archive, unpack it, and put the files in the same directory
you put the DocBook files in.
$ cd /usr/local/share/sgml/docbook-4.2
$ unzip ...../ISOEnts.zip
6. Run the following command in the directory with the DocBook and ISO files:
perl -pi -e 's/iso-(.*).gml/ISO\1/g' docbook.cat
11. http://www.docbook.org/sgml/4.2/docbook-4.2.zip
12. http://www.oasis-open.org/cover/ISOEnts.zip
(This fixes a mixup between the names used in the DocBook catalog file and the actual names of the
ISO character entity files.)
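If you followed the layout used in the examples above, the master catalog file /usr/local/share/sgml/catalog should now contain roughly the following entries (a sketch only; adjust the paths to your own installation):
CATALOG "dsssl/catalog"
CATALOG "docbook-4.2/docbook.cat"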
To install the DocBook DSSSL style sheets, unpack the distribution into a suitable place, for example /usr/local/share/sgml:
$ gunzip docbook-dsssl-1.xx.tar.gz
$ tar -C /usr/local/share/sgml -xf docbook-dsssl-1.xx.tar
You could then add a corresponding entry to the master catalog file:
CATALOG "docbook-dsssl-1.xx/catalog"
But because stylesheets change rather often, and it’s sometimes beneficial to try out alternative versions,
PostgreSQL doesn’t use this catalog entry. See Section G.2.5 for information about how to select the
stylesheets instead.
To install JadeTeX (it requires a working TeX and LaTeX installation), unpack the distribution, run its install target, and refresh TeX's file name database:
$ gunzip jadetex-xxx.tar.gz
$ tar xf jadetex-xxx.tar
$ cd jadetex
$ make install
$ mktexlsr
13. http://www.ctan.org
If neither onsgmls nor nsgmls were found then you will not see the remaining 4 lines. nsgmls is part
of the Jade package. If “DocBook V4.2” was not found then you did not install the DocBook DTD kit in
a place where jade can find it, or you have not set up the catalog files correctly. See the installation hints
above. configure looks for the DocBook stylesheets in a number of relatively standard places, but if you keep them somewhere else, set the environment variable DOCBOOKSTYLE to that location and rerun configure afterwards.
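For example (the stylesheet path shown is only an assumption), you could point configure at a privately installed copy of the stylesheets like this:
$ DOCBOOKSTYLE=/usr/local/share/sgml/docbook-dsssl-1.xx ./configure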
G.3.1. HTML
To build the HTML version of the documentation:
cd doc/src
gmake postgres.tar.gz
In the distribution, these archives live in the doc directory and are installed by default with gmake
install.
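If you only want to browse the freshly built HTML pages without installing them, you can unpack the archive into a scratch directory (this assumes GNU tar and that the archive contains the HTML files directly):
$ cd doc/src
$ mkdir html-preview
$ tar -C html-preview -zxf postgres.tar.gz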
G.3.2. Manpages
We use the docbook2man utility to convert DocBook refentry pages to *roff output suitable for man
pages. The man pages are also distributed as a tar archive, similar to the HTML version. To create the man
page package, use the commands
cd doc/src
gmake man.tar.gz
which will result in a tar file being generated in the doc/src directory.
To generate quality man pages, it might be necessary to use a hacked version of the conversion utility or
do some manual postprocessing. All man pages should be manually inspected before distribution.
• To make a PDF:
doc/src/sgml$ gmake postgres.pdf
(Of course you can also make a PDF version from the Postscript, but if you generate PDF directly, it
will have hyperlinks and other enhanced features.)
Note: It appears that current versions of the PostgreSQL documentation trigger some bug in or exceed
the size limit of OpenJade. If the build process of the RTF version hangs for a long time and the output
file still has size 0, then you may have hit that problem. (But keep in mind that a normal build takes 5
to 10 minutes, so don’t abort too soon.)
OpenJade omits specifying a default style for body text. In the past, this undiagnosed problem led to a long
process of table of contents generation. However, with great help from the Applixware folks the symptom
was diagnosed and a workaround is available.
1. Generate the RTF version by typing:
doc/src/sgml$ gmake postgres.rtf
2. Repair the RTF file to correctly specify all styles, in particular the default style. If the document
contains refentry sections, one must also replace formatting hints which tie a preceding paragraph
to the current paragraph, and instead tie the current paragraph to the following one. A utility, fixrtf,
is available in doc/src/sgml to accomplish these repairs:
doc/src/sgml$ ./fixrtf --refentry postgres.rtf
The script adds {\s0 Normal;} as the zeroth style in the document. According to Applixware,
the RTF standard prohibits adding an implicit zeroth style, though Microsoft Word happens to
handle this case anyway. For repairing refentry sections, the script replaces \keepn tags with \keep. (A rough sketch of these substitutions appears after this list.)
3. Open a new document in Applixware Words and then import the RTF file.
4. Generate a new table of contents (ToC) using Applixware.
a. Select the existing ToC lines, from the beginning of the first character on the first line to
the last character of the last line.
b. Build a new ToC using Tools→Book Building→Create Table of Contents. Select
the first three levels of headers for inclusion in the ToC. This will replace the existing lines
imported in the RTF with a native Applixware ToC.
c. Adjust the ToC formatting by using Format→Style, selecting each of the three ToC
styles, and adjusting the indents for First and Left. Use the following values:
6. Replace the right-justified page numbers in the Examples and Figures portions of the ToC with
correct values. This only takes a few minutes.
7. Delete the index section from the document if it is empty.
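The following is only a rough sketch of the kind of substitutions fixrtf performs; it is not the actual script, and it assumes the RTF style table begins with {\stylesheet{\s1:
$ sed -e 's/{\\stylesheet{\\s1/{\\stylesheet{\\s0 Normal;}{\\s1/' \
      -e 's/\\keepn/\\keep/g' \
      postgres.rtf > postgres-repaired.rtf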
G.4.1. Emacs/PSGML
PSGML is the most common and most powerful mode for editing SGML documents. When properly
configured, it will allow you to use Emacs to insert tags and check markup consistency. You could use
it for HTML as well. Check the PSGML web site14 for downloads, installation instructions, and detailed
documentation.
14. http://www.lysator.liu.se/projects/about_psgml.html
There is one important thing to note with PSGML: its author assumed that your main SGML
DTD directory would be /usr/local/lib/sgml. If, as in the examples in this chapter, you use
/usr/local/share/sgml, you have to compensate for this, either by setting the environment variable SGML_CATALOG_FILES accordingly, or by customizing your PSGML installation (its manual tells you how).
Put the following in your ~/.emacs environment file (adjusting the path names to be appropriate for your
system):
(setq sgml-omittag t)
(setq sgml-shorttag t)
(setq sgml-minimize-attributes nil)
(setq sgml-always-quote-attributes t)
(setq sgml-indent-step 1)
(setq sgml-indent-data t)
(setq sgml-parent-document nil)
(setq sgml-default-dtd-file "./reference.ced")
(setq sgml-exposed-tags nil)
(setq sgml-catalog-files '("/usr/local/share/sgml/catalog"))
(setq sgml-ecat-files nil)
and in the same file add an entry for SGML into the (existing) definition for auto-mode-alist:
(setq
auto-mode-alist
’(("\\.sgml$" . sgml-mode)
))
Currently, each SGML source file has the following block at the end of the file:
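A representative block (shown here only as an approximation; check an actual source file for the exact contents) looks like this:
<!-- Keep this comment at the end of the file
Local variables:
mode:sgml
sgml-omittag:nil
sgml-shorttag:t
sgml-minimize-attributes:nil
sgml-always-quote-attributes:t
sgml-indent-step:1
sgml-indent-data:t
sgml-parent-document:nil
sgml-default-dtd-file:"./reference.ced"
sgml-exposed-tags:nil
sgml-local-catalogs:("/usr/local/lib/sgml/catalog")
sgml-local-ecat-files:nil
End:
-->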
This will set up a number of editing mode parameters even if you do not set up your ~/.emacs file, but
it is a bit unfortunate, since if you followed the installation instructions above, then the catalog path will
not match your location. Hence you might need to turn off local variables:
(setq inhibit-local-variables t)
The PostgreSQL distribution includes a parsed DTD definitions file reference.ced. You may find that
when using PSGML, a comfortable way of working with these separate files of book parts is to insert a
proper DOCTYPE declaration while you’re editing them. If you are working on this source, for instance,
it is an appendix chapter, so you would specify the document as an “appendix” instance of a DocBook
document by making the first line look like this:
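As an illustration (assuming the DocBook V4.2 public identifier used elsewhere in this chapter), such a first line could look like:
<!DOCTYPE appendix PUBLIC "-//OASIS//DTD DocBook V4.2//EN">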
This means that anything and everything that reads SGML will get it right, and I can verify the document with nsgmls -s docguide.sgml. (But you need to take out that line before building the entire
documentation set.)
Name
This section is generated automatically. It contains the command name and a half-sentence summary
of its functionality.
15. http://nwalsh.com/emacs/docbookide/index.html
Synopsis
This section contains the syntax diagram of the command. The synopsis should normally not list
each command-line option; that is done below. Instead, list the major components of the command
line, such as where input and output files go.
Description
Several paragraphs explaining what the command does.
Options
A list describing each command-line option. If there are a lot of options, subsections may be used.
Exit Status
If the program uses 0 for success and non-zero for failure, then you do not need to document it. If
there is a meaning behind the different non-zero exit codes, list them here.
Usage
Describe any sublanguage or run-time interface of the program. If the program is not interactive, this
section can usually be omitted. Otherwise, this section is a catch-all for describing run-time features.
Use subsections if appropriate.
Environment
List all environment variables that the program might use. Try to be complete; even seemingly trivial
variables like SHELL might be of interest to the user.
Files
List any files that the program might access implicitly. That is, do not list input and output files that
were specified on the command line, but list configuration files, etc.
Diagnostics
Explain any unusual output that the program might create. Refrain from listing every possible error
message. This is a lot of work and has little use in practice. But if, say, the error messages have a
standard format that the user can parse, this would be the place to explain it.
Notes
Anything that doesn’t fit elsewhere, in particular bugs, implementation flaws, security considerations, and compatibility issues.
Examples
Examples
History
If there were some major milestones in the history of the program, they might be listed here. Usually,
this section can be omitted.
See Also
Cross-references, listed in the following order: other PostgreSQL command reference pages, PostgreSQL SQL command reference pages, citation of PostgreSQL manuals, other reference pages (e.g.,
operating system, other packages), other documentation. Items in the same group are listed alphabetically.
Reference pages describing SQL commands should contain the following sections: Name, Synopsis, Description, Parameters, Outputs, Notes, Examples, Compatibility, History, See Also. The Parameters section is like the Options section, but there is more freedom about which clauses of the command can
be listed. The Outputs section is only needed if the command returns something other than a default
command-completion tag. The Compatibility section should explain to what extent this command conforms to the SQL standard(s), or with which other database systems it is compatible. The See Also section of
SQL commands should list SQL commands before cross-references to programs.
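For illustration only (this is not a page from the distribution, and the command name is made up), a minimal refentry following the layout above might be sketched like this:
<refentry id="app-frobnicate">
 <refmeta>
  <refentrytitle>frobnicate</refentrytitle>
  <manvolnum>1</manvolnum>
 </refmeta>
 <refnamediv>
  <refname>frobnicate</refname>
  <refpurpose>half-sentence summary of what the command does</refpurpose>
 </refnamediv>
 <refsynopsisdiv>
  <cmdsynopsis>
   <command>frobnicate</command>
   <arg choice="opt"><replaceable>option</replaceable></arg>
   <arg><replaceable>inputfile</replaceable></arg>
  </cmdsynopsis>
 </refsynopsisdiv>
 <refsect1>
  <title>Description</title>
  <para>Several paragraphs explaining what the command does.</para>
 </refsect1>
</refentry>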
Appendix H. External Projects
PostgreSQL is a complex software project, and managing it is difficult. We have found that many enhancements to PostgreSQL can be more efficiently developed separately from the core project. Separate
projects can have their own developer teams, email lists, bug tracking, and release schedules. While their
independence makes development easier, it makes users’ jobs harder. They have to hunt around looking
for database enhancements to meet their needs. This section describes some of the more popular externally
developed enhancements and guides you on how to find them.
Many PostgreSQL-related projects are hosted at either GBorg at http://gborg.postgresql.org or pgFoundry
at http://pgfoundry.org. There are other PostgreSQL-related projects that are hosted elsewhere, but you
will have to do an Internet search to find them.
psqlODBC
This is the most common interface for Windows applications.
pgjdbc
A JDBC interface.
Npgsql
.Net interface for more recent Windows applications.
libpqxx
A newer C++ interface.
libpq++
An older C++ interface.
pgperl
A Perl interface with an API similar to libpq.
DBD-Pg
A Perl interface that uses the DBD-standard API.
pgtclng
A newer version of the Tcl interface.
pgtcl
The original version of the Tcl interface.
PyGreSQL
A Python interface library.
All of these can be found at GBorg (http://gborg.postgresql.org) or pgFoundry (http://pgfoundry.org).
H.2. Extensions
PostgreSQL was designed from the start to be extensible. For this reason, extensions loaded into the
database can function just like features that are packaged with the database. The contrib/ directory
shipped with the source code contains a large number of extensions. The README file in that directory
contains a summary. They include conversion tools, full-text indexing, XML tools, and additional data
types and indexing methods. Other extensions are developed independently, like PostGIS. Even PostgreSQL replication solutions are developed externally. For example, Slony-I is a popular master/slave
replication solution that is developed independently from the core project.
There are several administration tools available for PostgreSQL. The most popular is pgAdmin, and there
are several commercially available ones.
Bibliography
Selected references and readings for SQL and PostgreSQL.
Some white papers and technical reports from the original POSTGRES development team are available at
the University of California, Berkeley, Computer Science Department web site1
C. J. Date and Hugh Darwen, A Guide to the SQL Standard: A user’s guide to the standard database
language SQL, Fourth Edition, Addison-Wesley, ISBN 0-201-96426-0, 1997.
Ramez Elmasri and Shamkant Navathe, Fundamentals of Database Systems, 3rd Edition,
Addison-Wesley, ISBN 0-805-31755-4, August 1999.
Jim Melton and Alan R. Simon, Understanding the New SQL: A complete guide, Morgan Kaufmann,
ISBN 1-55860-245-3, 1993.
Jeffrey D. Ullman, Principles of Database and Knowledge-Base Systems, Volume 1, Computer Science
Press, 1988.
PostgreSQL-Specific Documentation
Stefan Simkovics, Enhancement of the ANSI SQL Implementation of PostgreSQL, Department of Information Systems, Vienna University of Technology, November 29, 1998.
Discusses SQL history and syntax, and describes the addition of INTERSECT and EXCEPT constructs
into PostgreSQL. Prepared as a Master’s Thesis with the support of O. Univ. Prof. Dr. Georg Gottlob
and Univ. Ass. Mag. Katrin Seyr at Vienna University of Technology.
A. Yu and J. Chen, The POSTGRES Group, The Postgres95 User Manual, University of California, Sept.
5, 1995.
Zelaine Fong, The design and implementation of the POSTGRES query optimizer2, University of California, Berkeley, Computer Science Department.
1. http://s2k-ftp.CS.Berkeley.EDU:8000/postgres/papers/
2. http://s2k-ftp.CS.Berkeley.EDU:8000/postgres/papers/UCB-MS-zfong.pdf
L. Ong and J. Goh, “A Unified Framework for Version Modeling Using Production Rules in a Database
System”, ERL Technical Memorandum M90/33, University of California, April, 1990.
L. Rowe and M. Stonebraker, “The POSTGRES data model3”, Proc. VLDB Conference, Sept. 1987.
P. Seshadri and A. Swami, “Generalized Partial Indexes4 ”, Proc. Eleventh International Conference on
Data Engineering, 6-10 March 1995, IEEE Computer Society Press, Cat. No.95CH35724, 1995,
420-7.
M. Stonebraker and L. Rowe, “The design of POSTGRES5”, Proc. ACM-SIGMOD Conference on Management of Data, May 1986.
M. Stonebraker, E. Hanson, and C. H. Hong, “The design of the POSTGRES rules system”, Proc. IEEE
Conference on Data Engineering, Feb. 1987.
M. Stonebraker, “The design of the POSTGRES storage system6”, Proc. VLDB Conference, Sept. 1987.
M. Stonebraker, M. Hearst, and S. Potamianos, “A commentary on the POSTGRES rules system7”, SIGMOD Record 18(3), Sept. 1989.
M. Stonebraker, “The case for partial indexes8”, SIGMOD Record 18(4), Dec. 1989, 4-11.
M. Stonebraker, A. Jhingran, J. Goh, and S. Potamianos, “On Rules, Procedures, Caching and Views in
Database Systems10”, Proc. ACM-SIGMOD Conference on Management of Data, June 1990.
3. http://s2k-ftp.CS.Berkeley.EDU:8000/postgres/papers/ERL-M87-13.pdf
4. http://simon.cs.cornell.edu/home/praveen/papers/partindex.de95.ps.Z
5. http://s2k-ftp.CS.Berkeley.EDU:8000/postgres/papers/ERL-M85-95.pdf
6. http://s2k-ftp.CS.Berkeley.EDU:8000/postgres/papers/ERL-M87-06.pdf
7. http://s2k-ftp.CS.Berkeley.EDU:8000/postgres/papers/ERL-M89-82.pdf
8. http://s2k-ftp.CS.Berkeley.EDU:8000/postgres/papers/ERL-M89-17.pdf
9. http://s2k-ftp.CS.Berkeley.EDU:8000/postgres/papers/ERL-M90-34.pdf
10. http://s2k-ftp.CS.Berkeley.EDU:8000/postgres/papers/ERL-M90-36.pdf