Installation from Source Code
This chapter describes the installation of
PostgreSQL using the source code
distribution. (If you are installing a pre-packaged distribution,
such as an RPM or Debian package, ignore this chapter
and read the packager's instructions instead.)
Short Version
The following short installation procedure sets up a simple cluster on a local machine with
1 Coordinator, 2 Datanodes and 1 GTM. When installing a more complex cluster, you might
need to change the number of Coordinators and Datanodes, and might have to start nodes on different
servers.
Alternatively, you can use the pgxc_ctl utility, which simplifies the installation and configuration process.
./configure
gmake
su
gmake install
adduser postgres
mkdir /usr/local/pgsql/data_coord1
mkdir /usr/local/pgsql/data_datanode_1
mkdir /usr/local/pgsql/data_datanode_2
mkdir /usr/local/pgsql/data_gtm
chown postgres /usr/local/pgsql/data_coord1
chown postgres /usr/local/pgsql/data_datanode_1
chown postgres /usr/local/pgsql/data_datanode_2
chown postgres /usr/local/pgsql/data_gtm
su - postgres
/usr/local/pgsql/bin/initdb -D /usr/local/pgsql/data_coord1 \
--nodename coord1
/usr/local/pgsql/bin/initdb -D /usr/local/pgsql/data_datanode_1 \
--nodename datanode_1
/usr/local/pgsql/bin/initdb -D /usr/local/pgsql/data_datanode_2 \
--nodename datanode_2
/usr/local/pgsql/bin/initgtm -D /usr/local/pgsql/data_gtm -Z gtm
/usr/local/pgsql/bin/gtm -D /usr/local/pgsql/data_gtm >logfile 2>&1 &
/usr/local/pgsql/bin/postgres --datanode -p 15432 -c pooler_port=40101 \
-D /usr/local/pgsql/data_datanode_1 >logfile 2>&1 &
/usr/local/pgsql/bin/postgres --datanode -p 15433 -c pooler_port=40102 \
-D /usr/local/pgsql/data_datanode_2 >logfile 2>&1 &
/usr/local/pgsql/bin/postgres --coordinator -c pooler_port=40100 \
-D /usr/local/pgsql/data_coord1 >logfile 2>&1 &
/usr/local/pgsql/bin/psql -c "ALTER NODE coord1 \
WITH (TYPE = 'coordinator', PORT = 5432)" postgres
/usr/local/pgsql/bin/psql -c "CREATE NODE datanode_1 \
WITH (TYPE = 'datanode', PORT = 15432)" postgres
/usr/local/pgsql/bin/psql -c "CREATE NODE datanode_2 \
WITH (TYPE = 'datanode', PORT = 15433)" postgres
/usr/local/pgsql/bin/psql -c "EXECUTE DIRECT ON (datanode_1) \
'ALTER NODE datanode_1 WITH (TYPE = ''datanode'', PORT = 15432)'" postgres
/usr/local/pgsql/bin/psql -c "EXECUTE DIRECT ON (datanode_1) \
'CREATE NODE datanode_2 WITH (TYPE = ''datanode'', PORT = 15433)'" postgres
/usr/local/pgsql/bin/psql -c "EXECUTE DIRECT ON (datanode_1) \
'CREATE NODE coord1 WITH (TYPE = ''coordinator'', PORT = 5432)'" postgres
/usr/local/pgsql/bin/psql -c "EXECUTE DIRECT ON (datanode_2) \
'ALTER NODE datanode_2 WITH (TYPE = ''datanode'', PORT = 15433)'" postgres
/usr/local/pgsql/bin/psql -c "EXECUTE DIRECT ON (datanode_2) \
'CREATE NODE datanode_1 WITH (TYPE = ''datanode'', PORT = 15432)'" postgres
/usr/local/pgsql/bin/psql -c "EXECUTE DIRECT ON (datanode_2) \
'CREATE NODE coord1 WITH (TYPE = ''coordinator'', PORT = 5432)'" postgres
/usr/local/pgsql/bin/psql -c "SELECT pgxc_pool_reload()" postgres
/usr/local/pgsql/bin/psql -c "EXECUTE DIRECT ON (datanode_1) \
'SELECT pgxc_pool_reload()'" postgres
/usr/local/pgsql/bin/psql -c "EXECUTE DIRECT ON (datanode_2) \
'SELECT pgxc_pool_reload()'" postgres
/usr/local/pgsql/bin/createdb test
/usr/local/pgsql/bin/psql test
The long version is the rest of this chapter.
Requirements
In general, a modern Unix-compatible platform should be able to run
PostgreSQL.
The platforms that had received specific testing at the
time of release are listed in Supported Platforms
below. In the doc subdirectory of the distribution
there are several platform-specific FAQ documents you
might wish to consult if you are having trouble.
The following software packages are required for building
PostgreSQL>:
make
GNU make version 3.80 or newer is required; other
make programs or older GNU make versions will not work.
(GNU make is sometimes installed under
the name gmake.) To test for GNU
make enter:
make --version
You need an ISO/ANSI C compiler (at least
C89-compliant). Recent
versions of GCC are recommended, but
PostgreSQL is known to build using a wide variety
of compilers from different vendors.
tar is required to unpack the source
distribution, in addition to either
gzip or bzip2.
Readline
The GNU Readline library is used by
default. It allows psql (the
PostgreSQL command line SQL interpreter) to remember each
command you type, and allows you to use arrow keys to recall and
edit previous commands. This is very helpful and is strongly
recommended. If you don't want to use it then you must specify
the --without-readline option to
configure. As an alternative, you can often use the
BSD-licensed libedit library, originally
developed on NetBSD. The
libedit library is
GNU Readline-compatible and is used if
libreadline is not found, or if
--with-libedit-preferred is used as an
option to configure. If you are using a package-based
Linux distribution, be aware that you need both the
readline and readline-devel packages, if
those are separate in your distribution.
zlib
The zlib compression library is
used by default. If you don't want to use it then you must
specify the --without-zlib option to
configure. Using this option disables
support for compressed archives in pg_dump and
pg_restore.
The following packages are optional. They are not required in the
default configuration, but they are needed when certain build
options are enabled, as explained below:
To build the server programming language
PL/Perl you need a full
Perl installation, including the
libperl library and the header files.
The minimum required version is Perl 5.8.3.
Since PL/Perl will be a shared
library, the libperl library must be a shared library
also on most platforms. This appears to be the default in
recent Perl versions, but it was not
in earlier versions, and in any case it is the choice of whoever
installed Perl at your site. configure will fail
if building PL/Perl is selected but it cannot
find a shared libperl. In that case, you will have
to rebuild and install Perl manually to be
able to build PL/Perl. During the
configuration process for Perl, request a
shared library.
If you intend to make more than incidental use of
PL/Perl, you should ensure that the
Perl installation was built with the
usemultiplicity option enabled (perl -V
will show whether this is the case).
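For example, the following prints usemultiplicity='define'; when multiplicity is enabled, and usemultiplicity='undef'; otherwise:
perl -V:usemultiplicity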
To build the PL/Python> server programming
language, you need a Python
installation with the header files and
the distutils module. The minimum
required version is Python 2.4.
Python 3 is supported if it's
version 3.1 or later; but see the
PL/Python documentation
when using Python 3.
Since PL/Python will be a shared
library, the libpython library must be a shared library
also on most platforms. This is not the case in a default
Python installation built from source, but a
shared library is available in many operating system
distributions. configure will fail if
building PL/Python is selected but it cannot
find a shared libpython. That might mean that you
either have to install additional packages or rebuild (part of) your
Python installation to provide this shared
library. When building from source, run Python's
configure with the --enable-shared flag.
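As a quick check (a sketch; the exact reply depends on your Python build), you can ask Python whether it was built with a shared libpython:
python -c "from distutils import sysconfig; print(sysconfig.get_config_var('Py_ENABLE_SHARED'))"
A result of 1 indicates that a shared libpython is available; 0 (or an error) suggests you will need to rebuild Python with --enable-shared or install a distribution package that provides the shared library.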
To build the PL/Tcl
procedural language, you of course need a Tcl>
installation. The minimum required version is
Tcl 8.4.
To enable Native Language Support (NLS), that
is, the ability to display a program's messages in a language
other than English, you need an implementation of the
Gettext API. Some operating
systems have this built-in (e.g., Linux, NetBSD,
Solaris); for other systems you
can download an add-on package from the GNU Gettext project.
If you are using the Gettext implementation in
the GNU C library then you will additionally
need the GNU Gettext package for some
utility programs. For any of the other implementations you will
not need it.
You need OpenSSL, if you want to support
encrypted client connections. The minimum required version is
0.9.8.
You need Kerberos, OpenLDAP,
and/or PAM, if you want to support authentication
using those services.
To build the PostgreSQL documentation,
there is a separate set of requirements; see
the appendix on building the documentation.
If you are building from a Git tree instead of
using a released source package, or if you want to do server development,
you also need the following packages:
Flex and Bison
GNU Flex and Bison
are needed to build from a Git checkout, or if you changed the actual
scanner and parser definition files. If you need them, be sure
to get Flex 2.5.31 or later and
Bison 1.875 or later. Other lex
and yacc programs cannot be used.
Perl
Perl 5.8.3 or later is needed to build from a Git checkout,
or if you changed the input files for any of the build steps that
use Perl scripts. If building on Windows you will need
Perl in any case. Perl is
also required to run some test suites.
If you need to get a GNU package, you can find
it at your local GNU mirror site or at the main GNU
FTP site.
Also check that you have sufficient disk space. You will need about
100 MB for the source tree during compilation and about 20 MB for
the installation directory. An empty database cluster takes about
35 MB; databases take about five times the amount of space that a
flat text file with the same data would take. If you are going to
run the regression tests you will temporarily need up to an extra
150 MB. Use the df command to check free disk
space.
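For example, to check the free space on the file system that will hold the source tree and the installation directory:
df -h .
df -h /usr/local
(The -h flag prints sizes in human-readable units.)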
Getting The Source
You can also get the source directly from the version control repository; see
the appendix on the source code repository.
Installation Procedure
Configuration
The first step of the installation procedure is to configure the
source tree for your system and choose the options you would like.
This is done by running the configure> script. For a
default installation simply enter:
./configure
This script will run a number of tests to determine values for various
system dependent variables and detect any quirks of your
operating system, and finally will create several files in the
build tree to record what it found. You can also run
configure in a directory outside the source
tree, if you want to keep the build directory separate. This
procedure is also called a
VPATHVPATH
build. Here's how:
mkdir build_dir
cd build_dir
/path/to/source/tree/configure [options go here]
make
The default configuration will build the server and utilities, as
well as all client applications and interfaces that require only a
C compiler. All files will be installed under
/usr/local/pgsql> by default.
You can customize the build and installation process by supplying one
or more of the following command line options to
configure:
Install all files under the directory PREFIX>
instead of /usr/local/pgsql. The actual
files will be installed into various subdirectories; no files
will ever be installed directly into the
PREFIX> directory.
If you have special needs, you can also customize the
individual subdirectories with the following options. However,
if you leave these with their defaults, the installation will be
relocatable, meaning you can move the directory after
installation. (The man> and doc>
locations are not affected by this.)
For relocatable installs, you might want to use
configure's --disable-rpath>
option. Also, you will need to tell the operating system how
to find the shared libraries.
You can install architecture-dependent files under a
different prefix, EXEC-PREFIX>, than what
PREFIX> was set to. This can be useful to
share architecture-independent files between hosts. If you
omit this, then EXEC-PREFIX> is set equal to
PREFIX> and both architecture-dependent and
independent files will be installed under the same tree,
which is probably what you want.
Specifies the directory for executable programs. The default
is EXEC-PREFIX>/bin>, which
normally means /usr/local/pgsql/bin>.
Sets the directory for various configuration files,
PREFIX>/etc> by default.
Sets the location to install libraries and dynamically loadable
modules. The default is
EXEC-PREFIX>/lib>.
Sets the directory for installing C and C++ header files. The
default is PREFIX>/include>.
Sets the root directory for various types of read-only data
files. This only sets the default for some of the following
options. The default is
PREFIX>/share>.
Sets the directory for read-only data files used by the
installed programs. The default is
DATAROOTDIR. Note that this has
nothing to do with where your database files will be placed.
Sets the directory for installing locale data, in particular
message translation catalog files. The default is
DATAROOTDIR>/locale>.
The man pages that come with PostgreSQL> will be installed under
this directory, in their respective
manx>> subdirectories.
The default is DATAROOTDIR>/man>.
Sets the root directory for installing documentation files,
except man> pages. This only sets the default for
the following options. The default value for this option is
DATAROOTDIR>/doc/postgresql>.
The HTML-formatted documentation for
PostgreSQL will be installed under
this directory. The default is
DATAROOTDIR.
Care has been taken to make it possible to install
PostgreSQL> into shared installation locations
(such as /usr/local/include) without
interfering with the namespace of the rest of the system. First,
the string /postgresql is
automatically appended to datadir,
sysconfdir, and docdir,
unless the fully expanded directory name already contains the
string postgres> or
pgsql>. For example, if you choose
/usr/local as prefix, the documentation will
be installed in /usr/local/doc/postgresql,
but if the prefix is /opt/postgres, then it
will be in /opt/postgres/doc. The public C
header files of the client interfaces are installed into
includedir and are namespace-clean. The
internal header files and the server header files are installed
into private directories under includedir. See
the documentation of each interface for information about how to
access its header files. Finally, a private subdirectory will
also be created, if appropriate, under libdir
for dynamically loadable modules.
Append STRING> to the PostgreSQL version number. You
can use this, for example, to mark binaries built from unreleased Git
snapshots or containing custom patches with an extra version string
such as a git describe identifier or a
distribution package release number.
DIRECTORIES is a colon-separated list of
directories that will be added to the list the compiler
searches for header files. If you have optional packages
(such as GNU Readline) installed in a non-standard
location,
you have to use this option and probably also the corresponding
--with-libraries option.
Example: --with-includes=/opt/gnu/include:/usr/sup/include.
DIRECTORIES is a colon-separated list of
directories to search for libraries. You will probably have
to use this option (and the corresponding
--with-includes option) if you have packages installed in non-standard locations.
Example: --with-libraries=/opt/gnu/lib:/usr/sup/lib.
Enables Native Language Support (NLS),
that is, the ability to display a program's messages in a
language other than English.
LANGUAGES is an optional space-separated
list of codes of the languages that you want supported, for
example --enable-nls='de fr'>. (The intersection
between your list and the set of actually provided
translations will be computed automatically.) If you do not
specify a list, then all available translations are
installed.
To use this option, you will need an implementation of the
Gettext> API; see above.
Set NUMBER> as the default port number for
server and clients. The default is 5432. The port can always
be changed later on, but if you specify it here then both
server and clients will have the same default compiled in,
which can be very convenient. Usually the only good reason
to select a non-default value is if you intend to run multiple
PostgreSQL> servers on the same machine.
Build the PL/Perl> server-side language.
Build the PL/Python> server-side language.
Build the PL/Tcl> server-side language.
Tcl installs the file tclConfig.sh, which
contains configuration information needed to build modules
interfacing to Tcl. This file is normally found automatically
at a well-known location, but if you want to use a different
version of Tcl you can specify the directory in which to look
for it.
Build with support for GSSAPI authentication. On many
systems, the GSSAPI system (usually a part of the Kerberos installation)
is not installed in a location
that is searched by default (e.g., /usr/include,
/usr/lib), so you must use the
--with-includes and --with-libraries options in addition to this one.
The default name of the Kerberos service principal used
by GSSAPI.
postgres is the default. There's usually no
reason to change this unless you have a Windows environment,
in which case it must be set to upper case
POSTGRES.
Build with support for
the ICU
library. This requires the ICU4C package
to be installed. The minimum required version
of ICU4C is currently 4.2.
By default,
pkg-config
will be used to find the required compilation options. This is
supported for ICU4C version 4.6 and later.
For older versions, or if pkg-config is
not available, the variables ICU_CFLAGS
and ICU_LIBS can be specified
to configure, like in this example:
./configure ... --with-icu ICU_CFLAGS='-I/some/where/include' ICU_LIBS='-L/some/where/lib -licui18n -licuuc -licudata'
(If ICU4C is in the default search path
for the compiler, then you still need to specify a nonempty string in
order to avoid use of pkg-config, for
example, ICU_CFLAGS=' '.)
OpenSSL
Build with support for SSL> (encrypted)
connections. This requires the OpenSSL>
package to be installed. configure> will check
for the required header files and libraries to make sure that
your OpenSSL> installation is sufficient
before proceeding.
Build with PAM
(Pluggable Authentication Modules) support.
Build with BSD Authentication support.
(The BSD Authentication framework is
currently only available on OpenBSD.)
Build with LDAP
support for authentication and connection parameter lookup (see
the documentation on LDAP authentication and LDAP connection
parameter lookup for more information). On Unix,
this requires the OpenLDAP> package to be
installed. On Windows, the default WinLDAP>
library is used. configure> will check for the required
header files and libraries to make sure that your
OpenLDAP> installation is sufficient before
proceeding.
Build with support
for systemd
service notifications. This improves integration if the server binary
is started under systemd but has no impact
otherwise. libsystemd and the
associated header files need to be installed to be able to use this
option.
Prevents use of the Readline> library
(and libedit> as well). This option disables
command-line editing and history in
psql, so it is not recommended.
Favors the use of the BSD-licensed libedit> library
rather than GPL-licensed Readline>. This option
is significant only if you have both libraries installed; the
default in that case is to use Readline>.
Build with Bonjour support. This requires Bonjour support
in your operating system. Recommended on macOS.
Build the uuid-ossp module
(which provides functions to generate UUIDs), using the specified
UUID library. LIBRARY must be one of
bsd, e2fs, or ossp.
(--with-ossp-uuid is an obsolete equivalent of --with-uuid=ossp.)
Build with libxml (enables SQL/XML support). Libxml version 2.6.23 or
later is required for this feature.
Libxml installs a program xml2-config that
can be used to detect the required compiler and linker
options. PostgreSQL will use it automatically if found. To
specify a libxml installation at an unusual location, you can
either set the environment variable
XML2_CONFIG to point to the
xml2-config program belonging to the
installation, or use the
--with-includes and --with-libraries options.
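For example, assuming a libxml installation under the hypothetical prefix /opt/libxml2, either of the following forms tells configure where to find it:
./configure --with-libxml XML2_CONFIG=/opt/libxml2/bin/xml2-config
./configure --with-libxml --with-includes=/opt/libxml2/include --with-libraries=/opt/libxml2/lib
(The option name --with-libxml is the standard switch for enabling libxml support.)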
Use libxslt when building the
xml2 module. xml2 relies on this library
to perform XSL transformations of XML.
Disable passing float4 values by value>, causing them
to be passed by reference> instead. This option costs
performance, but may be needed for compatibility with old
user-defined functions that are written in C and use the
version 0> calling convention. A better long-term
solution is to update any such functions to use the
version 1> calling convention.
Disable passing float8 values by value>, causing them
to be passed by reference> instead. This option costs
performance, but may be needed for compatibility with old
user-defined functions that are written in C and use the
version 0> calling convention. A better long-term
solution is to update any such functions to use the
version 1> calling convention.
Note that this option affects not only float8, but also int8 and some
related types such as timestamp.
On 32-bit platforms, this option is the default, and it is not allowed to
select the by-value behavior.
Set the segment size>, in gigabytes. Large tables are
divided into multiple operating-system files, each of size equal
to the segment size. This avoids problems with file size limits
that exist on many platforms. The default segment size, 1 gigabyte,
is safe on all supported platforms. If your operating system has
largefile> support (which most do, nowadays), you can use
a larger segment size. This can be helpful to reduce the number of
file descriptors consumed when working with very large tables.
But be careful not to select a value larger than is supported
by your platform and the file systems you intend to use. Other
tools you might wish to use, such as tar>, could
also set limits on the usable file size.
It is recommended, though not absolutely required, that this value
be a power of 2.
Note that changing this value requires an initdb.
Set the block size>, in kilobytes. This is the unit
of storage and I/O within tables. The default, 8 kilobytes,
is suitable for most situations; but other values may be useful
in special cases.
The value must be a power of 2 between 1 and 32 (kilobytes).
Note that changing this value requires an initdb.
Set the WAL segment size>, in megabytes. This is
the size of each individual file in the WAL log. It may be useful
to adjust this size to control the granularity of WAL log shipping.
The default size is 16 megabytes.
The value must be a power of 2 between 1 and 1024 (megabytes).
Note that changing this value requires an initdb.
Set the WAL block size>, in kilobytes. This is the unit
of storage and I/O within the WAL log. The default, 8 kilobytes,
is suitable for most situations; but other values may be useful
in special cases.
The value must be a power of 2 between 1 and 64 (kilobytes).
Note that changing this value requires an initdb.
Allow the build to succeed even if PostgreSQL>
has no CPU spinlock support for the platform. The lack of
spinlock support will result in poor performance; therefore,
this option should only be used if the build aborts and
informs you that the platform lacks spinlock support. If this
option is required to build PostgreSQL> on
your platform, please report the problem to the
PostgreSQL> developers.
Allow the build to succeed even if PostgreSQL
has no support for strong random numbers on the platform.
A source of random numbers is needed for some authentication
protocols, as well as some routines in the
pgcrypto
module. This option disables functionality that
requires cryptographically strong random numbers, and substitutes
a weak pseudo-random-number generator for the generation of
authentication salt values and query cancel keys. It may make
authentication less secure.
Disable the thread-safety of client libraries. This prevents
concurrent threads in libpq and
ECPG programs from safely controlling
their private connection handles.
PostgreSQL includes its own time zone database,
which it requires for date and time operations. This time zone
database is in fact compatible with the IANA time zone
database provided by many operating systems such as FreeBSD,
Linux, and Solaris, so it would be redundant to install it again.
When this option is used, the system-supplied time zone database
in DIRECTORY is used instead of the one
included in the PostgreSQL source distribution.
DIRECTORY must be specified as an
absolute path. /usr/share/zoneinfo is a
likely directory on some operating systems. Note that the
installation routine will not detect mismatching or erroneous time
zone data. If you use this option, you are advised to run the
regression tests to verify that the time zone data you have
pointed to works correctly with PostgreSQL>.
This option is mainly aimed at binary package distributors
who know their target operating system well. The main
advantage of using this option is that the PostgreSQL package
won't need to be upgraded whenever any of the many local
daylight-saving time rules change. Another advantage is that
PostgreSQL can be cross-compiled more straightforwardly if the
time zone database files do not need to be built during the
installation.
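Assuming the standard configure option name for this feature, --with-system-tzdata, a build on a system that ships the IANA database under /usr/share/zoneinfo (mentioned above) might be configured as:
./configure --with-system-tzdata=/usr/share/zoneinfo
Adjust the directory to wherever your operating system installs its time zone files, and remember to run the regression tests afterwards as advised above.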
zlib
Prevents use of the Zlib> library. This disables
support for compressed archives in pg_dump
and pg_restore.
This option is only intended for those rare systems where this
library is not available.
Compiles all programs and libraries with debugging symbols.
This means that you can run the programs in a debugger
to analyze problems. This enlarges the size of the installed
executables considerably, and on non-GCC compilers it usually
also disables compiler optimization, causing slowdowns. However,
having the symbols available is extremely helpful for dealing
with any problems that might arise. Currently, this option is
recommended for production installations only if you use GCC.
But you should always have it on if you are doing development work
or running a beta version.
If using GCC, all programs and libraries are compiled with
code coverage testing instrumentation. When run, they
generate files in the build directory with code coverage
metrics. This option is for use only with GCC
and when doing development work.
If using GCC, all programs and libraries are compiled so they
can be profiled. On backend exit, a subdirectory will be created
that contains the gmon.out> file for use in profiling.
This option is for use only with GCC and when doing development work.
Enables assertion> checks in the server, which test for
many cannot happen> conditions. This is invaluable for
code development purposes, but the tests can slow down the
server significantly.
Also, having the tests turned on won't necessarily enhance the
stability of your server! The assertion checks are not categorized
for severity, and so what might be a relatively harmless bug will
still lead to server restarts if it triggers an assertion
failure. This option is not recommended for production use, but
you should have it on for development work or when running a beta
version.
Enables automatic dependency tracking. With this option, the
makefiles are set up so that all affected object files will
be rebuilt when any header file is changed. This is useful
if you are doing development work, but is just wasted overhead
if you intend only to compile once and install. At present,
this option only works with GCC.
DTrace
Compiles PostgreSQL with support for the
dynamic tracing tool DTrace.
To point to the dtrace program, the
environment variable DTRACE can be set. This
will often be necessary because dtrace is
typically installed under /usr/sbin,
which might not be in the path.
Extra command-line options for the dtrace program
can be specified in the environment variable
DTRACEFLAGS. On Solaris,
to include DTrace support in a 64-bit binary, you must specify
DTRACEFLAGS="-64"> to configure. For example,
using the GCC compiler:
./configure CC='gcc -m64' --enable-dtrace DTRACEFLAGS='-64' ...
Using Sun's compiler:
./configure CC='/opt/SUNWspro/bin/cc -xtarget=native64' --enable-dtrace DTRACEFLAGS='-64' ...
Enable tests using the Perl TAP tools. This requires a Perl
installation and the Perl module IPC::Run.
If you prefer a C compiler different from the one
configure picks, you can set the
environment variable CC> to the program of your choice.
By default, configure will pick
gcc if available, else the platform's
default (usually cc>). Similarly, you can override the
default compiler flags if needed with the CFLAGS variable.
You can specify environment variables on the
configure command line, for example:
./configure CC=/opt/bin/gcc CFLAGS='-O2 -pipe'>
Here is a list of the significant variables that can be set in
this manner:
BISON
Bison program
CC
C compiler
CFLAGS
options to pass to the C compiler
CPP
C preprocessor
CPPFLAGS
options to pass to the C preprocessor
DTRACE
location of the dtrace program
DTRACEFLAGS
options to pass to the dtrace program
FLEX
Flex program
LDFLAGS
options to use when linking either executables or shared libraries
LDFLAGS_EX
additional options for linking executables only
LDFLAGS_SL
additional options for linking shared libraries only
MSGFMT
msgfmt program for native language support
PERL
Full path name of the Perl interpreter. This will be used to
determine the dependencies for building PL/Perl.
PYTHON
Full path name of the Python interpreter. This will be used to
determine the dependencies for building PL/Python. Also,
whether Python 2 or 3 is specified here (or otherwise
implicitly chosen) determines which variant of the PL/Python
language becomes available. See
PL/Python>
documentation]]>
]]>
for more information.
TCLSH
Full path name of the Tcl interpreter. This will be used to
determine the dependencies for building PL/Tcl, and it will
be substituted into Tcl scripts.
XML2_CONFIG
xml2-config program used to locate the
libxml installation.
Sometimes it is useful to add compiler flags after-the-fact to the set
that were chosen by configure. An important example is
that gcc's -Werror option cannot be included
in the CFLAGS passed to configure, because it
will break many of configure's built-in tests. To add
such flags, include them in the COPT environment
variable while running make.
When developing code inside the server, it is recommended to
use the configure options --enable-cassert (which
turns on many run-time error checks) and --enable-debug
(which improves the usefulness of debugging tools).
If using GCC, it is best to build with an optimization level of
at least -O1, because using no optimization
(-O0) disables some important compiler warnings
(such as the use of an uninitialized variable).
The COPT and PROFILE environment variables are
actually handled identically by the PostgreSQL
makefiles. Which to use is a matter of preference, but a common habit
among developers is to use PROFILE for one-time flag
adjustments, while COPT might be kept set all the time.
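For illustration, assuming the makefiles append these variables to the compiler flags as described, a one-off build with warnings treated as errors could be run as:
make PROFILE=-Werror
while a developer who wants the flag applied to every build might instead keep it in the environment:
export COPT='-Werror'
make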
Build
To start the build, type:
make
(Remember to use GNU> make>.) The build
will take a few minutes depending on your
hardware. The last line displayed should be:
All of PostgreSQL successfully made. Ready to install.
If you want to build everything that can be built, including the
documentation (HTML and man pages), and the additional modules
(contrib), type instead:
make world
The last line displayed should be:
PostgreSQL, contrib, and documentation successfully made. Ready to install.
Regression Tests
If you want to test the newly built server before you install it,
you can run the regression tests at this point. The regression
tests are a test suite to verify that PostgreSQL>
runs on your machine in the way the developers expected it
to. Type:
make check
(This won't work as root; do it as an unprivileged user.)
src/test/regress/README and the regression test documentation contain
detailed information about interpreting the test results. You can
repeat this test at any later time by issuing the same command.
Installing the Files
Before learning how to install Postgres-XL, you should
determine what you are going to install on each server. The following
lists the Postgres-XL components that you have built and are
going to install.
GTM
GTM stands for Global Transaction Manager. It provides global
transaction IDs and snapshots to each transaction in the
Postgres-XL> database cluster. It also provides several
global values such as sequences and global timestamps.
GTM itself can be configured as a backup of another GTM as a
GTM-Standby so that the GTM can continue to run even if the main GTM
fails. You may want to install a GTM-Standby on a separate server.
GTM-Proxy
Because the GTM has to take care of each transaction, it has to
read and write enormous amounts of messages, which may
restrict Postgres-XL>'s scalability. GTM-Proxy is
a proxy of the GTM feature that groups requests and responses to
reduce network read/write by the GTM. Distributing one snapshot to
multiple transactions also helps to reduce the GTM network
workload.
Coordinator
The Coordinator is an entry point to Postgres-XL from
applications. You can run more than one Coordinator simultaneously in
the cluster. Each Coordinator behaves just like a
PostgreSQL database server, while all the Coordinators
handle transactions in a harmonized way so that any transaction coming
into one Coordinator is protected against any other transactions coming
into others. Updates by a transaction are visible immediately to others
running in other Coordinators. To simplify the load balancing of
Coordinators and Datanodes, as mentioned below, it is highly
recommended to install the same number of Coordinators and Datanodes on a
server.
Datanode
A Coordinator and Datanode share the same binaries but their behavior
is a little different. The Coordinator decomposes incoming statements
into those handled by Datanodes. If necessary, the Coordinator
materializes responses from Datanodes to calculate the final response to
applications.
The Datanode is very close to PostgreSQL itself because it just handles
incoming statements locally.
If you are upgrading an existing system be sure to read
the documentation on upgrading,
which has instructions about upgrading a
cluster.
To install PostgreSQL> enter:
make install
This will install files into the directories that were specified
in the configuration step. Make sure that you have appropriate
permissions to write into that area. Normally you need to do this
step as root. Alternatively, you can create the target
directories in advance and arrange for appropriate permissions to
be granted.
To install the documentation (HTML and man pages), enter:
make install-docs
If you built the world above, type instead:
make install-world
This also installs the documentation.
You can use make install-strip instead of
make install to strip the executable files and
libraries as they are installed. This will save some space. If
you built with debugging support, stripping will effectively
remove the debugging support, so it should only be done if
debugging is no longer needed. install-strip
tries to do a reasonable job saving space, but it does not have
perfect knowledge of how to strip every unneeded byte from an
executable file, so if you want to save all the disk space you
possibly can, you will have to do manual work.
The standard installation provides all the header files needed for client
application development as well as for server-side program
development, such as custom functions or data types written in C.
(Prior to PostgreSQL> 8.0, a separate make
install-all-headers> command was needed for the latter, but this
step has been folded into the standard install.)
Client-only installation:
If you want to install only the client applications and
interface libraries, then you can use these commands:
make -C src/bin install>
make -C src/include install>
make -C src/interfaces install>
make -C doc install>
src/bin> has a few binaries for server-only use,
but they are small.
Uninstallation:
To undo the installation use the command make
uninstall>. However, this will not remove any created directories.
Cleaning:
After the installation you can free disk space by removing the built
files from the source tree with the command make
clean>. This will preserve the files made by the configure
program, so that you can rebuild everything with make>
later on. To reset the source tree to the state in which it was
distributed, use make distclean>. If you are going to
build for several platforms within the same source tree you must do
this and re-configure for each platform. (Alternatively, use
a separate build tree for each platform, so that the source tree
remains unmodified.)
If you perform a build and then discover that your configure>
options were wrong, or if you change anything that configure>
investigates (for example, software upgrades), then it's a good
idea to do make distclean> before reconfiguring and
rebuilding. Without this, your changes in configuration choices
might not propagate everywhere they need to.
Post-Installation Setup
Shared Libraries
On some systems with shared libraries
you need to tell the system how to find the newly installed
shared libraries. The systems on which this is
not necessary include
FreeBSD>,
HP-UX>,
Linux>,
NetBSD>, OpenBSD>, and
Solaris>.
The method to set the shared library search path varies between
platforms, but the most widely-used method is to set the
environment variable LD_LIBRARY_PATH> like so: In Bourne
shells (sh>, ksh>, bash>, zsh>):
LD_LIBRARY_PATH=/usr/local/pgsql/lib
export LD_LIBRARY_PATH
or in csh> or tcsh>:
setenv LD_LIBRARY_PATH /usr/local/pgsql/lib
Replace /usr/local/pgsql/lib with whatever you set
the library installation directory to in the configuration step.
On some systems it might be preferable to set the environment
variable LD_RUN_PATH before
building.
On Cygwin, put the library
directory in the PATH or move the
.dll files into the bin
directory.
If in doubt, refer to the manual pages of your system (perhaps
ld.so or rld). If you later
get a message like:
psql: error in loading shared libraries
libpq.so.2.1: cannot open shared object file: No such file or directory
then this step was necessary. Simply take care of it then.
ldconfig
If you are on Linux> and you have root
access, you can run:
/sbin/ldconfig /usr/local/pgsql/lib
(or equivalent directory) after installation to enable the
run-time linker to find the shared libraries faster. Refer to the
manual page of ldconfig> for more information. On
FreeBSD>, NetBSD>, and OpenBSD> the command is:
/sbin/ldconfig -m /usr/local/pgsql/lib
instead. Other systems are not known to have an equivalent
command.
Environment Variables
If you installed into /usr/local/pgsql or some other
location that is not searched for programs by default, you should
add /usr/local/pgsql/bin (or whatever you set
the binary installation directory to) into your PATH.
To do this, add the following to your shell start-up file, such as
~/.bash_profile (or /etc/profile, if you
want it to affect all users):
PATH=/usr/local/pgsql/bin:$PATH
export PATH
If you are using csh> or tcsh>, then use this command:
set path = ( /usr/local/pgsql/bin $path )
MANPATH
To enable your system to find the man>
documentation, you need to add lines like the following to a
shell start-up file unless you installed into a location that is
searched by default:
MANPATH=/usr/local/pgsql/share/man:$MANPATH
export MANPATH
The environment variables PGHOST> and PGPORT>
specify to client applications the host and port of the database
server, overriding the compiled-in defaults. If you are going to
run client applications remotely then it is convenient if every
user that plans to use the database sets PGHOST>. This
is not required, however; the settings can be communicated via command
line options to most client programs.
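For example, in a Bourne-compatible shell (the host name below is only a placeholder):
PGHOST=dbserver.example.com
PGPORT=5433
export PGHOST PGPORT
psql -l
After this, client programs such as psql connect to that host and port unless told otherwise on the command line.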
When, as is typical, you're configuring both a Coordinator and a Datanode
on the same server, be careful not to assign the same resource, such
as a listening point (IP address and port number), to the different
components. Otherwise they will conflict and Postgres-XL
will not run correctly.
Getting Started
The following is a quick summary of how to get PostgreSQL> up and
running once installed. The main documentation contains more information.
Create a user account for the PostgreSQL>
server. This is the user the server will run as. For production
use you should create a separate, unprivileged account
(postgres> is commonly used). If you do not have root
access or just want to play around, your own user account is
enough, but running the server as root is a security risk and
will not work.
adduser postgres>
If you follow the previous steps, you will have files ready to distribute
to servers where you want to run one or more Postgres-XL>
components.
After you've installed your build locally, the build target will include
the following directories.
bin/ include/ lib/ share/
The bin> directory contains executable binaries and scripts.
The include> directory contains header files needed to build
Postgres-XL> applications. The lib> directory
contains shared libraries needed to run binaries, as well as static
libraries that should be included into your application binaries.
Finally, the share> directory contains miscellaneous files
that Postgres-XL> should read at runtime, as well as sample
files.
If your servers have sufficient file space, you can copy all the files to
the target server. The total size is less than 30 megabytes. If you want to
install minimal files on each server, follow the
paragraphs below.
For a server to run a GTM or a GTM-Standby, you need to copy the
following files to your path: bin/gtm and
bin/gtm_ctl.
For a server to run a GTM-Proxy (the server you run a Coordinator and/or
a Datanode), you need to copy the following files to your path:
bin/gtm_proxy and bin/gtm_ctl>.
For a server to run a Coordinator or a Datanode, or both, you should copy
the following files to your path: bin/initdb. You should
also copy everything in the lib directory to your library
search path.
Create a database installation with the initdb>
command. To run initdb> you must be logged in to your
PostgreSQL> server account. It will not work as
root.
root# mkdir /usr/local/pgsql/data>
root# chown postgres /usr/local/pgsql/data>
root# su - postgres>
postgres$ /usr/local/pgsql/bin/initdb -D /usr/local/pgsql/data>
The -D option specifies the data directory of the node, and the
--nodename option specifies its node name within the cluster.
If you're configuring both Datanodes and Coordinators on the same server,
you should specify a different data directory and node name for each of them.
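For instance, following the short version earlier in this chapter, a Coordinator and a Datanode on the same machine get separate data directories and node names:
/usr/local/pgsql/bin/initdb -D /usr/local/pgsql/data_coord1 --nodename coord1
/usr/local/pgsql/bin/initdb -D /usr/local/pgsql/data_datanode_1 --nodename datanode_1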
At this point, if you did not use the initdb> -A>
option, you might want to modify pg_hba.conf> to control
local access to the server before you start it. The default is to
trust all local users.
You should configure a GTM and a GTM-Proxy, as well as a GTM-Standby if
you need high-availability capability for the GTM before you really run a
Postgres-XL> database cluster. You can do the following
before you run initdb>.
Each GTM, GTM-Proxy and GTM-Standby needs its own working directory.
Create them as the Postgres-XL owner's user. Please
assign a port number to each of them, although you don't have to do any
configuration work now.
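When you are ready to initialize those directories, initgtm can be used as in the short version above (the directory names here are only suggestions, and the -Z gtm_proxy mode for preparing a GTM-Proxy directory is assumed to be available):
/usr/local/pgsql/bin/initgtm -Z gtm -D /usr/local/pgsql/data_gtm
/usr/local/pgsql/bin/initgtm -Z gtm_proxy -D /usr/local/pgsql/data_gtm_proxy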
Now you should configure each Coordinator and Datanode. Because they have
to communicate with each other and with a number of other servers, Datanodes and
Coordinators need to be configured correctly; we do not provide default
configuration files for them.
You can configure Datanodes and Coordinators by editing the
postgresql.conf file located under the data directory you
specified with the -D option. Parameters of particular interest include the following.
max_prepared_transactions
min_pool_size
A Coordinator is associated with a connection pooler which takes care of
connections with other Coordinators and Datanodes. This parameter
specifies the minimum number of connections to pool. Unless you're
configuring the Postgres-XL cluster in an unbalanced
way, you should specify the same value on all the Coordinators.
max_pool_size
This parameter specifies the maximum number of pooled connections. This
value should be at least the number of all the Coordinators
and Datanodes. If you specify a smaller value, connections will be
closed and reopened very frequently, which leads to serious performance
problems. Unless you're configuring the Postgres-XL
cluster in an unbalanced way, you should specify the same value on all
the Coordinators.
pool_conn_keepalive
This parameter specifies how long to keep the connection alive. If
older than this amount, the pooler discards the connection. This
parameter is useful in multi-tenant environments where many connections
to many different databases may be used, so that idle connections may be
cleaned up. It is also useful for automatically closing connections
occasionally in case there is some unknown memory leak so that this
memory can be freed.
pool_maintenance_timeout
This parameter specifies how long to wait until pooler maintenance is
performed. During such maintenance, old idle connections are discarded.
This parameter is useful in multi-tenant environments where many
connections to many different databases may be used, so that idle
connections may be cleaned up.
remote_query_cost
This parameter specifies the cost overhead of setting up a remote query
to obtain remote data. It is used by the planner in costing queries.
network_byte_cost
This parameter is used in query cost planning to estimate the cost
involved in row shipping and obtaining remote data based on the expected
data size. Row shipping is expensive and adds latency, so this setting
helps to favor plans that minimize row shipping.
sequence_range
This parameter is used to get several sequence values at once from the
GTM. This greatly speeds up COPY, INSERT and SELECT operations where
the target table uses sequences. Postgres-XL
will not use this entire amount at once, but will increase the request
size over time if many requests are done in a short time frame in the
same session. After a short time without any sequence requests, the
number of sequences decreases back down to 1.
max_coordinators
This parameter specifies the maximum number of Coordinators that can be
added to the cluster. The cluster will have to be restarted to increase
the value.
max_datanodes
This parameter specifies the maximum number of Datanodes that can be
added to the cluster. The cluster will have to be restarted to increase
the value.
pgxc_node_name
Specify the name of this cluster node.
port
Specify the port number listened to by this Coordinator.
pooler_port
Specify the separate port used by the connection pooler.
gtm_port
Specify the port number of the GTM you're connecting to. This is local
to the server and you should specify the port assigned to the GTM-Proxy
local to the Coordinator.
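Putting these together, a minimal Coordinator postgresql.conf excerpt might look like the following; all values are illustrative only and must match your own port and GTM-Proxy assignments:
# Coordinator postgresql.conf (example values)
pgxc_node_name = 'coord1'
port = 5432
pooler_port = 40100
gtm_port = 20002            # port of the GTM-Proxy on this server
max_prepared_transactions = 100
min_pool_size = 1
max_pool_size = 100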
Now you should configure postgresql.conf> for each Datanode.
Please note, as in the case of the Coordinator, you can specify other
postgresql.conf> parameters as in standalone
PostgreSQL>.
max_connections
max_prepared_transactions
port
Specify the port number listened to by the Datanode.
gtm_port
Specify the port number of the GTM that you're connecting to. This is
local to the server and you should specify the port assigned to the
GTM-Proxy local to the Datanode.
Postgres-XL introduces some additional parameters for the Datanodes as
well
shared_queues
For some joins that occur in queries, data from one Datanode may need to
be joined with data from another Datanode.
Postgres-XL uses shared queues for this
purpose. During execution each Datanode knows if it needs to produce or
consume tuples, or both.
Note that there may be multiple shared_queues used even for a single
query. So the value should be set taking into account the number of
connections it can accept and the expected number of such joins
occurring simultaneously.
shared_queue_size
This parameter sets the size of each shared queue allocated.
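Analogously, a minimal Datanode postgresql.conf excerpt might look like this; again, the values are examples only:
# Datanode postgresql.conf (example values)
pgxc_node_name = 'datanode_1'
port = 15432
pooler_port = 40101
gtm_port = 20002            # port of the GTM-Proxy on this server
max_connections = 100
max_prepared_transactions = 100
shared_queues = 64
# shared_queue_size can also be raised from its default if large distributed joins need it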
Then you are ready to start the Postgres-XL> cluster.
First, you should start the GTM with something like:
gtm -D /usr/local/pgsql/gtm -h localhost -p 20001 -n 1 -x 1000
This will start GTM.
Next, you should start a GTM-Proxy on each server where you're running a
Coordinator and/or a Datanode, like:
gtm_proxy -h localhost -p 20002 -s localhost -t 20001 -i 1 -n 2 -D /usr/local/pgsql/gtm_proxy
This will start a GTM-Proxy.
The -h option is the host name or IP
address which the GTM-Proxy listens on.
The -p option is the port
number to listen on.
-s and
-t are the IP address or
host name and the port number of the GTM as specified above.
-i is the node number of the GTM-Proxy, beginning with 1.
-n is the number of worker threads of the GTM-Proxy. Usually, 1 or
2 is advised. The
-D option is the working directory of the
GTM-Proxy.
Please note that you should start a GTM-Proxy on all the servers where you
run a Coordinator or a Datanode.
Now you can start a Datanode on each server like:
postgres --datanode -D /usr/local/pgsql/Datanode
This will start the Datanode.
--datanode tells
postgres to start as a Datanode.
-D specifies the
data directory of the Datanode. You can specify other options of
standalone postgres.
Please note that you should issue the postgres command on
all the servers where you're running a Datanode.
Finally, you can start a Coordinator like:
postgres --coordinator -D /usr/local/pgsql/Coordinator
This will start the Coordinator.
--coordinator tells
postgres to start as a Coordinator.
-D specifies
the data directory of the Coordinator. You can specify other options of
standalone postgres.
Please note that you should issue the postgres command on
all the servers where you're running Coordinators.
The previous step should have told you how to start up the whole database
cluster. Do so now. The command should look something like:
postgres --datanode -D /usr/local/pgsql/Datanode
This will start the Datanode in the foreground. To put the Datanode in the
background use something like:
nohup postgres --datanode -D /usr/local/pgsql/Datanode \
    </dev/null >>server.log 2>&1 &
You can apply this to all the other components, GTM, GTM-Proxies, and
Coordinators.
To stop a Datanode running in the background you can type:
kill `cat /usr/local/pgsql/Datanode/postmaster.pid`
You can apply this to stop a Coordinator too.
To stop the GTM running in the background you can type:
kill `cat /usr/local/pgsql/gtm/gtm.pid`
To stop a GTM-Proxy running in the background, you can type:
kill `cat /usr/local/pgsql/gtm-proxy/gtm_proxy.pid`
Create a database:
createdb -p 20004 testdb
Then enter:
psql -p 20004 testdb
Please do not forget to give the port number of one of the Coordinators.
Then you are connected to a Coordinator listening to the port you
specified.
What Now?
The PostgreSQL> distribution contains a
comprehensive documentation set, which you should read sometime.
After installation, the documentation can be accessed by
pointing your browser to
/usr/local/pgsql/doc/html/index.html>, unless you
changed the installation directories.
The first few chapters of the main documentation are the Tutorial,
which should be your first reading if you are completely new to
SQL databases. If you are familiar with database
concepts then you will want to proceed with the part on server
administration, which contains information about how to set up
the database server, database users, and authentication.
Run the regression tests against the installed server (using
make installcheck). If you didn't run the
tests before installation, you should definitely do it now. This
is also explained in the documentation.
By default, PostgreSQL> is configured to run on
minimal hardware. This allows it to start up with almost any
hardware configuration. The default configuration is, however,
not designed for optimum performance. To achieve optimum
performance, several server parameters must be adjusted, the two
most common being shared_buffers and
work_mem.
Other parameters mentioned in the documentation also affect
performance.
Supported Platforms
Postgres-XL can be expected to work on these operating
systems: Linux (all recent distributions), FreeBSD and Mac OS X. Other
Unix-like systems may also work but are not currently being tested.
If you have installation problems on a platform that is known
to be supported according to recent build farm results, please report it to
postgres-xl-bugs@lists.sourceforge.net. If you are
interested in porting Postgres-XL> to a new platform,
postgres-xl-developers@lists.sourceforge.net is the
appropriate place to discuss that.
Platform-specific Notes
This section documents additional platform-specific issues
regarding the installation and setup of PostgreSQL. Be sure to
read the installation instructions, and in
particular the requirements section, as well. Also,
check src/test/regress/README and the regression test
documentation regarding the
interpretation of regression test results.
Platforms that are not covered here have no known platform-specific
installation issues.
AIX
Postgres-XL has not been tested on AIX.
PostgreSQL works on AIX, but getting it installed properly can be
challenging. AIX versions from 4.3.3 to 6.1 are considered supported.
You can use GCC or the native IBM compiler xlc. In
general, using recent versions of AIX and PostgreSQL helps. Check
the build farm for up to date information about which versions of
AIX are known to work.
The minimum recommended fix levels for supported AIX versions are:
AIX 4.3.3: Maintenance Level 11 + post ML11 bundle
AIX 5.1: Maintenance Level 9 + post ML9 bundle
AIX 5.2: Technology Level 10 Service Pack 3
AIX 5.3: Technology Level 7
AIX 6.1: Base Level
To check your current fix level, use
oslevel -r in AIX 4.3.3 to AIX 5.2 ML 7, or
oslevel -s in later versions.
Use the following configure flags in addition
to your own if you have installed Readline or libz in
/usr/local>:
--with-includes=/usr/local/include
--with-libraries=/usr/local/lib.
GCC Issues
On AIX 5.3, there have been some problems getting PostgreSQL to
compile and run using GCC.
You will want to use a version of GCC subsequent to 3.3.2,
particularly if you use a prepackaged version. We had good
success with 4.0.1. Problems with earlier versions seem to have
more to do with the way IBM packaged GCC than with actual issues
with GCC, so that if you compile GCC yourself, you might well
have success with an earlier version of GCC.
Unix-Domain Sockets Broken
AIX 5.3 has a problem
where sockaddr_storage is not defined to
be large enough. In version 5.3, IBM increased the size of
sockaddr_un, the address structure for
Unix-domain sockets, but did not correspondingly increase the
size of sockaddr_storage. The result of
this is that attempts to use Unix-domain sockets with PostgreSQL
lead to libpq overflowing the data structure. TCP/IP connections
work OK, but not Unix-domain sockets, which prevents the
regression tests from working.
The problem was reported to IBM, and is recorded as bug report
PMR29657. If you upgrade to maintenance level 5300-03 or later,
that will include this fix. A quick workaround
is to alter _SS_MAXSIZE to 1025 in
/usr/include/sys/socket.h. In either case,
recompile PostgreSQL once you have the corrected header file.
Internet Address Issues
PostgreSQL relies on the system's getaddrinfo> function
to parse IP addresses in listen_addresses>,
pg_hba.conf>, etc. Older versions of AIX have assorted
bugs in this function. If you have problems related to these settings,
updating to the appropriate AIX fix level shown above
should take care of it.
One user reports:
When implementing PostgreSQL version 8.1 on AIX 5.3, we
periodically ran into problems where the statistics collector
would mysteriously not come up successfully. This
appears to be the result of unexpected behavior in the IPv6
implementation. It looks like PostgreSQL and IPv6 do not play
very well together on AIX 5.3.
Any of the following actions fix the problem.
Delete the IPv6 address for localhost:
(as root)
# ifconfig lo0 inet6 ::1/0 delete
Remove IPv6 from net services. The
file /etc/netsvc.conf on AIX is roughly
equivalent to /etc/nsswitch.conf on
Solaris/Linux. The default, on AIX, is thus:
hosts=local,bind
Replace this with:
hosts=local4,bind4
to deactivate searching for IPv6 addresses.
This is really a workaround for problems relating
to immaturity of IPv6 support, which improved visibly during the
course of AIX 5.3 releases. It has worked with AIX version 5.3,
but does not represent an elegant solution to the problem. It has
been reported that this workaround is not only unnecessary, but
causes problems on AIX 6.1, where IPv6 support has become more mature.
Memory Management
AIX can be somewhat peculiar with regards to the way it does
memory management. You can have a server with many multiples of
gigabytes of RAM free, but still get out of memory or address
space errors when running applications. One example
is loading of extensions failing with unusual errors.
For example, running as the owner of the PostgreSQL installation:
=# CREATE EXTENSION plperl;
ERROR: could not load library "/opt/dbs/pgsql/lib/plperl.so": A memory address is not in the address space for the process.
Running as a non-owner in the group possessing the PostgreSQL
installation:
=# CREATE EXTENSION plperl;
ERROR: could not load library "/opt/dbs/pgsql/lib/plperl.so": Bad address
Another example is out of memory errors in the PostgreSQL server
logs, with every memory allocation near or greater than 256 MB
failing.
The overall cause of all these problems is the default bittedness
and memory model used by the server process. By default, all
binaries built on AIX are 32-bit. This does not depend upon
hardware type or kernel in use. These 32-bit processes are
limited to 4 GB of memory laid out in 256 MB segments using one
of a few models. The default allows for less than 256 MB in the
heap as it shares a single segment with the stack.
In the case of the plperl example, above,
check your umask and the permissions of the binaries in your
PostgreSQL installation. The binaries involved in that example
were 32-bit and installed as mode 750 instead of 755. Due to the
permissions being set in this fashion, only the owner or a member
of the possessing group can load the library. Since it isn't
world-readable, the loader places the object into the process'
heap instead of the shared library segments where it would
otherwise be placed.
The ideal solution for this is to use a 64-bit
build of PostgreSQL, but that is not always practical, because
systems with 32-bit processors can build, but not run, 64-bit
binaries.
If a 32-bit binary is desired, set LDR_CNTRL to
MAXDATA=0xn0000000,
where 1 <= n <= 8, before starting the PostgreSQL server,
and try different values and postgresql.conf
settings to find a configuration that works satisfactorily. This
use of LDR_CNTRL tells AIX that you want the
server to have MAXDATA bytes set aside for the
heap, allocated in 256 MB segments. When you find a workable
configuration,
ldedit can be used to modify the binaries so
that they default to using the desired heap size. PostgreSQL can
also be rebuilt, passing configure
LDFLAGS="-Wl,-bmaxdata:0xn0000000"
to achieve the same effect.
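As a sketch, using n=4 purely as an example value (four 256 MB segments, i.e. 1 GB of heap), the environment setting before starting a 32-bit server would be:
# n=4 here is only an example; try different values as described above
export LDR_CNTRL=MAXDATA=0x40000000
and the equivalent built-in limit would come from rebuilding with configure LDFLAGS="-Wl,-bmaxdata:0x40000000".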
For a 64-bit build, set OBJECT_MODE to 64 and
pass CC="gcc -maix64"
and LDFLAGS="-Wl,-bbigtoc"
to configure. (Options for
xlc might differ.) If you omit the export of
OBJECT_MODE, your build may fail with linker errors. When
OBJECT_MODE is set, it tells AIX's build utilities
such as ar, as, and ld what
type of objects to default to handling.
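For example, a 64-bit gcc build following the recipe above could be run as (options for xlc might differ):
# 64-bit AIX build, per the text above
export OBJECT_MODE=64
./configure CC="gcc -maix64" LDFLAGS="-Wl,-bbigtoc"
gmake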
By default, overcommit of paging space can happen. While we have
not seen this occur, AIX will kill processes when it runs out of
memory and the overcommit is accessed. The closest to this that
we have seen is fork failing because the system decided that
there was not enough memory for another process. Like many other
parts of AIX, the paging space allocation method and the
out-of-memory kill are configurable on a system- or process-wide
basis if this becomes a problem.
References and Resources
"Large Program Support", AIX Documentation: General Programming Concepts: Writing and Debugging Programs
"Program Address Space Overview", AIX Documentation: General Programming Concepts: Writing and Debugging Programs
"Performance Overview of the Virtual Memory Manager (VMM)", AIX Documentation: Performance Management Guide
"Page Space Allocation", AIX Documentation: Performance Management Guide
"Paging-space thresholds tuning", AIX Documentation: Performance Management Guide
"Developing and Porting C and C++ Applications on AIX", IBM Redbook
Cygwin
Postgres-XL has not been tested on Cygwin.
PostgreSQL can be built using Cygwin, a Linux-like environment for
Windows, but that method is inferior to the native Windows build,
and running a server under Cygwin is no longer recommended.
When building from source, proceed according to the normal
installation procedure (i.e., ./configure;
make; etc.), noting the following Cygwin-specific
differences:
Set your path to use the Cygwin bin directory before the
Windows utilities. This will help prevent problems with
compilation.
The adduser command is not supported; use
the appropriate user management application on Windows NT,
2000, or XP. Otherwise, skip this step.
The su command is not supported; use ssh to
simulate su on Windows NT, 2000, or XP. Otherwise, skip this
step.
OpenSSL is not supported.
Start cygserver for shared memory support.
To do this, enter the command /usr/sbin/cygserver
&. This program needs to be running anytime you
start the PostgreSQL server or initialize a database cluster
(initdb). The
default cygserver configuration may need to
be changed (e.g., increase SEMMNS) to prevent
PostgreSQL from failing due to a lack of system resources.
Building might fail on some systems where a locale other than
C is in use. To fix this, set the locale to C by doing
export LANG=C.utf8 before building, and then
set it back to the previous setting after you have installed
PostgreSQL.
The parallel regression tests (make check)
can generate spurious regression test failures due to
overflowing the listen() backlog queue
which causes connection refused errors or hangs. You can limit
the number of connections using the make
variable MAX_CONNECTIONS thus:
make MAX_CONNECTIONS=5 check
(On some systems you can have up to about 10 simultaneous
connections).
It is possible to install cygserver and the
PostgreSQL server as Windows NT services. For information on how
to do this, please refer to the README
document included with the PostgreSQL binary package on Cygwin.
It is installed in the
directory /usr/share/doc/Cygwin.
HP-UX
Postgres-XL has not been tested on HP-UX.
PostgreSQL 7.3+ should work on Series 700/800 PA-RISC machines
running HP-UX 10.X or 11.X, given appropriate system patch levels
and build tools. At least one developer routinely tests on HP-UX
10.20, and we have reports of successful installations on HP-UX
11.00 and 11.11.
Aside from the PostgreSQL source distribution, you will need GNU
make (HP's make will not do), and either GCC or HP's full ANSI C
compiler. If you intend to build from Git sources rather than a
distribution tarball, you will also need Flex (GNU lex) and Bison
(GNU yacc). We also recommend making sure you are fairly
up-to-date on HP patches. At a minimum, if you are building
64-bit binaries on HP-UX 11.11 you may need PHSS_30966 (11.11) or a
successor patch; otherwise initdb may hang:
PHSS_30966 s700_800 ld(1) and linker tools cumulative patch
On general principles you should be current on libc and ld/dld
patches, as well as compiler patches if you are using HP's C
compiler. See HP's support sites for free
copies of their latest patches.
If you are building on a PA-RISC 2.0 machine and want to have
64-bit binaries using GCC, you must use the 64-bit version of GCC. GCC
binaries for HP-UX PA-RISC and Itanium are available from
. Don't forget to
get and install binutils at the same time.
If you are building on a PA-RISC 2.0 machine and want the compiled
binaries to run on PA-RISC 1.1 machines you will need to specify
+DAportable
in CFLAGS.
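A minimal sketch, assuming HP's ANSI C compiler is being used (the flag is not applicable to GCC):
# assumes HP's ANSI C compiler (cc)
./configure CC=cc CFLAGS=+DAportable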
If you are building on a HP-UX Itanium machine, you will need the
latest HP ANSI C compiler with its dependent patch or successor
patches:
PHSS_30848 s700_800 HP C Compiler (A.05.57)
PHSS_30849 s700_800 u2comp/be/plugin library Patch
If you have both HP's C compiler and GCC, then you might want to
explicitly select the compiler to use when you
run configure:
./configure CC=cc
for HP's C compiler, or
./configure CC=gcc
for GCC. If you omit this setting, then configure will
pick gcc if it has a choice.
The default install target location
is /usr/local/pgsql, which you might want to
change to something under /opt. If so, use
the
--prefix
switch to configure.
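For example (the prefix /opt/pgsql is only illustrative):
# /opt/pgsql is an example install location
./configure --prefix=/opt/pgsql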
In the regression tests, there might be some low-order-digit
differences in the geometry tests, which vary depending on which
compiler and math library versions you use. Any other error is
cause for suspicion.
MinGW/Native Windows
Postgres-XL has not been tested on MinGW.
PostgreSQL for Windows can be built using MinGW, a Unix-like build
environment for Microsoft operating systems, or using
Microsoft's Visual C++ compiler suite.
The MinGW build variant uses the normal build system described in
this chapter; the Visual C++ build works completely differently
and is described separately.
It is a fully native build and uses no additional software like
MinGW. A ready-made installer is available on the main
PostgreSQL web site.
The native Windows port requires a 32- or 64-bit version of Windows
2000 or later. Earlier operating systems do
not have sufficient infrastructure (but Cygwin may be used on
those). MinGW, the Unix-like build tools, and MSYS, a collection
of Unix tools required to run shell scripts
like configure, can be downloaded
from . Neither is
required to run the resulting binaries; they are needed only for
creating the binaries.
To build 64-bit binaries using MinGW, install the 64-bit tool set
from , put its bin
directory in the PATH, and run
configure with the
--host=x86_64-w64-mingw32 option.
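A sketch of such a build from the MSYS shell (the tool-set location /mingw64 is an assumption; adjust to where you installed it):
# /mingw64 is an example install location for the 64-bit tool set
export PATH=/mingw64/bin:$PATH
./configure --host=x86_64-w64-mingw32
make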
After you have everything installed, it is suggested that you
run psql
under CMD.EXE, as the MSYS console has
buffering issues.
Collecting Crash Dumps on Windows
If PostgreSQL on Windows crashes, it has the ability to generate
minidumps that can be used to track down the cause
for the crash, similar to core dumps on Unix. These dumps can be
read using the Windows Debugger Tools or using
Visual Studio. To enable the generation of dumps
on Windows, create a subdirectory named crashdumps
inside the cluster data directory. The dumps will then be written
into this directory with a unique name based on the identifier of
the crashing process and the current time of the crash.
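For example, assuming the cluster data directory is C:\pgdata (an illustrative path), from CMD.EXE:
REM C:\pgdata is only an example data directory path
mkdir C:\pgdata\crashdumps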
Solaris
Postgres-XL has not been tested on Solaris.
PostgreSQL is well-supported on Solaris. The more up to date your
operating system, the fewer issues you will experience; details
below.
Required Tools
You can build with either GCC or Sun's compiler suite. For
better code optimization, Sun's compiler is strongly recommended
on the SPARC architecture. We have heard reports of problems
when using GCC 2.95.1; GCC 2.95.3 or later is recommended. If
you are using Sun's compiler, be careful not to select
/usr/ucb/cc;
use /opt/SUNWspro/bin/cc.
You can download Sun Studio
from .
Many GNU tools are integrated into Solaris 10, or they are
present on the Solaris companion CD. If you need packages for an
older version of Solaris, you can find these tools
at .
If you prefer
sources, look
at .
configure Complains About a Failed Test Program
If configure complains about a failed test
program, this is probably a case of the run-time linker being
unable to find some library, probably libz, libreadline or some
other non-standard library such as libssl. To point it to the
right location, set the LDFLAGS environment
variable on the configure command line, e.g.,
configure ... LDFLAGS="-R /usr/sfw/lib:/opt/sfw/lib:/usr/local/lib"
See
the ld(1)
man page for more information.
64-bit Build Sometimes Crashes
On Solaris 7 and older, the 64-bit version of libc has a buggy
vsnprintf routine, which leads to erratic
core dumps in PostgreSQL. The simplest known workaround is to
force PostgreSQL to use its own version of vsnprintf rather than
the library copy. To do this, after you
run configure edit a file produced by
configure:
In src/Makefile.global, change the line
LIBOBJS =
to read
LIBOBJS = snprintf.o
(There might be other files already listed in this variable.
Order does not matter.) Then build as usual.
Compiling for Optimal Performance
On the SPARC architecture, Sun Studio is strongly recommended for
compilation. Try using the
-xO5
optimization
flag to generate significantly faster binaries. Do not use any
flags that modify behavior of floating-point operations
and errno processing (e.g.,
-fast
). These flags can cause
nonstandard PostgreSQL behavior, for example in
date/time computations.
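A sketch that combines the compiler path given earlier with the recommended optimization flag (not an exhaustive set of options):
# Sun Studio compiler path and -xO5 flag as recommended above
./configure CC=/opt/SUNWspro/bin/cc CFLAGS=-xO5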
If you do not have a reason to use 64-bit binaries on SPARC,
prefer the 32-bit version. The 64-bit operations are slower and
64-bit binaries are slower than the 32-bit variants. On the
other hand, 32-bit code is not native on the AMD64 CPU family,
which is why 32-bit code is significantly slower on that CPU
family.
Using DTrace for Tracing PostgreSQL
Yes, using DTrace is possible. See the PostgreSQL documentation on
dynamic tracing for further information. You can also find more information in this
information. You can also find more information in this
article: .
If you see the linking of the postgres executable abort with an
error message like:
Undefined first referenced
symbol in file
AbortTransaction utils/probes.o
CommitTransaction utils/probes.o
ld: fatal: Symbol referencing errors. No output written to postgres
collect2: ld returned 1 exit status
make: *** [postgres] Error 1
your DTrace installation is too old to handle probes in static
functions. You need Solaris 10u4 or newer.